Clustered Riak CS with Eucalyptus Object Storage Gateway

In the last post, we installed Riak CS on a single node. For production, a deployment of five or more nodes is recommended for better performance and reliability. Riak replicates each object three times by default; in a smaller deployment that replication requirement may not be met properly, which compromises the fault tolerance of the cluster, and fewer nodes also means a higher workload per node.

According to the documentation:

If you have 3 nodes, and require all 3 to replicate, 100% of your nodes will respond to a single request. If you have 5, only 60% need respond.

In this post, we will use 5 nodes to create a Riak cluster for our Eucalyptus setup. Since we will be using Riak CS, we will install Riak CS on each node. We will also need a Stanchion server.

Overall, our setup will look like this:

a) 5x Riak nodes
b) 5x Riak CS nodes (one in each Riak node)
c) 1x Stanchion node
d) 1x Riak Control
e) 1x Riak CS Control
f) 1x Nginx server for load balancing between the Riak CS nodes

First, we will install Riak and Riak CS on all the nodes:

yum install http://yum.basho.com/gpg/basho-release-6-1.noarch.rpm -y
yum install riak riak-cs -y

Configure Riak:

In /etc/riak/app.config, replace 127.0.0.1 in the following lines with the host's IP address:

{pb, [ {"127.0.0.1", 8087 } ]}

{http, [ {"127.0.0.1", 8098 } ]},

Then find the following line in /etc/riak/app.config and replace it with the multi-backend setup:

from:

{storage_backend, riak_kv_bitcask_backend},

to:

            {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.4.5/ebin"]},
            {storage_backend, riak_cs_kv_multi_backend},
            {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
            {multi_backend_default, be_default},
            {multi_backend, [
              {be_default, riak_kv_eleveldb_backend, [
                {max_open_files, 50},
                {data_root, "/var/lib/riak/leveldb"}
              ]},
              {be_blocks, riak_kv_bitcask_backend, [
                {data_root, "/var/lib/riak/bitcask"}
              ]}
            ]},

And add the following line to the riak_core section of the same file:

{default_bucket_props, [{allow_mult, true}]},

In /etc/riak/vm.args, change the following line to use the host's IP address:

-name riak@127.0.0.1
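For example, on 10.111.5.181 (the node we will later use to create the admin user), the three edited settings would look like this:

{pb, [ {"10.111.5.181", 8087 } ]}

{http, [ {"10.111.5.181", 8098 } ]},

and in /etc/riak/vm.args:

-name riak@10.111.5.181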

Configure Riak CS:

In /etc/riak-cs/app.config, set cs_ip and riak_ip to the host's IP address, and stanchion_ip to the Stanchion node's IP address (Stanchion is configured below):

{cs_ip, "127.0.0.1"},

{riak_ip, "127.0.0.1"},

{stanchion_ip, "127.0.0.1"},

We will need to create an admin user, so set the following value to true for now (change it back to false before going into production):

{anonymous_user_creation, false},

In /etc/riak-cs/vm.args, change the following line to use the host's IP address:

-name riak-cs@127.0.0.1

Follow the same procedure for the rest of the nodes.

Configure Stanchion:

Install Stanchion on one of the servers:

yum install stanchion -y

In /etc/stanchion/app.config, set the following entries to the host's IP address:

{stanchion_ip, "127.0.0.1"},

{riak_ip, "127.0.0.1"},

In /etc/stanchion/vm.args, change the following line to use the host's IP address:

-name stanchion@127.0.0.1

Start Riak components on the server where Stanchion is installed:

riak start
riak-cs start
stanchion start
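Each of the three services has a ping subcommand that is handy for a quick sanity check; all of them should answer with pong:

riak ping
riak-cs ping
stanchion ping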

Create admin user:

curl -H 'Content-Type: application/json' \
-X POST http://10.111.5.181:8080/riak-cs/user \
--data '{"email":"admin@admin.com", "name":"admin"}'

From the output, save the following two values:

"key_id":"UMSNH00MXO57XNQ4FH05",
"key_secret":"sApGkHzUaNQ0_54BqwbiofH50qzRb4RLi7hFnQ=="

In a production system, you will want to change the anonymous_user_creation setting back to false after creating the admin user.

In both /etc/riak-cs/app.config and /etc/stanchion/app.config, replace the following two values with the key_id and key_secret obtained above:

{admin_key, "admin-key"},
{admin_secret, "admin-secret"},
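For example, with the credentials generated above, the two lines become:

{admin_key, "UMSNH00MXO57XNQ4FH05"},
{admin_secret, "sApGkHzUaNQ0_54BqwbiofH50qzRb4RLi7hFnQ=="},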

Restart both riak-cs and stanchion.

Setting up Riak Cluster:

Now we will join all the nodes into a cluster. Run the join command on each node, pointing it at the same target node (for this guide, I used the Stanchion node's IP as the join target), then review and commit the plan:

riak-admin cluster join riak@<node-ip>
riak-admin cluster plan
riak-admin cluster commit
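To verify that all five nodes have joined the ring, check the member status from any node; each node should show up as valid:

riak-admin member-status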

Activate Riak Control:

In /etc/riak/app.config, set the https listener to the host's IP address:

{https, [{ "127.0.0.1", 8098 }]},

Uncomment the ssl configuration and set the file paths as appropriate:

{ssl, [
       {certfile, "/etc/riak/cert.pem"},
       {keyfile, "/etc/riak/key.pem"}
     ]},

Follow this guide to create a self-signed certificate:

http://www.akadia.com/services/ssh_test_certificate.html
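If you just need a quick self-signed certificate, an openssl one-liner along these lines (paths matching the config above; the CN value is only a placeholder) also works:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=riak-control" \
  -keyout /etc/riak/key.pem -out /etc/riak/cert.pem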

In the riak_control section, set the following to true:

{enabled, false},

and set username/password,

{userlist, [{"user", "pass"}
]},

Log in at the following URL to access the Riak Control web interface:

https://RIAK-NODE-IP:8069/admin

Riak Control

Install and Configure Nginx:

We will use Nginx as a load balancer between the Riak CS nodes. It can be installed on any of the nodes or on an external server.

We will also be installing Riak CS Control, which appears to require HTTP/1.1 proxying (available in Nginx since 1.1.4), so we will use the latest stable version of Nginx on our Nginx server.

yum install http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm -y
yum install nginx -y

The Nginx config file will look like this [source]:

upstream riak_cs_host {
  server <riak-cs-node-1-ip>:8080;
  server <riak-cs-node-2-ip>:8080;
  server <riak-cs-node-3-ip>:8080;
  server <riak-cs-node-4-ip>:8080;
  server <riak-cs-node-5-ip>:8080;
}

server {
  listen   80;
  server_name  _;
  access_log  /var/log/nginx/riak_cs.access.log;
  client_max_body_size 0;

location / {
  proxy_set_header Host $http_host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_redirect off;

  proxy_connect_timeout      90;
  proxy_send_timeout         90;
  proxy_read_timeout         90;
  proxy_buffer_size    128k;
  proxy_buffers     4 256k;
  proxy_busy_buffers_size 256k;
  proxy_temp_file_write_size 256k;
  proxy_http_version    1.1;

  proxy_pass http://riak_cs_host;
  }
}
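Before putting Nginx in front of the cluster, validate the configuration and start the service (CentOS 6 init-style commands):

nginx -t
service nginx start
chkconfig nginx on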

Install and Configure Riak CS Control:

Run the following command to install Riak CS Control,

yum install http://s3.amazonaws.com/downloads.basho.com/riak-cs-control/1.0/1.0.2/rhel/6/riak-cs-control-1.0.2-1.el6.x86_64.rpm -y

In /etc/riak-cs-control/app.config, set the following entries to the Nginx server's IP address and port:

{cs_proxy_host, "127.0.0.1" },
{cs_proxy_port, 8080 },

Set the admin credentials created above in the /etc/riak-cs-control/app.config file.
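After saving the credentials, start the service:

riak-cs-control start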

Riak CS Control

Configure Eucalyptus:

Now, as usual, set the Eucalyptus properties to use Riak CS as the Object Storage Gateway (OSG) backend.

Update:

Instead of using the Riak CS admin account (it has special admin privileges), we need to create a regular Riak CS account, via Riak CS Control or the command line (like above), and use its credentials for Eucalyptus.

euca-modify-property -p objectstorage.s3provider.s3endpoint=NGINX-IP

euca-modify-property -p objectstorage.s3provider.s3accesskey=ACCESS-KEY

euca-modify-property -p objectstorage.s3provider.s3secretkey=SECRET-KEY
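The provider client itself must also point at s3; as noted in the update above, the access and secret keys should come from a regular (non-admin) Riak CS account:

euca-modify-property -p objectstorage.providerclient=s3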

Enjoy multi-clustered Riak CS with Eucalyptus!

James Iry’s history of programming languages (illustrated with pictures and large fonts)

Originally posted on The Quick Word:

[Image gallery: portraits of programming-language pioneers, from the Jacquard loom and Ada Lovelace through Turing, Church, Bartik, Backus, McCarthy, Hopper, Kemeny and Kurtz, Steele, Wirth, Ritchie and Thompson, Colmerauer, Milner, Kay, Ichbiah, Stroustrup, Cox, Wall, van Rossum, Lerdorf, Hansson, Eich, Gosling, Hejlsberg, and Odersky]

–This post is a tribute to James Iry’s fantastic One Div Zero blog.


Eucalyptus Object Storage Gateway with Riak CS

Eucalyptus 4.0 is the next major release of Eucalyptus. One of the exciting features of this release is the Object Storage Gateway (OSG). It can use Riak CS as a scalable storage backend, and it also works with Walrus as the backend. The Object Storage Gateway first came out as a tech preview in the 3.4 release. To use Riak CS with OSG, an existing Riak CS setup is required.

In this post we will build a minimal Riak CS setup to work with Eucalyptus OSG. For this demo I am using a Eucalyptus 4.0 setup built from the source currently available on github. We will install all the necessary Riak CS components on the same host that we are using for the frontend; this is a proof-of-concept setup and is not recommended for production deployment.


Eucalyptus 4.0 introduces a new component, the Object Storage Gateway (OSG). Run the following command from the Cloud Controller (CLC) to register it:

euca-register-object-storage-gateway \
--partition objectstorage \
--host <osg host ip address> <component name>

Most likely the OSG component status will be BROKEN at this point, until we configure Eucalyptus properties to work with Riak CS.
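You can check the component state from the CLC with the regular service listing; once the S3 provider properties described below are configured, it should move from BROKEN to ENABLED:

euca-describe-services | grep objectstorage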

Riak CS installation and configuration:


Riak CS is built on top of Riak, one of the most popular open source distributed databases. For a basic Riak CS setup we need to install Riak, Stanchion and finally Riak CS (here: Riak 1.4.6, Stanchion 1.4.3, Riak CS 1.4.3).

Raise the open-file limit to a higher number:

ulimit -n 65536

Install Riak CS,

yum install -y http://yum.basho.com/gpg/basho-release-6-1.noarch.rpm

yum install -y riak stanchion riak-cs

The rest of the configuration steps are very straightforward and can be found here.

By default Riak CS uses port 8080, but Eucalyptus also uses this port for its HTTP redirect. We need to change one of the two ports to resolve the conflict and get both Riak CS and Eucalyptus running on the same host.

To modify the Eucalyptus port, run the following from the CLC:

euca-modify-property -p www.http_port=<port>

To modify the Riak CS port, change cs_port in /etc/riak-cs/app.config (the value is an integer, not a quoted string):

{cs_port, <port> },

While installing Riak CS, we created an admin user and got a JSON output similar to the following, with id, key_id and key_secret. These credentials can be used to access Riak Cloud Storage just like Amazon S3.

{
    "email": "admin@admin.com",
    "display_name": "admin",
    "name": "admin",
    "key_id": "BMNVZPO4ZXYAYEIFF9PG",
    "key_secret": "JXvFrTEx4eqirMGJnYZqvZiek7ZDema_1FM2CQ==",
    "id": "f181fac1f8d24fdeec39adbbbba5d13297aa6de056e1b26dc0c9e4a723cec7b2",
    "status": "enabled"
}

To use Riak CS with Eucalyptus OSG we need to modify at least the following Eucalyptus properties,

PROPERTY	objectstorage.providerclient	s3
DESCRIPTION	objectstorage.providerclient	Object Storage Provider client to use for backend
PROPERTY	objectstorage.s3provider.s3endpoint	<riakcs_ip:port>
DESCRIPTION	objectstorage.s3provider.s3endpoint	External S3 endpoint.
PROPERTY	objectstorage.s3provider.s3accesskey	********
DESCRIPTION	objectstorage.s3provider.s3accesskey	External S3 Access Key.
PROPERTY	objectstorage.s3provider.s3secretkey	********
DESCRIPTION	objectstorage.s3provider.s3secretkey	External S3 Secret Key.

For Riak CS, the value of objectstorage.providerclient will be s3; when using Walrus, the value for this property will be walrus.
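Putting it together, the properties can be set from the CLC like this (the endpoint, access key, and secret key placeholders come from your Riak CS setup and the admin user created earlier):

euca-modify-property -p objectstorage.providerclient=s3
euca-modify-property -p objectstorage.s3provider.s3endpoint=<riakcs_ip:port>
euca-modify-property -p objectstorage.s3provider.s3accesskey=<access_key>
euca-modify-property -p objectstorage.s3provider.s3secretkey=<secret_key>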

Check this wiki link for a few other optional configuration options.

The Eucalyptus Object Storage Gateway (OSG) service status should be ENABLED now and ready to be used. Point your favorite S3 client to the OSG and start using AWS-compatible object storage.

You can use Eutester if you want to try out Eucalyptus 4.0 with OSG and Riak CS quickly. Set up Eutester and run the following script,

./install_riak_cs.py \
--password foobar \
--config /path/to/config \
--template-path "/templates/" \
--riak-cs-port <port> \
--admin-name <admin_user_name> \
--admin-email <admin_user_email>

The following python script can be used for quick OSG + Riak CS test,

import boto
from boto.ec2.regioninfo import RegionInfo
from boto.s3.connection import OrdinaryCallingFormat
boto_debug = 2
# send boto's debug logging to stdout (the connection below also sets debug=2)
boto.set_stream_logger('boto')

if __name__ == '__main__':

    accesskey="<access_key>"
    secretkey="<secret_key>"
    hostname="<hostname>"

    # connect to the Eucalyptus OSG endpoint (port 8773, path /services/objectstorage)
    conns3osg = boto.connect_s3(aws_access_key_id=accesskey,
                              aws_secret_access_key=secretkey,
                              is_secure=False,
                              host=hostname,
                              port=8773,
                              path="/services/objectstorage",
                              calling_format=OrdinaryCallingFormat(),
                              debug=boto_debug)

    # create a bucket and read it back to verify the OSG round trip
    conns3osg.create_bucket('testbucket')
    conns3osg.get_bucket('testbucket')

Happy New Year Everyone!

Pursuit of Quality at Eucalyptus

At Eucalyptus, if there is one thing we care about, it is the quality of the product. Our goal is to deliver AWS-compatible software that just works. To ensure the highest level of quality we try to follow the best strategy possible in both Development and QA.

We adopted the Agile software development model a while back and we love it. As a part of our strategy, there are several types of projects we open in Jira. After getting the green signal from Product Management, we create an Epic, and then, after architectural discussion, Story tickets are created with Sub-tasks for developers to work on. We also have Jira tickets for New Features, which are similar to Stories but comparatively smaller tasks and generally have no Sub-tasks. At this point one scrum master from each team works closely with the developers to keep track of the progress. Then come Bugs: anyone inside or outside the company can report bugs, and of course they can propose new features as well. Typically the workflow is very simple: all bugs become confirmed either by the Tier3 members or in the bug scrub meetings. Then a developer gets assigned to a bug, the bug status moves from In Progress to In QA, and after successful verification a Quality Engineer moves it to the Release Pending state. At this point the bits are ready to be merged into the master branch. And YES! Everything we do is publicly visible; we love Open Source!

As Quality Engineers, our job starts in the process when a Story, New Feature, or Bug's status becomes In QA.

Eucalyptus is a complex piece of software, and like most other system applications out there, it is necessary to test not only a bug or a specific feature but the entire product, and with continuous development the testing is also a continuous process. Here I will quote one of my colleagues, Kyo Lee: "Q: When do you stop testing? A: You never stop testing." Achieving this goal would never be possible if we planned to do it all manually (or multiplied the number of employees by 4?). Here come Eutester, Eutester4J and se34euca, test frameworks built for Eucalyptus, but not limited to Eucalyptus.

Eutester is a functional testing framework for Eucalyptus written in Python. It uses Boto, a Python interface to Amazon Web Services, to ensure AWS compatibility. Most of the Eucalyptus services are covered by various test cases written on top of Eutester. On the other hand, Eutester is definitely not limited to Eucalyptus functional testing; there are scripts to help with the installation of Eucalyptus, Riak CS, etc. It is also used to collect logs, configuration and other artifacts from the Eucalyptus hosts to debug issues. There are test cases for both administrators and users, and many test cases are built to reproduce and verify bugs. All the test cases can be found in the Eutester github repository.

For Java developers we have Eutester4J, another test framework, built with the AWS Java SDK to test AWS Java SDK compatibility, a.k.a. AWS compatibility. It uses TestNG to build various test suites for Eucalyptus, and Ivy for dependency management to make life easier for all.

Now to test the user console we have this fantastic project that we call se34euca, which uses Selenium and automates the entire process of testing the user console.

Well, we have all these projects, but how do we achieve automation at its highest level? The answer is Jenkins! We use Jenkins to automate all the test cases, taking advantage of continuous integration to drive the test frameworks. Now the answer leads us to more questions: "how/where do I install Jenkins? how do I integrate those frameworks?" Yes, to answer all those questions, we have MicroQA. It is the home of all those frameworks. MicroQA uses Vagrant (yet another cool tool, eh?) to speed up the installation. Yes, it is even possible to test a Eucalyptus cloud from your laptop.

With all these, what we never forget is manual testing. We do test manually, mainly when a new feature becomes available; and sometimes, when we verify certain bugs, manual testing is unavoidable. We use the open source test management software Testlink to manage manual test cases.

Last but not least, unlike typical tech organizations, we work very closely with developers, and we all make sure we understand our job, which is not to count the number of bugs fixed/raised/verified, nor the number of features/test cases, but rather to deliver a quality product.


Eucalyptus Faststart 3.4.1 – cloud-in-a-vm on Fedora 19

Eucalyptus 3.4.1 is releasing soon, I mean, very soon. So, as a part of testing Eucalyptus Faststart, we used a Fedora 19 box to try out Eucalyptus.

Cloud-in-a-vm, eh?

The journey wasn't so bad, but I did have to touch a couple of things I had never used, things I did a while back and forgot, things I didn't know, and so on. But this morning our QA lead Victor Iglesias helped with the missing parts, in other words the cause of my suffering for a while.

Anyway, so, here is what I did to get a cloud running in a vm on a Fedora 19 box.

I installed Fedora 19 on a Core i5 Dell Inspiron laptop. It is better to leave some free space in the volume group while installing Fedora, since we will be creating a 100GB logical volume (LV) for the cloud VM later on. Alternatively, if there is unallocated space available on the HDD, we can use that space for the LV.

The first thing I did was disable SELinux in /etc/selinux/config.

For this setup, I decided to install the @virtualization base package group to keep things simple.

yum install @virtualization

Since we will have to run AWS-like instances inside a VM, we need to enable the nested KVM feature of the kvm_intel module. By default nested KVM is off on most systems.

Open (or create) the file /etc/modprobe.d/kvm-nested.conf and add this line:

options kvm_intel nested=1

Reboot the system to enable this nested virtualization feature.
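If you would rather not reboot, and no VMs are currently running, reloading the module should also pick up the new option (an alternative I did not verify on this particular box):

modprobe -r kvm_intel
modprobe kvm_intel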

Ensure that nested KVM is enabled:

cat /sys/module/kvm_intel/parameters/nested

"Y" indicates that nested KVM is available.

More on nested-kvm, here.

Now we have to create a bridge (e.g. br0) for the VMs.
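I won't cover bridging in detail here, but as a rough sketch (assuming the legacy network scripts rather than NetworkManager, and a wired interface named em1; adjust the names to your system), the bridge can be defined like this:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
ONBOOT=yes
BRIDGE=br0

then restart the network service for the bridge to come up.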

Once the virtualization setup is complete, it's almost time to start the Faststart VM.

For this setup we used a 100GB logical volume (LV).

lvcreate -L 100G -n "fslv" "volumegroup"

If there is only unallocated space available on the HDD but no free space in the volume group, create a partition (e.g. /dev/sda3), then create a physical volume and extend the volume group:

pvcreate /dev/sda3
vgextend "volumegroup" /dev/sda3

Uncomment the following lines in /etc/libvirt/qemu.conf file,

vnc_listen = "0.0.0.0"
vnc_password = "<password>"
user = "root"
group = "root"

Restart libvirtd service,

systemctl restart libvirtd.service

Copy the faststart-3.4.1.iso to /var/lib/libvirt/images/ directory.

Create a three-line script or run these commands manually:

#!/bin/bash
# remove any previous "fs" domain definition, wipe the LV, then boot the Faststart ISO passed as $1
virsh undefine fs
dd if=/dev/zero of=/dev/fedora/fslv bs=1M count=1
virt-install --name fs --cpu host-passthrough --disk /dev/fedora/fslv --ram 4000 --cdrom $1 --graphics vnc,listen=0.0.0.0 --bridge br0

Make sure to pass the ISO file after --cdrom if you are running the above lines manually; otherwise pass the ISO file as an argument to the script.

Connect to the VM with Fedora Remote Desktop Viewer, or any other VNC client you like, and follow the instructions to install Eucalyptus. For this installation I selected cloud-in-a-box to have all the components in the same box. This might take a while, so make sure you are caffeine-enabled.

When the installation is complete, reboot the VM. You might have to start the VM again manually:

virsh --connect qemu:///system
virsh # start fs

It will now configure the Eucalyptus cloud, and within a couple of minutes a Eucalyptus cloud will be ready with one basic image and one load-balancer image.

Connect to the Eucalyptus hybrid user console at the VM's IP:

https://faststart_vm_ip:8888
User Credentials:
  * Account:  demo
  * Username: admin
  * Password: password

By default Eucalyptus Faststart installation creates admin and demo credentials.

Follow the Eucalyptus documentation to explore Eucalyptus further.

More on cloud in a vm: https://github.com/eucalyptus/eucalyptus/wiki/Eucalyptus-Virtual-Cloud

Choose Your Own Adventure: Which Eucalyptus UI is Right for You?

Originally posted on UX Doctor:

Open source projects can often lead to an embarrassment of riches that can really be confusing. This is especially true when it comes to tools you can use to interact with your Eucalyptus cloud.

Options include CLIs, SDKs, and different graphical user interfaces. Each has advantages and disadvantages, strengths and weaknesses, and is targeted to a specific kind of person.

Depending on what you want to do, your level of cloud knowledge, and the kind of user experience you desire, there is a Eucalyptus UI that will fit you perfectly. Let’s take a quick look at each and then you can Choose Your Own Adventure to help you find your ideal fit.

EUCALYPTUS USER CONSOLE

Eucalyptus User Console Dashboard

The Eucalyptus User Console is an open source project with development primarily done internally by Eucalyptus employees. We wanted to give our customers a graphical user interface for self-service deployment of virtual machines on Eucalyptus…


Eucalyptus 3.3.0 in a nutshell

Eucalyptus 3.3.0, the most exciting Eucalyptus release so far, is knocking on the door, or perhaps it has already been released by the time you are reading this post.

Eucalyptus 3.3.0 brings a couple of the most desired Amazon Web Services (AWS) features for cloud users:

1. Elastic Load Balancing (ELB)

Needless to say, this is an AWS ELB-compatible feature being introduced in Eucalyptus 3.3.0.

Creating a basic load balancer:

eulb-create-lb -z PARTI00 -l 'lb-port=80, protocol=HTTP, instance-port=80' MyElb
# output
# DNS_NAME	MyElb-576514848852.lb.localhost

eulb-describe-lbs
# output
# LOAD_BALANCER	MyElb	MyElb-576514848852.lb.localhost	2013-06-10T06:57:52.07Z

Register instances with Eucalyptus Elastic Load Balancer,

eulb-register-instances-with-lb MyElb --instances i-25D3415E,i-16463E17
# output
# INSTANCE i-25D3415E
# INSTANCE i-16463E17

eulb-describe-instance-health MyElb
# output
# INSTANCE	i-25D3415E	InService
# INSTANCE	i-16463E17	InService

Few other ELB operations,

# deregister instances from ELB
eulb-deregister-instances-from-lb MyElb --instances i-16463E17

# delete ELB
eulb-delete-lb MyElb

2. CloudWatch

CloudWatch is another AWS-compatible feature shipping with Eucalyptus 3.3.0. It enables cloud users to view, collect and analyze the metrics of their cloud resources. It also lets cloud users configure alarm actions based on the data from those metrics.

Enable instance monitoring,

# on existing instance
euca-monitor-instances i-25D3415E

# during instance run
euca-run-instances -k batman1key emi-90E83973 --monitor

# disable monitoring
euca-unmonitor-instances i-DB5842DC

Euwatch

# returns all the available metrics
euwatch-list-metrics

# returns list of metrics with particular metric name
euwatch-list-metrics --metric-name CPUUtilization

# returns list of metrics with particular namespace
euwatch-list-metrics --namespace AWS/EC2

# returns list of metrics with particular dimensions
euwatch-list-metrics --dimensions "InstanceId=i-25D3415E"

# returns time-series data for one or more statistics of a given MetricName
euwatch-get-stats CPUUtilization \
> --start-time 2013-06-10T07:09:00.043Z \
> --end-time 2013-06-10T08:46:54.043Z \
> --period 3600 \
> --statistics "Average,Minimum,Maximum" \
> --namespace "AWS/EC2" \
> --dimensions "InstanceId=i-25D3415E"

3. Auto Scaling

Eucalyptus Auto Scaling consists of three fundamental concepts,

  1. Launch Configurations
  2. Auto Scaling Groups
  3. Auto Scaling Policies

Create a launch configuration,

euscale-create-launch-config MyLC \
> --image-id emi-90E83973 \
> --instance-type m1.small

Create auto scaling group,

euscale-create-auto-scaling-group MyASGroup \
> --launch-configuration MyLC \
> --availability-zones PARTI00 \
> --min-size 1 --max-size 3

# describe auto scaling groups
euscale-describe-auto-scaling-groups

Create scale out policy,

euscale-put-scaling-policy MyScaleoutPolicy \
> --auto-scaling-group MyASGroup \
> --adjustment=30 \
> --type PercentChangeInCapacity

# output
# arn:aws:autoscaling::576514848852:scalingPolicy:c2a8f9dc-1c75-49d5-b54d-8ef87fe29e9a:autoScalingGroupName/MyASGroup:policyName/MyScaleoutPolicy

Creating scale in policy,

euscale-put-scaling-policy MyScaleInPolicy \
> --auto-scaling-group MyASGroup \
> --adjustment=-2  --type ChangeInCapacity

# output
# arn:aws:autoscaling::576514848852:scalingPolicy:a4148c27-81da-4eff-9140-cba3ba9381cb:autoScalingGroupName/MyASGroup:policyName/MyScaleInPolicy

CloudWatch Alarm

Eucalyptus CloudWatch alarms let cloud users take actions on their resources (e.g. instances, EBS volumes, Auto Scaling instances, ELBs) automatically, based on rules that the users define over the metrics. Currently, Eucalyptus CloudWatch alarms work with Auto Scaling policies.

Create alarm for scale out capacity and scale in capacity,

# create scale out alarm
euwatch-put-metric-alarm AddCapacity \
> --metric-name CPUUtilization \
> --namespace "AWS/EC2" \
> --statistic Average \
> --period 120 --threshold 80 \
> --comparison-operator GreaterThanOrEqualToThreshold \
> --dimensions "AutoScalingGroupName=MyASGroup" \
> --evaluation-periods 2 \
> --alarm-actions arn:aws:autoscaling::576514848852:scalingPolicy:c2a8f9dc-1c75-49d5-b54d-8ef87fe29e9a:autoScalingGroupName/MyASGroup:policyName/MyScaleoutPolicy

# create scale in alarm
euwatch-put-metric-alarm RemoveCapacity \
> --metric-name CPUUtilization \
> --namespace "AWS/EC2" \
> --statistic Average \
> --period 120 --threshold 40 \
> --comparison-operator LessThanOrEqualToThreshold \
> --dimensions "AutoScalingGroupName=MyASGroup" \
> --evaluation-periods 2 \
> --alarm-actions arn:aws:autoscaling::576514848852:scalingPolicy:a4148c27-81da-4eff-9140-cba3ba9381cb:autoScalingGroupName/MyASGroup:policyName/MyScaleInPolicy

# delete alarms (alarm names are passed as arguments)
euwatch-delete-alarms AddCapacity RemoveCapacity

Set the alarm state to OK/ALARM for testing,

euwatch-set-alarm-state --state-value OK \
> --state-reason "testing" AddCapacity

euwatch-set-alarm-state --state-value OK \
> --state-reason "testing" RemoveCapacity

euwatch-describe-alarms

# output
# AddCapacity	OK	arn:aws:autoscaling::576514848852:scalingPolicy:c2a8f9dc-1c75-49d5-b54d-8ef87fe29e9a:autoScalingGroupName/MyASGroup:policyName/MyScaleoutPolicy	AWS/EC2	CPUUtilization	120	Average	2	GreaterThanOrEqualToThreshold	80.0
# RemoveCapacity	OK	arn:aws:autoscaling::576514848852:scalingPolicy:a4148c27-81da-4eff-9140-cba3ba9381cb:autoScalingGroupName/MyASGroup:policyName/MyScaleInPolicy	AWS/EC2	CPUUtilization	120	Average	2	LessThanOrEqualToThreshold	40.0

4. Resource Tagging

Resource tagging is another AWS feature that was missing until 3.2.2. This is a very important feature and is also used by many 3rd-party tools and applications.

euca-create-tags vol-65803EB8 --tag "testtag"
# TAG volume vol-65803EB8 testtag

euca-describe-volumes
# VOLUME vol-65803EB8 2 PARTI00 available 2013-06-10T12:29:41.082Z standard
# TAG volume vol-65803EB8 testtag
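Tags can be removed the same way with euca-delete-tags:

euca-delete-tags vol-65803EB8 --tag testtag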

5. More instance types

euca-describe-instance-types
INSTANCETYPE	Name         CPUs  Memory (MB)  Disk (GB)
INSTANCETYPE	m1.small        1          256          5
INSTANCETYPE	t1.micro        1          256          5
INSTANCETYPE	m1.medium       1          512         10
INSTANCETYPE	c1.medium       2          512         10
INSTANCETYPE	m1.large        2          512         10
INSTANCETYPE	m1.xlarge       2         1024         10
INSTANCETYPE	c1.xlarge       2         2048         10
INSTANCETYPE	m2.xlarge       2         2048         10
INSTANCETYPE	m3.xlarge       4         2048         15
INSTANCETYPE	m2.2xlarge      2         4096         30
INSTANCETYPE	m3.2xlarge      4         4096         30
INSTANCETYPE	cc1.4xlarge     8         3072         60
INSTANCETYPE	m2.4xlarge      8         4096         60
INSTANCETYPE	hi1.4xlarge     8         6144        120
INSTANCETYPE	cc2.8xlarge    16         6144        120
INSTANCETYPE	cg1.4xlarge    16        12288        200
INSTANCETYPE	cr1.8xlarge    16        16384        240
INSTANCETYPE	hs1.8xlarge    48       119808      24000

Well, if you used Eucalyptus before, I think, the improvement is very much visible :)

6. Maintenance Mode:

Eucalyptus 3.3.0 also comes with a feature that many cloud administrators have been waiting a long time for: Maintenance Mode.

In other words, migrating a single instance to another Node Controller, or evacuating an entire Node Controller, is now supported by Eucalyptus.

# evacuate a Node Controller
euca-migrate-instances --source 10.111.1.119

# migrate specific instance to another destination
euca-migrate-instances -i i-38A74228 --dest 10.111.1.116

For more information check the Eucalyptus 3.3.0 roadmap. An architectural overview for the 3.3.x release can be found on github. Here is a list of the new stories that are going to take place in the 3.3.0 release.

More AWS Compatibility

Eucalyptus 3.3.x is the most AWS-compatible release ever; it has more API compatibility than Eucalyptus ever had. Here are a couple of our ongoing efforts around the different AWS SDKs and open source libraries:

  1. AWS SDK for Java
  2. AWS SDK for Ruby
  3. AWS SDK for PHP
  4. AWS toolkit for Eclipse
  5. jclouds on Eucalyptus - This is comparatively the newest among all; we are tracking this as a story in Jira, EUCA-5671.

Eucalyptus 3.3.0 has a few very important improvements for Boot-from-EBS instances (a sketch of multiple block device mappings follows the list),

1. Root block device is /dev/sda and not /dev/sda1
2. Allow multiple EBS block device mappings
3. No more default ephemeral disk at /dev/sdb
4. Metadata service changes
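As a hedged sketch of item 2, multiple EBS block device mappings can be passed at launch time. The snapshot IDs and sizes below are placeholders, and the image is assumed to be EBS-backed; the key pair and emi are the ones used earlier in this post:

euca-run-instances -k batman1key emi-90E83973 \
> -b /dev/sdf=snap-11111111:10:true \
> -b /dev/sdg=snap-22222222:20:true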

Euca2ools 3.0 is huge in Eucalyptus 3.3.x. It has been completely ported to requestbuilder. Euca2ools 3 is slim and beautiful and it works!

One interesting fact about Euca2ools from the developers,

% git diff --shortstat 2.1.3.. -- bin euca2ools generate-manpages.sh install-manpages.sh setup.py
 432 files changed, 14973 insertions(+), 15097 deletions(-)

euca2ools 3 adds three entirely new services and tons of new
functionality to the previous version, but it still manages to weigh
in at less code than it had before.

Read more about euca2ools 3, "What's new in Euca2ools 3" Part 1 and Part 2.

Along with all these new features, Eucalyptus 3.3.0 has many bug fixes as well; there are many other documented and undocumented fixes coming in 3.3.0. Some administrator tools are also on the way and will see the light very soon.

If you are interested in trying it from source code, you are more than welcome to check out Eucalyptus from the public github repository.

Some places to give you feedback:

Bug report: eucalyptus.atlassian.net
Questions: engage.eucalyptus.com