Eucalyptus Block Storage Service with Ceph RBD

Press: HP acquires Eucalyptus :)

Embracing open source technology is nothing new for us; at Eucalyptus we practice it every day. In Eucalyptus 4.1.0, we chose Ceph, a beautiful commodity storage technology, for our Block Storage Service. Yay! While the product is still under heavy development, I am happy to give an update on how Ceph can be used as a block storage backend for the Eucalyptus Storage Controller (SC).

First things first: to use Ceph as a block storage manager, we need a basic Ceph cluster deployed. Apart from supporting several automation technologies for deployment, Ceph also ships with a handy tool called ceph-deploy, which we found really pleasant for deploying Ceph clusters. But since our requirement is to deploy Ceph clusters of different sizes on demand for testing Ceph+Eucalyptus setups, we needed some automation around it. After giving our use case some thought, we decided to take a path that would be easy for us to maintain: a small Chef cookbook that deploys a Ceph cluster with ceph-deploy. Fun, isn’t it? ;-)

 

For those who want a quick taste of Eucalyptus and Ceph at this early phase, here is how you do it:

To deploy using the ceph-cookbook, which uses ceph-deploy:

1. Clone the cookbook from https://github.com/shaon/ceph-cookbook.git (tested with 1x monitor node and 2x OSDs; it should work with a larger number of OSDs)

2. Upload the cookbook to your Chef server

3. Update the environment file and upload it to the Chef server

4. Deploy the Ceph cluster using Motherbrain

mb ceph-deploy bootstrap bootstrap.json --environment ceph_cluster -v

To add a Ceph cluster as the block storage backend for Eucalyptus, we need a newer kernel with RBD module support, and Ceph installed on the Eucalyptus Storage Controller (SC). For this demo we will use the LT (long-term) kernel from ELRepo.

yum install ceph
yum --enablerepo=elrepo-kernel install kernel-lt

You may need to set the new kernel as the default in grub.conf and reboot the host.

Now, from your MON node, copy ceph.conf and ceph.client.admin.keyring from the /root/mycluster directory and place them in the /etc/ceph/ directory of the Eucalyptus SC host.
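For reference, the copied ceph.conf is usually just a few lines. A hypothetical example (the fsid, monitor name, and address below are placeholders; yours will differ):

```ini
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = mon1
mon_host = 10.111.5.100
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```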

Register the Eucalyptus Storage Controller and set the <cluster>.storage.blockstoragemanager property to “ceph”, e.g.,

euca-modify-property -p PARTI00.storage.blockstoragemanager=ceph

At this point your Eucalyptus Storage Controller should be ready to be used with Ceph as a storage backend!

Now test your Eucalyptus SC with Ceph RBD by creating volumes and snapshots.

To check the ongoing events on your Ceph cluster, run `ceph -w` on the Monitor node.

Happy Cephing with Eucalyptus!

Even though it’s under heavy development, we would love to hear about your experience with Ceph+Eucalyptus.

Join us at #eucalyptus on Freenode.

Update: My colleague and good friend Swathi Gangisetty pointed out that no newer kernel is required for the Eucalyptus Storage Controller to be operational. Currently, the SC interacts with Ceph via librbd through JNA bindings that are packaged as a jar and come with Eucalyptus.

Installing Apache Hadoop on Eucalyptus using Hortonworks Data Platform stack and Apache Ambari

This post demonstrates a Hadoop deployment on Eucalyptus using Apache Ambari and the Hortonworks Data Platform (HDP) stack. Bits and pieces:

  1. Eucalyptus 4.0.0
  2. Hortonworks Data Platform (HDP) stack
  3. Apache Ambari
  4. 4 instance-store/EBS-backed instances
    1. 2 vcpus, 1024MB memory, 20GB disk space
    2. CentOS 6.5 base image

For this demo we tried to use minimal resources. Our Eucalyptus deployment topology looks like this:

1x (Cloud Controller + Walrus + Storage Controller + Cluster Controller)
1x (Node Controller + Object Storage Gateway)

To meet our instance requirement, we changed the instance type according to our needs.

AVAILABILITYZONE|- m1.xlarge   0003 / 0004   2   1024    20

Preparation

Run an instance of m1.xlarge or any other instance type that meets the above requirements. When the instance is running, copy the keypair that was used to launch it into the instance at .ssh/id_rsa; we will be using this same keypair for all the instances.

[root@b-11 ~]# euca-describe-instances
RESERVATION	r-46725ad6	691659499425	default
INSTANCE	i-b65be3ed	emi-a4aac1df	euca-10-111-100-0.eucalyptus.b-11.autoqa.qa1.eucalyptus-systems.com	euca-172-27-227-26.eucalyptus.internal	running	sshlogin	0		m1.xlarge	2014-06-25T10:39:37.952Z	PARTI00				monitoring-disabled	10.111.100.0	172.27.227.26			instance-store					hvm			sg-57c0750a
TAG	instance	i-b65be3ed	euca:node	10.111.1.12

[root@b-11 ~]# scp -i sshlogin sshlogin root@euca-10-111-100-0.eucalyptus.b-11.autoqa.qa1.eucalyptus-systems.com:/root/.ssh/id_rsa

Run three more instances of the same type with the same keypair and image, and note down their private IPs for later use. Add security group rules for Apache Ambari,

[root@b-11 ~]# euca-authorize -P tcp -p 8080 default
[root@b-11 ~]# euca-authorize -P tcp -p 0-65535 -s 172.0.0.0/7 default

Set Apache Ambari and HDP repository inside the instance,

P. S. [root@euca-172-27-227-26 ~] refers to the Eucalyptus instance

[root@euca-172-27-227-26 ~]# wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.6.0/ambari.repo -O /etc/yum.repos.d/ambari.repo

[root@euca-172-27-227-26 ~]# wget http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.1-latest/hdp.repo -O /etc/yum.repos.d/hdp.repo

Install Apache Ambari,

[root@euca-172-27-227-26 ~]# yum install ambari-server -y

Configure Apache Ambari,

[root@euca-172-27-227-26 ~]# ambari-server setup

[Select all default options unless a change is necessary.] Start the Apache Ambari server,

[root@euca-172-27-227-26 ~]# ambari-server start

Our Apache Ambari installation is complete. Now log in to the Ambari dashboard at http://instance-public-ip:8080 with username ‘admin’ and password ‘admin’.

Installing Hadoop using the HDP stack

  • Add a cluster name
  • From Select Stack menu, select HDP 2.1 stack
  • In Install Options menu, add the private IPs/hostnames and private key that we copied previously at .ssh/id_rsa
Ambari Hadoop Installation

  • The Confirm Hosts view checks host availability and reports back. All green indicates that we are good to go.
Ambari Hadoop Installation

  • For this demo, select all services in the Choose Services view
  • In the Assign Masters view, Ambari suggests reasonable settings; for this demo we will use the defaults
  • Again, in the Assign Slaves and Clients view, we will stick with the defaults
Ambari Hadoop Installation

  • In the Customize Services view, set the credentials for Hive, Oozie, and Nagios.
  • Review, and then Install, Start and Test
Ambari Hadoop Installation

And there is our Hadoop cluster with the Hortonworks HDP stack on top of Eucalyptus. Happy Hadooping!

Eucalyptus FourZero (4.0)

Eucalyptus 4.0 is one of the biggest releases in Eucalyptus history, with several major architectural changes. Many re-engineered components and some behavioral changes have landed with this new release.

Major changes in Eucalyptus 4.0

 

Service Separation

This is the biggest one, and probably the one many of us have been waiting for a long time. From 4.0, the CLC database and the user-facing services can be installed/registered on different hosts. That said, it is now also possible to have multiple user-facing services (UFS).

UFS registration command looks like this,

euca_conf --register-service --service-type user-api --host 10.111.1.110 --service-name API_110

And the command to describe the UFS is given below,

euca-describe-services -T user-api

Output:

SERVICE user-api API_110 API_110 ENABLED 45 http://10.111.1.110:8773/services/User-API arn:euca:bootstrap:API_110:user-api:API_110/
SERVICE user-api API_112 API_112 ENABLED 45 http://10.111.1.112:8773/services/User-API arn:euca:bootstrap:API_112:user-api:API_112/
SERVICE user-api API_119 API_119 ENABLED 45 http://10.111.1.119:8773/services/User-API arn:euca:bootstrap:API_119:user-api:API_119/
SERVICE user-api API_179 API_179 ENABLED 45 http://10.111.1.179:8773/services/User-API arn:euca:bootstrap:API_179:user-api:API_179/

Object Storage Gateway (OSG)

Another attractive feature of Eucalyptus 4.0. With this new service, it is possible to use different object storage backends. For now, OSG has complete support for RiakCS and WalrusBackend as object storage backends. Other object stores such as Ceph should be pluggable with OSG as well, but this is not fully tested.

Object Storage Gateway and RiakCS were discussed in more detail in previous posts.

Image Management

This is another great addition to Eucalyptus; image management has never been this easy. One important change: from 4.0, Eustore has been replaced with a couple of other interesting commands in the toolset.

Installing an HVM image has never been easier,

euca-install-image -i /root/precise-server-cloudimg-amd64-disk1.img -n "demoimage" -r x86_64 --virtualization-type hvm -b demobucket

Another interesting fact: it is now possible to get an EBS-backed image from an HVM image with a single command,

euca-import-volume /root/precise-server-cloudimg-amd64-disk1.img --format raw \
--availability-zone PARTI00 --bucket demobucket --owner-akid $EC2_ACCESS_KEY \
--owner-sak $EC2_SECRET_KEY --prefix demoimportvol --description "demo import volume"

Run the following command to check the conversion task status,

euca-describe-conversion-tasks

When it completes, create a snapshot from the volume ID in the describe output and register the EBS-backed image.

Heads up: an imaging worker instance will appear once the conversion task is started.

There is another super handy command that creates an EBS-backed image from an HVM image and runs an instance with the provided details,

euca-import-instance /root/precise-server-cloudimg-amd64-disk1.img --format raw \
--architecture x86_64 --platform Linux --availability-zone PARTI00 --bucket ibucket \
--owner-akid $EC2_ACCESS_KEY --owner-sak $EC2_SECRET_KEY --prefix image-name-prefix \
--description "textual description" --key sshlogin --instance-type m1.small

EDGE Networking Mode

EDGE is a new networking mode that was introduced in 3.4 as a tech-preview feature. The main motivation behind it is to remove the need for the Cluster Controller to be in the data path for all the running VMs. It also eliminates the need to tag VLAN packets to achieve Layer 2 isolation between the VMs. In this mode, a new standalone component called eucanetd runs on the Node Controller; it maintains the networking and removes a single point of failure.

Re-engineered Eucalyptus Console

This is one of the biggest changes in 4.0. We said goodbye to the Eucalyptus Admin UI (https://<CLC_IP_address>:8443) and the Eucalyptus User Console, and welcomed the newly designed EucaConsole, now with administrative features.

EucaConsole 4.0

Tech-Preview of CloudFormation

CloudFormation!!! Yes, the CloudFormation feature has been implemented and released in Eucalyptus 4.0 as a tech preview, though the implementation is already in pretty good shape.

In the current implementation, the CloudFormation service does not come with the other user-facing services; it needs to be registered separately on the same host as the CLC/DB (EUCA-9505).

euca_conf --register-service -T CloudFormation -H 10.111.1.11 -N API_11

Here is a basic CloudFormation template just to try it out right away,

{
  "Parameters" : {
    "KeyName" : {
      "Description" : "The EC2 Key Pair to allow SSH access to the instance",
      "Type" : "String"
    }
  },
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" }, "default" ],
        "KeyName" : { "Ref" : "KeyName" },
        "ImageId" : "emi-3c17bd33"
      }
    },

    "InstanceSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable SSH access via port 22",
        "SecurityGroupIngress" : [ {
          "IpProtocol" : "tcp",
          "FromPort" : "22",
          "ToPort" : "22",
          "CidrIp" : "0.0.0.0/0"
        } ]
      }
    }
  }
}
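Hand-edited templates often break on a stray brace, so before reaching for euform-validate-template you can sanity-check the JSON locally with Python's standard json module. This is just a sketch; check_template is a hypothetical helper, and the sample string stands in for the full template file contents:

```python
import json

def check_template(text):
    """Parse a CloudFormation template and map logical IDs to resource types."""
    doc = json.loads(text)  # raises ValueError if the JSON is malformed
    return {name: res["Type"] for name, res in doc["Resources"].items()}

# Trimmed-down sample; run it against the real template file contents instead.
sample = '{"Resources": {"Ec2Instance": {"Type": "AWS::EC2::Instance"}}}'
print(check_template(sample))  # {'Ec2Instance': 'AWS::EC2::Instance'}
```

This only proves the JSON is well-formed; euform-validate-template still does the CloudFormation-level checks.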

The following command can be used to validate the template,

euform-validate-template --template-file cloudformationdemo.template

Then create a stack with the template,

euform-create-stack --template-file cloudformationdemo.template --parameter KeyName=demokey MyDemoStack

Check CloudFormation stack status,

euform-describe-stacks MyDemoStack

Output:
STACK MyDemoStack CREATE_COMPLETE Complete! 2014-06-04T14:02:27.38Z

Check CF stack resources,

euform-describe-stack-resources -n MyDemoStack

More FourZero

Apart from those, another big improvement is in Administrative Roles. There are now pre-defined roles for the Eucalyptus admin account, e.g., Cloud Account Admin, Cloud Resource Admin, and Infrastructure Admin. ELB supports session stickiness, modifying instance attributes is supported, and so on. Many AWS compatibility issues have also been fixed in this fantastic release.

Installing Eucalyptus is now easier than ever. You can start with a CentOS 6.5 minimal server and get your own Amazon compatible Eucalyptus cloud.

To get started run the following command and have your own private cloud up and running,

bash <(curl -Ls http://eucalyptus.com/install)

Enjoy Eucalyptus 4.0!!!

Clustered Riak CS with Eucalyptus Object Storage Gateway

In the last post, we installed Riak CS on a single node. For production, a deployment of five or more nodes is required for better performance and reliability. Riak replicates data three times by default, and in a smaller deployment this replication requirement may not be met properly; it may also compromise the fault tolerance of the cluster, since fewer nodes carry higher workloads.

According to the documentation:

If you have 3 nodes, and require all 3 to replicate, 100% of your nodes will respond to a single request. If you have 5, only 60% need respond.

In this post, we will use 5 nodes to create a Riak cluster for our Eucalyptus setup. Since we will be using Riak CS, we will install Riak CS on each node. We will also need a Stanchion server.

Overall, our setup will look like below:

a) 5x Riak nodes
b) 5x Riak CS nodes (one in each Riak node)
c) 1x Stanchion node
d) 1x Riak Control
e) 1x Riak CS Control
f) 1x Nginx server for load balancing between the Riak CS nodes

First, we will install Riak and Riak CS on all the nodes,

yum install http://yum.basho.com/gpg/basho-release-6-1.noarch.rpm -y
yum install riak riak-cs -y

Configure Riak:

Modify the following lines in /etc/riak/app.config with the host IP address,

{pb, [ {"127.0.0.1", 8087 } ]}

{http, [ {"127.0.0.1", 8098 } ]},

Find the following line in /etc/riak/app.config and replace it with the multi-backend setup,

from:

{storage_backend, riak_kv_bitcask_backend},

to:

            {add_paths, ["/usr/lib64/riak-cs/lib/riak_cs-1.4.5/ebin"]},
            {storage_backend, riak_cs_kv_multi_backend},
            {multi_backend_prefix_list, [{<<"0b:">>, be_blocks}]},
            {multi_backend_default, be_default},
            {multi_backend, [
              {be_default, riak_kv_eleveldb_backend, [
                {max_open_files, 50},
                {data_root, "/var/lib/riak/leveldb"}
              ]},
              {be_blocks, riak_kv_bitcask_backend, [
                {data_root, "/var/lib/riak/bitcask"}
              ]}
            ]},

And add the following line to riak_core section in the config file,

{default_bucket_props, [{allow_mult, true}]},

Change the following line in /etc/riak/vm.args to use the host IP address,

-name riak@127.0.0.1

Configure Riak CS:

Modify the following lines in /etc/riak-cs/app.config with the host IP address,

{cs_ip, "127.0.0.1"},

{riak_ip, "127.0.0.1"},

{stanchion_ip, "127.0.0.1"},

We will need to create an admin user. Modify the following line and set the value to true for now; change it back before going into production,

{anonymous_user_creation, false},

Change the following line in /etc/riak-cs/vm.args to use the host IP address,

-name riak-cs@127.0.0.1

Follow the same procedure on the rest of the nodes.

Configure Stanchion:

Install Stanchion on one of the servers,

yum install stanchion -y

Modify the following lines in /etc/stanchion/app.config with the host IP address,

{stanchion_ip, "127.0.0.1"},

{riak_ip, "127.0.0.1"},

Modify the following line in /etc/stanchion/vm.args with the host IP address,

-name stanchion@127.0.0.1

Start Riak components on the server where Stanchion is installed:

riak start
riak-cs start
stanchion start

Create admin user:

curl -H 'Content-Type: application/json' \
-X POST http://10.111.5.181:8080/riak-cs/user \
--data '{"email":"admin@admin.com", "name":"admin"}'

From the output, save the following two values,

"key_id":"UMSNH00MXO57XNQ4FH05",
"key_secret":"sApGkHzUaNQ0_54BqwbiofH50qzRb4RLi7hFnQ=="
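The curl response is a JSON document, so pulling out the two fields is a one-liner with Python's json module. A sketch (the credential values below are the sample placeholders from above, not real keys):

```python
import json

# Sample response body from the user-creation request (placeholder values).
response = ('{"email": "admin@admin.com", "name": "admin", '
            '"key_id": "UMSNH00MXO57XNQ4FH05", '
            '"key_secret": "sApGkHzUaNQ0_54BqwbiofH50qzRb4RLi7hFnQ=="}')

creds = json.loads(response)
admin_key, admin_secret = creds["key_id"], creds["key_secret"]
print(admin_key, admin_secret)
```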

In a production system, you will want to change the anonymous_user_creation setting back to false after creating the admin user.

In /etc/riak-cs/app.config and /etc/stanchion/app.config, replace the following two values with the key_id and key_secret,

{admin_key, "admin-key"},
{admin_secret, "admin-secret"},

Restart both riak-cs and stanchion.

Setting up Riak Cluster:

Now we will join all the nodes. Run the following from each node, using one fixed node as the join target (for this guide, I used the Stanchion node’s IP):

riak-admin cluster join riak@<node-ip>
riak-admin cluster plan
riak-admin cluster commit

Activate Riak Control:

Modify the following line in /etc/riak/app.config with the host IP address,

{https, [{ "127.0.0.1", 8069 }]},

Uncomment the ssl configuration and set file path as appropriate,

{ssl, [
       {certfile, "/etc/riak/cert.pem"},
       {keyfile, "/etc/riak/key.pem"}
     ]},

Follow this guide to create a self-signed certificate:

http://www.akadia.com/services/ssh_test_certificate.html

Set the following to true in the riak_control section,

{enabled, false},

and set username/password,

{userlist, [{"user", "pass"}
]},

Login to the following url to access Riak Control web interface,

https://RIAK-NODE-IP:8069/admin

Riak Control

Install and Configure Nginx:

We will use Nginx as a load balancer between the Riak CS nodes. It can be installed on any of the nodes, or on an external server that acts as the load balancer.

We will be installing Riak CS Control, which appears to require HTTP/1.1 proxying (available from Nginx 1.1.4). So we will use the latest stable version of Nginx on our Nginx server.

yum install http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm -y
yum install nginx -y

The Nginx config file will look like this [source],

upstream riak_cs_host {
  server <riak-cs-node1-ip>:8080;
  server <riak-cs-node2-ip>:8080;
  server <riak-cs-node3-ip>:8080;
  server <riak-cs-node4-ip>:8080;
  server <riak-cs-node5-ip>:8080;
}

server {
  listen   80;
  server_name  _;
  access_log  /var/log/nginx/riak_cs.access.log;
  client_max_body_size 0;

location / {
  proxy_set_header Host $http_host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_redirect off;

  proxy_connect_timeout      90;
  proxy_send_timeout         90;
  proxy_read_timeout         90;
  proxy_buffer_size    128k;
  proxy_buffers     4 256k;
  proxy_busy_buffers_size 256k;
  proxy_temp_file_write_size 256k;
  proxy_http_version    1.1;

  proxy_pass http://riak_cs_host;
  }
}
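With no explicit balancing directive, Nginx distributes requests across the upstream servers round-robin. Conceptually (a plain-Python illustration of the scheduling, not Nginx code; the node names are placeholders):

```python
from itertools import cycle

# Five Riak CS nodes behind the riak_cs_host upstream (placeholder names).
backends = cycle(["node%d:8080" % i for i in range(1, 6)])

# Each incoming request is handed to the next node in turn,
# wrapping back to the first node after the fifth.
first_seven = [next(backends) for _ in range(7)]
print(first_seven)
```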

Install and Configure Riak CS Control:

Run the following command to install Riak CS Control,

yum install http://s3.amazonaws.com/downloads.basho.com/riak-cs-control/1.0/1.0.2/rhel/6/riak-cs-control-1.0.2-1.el6.x86_64.rpm -y

Modify the following lines in /etc/riak-cs-control/app.config with the Nginx server’s IP address and proxy port,

{cs_proxy_host, "127.0.0.1" },
{cs_proxy_port, 8080 },

Set the admin credentials created above in the /etc/riak-cs-control/app.config file.

Riak CS Control

Configure Eucalyptus:

Now, as usual, set the Riak CS properties so it can be used as the Eucalyptus Object Storage Gateway (OSG) backend,

Update:

Instead of using the Riak CS admin account, which has special admin privileges, create a regular Riak CS account via Riak CS Control or the command line (as above) and use that for Eucalyptus.

euca-modify-property -p objectstorage.s3provider.s3endpoint=NGINX-IP

euca-modify-property -p objectstorage.s3provider.s3accesskey=ACCESS-KEY

euca-modify-property -p objectstorage.s3provider.s3secretkey=SECRET-KEY

Enjoy multi-clustered Riak CS with Eucalyptus!

Eucalyptus Object Storage Gateway with Riak CS

Eucalyptus 4.0 is the next major release of Eucalyptus. One of the exciting features of this release is the Object Storage Gateway (OSG). It can use Riak CS as a scalable storage backend; it also works with Walrus. The Object Storage Gateway first came out as a tech preview in the 3.4 release. To use Riak CS with OSG, an existing Riak CS setup is required.

In this post we will build a minimal Riak CS setup to work with Eucalyptus OSG. For this demo I am using a Eucalyptus 4.0 setup built from the source currently available on GitHub. We will install all the necessary Riak CS components on the same host that we are using for the frontend; this is a proof-of-concept setup and is not recommended for production deployment.


Eucalyptus 4.0 introduces a new component, the Object Storage Gateway (OSG). Run the following command from the Cloud Controller (CLC) to register this new component,

euca-register-object-storage-gateway --partition objectstorage --host <osg host ip address> <component name>

Most likely the OSG component status will be BROKEN at this point, until we configure Eucalyptus properties to work with Riak CS.

Riak CS installation and configuration:


Riak CS is built on top of Riak, one of the most popular open source distributed databases. For a basic Riak CS install we need Riak, Stanchion, and finally Riak CS (Riak 1.4.6, Stanchion 1.4.3, Riak CS 1.4.3).

Raise the open-file limit,

ulimit -n 65536

Install Riak CS,

yum install -y http://yum.basho.com/gpg/basho-release-6-1.noarch.rpm

yum install -y riak stanchion riak-cs

The rest of the configuration steps are very straightforward and can be found here.

By default Riak CS uses port 8080, which Eucalyptus also uses for HTTP redirects. We need to change one of the two ports to resolve the conflict and get both Riak CS and Eucalyptus running on the same host.

To modify Eucalyptus port, run the following from CLC,

euca-modify-property -p www.http_port=<port>

To modify the Riak CS port, change cs_port in /etc/riak-cs/app.config,

{cs_port, <port> },
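A quick way to confirm the conflict is resolved is to check whether the port is actually free on the host. A small sketch using Python's socket module (port_is_free is a hypothetical helper; run it on the host in question):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Try to bind the port; if the bind succeeds, nothing else owns it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

# On a host where Riak CS (or Eucalyptus) still holds 8080, this prints False.
print(port_is_free(8080))
```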

While installing Riak CS, we created an admin user and got a JSON output similar to the following, with id, key_id and key_secret. These credentials can be used to access Riak Cloud Storage just like Amazon S3.

{
    "email": "admin@admin.com",
    "display_name": "admin",
    "name": "admin",
    "key_id": "BMNVZPO4ZXYAYEIFF9PG",
    "key_secret": "JXvFrTEx4eqirMGJnYZqvZiek7ZDema_1FM2CQ==",
    "id": "f181fac1f8d24fdeec39adbbbba5d13297aa6de056e1b26dc0c9e4a723cec7b2",
    "status": "enabled"
}

To use Riak CS with Eucalyptus OSG, we need to modify at least the following Eucalyptus properties,

PROPERTY	objectstorage.providerclient	s3
DESCRIPTION	objectstorage.providerclient	Object Storage Provider client to use for backend
PROPERTY	objectstorage.s3provider.s3endpoint	<riakcs_ip:port>
DESCRIPTION	objectstorage.s3provider.s3endpoint	External S3 endpoint.
PROPERTY	objectstorage.s3provider.s3accesskey	********
DESCRIPTION	objectstorage.s3provider.s3accesskey	External S3 Access Key.
PROPERTY	objectstorage.s3provider.s3secretkey	********
DESCRIPTION	objectstorage.s3provider.s3secretkey	External S3 Secret Key.

For Riak CS, objectstorage.providerclient will be s3; when using Walrus, the value of this property will be walrus.

Check this wiki link for a few other optional configuration options.

The Eucalyptus Object Storage Gateway (OSG) service status should now be ENABLED and ready to use. Point your favorite S3 client at the OSG and start using AWS-compatible object storage.

You can use Eutester if you want to quickly try out Eucalyptus 4.0 with OSG and Riak CS. Set up Eutester and run the following script,

./install_riak_cs.py \
--password foobar \
--config /path/to/config \
--template-path "/templates/" \
--riak-cs-port <port> \
--admin-name <admin_user_name> \
--admin-email <admin_user_email>

The following python script can be used for quick OSG + Riak CS test,

import boto
from boto.s3.connection import OrdinaryCallingFormat

boto_debug = 2
boto.set_stream_logger('paws')

if __name__ == '__main__':

    accesskey="<access_key>"
    secretkey="<secret_key>"
    hostname="<hostname>"

    conns3osg = boto.connect_s3(aws_access_key_id=accesskey,
                              aws_secret_access_key=secretkey,
                              is_secure=False,
                              host=hostname,
                              port=8773,
                              path="/services/objectstorage",
                              calling_format=OrdinaryCallingFormat(),
                              debug=boto_debug)

    conns3osg.create_bucket('testbucket')
    conns3osg.get_bucket('testbucket')

Happy New Year Everyone!

Pursuit of Quality at Eucalyptus

At Eucalyptus, if there is one thing we care about, it is the quality of the product. Our goal is to deliver AWS-compatible software that just works. To ensure the highest level of quality, we try to follow the best strategy possible in both Development and QA.

We adopted the Agile software development model a while back, and we love it. As part of our strategy, there are several types of projects we open in Jira. After getting the green signal from Product Management, we get an Epic; then, after architectural discussion, Story tickets are created with Sub-tasks for developers to work on. We also have Jira tickets for New Features, which are similar to Stories but comparatively smaller tasks that generally have no Sub-tasks. At this point a scrum master from each team works closely with the developers to keep track of progress.

Then come Bugs. Everyone inside or outside the company can report bugs, and of course they can propose new features as well. The workflow is very simple: a bug becomes confirmed either by the Tier 3 members or in the bug scrub meetings. Then a developer gets assigned to it, the status moves from In Progress to In QA, and after successful verification a Quality Engineer moves it into the Release Pending state. At this point the bits are ready to be merged into the master branch. And YES! Everything we do is publicly visible; we love Open Source!

As Quality Engineers, our job begins whenever a Story, New Feature, or Bug’s status becomes In QA.

Eucalyptus is a complex piece of software, and like most other system applications out there, it is necessary to test not only a bug or a specific feature but the entire product; with continuous development, testing is also a continuous process. Here I will quote one of my colleagues, Kyo Lee: “Q: When do you stop testing? A: You never stop testing.” Achieving this goal would never be possible if we planned to do it all manually (or multiplied the number of employees by 4?). Here come Eutester, Eutester4J, and se34euca: test frameworks built for Eucalyptus, but not limited to Eucalyptus.

Eutester is a functional testing framework for Eucalyptus written in Python. It uses Boto, a Python interface to Amazon Web Services, to ensure AWS compatibility. Most of the Eucalyptus services are covered by various test cases written on top of Eutester. On the other hand, Eutester is definitely not limited to Eucalyptus functional testing; there are scripts to help with the installation of Eucalyptus, Riak CS, etc. It is also used to collect logs, configuration, and other artifacts from the Eucalyptus hosts to debug issues. There are test cases for both administrators and users, and many test cases are built to reproduce and verify bugs. All the test cases can be found in the Eutester GitHub repository.

For Java developers we have Eutester4J, another test framework, built with the AWS Java SDK to test AWS Java SDK compatibility, a.k.a. AWS compatibility. It uses TestNG to build various test suites for Eucalyptus, and Ivy for dependency management to make life easier for all.

To test the user console we have a fantastic project we call se34euca, which uses Selenium and automates the entire process of testing the user console.

Well, we have all these projects, but how do we achieve automation at its highest level? The answer is Jenkins! We use Jenkins to run all the test cases, taking advantage of continuous integration to automate the test frameworks. That answer leads to more questions: “how/where do I install Jenkins? how do I integrate those frameworks?” To answer them, we have MicroQA, the home of all those frameworks. MicroQA uses Vagrant (yet another cool tool, eh?) to speed up installation. Yes, it is even possible to test a Eucalyptus cloud from your laptop.

With all this, what we never forget is manual testing. We do test manually, mainly when a new feature becomes available; and when verifying certain bugs, manual testing is unavoidable. We use the open source test management software Testlink to manage manual test cases.

Last but not least, unlike typical tech organizations, we work very closely with developers. We all make sure we understand that our job is not to count the number of bugs fixed/raised/verified, nor the number of features or test cases, but to deliver a quality product.
