Eucalyptus FourZero (4.0)

Eucalyptus 4.0 is one of the biggest releases in Eucalyptus history with several major architectural changes. Lots of new re-engineered components and some behavioral changes have landed with this new release.

Major changes in Eucalyptus 4.0

 

Service Separation

This is the biggest one, and probably the one many of us have been waiting for for a long time. From 4.0, the CLC/DB and the user-facing services can be installed and registered on different hosts. With that said, it is now also possible to have multiple user-facing services (UFS).

The UFS registration command looks like this,

euca_conf --register-service --service-type user-api --host 10.111.1.110 --service-name API_110

And the command to describe the UFS services is given below,

euca-describe-services -T user-api

Output:

SERVICE user-api API_110 API_110 ENABLED 45 http://10.111.1.110:8773/services/User-API arn:euca:bootstrap:API_110:user-api:API_110/
SERVICE user-api API_112 API_112 ENABLED 45 http://10.111.1.112:8773/services/User-API arn:euca:bootstrap:API_112:user-api:API_112/
SERVICE user-api API_119 API_119 ENABLED 45 http://10.111.1.119:8773/services/User-API arn:euca:bootstrap:API_119:user-api:API_119/
SERVICE user-api API_179 API_179 ENABLED 45 http://10.111.1.179:8773/services/User-API arn:euca:bootstrap:API_179:user-api:API_179/

Object Storage Gateway (OSG)

Another attractive feature in Eucalyptus 4.0. With this new service, it is possible to use different object storage backends. For now, OSG has complete support for RiakCS and WalrusBackend as object storage backends. Other object stores like Ceph should be pluggable with OSG as well, but they are not fully tested yet.
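The backend is selected through a cloud property. The line below is just a sketch from memory, so treat the exact property name as an assumption and check it against your build,

# point the OSG at the WalrusBackend provider (riakcs for RiakCS)
euca-modify-property -p objectstorage.providerclient=walrus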

More about the Object Storage Gateway and RiakCS was discussed in previous posts.

Image Management

This is another great addition to Eucalyptus; image management has never been this much fun. One important thing to note is that from 4.0, Eustore has been replaced with a couple of other interesting commands in the toolset.

Installing an HVM image has never been easier,

euca-install-image -i /root/precise-server-cloudimg-amd64-disk1.img -n "demoimage" -r x86_64 --virtualization-type hvm -b demobucket

Another interesting fact is that it is now possible to get an EBS-backed image from an HVM image with just one single command,

euca-import-volume /root/precise-server-cloudimg-amd64-disk1.img --format raw \
--availability-zone PARTI00 --bucket demobucket --owner-akid $EC2_ACCESS_KEY \
--owner-sak $EC2_SECRET_KEY --prefix demoimportvol --description "demo import volume"

Run the following command to check the conversion task status,

euca-describe-conversion-tasks

When it completes, create a snapshot from the volume ID in the describe result and register the EBS-backed image.
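For example (a minimal sketch; the volume and snapshot IDs below are hypothetical placeholders for the IDs you get from the describe output),

# create a snapshot from the imported volume
euca-create-snapshot vol-abc12345

# register an EBS-backed image from that snapshot
euca-register --name "demoebsimage" --architecture x86_64 \
--root-device-name /dev/sda -b /dev/sda=snap-def67890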

Heads up: an imaging worker instance will appear once the conversion task is started.

There is another super handy command that will create an EBS-backed image from an HVM image and run an instance with the provided details,

euca-import-instance /root/precise-server-cloudimg-amd64-disk1.img --format raw \
--architecture x86_64 --platform Linux --availability-zone PARTI00 --bucket ibucket \
--owner-akid $EC2_ACCESS_KEY --owner-sak $EC2_SECRET_KEY --prefix image-name-prefix \
--description "textual description" --key sshlogin --instance-type m1.small

EDGE Networking Mode

EDGE is a new networking mode which was introduced in 3.4 as a tech-preview feature. The main reason behind this networking mode is to remove the need for the Cluster Controller to be in the data path for all the running VMs. It also removes the need to tag packets with VLANs to achieve Layer 2 isolation between the VMs. With this mode there is a new standalone component called eucanetd running on the Node Controller; in EDGE networking mode, eucanetd maintains the networking on each Node Controller and avoids a single point of failure.
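As a rough sketch (the interface names here are assumptions for your environment), the relevant Node Controller settings in /etc/eucalyptus/eucalyptus.conf look like this, with the rest of the EDGE layout supplied through a network configuration JSON pushed to the cloud,

VNET_MODE="EDGE"
VNET_PRIVINTERFACE="br0"
VNET_PUBINTERFACE="br0"
VNET_BRIDGE="br0"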

Re-engineered Eucalyptus Console

This is one of the biggest changes that happened in 4.0. We said goodbye to the Eucalyptus Admin UI (https://<CLC_IP_address>:8443) and the Eucalyptus User Console, and welcomed the newly designed EucaConsole with administrative features.

EucaConsole 4.0

Tech-Preview of CloudFormation

CloudFormation!!! Yes, the CloudFormation feature has been implemented and released in Eucalyptus 4.0 as a tech-preview, though the implementation is already pretty solid.

In the current implementation of CloudFormation, the service does not come with the other user-facing services; it needs to be registered separately on the same host as the CLC/DB (EUCA-9505).

euca_conf --register-service -T CloudFormation -H 10.111.1.11 -N API_11

Here is a basic CloudFormation template just to try it out right away,

{
  "Parameters" : {
    "KeyName" : {
      "Description" : "The EC2 Key Pair to allow SSH access to the instance",
      "Type" : "String"
    }
  },
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" }, "default" ],
        "KeyName" : { "Ref" : "KeyName"},
        "ImageId" : "emi-3c17bd33"
      }
    },

    "InstanceSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable SSH access via port 22",
        "SecurityGroupIngress" : [ {
          "IpProtocol" : "tcp",
          "FromPort" : "22",
          "ToPort" : "22",
          "CidrIp" : "0.0.0.0/0"
        } ]
      }
    }
  }
}

The following command can be used to validate the template,

euform-validate-template --template-file cloudformationdemo.template

Then create a stack with the template,

euform-create-stack --template-file cloudformationdemo.template --parameter KeyName=demokey MyDemoStack

Check CloudFormation stack status,

euform-describe-stacks MyDemoStack

Output:
STACK MyDemoStack CREATE_COMPLETE Complete! 2014-06-04T14:02:27.38Z

Check CF stack resources,

euform-describe-stack-resources -n MyDemoStack
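And when you are done playing, the stack can be torn down as well (assuming euform-delete-stack is available in your euca2ools build),

euform-delete-stack MyDemoStack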

More FourZero

Apart from those, another big improvement is Administrative Roles. There are now pre-defined roles for the Eucalyptus admin account, e.g. Cloud Account Admin, Cloud Resource Admin and Infrastructure Admin. ELB supports session stickiness, modifying instance attributes is supported, and so on. Many AWS compatibility issues have also been fixed in this fantastic release.

Installing Eucalyptus is now easier than ever. You can start with a CentOS 6.5 minimal server and get your own Amazon-compatible Eucalyptus cloud.

To get started run the following command and have your own private cloud up and running,

bash <(curl -Ls http://eucalyptus.com/install)

Enjoy Eucalyptus 4.0!!!

Eucalyptus Faststart 3.4.1 – cloud-in-a-vm on Fedora 19

Eucalyptus 3.4.1 is releasing soon, I mean, very soon. So, as part of testing Eucalyptus Faststart, we used a Fedora 19 box to try out Eucalyptus.

Cloud-in-a-vm, eh?

The journey wasn’t so bad, but I did have to touch a couple of things that I had never used, things that I had done a while back and forgotten, things I didn’t know, and so on. But this morning our QA lead Victor Iglesias helped with the missing parts, in other words the reasons for my suffering for a while.

Anyway, so, here is what I did to get a cloud running in a vm on a Fedora 19 box.

Installed Fedora 19 on a Core i5 Dell Inspiron laptop. It is better to have some free space in the volume group while installing Fedora, since we will be creating a 100GB logical volume (LV) for the cloud VM later on. Also, if there is unallocated space available on the HDD, we can use that space for the LV.

The first thing I did was disable SELinux in /etc/selinux/config.
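That boils down to setting,

SELINUX=disabled

and running setenforce 0 so it takes effect without waiting for a reboot.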

For this setup, I decided to install the @virtualization package group to keep things simple.

yum install @virtualization

Since we will have to run AWS-like instances inside a VM, we need to enable the nested KVM feature of the kvm_intel module. By default, nested KVM is off on most systems.

Open/create the following file, /etc/modprobe.d/kvm-nested.conf, and add this line,

options kvm_intel nested=1

Reboot the system to enable this nested virtualization feature.

Ensure that we have nested kvm enabled,

cat /sys/module/kvm_intel/parameters/nested

“Y” represents nested kvm availability.

More on nested-kvm, here.

Now we have to create a bridge (e.g. br0) for the VMs.
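Here is a minimal sketch of how that can look with Fedora's network scripts; the physical interface name (em1 here) is an assumption, so use whatever your NIC is called,

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
BRIDGE=br0
ONBOOT=yes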

Once the virtualization setup is complete, it’s almost time to start the Faststart VM.

For this setup we used a 100GB logical volume (LV).

lvcreate -L 100G -n "fslv" "volumegroup"

If there is only unallocated space available on the HDD but no free space in the volume group, create a partition (e.g. /dev/sda3), then create a physical volume and extend the volume group,

pvcreate /dev/sda3
vgextend "volumegroup" /dev/sda3

Uncomment the following lines in the /etc/libvirt/qemu.conf file,

vnc_listen = "0.0.0.0"
vnc_password = "<password>"
user = "root"
group = "root"

Restart libvirtd service,

systemctl restart libvirtd.service

Copy the faststart-3.4.1.iso to the /var/lib/libvirt/images/ directory.

Create a three-line script or run these commands manually,

#!/bin/bash
# remove any previous definition of the "fs" domain
virsh undefine fs
# wipe the start of the LV so the installer sees a clean disk
dd if=/dev/zero of=/dev/fedora/fslv bs=1M count=1
# boot the Faststart ISO (passed as $1) with the LV as the VM's disk
virt-install --name fs --cpu host-passthrough --disk /dev/fedora/fslv --ram 4000 --cdrom $1 --graphics vnc,listen=0.0.0.0 --bridge br0

Make sure to pass the ISO file after --cdrom if you are running the above lines manually; otherwise pass the ISO file as an argument to the script.

Connect to the VM with Fedora Remote Desktop Viewer (or any other VNC client you like) and follow the instructions to install Eucalyptus. For this installation I selected cloud-in-a-box to have all the components in the same box. This might take a while, so make sure you are caffeinated.

When the installation is complete, reboot the VM. In this case, you might have to start the VM again.

virsh --connect qemu:///system
virsh # start fs

It will now configure the Eucalyptus cloud, and within a couple of minutes a Eucalyptus cloud will be ready with one basic image and one load-balancer image.

Connect to the Eucalyptus user console using the VM's IP,

https://faststart_vm_ip:8888
User Credentials:
  * Account:  demo
  * Username: admin
  * Password: password

By default, the Eucalyptus Faststart installation creates admin and demo credentials.

Follow the Eucalyptus documentation to discover more about Eucalyptus.

More on cloud in a vm: https://github.com/eucalyptus/eucalyptus/wiki/Eucalyptus-Virtual-Cloud

Eucalyptus 3.3.0 in a nutshell

Eucalyptus 3.3.0, the most exciting Eucalyptus release so far, is knocking on the door, or perhaps it has already been released by the time you are reading this post.

Eucalyptus 3.3.0 has a couple of the Amazon Web Services (AWS) features most desired by cloud users:

1. Elastic Load Balancing (ELB)

Needless to say, this is an AWS ELB-compatible feature which is being introduced in Eucalyptus 3.3.0.

Creating a basic loadbalancer:

eulb-create-lb -z PARTI00 -l 'lb-port=80, protocol=HTTP, instance-port=80' MyElb
# output
# DNS_NAME	MyElb-576514848852.lb.localhost

eulb-describe-lbs
# output
# LOAD_BALANCER	MyElb	MyElb-576514848852.lb.localhost	2013-06-10T06:57:52.07Z

Register instances with Eucalyptus Elastic Load Balancer,

eulb-register-instances-with-lb MyElb --instances i-25D3415E,i-16463E17
# output
# INSTANCE i-25D3415E
# INSTANCE i-16463E17

eulb-describe-instance-health MyElb
# output
# INSTANCE	i-25D3415E	InService
# INSTANCE	i-16463E17	InService

Few other ELB operations,

# deregister instances from ELB
eulb-deregister-instances-from-lb MyElb --instances i-16463E17

# delete ELB
eulb-delete-lb MyElb

2. CloudWatch

CloudWatch is another AWS-compatible feature which is shipping with Eucalyptus 3.3.0. It enables cloud users to view, collect and analyze metrics of their cloud resources. It also lets cloud users configure alarm actions based on the data from those metrics.

Enable instance monitoring,

# on existing instance
euca-monitor-instances i-25D3415E

# during instance run
euca-run-instances -k batman1key emi-90E83973 --monitor

# disable monitoring
euca-unmonitor-instances i-DB5842DC

Euwatch

# returns all the available metrics
euwatch-list-metrics

# returns list of metrics with particular metric name
euwatch-list-metrics --metric-name CPUUtilization

# returns list of metrics with particular namespace
euwatch-list-metrics --namespace AWS/EC2

# returns list of metrics with particular dimensions
euwatch-list-metrics --dimensions "InstanceId=i-25D3415E"

# returns time-series data for one or more statistics of a given MetricName
euwatch-get-stats CPUUtilization \
> --start-time 2013-06-10T07:09:00.043Z \
> --end-time 2013-06-10T08:46:54.043Z \
> --period 3600 \
> --statistics "Average,Minimum,Maximum" \
> --namespace "AWS/EC2" \
> --dimensions "InstanceId=i-25D3415E"

3. Auto Scaling

Eucalyptus Auto Scaling consists of three fundamental building blocks,

  1. Launch Configurations
  2. Auto Scaling Groups
  3. Auto Scaling Policies

Create a launch configuration,

euscale-create-launch-config MyLC \
> --image-id emi-90E83973 \
> --instance-type m1.small

Create auto scaling group,

euscale-create-auto-scaling-group MyASGroup \
> --launch-configuration MyLC \
> --availability-zones PARTI00 \
> --min-size 1 --max-size 3

# describe auto scaling groups
euscale-describe-auto-scaling-groups

Create scale out policy,

euscale-put-scaling-policy MyScaleoutPolicy \
> --auto-scaling-group MyASGroup \
> --adjustment=30 \
> --type PercentChangeInCapacity

# output
# arn:aws:autoscaling::576514848852:scalingPolicy:c2a8f9dc-1c75-49d5-b54d-8ef87fe29e9a:autoScalingGroupName/MyASGroup:policyName/MyScaleoutPolicy

Creating scale in policy,

euscale-put-scaling-policy MyScaleInPolicy \
> --auto-scaling-group MyASGroup \
> --adjustment=-2  --type ChangeInCapacity

# output
# arn:aws:autoscaling::576514848852:scalingPolicy:a4148c27-81da-4eff-9140-cba3ba9381cb:autoScalingGroupName/MyASGroup:policyName/MyScaleInPolicy

CloudWatch Alarm

Eucalyptus CloudWatch alarms help cloud users take actions on their resources (e.g. instances, EBS volumes, Auto Scaling instances, ELBs) automatically, based on rules the users define over the metrics. Currently, Eucalyptus CloudWatch alarms work with Auto Scaling policies.

Create alarm for scale out capacity and scale in capacity,

# create scale out alarm
euwatch-put-metric-alarm AddCapacity \
> --metric-name CPUUtilization \
> --namespace "AWS/EC2" \
> --statistic Average \
> --period 120 --threshold 80 \
> --comparison-operator GreaterThanOrEqualToThreshold \
> --dimensions "AutoScalingGroupName=MyASGroup" \
> --evaluation-periods 2 \
> --alarm-actions arn:aws:autoscaling::576514848852:scalingPolicy:c2a8f9dc-1c75-49d5-b54d-8ef87fe29e9a:autoScalingGroupName/MyASGroup:policyName/MyScaleoutPolicy

# create scale in alarm
euwatch-put-metric-alarm RemoveCapacity \
> --metric-name CPUUtilization \
> --namespace "AWS/EC2" \
> --statistic Average \
> --period 120 --threshold 40 \
> --comparison-operator LessThanOrEqualToThreshold \
> --dimensions "AutoScalingGroupName=MyASGroup" \
> --evaluation-periods 2 \
> --alarm-actions arn:aws:autoscaling::576514848852:scalingPolicy:a4148c27-81da-4eff-9140-cba3ba9381cb:autoScalingGroupName/MyASGroup:policyName/MyScaleInPolicy

# delete alarms (takes one or more alarm names)
euwatch-delete-alarms AddCapacity RemoveCapacity

Set the alarm state to OK/ALARM for testing,

euwatch-set-alarm-state --state-value OK \
> --state-reason "testing" AddCapacity

euwatch-set-alarm-state --state-value OK \
> --state-reason "testing" RemoveCapacity

euwatch-describe-alarms

# output
# AddCapacity	OK	arn:aws:autoscaling::576514848852:scalingPolicy:c2a8f9dc-1c75-49d5-b54d-8ef87fe29e9a:autoScalingGroupName/MyASGroup:policyName/MyScaleoutPolicy	AWS/EC2	CPUUtilization	120	Average	2	GreaterThanOrEqualToThreshold	80.0
# RemoveCapacity	OK	arn:aws:autoscaling::576514848852:scalingPolicy:a4148c27-81da-4eff-9140-cba3ba9381cb:autoScalingGroupName/MyASGroup:policyName/MyScaleInPolicy	AWS/EC2	CPUUtilization	120	Average	2	LessThanOrEqualToThreshold	40.0

4. Resource Tagging

Resource tagging is another AWS feature that was still missing as of 3.2.2. This is a very important feature and is also used by many 3rd-party tools and applications.

euca-create-tags vol-65803EB8 --tag "testtag"
# TAG volume vol-65803EB8 testtag

euca-describe-volumes
# VOLUME vol-65803EB8 2 PARTI00 available 2013-06-10T12:29:41.082Z standard
# TAG volume vol-65803EB8 testtag

5. More instance types

euca-describe-instance-types
INSTANCETYPE	Name         CPUs  Memory (MB)  Disk (GB)
INSTANCETYPE	m1.small        1          256          5
INSTANCETYPE	t1.micro        1          256          5
INSTANCETYPE	m1.medium       1          512         10
INSTANCETYPE	c1.medium       2          512         10
INSTANCETYPE	m1.large        2          512         10
INSTANCETYPE	m1.xlarge       2         1024         10
INSTANCETYPE	c1.xlarge       2         2048         10
INSTANCETYPE	m2.xlarge       2         2048         10
INSTANCETYPE	m3.xlarge       4         2048         15
INSTANCETYPE	m2.2xlarge      2         4096         30
INSTANCETYPE	m3.2xlarge      4         4096         30
INSTANCETYPE	cc1.4xlarge     8         3072         60
INSTANCETYPE	m2.4xlarge      8         4096         60
INSTANCETYPE	hi1.4xlarge     8         6144        120
INSTANCETYPE	cc2.8xlarge    16         6144        120
INSTANCETYPE	cg1.4xlarge    16        12288        200
INSTANCETYPE	cr1.8xlarge    16        16384        240
INSTANCETYPE	hs1.8xlarge    48       119808      24000

Well, if you have used Eucalyptus before, I think the improvement is very much visible 🙂

6. Maintenance Mode

Eucalyptus 3.3.0 also comes with a feature many cloud administrators have been waiting for for a long time: Maintenance Mode.

In other words, migrating a single instance to another Node Controller or evacuating a certain Node Controller are now supported by Eucalyptus.

# evacuate a Node Controller
euca-migrate-instances --source 10.111.1.119

# migrate specific instance to another destination
euca-migrate-instances -i i-38A74228 --dest 10.111.1.116

For more information check the Eucalyptus 3.3.0 roadmap. An architectural overview for the 3.3.x release can be found on GitHub. Here is a list of the new stories that are landing in the 3.3.0 release.

More AWS Compatibility

Eucalyptus 3.3.x is the most AWS-compatible release ever; it has more API compatibility than Eucalyptus has ever had. Here is some of our ongoing work on the different AWS SDKs and open source libraries:

  1. AWS SDK for Java
  2. AWS SDK for Ruby
  3. AWS SDK for PHP
  4. AWS toolkit for Eclipse
  5. jclouds on Eucalyptus – This is comparatively the newest among these; we are tracking it as a story on Jira, EUCA-5671.

Eucalyptus 3.3.0 has a few very important improvements for Boot-from-EBS instances,

1. Root block device is /dev/sda and not /dev/sda1
2. Allow multiple EBS block device mappings (see the sketch after this list)
3. No more default ephemeral disk at /dev/sdb
4. Metadata service changes
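For example, improvement 2 above means multiple -b mappings can now be passed to euca-run-instances. A sketch with made-up snapshot IDs and key name,

euca-run-instances emi-90E83973 -k mykey \
-b /dev/sdf=snap-11111111:10:true \
-b /dev/sdg=snap-22222222:20:false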

Euca2ools 3.0 is huge in Eucalyptus 3.3.x. It has been completely ported to requestbuilder. Euca2ools 3 is slim and beautiful and it works!

One interesting fact about Euca2ools from the developers,

% git diff --shortstat 2.1.3.. -- bin euca2ools generate-manpages.sh install-manpages.sh setup.py
432 files changed, 14973 insertions(+), 15097 deletions(-)

euca2ools 3 adds three entirely new services and tons of new
functionality to the previous version, but it still manages to weigh
in at less code than it had before.

Read more about euca2ools 3, “What’s new in Euca2ools 3” Part 1 and Part 2.

With all these new features, Eucalyptus 3.3.0 has many bug fixes as well. There are many other documented and undocumented fixes coming in 3.3.0. Some administrator tools are also on the way and will see the light very soon.

If you are interested in trying it from source code, you are more than welcome to check out Eucalyptus from the public GitHub repository.

Some places to give you feedback:

Bug report: eucalyptus.atlassian.net
Questions: engage.eucalyptus.com

Eucalyptus manual installation

Well, Eucalyptus does not come with Ubuntu anymore as of version 11.10. Why? There is really no reason; all we can say is that this is the benefit of being open, you are free to make your own choice 🙂

Anyway, that doesn’t mean Eucalyptus cannot be used with Ubuntu anymore; that would be absurd, wouldn’t it 😛

Installation details: Eucalyptus ver. 2.0.2, Ubuntu 11.10, two physical machines (one with two NICs)

First we are going to set up the Cluster Controller (CC). The Storage Controller (SC), Cloud Controller and Walrus are also going to live in the same box.

sudo apt-get install eucalyptus-cloud eucalyptus-cc eucalyptus-walrus eucalyptus-sc

Now we need to install and configure NTP (Network Time Protocol) for time sync between the two machines.

sudo apt-get install ntp

We need to modify ntp.conf for this setup, but this may not be a good idea for a large-scale installation.

Add the following lines to ntp.conf,

server 127.127.1.0
fudge 127.127.1.0 stratum 10

Then restart the NTP service.

Finally it’s time to register the cluster, storage controller and Walrus.

sudo euca_conf --register-cluster cluster1 192.168.1.2
sudo euca_conf --register-walrus 192.168.1.2
sudo euca_conf --register-sc cluster1 192.168.1.2

For the Node Controller we need a few more packages. To be on the safe side, I installed all the recommended and suggested packages.

sudo apt-get install bridge-utils libcrypt-openssl-random-perl libcrypt-openssl-rsa-perl libcrypt-openssl-x509-perl open-iscsi powernap qemu-kvm vlan aoetools eucalyptus-nc

The node has to be configured with a bridge as the primary interface

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
address 192.168.1.3

bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

Install and configure NTP by adding the following line to ntp.conf,

server 192.168.1.2

Modify the qemu.conf file to make sure libvirt is configured to run as the user “eucalyptus”,

sudo vim /etc/libvirt/qemu.conf

Search for and set: user = "eucalyptus"

Modify the /etc/libvirt/libvirtd.conf file,

unix_sock_group = "libvirtd"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"

Once the modifications are done, we have to stop and start libvirt for the changes to take effect, and we also have to make sure the sockets belong to the correct group,

sudo /etc/init.d/libvirt-bin stop
sudo /etc/init.d/libvirt-bin start

sudo chown root:libvirtd /var/run/libvirt/libvirt-sock
sudo chown root:libvirtd /var/run/libvirt/libvirt-sock-ro

Edit eucalyptus.conf and set the private and public interfaces to br0.
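The relevant lines should end up looking something like this (a sketch using the standard variable names in /etc/eucalyptus/eucalyptus.conf),

VNET_PUBINTERFACE="br0"
VNET_PRIVINTERFACE="br0"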

At this point the NC setup is done!

Now we have to register this node from the CC like we did before,

sudo euca_conf --register-nodes 192.168.1.3

And now you have your own private cloud!

tada!!! 😀

The origin of Cloud Computing

A few days ago, while I was having a conversation with one of my colleagues about Cloud Computing, he suddenly asked me how the cloud evolved. Well, I then recalled some of the discrete bits of information I had looked at before. Whenever a non-technical person asks me about the Cloud, I feel a little messy with the info I know. Anyway, the basic questions were pretty much the same.

What is cloud computing? How has cloud computing evolved? Who invented cloud computing?

What is cloud computing? The answer is almost everywhere on the internet, and Wikipedia also has a fantastic definition of cloud computing. Cloud Computing is all about service: in the cloud, everything you are getting or providing has to be delivered as a service. If not, the highest probability is that you are not dealing with Cloud at all. So basically the three parts of the cloud are IaaS, PaaS and SaaS. The last two terms are pretty much self-explanatory; perhaps two simple examples are enough to define them. When we are talking about PaaS, we are talking about something like Google App Engine, and when we are talking about SaaS, we are talking about services like Salesforce.com, Google Apps, etc.

So here comes Infrastructure as a Service (IaaS). In a few words, here the vendors or cloud service providers are outsourcing the hardware over the internet. This internet-based computing model is called Cloud Computing.

In the 1960s, John McCarthy theorized about an eventual computing-outsourcing model. He wrote that ‘computation may someday be organized as a public utility.’

There is no single technology that defines cloud computing. It’s a collection of modern technologies: virtualization, Web Services and Service-Oriented Architecture, Web 2.0, and Mashups.

NetCentric tried to trademark ‘Cloud Computing’ in May 1997, serial number 75291765, but for some reason abandoned it in April 1999.

On March 23, 2007, Dell also applied for the trademark.

Cloud Computing vs. Cloud Service Though these two sound similar, the two terms have some fundamental differences. In simple words, when IT specialists deploy an IT foundation with servers, storage, networking, application software, IP networks, etc. and make the system ready to provide services to end users, that refers to Cloud Computing. Cloud Service is mostly related to the end users; here the user is getting the services in real time over the internet. Cloud Service mostly deals with pricing, user interfaces, system interfaces, APIs, etc.

In 1999, Salesforce.com showed that enterprise application solutions could be provided through a website. Amazon Web Services came in 2002 and Google Docs in 2006.

Well, it is said that Microsoft once tried to create the hype back in 2001. They created something called ‘Hailstorm’ and used the phrase ‘cloud’ of computers. To my surprise, I couldn’t find the product name on Wikipedia. To learn more about this, the article is worth a read.

On August 9, 2006, at a Search Engine Strategies Conference, Eric Schmidt picked up the term ‘Cloud Computing’. He used it to explain PaaS/SaaS. But it is also said that Eric took ownership of the term because Amazon was launching EC2 later that same month, which is also described as classic Google FUD.

Security issues There is also the question of security. Reliability, Availability, and Security (RAS) are the three greatest concerns about migrating to the cloud. Reliability is often covered by a service level agreement (SLA).

Eucalyptus installation

For this installation, three PCs are needed: two (Server1 and Server2) for the cloud itself and one as the client, which will also be used for creating KVM images. Server2 and the client machine should have VT enabled, as we will be running all our VMs on Server2 and the client PC will be used to create the necessary KVM images.

Required configuration

Required Setup

Server1 setup

  • Boot the Ubuntu 11.04 64-bit Server CD/pen drive, select Ubuntu Enterprise Cloud from the graphical menu and follow the installation menu.
  • If you are using DHCP for the public network, then just select eth0 and let the network be set up automatically. Otherwise set your ethernet as mentioned above.
  • When the installation asks for the Cloud Controller address, just leave it blank.
  • For Server1 it’ll install the ‘Cloud Controller’, ‘Walrus Storage Service’, ‘Cluster Controller’ and ‘Storage Controller’.
  • Select eth1 for communication with nodes
  • Eucalyptus cluster name – Cluster1 (or anything)
  • Select an IP range to be used for the instances, e.g. 192.168.1.10-192.168.1.99

Post installation setup

Set up static IP for eth1, edit /etc/network/interfaces and add the following to it,

auto eth1
iface eth1 inet static
address 192.168.20.1
netmask 255.255.255.0
network 192.168.20.0
broadcast 192.168.20.255

Run the following command to restart the networking,

localadmin@server1:~$ sudo /etc/init.d/networking restart

Update and upgrade Eucalyptus to get the latest version,

localadmin@server1:~$ sudo apt-get update
localadmin@server1:~$ sudo apt-get upgrade eucalyptus

Install the NTP package. Server1 is going to act as an NTP server for the nodes.

localadmin@server1:~$ sudo apt-get install ntp

Open and edit /etc/ntp.conf to make sure that the server serves time even when its connectivity to the internet is down. Add the following lines to the file so that the NTP server uses its own clock source.

server 127.127.1.0
fudge 127.127.1.0 stratum 10

Restart the NTP server to make the changes active

localadmin@server1:~$ sudo /etc/init.d/ntp restart

Restart the Cluster Controller

localadmin@server1:~$ sudo restart eucalyptus-cc CLEAN=1

[NTP stands for Network Time Protocol, and it is an Internet protocol used to synchronize the clocks of computers to some time reference]

Server2 setup

  • Boot the Ubuntu 11.04 64-bit Server CD, select ‘Install Ubuntu Enterprise Cloud’ and continue the basic installation process.
  • For the network setup select eth0 and configure it manually. Set the private IP to 192.168.20.2 and the gateway to 192.168.20.1.
  • During the UEC setup it’ll ask for certain configuration options. If it doesn’t detect the Cluster Controller by itself, put in the Cluster Controller address 192.168.20.1
  • In the cloud installation mode select ‘Node Controller’

Post installation setup

Set up the network interfaces by adding a few lines to /etc/network/interfaces so that it looks like the following

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.20.2
    netmask 255.255.255.0
    network 192.168.20.0
    broadcast 192.168.20.255
    # gateway 192.168.20.1
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 192.168.1.1
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off

auto eth1
iface eth1 inet static
address 192.168.1.103
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1

Run the following command to restart the networking,

localadmin@server2:~$ sudo /etc/init.d/networking restart

Update and upgrade Eucalyptus to get the latest version,

localadmin@server2:~$ sudo apt-get update
localadmin@server2:~$ sudo apt-get upgrade eucalyptus

Install the NTP package.

localadmin@server2:~$ sudo apt-get install ntp

Open the file /etc/ntp.conf and add the following line

server 192.168.20.1

Restart the NTP server to make the changes active

localadmin@server2:~$ sudo /etc/init.d/ntp restart

Open the file /etc/eucalyptus/eucalyptus.conf and make the following changes,

VNET_PUBINTERFACE="br0"
VNET_PRIVINTERFACE="br0"
VNET_BRIDGE="br0"
VNET_DHCPDAEMON="/usr/sbin/dhcpd3"
VNET_DHCPUSER="dhcpd"
VNET_MODE="MANAGED-NOVLAN"

Now run the following command to restart the Node Controller to make all the changes active,

localadmin@server2:~$ sudo restart eucalyptus-nc

Set up the CC’s SSH public key on the NC

On the Node Controller, temporarily set a password for the “eucalyptus” user,

localadmin@server2:~$ sudo passwd eucalyptus

On the Cluster Controller:

localadmin@server1:~$ sudo -u eucalyptus ssh-copy-id -i ~eucalyptus/.ssh/id_rsa.pub eucalyptus@192.168.20.2

Remove the password of the “eucalyptus” account from the Node,

localadmin@server2:~$ sudo passwd -d eucalyptus

Client setup

  • Boot the Ubuntu 11.04 32/64-bit Desktop CD and install it.

Install KVM on the client machine.

shaon@client:~$ sudo apt-get install qemu-kvm

Post installation setup

To administer the cloud we need to install euca2ools

shaon@client:~$ sudo apt-get install euca2ools

Monitoring

  • Log in to the web interface at https://192.168.10.121:8443; the default username is ‘admin’ and the password is ‘admin’.
  • Download the user credentials from the credentials tab and save them to the ~/.euca directory (if .euca is not there, just create it and save the credentials there)
  • Extract the credentials and source the eucarc script so that euca2ools can use these as environment variables.
$ cd .euca
$ unzip xxxxxxxxx.zip
$ source eucarc

To verify that euca2ools can communicate with the UEC properly and all the services are running correctly, run the following command,

$ euca-describe-availability-zones verbose

It’ll give output something like this,

AVAILABILITYZONE	cluster1	192.168.1.102
AVAILABILITYZONE	|- vm types	free / max   cpu   ram  disk
AVAILABILITYZONE	|- m1.small	0001 / 0002   1    192     2
AVAILABILITYZONE	|- c1.medium	0001 / 0002   1    256     5
AVAILABILITYZONE	|- m1.large	0000 / 0001   2    512    10
AVAILABILITYZONE	|- m1.xlarge	0000 / 0001   2   1024    20
AVAILABILITYZONE	|- c1.xlarge	0000 / 0000   4   2048    20

If the free/max counts show up as 0000, then use the following command on Server1 to make sure that it finds the Node Controller, and approve 192.168.20.2 when prompted

localadmin@server1:~$ sudo euca_conf --discover-nodes

tadaa!!!

Eucalyptus and its components

Eucalyptus

Eucalyptus is an open source, Linux-based software architecture which provides an EC2-compatible cloud computing platform and an S3-compatible cloud storage platform. It implements scalable, efficiency-enhancing private and hybrid clouds within an organization’s IT infrastructure, giving an Infrastructure as a Service (IaaS) solution on commodity hardware.

Eucalyptus was developed to support high performance computing (HPC). Eucalyptus can be deployed without modification on all major Linux OS distributions, including Ubuntu, RHEL/CentOS, openSUSE, and Debian.

Eucalyptus Features

Eucalyptus has a variety of features for implementing, managing and maintaining virtual machines, networks and storage:

  • SSH Key Management
  • Image Management
  • Linux-based VM Management
  • IP Address Management
  • Security Group Management
  • Volume and Snapshot Management

Eucalyptus Fundamental Architecture

Components of Eucalyptus:

1. Cluster Controller (CC) The Cluster Controller manages one or more Node Controllers and is responsible for deploying and managing instances on them. It communicates with the Node Controllers and the Cloud Controller simultaneously. The CC also manages the networking for the running instances under the various networking modes available in Eucalyptus.

2. Cloud Controller (CLC) The Cloud Controller is the front end for the entire ecosystem. The CLC provides an Amazon EC2/S3-compliant web services interface to the client tools on one side and interacts with the rest of the components of the Eucalyptus infrastructure on the other side.

3. Node Controller (NC) This is the basic component for nodes. The Node Controller maintains the life cycle of the instances running on each node, and interacts with the OS, the hypervisor and the Cluster Controller simultaneously.

4. Walrus Storage Controller (WS3) The Walrus Storage Controller is a simple file storage system. WS3 stores the machine images and snapshots. It also stores and serves files using S3 APIs.

5. Storage Controller (SC) The Storage Controller allows the creation of snapshots of volumes. It provides persistent block storage over AoE or iSCSI to the instances.

Eucalyptus Architecture