Minio as Eucalyptus ObjectStorage backend

Minio, the new cloud storage server written in Go that lets you store any data as objects, also has another great feature: it is AWS S3 compatible, and that makes Minio really useful with Eucalyptus.

Eucalyptus supports multiple ObjectStorage backends: Riak CS (Cloud Storage), Ceph RGW, and the S3-compatible Walrus that ships with Eucalyptus. The Eucalyptus ObjectStorage service acts as a gateway to these backends, while Eucalyptus itself still handles all the AWS-compatible Identity and Access Management stuff. Since Minio is compatible with AWS S3, we can use it with Eucalyptus as well. However, even though it is possible to use any object storage backend that is compatible with AWS S3, such backends are not supported unless specified otherwise on the Eucalyptus website/documentation.

The first step would be to start a Minio server.

Deploying a basic Minio server is super simple. For this demo I have used a Eucalyptus instance (virtual machine) running Ubuntu 16.04.1 LTS (Xenial Xerus).

$ wget https://dl.minio.io/server/minio/release/linux-amd64/minio
$ chmod +x minio
$ ./minio server ~/MinioBackend

We should see super helpful output like the following:

Endpoint:  http://172.31.21.31:9000  http://127.0.0.1:9000
AccessKey: GFQX5XMP1DSTIQ8JKRXN
SecretKey: aFLQegoeIgzF/hK+Xymba7JwSANfn98ANNcKXz+S
Region:    us-east-1
SqsARNs:

Browser Access:
   http://172.31.21.31:9000  http://127.0.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://172.31.21.31:9000 GFQX5XMP1DSTIQ8JKRXN aFLQegoeIgzF/hK+Xymba7JwSANfn98ANNcKXz+S

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide

Minio also comes with a helpful web UI, which is accessible once Minio is running.

The next step is to add a new provider client in Eucalyptus. It is definitely easier than it sounds 🙂

If we try to reset the objectstorage.providerclient value, the error message shows the supported S3 provider clients; these values are also listed in cloud-output.log on the hosts where the user-facing services are running:

# euctl -r objectstorage.providerclient
euctl: error (ModifyPropertyValueType): Cannot modify \
objectstorage.providerclient.providerclient new value \
is not a valid value.  Legal values are: walrus,ceph-rgw,riakcs

So, now we have to add another provider client for Minio. Technically, we could probably reuse riakcs if Eucalyptus was deployed from packages, but I am not gonna do that in this case, since I already have a cloud built from source.

This post assumes that you know how to build Eucalyptus from source and doesn't go into much detail; its purpose is to show how to add a third-party object storage like Minio to Eucalyptus. But again, feel free to use a package installation with riakcs as the provider client, and use Minio's endpoint and user credentials as described below.

To add minio as a provider client, first we need to create a file called MinioProviderClient.java.
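Since Minio speaks the S3 API, the class can simply extend the generic S3ProviderClient and register a new provider name. Here is a minimal sketch, modeled on the existing riakcs provider client; the package and annotation names are from my reading of the source tree, so verify them against your checkout:

package com.eucalyptus.objectstorage.providers;

/*
 * Minimal Minio provider client: Minio is S3-compatible, so the generic
 * S3 provider logic works as-is; we only register the "minio" provider
 * name that the objectstorage.providerclient property will accept.
 */
@ObjectStorageProviders({"minio"})
public class MinioProviderClient extends S3ProviderClient {
}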

Build and install the entire Eucalyptus source, or just build the jars.

Stop the eucalyptus-cloud service and copy eucalyptus-object-storage-4.4.0.jar into the /usr/share/eucalyptus directory on the hosts where the user-facing services are running, then restart the eucalyptus-cloud service.
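For example (the jar path will vary with your build; run this on each host with user-facing services):

systemctl stop eucalyptus-cloud.service
cp eucalyptus-object-storage-4.4.0.jar /usr/share/eucalyptus/
systemctl start eucalyptus-cloud.service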

If everything goes well, once the services are enabled we can check the supported object storage providers again.

# euctl -r objectstorage.providerclient
euctl: error (ModifyPropertyValueType): Cannot modify \
objectstorage.providerclient.providerclient new value \
is not a valid value.  Legal values are: walrus,ceph-rgw,minio,riakcs

Looks like minio is now one of the supported provider clients for Eucalyptus object storage. That was easy! 🙂

At this point we should be able to set minio as a provider client for Eucalyptus object storage.

# euctl objectstorage.providerclient=minio
objectstorage.providerclient = minio

Now configure the Eucalyptus ObjectStorage service to use our Minio deployment's endpoint (public IP address/hostname) and credentials:

# euctl objectstorage.s3provider.s3endpoint=10.116.159.114:9000
# euctl objectstorage.s3provider.s3accesskey=GFQX5XMP1DSTIQ8JKRXN
# euctl objectstorage.s3provider.s3secretkey=aFLQegoeIgzF/hK+Xymba7JwSANfn98ANNcKXz+S

The Eucalyptus ObjectStorage gateway performs a connectivity check against the configured S3 endpoint before it enables the objectstorage service of Eucalyptus. Basically, it expects a particular HTTP response code for a HEAD operation on the S3 endpoint; this code is configurable per backend, since different backends can return different response codes. Apparently Minio returns 404 on its endpoint, so to make it work we need to set the value to 404 (feels weird, though).
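We can verify this with a quick HEAD request against the Minio endpoint (curl -I sends a HEAD; output trimmed):

$ curl -I http://10.116.159.114:9000
HTTP/1.1 404 Not Found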

# euctl objectstorage.s3provider.s3endpointheadresponse=404

Within a few seconds we should see that the object storage service is enabled. We can run the following to check its status:

# euserv-describe-services --filter service-type=objectstorage

Now we can use Minio as an object storage backend. However, while working with it, I found an issue where any PUT object request made through Eucalyptus failed. After debugging with Swathi Gangisetty, it looks like the AWS Java SDK that Eucalyptus uses doesn't like the fact that Minio returns an Etag header instead of ETag, as specified in the AWS documentation. To test, we took Eucalyptus out of the picture and tried to upload an object directly with AWS Java SDK 1.7.1, which failed in the same way. However, the same upload request works with AWS Java SDK 1.10.x.
So, to use Minio successfully as a Eucalyptus ObjectStorage backend, we either need a fix in Minio so that it returns the correct ETag header, or a future Eucalyptus release that uses a newer AWS Java SDK.

Here are the requests for the fix in Minio: https://github.com/minio/minio/issues/3284
and for the AWS Java SDK update in Eucalyptus: https://eucalyptus.atlassian.net/browse/EUCA-12955

Update:
Looks like this was a bug in AWS Java SDK 1.7.1 and fixed in a later release,
https://github.com/aws/aws-sdk-java/pull/590

Eucalyptus VPC with MidoNet 5.2

Eucalyptus has supported an AWS-compatible VPC (Virtual Private Cloud) since v4.2, via a new networking mode called VPCMIDO. Eucalyptus still supports EC2 classic networking in EDGE networking mode. Eucalyptus VPC exposes the same AWS VPC APIs to support existing applications that were built for AWS. Eucalyptus uses MidoNet as the backend for VPC and supports both open source MidoNet and Midokura Enterprise MidoNet (MEM). The current Eucalyptus release, v4.3, supports MidoNet v1.9 and has gone through huge improvements in performance and stability since Eucalyptus v4.2.2.

Eucalyptus v4.4 is under heavy development and supports the current stable release of MidoNet, v5.2!

A basic deployment of MidoNet (v5.2) for Eucalyptus VPC consists of the following components:

  1. MidoNet Cluster – installed on Cloud Controller (CLC)
  2. Gateway Node (MidoNet Gateway)
  3. Network State Database (NSDB) – Zookeeper and Cassandra
  4. MidoNet Agents (Midolman) – Cloud Controller (CLC) and Node Controllers (NC)

Steps to install Eucalyptus 4.4 VPC

Even though Eucalyptus 4.4 is still under development, nightly packages are already available here.

Installation of the MidoNet components is pretty straightforward and is well explained in the MidoNet documentation.

  • Repository configuration for open source MidoNet
  • Network State Database installation
  • Install and configure MidoNet Cluster on CLC
    # install packages
    yum install midonet-cluster python-midonetclient
    
    # file: /etc/midonet/midonet.conf
    [zookeeper]
    zookeeper_hosts = 10.111.5.209:2181
  • Run the following command on the MidoNet Cluster node to configure access to NSDB
    $ cat << EOF | mn-conf set -t default
    zookeeper {
        zookeeper_hosts = "10.111.5.209:2181"
    }
    
    cassandra {
        servers = "10.111.5.209"
    }
    EOF
  • Start midonet-cluster.service
  • Install and configure Midolman on CLC and NCs
    yum install java-1.8.0-openjdk-headless midolman
    
    # file: /etc/midolman/midolman.conf
    [zookeeper]
    zookeeper_hosts = 10.111.5.209:2181
  • Set Midolman resource template
    mn-conf template-set -h local -t default
  • Start midolman.service on all the hosts.
  • Install and configure Eucalyptus with VPCMIDO as the networking mode (a sketch of the network configuration follows this list). Eucalyptus 4.4 installation is identical to v4.3.
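For reference, here is a hedged sketch of what the VPCMIDO network configuration JSON might look like for this topology. The key names follow the 4.4-era documentation as best I recall and should be verified against the official docs; the addresses reuse values from this walkthrough, and the external device/IP are assumptions:

{
    "InstanceDnsServers": ["10.111.5.209"],
    "Mode": "VPCMIDO",
    "PublicIps": ["10.116.131.10-10.116.131.200"],
    "Mido": {
        "BgpAsn": "65996",
        "Gateways": [
            {
                "Ip": "10.111.5.209",
                "ExternalDevice": "em1",
                "ExternalCidr": "10.116.131.0/24",
                "ExternalIp": "10.116.131.2",
                "BgpPeerIp": "10.116.133.173",
                "BgpPeerAsn": "65000",
                "BgpAdRoutes": ["10.116.131.0/24"]
            }
        ]
    }
}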

Create MidoNet Resource for VPC

  • Launch MidoNet CLI on MidoNet Cluster
    midonet-cli -A --midonet-url=http://localhost:8080/midonet-api
  • Create a tunnel-zone with type ‘gre’ (Generic Routing Encapsulation)
    midonet> create tunnel-zone name mido-tz type gre
    tzone0
  • Add hosts, e.g. the CLC and NCs, to the tunnel zone. If the midolman service is running on the hosts with the correct configuration, we should see a list of hosts with the following command
    midonet> host list
    host host0 name h-03.qa1.eucalyptus-systems.com alive true addresses 169.254.123.1,fe80:0:0:0:0:11ff:fe00:1101,fe80:0:0:0:0:11ff:fe00:1102,10.111.5.3,fe80:0:0:0:eeb1:d7ff:fe7f:53bc,127.0.0.1,0:0:0:0:0:0:0:1,10.107.105.3,fe80:0:0:0:eeb1:d7ff:fe7f:53bc,fe80:0:0:0:eeb1:d7ff:fe7f:53bc flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
    host host1 name g-19-11.qa1.eucalyptus-systems.com alive true addresses fe80:0:0:0:ea9a:8fff:fe74:12ca,fe80:0:0:0:0:11ff:fe00:1102,10.111.1.135,fe80:0:0:0:ea9a:8fff:fe74:12ca,127.0.0.1,0:0:0:0:0:0:0:1,fe80:0:0:0:ea9a:8fff:fe74:12cb,10.113.1.135,fe80:0:0:0:ea9a:8fff:fe74:12ca,10.107.101.135,fe80:0:0:0:0:11ff:fe00:1101,169.254.123.1 flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
    host host2 name a-27-r.qa1.eucalyptus-systems.com alive true addresses 127.0.0.1,0:0:0:0:0:0:0:1,fe80:0:0:0:0:11ff:fe00:1102,fe80:0:0:0:ea39:35ff:fec5:7098,10.107.105.209,fe80:0:0:0:ea39:35ff:fec5:7098,fe80:0:0:0:0:11ff:fe00:1101,169.254.123.1,10.111.5.209,fe80:0:0:0:ea39:35ff:fec5:7098 flooding-proxy-weight 1 container-weight 1 container-limit no-limit enforce-container-limit false
    
    # Add the hosts to tunnel zone
    midonet> tunnel-zone list
    tzone tzone0 name mido-tz type gre
    midonet> tunnel-zone tzone0 add member host host0 address 10.111.5.3
    zone tzone0 host host0 address 10.111.5.3
    midonet> tunnel-zone tzone0 add member host host1 address 10.111.1.135
    zone tzone0 host host1 address 10.111.1.135
    midonet> tunnel-zone tzone0 add member host host2 address 10.111.5.209
    zone tzone0 host host2 address 10.111.5.209
  • Set up local ASN for router
    # list router
    midonet> router list
    router router0 name eucart state up asn 0
    midonet> router router0 set asn 65996
    
  • Set BGP Peer (may change in future EUCA-12890)
    midonet> router router0 add bgp-peer asn 65000 address 10.116.133.173
    router0:peer0
  • Set BGP Network
    midonet> router router0 add bgp-network net 10.116.131.0/24
    router0:net0


Install an image using the following command and start running instances with VPC!

python <(curl -sL https://git.io/vXZzY)
or
python <(curl -sL https://raw.githubusercontent.com/eucalyptus/eucalyptus-cookbook/master/faststart/install-emis/install-emis.py)

eucalyptus-selinux: security as a first-class citizen!

The Eucalyptus 4.3 development sprint is almost over. SELinux support for Eucalyptus is one of the most exciting features [EUCA-1620] of this release.

Like me, if you haven’t looked into SELinux in a while or it is a new thing to you, here are few tips and tricks that may come handy if things don’t work as expected with SELinux while you are playing with Eucalyptus nightly builds or source code during dev cycles or any other new software while SELinux is set to enforcing on the system.

[It is recommended that you try Eucalyptus 4.3 with SELinux on RHEL 7.x or CentOS 7.x.]

Recently, while trying Eucalyptus with SELinux enabled, I had issues starting eucanetd.service, which gave me a chance to look further into Eucalyptus SELinux. It appears that the very latest eucanetd needs to open a UDP port, and since SELinux doesn't yet know about it, it produces an AVC (Access Vector Cache) denial.

In situations like this, or when a newly installed application fails to run on an SELinux-enabled box, the first thing to check is /var/log/audit/audit.log.

In the log I found the following error:

type=AVC msg=audit(1462905888.565:1432): avc: denied { name_bind } for pid=10725 comm="eucanetd" src=63822 
scontext=system_u:system_r:eucalyptus_eucanetd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=udp_socket

type=SYSCALL msg=audit(1462905888.565:1432): arch=c000003e syscall=49 success=no exit=-13 a0=3 a1=7ffef025a010 a2=10 
a3=7ffef0259d90 items=0 ppid=1 pid=10725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="eucanetd" exe="/usr/sbin/eucanetd" subj=system_u:system_r:eucalyptus_eucanetd_t:s0 key=(null)

To debug further and start eucanetd.service, I installed the following tool:
yum install policycoreutils-python

This package comes with a fantastic command called ‘audit2allow‘, which is useful both for debugging and for fixing SELinux policy related issues.

The following command shows the list of SELinux failures with reasons.

audit2allow --why --all

Example:

type=AVC msg=audit(1462905222.124:825): avc:  denied  { name_bind } for  pid=2304 comm="eucanetd" src=63822 scontext=system_u:system_r:eucalyptus_eucanetd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=udp_socket
	Was caused by:
		Missing type enforcement (TE) allow rule.

		You can use audit2allow to generate a loadable module to allow this access.

type=AVC msg=audit(1462905888.565:1432): avc:  denied  { name_bind } for  pid=10725 comm="eucanetd" src=63822 scontext=system_u:system_r:eucalyptus_eucanetd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=udp_socket
	Was caused by:
		Missing type enforcement (TE) allow rule.

		You can use audit2allow to generate a loadable module to allow this access.

Update:
Though the above information is okay, if you are submitting an issue for a project or asking SELinux-related questions, ausearch is probably a better tool, as it combines the SYSCALL and AVC records. Thanks to Garrett Holmstrom for the suggestion.

Example:

ausearch -c eucanetd --start recent

Output:

----
time->Tue May 10 17:05:36 2016
type=SYSCALL msg=audit(1462925136.650:5237): arch=c000003e syscall=49 success=no exit=-13 a0=3 a1=7ffe13b6d920 a2=10 
a3=7ffe13b6d6a0 items=0 ppid=1 pid=6069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="eucanetd" exe="/usr/sbin/eucanetd" subj=system_u:system_r:eucalyptus_eucanetd_t:s0 key=(null)

type=AVC msg=audit(1462925136.650:5237): avc:  denied  { name_bind } for  pid=6069 comm="eucanetd" src=63822 
scontext=system_u:system_r:eucalyptus_eucanetd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=udp_socket

We can see that the eucalyptus_eucanetd type (eucalyptus_eucanetd_t) is missing rules for this access. In my case I had quite a few denials, but the above example gives an idea of how it might look.

Newbie tips:
Run the following command to check the security context of a file on an SELinux-enabled system:

# ls -ltrhZ /usr/sbin/eucanetd

-rwxr-xr-x. root root system_u:object_r:eucalyptus_eucanetd_exec_t:s0 /usr/sbin/eucanetd


Now that we’ve identified the problem, solving it is easy. We can use audit2allow to generate modules to install necessary policies. Though, this can be a tedious job sometimes.

To generate missing policies, we can run the following command,

audit2allow -M my_eucanetd_policy < /var/log/audit/audit.log

which will generate output like the following:

******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i my_eucanetd_policy.pp

Now load the module as it says in the output.

At this point we try to run the service again. If it's a new service, it may still fail due to insufficient permissions; e.g. a typical service may need several permissions on a file, such as { getattr read open }, across multiple system calls, but a policy generated with audit2allow may only capture the most recent error.

So, if the service fails to start, check audit.log again; if there is another AVC denial, rerun the command to generate another module, and continue this process until there are no more AVC denials in audit.log.

By this time a couple of policy modules may have been generated, and we may want a single policy file to make future use easier. After merging the policy files, we need to compile the result, create a policy module package, and then install the module package as above.

For example, after merging policies, mine looked like the file below.

[WARNING] This post is for debugging and learning purposes only; under no circumstance should you need to apply custom policies for Eucalyptus. Please open an issue if you run across an AVC denial.


module eucanetd_policy 1.0;

require {
        type user_tmp_t;
        type unreserved_port_t;
        type eucalyptus_cloud_t;
        type eucalyptus_eucanetd_t;
        type eucalyptus_var_lib_t;
        type node_t;
        type eucalyptus_var_run_t;
        class capability { dac_read_search dac_override };
        class file { create getattr write open };
        class udp_socket { name_bind node_bind };
        class dir { read write search add_name };
}

#============= eucalyptus_cloud_t ==============
allow eucalyptus_cloud_t self:capability { dac_read_search dac_override };
allow eucalyptus_cloud_t user_tmp_t:dir read;

#============= eucalyptus_eucanetd_t ==============
allow eucalyptus_eucanetd_t eucalyptus_var_lib_t:dir search;
allow eucalyptus_eucanetd_t eucalyptus_var_run_t:dir { write add_name };
allow eucalyptus_eucanetd_t eucalyptus_var_run_t:file { create getattr write open };

allow eucalyptus_eucanetd_t node_t:udp_socket node_bind;
allow eucalyptus_eucanetd_t unreserved_port_t:udp_socket name_bind;

Update:
If you are planning to merge into another SELinux module, the following command can provide more precise output. Thanks to Tony Beckham for the suggestion.

ausearch --start recent --success no | audit2allow

After merging all the policies generated by audit2allow, the result needs to be compiled and then packaged so it can be installed as a module.

# compile policy
checkmodule -M -m eucanetd_policy.te -o eucanetd_policy.mod

# create module package
semodule_package -m eucanetd_policy.mod -o eucanetd_policy.pp

# install module
semodule -i eucanetd_policy.pp

By following the above steps, we should typically be able to resolve problems like AVC denials.

The source for eucalyptus-selinux: https://github.com/eucalyptus/eucalyptus-selinux
Report for Eucalyptus related issues: https://eucalyptus.atlassian.net

Configure DNS server for Helion Eucalyptus

Helion Eucalyptus has come a long way since its inception in 2007. It now comes with more services than ever, with more features to make your Eucalyptus cloud robust and more scalable. However, it's now at the point where configuring DNS has become a fundamental requirement; like they say, “With great power comes great responsibility.”


Eucalyptus services like Load Balancing and Imaging need DNS, and most importantly, when you want to use multiple User Facing Services, configuring DNS is no longer optional. Even though the title of the post is Configure DNS server for Helion Eucalyptus, this DNS server can also be used as a basic DNS server for other purposes in your data center, as well as with multiple Eucalyptus clouds at the same time.

Install packages for the DNS server:

yum install bind bind-utils

After installation we need to edit /etc/named.conf to add zone-specific information for forward and reverse lookup.

In this example, we have a forward zone called euca.example.net:

zone "euca.example.net" IN {
        type master;
        file "fwd.euca.example.net";
        allow-update { any; };
};

For this example, we allow dynamic updates from any host; the default is to deny all.

And since we have hosts in 10.17.198.x and 10.17.199.x, for simplicity we will use the first two octets for reverse DNS:

zone "17.10.in-addr.arpa" IN {
        type master;
        file "rev.euca.example.net";
        allow-update { any; };
};

For this example, we disabled DNSSEC:

dnssec-enable no;
dnssec-validation no;

The entire named.conf file should look roughly like the following minimal sketch (the listen-on address and file names are for this example setup):
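options {
	listen-on port 53 { 127.0.0.1; 10.17.198.5; };
	directory "/var/named";
	allow-query { any; };
	recursion yes;
	dnssec-enable no;
	dnssec-validation no;
};

zone "euca.example.net" IN {
	type master;
	file "fwd.euca.example.net";
	allow-update { any; };
};

zone "17.10.in-addr.arpa" IN {
	type master;
	file "rev.euca.example.net";
	allow-update { any; };
};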

In the example above, 10.17.198.5 is the host where the DNS server is being configured.

Now that we have zones configured, we will need to configure the forward and reverse DNS records for the DNS server:

Here is an example what’s the forward DNS configuration for the DNS server (aoe-08-5) should looks like:

$ORIGIN .
$TTL 86400	; 1 day
euca.example.net		IN SOA	aoe-08-5.euca.example.net. root.euca.example.net. (
				2011071306 ; serial
				3600       ; refresh (1 hour)
				1800       ; retry (30 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
			NS	aoe-08-5.euca.example.net.

Now configure reverse lookup for the DNS server:

$ORIGIN .
$TTL 86400	; 1 day
17.10.in-addr.arpa	IN SOA	aoe-08-5.euca.example.net. root.euca.example.net. (
				2011071301 ; serial
				3600       ; refresh (1 hour)
				1800       ; retry (30 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
			NS	aoe-08-5.euca.example.net.

Start/Restart named service:

service named start

At this point the DNS server should be ready to add records of other hosts.

We can update the forward zone records for any host in the network under the euca.example.net subdomain using the following commands:

nsupdate -d
> zone euca.example.net
> server 10.17.198.5
> update add aoe-08-11.euca.example.net 86400 A 10.17.198.11
> send

Here is a small script that can be used on all the hosts to update their forward DNS records; the sketch below also updates each host's existing DNS configuration and adds the new DNS server to the network script (the interface name and file paths are assumptions for this example):
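#!/bin/bash
# Sketch: run on each host to register its forward (A) record and
# point the host at the new DNS server. eth0 is assumed.
DNS_SERVER=10.17.198.5
ZONE=euca.example.net
SHORTNAME=$(hostname -s)
IP=$(ip -4 -o addr show eth0 | awk '{sub(/\/.*/,"",$4); print $4}')

# add (or refresh) the A record for this host
nsupdate <<EOF
server ${DNS_SERVER}
zone ${ZONE}
update delete ${SHORTNAME}.${ZONE} A
update add ${SHORTNAME}.${ZONE} 86400 A ${IP}
send
EOF

# update this host's own resolver configuration
echo "DNS1=${DNS_SERVER}" >> /etc/sysconfig/network-scripts/ifcfg-eth0
cat > /etc/resolv.conf <<EOF
search ${ZONE}
nameserver ${DNS_SERVER}
EOF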

For adding reverse DNS records:

nsupdate -d
> zone 17.10.in-addr.arpa
> server 10.17.198.5
> update add 11.198.17.10 86400 IN PTR aoe-08-11.euca.example.net.
> send

Another snippet, sketched below, updates the reverse DNS (PTR) records for all the hosts in the same network (the aoe-08-N naming scheme is an assumption for this example):
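#!/bin/bash
# Sketch: add PTR records for every host in 10.17.198.0/24,
# assuming the aoe-08-N hostname convention used above.
DNS_SERVER=10.17.198.5
for i in $(seq 1 254); do
    nsupdate <<EOF
server ${DNS_SERVER}
zone 17.10.in-addr.arpa
update add ${i}.198.17.10.in-addr.arpa 86400 IN PTR aoe-08-${i}.euca.example.net.
send
EOF
done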

Restarting the named service on the DNS server is not required to add/remove DNS records dynamically with nsupdate; however, the changes are not written to the zone files until the service is reloaded.

Example reverse zone file after adding the record and reloading the named service (a rough sketch based on the PTR record added above):
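$ORIGIN .
$TTL 86400	; 1 day
17.10.in-addr.arpa	IN SOA	aoe-08-5.euca.example.net. root.euca.example.net. (
				2011071302 ; serial
				3600       ; refresh (1 hour)
				1800       ; retry (30 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
			NS	aoe-08-5.euca.example.net.
$ORIGIN 198.17.10.in-addr.arpa.
11			PTR	aoe-08-11.euca.example.net.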

Now, since we want our DNS server to work with multiple Eucalyptus clouds, we need to forward requests to each cloud's Eucalyptus DNS service. Basically, we add NS records that delegate a custom subdomain to the User Facing Services (UFS) hostnames, so that all requests for that subdomain are forwarded to the UFS for Eucalyptus to resolve its service endpoints.
Example:

nsupdate -d
> zone euca.example.net
> server 10.17.198.5
> update add aoe-08-10.super.euca.example.net 86400 NS aoe-08-10.euca.example.net
> send

In the example above, the host aoe-08-10.euca.example.net runs the Eucalyptus DNS service, so any request under aoe-08-10.super.euca.example.net will be forwarded to aoe-08-10.euca.example.net to get the appropriate response.

After adding NS records for the hosts running the Eucalyptus DNS service (currently the User Facing Services come with the Eucalyptus DNS service) and reloading the named service on the DNS server (10.17.198.5), the forward zone file should look roughly like the sketch below (reconstructed from the records added above; the A record for aoe-08-10 follows the same naming pattern):
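$ORIGIN .
$TTL 86400	; 1 day
euca.example.net	IN SOA	aoe-08-5.euca.example.net. root.euca.example.net. (
				2011071307 ; serial
				3600       ; refresh (1 hour)
				1800       ; retry (30 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
			NS	aoe-08-5.euca.example.net.
$ORIGIN euca.example.net.
aoe-08-10		A	10.17.198.10
aoe-08-11		A	10.17.198.11
$ORIGIN super.euca.example.net.
aoe-08-10		NS	aoe-08-10.euca.example.net.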

Finally, configure the DNS-related Eucalyptus system properties:

euctl bootstrap.webservices.use_dns_delegation=true
euctl bootstrap.webservices.use_instance_dns=true
euctl system.dns.dnsdomain=aoe-08-10.super.euca.example.net

Check out our documentation for more information about Helion Eucalyptus: http://docs.hpcloud.com/eucalyptus/
Find us on irc: (freenode) #eucalyptus #eucalyptus-qa
Raise issues: https://eucalyptus.atlassian.net

Using AWS CodeDeploy with Eucalyptus Cloudformation for On-Premise Application Deployments

More Mind Spew-age from Harold Spencer Jr.

Background

Recently, Amazon Web Services (AWS) announced that their CodeDeploy service supports on-premise instances.  This is extremely valuable – especially for developers and administrators to allow utilization of existing on-premise resources.

For teams who are using HP Helion Eucalyptus 4.1 (or who want to use Eucalyptus), this is even better news.  This feature, along with HP Helion Eucalyptus 4.1 Cloudformation, lets developers deploy applications within a private cloud environment of HP Helion Eucalyptus.  This makes it even easier for developers and administrators to separate out and maintain production (AWS) versus development (HP Helion Eucalyptus) environments (or vice versa).  In addition, since HP Helion Eucalyptus strives for AWS compatibility, the Cloudformation templates used on Eucalyptus can be used with AWS with just a couple of modifications.

The Setup

To leverage on-premise instances with AWS CodeDeploy, please reference the AWS documentation entitled “Configure Existing On-Premises Instances…

View original post 798 more words

Eucalyptus 4.0 OSG Bucket ACLs and Object Lifecycle Management using Python Boto and s3cmd

More Mind Spew-age from Harold Spencer Jr.

Introduction

My last few blogs have discussed how Eucalyptus 4.0 supports similar Amazon Web Services, such as Elastic Compute Cloud (EC2), Simple Storage Services (S3), Elastic Load Balancing (ELB) and Security Token Services (STS) features. Using third party tools, such as python boto and s3cmd, features such as listing buckets and objects, listing running instances and assuming IAM roles were covered.  This blog entry is yet another example as to how Eucalyptus provides AWS compatible features in a private, on-premise cloud environment.

A couple of little known AWS-like features provided by the Eucalyptus 4.0.x Object Storage Gateway (OSG) that are barely discussed are bucket ACLs and object lifecycle management.

Both of these features help cloud users implement cross-account bucket and object access, and define how long an object lives in a bucket.  Keeping consistent with the earlier blog entries and continuing to promote…

View original post 1,671 more words

Eucalyptus Block Storage Service with Ceph RBD

Press: HP acquires Eucalyptus🙂

Embracing open source technology is not something new to us, and here at Eucalyptus we practice it every day. In Eucalyptus 4.1.0, we chose the beautiful commodity storage technology Ceph for our Block Storage Service. Yay! While the product is under heavy development, I am happy to give an update on how Ceph can be used as a block storage backend for the Eucalyptus Storage Controller (SC).

First things first: to use Ceph as a block storage manager, we need a basic Ceph cluster deployed. Apart from supporting several automation technologies, Ceph has a beautiful tool called ceph-deploy, and we found it really pleasing for deploying Ceph clusters. But since our requirement is to deploy Ceph clusters of different sizes on demand for testing Ceph+Eucalyptus setups, we needed some automation around it. After giving our use case some thought, we decided to take a newer path that would be easy for us to maintain: we wrote a small Chef cookbook that deploys a Ceph cluster with ceph-deploy. Fun, isn't it? 😉


For those who want a quick taste of Eucalyptus and Ceph at this early phase, here is how to do it:

To deploy using the ceph-cookbook, which uses ceph-deploy:

1. Clone the cookbook from https://github.com/shaon/ceph-cookbook.git (tested with 1x Monitor node and 2x OSDs; it should work with a larger number of OSDs)

2. Upload the cookbook to your Chef server

3. Update the environment file and upload it to the Chef server

4. Deploy the Ceph cluster using Motherbrain:

mb ceph-deploy bootstrap bootstrap.json --environment ceph_cluster -v

To add the Ceph cluster as a block storage backend for Eucalyptus, we need a newer kernel with RBD module support, and Ceph installed, on the Eucalyptus Storage Controller (SC). For this demo we will use the LT kernel from ELRepo:

yum install ceph
yum --enablerepo=elrepo-kernel install kernel-lt

You may need to reboot the host after setting the proper default kernel in grub.conf.

Now, from your MON node, copy ceph.conf and ceph.client.admin.keyring from the /root/mycluster directory into the /etc/ceph/ directory on the Eucalyptus SC host.
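For example, assuming the MON host is reachable over SSH as mon01 (hostname is illustrative):

scp mon01:/root/mycluster/ceph.conf /etc/ceph/
scp mon01:/root/mycluster/ceph.client.admin.keyring /etc/ceph/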

Register the Eucalyptus Storage Controller and set the <cluster>.storage.blockstoragemanager property to “ceph”, e.g.:

euca-modify-property -p PARTI00.storage.blockstoragemanager=ceph

At this point your Eucalyptus Storage Controller should be ready to be used with Ceph as a storage backend!

Now test your Eucalyptus SC with Ceph RBD by creating volumes and snapshots.
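For example, with euca2ools (the availability zone name comes from the property above; the volume ID is illustrative):

# create a 1 GiB volume in the PARTI00 availability zone
euca-create-volume -z PARTI00 -s 1

# once the volume is available, snapshot it
euca-create-snapshot vol-abc123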

To check the ongoing events on your Ceph cluster, run `ceph -w` on the Monitor node.

Happy Cephing with Eucalyptus!

Even though it’s under heavy development, we will love to hear your experience with Ceph+Eucalyptus.

Join us at #eucalyptus on Freenode.

Update: My colleague and good friend Swathi Gangisetty pointed out that no newer kernel is required for the Eucalyptus Storage Controller to be operational. Currently, the SC talks to Ceph via librbd through JNA bindings, which are packaged as a jar and ship with Eucalyptus.