Wrestling the Cephalopod

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

Sounds pretty cool right?

Ceph is leading the way in delivering petabyte- and exabyte-scale storage to thousands of clients using commodity hardware.

This post outlines some of the key activities that the Ubuntu Server Team have undertaken during the Ubuntu 12.10 development cycle to improve the Ceph experience on Ubuntu.

Chasing the Argonaut

Ubuntu 12.10 features Ceph 0.48.2 ‘Argonaut’, the first release of Ceph with long-term support.

While development continues at a blistering pace and new releases will contain new features, the 0.48.x series will only receive critical bug-fixes and stability improvements.

This is a really important step for Ceph deployments; having a stable, supported release to baseline on is critical to the operation and stability of production environments.

For more information on the 0.48.x releases, see the release notes for Ceph.

The ‘Missing Bits’

For Ubuntu 12.04, Ceph was included in Ubuntu ‘main’ which means that it receives an increased level of focus from both the Ubuntu Server and Security teams (underwritten by Canonical) for the lifecycle of the Ubuntu release.  However, to make this happen for the 12.04 release, some features of the packaging had to be disabled.

The good news is that those missing features have now been re-enabled in Ubuntu 12.10:

  • The RADOS Gateway (radosgw) provides a RESTful, S3 and Swift compatible gateway for storage and retrieval of objects in a Ceph cluster.
  • Ceph now uses Google Perftools (gperftools) on x86 architectures, providing higher-performance memory allocation.

This re-aligns the Ubuntu packaging with the packages available directly from Ceph and in Debian.

Juju Deployment

Ceph can now be deployed effectively using Juju, the service orchestration tool for Ubuntu Server.

The Ceph charms for Juju build upon the automation work done by Tommi Virtanen from Inktank (who I think should win an award for his innovative use of Upstart for bootstrapping Ceph Object Storage Daemons).

The charms are still pending review for entry into the Juju Charm Store as the official charms, but if you want to try them out:

cat > config.yaml << EOF
ceph:
  fsid: ecbb8960-0e21-11e2-b495-83a88f44db01 
  monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
  osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
  ephemeral-unmount: /mnt
EOF
juju deploy -n 3 --config config.yaml --constraints="cpu=2" cs:~james-page/quantal/ceph
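
One rough way to keep an eye on the deployment while the units come up (a sketch; it assumes the ceph charm has placed an admin keyring on the monitor units):

juju status
juju ssh ceph/0
# then, on the unit itself:
sudo ceph -s          # overall cluster status, including monitor quorum
sudo ceph health      # should eventually report HEALTH_OK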

Some time later you should have a small three-node Ceph cluster up and running.  You can then expand it with further storage nodes:

cat >> config.yaml << EOF
ceph-osd:
  osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
  ephemeral-unmount: /mnt
EOF
juju deploy -n 3 --config config.yaml --constraints="cpu=2" cs:~james-page/quantal/ceph-osd
juju add-relation ceph ceph-osd
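
To confirm the new OSDs have actually joined the cluster, the same approach works (again a sketch, run from one of the monitor units):

juju ssh ceph/0
# on the unit:
sudo ceph osd tree    # the OSDs from the ceph-osd units should be listed here
sudo ceph -s          # the osdmap count should have grown accordingly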

And then add a RADOS Gateway for RESTful access:

juju deploy cs:~james-page/quantal/ceph-radosgw
juju add-relation ceph ceph-radosgw
juju expose ceph-radosgw
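
To actually store something via the gateway's S3-compatible API you first need a user; a minimal sketch (the uid and display name here are purely illustrative):

juju ssh ceph-radosgw/0
# on the unit:
sudo radosgw-admin user create --uid=demo --display-name="Demo User"

The output includes an access key and secret key which can be used with any S3-compatible client pointed at the exposed ceph-radosgw address.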

The ceph-radosgw charm can also be scaled-out and fronted with haproxy:

juju add-unit -n 2 ceph-radosgw
juju deploy cs:precise/haproxy
juju add-relation haproxy ceph-radosgw
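
Once the relation is in place, client traffic can be pointed at haproxy rather than at the individual gateway units; exposing it and noting its public address is enough to get started (a sketch):

juju expose haproxy
juju status           # note the public address of the haproxy unit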

You should now have a deployment comprising a three-node Ceph cluster, additional ceph-osd storage units, and ceph-radosgw units fronted by haproxy.

Note that the above examples assume that you have a Juju environment already configured and bootstrapped; if you have not, the Juju getting started documentation covers this.

The ceph and ceph-osd charms require additional block storage devices to work correctly, so they will not work with the Juju local provider; they have been tested in OpenStack, EC2 and MAAS environments and generally work OK (aside from one issue when EC2 instances get domU-XX hostnames).

All of the charms have READMEs; take a look to find out more.

Credit to Paul Collins from the Canonical IS Projects team for initial work on the ceph charm.

OpenStack Integration

OpenStack provides direct integration with Ceph in two ways:

  • Glance: storage of images that will be used for virtual machine instances in the cloud
  • Volumes: persistent block storage devices which can be attached to virtual machine instances

Due to the scalable, resilient nature of Ceph, integration with OpenStack presents a compelling proposition.

Sebastien Han has already done a great job of explaining how to configure and use these features in OpenStack so I’m not going to go into the finer details here.
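
In essence the integration comes down to pointing Glance and nova-volume at RBD pools; roughly along these lines (a sketch using Folsom-era option names, which may differ in other releases):

# glance-api.conf: store images as objects in a Ceph pool
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images

# nova.conf: back nova-volume with RBD block devices
volume_driver = nova.volume.driver.RBDDriver
rbd_pool = volumes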

The OpenStack Juju charms for Ubuntu 12.10 will be updated to optionally use Ceph as a block and object storage back-end; here’s a preview:

juju add-relation glance ceph
juju add-relation nova-volume ceph
juju add-relation nova-compute ceph
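
A quick, hedged way to confirm the relations have taken effect is to check that the expected pools have appeared in the cluster (the pool names below are illustrative, as the charms choose them):

juju ssh ceph/0
# on the unit:
sudo rados lspools            # expect pools such as 'images' and 'volumes'
sudo rados -p images ls       # objects show up here once glance stores an image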

Job done…

What’s next?

Ceph plans for the next Ubuntu release might include:

  • Daily automated testing of Ceph on Ubuntu; the test is written, it just needs automating.
  • Making Ceph part of the per-commit testing of OpenStack that we do on Ubuntu.
  • Updating to the next Ceph LTS release.
  • Improving the out-of-the box configuration of the RADOS Gateway.
  • Using upstart configurations by default in the packaging.
  • Figuring out how to deliver Ceph releases to Ubuntu 12.04 so users who want to stick on the Ubuntu LTS can use the Ceph LTS.

Follow the Ceph Blueprint and UDS-R session to see how this pans out.
