Ubuntu OpenStack Dev Summary – 22nd May 2017

Welcome to the first ever Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!


OpenStack Distribution

Stable Releases

Ceph 10.2.7 for Xenial, Yakkety, Zesty and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1684527

Open vSwitch updates (2.5.2 and 2.6.1) for Xenial and Yakkety plus associated UCA pockets:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1673063
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1641956

Point releases for Horizon (9.1.2) and Keystone (9.3.0) for Xenial and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1680098

And the current set of OpenStack Newton point releases have just entered testing:
https://bugs.launchpad.net/cloud-archive/+bug/1688557

Development Release

OpenStack Pike b1 is available in Xenial-Pike UCA (working through proposed testing in Artful).

Open vSwitch 2.7.0 is available in Artful and Xenial-Pike UCA.

Expect some focus on development previews for Ceph Luminous (the next stable release) for Artful and the Xenial-Pike UCA in the next month.


OpenStack Snaps

Progress on producing snap packages for OpenStack components continues; snaps for glance, keystone, nova, neutron and nova-hypervisor are available in the snap store in the edge channel – for example:

sudo snap install --edge --classic keystone

Snaps are currently Ocata aligned; once the team has a set of snaps that we’re all comfortable with as a good base, we’ll work towards publishing snaps across tracks for OpenStack Ocata and OpenStack Pike, as well as expanding the scope of projects covered with snap packages.

The edge channel for each track will contain the tip of the associated branch for each OpenStack project, with the beta, candidate and release channels being reserved for released versions. These three channels will be used to drive the CI process for validation of snap updates. This should result in an experience something like:

sudo snap install --classic --channel=ocata/stable keystone

or

sudo snap install --classic --channel=pike/edge keystone

As the snaps mature, the team will be focusing on enabling deployment of OpenStack using snaps in the OpenStack Charms (which will support CI/CD testing) and migration from deb based installs to snap based installs.


Nova LXD

Support for different Cinder block device backends for Nova-LXD has landed in the driver (and the supporting os-brick library), allowing Ceph Cinder storage backends to be used with LXD containers; this is available in the Pike development release only.

Work on supporting new LXD features that allow multiple storage pools is currently in flight; this will let the driver use dedicated storage for its LXD instances alongside any use of LXD via other tools on the same servers.
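For illustration, here’s how a dedicated pool might be carved out with LXD’s storage API once those features land – the pool name, driver and size below are purely hypothetical examples, not requirements of the Nova-LXD driver:

# hypothetical example: create a dedicated pool for Nova-LXD instances
lxc storage create nova-lxd zfs size=100GB
# existing pools used by other LXD workloads on the same host are unaffected
lxc storage list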


OpenStack Charms

6 monthly release cycle

The OpenStack Charms project is moving to a 6-monthly release cadence (rather than the 3-month cadence we’ve followed for the last few years). This reflects the reduced rate of new features across OpenStack and the charms, and the improved process for backporting fixes to the stable charm set between releases. The next charm release will be in August, aligned with the release of OpenStack Pike and the Xenial-Pike UCA.

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop up in the right place in Launchpad) – you can see the current stable bug pipeline here.

Ubuntu Artful and OpenStack Pike Support

Required changes to the OpenStack Charms to support deployment of Ubuntu Artful (the current development release) and OpenStack Pike are landing in the development branches for all charms, alongside the release of Pike b1 into Artful and the Xenial-Pike UCA.

You can consume these charms (as always) via the ~openstack-charmers-next team, for example:

juju deploy cs:~openstack-charmers-next/keystone

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


OpenStack Charms in Boston

At next week’s OpenStack Summit in Boston, the OpenStack Charms team will be holding an onboarding workshop on Monday at 4:40pm in MR-105.

This is a great opportunity to learn more about the project, both in terms of how to get started using the OpenStack Charms to deploy OpenStack and how to get involved with the project from a contribution perspective!

Let us know if you’re coming along and what you’d like to get out of the session here.

Looking forward to seeing you all next week!


snap install openstackclients

Over the last month or so I’ve been working on producing snap packages for a variety of OpenStack components. Snaps provide a new, fully isolated, cross-distribution packaging paradigm which, in the case of Python, is much more closely aligned with how Python projects manage their dependencies.

Alongside work on Nova, Neutron, Glance and Keystone snaps (which I’ll blog about later), we’ve also published snaps for end-user tools such as the OpenStack clients, Tempest and Rally.

If you’re running on Ubuntu 16.04 it’s really simple to install and use the openstackclients snap:

sudo snap install --edge --classic openstackclients

right now, you’ll also need to enable snap command aliases for all of the clients the snap provides:

# enable a snap alias for each client command the snap exposes under /snap/bin
ls -1 /snap/bin/openstackclients.* | cut -f 2 -d . | xargs sudo snap alias openstackclients

after doing this, you’ll have all of the client tools aligned to the OpenStack Newton release available for use on your install:

aodh
barbican
ceilometer
cinder
cloudkitty
designate
freezer
glance
heat
ironic
magnum
manila
mistral
monasca
murano
neutron
nova
openstack
sahara
senlin
swift
tacker
trove
vitrage
watcher
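
As a quick smoke test (a minimal sketch – the credentials file and cloud details are hypothetical), the aliased clients behave just like their deb-packaged equivalents:

source ~/novarc        # hypothetical file exporting OS_AUTH_URL, OS_USERNAME etc.
openstack catalog list
nova list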

The snap is currently aligned to the Newton OpenStack release; the intent is to publish snaps aligned to each OpenStack release using the series support that’s planned for snaps –  so you’ll be able to pick clients appropriate for any supported OpenStack release or for the current development release.

You can check out the source for the snap on github; writing a snap package for a Python project is pretty simple, as it makes use of the standard pip tooling to describe dependencies and install Python modules. Kudos to the snapcraft team who have done a great job on the Python plugin.
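
If you want to experiment with building the snap yourself, the snapcraft workflow is just a checkout and a build; the repository URL below is a placeholder for the source linked above:

# placeholder URL - use the repository linked above
git clone https://github.com/example/snap-openstackclients
cd snap-openstackclients
snapcraft                    # builds the snap, pulling Python dependencies via pip
sudo snap install --classic --dangerous openstackclients_*_amd64.snap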

Let us know what you think by reporting bugs or by dropping into #openstack-snaps on Freenode IRC!


OpenStack Newton B3 for Ubuntu

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Newton B3 milestone in Ubuntu 16.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Newton on Ubuntu 16.04 installations by running the following commands:

sudo add-apt-repository cloud-archive:newton
sudo apt update

The Ubuntu Cloud Archive for Newton includes updates for Aodh, Barbican, Ceilometer, Cinder, Designate, Glance, Heat, Horizon, Ironic (6.1.0), Keystone, Manila, Networking-OVN, Neutron, Neutron-FWaaS, Neutron-LBaaS, Neutron-VPNaaS, Nova, and Trove.

You can see the full list of packages and versions here.

Ubuntu 16.10

No extra steps required; just start installing OpenStack!
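
For example, to pull a few core services straight from the Ubuntu 16.10 archive (the package selection here is illustrative only – pick the packages appropriate to each node’s role):

sudo apt update
sudo apt install keystone glance-api nova-conductor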

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are delivering continuously integrated packages on each upstream commit in the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/liberty
sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/newton

bear in mind these are built per commit (we currently check for new commits every 30 minutes), so your mileage may vary from time to time.
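
After adding one of the PPAs above, a refresh and install pulls in the latest per-commit builds (the package named here is just an example):

sudo apt update
sudo apt install nova-compute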

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!

Cheers,

Corey

(on behalf of the Ubuntu OpenStack team)


OpenStack 2015.1.0 for Ubuntu 14.04 LTS and Ubuntu 15.04

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack 2015.1.0 (Kilo) release in Ubuntu 15.04 and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 14.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands:

 sudo add-apt-repository cloud-archive:kilo
 sudo apt-get update

The Ubuntu Cloud Archive for Kilo includes updates for Nova, Glance, Keystone, Neutron, Cinder, Horizon, Swift, Ceilometer and Heat; Ceph (0.94.1), RabbitMQ (3.4.2), QEMU (2.2), libvirt (1.2.12) and Open vSwitch (2.3.1) back-ports from 15.04 have also been provided.

Additionally Trove, Sahara, Ironic, Designate and Manila are also provided in the Ubuntu Cloud Archive for Kilo. Note that Canonical are not providing support for these packages as they are not in Ubuntu main – these packages are community supported in line with other Ubuntu universe packages.

You can check out the full list of packages and versions here.

NOTE: We’re shipping Swift 2.2.2 for release – due to the relatively late inclusion of new dependencies to support erasure coding in Swift, we’ve opted not to update to 2.3.0 this cycle in Ubuntu.

NOTE: Designate and Trove are still working through the Stable Release Update process, due to some unit testing and packaging issues, so are lagging behind the rest of the release.

Ubuntu 15.04

No extra steps required; just start installing OpenStack!

Neutron Driver Decomposition

Ubuntu are only tracking the decomposition of Neutron FWaaS, LBaaS and VPNaaS from Neutron core in the Ubuntu archive; we expect to add additional packages for other Neutron ML2 mechanism drivers and plugins early during the Liberty/15.10 development cycle – we’ll provide these as backports to OpenStack Kilo users as and when they become available.

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

 sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!


Neutron, ZeroMQ and Git – Ubuntu OpenStack 15.04 Charm release!

Alongside the Ubuntu 15.04 release on the 23rd April, the Ubuntu OpenStack Engineering team delivered the latest release of the OpenStack charms for deploying and managing OpenStack on Ubuntu using Juju.

Here are some selected highlights from this most recent charm release.

OpenStack Kilo support

As always, we’ve enabled charm support for OpenStack Kilo alongside development. To use this new release, set the openstack-origin configuration option of the charms, for example:

juju set cinder openstack-origin=cloud:trusty-kilo

NOTE: Setting this option on an existing deployment will trigger an upgrade to Kilo via the charms – remember to plan and test your upgrade activities prior to production implementation!

Neutron

As part of this release, the team have been working on enabling some of the new Neutron features that were introduced in the Juno release of OpenStack.

Distributed Virtual Router

One of the original limitations of the Neutron reference implementation (ML2 + Open vSwitch) was the requirement to route all north/south and east/west network traffic between instances via network gateway nodes.

For Juno, the Distributed Virtual Router (DVR) function was introduced to allow routing capabilities to be distributed more broadly across an OpenStack cloud.

DVR pushes a lot of the layer 3 network routing function of Neutron directly onto compute nodes – instances which have floating IPs are no longer forced to route via a gateway node for north/south traffic. This traffic is now pushed directly to the external network by the compute nodes via dedicated external network ports, bypassing the requirement for network gateway nodes.

Network gateway nodes are still required for SNAT north/south routing for instances that don’t have floating IP addresses.

For the 15.04 charm release, we’ve enabled this feature across the neutron-api, neutron-openvswitch and neutron-gateway charms – you can toggle this capability using configuration in the neutron-api charm:

juju set neutron-api enabled-dvr=true l2-population=true \
    overlay-network-type=vxlan

This feature requires that every compute node have a physical network port onto the external public facing network – this is configured on the neutron-openvswitch charm, which is deployed alongside nova-compute:

juju set neutron-openvswitch ext-port=eth1

NOTE: Existing routers will not be switched into DVR mode by default – this must be done manually by a cloud administrator.  We’ve also only tested this feature with vxlan overlay networks – expect gre and vlan enablement soon!

Router High Availability

For Clouds where the preference is still to route north/south traffic via a limited set of gateway nodes, rather than exposing all compute nodes directly to external network zones, Neutron has also introduced a feature to enable virtual routers in highly available configurations.

To use this feature, you need to be running multiple units of the neutron-gateway charm – again it’s enabled via configuration in the neutron-api charm:

juju set neutron-api enable-l3ha=true l2-population=false

Right now Neutron DVR and Router HA features are mutually exclusive due to layer 2 population driver requirements.

Our recommendation is that these new Neutron features are only enabled with OpenStack Kilo, as numerous features and improvements have been introduced over the 6 months since their first release with OpenStack Juno.

Initial ZeroMQ support

The ZeroMQ lightweight messaging kernel is a library which extends the standard socket interfaces with features traditionally provided by specialised messaging middleware products, without the requirement for a centralized message broker infrastructure.

Interest and activity around the 0mq driver in Oslo Messaging has been gathering pace during the Kilo cycle, with numerous bug fixes and improvements being made into the driver code.

Alongside this activity, we’ve enabled ZeroMQ support in the Nova and Neutron charms in conjunction with a new charm – ‘openstack-zeromq’:

juju deploy redis-server
juju deploy openstack-zeromq
juju add-relation redis-server openstack-zeromq
for svc in nova-cloud-controller nova-compute \
    neutron-api neutron-openvswitch quantum-gateway; do
    juju deploy $svc
    juju add-relation $svc openstack-zeromq
done

The ZeroMQ driver makes use of a Redis server to maintain a catalog of topic endpoints for the OpenStack cloud so that services can figure out where to send RPC requests.

We expect to enable further charm support as this feature matures upstream – so for now please consider this feature for testing purposes only.

Deployment from source

A core set of the OpenStack charms have also grown the capability to deploy from git repositories, rather than from the usual Debian package sources in Ubuntu. This allows all of the power of deploying OpenStack using charms to be re-used for deployments from active development.

For example, you’ll still be able to scale out and cluster OpenStack services deployed this way – seeing a keystone service deployed from git, running with haproxy, corosync and pacemaker as part of a fully HA deployment, is pretty awesome!

This feature is currently tested with the stable/icehouse and stable/juno branches – we’re working on completing testing of the kilo support and expect to land that as a stable update soon.

This feature is considered experimental and we expect to complete further improvements and enablement across a wider set of charms – so please don’t use it for production services!
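
As a rough illustration of what a from-source deployment looks like today (the configuration option name and YAML layout below are assumptions and may differ between charm versions – check the keystone charm’s config for the authoritative details):

# illustrative only - option name and repository layout are assumptions
juju set keystone openstack-origin-git="repositories:
  - {name: requirements, repository: 'git://git.openstack.org/openstack/requirements', branch: stable/juno}
  - {name: keystone, repository: 'git://git.openstack.org/openstack/keystone', branch: stable/juno}"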

And finally…

Alongside the features delivered in this release, we’ve also been hard at work resolving bugs across the charms – please refer to the milestone bug report for the full details.

We’ve also introduced features to enable easier monitoring with Nagios and support for Keystone PKI tokens as well as some improvements in the failure detection capabilities of the percona-cluster charm when operating in HA mode.

You can get the full low down on all of the changes in this release from the official release notes.


OpenStack Kilo RC1 for Ubuntu 14.04 LTS and Ubuntu 15.04

The Ubuntu OpenStack Engineering team is pleased to announce the general availability of the first release candidate of the OpenStack Kilo release in Ubuntu 15.04 development and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 14.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands:

 sudo add-apt-repository cloud-archive:kilo
 sudo apt-get update

The Ubuntu Cloud Archive for Kilo includes updates for Nova, Glance, Keystone, Neutron, Cinder, Horizon, Ceilometer and Heat; Ceph (0.94.1), RabbitMQ (3.4.2), QEMU (2.2), libvirt (1.2.12) and Open vSwitch (2.3.1) back-ports from 15.04 development have also been provided.

Note that for Swift we’re still at version 2.2.2 – we’re currently reviewing whether to include 2.3.0 for release.

Ubuntu 15.04 development

No extra steps required; just start installing OpenStack!

New OpenStack components

In addition to Trove, Sahara and Ironic we have now added Designate and Manila to the Ubuntu universe pocket.

Neutron Driver Decomposition

As of Kilo RC1, Ubuntu are only tracking the decomposition of Neutron FWaaS, LBaaS and VPNaaS from Neutron core in the Ubuntu archive; we expect to add additional packages for other Neutron ML2 mechanism drivers and plugins early during the Liberty/15.10 development cycle – we’ll provide these as backports to OpenStack Kilo users as and when they become available.

OpenStack Kilo Release

We have the slightly exciting situation this cycle that OpenStack Kilo releases a week after Ubuntu 15.04; the Ubuntu OpenStack Engineering team will be working on a stable update for all OpenStack projects as soon as OpenStack Kilo is released. I’d anticipate that these updates will be available around a week after the Kilo release date.

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

 sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!


OpenStack Summit Vancouver: Ubuntu OpenStack team presentations

Amongst the numerous submissions for speaking slots at the OpenStack Summit in Vancouver in May, you’ll find a select number of submissions from my team:

Multi-node OpenStack development on single system (Speakers: James Page, Corey Bryant)

Corey has been having some fun hacking on enabling deployment from source in the OpenStack Juju Charms for Ubuntu – come and hear about what we’ve done so far and how we’re trying to enable a multi-node OpenStack deployment from source on a single node using KVM and LXC containers, with devstack-style reloads!

Scaling automated testing of Ubuntu OpenStack (Speakers: James Page, Ryan Beisner, Liam Young)

The Ubuntu OpenStack team have an ever-increasing challenge of supporting testing of numerous OpenStack versions on many different Ubuntu releases; we’ll be covering how we’ve used OpenStack itself to help us scale out our testing infrastructure to support these activities, as well as some of the technologies and tools we use to deploy and test OpenStack itself.

OpenStack HA Nirvana on Ubuntu (Speaker: James Page)

We’ve been able to deploy OpenStack in Highly Available configurations using Juju and Ubuntu since the Portland Summit in 2013 – since then we have evolved and battle-tested our HA reference architecture into a rock-solid solution to ensure availability of cloud services to end users.  This session will cover the Ubuntu OpenStack HA reference architecture in detail – we might even manage a demo as well!

Testing Openstack with Openstack (Speaker: Ryan Beisner)

Ryan Beisner has been leading Ubuntu OpenStack QA for Canonical since 2014; he’ll be deep-diving on the challenges faced in ensuring the quality of Ubuntu OpenStack and how we’ve leveraged the awesome tool set we have in Ubuntu for deploying and testing OpenStack to support testing of OpenStack both virtually and on bare metal hundreds of times a day.

Also of interest, building on and around the base technology that the Ubuntu OpenStack team delivers:

OpenStack IPv6 Support (Speaker: Edward Hope-Morley)

Ed’s team have made great inroads into enabling Ubuntu OpenStack deployments in IPv6-only environments; he’ll be discussing the challenges encountered and how the team overcame them, as well as setting out some suggested improvements that would make IPv6 support a first-class citizen for OpenStack.

Autopiloting OpenStack (Speaker: Dean Henrichsmeyer)

Dean will be talking about how the Ubuntu OpenStack Autopilot pulls together all of the various technologies in Ubuntu (MAAS, Juju and OpenStack) to fully automate deployment and scale-out of complex OpenStack deployments on Ubuntu.

Containers for Dummies (Speaker: Tycho Andersen)

Tycho promises an enlightening and fun talk about containers introducing all the basic technologies in Linux that support containers – all done through the medium of pictures of cats!

You can find the full list of Canonical submissions here – see you all in Vancouver!


Ubuntu OpenStack Charms: 15.01 release

The Ubuntu Server team is pleased to announce their first interim release, 15.01, of charm features and fixes for the Ubuntu OpenStack charms for Juju – here are some selected highlights:

Clustering

General improvements have been made to the hacluster charm that we use for clustering OpenStack services; specifically the way quorum is handled in pacemaker and corosync has been improved so that clusters should react more appropriately in situations where one or more units fail.

We’ve also introduced a unicast mode for corosync cluster communication – this is useful in environments where multicast UDP might be disabled; in testing this has also proven much more reliable if you are running services under LXC containers spread across physical servers, and is the recommended configuration for these types of deployment.

Tuning

The ceph, ceph-osd, nova-compute and quantum-gateway charms have all gained a tuning configuration option which allows users to set sysctl options – we’ve provided some best-practice defaults in the ceph charms, but this feature will allow expert users to tune Ubuntu to their hearts’ content!
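
By way of illustration – assuming the option is named sysctl as on the ceph charms, and with purely example values – the option takes a YAML map of settings to apply:

juju set ceph-osd sysctl="{ kernel.pid_max: 2097152, fs.file-max: 262144 }"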

High Availability

The ceilometer and ceph-radosgw charms have grown HA support (using the hacluster charm), and the quantum-gateway charm now has a configuration option for Icehouse users to enable a legacy HA mode (again using the hacluster charm) to ensure that routers and networks are recovered onto active gateway nodes in the event that a unit fails.

We’ve also improved the nova-cloud-controller charm so that guest console access can be used in HA deployments by providing a memcached back-end for token storage and sharing between units.

Nova Ceph Storage Support

The nova-compute charm has grown support for different storage back-ends; the first new back-end support is for Ceph, allowing users to use Ceph for default storage of instance root and ephemeral disks.  You’ll want to be running some serious networking to use this feature – remember all those reads and writes will be going over the network!
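
A sketch of how this might be enabled – the option name is an assumption based on the nova-compute charm’s configuration, so check the charm’s config for the authoritative name:

juju set nova-compute libvirt-image-backend=rbd
juju add-relation nova-compute ceph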

And finally..

You can check out the list of bugs closed and read the full release notes – which contain more detail on these new features!

Thanks go to all the charm contributors:

  • Edward Hope-Morley
  • Billy Olsen
  • Liang Chen
  • Jorge Niedbalski
  • Xiang Hui
  • Felipe Reyes
  • Yaguang Tang
  • Seyeong Kim
  • Jorge Castro
  • Corey Bryant
  • Tom Haddon
  • Brad Marshall
  • Liam Young
  • Ryan Beisner

awesome job guys!

EOM


Extreme OpenStack: Scale testing OpenStack Messaging

Just prior to the Paris OpenStack Summit in November, the Ubuntu Server team had the opportunity to repeat and expand on the scale testing of OpenStack Icehouse that we did in the first quarter of last year with AMD and SeaMicro. HP were kind enough to grant us access to a few hundred servers in their Discovery Lab; specifically three chassis of HP ProLiant Moonshot m350 cartridges (540 in total).

The m350 is an 8-core Intel Atom based server with 16GB of RAM and 64GB of SSD based direct attached storage. They are designed for scale-out workloads, so not an immediately obvious choice for an OpenStack cloud, but for the purposes of stretching OpenStack to the limit, having lots of servers is great, as it puts load on central components in Neutron and Nova by having a large number of hypervisor edges to manage.

We had a few additional objectives for this round of scale testing, over and above re-validating the previous Icehouse scale test on the new Juno release of OpenStack:

  • Messaging: The default messaging solution for OpenStack on Ubuntu is RabbitMQ; alternative messaging solutions have been supported for some time – we wanted to specifically look at how ZeroMQ, a broker-less messaging option, scales in a large OpenStack deployment.
  • Hypervisor: The testing done previously was based on the libvirt/kvm stack with Nova; The LXC driver was available in an early alpha release so poking at this looked like it might be fun.

As you would expect, we used the majority of the same tooling that we used in the previous scale test:

  • MAAS (Metal-as-a-Service) for deployment of physical server resources
  • Juju: installation and configuration of OpenStack on Ubuntu

in addition, we also decided to switch over to OpenStack Rally to complete the actual testing and benchmarking activities. During our previous scale test this project was still in its infancy, but it’s grown a lot of features in the last 9 months, including better support for configuring Neutron network resources as part of test context set-up.

Messaging Scale

The first comparison we wanted to test was between RabbitMQ and ZeroMQ; RabbitMQ has been the messaging workhorse for Ubuntu OpenStack deployments since our first release, but larger clouds do make high demands on a single message broker both in terms of connection concurrency and message throughput. ZeroMQ removes the central broker from the messaging topology, switching to a more directly connected edge topology.

The ZeroMQ driver in Oslo Messaging has been a little unloved over the last year or so; however, some general stability improvements have been made, so it felt like a good time to take a look and see how it scales. For this part of the test we deployed a cloud of:

  • 8 Nova Controller units, configured as a cluster
  • 4 Neutron Controller units, configured as a cluster
  • Single MySQL, Keystone and Glance units
  • 300 Nova Compute units
  • Ganglia for monitoring

In order to push the physical servers as hard as possible, we also increased the default workers (cores x 4 vs cores x 2) and the CPU and RAM allocation ratios for the Nova scheduler. We then completed an initial 5000 instance boot/delete benchmark with a single RabbitMQ broker at a concurrency level of 150. Rally takes this as configuration options for the test runner – in this test Rally executed 150 boot-delete tests in parallel, with 5000 iterations in total:

action              min (sec)  avg (sec)  max (sec)  90 percentile  95 percentile  success  count
total               28.197     75.399     220.669    105.064        117.203        100.0%   5000
nova.boot_server    17.607     58.252     208.41     86.347         97.423         100.0%   5000
nova.delete_server  4.826      17.146     134.8      27.391         32.916         100.0%   5000
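
For reference, a Rally task along these lines drives the boot/delete scenario described above (the flavor and image names are placeholders rather than the ones used in the lab):

cat > boot-and-delete.json <<EOF
{
    "NovaServers.boot_and_delete_server": [{
        "args": {"flavor": {"name": "m1.small"}, "image": {"name": "cirros"}},
        "runner": {"type": "constant", "times": 5000, "concurrency": 150}
    }]
}
EOF
rally task start boot-and-delete.json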

Having established a baseline for RabbitMQ, we then redeployed and repeated the same test for ZeroMQ; we immediately hit issues with concurrent instance creation. After some investigation and re-testing, the cause was found to be Neutron’s use of fanout messages for communicating with hypervisor edges; the ZeroMQ driver in Oslo Messaging has an inefficiency in that it creates a new TCP connection for every message it sends – when Neutron attempted to send fanout messages to all hypervisor edges with a concurrency level of anything over 10, the overhead of creating so many TCP connections caused the workers on the Neutron control nodes to back up, and Nova started to time out instance creation on network setup.

So the verdict on ZeroMQ scalability with OpenStack? Lots of promise but not there yet….

We introduced a new feature to the OpenStack Charms for Juju in the last charm release to allow use of different RabbitMQ brokers for Nova and Neutron, so we completed one last messaging test to look at this:

action              min (sec)  avg (sec)  max (sec)  90 percentile  95 percentile  success  count
total               26.073     114.469    309.616    194.727        227.067        98.2%    5000
nova.boot_server    19.9       107.974    303.074    188.491        220.769        98.2%    5000
nova.delete_server  3.726      6.495      11.798     7.851          8.355          98.2%    5000

unfortunately we had some networking problems in the lab which caused some slowdown and errors for instance creation, so this specific test proved a little inconclusive. However, by running split brokers, we were able to determine that:

  • Neutron peaked at ~10,000 messages/sec
  • Nova peaked at ~600 messages/sec
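
For reference, the split-broker topology used for this test is plain Juju – the service names below are arbitrary, and each consuming charm simply relates to its own broker:

juju deploy rabbitmq-server rabbitmq-nova
juju deploy rabbitmq-server rabbitmq-neutron
juju add-relation nova-cloud-controller rabbitmq-nova
juju add-relation nova-compute rabbitmq-nova
juju add-relation neutron-api rabbitmq-neutron
juju add-relation quantum-gateway rabbitmq-neutron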

It’s also worth noting that the SSDs that the m350 cartridges use do make a huge difference, as the servers don’t suffer from the normal iowait times associated with spinning disks.

So in summary, RabbitMQ still remains the de facto choice for messaging in an Ubuntu OpenStack Cloud; it scales vertically very well – add more CPU and memory to your server and you can deal with a larger cloud – and benefits from fast storage.

ZeroMQ has a promising architecture but needs more work in the Oslo Messaging driver layer before it can be considered useful across all OpenStack components.

In my next post we’ll look at how hypervisor choice stacks up…
