How we scaled OpenStack to launch 168,000 cloud instances

In the run-up to the OpenStack summit in Atlanta, the Ubuntu Server team had its first opportunity to test OpenStack at real scale.

AMD made 10 SeaMicro 15000 chassis available in one of their test labs. Each chassis houses 64 servers, each with a 4-core, 2-threads-per-core CPU (8 logical cores), 32GB of RAM and 500GB of storage attached via a storage fabric controller – creating the potential to scale an OpenStack deployment to a large number of compute nodes in a small rack footprint.

As you would expect, we chose the best tools for deploying OpenStack:

  • MAAS – Metal-as-a-Service, providing commissioning and provisioning of servers.
  • Juju – The service orchestration for Ubuntu, which we use to deploy OpenStack on Ubuntu using the OpenStack charms.
  • OpenStack Icehouse on Ubuntu 14.04 LTS.
  • CirrOS – a small-footprint, Linux-based cloud OS.

MAAS has native support for enlisting a full SeaMicro 15k chassis in a single command – all you have to do is provide it with the MAC address of the chassis controller and a username and password.  A few minutes later, all servers in the chassis will be enlisted into MAAS ready for commissioning and deployment:

maas local node-group probe-and-enlist-hardware \
  nodegroup model=seamicro15k mac=00:21:53:13:0e:80 \
  username=admin password=password power_control=restapi2

Juju has been the Ubuntu Server team’s preferred method for deploying OpenStack on Ubuntu for as long as I can remember; Juju uses Charms to encapsulate the knowledge of how to deploy each part of OpenStack (a service) and how services relate to one another – an example would be how Glance relates to MySQL for database storage, Keystone for authentication and authorization and (optionally) Ceph for actual image storage.

Using the charms and Juju, it’s possible to deploy complex OpenStack topologies using bundles, a YAML format for describing how to deploy a set of charms in a given configuration – take a look at the OpenStack bundle we used for this test to get a feel for how this works.
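To give a feel for the format, here’s a minimal illustrative fragment in the juju-deployer bundle style – the service names and charm references here are examples, not the exact bundle we used for the test:

```yaml
# Illustrative juju-deployer bundle fragment (not the actual scale-test bundle)
openstack:
  series: trusty
  services:
    mysql:
      charm: cs:trusty/mysql
      num_units: 1
    keystone:
      charm: cs:trusty/keystone
      num_units: 1
    glance:
      charm: cs:trusty/glance
      num_units: 1
  relations:
    - [ glance, mysql ]
    - [ glance, keystone ]
    - [ keystone, mysql ]
```

Each service maps to a charm, and the relations section wires the services together – juju-deployer reads this description and drives Juju to build the whole topology.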


Starting out small(ish)

Not all ten chassis were available at the outset of testing, so we started off with two chassis of servers to test and validate that everything was working as designed.  With 128 physical servers, we were able to put together a Neutron based OpenStack deployment with the following services:

  • 1 Juju bootstrap node (used by Juju to control the environment), Ganglia Master server
  • 1 Cloud Controller server
  • 1 MySQL database server
  • 1 RabbitMQ messaging server
  • 1 Keystone server
  • 1 Glance server
  • 3 Ceph storage servers
  • 1 Neutron Gateway network forwarding server
  • 118 Compute servers

We described this deployment using a Juju bundle, and used the juju-deployer tool to bootstrap and deploy the bundle to the MAAS environment controlling the two chassis.  Total deployment time for the two chassis, through to a usable OpenStack cloud, was around 35 minutes.

At this point we created 500 tenants in the cloud, each with its own private network (using Neutron), connected to the outside world via a shared public network.  The immediate impact of doing this is that Neutron creates dnsmasq instances, Open vSwitch ports and associated network namespaces on the Neutron Gateway data forwarding server – seeing this many instances of dnsmasq on a single server is impressive – and the server dealt with the load just fine!

Next we started creating instances; we looked at using Rally for this test, but it does not currently support using Neutron for instance creation testing, so we went with a simple shell script that created batches of servers (we used a batch size of 100 instances) and then waited for them to reach the ACTIVE state.  We used the CirrOS cloud image (developed and maintained by the Ubuntu Server team’s very own Scott Moser) with a custom Nova flavor with only 64 MB of RAM.
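The driver script was essentially the following shape – this is a hedged reconstruction rather than the actual script; boot_instance and all_active are stubs standing in for the real `nova boot` call and a poll of `nova list`:

```shell
#!/bin/bash
# Reconstruction of the batch-and-wait creation loop (illustrative, not the
# actual test script). boot_instance stands in for something like:
#   nova boot --flavor m64 --image cirros-0.3.1 ...
# and all_active stands in for polling `nova list` until every instance
# in the batch reports ACTIVE.
BATCH_SIZE=100
TOTAL=300          # the real runs went far higher than this

boot_instance() {
    echo "requested instance $1"
}

all_active() {
    # real version: return non-zero until `nova list` shows all ACTIVE
    return 0
}

booted=0
while [ "$booted" -lt "$TOTAL" ]; do
    for i in $(seq 1 "$BATCH_SIZE"); do
        boot_instance $((booted + i)) > /dev/null
    done
    booted=$((booted + BATCH_SIZE))
    until all_active; do sleep 5; done
    echo "batch complete: $booted instances requested"
done
```

Batching like this keeps a bounded amount of work in flight, which made it easy to spot exactly which load level triggered each bottleneck described below.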

We immediately hit our first bottleneck: by default, the Nova daemons on the Cloud Controller server spawn sub-processes equivalent to the number of cores that the server has, but Neutron does not, and we started seeing timeouts on the Nova Compute nodes waiting for VIF creation to complete.  Fortunately, Neutron in Icehouse has the ability to configure worker threads, so we updated the nova-cloud-controller charm to set this configuration to a sensible default, and to provide users of the charm with a configuration option to tweak the setting.  By default, the charm configures Neutron to match what Nova does – 1 process per core – and this can be scaled up using a simple multiplier; we went for 10 on the Cloud Controller node (80 neutron-server processes, 80 nova-api processes, 80 nova-conductor processes).  This resolved the VIF creation timeout issue we hit in Nova.
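For reference, the Neutron side of that change boils down to a worker count in neutron.conf – the option name below is the Icehouse-era one, the value shown is our 8 cores x 10 multiplier, and in our deployment the charm rendered this rather than it being hand-edited:

```ini
# /etc/neutron/neutron.conf (illustrative – managed by the charm in our case)
[DEFAULT]
# 8 cores x 10 multiplier = 80 API worker processes
api_workers = 80
```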

At around 170 instances per compute server, we hit our next bottleneck: the Neutron agent status on compute nodes started to flap, with agents being marked down as instances were being created.  After some investigation, it turned out that the time required to parse and then update the iptables firewall rules at this instance density took longer than the default agent timeout – hence agents kept dropping out from Neutron’s perspective.  This resulted in virtual interface (VIF) creation timing out, and we started to see instance activation failures when trying to create more than a few instances in parallel.  Without an immediate fix for this issue (see bug 1314189), we took the decision to turn Neutron security groups off in the deployment and run without any VIF-level iptables security.  This was applied using the nova-compute charm we were using, but is obviously not something that will make it back into the official charm in the Juju charm store.

With the workaround applied on the Compute servers, we were able to create 27,000 instances on the 118 compute nodes. The API call times to create instances from the testing endpoint remained pretty stable during this test; however, as the Nova Compute servers got heavily loaded, the amount of time taken for all instances to reach the ACTIVE state did increase.

Doubling up

At this point AMD had another two chassis racked and ready for use, so we tore down the existing two chassis, updated the bundle to target compute services at the two new chassis, and re-deployed the environment.  With a total of 256 servers being provisioned in parallel, the servers were up and running within about 60 minutes – however, we hit our first bottleneck in Juju.

The OpenStack charm bundle we use has a) quite a few services and b) a lot of relations between services – Juju was able to deploy the initial services just fine; however, when the relations were added, the load on the Juju bootstrap node went very high and the Juju state service on this node started to throw a large number of errors and became unresponsive – this has been reported back to the Juju core development team (see bug 1318366).

We worked around this bottleneck by bringing up the original two chassis in full, and then adding each new chassis in series to avoid overloading the Juju state server in the same way.  This obviously takes longer (about 35 minutes per chassis) but did allow us to deploy a larger cloud with an extra 128 compute nodes, bringing the total number of compute nodes to 246 (118+128).

And then we hit our next bottleneck…

By default, the RabbitMQ packaging in Ubuntu does not explicitly set a file descriptor ulimit so it picks up the Ubuntu defaults – which are 1024 (soft) and 4096 (hard).  With 256 servers in the deployment, RabbitMQ hits this limit on concurrent connections and stops accepting new ones.  Fortunately it’s possible to raise this limit in /etc/default/rabbitmq-server – and as we were deployed using the rabbitmq-server charm, we were able to update the charm to raise this limit to something sensible (64k) and push that change into the running environment.  RabbitMQ restarted, problem solved.
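The change itself is essentially a one-liner – /etc/default/rabbitmq-server is sourced before the broker starts, so the raised limit (value illustrative) looks like:

```ini
# /etc/default/rabbitmq-server
# Raise the open file descriptor limit before rabbitmq-server starts
ulimit -n 65536
```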

With the 4 chassis in place, we were able to scale up to 55,000 instances.

Ganglia was letting us know that load on the Nova Cloud Controller during instance setup was extremely high (15-20), so we decided at this point to add another unit to this service:

juju add-unit nova-cloud-controller

and within 15 minutes we had another Cloud Controller server up and running, automatically configured to load balance API requests with the existing server and to share the load for RPC calls via RabbitMQ.  Load dropped, instance setup time decreased, instance creation throughput increased – problem solved.

Whilst we were working through these issues and performing the instance creation, AMD had another two chassis (6 & 7) racked, so we brought them into the deployment adding another 128 compute nodes to the cloud bringing the total to 374.

And then things exploded…

The number of instances that can be created in parallel is driven by two factors – 1) the number of compute nodes and 2) the number of workers across the Nova Cloud Controller servers.  However, with six chassis in place, we were not able to increase the parallel instance creation rate as much as we wanted to without getting connection resets between Neutron (on the Cloud Controllers) and the RabbitMQ broker.

The lesson from this is that Neutron+Nova makes for an extremely noisy OpenStack deployment from a messaging perspective, and a single RabbitMQ server did not appear able to cope with this load.  This resulted in a large number of instance creation failures, so we stopped testing and had a re-think.

A change in direction

After the failure we saw in the existing deployment design, and with more chassis still being racked by our friends at AMD, we still wanted to see how far we could push things; however with Neutron in the design, we could not realistically get past 5-6 chassis of servers, so we took the decision to remove Neutron from the cloud design and run with just Nova networking.

Fortunately this is a simple change to make when deploying OpenStack using charms, as the nova-cloud-controller charm has a single configuration option to select between Neutron and Nova networking. After tearing down and re-provisioning the 6 chassis:

juju destroy-environment maas
juju-deployer --bootstrap -c seamicro.yaml -d trusty-icehouse

with the revised configuration, we were able to create instances in batches of 100 at a respectable initial throughput of 4.5/sec – although this did degrade as load on the compute servers went higher.  This allowed us to hit 75,000 running instances (with no failures) in 6 hours 33 minutes, pushing through to 100,000 instances in 10 hours 49 minutes – again with no failures.


As we saw in the smaller test, the API invocation time was fairly constant throughout, with the total provisioning time through to ACTIVE state increasing due to load on the compute nodes.


Status check

OK – so we are now running an OpenStack cloud on Ubuntu 14.04 across 6 SeaMicro chassis (1, 2, 3, 5, 6, 7 – 4 comes later) – a total of 384 servers (give or take one or two which would not provision).  The cumulative load across the cloud at this point was pretty impressive – Ganglia does a pretty good job of charting this.


AMD had two more chassis (8 & 9) in the racks which we had enlisted and commissioned, so we pulled them into the deployment as well.  This did take some time – Juju was grinding pretty badly at this point and just running ‘juju add-unit -n 63 nova-compute-b6’ was taking 30 minutes to complete (reported upstream – see bug 1317909).

After a couple of hours we had another ~128 servers in the deployment, so we pushed on and created more instances through to the 150,000 mark.  As the instances were landing on the servers in the 2 new chassis, the load on those individual servers increased more rapidly, so instance creation throughput slowed down faster – but the cloud managed the load.

Tipping point?

Prior to starting testing at any scale, we had some issues with one of the chassis (4) which AMD had resolved during testing, so we shoved that back into the cloud as well; after ensuring that the 64 extra servers were reporting correctly to Nova, we started creating instances again.

However, the instances kept scheduling onto the servers in the previous two chassis we had added (8 & 9), with the new nodes not receiving any instances.  It turned out that the servers in chassis 8 & 9 were AMD-based servers with twice the memory capacity; by default, Nova does not look at VCPU usage when making scheduling decisions, so as these 128 servers had more remaining memory capacity than the 64 new servers in chassis 4, they were still being targeted for instances.

Unfortunately I’d hopped onto the plane from Austin to Atlanta for a few hours, so I did not notice this – and we hit our first instance failures, 9 of them.  The 128 servers in chassis 8 and 9 ended up with nearly 400 instances each – severely over-committing on CPU resources.

A few tweaks to the scheduler configuration – specifically enabling the CoreFilter and setting the CPU overcommit ratio to 32 – were applied to the Cloud Controller nodes using the Juju charm, and instances started to land on the servers in chassis 4.  This seems like a sane thing to do by default, so we will add it to the nova-cloud-controller charm with a configuration knob to allow the overcommit ratio to be altered.
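On the Cloud Controller this amounts to something like the following nova.conf settings – the filter list here is shortened for illustration, and in our deployment the Juju charm rendered the equivalent configuration:

```ini
# /etc/nova/nova.conf (illustrative)
[DEFAULT]
# Include CoreFilter so VCPU usage is considered when scheduling
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,ComputeFilter
# Allow up to 32 VCPUs per physical core
cpu_allocation_ratio = 32.0
```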

At the end of the day we had 168,000 instances running on the cloud – this may have got some coverage during the OpenStack summit….

The last word

Having access to this many real servers allowed us to exercise OpenStack, Juju, MAAS and our reference Charm configurations in a way that we have not been able to undertake before.  Exercising infrastructure management tools and configurations at this scale really helps shake out the scale pinch points – in this test we specifically addressed:

  • Worker thread configuration in the nova-cloud-controller charm
  • Open file descriptor ulimits in the rabbitmq-server charm (raised to allow more concurrent connections)
  • The maximum number of MySQL connections (tweaked via charm configuration)
  • Enabling the CoreFilter to avoid extreme CPU overcommit on nova-compute nodes

There were a few things we could not address during testing, for which we had to find workarounds:

  • Scaling a Neutron-based cloud past 256 physical servers
  • High instance density on nova-compute nodes with Neutron security groups enabled
  • High relation creation concurrency in the Juju state server, causing failures and poor performance from the juju command line tool

We have some changes in the pipeline for the nova-cloud-controller and nova-compute charms to make it easier to split Neutron services onto different underlying messaging and database services.  This will allow the messaging load to be spread across different message brokers, which should allow us to scale a Neutron based OpenStack cloud to a much higher level than we achieved during this testing.  We did find a number of other smaller niggles related to scalability – check out the full list of reported bugs.

And finally some thanks:

  • Blake Rouse for doing the enablement work for the SeaMicro chassis and getting us up and running at the start of the test.
  • Ryan Harper for kicking off the initial bundle configuration development and testing approach (whilst I was taking a break – thanks!) and shaking out the initial kinks.
  • Scott Moser for his enviable scripting skills which made managing so many servers a whole lot easier – MAAS has a great CLI – and for writing CirrOS.
  • Michael Partridge and his team at AMD for getting so many servers racked and stacked in such a short period of time.
  • All of the developers who contribute to OpenStack, MAAS and Juju!

.. you are all completely awesome!


OpenStack 2014.1 for Ubuntu 12.04 and 14.04 LTS

I’m pleased to announce the general availability of OpenStack 2014.1 (Icehouse) in Ubuntu 14.04 LTS and in the Ubuntu Cloud Archive (UCA) for Ubuntu 12.04 LTS.

Users of Ubuntu 14.04 need take no further action other than follow their favourite install guide – but do take some time to check out the release notes for Ubuntu 14.04.

Ubuntu 12.04 users can enable the Icehouse pocket of the UCA by running:

sudo add-apt-repository cloud-archive:icehouse

The Icehouse pocket of the UCA also includes updates for associated packages including Ceph 0.79 (which will be updated to the Ceph 0.80 Firefly stable release), Open vSwitch 2.0.1, qemu 2.0.0 and libvirt 1.2.2 – you can check out the full list here.

Thanks goes to all of the people who have contributed to making OpenStack rock this release cycle – both upstream and in Ubuntu!

Remember that you can report bugs on packages from the UCA for Ubuntu 12.04 and from Ubuntu 14.04 using the ubuntu-bug tool – for example:

ubuntu-bug nova

will report the bug in the right place on launchpad and add some basic information about your installation.

The Juju charms for OpenStack have also been updated to support deployment of OpenStack Icehouse on Ubuntu 14.04 and Ubuntu 12.04.  Read the charm release notes for more details on the new features that have been enabled during this development cycle.

Canonical have a more concise install guide in the pipeline for deploying OpenStack using Juju and MAAS  – watch this space for more information…

EOM

 


OpenStack Icehouse RC1 for Ubuntu 14.04 and 12.04

OpenStack Icehouse RC1 packages for Cinder, Glance, Keystone, Neutron, Heat, Ceilometer, Horizon and Nova are now available in the current Ubuntu development release and the Ubuntu Cloud Archive for Ubuntu 12.04 LTS.

To enable the Ubuntu Cloud Archive for Icehouse on Ubuntu 12.04:

sudo add-apt-repository cloud-archive:icehouse
sudo apt-get update

Users of the Ubuntu development release (trusty) can install OpenStack Icehouse without any further steps required.

Other packages which have been updated for this Ubuntu release and are pertinent for OpenStack users include:

  • Open vSwitch 2.0.1 (+ selected patches)
  • QEMU 1.7 (upgrade to 2.0 planned prior to final release)
  • libvirt 1.2.2
  • Ceph 0.78 (firefly stable release planned as a stable release update)

Note that the 3.13 kernel that will be released with Ubuntu 14.04 supports GRE and VXLAN tunnelling via the in-tree Open vSwitch module – so no need to use dkms packages any longer!  You can read more about using Open vSwitch with Ubuntu in my previous post.

Ubuntu 12.04 users should also note that Icehouse is the last OpenStack release that will be backported to 12.04 – however it will receive support for the remainder of the 12.04 LTS support lifecycle (3 years).

Remember that you can always report bugs on packages in the Ubuntu Cloud Archive and Ubuntu 14.04 using the ubuntu-bug tool – for example:

ubuntu-bug nova-compute

Happy testing!

 


Which Open vSwitch?

Since Ubuntu 12.04, we’ve shipped a number of different Open vSwitch versions supporting various kernels in various ways; I thought it was about time the options were summarized to help users make the right choice for their deployment requirements.

Open vSwitch for Ubuntu 14.04 LTS

Ubuntu 14.04 LTS will be the first Ubuntu release to ship with in-tree kernel support for Open vSwitch with GRE and VXLAN overlay networking – all provided by the 3.13 Linux kernel. GRE and VXLAN are two of the tunnelling protocols used by OpenStack Networking (Neutron) to provide logical separation between tenants within an OpenStack Cloud.

This is great news from an end-user perspective as the requirement to use the openvswitch-datapath-dkms package disappears as everything should just *work* with the default Open vSwitch module. This allows us to have much more integrated testing of Open vSwitch as part of every kernel update that we will release for the 3.13 kernel going forward.

You’ll still need the userspace tooling to operate Open vSwitch; for Ubuntu 14.04 this will be the 2.0.1 release of Open vSwitch.

Open vSwitch for Ubuntu 12.04 LTS

As we did for the Raring 3.8 hardware enablement kernel, an openvswitch-lts-saucy package is working its way through the SRU process to support the Saucy 3.11 hardware enablement kernel; if you are using this kernel, you’ll be able to continue to use the full feature set of Open vSwitch by installing this new package:

sudo apt-get install openvswitch-datapath-lts-saucy-dkms

Note that if you are using Open vSwitch on Ubuntu 12.04 with the Ubuntu Cloud Archive for OpenStack Havana, you will already have access to this newer kernel module through the normal package name (openvswitch-datapath-dkms).

DKMS package names

Ubuntu 12.04/Linux 3.2: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.5: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.8: openvswitch-datapath-lts-raring-dkms (1.9.0)
Ubuntu 12.04/Linux 3.11: openvswitch-datapath-lts-saucy-dkms (1.10.2)
Ubuntu 12.04/Linux 3.13: N/A
Ubuntu 14.04/Linux 3.13: N/A

Hope that makes things clearer…


Call for testing: Juju and gccgo

Today I uploaded juju-core 1.17.0-0ubuntu2 to the Ubuntu Trusty archive.

This version of the juju-core package provides Juju binaries built using both the golang gc compiler and the gccgo 4.8 compiler that we have for 14.04.

The objective for 14.04 is to have a single toolchain for Go that can support x86, ARM and Power architectures. Currently the only way we can do this is to use gccgo instead of golang-go.

This initial build still only provides packages for x86 and armhf; other architectures will follow once we have sorted out exactly how to provide the ‘go’ tool on platforms other than these.

By default, you’ll still be using the golang gc built binaries; to switch to using the gccgo built versions:

sudo update-alternatives --set juju /usr/lib/juju-1.17.0-gcc/bin/juju

and to switch back:

sudo update-alternatives --set juju /usr/lib/juju-1.17.0/bin/juju

Having both versions available should make diagnosing any gccgo specific issues a bit easier.

To push the local copy of the jujud binary into your environment use:

juju bootstrap --upload-tools

This is not recommended for production use but will ensure that you are testing the gccgo built binaries on both client and server.

Thanks to Dave Cheney and the rest of the Juju development team for all of the work over the last few months to update the codebases for Juju and its dependencies to support gccgo!


OpenvSwitch for Ubuntu 12.04.3 LTS

Supporting the OpenvSwitch datapath kernel module packages on Ubuntu 12.04 whilst ensuring compatibility with the hardware enablement kernels that we push out for each point release has been challenging; the patch set I had to implement on top of 1.4.0 to support the Quantal 3.5 kernel was not insignificant!

The upstream provided datapath kernel module is important for OpenStack users as it provides support for overlay networking using GRE tunnels which is used extensively by Neutron for separation of Layer 2 tenant networks. Right now the native kernel module does not support this feature (although that is being worked on – hopefully for 14.04 we can drop the datapath module provided by upstream completely).

For the Raring 3.8 kernel that will ship with the Ubuntu 12.04.3 point release we are taking a slightly different approach; instead of patching the hell out of the 1.4.0 OpenvSwitch datapath module again, we will be providing specific packages for the Raring HWE kernel.

If you currently use the openvswitch-datapath-dkms module and want to switch to the Raring HWE kernel then you will need to take the following action:

sudo apt-get install openvswitch-datapath-lts-raring-dkms

There is also an equivalent openvswitch-datapath-lts-raring-source package for users of module-assistant. These packages are based on the 1.9.0 release of OpenvSwitch that we have in Ubuntu 13.04 which provides full compatibility with the 3.8 kernel.

The userspace tools and daemons, openvswitch-switch for example, are compatible with later datapath module versions so these won’t be upgraded.

These updates are currently in the precise-proposed pocket undergoing verification testing in preparation for release alongside Ubuntu 12.04.3 – see bug 1213021 for full details if you would like to help out with testing.

EOM


Targetted machine deployment with Juju

As I blogged previously, it’s possible to deploy multiple charms to a single physical server using KVM, Juju and MAAS with the virtme charm.

With earlier versions of Juju it was also possible to use the ‘jitsu deploy-to’ hack to deploy multiple charms onto a single server without any separation; however, this had some limitations – specifically around use of ‘juju add-unit’, which just did crazy things and made the hack not particularly useful in real-world deployments.  It also does not work with the latest versions of Juju, which no longer use ZooKeeper for co-ordination.

As of the latest release of Juju (available in this PPA and in Ubuntu Saucy), Juju now has native support for specifying which machine a charm should be deployed to:

juju bootstrap --constraints="mem=4G"
juju deploy --to 0 mysql
juju deploy --to 0 rabbitmq-server

This will result in an environment with a bootstrap machine (0) which is also running both mysql and rabbitmq:

$ juju status
machines:
  "0":
    agent-state: started
    agent-version: 1.11.4
    dns-name: 10.5.0.41
    instance-id: 37f3e394-007c-42b9-8bde-c14ae41f50da
    series: precise
    hardware: arch=amd64 cpu-cores=2 mem=4096M
services:
  mysql:
    charm: cs:precise/mysql-26
    exposed: false
    relations:
      cluster:
      - mysql
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.11.4
        machine: "0"
        public-address: 10.5.0.41
  rabbitmq-server:
    charm: cs:precise/rabbitmq-server-12
    exposed: false
    relations:
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: started
        agent-version: 1.11.4
        machine: "0"
        public-address: 10.5.0.41

Note that you need to know the identifier of the machine that you are going to “deploy --to” – in all deployments, machine 0 is always the bootstrap node, so the above example works nicely.

As of the latest release of Juju, the ‘add-unit’ command also supports the --to option, so it’s now possible to specifically target machines when expanding service capacity:

juju deploy --constraints="mem=4G" openstack-dashboard
juju add-unit --to 1 rabbitmq-server

I should now have a second machine running both the openstack-dashboard service and a second unit of the rabbitmq-server service:

$ juju status
machines:
  "0":
    agent-state: started
    agent-version: 1.11.4
    dns-name: 10.5.0.44
    instance-id: 99a06a9b-a9f9-4c4a-bce3-3b87fbc869ee
    series: precise
    hardware: arch=amd64 cpu-cores=2 mem=4096M
  "1":
    agent-state: started
    agent-version: 1.11.4
    dns-name: 10.5.0.45
    instance-id: d1c6788a-d120-44c3-8c55-03aece997fd7
    series: precise
    hardware: arch=amd64 cpu-cores=2 mem=4096M
services:
  mysql:
    charm: cs:precise/mysql-26
    exposed: false
    relations:
      cluster:
      - mysql
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.11.4
        machine: "0"
        public-address: 10.5.0.44
  openstack-dashboard:
    charm: cs:precise/openstack-dashboard-9
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version: 1.11.4
        machine: "1"
        public-address: 10.5.0.45
  rabbitmq-server:
    charm: cs:precise/rabbitmq-server-12
    exposed: false
    relations:
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: started
        agent-version: 1.11.4
        machine: "0"
        public-address: 10.5.0.44
      rabbitmq-server/1:
        agent-state: started
        agent-version: 1.11.4
        machine: "1"
        public-address: 10.5.0.45

These two features make it much easier to deploy complex services such as OpenStack which use a large number of charms on a limited number of physical servers.

There are still a few gotchas:

  • Charms are running without any separation, so it’s entirely possible for charms to stamp all over each other’s configuration files and try to bind to the same network ports.
  • Not all of the OpenStack charms are compatible with the latest version of Juju – this is being worked on – check out the OpenStack Charmers branches on Launchpad.

Juju is due to deliver a feature that will provide full separation of services using containers which will resolve the separation challenge.

For the OpenStack charms, the OpenStack Charmers team will be aiming to limit file-system conflicts as much as possible – specifically in charms that won’t work well in containers, such as nova-compute, ceph and neutron-gateway, because they make direct use of kernel features and network/storage devices.

Ubuntu OpenStack SRU cadence

At the last Ubuntu Developer Summit, the Ubuntu Server team discussed moving to a fixed cadence for releasing point releases of OpenStack into Ubuntu and the Ubuntu Cloud Archive for 12.04 under the Ubuntu Stable Release update process.

The amount of time between upstream point release and acceptance into Ubuntu and the Ubuntu Cloud Archive is relatively short, but the team felt that a more regular cadence was required to allow users of OpenStack on Ubuntu to plan around upstream point releases.

For future OpenStack point releases the Ubuntu Server team will be following a new cadence for pushing these releases into Ubuntu. This should allow the team to test and promote a point release of OpenStack into Ubuntu within two weeks of the upstream point release. Hopefully this will allow users of OpenStack on Ubuntu to plan upgrades a little more effectively going forwards.

For full details see the SRU Cadence documentation.

EOM

Mixing physical and virtual servers with Juju and MAAS

This is one of the most common questions I get asked about deploying OpenStack on Ubuntu using Juju and MAAS:

How can we reduce the number of servers required to deploy a small OpenStack Cloud?

OpenStack has a number of lighter weight services which don’t really make best use of anything other than the cheapest of cheap servers in this type of deployment; this includes the cinder, glance, keystone, nova-cloud-controller, swift-proxy, rabbitmq-server and mysql charms.

Ultimately Juju will solve the problem of service density in physical server deployments by natively supporting deployment of multiple charms onto the same physical servers; but in the interim I’ve hacked together a Juju charm, “virtme”, which can be deployed using Juju and MAAS to virtualize a physical server into a number of KVM instances which are also managed by MAAS.

Using this charm in conjunction with juju-jitsu allows you to make the most of a limited number of physical servers; I’ve been using this charm in a raring based Juju + MAAS environment:

juju bootstrap
(mkdir -p raring; cd raring; bzr branch lp:~virtual-maasers/charms/precise/virtme/trunk virtme)
jitsu deploy-to 0 --config config.yaml --repository . local:virtme

Some time later you should have an additional 7 servers registered into the MAAS controlling the environment ready for use. The virtme charm is deployed directly to the bootstrap node in the environment – so at this point the environment is using just one physical server.

The config.yaml file contains some general configuration for virtme:

virtme:
  maas-url: "http://<maas_hostname>/MAAS"
  maas-credentials: "<maas_token>"
  ports: "em2"
  vm-ports-per-net: 2
  vm-memory: 4096
  vm-cpus: 2
  num-vms: 7
  vm-disks: "10G 60G"

virtme uses OpenvSwitch to provide bridging between KVM instances and the physical network; right now this requires a dedicated port on the server to be cabled correctly – this is configured using ‘ports’. Each KVM instance will be configured with ‘vm-ports-per-net’ number of network ports on the OpenvSwitch bridge.

virtme also requires a URL and credentials for the MAAS cluster controller managing the environment; it uses this to register the details of the KVM instances it creates back into MAAS. Power control is supported using libvirt; virtme configures the libvirt daemon on the physical server to listen on the network and MAAS uses this to power control the KVM instances.

Right now the specification of the KVM instances is a little clunky – in the example above, virtme will create 7 instances with 2 vCPUS, 4096MB of memory and two disks, a root partition that is 10G and a secondary disk of 60G. I’d like to refactor this into something a little more rich to describe instances; maybe something like:

vms:
  small:
    - count: 7
    - cpu: 2
    - mem: 4096
    - networks: [ eth1, eth2 ]
    - disks: [ 10G, 20G ]

Now that the environment has a number of smaller, virtualized instances, I can deploy some OpenStack services onto these units:

juju deploy keystone
juju deploy mysql
juju deploy glance
juju deploy rabbitmq-server
....

leaving your bigger servers free to use for nova-compute:

juju deploy -n 6 --constraints="mem=96G" nova-compute

WARNING: right now libvirt is configured with no authentication or security on its network connection; this has obvious security implications! Future iterations of this charm will probably support SASL or SSH based security.

BOOTNOTE: virtme is still work-in-progress and is likely to change; if you find it useful let me know about what you like/hate!


Ubuntu Cloud Archive Bug Reporting

Since its launch, bug reporting for packages sourced from the Ubuntu Cloud Archive for Ubuntu 12.04 LTS has been a little awkward and somewhat manual.

As of apport version 2.0.1-0ubuntu17.2, you can now:

ubuntu-bug <pkgname>

for packages from the Cloud Archive and bugs will get routed to the correct project in Launchpad with lots of extra bug data.

Thanks to those who have spent time reporting bugs to date – hopefully this will make your lives a little easier!

EOM
