
How we scaled OpenStack to launch 168,000 cloud instances

In the run up to the OpenStack summit in Atlanta, the Ubuntu Server team had its first opportunity to test OpenStack at real scale.

AMD made available 10 SeaMicro 15000 chassis in one of their test labs. Each chassis holds 64 servers, each with a 4 core, 2 threads-per-core CPU (8 logical cores), 32GB of RAM and 500GB of storage attached via a storage fabric controller – creating the potential to scale an OpenStack deployment to a large number of compute nodes in a small rack footprint.

As you would expect, we chose the best tools for deploying OpenStack:

  • MAAS – Metal-as-a-Service, providing commissioning and provisioning of servers.
  • Juju – The service orchestration for Ubuntu, which we use to deploy OpenStack on Ubuntu using the OpenStack charms.
  • OpenStack Icehouse on Ubuntu 14.04 LTS.
  • CirrOS – a small-footprint, Linux-based cloud OS

MAAS has native support for enlisting a full SeaMicro 15k chassis in a single command – all you have to do is provide it with the MAC address of the chassis controller and a username and password.  A few minutes later, all servers in the chassis will be enlisted into MAAS ready for commissioning and deployment:

maas local node-group probe-and-enlist-hardware \
  nodegroup model=seamicro15k mac=00:21:53:13:0e:80 \
  username=admin password=password power_control=restapi2
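
Enlisted nodes then need to be accepted into MAAS to kick off commissioning; using the same ‘local’ CLI profile as above, that should just be a case of:

maas local nodes accept-all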

Juju has been the Ubuntu Server team’s preferred method for deploying OpenStack on Ubuntu for as long as I can remember; Juju uses charms to encapsulate the knowledge of how to deploy each part of OpenStack (a service) and how services relate to each other – an example would be how Glance relates to MySQL for database storage, Keystone for authentication and authorization and (optionally) Ceph for actual image storage.

Using the charms and Juju, it’s possible to deploy complex OpenStack topologies using bundles, a yaml format for describing how to deploy a set of charms in a given configuration – take a look at the OpenStack bundle we used for this test to get a feel for how this works.
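
For illustration only, a heavily abridged bundle in the juju-deployer format we used might look something like the following – the real bundle linked above has many more services, relations and configuration options:

trusty-icehouse:
  series: trusty
  services:
    mysql:
      charm: cs:trusty/mysql
      num_units: 1
    rabbitmq-server:
      charm: cs:trusty/rabbitmq-server
      num_units: 1
    keystone:
      charm: cs:trusty/keystone
      num_units: 1
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      num_units: 1
    nova-compute:
      charm: cs:trusty/nova-compute
      num_units: 118
  relations:
    - [ keystone, mysql ]
    - [ nova-cloud-controller, mysql ]
    - [ nova-cloud-controller, rabbitmq-server ]
    - [ nova-cloud-controller, keystone ]
    - [ nova-compute, nova-cloud-controller ]
    - [ nova-compute, rabbitmq-server ]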


Starting out small(ish)

Not all ten chassis were available from the outset of testing, so we started off with two chassis of servers to test and validate that everything was working as designed.  With 128 physical servers, we were able to put together a Neutron-based OpenStack deployment with the following services:

  • 1 Juju bootstrap node (used by Juju to control the environment), also acting as the Ganglia master server
  • 1 Cloud Controller server
  • 1 MySQL database server
  • 1 RabbitMQ messaging server
  • 1 Keystone server
  • 1 Glance server
  • 3 Ceph storage servers
  • 1 Neutron Gateway network forwarding server
  • 118 Compute servers

We described this deployment using a Juju bundle, and used the juju-deployer tool to bootstrap and deploy the bundle to the MAAS environment controlling the two chassis.  Total deployment time for the two chassis, to the point of a usable OpenStack cloud, was around 35 minutes.

At this point we created 500 tenants in the cloud, each with its own private network (using Neutron), connected to the outside world via a shared public network.  The immediate impact of doing this is that Neutron creates dnsmasq instances, Open vSwitch ports and associated network namespaces on the Neutron Gateway data forwarding server – seeing this many instances of dnsmasq on a single server is impressive – and the server dealt with the load just fine!

Next we started creating instances; we looked at using Rally for this test, but it does not currently support using Neutron for instance creation testing, so we went with a simple shell script that created batches of servers (we used a batch size of 100 instances) and then waited for them to reach the ACTIVE state.  We used the CirrOS cloud image (developed and maintained by the Ubuntu Server team’s very own Scott Moser) with a custom Nova flavor with only 64 MB of RAM.
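
The script itself was nothing sophisticated – a minimal sketch of the approach looks something like the following (image, flavor and instance names are illustrative rather than the exact ones we used, and it assumes credentials are already sourced):

#!/bin/bash
# create a tiny 64MB flavor for the CirrOS instances (name/id illustrative)
nova flavor-create m1.micro auto 64 1 1

for batch in $(seq 1 10); do
    # boot a batch of 100 instances in a single API call
    # (in the Neutron deployment a --nic net-id=<tenant-net> is also required)
    nova boot --image cirros --flavor m1.micro \
        --min-count 100 --max-count 100 "batch-${batch}"
    # wait until the whole batch has reached the ACTIVE state
    while [ "$(nova list | grep -c ACTIVE)" -lt $((batch * 100)) ]; do
        sleep 10
    done
done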

We immediately hit our first bottleneck – by default, the Nova daemons on the Cloud Controller server spawn sub-processes equivalent to the number of cores that the server has, but Neutron does not, and we started seeing timeouts on the Nova Compute nodes waiting for VIF creation to complete.  Fortunately Neutron in Icehouse has the ability to configure worker processes, so we updated the nova-cloud-controller charm to set this to a sensible default and to provide users of the charm with a configuration option to tweak it.  The charm’s default now configures Neutron to match what Nova does – one process per core – and a simple multiplier in the charm configuration scales this up; we went for 10 on the Cloud Controller node (80 neutron-server processes, 80 nova-api processes, 80 nova-conductor processes).  This resolved the VIF creation timeouts we were hitting in Nova.
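
Under the hood this translates into the standard worker options in the Neutron and Nova configuration files; the values below are simply illustrative of what a 10x multiplier works out to on an 8 core Cloud Controller:

# /etc/neutron/neutron.conf
[DEFAULT]
api_workers = 80

# /etc/nova/nova.conf
[DEFAULT]
osapi_compute_workers = 80

[conductor]
workers = 80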

At around 170 instances per compute server, we hit our next bottleneck: the Neutron agent status on compute nodes started to flap, with agents being marked down as instances were being created.  After some investigation, it turned out that the time required to parse and then update the iptables firewall rules at this instance density took longer than the default agent timeout – hence agents kept dropping out from Neutron’s perspective.  This resulted in virtual interface (VIF) creation timing out, and we started to see instance activation failures when trying to create more than a few instances in parallel.  Without an immediate fix for this issue (see bug 1314189), we took the decision to turn Neutron security groups off in the deployment and run without any VIF-level iptables security.  This was applied using the nova-compute charm we were using, but is obviously not something that will make it back into the official charm in the Juju charm store.
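
For reference, turning security groups off at this level boils down to pointing Nova and the Neutron agent at their no-op firewall drivers on each compute node – roughly the following (Icehouse option names, shown for illustration):

# /etc/nova/nova.conf on each compute node
[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# Neutron L2 agent configuration on each compute node
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver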

With the workaround in place on the Compute servers, we were able to create 27,000 instances on the 118 compute nodes.  The API call times to create instances from the testing endpoint remained pretty stable during this test; however, as the Nova Compute servers became heavily loaded, the time taken for all instances to reach the ACTIVE state did increase.

Doubling up

At this point AMD had another two chassis racked and ready for use, so we tore down the existing two chassis, updated the bundle to target compute services at the two new chassis and re-deployed the environment.  With a total of 256 servers being provisioned in parallel, the servers were up and running within about 60 minutes; however, we hit our first bottleneck in Juju.

The OpenStack charm bundle we use has a) quite a few services and b) a lot of relations between services.  Juju was able to deploy the initial services just fine; however, when the relations were added, the load on the Juju bootstrap node went very high and the Juju state service on this node started to throw a large number of errors and became unresponsive – this has been reported back to the Juju core development team (see bug 1318366).

We worked around this bottleneck by bringing up the original two chassis in full, and then adding each new chassis in series to avoid overloading the Juju state server in the same way.  This obviously takes longer (about 35 minutes per chassis) but did allow us to deploy a larger cloud with an extra 128 compute nodes, bringing the total number of compute nodes to 246 (118+128).

And then we hit our next bottleneck…

By default, the RabbitMQ packaging in Ubuntu does not explicitly set a file descriptor ulimit, so it picks up the Ubuntu defaults – 1024 (soft) and 4096 (hard).  With 256 servers in the deployment, RabbitMQ hits this limit on concurrent connections and stops accepting new ones.  Fortunately it’s possible to raise this limit in /etc/default/rabbitmq-server – and as we had deployed using the rabbitmq-server charm, we were able to update the charm to raise this limit to something sensible (64k) and push that change into the running environment.  RabbitMQ restarted, problem solved.
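
/etc/default/rabbitmq-server is sourced as a shell fragment by the init script, so the whole change amounts to a single line (65536 shown here, matching the 64k mentioned above):

# /etc/default/rabbitmq-server
ulimit -n 65536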

With the 4 chassis in place, we were able to scale up to 55,000 instances.

Ganglia was letting us know that load on the Nova Cloud Controller during instance setup was extremely high (15-20), so we decided at this point to add another unit to this service:

juju add-unit nova-cloud-controller

and within 15 minutes we had another Cloud Controller server up and running, automatically configured for load balancing of API requests with the existing server and sharing the load for RPC calls via RabbitMQ.   Load dropped, instance setup time decreased, instance creation throughput increased, problem solved.

Whilst we were working through these issues and performing the instance creation, AMD had another two chassis (6 & 7) racked, so we brought them into the deployment adding another 128 compute nodes to the cloud bringing the total to 374.

And then things exploded…

The number of instances that can be created in parallel is driven by two factors – 1) the number of compute nodes and 2) the number of workers across the Nova Cloud Controller servers.  However, with six chassis in place, we were not able to increase the parallel instance creation rate as much as we wanted to without getting connection resets between Neutron (on the Cloud Controllers) and the RabbitMQ broker.

The lesson from this is that Neutron plus Nova makes for an extremely noisy OpenStack deployment from a messaging perspective, and a single RabbitMQ server appeared unable to deal with this load.  This resulted in a large number of instance creation failures, so we stopped testing and had a re-think.

A change in direction

After the failure we saw with the existing deployment design, and with more chassis still being racked by our friends at AMD, we still wanted to see how far we could push things; however, with Neutron in the design we could not realistically get past five or six chassis of servers, so we took the decision to remove Neutron from the cloud design and run with just Nova networking.

Fortunately this is a simple change to make when deploying OpenStack using the charms, as the nova-cloud-controller charm has a single configuration option to switch between Neutron and Nova networking. After tearing down and re-provisioning the six chassis:

juju destroy-environment maas
juju-deployer --bootstrap -c seamicro.yaml -d trusty-icehouse

with the revised configuration, we were able to create instances in batches of 100 at a respectable throughput of initially 4.5/sec – although this did degrade as load on the compute servers increased.  This allowed us to hit 75,000 running instances (with no failures) in 6 hrs 33 mins, pushing through to 100,000 instances in 10 hrs 49 mins – again with no failures.


As we saw in the smaller test, the API invocation time was fairly constant throughout the test, with the total provisioning time through to the ACTIVE state increasing due to load on the compute nodes.


Status check

OK – so we are now running an OpenStack cloud on Ubuntu 14.04 across six SeaMicro chassis (1, 2, 3, 5, 6, 7 – 4 comes later) – a total of 384 servers (give or take one or two which would not provision).  The cumulative load across the cloud at this point was pretty impressive – Ganglia does a good job of charting this.


AMD had two more chassis (8 & 9) in the racks which we had enlisted and commissioned, so we pulled them into the deployment as well; this did take some time – Juju was grinding pretty badly at this point, and just running ‘juju add-unit -n 63 nova-compute-b6’ was taking 30 minutes to complete (reported upstream – see bug 1317909).

After a couple of hours we had another ~128 servers in the deployment, so we pushed on and created more instances through to the 150,000 mark.  As instances landed on the servers in the two new chassis, the load on those individual servers increased more rapidly, so instance creation throughput degraded more quickly – but the cloud managed the load.

Tipping point?

Prior to starting testing at any scale, we had some issues with one of the chassis (4) which AMD had resolved during testing, so we shoved that back into the cloud as well; after ensuring that the 64 extra servers were reporting correctly to Nova, we started creating instances again.

However, the instances kept scheduling onto the servers in the previous two chassis we added (8 & 9), with the new nodes not getting any instances.  It turned out that the servers in chassis 8 & 9 were AMD based servers with twice the memory capacity; by default, Nova does not look at vCPU usage when making scheduling decisions, so as these 128 servers had more remaining memory capacity than the 64 new servers in chassis 4, they were still being targeted for instances.

Unfortunately I’d hopped onto the plane from Austin to Atlanta for a few hours so I did not notice this – and we hit our first 9 instance failures.  The 128 servers in Chassis 8 and 9 ended up with nearly 400 instances each – severely over-committing on CPU resources.

A few tweaks to the scheduler configuration – specifically turning on the CoreFilter and setting the CPU overcommit ratio to 32 – applied to the Cloud Controller nodes using the Juju charm, and instances started to land on the servers in chassis 4.  This seems like a sane thing to do by default, so we will add this to the nova-cloud-controller charm with a configuration knob to allow the overcommit ratio to be altered.
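
In nova.conf terms this amounts to something along the following lines (the filter list is abbreviated and the values are illustrative):

# /etc/nova/nova.conf on the Cloud Controller nodes
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,ComputeFilter
cpu_allocation_ratio = 32.0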

At the end of the day we had 168,000 instances running on the cloud – this may have got some coverage during the OpenStack summit….

The last word

Having access to this many real servers allowed us to exercise OpenStack, Juju, MAAS and our reference charm configurations in a way that we have not been able to undertake before.  Exercising infrastructure management tools and configurations at this scale really helps shake out the scale pinch points – in this test we specifically addressed:

  • Worker thread configuration in the nova-cloud-controller charm
  • Bumping open file descriptor ulimits in the rabbitmq-server charm to enable more concurrent connections
  • Tweaking the maximum number of mysql connections via charm configuration
  • Ensuring that the CoreFilter is enabled to avoid potential extreme overcommit on nova-compute nodes.

There were a few things we could not address during the testing for which we had to find workarounds:

  • Scaling a Neutron-based cloud past 256 physical servers
  • High instance density on nova-compute nodes with Neutron security groups enabled.
  • High relation creation concurrency in the Juju state server causing failures and poor performance from the juju command line tool.

We have some changes in the pipeline to the nova-cloud-controller and nova-compute charms to make it easier to split Neutron services onto different underlying messaging and database services.  This will allow the messaging load to be spread across different message brokers, which should allow us to scale a Neutron-based OpenStack cloud to a much higher level than we achieved during this testing.  We did find a number of other smaller niggles related to scalability – check out the full list of reported bugs.

And finally some thanks:

  • Blake Rouse for doing the enablement work for the SeaMicro chassis and getting us up and running at the start of the test.
  • Ryan Harper for kicking off the initial bundle configuration development and testing approach (whilst I was taking a break – thanks!) and shaking out the initial kinks.
  • Scott Moser for his enviable scripting skills which made managing so many servers a whole lot easier – MAAS has a great CLI – and for writing CirrOS.
  • Michael Partridge and his team at AMD for getting so many servers racked and stacked in such a short period of time.
  • All of the developers who contribute to OpenStack, MAAS and Juju!

.. you are all completely awesome!


Eventing Upstart

Upstart is an alternative /bin/init, a replacement for System V style initialization, and has been the default init in Ubuntu since 9.10 as well as in RHEL 6 and Google’s Chrome OS; it handles starting tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

The key difference from traditional init is that Upstart is event based; processes managed by Upstart are started and stopped as a result of events occurring in the system rather than scripts being executed in a defined order.

This post provides readers with a walk-through of a couple of Upstart configurations and explains how the event driven nature of Upstart provides a fantastic way of managing the processes running on your Ubuntu Server install.

Dissecting a simple configuration

Let’s start by looking at a basic Upstart configuration; specifically the one found in Floodlight (a Java based OpenFlow controller):

description "Floodlight controller"

start on runlevel [2345]
stop on runlevel [!2345]

setuid floodlight
setgid floodlight

respawn

pre-start script
    [ -f /usr/share/floodlight/java/floodlight.jar ] || exit 0
end script

script
    . /etc/default/floodlight
    exec java ${JVM_OPTS} -Dpython.home=/usr/share/jython \
        -Dlogback.configurationFile=/etc/floodlight/logback.xml \
        -jar /usr/share/floodlight/java/floodlight.jar \
        $DAEMON_OPTS 2>&1 >> /var/log/floodlight/floodlight.log
end script

This configuration is quite traditional in that it hooks into the runlevel events that Upstart emits automatically during boot to simulate a System V style system initialization:

start on runlevel [2345]
stop on runlevel [!2345]

These provide a simple way to convert a traditional init script into an Upstart configuration without having to think too hard about exactly which event should start your process.  Upstart also emits other useful events – for example the filesystem event, which is fired when all filesystems have been mounted.  For more information about events see the Upstart events man page.
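
For example, a job that should only start once local filesystems are mounted and networking is up could declare something like this (an illustrative fragment, not part of the Floodlight job):

start on (filesystem and started networking)
stop on stopping networking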

The configuration uses stanzas that tell Upstart to execute the scripts in the configuration as a different user:

setuid floodlight
setgid floodlight

and it also uses a process control stanza:

respawn

This instructs Upstart to respawn the process should it die for any unexpected reason; Upstart has some sensible defaults on how many times it will attempt to do this before giving it up as a bad job – these can also be specified in the stanza.
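
For example, to allow at most 10 respawns within any 5 second window before Upstart gives up on the job:

respawn
respawn limit 10 5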

The job has two scripts specified; the first is run prior to actually starting the process that will be monitored:

pre-start script
    [ -f /usr/share/floodlight/java/floodlight.jar ] || exit 0
end script

In this case it’s just a simple check to ensure that the floodlight package is still installed; Upstart configurations are treated as conffiles by dpkg, so they won’t be removed unless you purge the package from your system. The final script is the one that actually execs the process that will be monitored:

script
    . /etc/default/floodlight
    exec java ${JVM_OPTS} -Dpython.home=/usr/share/jython \
        -Dlogback.configurationFile=/etc/floodlight/logback.xml \
        -jar /usr/share/floodlight/java/floodlight.jar \
        $DAEMON_OPTS 2>&1 >> /var/log/floodlight/floodlight.log
end script

Upstart will keep an eye on Floodlight’s Java process for the duration of its lifetime.

And now for something clever…

The above example is pretty much a direct translation of an init script into an Upstart configuration; when you consider that an Upstart configuration can be triggered by any event being detected, the scope of what you can do with it increases enormously.

Ceph, the highly scalable, distributed object storage solution which runs on Ubuntu on commodity server hardware, has a great example on how this can extend to events occurring in the physical world.

Let’s look at how Ceph works at a high level.

A Ceph deployment will typically spread over a large number of physical servers; three of them will be running the Ceph Monitor daemon (MON), acting in quorum to track the topology of the Ceph deployment.  Ceph clients connect to these servers to retrieve the cluster map, which they then use to determine where the data they are looking for resides.

The rest of the servers will be running Ceph Object Storage Daemons (OSDs); these are responsible for storing and retrieving data on physical storage devices.   The recommended configuration is one OSD per physical storage device in any given server.  Servers can have quite a few direct-attached disks, so this could be tens of disks per server – meaning that in a larger deployment you may have hundreds or thousands of OSDs running at any given point in time.

This presents a challenge: how does the system administrator manage the Ceph configuration for all of these OSDs?

Ceph takes an innovative approach to addressing this challenge using Upstart.

The devices supporting the OSD’s are prepared for use using the ‘ceph-disk-prepare’ tool:

ceph-disk-prepare /dev/sdb

This partitions and formats the device with a specific layout and partition type UUID so it can be recognized as an OSD device; this is supplemented with an Upstart configuration which fires when devices of this type are detected:

description "Ceph hotplug"

start on block-device-added \
DEVTYPE=partition \
ID_PART_ENTRY_TYPE=4fbd7e29-9d25-41b8-afd0-062c0ceff05d

task
instance $DEVNAME

exec /usr/sbin/ceph-disk-activate --mount -- "$DEVNAME"

This Upstart configuration is a ‘task’; this means that it’s not long running, so Upstart does not need to provide ongoing process management once ‘ceph-disk-activate’ exits (no respawning or stopping, for example).

The ‘ceph-disk-activate’ tool mounts the device, prepares it for OSD usage (if not already prepared) and then emits the ‘ceph-osd’ event with a specific OSD id which has been allocated uniquely across the deployment; this triggers a second Upstart configuration:

description "Ceph OSD"

start on ceph-osd
stop on runlevel [!2345]

respawn
respawn limit 5 30

pre-start script
    set -e
    test -x /usr/bin/ceph-osd || { stop; exit 0; }
    test -d "/var/lib/ceph/osd/${cluster:-ceph}-$id" || { stop; exit 0; }

    install -d -m0755 /var/run/ceph

    # update location in crush; put in some suitable defaults on the
    # command line, ceph.conf can override what it wants
    location="$(ceph-conf --cluster="${cluster:-ceph}" --name="osd.$id" --lookup osd_crush_location || : )"
    weight="$(ceph-conf --cluster="$cluster" --name="osd.$id" --lookup osd_crush_initial_weight || : )"
    ceph \
        --cluster="${cluster:-ceph}" \
        --name="osd.$id" \
        --keyring="/var/lib/ceph/osd/${cluster:-ceph}-$id/keyring" \
        osd crush create-or-move \
    -- \
        "$id" \
    "${weight:-1}" \
    root=default \
    host="$(hostname -s)" \
    $location \
       || :

    journal="/var/lib/ceph/osd/${cluster:-ceph}-$id/journal"
    if [ -L "$journal" -a ! -e "$journal" ]; then
       echo "ceph-osd($UPSTART_INSTANCE): journal not present, not starting yet." 1>&2
       stop
       exit 0
    fi
end script

instance ${cluster:-ceph}/$id
export cluster
export id

exec /usr/bin/ceph-osd --cluster="${cluster:-ceph}" -i "$id" -f

This Upstart configuration does some Ceph configuration in its pre-start script to tell Ceph where the OSD is physically located and then starts the ceph-osd daemon using the unique OSD id.

These two Upstart configurations are used whenever OSD formatted block devices are detected; this includes at system start (so that OSD daemons start up on boot) and when disks are prepared for first use.
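
Because the OSD job is an instance job, the normal Upstart tooling can also be used to inspect or restart an individual OSD by hand – for example (assuming the default ‘ceph’ cluster name and an OSD with id 3):

# check and restart a single OSD instance
status ceph-osd cluster=ceph id=3
sudo restart ceph-osd cluster=ceph id=3

# list everything Upstart is currently managing for Ceph
initctl list | grep ceph-osd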

So how does this extend into the physical world?

The ‘block-device-added’ event can happen at any point in time –  for example:

  • One of the disks in server-X dies; the data centre operations staff have a pool of pre-formatted Ceph OSD replacement disks and replace the failed disk with a new one; Upstart detects the new disk and bootstraps a new OSD into the Ceph deployment.
  • server-Y dies with a main-board burnout; the data centre operations staff swap out the server, move the disks from the dead server into the new one and install Ceph onto the system disk; Upstart detects the OSD disks on reboot and re-introduces them into the Ceph topology in their new location.

In both of these scenarios no additional system administrator action is required; considering that a Ceph deployment might contain hundreds of servers and thousands of disks, automating activity around the physical replacement of devices in this way is critical in terms of operational efficiency.

Hats off to Tommi Virtanen@Inktank for this innovative use of Upstart – rocking work!

Summary

This post illustrates that Upstart is more than just a simple, event based replacement for init.

The Ceph use case shows how Upstart can be integrated into an application providing event based automation of operational processes.

Want to learn more? The Upstart Cookbook is a great place to start…


Introducing python-jenkins

Over the last 12 months I have made use of a great Python library, originally written by Ken Conley, to automate the various testing activities that we undertake as part of Ubuntu development using Jenkins.

Python Jenkins provides Python bindings to the Jenkins Remote API.

I’m pleased to announce that Python Jenkins 0.2 is available in Ubuntu Oneiric Ocelot and the project has now migrated to Launchpad for bug tracking, version control and release management.

The 0.2 release includes a number of bug fixes and new methods for managing Jenkins slave node configuration remotely – this is already being used in the juju charm for Jenkins to automatically create new slave node configuration when slave node units are added to a deployed juju environment (blog posting to follow….)

A quick overview…

It’s pretty easy to get started with python-jenkins on the latest Ubuntu development release – if you are running an earlier release of Ubuntu then you can use the PPA (Natty only ATM but more to follow):

sudo apt-get install python-jenkins

The library is also published to PyPI so you can use pip if you are not running Ubuntu.
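
Installing from PyPI is a one-liner (the package is published there as python-jenkins):

sudo pip install python-jenkins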

Here’s a quick example script:

#!/usr/bin/python
import jenkins
# Connect to instance - username and password are optional
j = jenkins.Jenkins('http://hostname:8080', 'username', 'password')

# Create a new job
if not j.job_exists('empty'):
    j.create_job('empty', jenkins.EMPTY_CONFIG_XML)
j.disable_job('empty')

# Copy a job
if j.job_exists('empty_copy'):
    j.delete_job('empty_copy')
j.copy_job('empty', 'empty_copy')
j.enable_job('empty_copy')

# Reconfigure an existing job and build it
j.reconfig_job('empty_copy', jenkins.RECONFIG_XML)
j.build_job('empty_copy')

# Create a slave node
if j.node_exists('test-node'):
    j.delete_node('test-node')
j.create_node('test-node')

You can find the current documentation here.

Happy automating!

Package rebuild testing using Jenkins and sbuild

I’ve been working on rebuilding the Java packages in the Ubuntu archive that depend on ‘default-jdk’ against OpenJDK 7, to see where we are likely to have issues if and when we decide to upgrade the default version of Java in Ubuntu to OpenJDK 7.

You can see the results here (limited time offer only); and here are the details of how I setup this rebuild.

Setup your rebuild server

So first things first: the current development release of Ubuntu (Oneiric Ocelot) includes the most recent LTS release of Jenkins, so installing it is pretty easy as long as you are using this release:

sudo apt-get install jenkins

You probably want to configure Jenkins with an administrator email address and point it at a mail relay so that Jenkins can tell you about failures.

Next you need to install a few development tools to make it all hang together nicely:

sudo apt-get install ubuntu-dev-tools sbuild apt-cacher-ng

This should get you to the point where you can set up the jenkins account to use sbuild:

sudo usermod -G sbuild jenkins
sudo stop jenkins
sudo start jenkins

Create sbuild environments

I decided that I wanted to compare current build results with openjdk-6 against the rebuild with openjdk-7 so I created two sbuild environments on my server – one for openjdk-6 and one for openjdk-7:

mk-sbuild --name=oneiric-java-6 oneiric
mk-sbuild --name=oneiric-java-7 oneiric

You might have noticed that I also installed apt-cacher-ng – as a lot of the dependencies that get downloaded are the same, having a local apt cache makes a lot of sense to speed things up; it’s pretty easy to point the schroots we just created at the local instance of apt-cacher-ng:

sudo su -c "echo 'Acquire::http { Proxy \"http://localhost:3142\"; };' > /var/lib/schroot/chroots/oneiric-java-6-amd64/etc/apt/apt.conf.d/02proxy"
sudo su -c "echo 'Acquire::http { Proxy \"http://localhost:3142\"; };' > /var/lib/schroot/chroots/oneiric-java-7-amd64/etc/apt/apt.conf.d/02proxy"

You will also need to set up sbuild’s local signing keys to make it work:

sudo sbuild-update -k

So we now have two sbuild environments set up – but we need one to point at openjdk-7 instead of openjdk-6 – so I stuck a modified copy of java-common (which determines the default-jdk) in a PPA:

sudo schroot -c oneiric-java-7-amd64-source
apt-get install python-software-properties
add-apt-repository ppa:james-page/default-jdk-7

Obviously this could just as easily be an upgrade to mysql, postgresql or any other package of your choosing.

Setting up Jenkins

I split this rebuild into ‘main’ and ‘universe’ components and used a multi-configuration project with axes for ‘package’ (containing the list of packages I wanted to rebuild) and ‘version’ (java-6, java-7); I’ve included the config.xml for one of the projects here.  You can also see the configuration here.

The build is a very simple script:

#!/bin/bash
# sbuild needs a writable HOME; use the jenkins user's home directory
export HOME=/var/lib/jenkins
# clean out the workspace from any previous run
rm -Rf *
# fetch the current source package from Launchpad
pull-lp-source ${package}
# build it in the schroot matching the 'version' axis of the matrix build
sbuild -d oneiric-${version} -A -n ${package}*.dsc

That should do the trick; you can then submit the project for build and it will gradually churn through the packages emailing you with any failures.