
OpenStack 2015.1.0 for Ubuntu 14.04 LTS and Ubuntu 15.04

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack 2015.1.0 (Kilo) release in Ubuntu 15.04 and for Ubuntu 14.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 14.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Kilo on Ubuntu 14.04 installations by running the following commands:

 sudo add-apt-repository cloud-archive:kilo
 sudo apt-get update

The Ubuntu Cloud Archive for Kilo includes updates for Nova, Glance, Keystone, Neutron, Cinder, Horizon, Swift, Ceilometer and Heat; Ceph (0.94.1), RabbitMQ (3.4.2), QEMU (2.2), libvirt (1.2.12) and Open vSwitch (2.3.1) back-ports from 15.04 have also been provided.

Trove, Sahara, Ironic, Designate and Manila are also provided in the Ubuntu Cloud Archive for Kilo. Note that Canonical are not providing support for these packages as they are not in Ubuntu main – these packages are community supported, in line with other Ubuntu universe packages.

You can check out the full list of packages and versions here.

NOTE: We’re shipping Swift 2.2.2 for release – due to the relatively late inclusion of new dependencies to support erasure coding in Swift, we’ve opted not to update to 2.3.0 this cycle in Ubuntu.

NOTE: Designate and Trove are still working through the Stable Release Update process, due to some unit testing and packaging issues, so are lagging behind the rest of the release.

Ubuntu 15.04

No extra steps required; just start installing OpenStack!

Neutron Driver Decomposition

Ubuntu are only tracking the decomposition of Neutron FWaaS, LBaaS and VPNaaS from Neutron core in the Ubuntu archive. We expect to add packages for other Neutron ML2 mechanism drivers and plugins early in the Liberty/15.10 development cycle, and we’ll provide these as backports to OpenStack Kilo users as and when they become available.

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

 sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Thanks and have fun!


Which Open vSwitch?

Since Ubuntu 12.04, we’ve shipped a number of different Open vSwitch versions supporting various kernels in various ways; I thought it was about time the options were summarized to enable users to make the right choice for their deployment requirements.

Open vSwitch for Ubuntu 14.04 LTS

Ubuntu 14.04 LTS will be the first Ubuntu release to ship with in-tree kernel support for Open vSwitch with GRE and VXLAN overlay networking – all provided by the 3.13 Linux kernel. GRE and VXLAN are two of the tunnelling protocols used by OpenStack Networking (Neutron) to provide logical separation between tenants within an OpenStack Cloud.

This is great news from an end-user perspective: the requirement to use the openvswitch-datapath-dkms package disappears, as everything should just *work* with the default Open vSwitch module. It also allows us to have much more integrated testing of Open vSwitch as part of every kernel update we release for the 3.13 kernel going forward.

You’ll still need the userspace tooling to operate Open vSwitch; for Ubuntu 14.04 this will be the 2.0.1 release of Open vSwitch.

Open vSwitch for Ubuntu 12.04 LTS

As we did for the Raring 3.8 hardware enablement kernel, an openvswitch-lts-saucy package is working its way through the SRU process to support the Saucy 3.11 hardware enablement kernel; if you are using this kernel, you’ll be able to continue to use the full feature set of Open vSwitch by installing this new package:

sudo apt-get install openvswitch-datapath-lts-saucy-dkms

Note that if you are using Open vSwitch on Ubuntu 12.04 with the Ubuntu Cloud Archive for OpenStack Havana, you will already have access to this newer kernel module through the normal package name (openvswitch-datapath-dkms).

DKMS package names

Ubuntu 12.04/Linux 3.2: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.5: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.8: openvswitch-datapath-lts-raring-dkms (1.9.0)
Ubuntu 12.04/Linux 3.11: openvswitch-datapath-lts-saucy-dkms (1.10.2)
Ubuntu 12.04/Linux 3.13: N/A
Ubuntu 14.04/Linux 3.13: N/A
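The mapping above can be sketched as a small helper. The function itself is hypothetical – it simply encodes the table for quick lookup; the package names are the ones listed above.

```shell
# Map an Ubuntu release / kernel series combination to the matching
# Open vSwitch DKMS package from the table above.
ovs_dkms_package() {
    case "$1/$2" in
        12.04/3.2|12.04/3.5)   echo "openvswitch-datapath-dkms" ;;
        12.04/3.8)             echo "openvswitch-datapath-lts-raring-dkms" ;;
        12.04/3.11)            echo "openvswitch-datapath-lts-saucy-dkms" ;;
        12.04/3.13|14.04/3.13) echo "in-tree module - no DKMS package needed" ;;
        *) echo "unknown release/kernel combination" >&2; return 1 ;;
    esac
}

# Example: running the Saucy 3.11 HWE kernel on 12.04
ovs_dkms_package 12.04 3.11
```

Running the example prints `openvswitch-datapath-lts-saucy-dkms`, matching the SRU package discussed earlier.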

Hope that makes things clearer…


Avoiding missing-classpath lintian warnings with Maven based packages

Whilst I’ve been preparing the large dependency chain to support Jenkins for upload into Debian and Ubuntu, I came across a new lintian warning:

libakuma-java: missing-classpath libjna-java

This is a new check that validates that the jar files associated with a Debian -java package have a Class-Path entry set in the META-INF/MANIFEST.MF file.

This is used by wrapper scripts to dynamically set the classpath for Java applications at runtime based on the libraries that they use.

However, if you are packaging Maven based projects you don’t get this set for free (although that would be a great feature for maven-debian-helper).

Luckily the javahelper package has a CDBS class that can help out; simply add javahelper to the Build-Depends for your package then follow these steps:

1) debian/rules

Add the javahelper.mk class into the make file:

#!/usr/bin/make -f

include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/javahelper.mk
include /usr/share/cdbs/1/class/maven.mk

...
2) Add a .classpath file for the binary package

In my case this gets dropped into debian/libakuma-java.classpath

usr/share/java/akuma.jar /usr/share/java/jna.jar

Rebuild your package; the jar files should now have Class-Path entries and you should be lintian clean.
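For reference, this is roughly what the check is looking for. The snippet below recreates a manifest carrying the Class-Path entry that the example .classpath file above would produce, and greps it the same way you would inspect a built jar (e.g. with `unzip -p akuma.jar META-INF/MANIFEST.MF`); the temporary directory is just scaffolding for illustration.

```shell
# Recreate the manifest that javahelper writes into the jar and show
# the Class-Path entry that satisfies the lintian check.
workdir=$(mktemp -d)
mkdir -p "$workdir/META-INF"
cat > "$workdir/META-INF/MANIFEST.MF" <<'EOF'
Manifest-Version: 1.0
Class-Path: /usr/share/java/jna.jar
EOF

# This is the line lintian wants to see present:
grep '^Class-Path:' "$workdir/META-INF/MANIFEST.MF"
```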

Enjoy.


Trip Report: Puppet Camp Europe 2011

Puppet Camp in Amsterdam presented me with a great opportunity to get back up to speed with my favourite configuration management tool and to find out what else the guys at Puppet Labs have been up to over the last six months.

I’ve not attended a Puppet Camp before; I went with high expectations based on feedback from friends and colleagues and I was not disappointed. The event is a great mix of presentations from the Puppet Labs team, Puppet users and community members with break-out ‘Unconference’ sessions in the afternoon to focus on specific areas of interest that we all voted on after lunch.

Keynote

The conference started off with the keynote delivered by Luke Kanies, the founder of both Puppet and Puppet Labs. Puppet 2.7 is about to hit RC2 and Luke presented a great overview of what we can expect to see in this release. The focus of this release has been in the following areas:

  • Static Compiler Plugin: compiles file resources into the catalogue, which will reduce the number of calls between Puppet agents and the puppet master, improving overall efficiency.
  • Certificate API: Puppet CA gets a new RESTful remote API for management of certificates. This will be really important for integration into provisioning systems, both in the data centre and in the cloud.
  • Puppet Faces: the puppet CLI will provide a new set of ‘Faces’ commands which allow a more direct interface to Puppet internals. This will allow users to build custom agent behaviour for bespoke solutions much more easily.
  • Minimising Puppet Core: removal of excess dependencies (such as nagios plugins) to make the core of Puppet as simple and as efficient as possible while still allowing easy extension as required.

The other key change in 2.7 is a shift from GPL to Apache licensing for the product. This will appease enterprises who don’t like or won’t use GPL-licensed software, and will allow Puppet Labs to side-step the unresolved question of whether an extension developed in Ruby counts as a change to the core product under the GPL – no one really seems to know the implications of this uncertainty.

Other stuff that Puppet Labs are thinking about for the future include:

  • Dealing with localised change in the Puppet agent to make it easier to feed back into central manifests and templates.
  • Cross-node application configuration management; Puppet is great at managing individual nodes but not so great at orchestrating change in a synchronised way across multiple nodes. I suspect mCollective will be part of this story…
  • Database support
  • Change lifecycle management; mCollective seems to be key in this strategy but it’s not exactly clear how yet.
  • Further language changes to separate code from data

All good stuff IMHO.

Extending Puppet

Richard Crowley presented briefly about DevStructure’s current project, Blueprint, a tool to reverse engineer server configuration into Git.

He also presented on a number of ways in which Puppet can be extended to do interesting and crazy stuff – it was a good overview of the internal workings of Puppet (even if some of the examples came with warnings). One of the neatest concepts was storing additional server configuration metadata in DNS TXT records, which Puppet could then retrieve and use when compiling catalogues – kind of like a DNS-based external node classifier.
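A minimal sketch of that DNS-TXT idea, with the record naming scheme and key names invented for illustration: node metadata lives in a TXT record as "key=value" pairs, and a small script turns it into the YAML an external node classifier hands back to Puppet.

```shell
# Hypothetical helper: convert the raw TXT record data on stdin
# (e.g. "role=web datacentre=ams1") into ENC-style YAML parameters.
txt_to_enc_yaml() {
    read -r txt
    echo "---"
    echo "parameters:"
    for kv in $txt; do
        # split each pair into key and value at the first '='
        echo "  ${kv%%=*}: ${kv#*=}"
    done
}

# In real use the input would come from DNS, e.g.:
#   dig +short TXT "puppet.web01.example.com" | tr -d '"' | txt_to_enc_yaml
echo "role=web datacentre=ams1" | txt_to_enc_yaml
```

The example prints a YAML document with `role: web` and `datacentre: ams1` as parameters – enough for catalogue compilation to branch on, without touching the manifests themselves.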

Working with Puppet Modules

Henrik Lindberg from CloudSmith presented on some best practices for developing Puppet Modules and some of the work that is currently ongoing around the Puppet Forge, a central repository for community developed Puppet Modules.

He demonstrated Geppetto, an Eclipse-based IDE for developing Puppet Modules. I think developments such as this are key in the whole DevOps picture, as they give developers a familiar toolset with which to build functionality that can then be used for system configuration in Puppet – after all, Puppet is really about coding, not writing configuration files…

Geppetto could be a good target for inclusion into Debian and Ubuntu in a similar style to PyDev (see eclipse-pydev) and would make a nice, pre-integrated package for Ubuntu Desktops.

Puppet Continuous Integration

Nokia have taken the whole ‘Puppet is Code’ development principle one step further by applying Continuous Integration techniques to the modules they develop to manage the infrastructure that supports their location-based services.

Oliver Hookins from Nokia presented on how they have used Jenkins and cucumber-puppet to automate behaviour-driven testing of Puppet catalogue definitions prior to automated deployment into their production environment. I really like this concept as it injects a great software development discipline into configuration management development.

Puppi

Alessandro Franceschi from Lab42 gave a couple of great demonstrations (complete with cheesy 80’s-style synth music) of puppi, a Puppet module and CLI toolset to support the deployment of full applications and batch operations using Puppet manifests. Although this was really interesting in its own right, he also demoed mc-puppi – puppi integrated with mCollective – deploying applications across a large number of systems. It’s this type of integration that really demonstrates the power of broadcast-based systems management through mCollective.

Puppet DSL

Randall Hansen has been employed by Puppet Labs specifically to address the usability of Puppet across all of its interfaces: CLI, DSL and Dashboard. The current focus is on making the CLI consistent and usable, with actionable error messaging a key priority. The Puppet CLI has moved towards a “puppet command subcommand” structure over the last couple of releases and it looks like this strategy will continue to be developed.

This session generated quite a lot of questions, including ‘when will puppet get its own shell?’ (on the roadmap but not being actively developed) and ‘plugin support for dashboard?’ (that sounds like a really cool idea).

Unconference

I attended various Unconference sessions over the two days of Puppet Camp – here are my key takeouts:

  • Provisioning/Bootstrapping: an interesting session with respect to Ubuntu Orchestra, as a lot of the topics discussed were relevant to this project. Initial discussion focussed on which tools people were using for provisioning – Cobbler is in use (but seen as too specific to RHEL), while Foreman had a lot more focus and interest. This is probably due to its tighter integration with Puppet; it can act as an external node classifier and can manage certificates during deployment, and its smart proxy architecture looks really interesting (it can also proxy Puppet CA calls). The option to have a central management ‘foreman’ with deployment proxies in locations across the data centre(s) will fit well with what enterprises want in terms of security, so I can see its appeal.
  • mCollective: demo and questions with R.I. Pienaar (project founder). I really like the re-use of MoM (message-oriented middleware) concepts in systems administration; it scales so much better than traditional approaches and makes administering a large number of servers a piece of cake. It will be really interesting to see how Puppet Labs position this project alongside Puppet and what the blueprint deployment will look like in 6–12 months’ time. Again, this is a really important development to consider in the context of Ubuntu Orchestra.
  • Network Device management: Puppet 2.7 has a feature which allows Cisco switches to be managed as nodes – the first step in this direction for Puppet. This is seen as a key development by the user base as it will support more integrated data centre configuration management across disciplines.

Summary

Puppet continues to develop strongly; the product and the company behind it are really maturing. I think there are a few key changes to integrate in the next few releases; specifically separation of data and code in the Puppet DSL and integration of mCollective into the Puppet Labs vision.

The community around Puppet continues to be very active; CloudSmith are really leading the charge with community developed modules and the user base seems to be very highly engaged with the product.