Puppet Camp in Amsterdam presented me with a great opportunity to get back up to speed with my favourite configuration management tool and to find out what else the folks at Puppet Labs have been up to over the last six months.
I’ve not attended a Puppet Camp before; I went with high expectations based on feedback from friends and colleagues, and I was not disappointed. The event is a great mix of presentations from the Puppet Labs team, Puppet users and community members, with break-out ‘Unconference’ sessions in the afternoon to focus on specific areas of interest that we all voted on after lunch.
The conference started off with the keynote delivered by Luke Kanies, the founder of both Puppet and Puppet Labs. Puppet 2.7 is about to hit RC2 and Luke presented a great overview of what we can expect to see in this release. The focus of this release has been in the following areas:
- Static Compiler Plugin: compiles file resources into the catalogue, which will reduce the number of calls between Puppet agents and the puppet master, improving overall efficiency.
- Certificate API: Puppet CA gets a new RESTful remote API for management of certificates. This will be really important for integration into provisioning systems both in the data centre and in the cloud.
- Puppet Faces: the puppet CLI will provide a new set of ‘Faces’ commands which allow a more direct interface to Puppet internals. This will allow users to build custom agent behaviour for bespoke solutions much more easily.
- Minimising Puppet Core: removal of excess dependencies (such as the Nagios types) to make the core of Puppet as simple and as efficient as possible while still allowing easy extension as required.
The other key change in 2.7 is a shift from GPL to Apache-based licensing for the product. This will appease enterprises who don’t like/won’t use GPL-licensed software, and will allow Puppet Labs to side-step the unresolved question of whether an extension developed in Ruby counts as a change to the core product under the GPL – no one really seems to know the implications of this uncertainty.
Other stuff that Puppet Labs are thinking about for the future include:
- Dealing with localised change in the Puppet agent to make it easier to feed back into central manifests and templates.
- Cross-node application configuration management; Puppet is great at managing individual nodes but not so great at orchestrating change in a synchronised way across multiple nodes. I suspect mCollective will be part of this story…
- Database support
- Change lifecycle management; mcollective seems to be key in this strategy but it’s not exactly clear how yet.
- Further language changes to separate code from data
All good stuff IMHO.
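To illustrate what separating code from data might look like, here’s a minimal sketch using the extlookup function that ships with recent Puppet releases – the ‘ntp’ module name and the data key are made up for the example:

```puppet
# Instead of hard-coding values in the manifest, extlookup pulls them
# from CSV data files keyed on facts, keeping the manifest purely code.
# 'ntp_servers' is a hypothetical key; 'pool.ntp.org' is the fallback.
$ntp_servers = extlookup('ntp_servers', 'pool.ntp.org')

class { 'ntp':              # hypothetical parameterised class
  servers => $ntp_servers,
}
```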
Richard Crowley presented briefly about DevStructure’s current project, Blueprint, a tool to reverse engineer server configuration into Git.
He also presented on a number of ways in which Puppet can be extended to do interesting and crazy stuff – it was a good overview of the internal workings of Puppet (even if some of the examples came with warnings). One of the neatest concepts was storing additional server configuration metadata in DNS TXT records which Puppet could then retrieve and use when compiling catalogues – kind of like a DNS-based external node classifier.
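As a rough sketch of that idea (not Richard’s actual code – the TXT record format and helper names here are invented for illustration), an external node classifier script could resolve a per-host TXT record and emit the YAML that Puppet expects on stdout:

```ruby
require 'resolv'
require 'yaml'

# Parse a hypothetical TXT record payload such as
# "classes=apache,ntp environment=production" into a hash.
def parse_txt_record(txt)
  txt.split.each_with_object({}) do |pair, h|
    key, value = pair.split('=', 2)
    h[key] = value.include?(',') ? value.split(',') : value
  end
end

# Look up the node's TXT record and build an ENC-style classification.
def classify(hostname)
  txt = Resolv::DNS.open do |dns|
    dns.getresource(hostname, Resolv::DNS::Resource::IN::TXT).strings.join
  end
  data = parse_txt_record(txt)
  { 'classes'    => Array(data['classes']),
    'parameters' => { 'environment' => data['environment'] } }
end

# Puppet invokes an ENC with the node name and reads YAML from stdout.
if __FILE__ == $0 && ARGV.first
  puts classify(ARGV.first).to_yaml
end
```

The DNS lookup obviously requires the records to exist, so only the parsing half is easily testable offline.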
Working with Puppet Modules
Henrik Lindberg from CloudSmith presented on some best practices for developing Puppet Modules and some of the work that is currently ongoing around the Puppet Forge, a central repository for community developed Puppet Modules.
He demonstrated Geppetto, an Eclipse-based IDE for developing Puppet Modules. I think developments such as this are key in the whole DevOps picture: they give developers a toolset they should find familiar, which allows them to develop functionality that can then be used for system configuration in Puppet – after all, Puppet is really about coding, not writing configuration files…
Geppetto could be a good target for inclusion into Debian and Ubuntu in a similar style to PyDev (see eclipse-pydev) and would make a nice, pre-integrated package for Ubuntu Desktops.
Puppet Continuous Integration
Nokia have taken the whole ‘Puppet is Code’ development principle one step further by applying Continuous Integration techniques to the modules that they develop to manage the infrastructure that supports their location-based services.
Oliver Hookins from Nokia presented on how they have used Jenkins and cucumber-puppet to automate behavioural driven testing of Puppet catalogue definitions prior to automated deployment into their production environment. I really like this concept as it injects a great software development discipline into configuration management development.
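For anyone who hasn’t seen cucumber-puppet, a feature reads roughly like this – the step phrasing below is illustrative from memory and may not match the shipped step definitions exactly:

```gherkin
Feature: NTP on all nodes
  In order to keep clocks in sync
  All production nodes must run the NTP daemon

  Scenario: Web server catalogue
    Given a node specified by "features/yaml/web01.example.com.yaml"
    When I compile its catalog
    Then compilation should succeed
    And the catalog should contain the service "ntpd"
```

Because the features run against compiled catalogues rather than live systems, they slot neatly into a Jenkins pipeline ahead of deployment.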
Alessandro Franceschi from Lab42 gave a couple of great demonstrations (complete with cheesy 80’s-style synth music) of puppi, a Puppet module and CLI toolset to support the deployment of full applications and batch operations using Puppet manifests. Although this was really interesting in its own right, he also demoed mc-puppi – puppi integrated with mcollective – deploying applications across a large number of systems. It’s this type of integration that really demonstrates the power of broadcast-based systems management through mcollective.
Randall Hansen has been employed by Puppet Labs specifically to address the usability of Puppet across all of its interfaces: CLI, DSL and Dashboard. The current focus is on making the CLI consistent and usable, with actionable error messages a priority. The Puppet CLI has moved towards a “puppet command subcommand” structure over the last couple of releases and it looks like this strategy will continue to be developed.
This session generated quite a lot of questions, including ‘when will puppet get its own shell?’ (on the roadmap but not being actively developed) and ‘plugin support for Dashboard?’ (that sounds like a really cool idea).
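For context, the consolidation into the “puppet command subcommand” style replaces the old per-task binaries; roughly, the mapping looks like this (exact subcommands vary by release):

```
puppetd --test     ->  puppet agent --test
puppetca --list    ->  puppet cert list
puppetmasterd      ->  puppet master
ralsh user root    ->  puppet resource user root
```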
I attended various Unconference sessions over the two days of Puppet Camp – here are my key takeouts:
- Provisioning/Bootstrapping: an interesting session with respect to Ubuntu Orchestra, as a lot of the topics discussed were relevant to this project. Initial discussion focussed on which tools people were using for provisioning – Cobbler is in use (but seen as too specific to RHEL), while Foreman had a lot more focus and interest. This is probably due to its tighter integration with Puppet – it can act as an external node classifier and can manage certificates during deployment – and its smart proxy architecture, which looks really interesting (it can also proxy Puppet CA calls). The option to have a central management ‘foreman’ with deployment proxies in locations across the data centre(s) will fit well with what enterprises want in terms of security, so I can see its appeal.
- mCollective: demo and questions with R.I. Pienaar (project founder). I really like the re-use of MoM concepts in systems administration; it scales so much better than traditional approaches and makes administering a large number of servers a piece of cake. I think it will be really interesting to see how Puppet Labs position this project alongside Puppet and what the blueprint deployment will look like in 6–12 months’ time. Again, this is a really important development to consider in the context of Ubuntu Orchestra.
- Network Device management: Puppet 2.7 has a feature which allows Cisco switches to be managed as nodes – the first step in this direction for Puppet. This is seen as a key development by the user base as it will support more integrated data centre configuration management across disciplines.
Puppet continues to develop strongly; the product and the company behind it are really maturing. I think there are a few key changes to integrate in the next few releases; specifically separation of data and code in the Puppet DSL and integration of mCollective into the Puppet Labs vision.
The community around Puppet continues to be very active; CloudSmith are really leading the charge with community developed modules and the user base seems to be very highly engaged with the product.