
OpenStack Charms @ Denver PTG

Last week, a number of the OpenStack Charms team and I had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross-project discussions, with the last three days focused on project-specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators.  What was immediately evident was that this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade.  Deployments would still need to step through each OpenStack release series in turn, so the discussion centred on how to make this much easier for operators and deployment tools to consume than it has been to date.

There was general agreement on the principle that all steps required to update a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running.  This would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it’s never been agreed as a general policy as part of the ‘supports-upgrade’ tag, which was one of the actions resulting from this discussion.

In the context of the OpenStack Charms, we already follow something along these lines to minimise the amount of service disruption in the control plane during OpenStack upgrades. With this approach implemented across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.
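
For reference, a charm-managed OpenStack upgrade today is driven per application by pointing the charm at the next UCA pocket and then stepping units with an action; a minimal sketch using the existing options on the charms (the release names shown are illustrative):

# opt in to operator-controlled, per-unit upgrades
juju config keystone action-managed-upgrade=true
# move the application forward one OpenStack release
juju config keystone openstack-origin=cloud:xenial-ocata
# then upgrade one unit at a time
juju run-action keystone/0 openstack-upgrade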

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role-based access to API endpoints – for example, which operations require admin-level permissions for the cloud. Moving all policy default definitions into code, rather than a configuration file, is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release.  Notably Nova and Keystone have already moved to this approach during previous development cycles.
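
As a rough illustration of the delta-only approach, assuming the service reads extra policy snippets from oslo.policy’s policy.d directory, an override can be as small as a single rule rather than a complete policy file (the rule below is purely illustrative):

cat > /etc/keystone/policy.d/10-overrides.yaml <<'EOF'
# illustrative override: restrict user listing to the admin role
"identity:list_users": "role:admin"
EOF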

Deployment (SIG)

During the first two days, some cross-deployment-tool discussions were held on a variety of topics. Of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects, so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on; for example, can Nova access Keystone’s API for authentication of tokens. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.
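
For reference, oslo.middleware already provides a simple healthcheck endpoint which some services can expose in their API pipeline; the deeper checks discussed would build on this idea. A probe against such an endpoint looks something like this (host and port are illustrative):

curl -s http://nova.example.com:8774/healthcheck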

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time. The Keystone v2 API is officially deprecated, so we discussed the approach for switching the default API deployed by the charms going forward; in summary:

  • New deployments should default to the v3 API and associated policy definitions
  • Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact on services built around v2-based deployments already in production.
  • The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so.

At some point we’ll have to switch v2 deployments to v3 automatically on OpenStack series upgrade, but that does not have to happen yet.
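
For deployments that are ready to make the move, the switch is a single configuration change; a minimal sketch, assuming the keystone charm’s preferred-api-version option (check the charm configuration for the exact name in your release):

juju config keystone preferred-api-version=3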

Keystone: Fernet Token support

The charms currently only support UUID based tokens (since PKI support was dropped from Keystone). The preferred format is now Fernet, so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.
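
For reference, Fernet support in Keystone itself revolves around a key repository which has to be created and rotated with keystone-manage; the charm work is largely about automating these steps and distributing the resulting keys across units:

# initialise the Fernet key repository (the charm would automate this)
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# rotate keys periodically; new keys must be synced to all keystone units
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone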

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch, which right now is stable/17.08.  At the point of the next release, the stable/17.08 branch will no longer be maintained, being superseded by the new stable/XX.XX branch.  This is reflected in the promulgated charms in the Juju charm store as well.  Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model, which is not that elegant but would work.  Maintaining more branches means more effort for cherry-picks and reviews, which is not feasible with the amount of time the development team currently has for these activities – so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on the underlying control plane infrastructure, as daemons drop and re-connect to message and database services, potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up focusing on how we could implement a feature which batches restarts of services into chunks based on a user-provided configuration option.

You can read the full details in the proposed specification for this work.
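
The rough shape of the feature is a charm-level knob controlling how many units restart at once; the option name below is purely hypothetical – the actual interface is defined in the specification linked above:

# hypothetical option name, for illustration only
juju config nova-compute restart-batch-size=10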

We also had some good conversation around how unit-level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe it’s causing problems) without having to restart services across all units.  This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

  • Multiple Region Cloud Deployments
    • Keystone + MySQL and Dashboard in one model (supporting all regions)
    • Each region (including region-specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DCs.
  • Keystone Federation Support
    • Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.
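
To give a flavour of what this could look like once cross-model relations mature, the sketch below uses the offer/consume workflow being developed in Juju – the exact syntax and offer URLs were still evolving at the time, so treat these commands as illustrative:

# in the shared services model: offer keystone's identity endpoint
juju offer keystone:identity-service
# in a region model (potentially on another controller): consume and relate
juju consume admin/shared-services.keystone
juju add-relation nova-cloud-controller keystone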

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements with good baseline metrics on charm hook execution times on reference hardware deployments, so that any proposed changes to charms can be assessed in terms of their positive or negative impact on individual unit hook execution time and overall deployment duration – expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP.  This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.
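
The signal the charms would key off already exists in the form of the vip option used for HA deployments alongside the hacluster subordinate; a minimal sketch of that existing pattern (addresses and application names are illustrative):

juju config keystone vip=10.20.0.100
juju deploy hacluster keystone-hacluster
juju add-relation keystone keystone-hacluster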

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle.

Good progress was made in most areas with some reviews already up.

We had a good turnout with 10 charm developers in the devroom – thanks to everyone who attended and a special call-out to Billy Olsen who showed up with team T-Shirts for everyone!

We have some new specs already up for review, and I expect to see a few more over the next two weeks!

EOM


Ubuntu OpenStack Dev Summary – 12th June 2017

Welcome to the second Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content, please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!


OpenStack Distribution

Stable Releases

The current set of OpenStack Newton point releases has been released:

https://bugs.launchpad.net/cloud-archive/+bug/1688557

The next cadence cycle of stable fixes is underway – the current candidate list includes:

Cinder: RBD calls block entire process (Kilo)
https://bugs.launchpad.net/cinder/+bug/1401335

Cinder: Upload to image does not copy os_type property (Kilo)
https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1692446

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)
https://bugs.launchpad.net/ubuntu/trusty/+source/swift/+bug/1683076

Neutron: Router HA creation race (Mitaka, Newton)
https://bugs.launchpad.net/neutron/+bug/1662804

We’ll also sweep up any new stable point releases across OpenStack Mitaka, Newton and Ocata projects at the same time:

https://bugs.launchpad.net/ubuntu/+bug/1696177

https://bugs.launchpad.net/ubuntu/+bug/1696133

https://bugs.launchpad.net/ubuntu/+bug/1696139

Development Release

x86_64, ppc64el and s390x builds of Ceph 12.0.3 (the current Luminous development release) are available for testing via PPA whilst misc build issues are resolved for the i386 and armhf architectures:

https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2779
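
If you’d like to test these builds, the PPA can be added in the usual way (exact package names may change as the Luminous packaging settles):

sudo add-apt-repository ppa:ci-train-ppa-service/2779
sudo apt update
sudo apt install ceph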

OpenStack Pike b2 was released last week; dependency updates have been uploaded (including 5 new packages) and core project updates are being prepared this week, pending processing of the new packages in Ubuntu Artful development.


OpenStack Snaps

We’re really close to a functional all-in-one OpenStack cloud using the OpenStack snaps – work is underway on the nova-hypervisor snap to resolve some issues with the use of sudo by the nova-compute and neutron-openvswitch daemons. Once this work has landed, expect a fuller update on efforts to date on the OpenStack snaps, and how you can help out with snapping the rest of the OpenStack ecosystem!

If you want to give the current snaps a spin to see what’s possible, check out snap-test.


Nova LXD

Work to support new LXD features allowing multiple storage backends has landed in nova-lxd. Support for LXD storage pools has also been added to the nova-compute and lxd charms.

The Tempest experimental gate is now functional again (hint: use ‘check experimental’ on a Gerrit review). Work is also underway to resolve issues with Neutron linuxbridge compatibility in OpenStack Pike (raised by the OpenStack-Ansible team – thanks!), including adding a new functional gate check for this particular networking option.


OpenStack Charms

Deployment Guide

The charms team will be starting work on the new OpenStack Charms deployment guide in the next week or so; if you’re an OpenStack Charm user and would like to help contribute to a best practice guide covering all aspects of building an OpenStack cloud using MAAS, Juju and the OpenStack Charms, we want to hear from you! Ping jamespage in #openstack-charms on Freenode IRC or attend our weekly meeting to find out more.

Stable bug backports

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop up in the right place in Launchpad) – you can see the current pipeline here.

We’ve had a flurry of stable backports over the last few weeks to fill in the release gap left when the project switched to a 6 month release cadence, so be sure to update and test out the latest versions of the OpenStack charms in the charm store.

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


Ubuntu OpenStack Dev Summary – 22nd May 2017

Welcome to the first ever Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content, please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!


OpenStack Distribution

Stable Releases

Ceph 10.2.7 for Xenial, Yakkety, Zesty and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1684527

Open vSwitch updates (2.5.2 and 2.6.1) for Xenial and Yakkety plus associated UCA pockets:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1673063
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1641956

Point releases for Horizon (9.1.2) and Keystone (9.3.0) for Xenial and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1680098

And the current set of OpenStack Newton point releases have just entered testing:
https://bugs.launchpad.net/cloud-archive/+bug/1688557

Development Release

OpenStack Pike b1 is available in Xenial-Pike UCA (working through proposed testing in Artful).

Open vSwitch 2.7.0 is available in Artful and Xenial-Pike UCA.

Expect some focus on development previews for Ceph Luminous (the next stable release) for Artful and the Xenial-Pike UCA in the next month.


OpenStack Snaps

Progress on producing snap packages for OpenStack components continues; snaps for glance, keystone, nova, neutron and nova-hypervisor are available in the snap store in the edge channel – for example:

sudo snap install --edge --classic keystone

Snaps are currently Ocata aligned; once the team has a set of snaps that we’re all comfortable form a good base, we’ll be working towards publication of snaps across tracks for OpenStack Ocata and OpenStack Pike, as well as expanding the scope of projects covered with snap packages.

The edge channel for each track will contain the tip of the associated branch for each OpenStack project, with the beta, candidate and release channels being reserved for released versions. These three channels will be used to drive the CI process for validation of snap updates. This should result in an experience something like:

sudo snap install --classic --channel=ocata/stable keystone

or

sudo snap install --classic --channel=pike/edge keystone

As the snaps mature, the team will be focusing on enabling deployment of OpenStack using snaps in the OpenStack Charms (which will support CI/CD testing) and migration from deb based installs to snap based installs.


Nova LXD

Support for different Cinder block device backends for Nova-LXD has landed in the driver (and the supporting os-brick library), allowing Ceph Cinder storage backends to be used with LXD containers; this is available in the Pike development release only.

Work on support for new LXD features to allow multiple storage backends to be used is currently in-flight, allowing the driver to use dedicated storage for its LXD instances alongside any use of LXD via other tools on the same servers.


OpenStack Charms

6 monthly release cycle

The OpenStack Charms project is moving to a 6 monthly release cadence (rather than the 3 month cadence we’ve followed for the last few years); this reflects the reduced rate of new features across OpenStack and the charms, and the improved process for backporting fixes to the stable charm set between releases. The next charm release will be in August, aligned with the release of OpenStack Pike and the Xenial-Pike UCA.

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop up in the right place in Launchpad) – you can see the current stable bug pipeline here.

Ubuntu Artful and OpenStack Pike Support

Required changes into the OpenStack Charms to support deployment of Ubuntu Artful (the current development release) and OpenStack Pike are landing into the development branches for all charms, alongside the release of Pike b1 into Artful and the Xenial-Pike UCA.

You can consume these charms (as always) via the ~openstack-charmers-next team, for example:

juju deploy cs:~openstack-charmers-next/keystone

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


OpenStack Charms in Boston

At next week’s OpenStack Summit in Boston, the OpenStack Charms team will be holding an onboarding workshop on Monday at 4:40pm in MR-105.

This is a great opportunity to learn more about the project, both in terms of how to get started using the OpenStack Charms to deploy OpenStack, and how to get involved with the project from a contribution perspective!

Let us know if you’re coming along and what you’d like to get out of the session here.

Looking forward to seeing you all next week!


Ubuntu OpenStack Charms: 15.01 release

The Ubuntu Server team is pleased to announce their first interim release, 15.01, of charm features and fixes for the Ubuntu OpenStack charms for Juju – here are some selected highlights:

Clustering

General improvements have been made to the hacluster charm that we use for clustering OpenStack services; specifically the way quorum is handled in pacemaker and corosync has been improved so that clusters should react more appropriately in situations where one or more units fail.

We’ve also introduced a unicast mode for corosync cluster communication – this is useful in environments where multicast UDP might be disabled; in testing this has also proven much more reliable if you are running services in LXC containers spread across physical servers, and it is the recommended configuration for these types of deployment.
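
Enabling unicast communication is a single configuration change on the hacluster charm; a sketch using the Juju 1.x syntax current at the time – check the charm’s configuration for the exact option and value names in your revision:

# value may be expressed as 'unicast' or 'udpu' depending on charm revision
juju set hacluster corosync_transport=unicast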

Tuning

The ceph, ceph-osd, nova-compute and quantum-gateway charms have all gained a tuning configuration option which allows users to set sysctl options – we’ve provided some best practice defaults in the ceph charms, but this feature will allow expert users to tune Ubuntu to their hearts’ content!
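
The new option takes a YAML map of sysctl keys and values; for example, on the ceph charm (the keys and values shown are illustrative, not recommendations):

juju set ceph sysctl="{ kernel.pid_max: 2097152, vm.swappiness: 1 }"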

High Availability

The ceilometer and ceph-radosgw charms have grown HA support (using the hacluster charm), and the quantum-gateway charm now has a configuration option for Icehouse users to enable a legacy HA mode (again using the hacluster charm) to ensure that routers and networks are recovered onto active gateway nodes in the event that a unit fails.

We’ve also improved the nova-cloud-controller charm so that guest console access can be used in HA deployments by providing a memcached back-end for token storage and sharing between units.

Nova Ceph Storage Support

The nova-compute charm has grown support for different storage back-ends; the first new back-end supported is Ceph, allowing users to use Ceph for default storage of instance root and ephemeral disks.  You’ll want to be running some serious networking to use this feature – remember all those reads and writes will be going over the network!
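
Enabling the Ceph back-end is driven by relating nova-compute to the ceph charm; depending on the charm revision an additional option selects RBD-backed instance storage – the option name below may differ, so check the charm’s configuration for your release:

juju add-relation nova-compute ceph
# option name illustrative; consult the charm config for your release
juju set nova-compute libvirt-image-backend=rbd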

And finally..

You can check out the list of bugs closed and read the full release notes – which contain more detail on these new features!

Thanks go to all the charm contributors:

  • Edward Hope-Morley
  • Billy Olsen
  • Liang Chen
  • Jorge Niedbalski
  • Xiang Hui
  • Felipe Reyes
  • Yaguang Tang
  • Seyeong Kim
  • Jorge Castro
  • Corey Bryant
  • Tom Haddon
  • Brad Marshall
  • Liam Young
  • Ryan Beisner

awesome job guys!

EOM
