Tag Archives: openstack

Winning with OpenStack Upgrades?

On the Monday of the Project Teams Gathering in Dublin, a now somewhat familiar group of developers and operators got together to discuss upgrades – specifically fast forward upgrades, although discussion over the day drifted into rolling upgrades and how to minimize downtime in supporting components as well. This discussion has been a regular feature over the last 18 months at PTGs, Forums and Ops Meetups.

Fast Forward Upgrades?

So what is a fast forward upgrade? A fast forward upgrade takes an OpenStack deployment through multiple OpenStack releases without the requirement to run agents/daemons at each upgrade step; it does not allow you to skip an OpenStack release – the process allows you to just not run a release as you pass through it. This enables operators using older OpenStack releases to catch up with the latest OpenStack release in as short an amount of time as possible, accepting the compromise that the cloud control plane is down during the upgrade process.
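As a purely illustrative sketch (release names, the per-service migration commands and the packaging mechanics all vary by deployment tool; nova-manage is shown as just one example), a fast forward upgrade of an Ubuntu 16.04 LTS control-plane node might walk the Ubuntu Cloud Archive pockets like this, running database migrations at each step but only starting services at the end:

```shell
#!/bin/sh
# Hypothetical fast-forward walk through successive UCA pockets.
# DRY_RUN=1 (the default) prints each command rather than executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

for release in newton ocata pike queens; do
    # Move to the next release's package pocket and upgrade in place.
    run add-apt-repository -y "cloud-archive:${release}"
    run apt-get update
    run apt-get -y dist-upgrade
    # Apply this release's schema migrations without starting services
    # (per-service; nova-manage shown as an example).
    run nova-manage db sync
done

# Services only come back up once the target release is reached.
run systemctl start nova-api
```

The point of the pattern is the loop body: packages and database schema step through every release in turn, but the daemons stay down until the final step.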

This is somewhat adjunct to a rolling upgrade, where access to the control plane of the cloud is maintained during the upgrade process by upgrading units of a specific service individually, and leveraging database migration approaches such as expand/migrate/contract (EMC) to provide as seamless an upgrade process as possible for an OpenStack cloud. In common with fast forward upgrades, releases cannot be skipped.
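Neutron's migration tooling already exposes this split directly; a minimal sketch of the EMC sequence follows (the neutron-db-manage subcommands are real, while the surrounding orchestration and unit names are illustrative):

```shell
#!/bin/sh
# Sketch of an expand/migrate/contract rolling upgrade for Neutron.
# DRY_RUN=1 (the default) echoes the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Phase 1 - expand: additive-only schema changes; old code keeps running.
run neutron-db-manage upgrade --expand

# Phase 2 - migrate: restart each neutron-server unit in turn on new
# code, so the API stays available throughout.
run systemctl restart neutron-server

# Phase 3 - contract: remove the old schema once no old code remains.
run neutron-db-manage upgrade --contract
```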

Both upgrade approaches specifically aim to not disrupt the data plane of the cloud – instances, networking and storage – however this may be unavoidable if components such as Open vSwitch and the Linux kernel need to be upgraded as part of the upgrade process.

Deployment Project Updates

The TripleO team have been working towards fast forward upgrades during the Queens cycle and have a ‘pretty well defined model’ for what they’re aiming for with their upgrade process. They still have some challenges around ordering to minimize downtime specifically around Linux and OVS upgrades.

The OpenStack Ansible team gave an update – they have a concept of ‘leap upgrades’ which is similar to fast-forward upgrades – this work appears to lag behind the main upgrade path for OSA, which is a rolling upgrade approach which aims to be 100% online.

The OpenStack Charms team still continue to have a primary upgrade focus on rolling upgrades, minimizing downtime as much as possible for both the control and data plane of the Cloud. The primary focus for this team right now is supporting upgrades of the underlying Ubuntu OS between LTS releases with the imminent release of 18.04 on the horizon in April 2018, so no immediate work is planned on adopting fast-forward upgrades.

The Kolla team also have a primary focus on rolling upgrades, for which support starts at OpenStack Queens or later. There was some general discussion around automated configuration generation using Oslo to ease migration between OpenStack releases.

No one was present to represent the OpenStack Helm team.

Keeping Networking Alive

Challenges around keeping the Neutron data-plane alive during an upgrade were discussed – this included:

  • Minimising Open vSwitch downtime by saving and restoring flows.
  • Use of the ‘neutron-ha-tool’ from AT&T to manage routers across network nodes during an OpenStack cloud upgrade – there was also a bit of bike shedding on approaches to Neutron router HA in larger clouds. Plans are afoot to make this part of the Neutron code base.
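A minimal sketch of the flow save/restore approach (the bridge name and file locations are assumptions; production tooling also needs to handle OpenFlow versions and groups):

```shell
#!/bin/sh
# Preserve OpenFlow rules across an Open vSwitch daemon restart/upgrade.

save_flows() {
    # Strip the NXST_FLOW header line so the dump can be replayed as-is.
    ovs-ofctl dump-flows "$1" | grep -v 'NXST_FLOW' > "/tmp/$1.flows"
}

restore_flows() {
    # add-flows replays the whole saved table in one go, keeping the
    # window with an empty flow table as short as possible.
    ovs-ofctl add-flows "$1" "/tmp/$1.flows"
}

# Typical use around an upgrade:
#   save_flows br-int
#   ... upgrade/restart openvswitch-switch ...
#   restore_flows br-int
```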

Ceph Upgrades

We had a specific slot to discuss upgrading Ceph as part of an OpenStack Cloud upgrade; some deployment projects upgrade Ceph first (Charms), some last (TripleO), but there was general agreement that Ceph upgrades are pretty much always a rolling upgrade – i.e. no disruption to the storage services being provided. Generally there seems to be less pain in this area so it was not a long session.
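For reference, the rolling pattern generally looks like the following sketch (ordering per the upstream Ceph upgrade notes; treat it as an outline, not a runbook):

```shell
#!/bin/sh
# Rolling Ceph upgrade outline: monitors first, then OSDs, with 'noout'
# set so restarting OSDs are not marked out and rebalanced.
# DRY_RUN=1 (the default) echoes the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Avoid needless data movement while daemons restart.
run ceph osd set noout

# After upgrading packages, restart each monitor in turn (waiting for
# quorum between restarts), then each OSD host in turn.
run systemctl restart ceph-mon.target
run systemctl restart ceph-osd.target

# Back to normal service: allow out-marking again.
run ceph osd unset noout
```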

Operator Feedback

A number of operators shared experiences of walking their OpenStack deployments through fast forward upgrades including some of the gotchas and trip hazards encountered.

Oath provided a lot of feedback on their experience of fast-forward upgrading their cloud from Juno to Ocata, which included some increased complexity due to the move to using cells internally for Ocata. Ensuring compatibility between OpenStack and supporting projects was one challenge encountered – for example, snapshots worked fine with Juno and Libvirt 1.5.3, however on upgrade live snapshots were broken until Libvirt was upgraded to 2.9.0. Not all test combinations are covered in the gate!

Some of these have been shared on the OpenStack Wiki.

Upgrade SIG

Upgrade discussion has become a regular fixture at PTGs, Forums, Summits and Meetups over the last few years; getting it right is tricky, and the general feeling in the session was that this is something that we should talk about more between events.

The formation of an Upgrade SIG was proposed and supported by key participants in the session. The objective of the SIG is to improve the overall upgrade process for OpenStack Clouds, covering both offline ‘fast-forward’ and online ‘rolling’ upgrades by providing a forum for cross-project collaboration between operators and developers to document and codify best practice for upgrading OpenStack.

The SIG will initially be led by Lujin Luo (Fujitsu), Lee Yarwood (Red Hat) and myself (Canonical) – we’ll be sorting out the schedule for bi-weekly IRC meetings in the next week or so – OpenStack operators and developers from across all projects are invited to participate in the SIG and help move OpenStack life cycle management forward!


Ubuntu OpenStack Dev Summary – 18th December 2017

Welcome to the Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRUs for OpenStack related packages:

  • Ceph 12.2.1
  • Open vSwitch 2.8.1
  • nova-novncproxy process gets wedged, requiring kill -HUP
  • Horizon Cinder Consistency Groups

Recently released SRUs for OpenStack related packages:

  • Percona XtraDB Cluster Security Updates
  • Pike Stable Releases
  • Ocata Stable Releases
  • Ceph 10.2.9

Development Release

Since the last dev summary, OpenStack Queens Cloud Archive pockets have been set up and have received package updates for the first and second development milestones – you can install them on Ubuntu 16.04 LTS using:

sudo add-apt-repository cloud-archive:queens[-proposed]

OpenStack Queens will also form part of the Ubuntu 18.04 LTS release in April 2018, so alternatively you can try out OpenStack Queens using Ubuntu Bionic directly.

You can always test with up-to-date packages built from project branches from the Ubuntu OpenStack testing PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/queens

Nova LXD

No significant feature work to report on since the last dev summary.

The OpenStack Ansible team have contributed an additional functional gate for nova-lxd – it’s currently non-voting, but does provide some additional testing feedback for nova-lxd developers during the code review process. If it proves stable and useful, we’ll make this a voting check/gate.

OpenStack Charms

Ceph charm migration

Since the last development summary, the Charms team released the 17.11 set of stable charms; this includes a migration path for users of the deprecated ceph charm to ceph-mon and ceph-osd. For full details on this process, check out the charm deployment guide.

Queens Development

As part of the 17.11 charm release a number of charms switched to execution of charm hooks under Python 3 – this includes the nova-compute, neutron-{api,gateway,openvswitch}, ceph-{mon,osd} and heat charms;  once these have had some battle testing, we’ll focus on migrating the rest of the charm set to Python 3 as well.

Charm changes to support the second Queens milestone (mainly in ceilometer and keystone) and Ubuntu Bionic are landing into charm development to support ongoing testing during the development cycle. OpenStack Charm deployments for Queens and later will default to using the Keystone v3 API (v2 has been removed as of Queens). Telemetry users must deploy Ceilometer with Gnocchi and Aodh, as the Ceilometer API has now been removed from charm based deployments and from the Ceilometer codebase. You can install the current tip of charm development using the openstack-charmers-next prefix for charm store URLs – for example:

juju deploy cs:~openstack-charmers-next/neutron-api

ZeroMQ support has been dropped from the charms; with no known users, no functional testing in the gate and deprecation warnings already issued in release notes, it was time to drop the associated code from the code base. PostgreSQL and deploy-from-source support are also expected to be removed from the charms this cycle.

You can read the full list of specs currently scheduled for Queens here.

Releases

The last stable charm release went out at the end of November including the first stable release of the Gnocchi charm – you can read the full details in the release notes.  The next stable charm release will take place in February alongside OpenStack Queens, with a release shortly after the Ubuntu 18.04 LTS release in May to sweep up any pending LTS support and fixes needed.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.  The next IRC meeting will be on the 8th of January at 1700 UTC.

And finally – Merry Christmas!

EOM

 


Ubuntu OpenStack Dev Summary – 9th October 2017

Welcome to the seventh Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRUs for OpenStack related packages:

  • Ceph 10.2.9 point release
  • Ocata Stable Point Releases
  • Pike Stable Point Releases
  • Horizon Newton->Ocata upgrade fixes

Recently released SRUs for OpenStack related packages:

  • Newton Stable Point Releases

Development Release

OpenStack Pike was released in August and is installable on Ubuntu 16.04 LTS using the Ubuntu Cloud Archive:

sudo add-apt-repository cloud-archive:pike

OpenStack Pike also forms part of the Ubuntu 17.10 release later this month; final charm testing is underway in preparation for full Artful support for the charm release in November.

We’ll be opening the Ubuntu Cloud Archive for OpenStack Queens in the next two weeks; the first uploads will be the first Queens milestones, which will coincide nicely with the opening of the next Ubuntu development release (which will become Ubuntu 18.04 LTS).

OpenStack Snaps

The main focus in the last few weeks has been on testing of the gnocchi snap, which is currently installable from the edge channel:

sudo snap install --edge gnocchi

The gnocchi snap provides the gnocchi-api (nginx/uwsgi deployed) and gnocchi-metricd services; due to some incompatibilities between gnocchi/cradox/python-rados the snap is currently based on the 3.1.11 release. Hopefully we’ll work through the issues with the 4.0.x release in the next week or so, as well as setting up multiple tracks for this snap so you can consume a version known to be compatible with a specific OpenStack release.
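Once tracks are set up, the intent is that you can pin to a series-compatible version; the track name below is an assumption for illustration only:

```shell
#!/bin/sh
# DRY_RUN=1 (the default) echoes the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Today: the development build from the edge channel (3.1.11 based).
run snap install --edge gnocchi

# Hypothetical, once per-series tracks exist: pick the track known to
# be compatible with your OpenStack release.
run snap install --channel=3.1/stable gnocchi
```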

Nova LXD

The team is currently planning work for the Queens development cycle; pylxd has received a couple of new features – specifically support for storage pools as provided in newer LXD versions, and streaming of image uploads to LXD which greatly reduces the memory footprint of client applications during uploads.

OpenStack Charms

Queens Planning

Out of the recent Queens PTG, we have a number of feature specs landed in the charms specification repository. There are a few more in the review queue; if you’re interested in plans for the Queens release of the charms next year, this is a great place to get a preview and provide the team feedback on the features that are planned for development.

Deployment Guide

The first version of the new Charm Deployment Guide has now been published to the OpenStack Docs website; we have a small piece of follow-up work to complete to ensure it’s published alongside other deployment project guides, but hopefully that should wrap up in the next few days. Please give the guide a spin and log any bugs that you might find!

Bugs

Over the last few weeks there has been an increased level of focus on the current bug triage queue for the charms; from a peak of 600 open bugs two weeks ago, with around 100 pending triage, we’ve closed out 70 bugs and the triage queue is down to a much more manageable level. The recently introduced bug triage rota has helped with this effort and should ensure we keep on top of incoming bugs in the future.

Releases

In the run-up to the August charm release, a number of test scenarios which required manual execution were automated as part of the release testing activity; this automation work reduces the effort to produce the release, and means that the majority of test scenarios can be run on a regular basis. As a result, we’re going to move back to a three month release cycle; the next charm release will be towards the end of November after the OpenStack summit in Sydney.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM

 


OpenStack Charms @ Denver PTG

Last week, a number of the OpenStack Charms team and I had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross project discussions, with the last three days focused on project specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators. What was immediately evident was that this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade. Deployments would still need to step through each OpenStack release series in turn, so the discussion centred around how to make this much easier for operators and deployment tools to consume than it has been to date.

There was general agreement on the principle that all steps required to update a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running; this would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it’s never been agreed as a general policy as part of the ‘supports-upgrade‘ tag, which was one of the actions resulting from this discussion.

In the context of the OpenStack Charms, we already follow something along these lines for minimising the amount of service disruption in the control plane during OpenStack upgrades; with implementation of this approach across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role based access into API endpoints – for example, what operations require admin level permissions for the cloud. Moving all policy default definitions to code rather than in a configuration file is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release.  Notably Nova and Keystone have already moved to this approach during previous development cycles.
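To make that concrete, with defaults in code the on-disk policy file only needs to carry the operator's delta; the override below is illustrative, not a recommendation:

```shell
#!/bin/sh
# With policy-in-code, a file like /etc/nova/policy.json can be reduced
# to just the rules a deployment actually changes; everything not listed
# falls back to the in-code defaults. Writing to /tmp here so the sketch
# is side-effect free; the rule/role shown are illustrative.
cat > /tmp/policy.json <<'EOF'
{
    "os_compute_api:os-hypervisors": "role:operator"
}
EOF
```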

Deployment (SIG)

During the first two days, some cross deployment tool discussions were held for a variety of topics; of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on – for example, can Nova access Keystone’s API for authentication of tokens etc. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time; the Keystone v2 API is officially deprecated, so we discussed the approach for switching the default API deployed by the charms going forward; in summary:

  • New deployments should default to the v3 API and associated policy definitions
  • Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact of services built around v2 based deployments already in production.
  • The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so.

At some point in time, we’ll have to automatically switch v2 deployments to v3 on OpenStack series upgrade, but that does not have to happen yet.

Keystone: Fernet Token support

The charms currently only support UUID based tokens (since PKI was dropped from Keystone); the preferred format is now Fernet, so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.
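keystone-manage already provides the key-management primitives this would build on; the commands are real, while the user/group values assume the Ubuntu packaging defaults:

```shell
#!/bin/sh
# Fernet key lifecycle with keystone-manage.
# DRY_RUN=1 (the default) echoes the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# One-off: create the initial Fernet key repository.
run keystone-manage fernet_setup \
    --keystone-user keystone --keystone-group keystone

# Periodic (e.g. from cron): rotate keys; previously issued tokens stay
# valid until their signing key ages out of the repository.
run keystone-manage fernet_rotate \
    --keystone-user keystone --keystone-group keystone
```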

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch – which right now is stable/17.08. At the point of the next release, the stable/17.08 branch is no longer maintained, being superseded by the new stable/XX.XX branch. This is reflected in the promulgated charms in the Juju charm store as well. Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model, which is not that elegant but would work. Maintaining more branches means more resource effort for cherry-picks and reviews, which is not feasible with the current amount of time the development team has for these activities, so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on underlying control plane infrastructure as daemons drop and re-connect to message and database services potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up with focus on how we could implement a feature which batched up restarts of services into chunks based on a user provided configuration option.
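The batching idea itself is simple; here is a pure-shell sketch over an illustrative unit list (the restart action is just echoed, and the chunk size and pause are hypothetical configuration options):

```shell
#!/bin/sh
# Restart units in chunks of BATCH, pausing between chunks so the
# message/database tier sees a bounded number of simultaneous reconnects.
BATCH=${BATCH:-2}
PAUSE=${PAUSE:-0}

restart_in_batches() {
    i=0
    for unit in "$@"; do
        echo "restarting $unit"   # stand-in for the real restart action
        i=$((i + 1))
        if [ $((i % BATCH)) -eq 0 ]; then
            sleep "$PAUSE"        # let connections re-establish
        fi
    done
}

# Example: restart_in_batches nova-compute/0 nova-compute/1 nova-compute/2
```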

You can read the full details in the proposed specification for this work.

We also had some good conversation around how unit level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe its causing problems) without having to restart services across all units to support this.  This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

  • Multiple Region Cloud Deployments
    • Keystone + MySQL and Dashboard in one model (supporting all regions)
    • Each region (including region specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DCs.
  • Keystone Federation Support
    • Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements made with good baseline metrics on charm hook execution times on reference hardware deployments so that any proposed changes to charms can be assessed in terms of positive or negative impact on individual unit hook execution time and overall deployment duration – so expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP.  This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle.

Good progress was made in most areas with some reviews already up.

We had a good turnout with 10 charm developers in the devroom – thanks to everyone who attended and a special call-out to Billy Olsen who showed up with team T-Shirts for everyone!

We have some new specs already up for review, and I expect to see a few more over the next two weeks!

EOM


OpenStack Charms 17.08 release!

The OpenStack Charms team is pleased to announce that the 17.08 release of the OpenStack Charms is now available from jujucharms.com!

In addition to 204 bug fixes across the charms and support for OpenStack Pike, this release includes a new charm for Gnocchi, support for Neutron internal DNS, Percona Cluster performance tuning and much more.

For full details of all the new goodness in this release please refer to the release notes.

Thanks go to the following people who contributed to this release:

Nobuto Murata
Mario Splivalo
Ante Karamatić
zhangbailin
Shane Peters
Billy Olsen
Tytus Kurek
Frode Nordahl
Felipe Reyes
David Ames
Jorge Niedbalski
Daniel Axtens
Edward Hope-Morley
Chris MacNaughton
Xav Paice
James Page
Jason Hobbs
Alex Kavanagh
Corey Bryant
Ryan Beisner
Graham Burgess
Andrew McLeod
Aymen Frikha
Hua Zhang
Alvaro Uría
Peter Sabaini

EOM

 

 


OpenStack Pike for Ubuntu 16.04 LTS

Hi All,

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Pike for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive. Details of the Pike release can be found in the OpenStack release notes for Pike.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:

sudo add-apt-repository cloud-archive:pike
sudo apt update

The Ubuntu Cloud Archive for Pike includes updates for:

aodh, barbican, ceilometer, ceph (12.2.0 Luminous), cinder, congress, designate, designate-dashboard, dpdk (17.05.1), glance, gnocchi, heat, horizon, ironic, libvirt (3.6.0), keystone, magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, nova, nova-lxd, openstack-trove, openvswitch (2.8.0 pre-release), panko, qemu (2.10), sahara, sahara-dashboard, senlin, swift, trove-dashboard, watcher and zaqar

Open vSwitch will be updated to the 2.8.0 release as soon as it’s available.

For a full list of packages and versions, please refer to the Pike UCA version tracker.

Branch Package Builds

If you would like to try out the latest updates to git branches, we deliver continuously integrated packages on each upstream commit via the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Reporting Bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thanks to everyone who has contributed to OpenStack Pike, both upstream and downstream!

Have fun and see you all for Queens!

Regards,

James

(on behalf of the Ubuntu OpenStack team)


Ubuntu OpenStack Dev Summary – 31st August 2017

Welcome to the sixth Ubuntu OpenStack development summary!

Firstly, apologies for the lack of an update two weeks ago; your author’s attempt to schedule publication whilst he was on holiday failed miserably – so this edition is a bit of a wrap-up of the last 3-4 weeks of activities.

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRUs for OpenStack related packages:

Nova: Incorrect host CPU is given to emulator threads when cpu_realtime_mask flag is set
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1614054

Recently released SRUs for OpenStack related packages:

python-cinderclient: enable use with webob 1.6.0
https://bugs.launchpad.net/ubuntu/+bug/1559072

python-openstackclient: enable network commands without region and 2.3.1 point release
https://bugs.launchpad.net/ubuntu/+bug/1703372
https://bugs.launchpad.net/ubuntu/+bug/1570491

Nova: nova-api <-> nova-placement-api install-ability
https://bugs.launchpad.net/bugs/1700677

Neutron: router host binding id not updated after failover
https://bugs.launchpad.net/ubuntu/+bug/1694337

Newton point releases
https://bugs.launchpad.net/cloud-archive/+bug/1705176

Keystone: keystone-manage mapping_engine federation rule testing
https://bugs.launchpad.net/ubuntu/+bug/1655182

Ocata point releases
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1706297

Expect new point releases for Ceph (10.2.9) and Open vSwitch (2.5.3) for the next SRU cycle in September.

Development Release

OpenStack Pike release packages should be available in the Ubuntu Cloud Archive and in Ubuntu Artful this week, along with Ceph Luminous 12.2.0, Open vSwitch 2.8.0~ and updates to the latest libvirt and qemu versions; you can test with them today in the proposed testing area:

sudo add-apt-repository cloud-archive:pike-proposed

We’ll make a more detailed release announcement once final testing has been completed and package updates are available in the -updates pocket.

Remember that it’s also possible to consume OpenStack packages built from the tip of the upstream git repository master branches using:

sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Packages are automatically built for Artful and Xenial.

OpenStack Snaps

We’ve been polishing the snapstack testing tool, working on decoupling it from the snap-test scripts, in preparation for deprecating snap-test. We’ve also improved its performance and reliability when running in the OpenStack Gerrit gate by switching over to using smaller, lighter cirros images, and using tarball URLs that route through the zuul reverse proxy whenever possible. We’ve also added some features to support better use of snapstack from behind proxies.

Work has also begun on the Gnocchi snap, which will be used in the upcoming Gnocchi charm!

Nova LXD

The Pike release of nova-lxd (16.0.0) was made on the 30th of August; this will be available in Ubuntu Artful and the Pike Cloud Archive for Ubuntu 16.04 LTS.

OpenStack Charms

Pike Release

We’re right on top of the release of OpenStack Pike – the charm release will happen the week after the main OpenStack release on the 7th of September.  Feature freeze was on the 24th August so development has shifted away from feature work towards working through the bug backlog.  Look for more details in the release notes for the Charm release next week.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM

 


Ubuntu OpenStack Pike Milestone 2

The Ubuntu OpenStack team is pleased to announce the general availability of the OpenStack Pike b2 milestone in Ubuntu 17.10 and for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:

sudo add-apt-repository cloud-archive:pike
sudo apt update

The Ubuntu Cloud Archive for Pike includes updates for Barbican, Ceilometer, Cinder, Congress, Designate, Glance, Heat, Horizon, Ironic, Keystone, Manila, Murano, Neutron, Neutron FWaaS, Neutron LBaaS, Neutron VPNaaS, Neutron Dynamic Routing, Networking OVN, Networking ODL, Networking BGPVPN, Networking Bagpipe, Networking SFC, Nova, Sahara, Senlin, Trove, Swift, Mistral, Zaqar, Watcher, Rally and Tempest.

We’ve also now included GlusterFS 3.10.3 in the Ubuntu Cloud Archive in order to provide new stable releases back to Ubuntu 16.04 LTS users in the context of OpenStack.

You can see the full list of packages and versions here.
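If you’d rather check from the command line, apt can show which version of a given package the cloud archive will deliver – a quick sketch, using nova-common purely as an illustrative package choice:

```shell
# With cloud-archive:pike enabled, show the candidate version of a
# package and which archive pocket it comes from (example package):
apt policy nova-common
```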

Ubuntu 17.10

No extra steps required; just start installing OpenStack!

Branch Package Builds

If you want to try out the latest master branch updates, or updates to stable branches, we are maintaining continuously integrated packages in the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Bear in mind that these packages are built per commit (with checks for new commits every 30 minutes at the moment), so your mileage may vary from time to time.

Reporting bugs

If you hit any issues, please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

This will ensure that bugs get logged in the right place in Launchpad.

Still to come…

In terms of general expectations for the OpenStack Pike release in August, we’ll be aiming to include Ceph Luminous (the next stable Ceph release) and Open vSwitch 2.8.0, so long as the release schedule timing between projects works out OK.

And finally – if you’re interested in the general stats – Pike b2 involved 77 package uploads, including 4 new packages for new Python module dependencies!

Thanks and have fun!

James


Ubuntu OpenStack Dev Summary – 12th June 2017

Welcome to the second Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!


OpenStack Distribution

Stable Releases

The current set of OpenStack Newton point releases is now available:

https://bugs.launchpad.net/cloud-archive/+bug/1688557

The next cadence cycle of stable fixes is underway – the current candidate list includes:

Cinder: RBD calls block entire process (Kilo)
https://bugs.launchpad.net/cinder/+bug/1401335

Cinder: Upload to image does not copy os_type property (Kilo)
https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1692446

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)
https://bugs.launchpad.net/ubuntu/trusty/+source/swift/+bug/1683076

Neutron: Router HA creation race (Mitaka, Newton)
https://bugs.launchpad.net/neutron/+bug/1662804

We’ll also sweep up any new stable point releases across OpenStack Mitaka, Newton and Ocata projects at the same time:

https://bugs.launchpad.net/ubuntu/+bug/1696177

https://bugs.launchpad.net/ubuntu/+bug/1696133

https://bugs.launchpad.net/ubuntu/+bug/1696139

Development Release

x86_64, ppc64el and s390x builds of Ceph 12.0.3 (the current Luminous development release) are available for testing via PPA whilst misc build issues are resolved with i386 and armhf architectures:

https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/2779
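If you’d like to test these builds, the PPA can be enabled in the usual way; note that the short PPA name below is inferred from the archive URL above, so double-check it against the PPA page:

```shell
# Enable the Ceph Luminous test PPA (name inferred from the URL above)
sudo add-apt-repository ppa:ci-train-ppa-service/2779
sudo apt update
```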

OpenStack Pike b2 was out last week; dependency updates have been uploaded (including 5 new packages) and core project updates are being prepared this week, pending processing of new packages in Ubuntu Artful development.


OpenStack Snaps

We’re really close to a functional all-in-one OpenStack cloud using the OpenStack snaps – work is underway on the nova-hypervisor snap to resolve some issues with the use of sudo by the nova-compute and neutron-openvswitch daemons. Once this work has landed, expect a fuller update on efforts to date on the OpenStack snaps, and how you can help out with snapping the rest of the OpenStack ecosystem!

If you want to give the current snaps a spin to see what’s possible checkout snap-test.


Nova LXD

Work on support for new LXD features to allow multiple storage backends has been landed into nova-lxd. Support for LXD using storage pools has also been added to the nova-compute and lxd charms.

The Tempest experimental gate is now functional again (hint: use ‘check experimental’ on a Gerrit review). Work is also underway to resolve issues with Neutron linuxbridge compatibility in OpenStack Pike (raised by the OpenStack-Ansible team – thanks!), including adding a new functional gate check for this particular networking option.


OpenStack Charms

Deployment Guide

The charms team will be starting work on the new OpenStack Charms deployment guide in the next week or so; if you’re an OpenStack Charm user and would like to help contribute to a best practice guide to cover all aspects of building an OpenStack cloud using MAAS, Juju and the OpenStack Charms we want to hear from you!  Ping jamespage in #openstack-charms on Freenode IRC or attend our weekly meeting to find out more.

Stable bug backports

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop up in the right place in Launchpad) – you can see the current pipeline here.

We’ve had a flurry of stable backports over the last few weeks to fill in the release gap left when the project switched to a 6 month release cadence, so be sure to update and test out the latest versions of the OpenStack charms in the charm store.

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


Ubuntu OpenStack Dev Summary – 22nd May 2017

Welcome to the first ever Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!


OpenStack Distribution

Stable Releases

Ceph 10.2.7 for Xenial, Yakkety, Zesty and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1684527

Open vSwitch updates (2.5.2 and 2.6.1) for Xenial and Yakkety plus associated UCA pockets:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1673063
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1641956

Point releases for Horizon (9.1.2) and Keystone (9.3.0) for Xenial and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1680098

And the current set of OpenStack Newton point releases has just entered testing:
https://bugs.launchpad.net/cloud-archive/+bug/1688557

Development Release

OpenStack Pike b1 is available in Xenial-Pike UCA (working through proposed testing in Artful).

Open vSwitch 2.7.0 is available in Artful and Xenial-Pike UCA.

Expect some focus on development previews for Ceph Luminous (the next stable release) for Artful and the Xenial-Pike UCA in the next month.


OpenStack Snaps

Progress on producing snap packages for OpenStack components continues; snaps for glance, keystone, nova, neutron and nova-hypervisor are available in the snap store in the edge channel – for example:

sudo snap install --edge --classic keystone

Snaps are currently Ocata aligned; once the team has a set of snaps that we’re all comfortable are a good base, we’ll be working towards publication of snaps across tracks for OpenStack Ocata and OpenStack Pike, as well as expanding the scope of projects covered with snap packages.

The edge channel for each track will contain the tip of the associated branch for each OpenStack project, with the beta, candidate and release channels being reserved for released versions. These three channels will be used to drive the CI process for validation of snap updates. This should result in an experience something like:

sudo snap install --classic --channel=ocata/stable keystone

or

sudo snap install --classic --channel=pike/edge keystone

As the snaps mature, the team will be focusing on enabling deployment of OpenStack using snaps in the OpenStack Charms (which will support CI/CD testing) and migration from deb based installs to snap based installs.
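Once per-release tracks are published, moving an installed snap between tracks or risk levels should be a matter of a snap refresh – a sketch, assuming the keystone snap and the track names shown above:

```shell
# Move an installed keystone snap to the pike/edge channel
# (assumes the pike track has been published in the store)
sudo snap refresh --classic --channel=pike/edge keystone
```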


Nova LXD

Support for different Cinder block device backends for Nova-LXD has landed in the driver (and the supporting os-brick library), allowing Ceph Cinder storage backends to be used with LXD containers; this is available in the Pike development release only.

Work on support for new LXD features to allow multiple storage backends to be used is currently in-flight, allowing the driver to use dedicated storage for its LXD instances alongside any use of LXD via other tools on the same servers.


OpenStack Charms

6 monthly release cycle

The OpenStack Charms project is moving to a 6 monthly release cadence (rather than the 3 month cadence we’ve followed for the last few years); this reflects the reduced rate of new features across OpenStack and the charms, and the improved process for backporting fixes to the stable charm set between releases. The next charm release will be in August, aligned with the release of OpenStack Pike and the Xenial-Pike UCA.

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop up in the right place in Launchpad) – you can see the current stable bug pipeline here.

Ubuntu Artful and OpenStack Pike Support

Required changes into the OpenStack Charms to support deployment of Ubuntu Artful (the current development release) and OpenStack Pike are landing into the development branches for all charms, alongside the release of Pike b1 into Artful and the Xenial-Pike UCA.

You can consume these charms (as always) via the ~openstack-charmers-next team, for example:

juju deploy cs:~openstack-charmers-next/keystone

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM
