Liam Young wrote a blog post a few months ago about how to enable OpenStack guest console support and noted the feature was coming in the next charm release. This feature landed in our stable charms in October. If you are wondering how it’s done, check out Liam’s blog post – http://blog.gnuoy.eu/2014/09/openstack-guest-console-access-with-juju.html
Our very own James Page blogged about Kilo-1 availability for Vivid and Trusty (via the Ubuntu Cloud Archive). If you are interested in checking out the current OpenStack development release on Ubuntu, this is for you. Enjoy!
We (the Canonical OIL dev team) are about to finish the production roll out of our OpenStack Interoperability Lab (OIL). It’s been an awesome time getting here so I thought I would take the opportunity to get everyone familiar, at a high level, with what OIL is and some of the cool technology behind it.
So what is OIL?
For starters, OIL is essentially continuous integration of the entire stack, from hardware preparation, to Operating System deployment, to orchestration of OpenStack and third party software, all while running specific tests at each point in the process. All test results and CI artifacts are centrally stored for analysis and monthly report generation.
Typically, setting up a cloud (particularly OpenStack) for the first time can be frustrating and time consuming. The potential combinations and permutations of hardware/software components and configurations can quickly become mind-numbing. To help ease the process and provide stability across options we sought to develop an interoperability test lab to vet as much of the ecosystem as possible.
To accomplish this we developed a CI process for building and tearing down entire OpenStack deployments in order to validate every step in the process and to make sure it is repeatable. The OIL lab comprises a pool of machines (including routers/switches, storage systems, and compute servers) from a large number of partners. We continually pull available nodes from the pool, set up the entire stack, go to town testing, and then tear it all back down again. We do this so often that we are already deploying around 50 clouds a day and expect to scale this by a factor of 3-4 with our production roll-out. Generally, each cloud is composed of about 5-7 machines, but we have the ability to scale each test as well.
But that’s not all: in addition to testing, we also do bug triage and defect analysis, and work both internally and with our partners on fixing as many things as we can – all to ensure that deploying OpenStack on Ubuntu is as seamless a process as possible for users and vendors alike.
We didn’t want to reinvent the wheel, so we are leveraging the latest Ubuntu technologies as well as some standard tools to do all of this. In fact, the majority of the OIL infrastructure is public code you can get and start playing with right away!
Here is a small list of what we are using for all this CI goodness:
- MaaS — to do the base OS install
- Juju — for all the complicated OpenStack setup steps — and linking them together
- Tempest — the standard test suite that pokes and prods OpenStack to ensure everything is working
- Machine selection & random config generation code — to make sure we get a good hardware/software cross-section
- Jenkins — gluing everything together
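To give a flavour of what the machine selection and random config generation piece might look like, here is a minimal sketch. The hardware pool names and configuration options below are purely illustrative, not OIL’s actual inventory or test matrix:

```python
import random

# Illustrative pools; the real OIL lab draws these from its partner
# hardware inventory and the supported OpenStack configuration matrix.
HARDWARE_POOL = ["dell-r610", "hp-dl360", "ibm-x3550", "cisco-c220"]
CONFIG_OPTIONS = {
    "hypervisor": ["kvm", "lxc"],
    "network": ["nova-network", "neutron-ovs"],
    "storage": ["ceph", "swift", "iscsi"],
}

def pick_deployment(seed=None, cloud_size=5):
    """Select machines and one random config combination for a test cloud."""
    rng = random.Random(seed)  # seeded so a failing combo can be replayed
    machines = rng.sample(HARDWARE_POOL, k=min(cloud_size, len(HARDWARE_POOL)))
    config = {option: rng.choice(values)
              for option, values in CONFIG_OPTIONS.items()}
    return machines, config

machines, config = pick_deployment(seed=42, cloud_size=3)
print(machines, config)
```

Seeding the selection is what makes a run repeatable: any failing hardware/software combination can be reproduced exactly from its seed before the cloud is torn down and rebuilt.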
Using all of this we are able to manage our hardware effectively, and with a similar setup you easily can too. This is just a high-level overview, so we will have to leave the in-depth technical discussions for another time.
More to come
We plan on having a few more blog posts covering some of the more interesting aspects (both results we are getting from OIL and some of the underlying technology).
We are getting very close to OIL’s official debut and are excited to start publishing some really insightful data.
The Server team just finished up the second day of the Ubuntu Developer Summit (UDS) March 2014 (see http://summit.ubuntu.com/uds-1403/track/servercloud/ for the Server track). I may be biased (well, actually, I know I am), but I think it’s been very interesting – lots of good, thoughtful discussions around where we are and where we’re heading. Check out the videos and let us know what you think.
One video from today’s UDS sessions that I wanted to highlight specifically is the demo Robie Basak gave of uvtool. Uvtool is, as Robie explains in the video, a very simple tool for setting up KVM guests – he calls it the glue that brings together several existing tools. Just go watch it – http://www.youtube.com/embed/Ue0C2ssp450 – and then go try it.
In December, Serge did a writeup on uvtool, I think that’s worth a read also – http://s3hh.wordpress.com/2013/12/12/quickly-run-ubuntu-cloud-images-locally-using-uvtool/
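If you want to try it straight away, the basic flow looks something like this (a sketch assuming uvtool and its libvirt dependencies are already installed; commands as of the trusty-era packages):

```shell
# Sync the trusty amd64 cloud image into the local libvirt image pool
uvt-simplestreams-libvirt sync release=trusty arch=amd64

# Create a guest from that image, wait for it to finish booting, then SSH in
# (--insecure skips SSH host key checking, fine for a fresh local guest)
uvt-kvm create myvm release=trusty
uvt-kvm wait myvm --insecure
uvt-kvm ssh myvm --insecure
```

The whole point, as Robie says, is that each of these steps just drives existing tools (simplestreams, libvirt, cloud-init) rather than reimplementing them.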
Anyhow, time to prepare for the last day of UDS. Enjoy!
We are looking for two fabulous Software Engineers to join the Ubuntu Server team. Check out the individual job listings for more details:
Think you’ve got what it takes? Apply!
- #ubuntu-meeting: ubuntu-server-team, 18 Feb at 16:03 — 16:38 UTC
- Full logs and further details at https://wiki.ubuntu.com/MeetingLogs/Server/20140218
- Review ACTION points from previous meeting
- T Development
- Server & Cloud Bugs (caribou)
- Weekly Updates & Questions for the QA Team (psivaa)
- Weekly Updates & Questions for the Kernel Team (smb, sforshee)
- Weekly Updates & Questions regarding Ubuntu ARM Server (rbasak)
- Ubuntu Server Team Events
- Open Discussion
- Announce next meeting date, time and chair
This week’s meeting had a focus on addressing items needed before Feature Freeze on Feb 20. This included conversations around high/essential bugs, red high/essential blueprints, and test failures.
Specific bugs discussed in this week’s meeting were:
- 1248283 in juju-core (Ubuntu Trusty) “juju userdata should not restart networking” [High,Triaged] https://launchpad.net/bugs/1248283
- 1278897 in dovecot (Ubuntu Trusty) “dovecot warns about moved ssl certs on upgrade” [High,Triaged] https://launchpad.net/bugs/1278897
- 1259166 in horizon (Ubuntu Trusty) “Fix lintian error” [High,Triaged]
- 1273877 in neutron (Ubuntu Trusty) “neutron-plugin-nicira should be renamed to neutron-plugin-vmware” [High,Triaged]
Specific Blueprints discussed:
- curtin, openstack charms, ceph, mysql alt, cloud-init, openstack (general)
Meeting closed with the announcement that Marco and Jorge will be at SCALE12x giving a talk, so be sure to stop by if you are going to be at SCALE.
Review ACTION points from previous meeting
The discussion about “Review ACTION points from previous meeting” started at 16:04.
16:06 <arosales> gaughen follow up with jamespage on bug 1243076
16:06 <ubottu> bug 1243076 in mod-auth-mysql (Ubuntu Trusty) “libapache2-mod-auth-mysql is missing in 13.10 amd64” [High,Won’t fix] https://launchpad.net/bugs/1243076
16:09 <jamespage> not got to that yet
16:10 <jamespage> working on a few pre-freeze items first
16:10 <arosales> ack I’ll take its appropriately on your radar –thanks
16:10 <jamespage> it is
16:06 <arosales> gaughen follow up on dbus task for bug 1248283
16:06 <ubottu> bug 1248283 in juju-core (Ubuntu Trusty) “juju userdata should not restart networking” [High,Triaged] https://launchpad.net/bugs/1248283
16:07 <arosales> jamespage to follow up on bug 1278897 (policy compliant)
16:07 <ubottu> bug 1278897 in dovecot (Ubuntu Trusty) “dovecot warns about moved ssl certs on upgrade” [High,Triaged] https://launchpad.net/bugs/1278897
16:07 <arosales> smoser update servercloud-1311-curtin bp
16:07 <smoser> i updated it .
16:07 <smoser> i’ll file a ffe today
16:07 <arosales> hallyn follow up on 1248283 from an lxc pov, ping serue to coordinate
16:08 <serue> Done
16:08 <arosales> smoser update cloud-init BP
16:08 <smoser> we’ll say same there.
The discussion about “Trusty Development” started at 16:10.
- Release Bugs (16:11)
- LINK: http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-t-tracking-bug-tasks.html#server
- LINK: https://bugs.launchpad.net/maas/+bug/1248283
- LINK: https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1259166
- LINK: https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1273877
- LINK: https://bugs.launchpad.net/ubuntu/+source/dovecot/+bug/1278897
- ACTION: follow up on bug 1273877
- Blueprints (16:22)
- LINK: http://status.ubuntu.com/ubuntu-t/group/topic-t-servercloud-overview.html
- ACTION: gaughen ensure BPs are updated
Weekly Updates & Questions for the QA Team (psivaa)
The discussion about “Weekly Updates & Questions for the QA Team (psivaa)” started at 16:27.
- LINK: https://code.launchpad.net/~psivaa/ubuntu-test-cases/mod_php-fix/+merge/204273 needs merging
Ubuntu Server Team Events
The discussion about “Ubuntu Server Team Events” started at 16:35.
Action items, by person
- gaughen ensure BPs are updated
- follow up on bug 1273877
Announce next meeting date and time
Next meeting will be on Tuesday, February 25th at 16:00 UTC in #ubuntu-meeting.
People present (lines said)
- arosales (77)
- jamespage (19)
- psivaa (12)
- smoser (10)
- ubottu (9)
- meetingology (5)
- serue (2)
- zul (2)
- sforshee (1)
- rbasak (1)
- gaughen (1)
- smb (1)
Review Previous Actions
Robie has a merge proposal nearly ready for landing for delta reporting:
- ACTION: rbasak to land delta report to lp:ubuntu-reports, Daviey to deploy
Most server packages are now unblocked from migrating to the saucy release pocket aside from one last fix for the apache2.4 transition.
James noted that the Debian import freeze and Alpha 2 for saucy are scheduled for next week:
- Release Bugs (16:07)
- Blueprints (16:13)
- LINK: http://status.ubuntu.com/ubuntu-s/group/topic-s-servercloud-overview.html
- LINK: https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-mongodb
- LINK: https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-juju-2-delivery
- LINK: https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-ceph
- LINK: https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-openstack-charms
- ACTION: arosales to review Juju blueprints
- ACTION: arosales to review juju related blueprints with owners after OSCON
Ubuntu Server Team Events
OSCON happening right now – Jorge and Mark Mims running a Charm School!
Weekly Updates & Questions for the QA Team (plars)
plars updated on a couple of kernel bugs currently causing issues in server automated testing:
- LINK: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1203694 and https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1203211
These should be fixed up shortly
Weekly Updates & Questions for the Kernel Team (smb)
Some issues with KVM guests with the latest saucy kernel – apw investigating.
Weekly Updates & Questions regarding Ubuntu ARM Server (rbasak)
Nothing to note.
The discussion about “Open Discussion” started at 16:33.
- LINK: http://uds.ubuntu.com/
Antonio took the opportunity to remind everyone that UDS is scheduled for the end of August.
Chuck noted that Quantum has now been renamed Neutron; removal of the old source package has been requested.
Announce next meeting date and time
Tuesday 30th July at 1600 GMT
Full meeting log can be found here: https://wiki.ubuntu.com/MeetingLogs/Server/20130723
The Ubuntu Server Team is constantly working in some really exciting areas. We try to collate the best of open source to deliver a distribution suitable for cloud, scale-out and traditional server workloads, and to provide agile granite foundations for users to build their workloads on. Most of the work we do comes at no cost to the user, maximising value.
On a weekly basis, we hold an IRC meeting, where we discuss blueprints and development, but I do think we could probably do better at sharing some of the great stuff we are doing.
To achieve this, I’m setting a target of giving a weekly insight into the highlights of the work that the Ubuntu Server team is doing. So join me on my mission and prepare for your dunked digest into the giant cup of Server.
Juju is a key Ubuntu Server technology. It is typically called a service orchestration tool, rather than a server management tool. Many of the server team’s deliverables are either built upon Juju or underpin Juju itself. In return, Juju underpins greatness.
Juju supports writing a charm in any language (or even as a compiled binary!) that can be executed or interpreted by the machine. I believe the most complex charms in the store are the OpenStack ones. Some of the original charms were written in shell/bash, but it has become apparent that a richer, higher-level language such as Python can be massively useful. Therefore, we decided to rewrite some of the earlier charms in Python. The Cinder charm was rewritten in Python by Adam, and Andres has been doing the same for Glance. The really key part of this is that deployments can upgrade seamlessly, without realising that the underlying charm language has changed.
We’ve also found that many of the charms contain significant overlap, so we have been pushing much of the common code into charm-helpers. This is vital for any DRY (Don’t Repeat Yourself) methodology, which helps with maintainability but also allows us to be more effective. James found that he could rework the Ceph charms to use charm-helpers and push some extra features back into charm-helpers.
The velocity of development means that quality is a constant concern. The only way we can raise our capacity and have a good level of confidence in what we deliver is frequent testing. To scale this, we’ve been putting significant work into automating wherever we can. DEP-8 (autopkgtest) is a format for describing test requirements, setup and the actual test cases. Adam implemented DEP-8 package testing for the OpenStack packaging, using Juju, Jenkins and a special internal OpenStack deployment we’ve codenamed ServerStack.
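For a rough idea of the DEP-8 format: a package declares its tests in debian/tests/control, with each named test shipped as an executable alongside it. The test name and extra dependency here are made up for illustration:

```
Tests: smoke
Depends: @, curl
Restrictions: needs-root
```

Here `Tests: smoke` points at an executable debian/tests/smoke, `Depends: @` pulls in the package’s own binary packages, and the restriction tells the runner the test needs root. The autopkgtest runner takes care of the rest.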
Adam also worked on some Ubuntu Cloud Archive tooling to make it easier to submit packages, plus cleaner package release reporting that makes it easier to identify workflow status.
Andres uploaded the latest version of MAAS to Saucy. Diogo, who is helping to drive quality in the server team, worked on resolving some MAAS Jenkins test failures under Saucy and set up tarmac (a code lander) for juju-gui.
Chuck uploaded new versions of python-keystoneclient, python-ceilometer and python-swiftclient. In addition, he backported Qemu 1.5.0 to the Ubuntu Cloud Archive, enabling the latest Qemu features on the stable base that 12.04 LTS provides.
Chuck has also been leading the way on Python 3 compatibility, including some work on making python-novaclient Python 3 compatible. He also worked on a bunch of upstream OpenStack patches.
As part of the hardware enablement stack, the 3.10 kernel is being backported to 12.04 (Precise). This means that a number of related packages need to be made 3.10 compatible. James worked on resolving a failure with iscsitarget, and pushed the fix upstream.
Robie did some great work on enabling multiple tests (DEP8/autopkgtest) for LXC, which was discussed on the ubuntu-server mailing list.
Serge, who has been pushing LXC development in Ubuntu, built a custom Saucy kernel with Dwight’s xfs userns patchset (the final set needed before we can ask the kernel team for enablement!) and also investigated a signalfd/epoll/sigchld race which was reproducible with LXC.
Oh, and this week Andy Murray also won the tennis championship at Wimbledon – which I, for one, attribute to Week 28 of Ubuntu Server development. The most interesting part of this is that he is the first British man to win who wasn’t wearing full-length trousers. I’ve heard – though it’s yet to be confirmed – that he used Juju during his training.
Jorge Castro had a good blog post regarding the LTS enablement stack and sysadmins. The TL;DR, as Jorge puts it, is: “12.04.2 ISOs are NOT just rolled up updates, they’re 12.04 with newer kernels.” It is also good to note that the original 12.04 stack will continue to be maintained for 5 years, so it will get SRUs and the kernel won’t change on you. I think this is an important thing to note. However, some folks may want a newer kernel within the LTS life span, and those folks can evaluate a point release.
Jorge calls out some good recommendations:
- The 12.04 and 12.04.1 ISOs are at http://old-releases.ubuntu.com/ – you’ll likely want to keep a set for yourself if you want to roll out with the exact same kernel for your deployments – you’ll probably want all three ISO sets on hand, depending on your hardware.
- The original 12.04 stack will continue to be maintained for 5 years, if you don’t need the new kernel, you don’t need to use it.
- In the past, if new hardware rolled out and didn’t work with the LTS, you were kind of stuck with either backporting a kernel, or (what I reluctantly did) deploying a non-LTS release until the next LTS came out, at which point you would rebase on the new LTS.
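For those who do want the newer stack on an existing 12.04 install, opting in is a package install away – a sketch assuming the quantal-based enablement stack that 12.04.2 ships with (package name as of that timeframe):

```shell
# Pull in the hardware enablement (lts-quantal) kernel on a 12.04 system
sudo apt-get update
sudo apt-get install linux-generic-lts-quantal
# Systems installed from the original 12.04 media keep the v3.2-based
# linux-generic stack unless you explicitly install an lts-* package.
```

This is what makes the enablement stack opt-in: existing deployments stay on the kernel they shipped with, and only machines that need newer hardware support take the backported one.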
Be sure to give his blog post and the LTS Enablement Stack wiki a read. If you have any questions or comments, as always, feel free to ping the list (firstname.lastname@example.org) or us in IRC (#ubuntu-server on Freenode).