Currently showing posts tagged: openSUSE

“Ruler of the Stack” competition, OpenStack summit, Paris

November 5, 2014, 7:32 pm

Today at the OpenStack summit here in Paris, we continued SUSE’s winning streak at the “Ruler of the Stack” competition 🙂

[Photo: the Ruler of the Stack competition]

This time, the challenge from the OpenStack Foundation was to deploy, as quickly as possible, an OpenStack cloud which could withstand “attacks” such as one of the controller nodes being killed, and still keep the OpenStack services and VM instances functional.

It was considerably more complex than the challenge Intel posed at the Atlanta summit in May, which my teammate Dirk Müller (pictured right) won, since this time we needed a minimum of two controller nodes clustered together, two compute nodes, and some shared or replicated storage. So deploying to a single node via a “quickstart” script was not an option, and we knew it would take a lot longer than 3 minutes 14 seconds.

However, as soon as I saw the rules, I thought we had a good chance of winning with SUSE Cloud, for four reasons:

  1. We had already released the capability to automatically deploy highly available OpenStack clouds back in May, as an update to SUSE Cloud 3.[1]
  2. With openSUSE’s KIWI Image System and the Open Build Service, our ability to build custom appliance images of product packages for rapid deployment is excellent, as Dirk’s victory in Atlanta had already proved. KIWI avoids costly reboots by using kexec (see the first sketch after this list).
  3. I had been working hard for the last couple of months on reducing the amount of manual interaction during deployment to the absolute minimum, and this challenge was a perfect use case for harnessing these new capabilities.[2]
  4. The solution had to work within a prescribed set of VLANs and IP subnets, and Crowbar (the deployment technology embedded in SUSE Cloud) is unique amongst current OpenStack deployment tools for its very flexible approach to network configuration (see the second sketch below).
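To give a flavour of the kexec trick mentioned in point 2: instead of rebooting through the firmware and bootloader after installing the target system, the freshly installed kernel can be loaded and jumped into directly. A minimal sketch of the underlying mechanism, assuming standard kexec-tools and illustrative paths:

# load the freshly installed kernel and initrd, reusing the current kernel command line
kexec -l /boot/vmlinuz --initrd=/boot/initrd --append="$(cat /proc/cmdline)"
# jump straight into the new kernel, skipping firmware POST and the bootloader
kexec -e

And to illustrate point 4: Crowbar reads its network layout from a JSON file on the admin node (/etc/crowbar/network.json in SUSE Cloud), so adapting to the prescribed VLANs and subnets mostly meant editing stanzas like the one below before installing the admin server. This is a heavily abridged sketch from memory; the values are hypothetical:

"public": {
  "vlan": 300,
  "use_vlan": true,
  "subnet": "192.168.126.0",
  "netmask": "255.255.255.0",
  "ranges": {
    "host": { "start": "192.168.126.10", "end": "192.168.126.127" }
  }
}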

Dirk and I stayed up late in our hotel the night before, preparing and remotely testing two bootable USB images custom-built especially for the challenge: one for the Crowbar admin server, and one for the controller and compute nodes.

The clock was started when we booted the first node for installation, and stopped when we declared the nodes ready for stress-testing by the judges. It took us 53 minutes to prepare six servers: two admin servers (one as a standby), two controllers, and two compute nodes running the KVM hypervisor.[3] However, we lost ~15 minutes simply through not realising that plugging a USB device directly into the server is far faster than presenting it as virtual media through the remote console! There were also several other optimizations we didn’t have time to make, so I think in future we could manage the whole thing in under 30 minutes.

However, the exercise was a lot of fun and also highlighted several areas where we can make the product better.

The other competitors between them used at least three well-known combinations of OpenStack distribution and deployment tools. No-one else managed to finish deploying a cloud, so we don’t know how they would have fared against the HA tests. Perhaps everyone was too distracted by all the awesome sessions going on at the same time 😉

I have to send a huge thank-you to Dirk, whose expertise in several areas, especially building custom appliances for rapid deployment, was a massive help. Also to Bernhard, Tom, and several other members of our kick-ass SUSE Cloud engineering team 🙂 And of course to Intel for arranging a really fun challenge. I’m sure next time they’ll think of some great new ways to stretch our brains!

Footnotes:

[1] Automating deployment of an HA cloud presented a lot of architectural and engineering challenges, and I’m painfully aware that I still owe the OpenStack and Chef communities a long-overdue blog post on this, as well as more information about the code on GitHub.

[2] The result is a virtualized Vagrant environment which can deploy a fully HA cloud in VMs from a simple one-line command! It is intended for anyone who wants to quickly deploy, demo, or test SUSE Cloud: developers, Sales Engineers, partners, and customers alike. I also needed it for the hands-on workshop I co-presented with Florian Haas of Hastexo fame (slides and video now available), and for an upcoming workshop at SUSECon.
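For the curious, the day-to-day workflow with that Vagrant environment is as simple as it sounds; the machine name below is hypothetical, since it depends on the Vagrantfile:

# from a checkout of the Vagrant environment:
vagrant up          # boots the Crowbar admin VM plus the controller/compute VMs
vagrant ssh admin   # log into the admin node to watch or drive the deployment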

[3] Once the nodes were registered in Crowbar, we fed a simple YAML file into the new crowbar batch build command, and it built the full cluster of OpenStack services in ~20 minutes.
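I can’t reproduce our exact scenario file here, but a crowbar batch YAML looks roughly like the sketch below; the barclamp list is heavily abridged and the node names are made up:

# describe the proposals to build, then hand the file to Crowbar
cat > cloud.yaml <<'EOF'
proposals:
- barclamp: pacemaker
  name: cluster1
  deployment:
    elements:
      pacemaker-cluster-member:
      - controller1.example.com
      - controller2.example.com
- barclamp: database
  deployment:
    elements:
      database-server:
      - cluster:cluster1
EOF
crowbar batch build cloud.yaml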


Upgrading to openSUSE 12.2

September 25, 2012, 12:15 pm

Yesterday I did an online upgrade of my laptop from openSUSE 12.1 to 12.2, using this procedure. It was an almost painless process; unfortunately, though, something crashed the whole X session part-way through the upgrade, so I had to manually re-run zypper dup from a virtual console to complete it. The procedure should probably warn about that scenario in some way, although there is no obvious clean solution: for example, switching to runlevel 3 before starting the upgrade would probably kill the network when NetworkManager is in use. I also had some weirdness related to my encrypted /home partition (despite following the advice in the release notes).
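For reference, the online upgrade essentially boils down to repointing every repository from 12.1 to 12.2 and then performing a distribution upgrade. A rough sketch (check /etc/zypp/repos.d by hand and take backups before trusting a blanket sed):

# point all package repositories at the new release
sed -i 's/12\.1/12.2/g' /etc/zypp/repos.d/*.repo
# refresh metadata, then perform the distribution upgrade
zypper ref
zypper dup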

The final issue was Google Chrome failing to start with the following error:

/usr/bin/google-chrome: error while loading shared libraries: libbz2.so.1.0: cannot open shared object file: No such file or directory

It seems that this is at least partly due to a disagreement between Google engineers and Linux distributions about which symlinks ldconfig should generate. If you look at the %post section of the google-chrome-stable rpm, you’ll see:

# Most RPM distros do not provide libbz2.so.1.0, i.e.
# https://bugzilla.redhat.com/show_bug.cgi?id=461863
# so we create a symlink and point it to libbz2.so.1.
# This is technically wrong, but it'll work since we do
# not anticipate a new version of bzip2 with a different
# minor version number anytime soon.
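Incidentally, you don’t need to unpack the package to read this; rpm can print an installed package’s scriptlets directly:

rpm -q --scripts google-chrome-stable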

I don’t have time to decipher openSUSE’s shared library packaging policy, but I assume it’s deliberate that /usr/lib64/libbz2.so.1.0 does not exist as a symlink even though /usr/lib64/libbz2.so and /usr/lib64/libbz2.so.1 do. It has been suggested to me that this omission could be related to http://en.opensuse.org/openSUSE:Usr_merge. Regardless, Google’s hack should have compensated for this, but for some reason (presumably related to the upgrade) there was a dangling symlink in /opt/google/chrome:

lrwxrwxrwx 1 root root 18 Sep 24 10:17 libbz2.so.1.0 -> /lib64/libbz2.so.1

This symlink is not owned by any rpm, which suggests that Google’s hack might violate the FHS by modifying a static and potentially read-only filesystem; that’s moot, though, since the modification only happens at install time, at which point /opt has to be mounted read-write anyway. In any case, re-pointing that dangling symlink at /usr/lib64/libbz2.so.1 fixed the issue.
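Concretely, the fix was just the following (paths as on my system):

# replace the dangling symlink with one pointing at the library openSUSE actually ships
rm /opt/google/chrome/libbz2.so.1.0
ln -s /usr/lib64/libbz2.so.1 /opt/google/chrome/libbz2.so.1.0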


announcing rypper (again)

May 20, 2010, 10:54 pm

Back in August 2009, I announced the release of a simple script called rypper, which I wrote as a wrapper around zypper to provide batch operations on repositories. Here are some example usages:

# list all disabled repos
rypper -d

# list all enabled repos with autorefresh off
rypper -e -R

# list all repos which have anything to do with KDE
rypper -x kde

# list priority and URIs for all repos whose alias contains 'home:'
rypper -a home: l -pu

# enable autorefresh on all openSUSE Build Service repos
rypper -u download.opensuse.org -R mr -r

# remove all repos on external USB HDD mounted on /media/disk
rypper -u /media/disk rr
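Under the hood there is no magic: rypper filters the output of zypper lr and then runs the requested zypper subcommand on each matching repository. Something in this spirit (a rough sketch, not the actual implementation; zypper’s column layout varies between versions):

# approximately what "rypper -u download.opensuse.org mr -r" does:
zypper lr --uri | awk -F'|' '$NF ~ /download\.opensuse\.org/ {print $1}' |
  while read repo; do zypper mr -r "$repo"; done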

I only got one response, so I assumed it wasn’t that useful to other people.

However, here I am at BrainShare EMEA in Amsterdam, surrounded by like-minded geeks, and in a fit of enthusiasm / procrastination I have finally got round to learning how to build openSUSE Build Service packages. So I proudly (?) present my first OBS package release: the 1-click install version of rypper. Enjoy!

UPDATE 21 March 2011: based on feedback from SLE Product Management, I have submitted a formal request to have zypper extended to include this kind of functionality.

