How to build an OpenStack cloud from SUSEcon’s free USB stick handouts

By , December 11, 2014 3:28 pm

Once again, SUSEcon was a blast! Thanks to everyone who helped make it such a great success, especially all our customers and partners who attended.

If you attended the final Thursday keynote, you should have been given a free USB stick preloaded with a bootable SUSE Cloud appliance. And if you missed out or couldn’t attend, download a copy here! This makes it possible for anyone to build an OpenStack cloud from scratch extremely quickly and easily. (In fact, it’s almost identical to the appliance we used a few weeks ago to win the “Ruler of the Stack” competition at the OpenStack summit in Paris.)

Erin explained on stage, at a high level, what this appliance does, but below are some more specific technical details which may help in case you haven’t yet tried it out.

The appliance can be booted on any physical or virtual 64-bit x86 machine … but before we start! – if you would like to try running the appliance in a VM using either KVM or VirtualBox, then there is an even easier alternative which uses Vagrant to reduce the whole setup to a one-line command. If you like the sound of that, stop reading and go here instead. However if you want to try it on bare metal or with a different hypervisor such as VMware or Hyper-V, read on!



“Ruler of the Stack” competition, OpenStack summit, Paris

By , November 5, 2014 7:32 pm

Today at the OpenStack summit here in Paris, we continued SUSE’s winning streak at the “Ruler of the Stack” competition :-)

[Photo from the “Ruler of the Stack” competition]

This time, the challenge from the OpenStack Foundation was to deploy an OpenStack cloud as quickly as possible which could withstand “attacks” such as one of the controller nodes being killed, and still keep the OpenStack services and VM instances functional.

It was considerably more complex than the challenge posed by Intel at the Atlanta summit in May which my teammate Dirk Müller (pictured right) won, since now we needed a minimum of two controller nodes clustered together, two compute nodes, and some shared or replicated storage. So this time an approach deploying to a single node via a “quickstart” script was not an option, and we knew it would take a lot longer than 3 minutes 14 seconds.

However as soon as I saw the rules I thought we had a good chance to win with SUSE Cloud, for four reasons:

  1. We already released the capability to automatically deploy highly available OpenStack clouds back in May, as an update to SUSE Cloud 3.[1]
  2. With openSUSE’s KIWI Image System and the Open Build Service, our ability to build custom appliance images of product packages for rapid deployment is excellent, as already proved by Dirk’s victory in Atlanta. KIWI avoids costly reboots by using kexec.
  3. I have been working hard over the last couple of months on reducing the amount of manual interaction during deployment to the absolute minimum, and this challenge was a perfect use case for harnessing these new capabilities.[2]
  4. The solution had to work within a prescribed set of VLANs and IP subnets, and Crowbar (the deployment technology embedded in SUSE Cloud) is unique amongst current OpenStack deployment tools for its very flexible approach to network configuration.

The night before, I worked late with Dirk in our hotel, preparing and remotely testing two bootable USB images custom-built especially for the challenge: one for the Crowbar admin server, and one for the controller and compute nodes.

The clock was started when we booted the first node for installation, and stopped when we declared the nodes ready for stress-testing by the judges. It took us 53 minutes to prepare 6 servers: two admin servers (one as standby), two controllers, and two compute nodes running the KVM hypervisor.[3] However we lost ~15 minutes simply through not realising that plugging a USB device directly into the server is far more performant than presenting virtual media through the remote console! And there were several other optimizations we didn’t have time to make, so I think in future we could manage the whole thing under 30 minutes.

However the exercise was a lot of fun and also highlighted several areas where we can make the product better.

At least three other well-known combinations of OpenStack distribution and deployment tools were used by other competitors. No-one else managed to finish deploying a cloud, so we don’t know how they would have fared against the HA tests. Perhaps everyone was too distracted by all the awesome sessions going on at the same time ;-)

I have to send a huge thank-you to Dirk whose expertise in several areas, especially building custom appliances for rapid deployment, was a massive help. Also to Bernhard, Tom, and several other members of our kick-ass SUSE Cloud engineering team :-) And of course to Intel for arranging a really fun challenge. I’m sure next time they’ll think of some great new ways to stretch our brains!

Footnotes:

[1] Automating deployment of an HA cloud presented a lot of architectural and engineering challenges, and I’m painfully aware that I still owe the OpenStack and Chef communities a long overdue blog post on this, as well as more information about the code on GitHub.

[2] The result is a virtualized Vagrant environment which can deploy a fully HA environment in VMs from a simple one-line command! It is intended to be used by anyone who wants to quickly deploy/demo/test SUSE Cloud: developers, Sales Engineers, partners, and customers alike. I also needed it for the hands-on workshop I co-presented with Florian Haas of Hastexo fame (slides and video now available), and also for an upcoming workshop at SUSEcon.

[3] Once the nodes were registered in Crowbar, we fed a simple YAML file into the new crowbar batch build command and it built the full cluster of OpenStack services in ~20 minutes.
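For a flavour of what such a batch file looks like: the input to crowbar batch build is a YAML list of barclamp proposals applied in order. The fragment below is purely illustrative — it is not the file we used in the competition, and the exact node aliases and attribute values are hypothetical:

```yaml
# Illustrative crowbar batch input (hypothetical, not our competition file).
# Each entry applies one barclamp proposal; node aliases like @@controller1@@
# are substituted at apply time.
proposals:
- barclamp: pacemaker
  name: cluster1
  deployment:
    elements:
      pacemaker-cluster-member:
      - "@@controller1@@"
      - "@@controller2@@"
- barclamp: database
  name: default
  deployment:
    elements:
      database-server:
      - cluster:cluster1
```

Because the whole cluster topology lives in one declarative file, re-running the deployment on fresh hardware needs no clicking around in the web UI.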


OpenStack Paris workshop: Automated Deployment of an HA Cloud

By , November 1, 2014 7:39 pm

6 months after its debut in Atlanta, the HA workshop is happening again in Paris, this Monday (16:20, Room 241)! If you plan on attending and haven’t already seen Florian’s blog post, please start downloading and installing the prerequisite files right away, because downloading them over hotel or conference wifi is likely to be painful. (N.B. Unless you have a really strong reason not to, please go with the VirtualBox option, because there are more pitfalls when using KVM+libvirt.)

However if you’re already on the way to Paris, don’t despair, because we’ll also do our best to make the files available at the SUSE booth. And failing that, you can still just turn up, enjoy the show, and then try the hands-on exercise any time later at your own leisure!

In case you have no idea what I’m talking about, here’s some brief history:

Back before the last OpenStack summit in Atlanta in May, I managed to persuade Florian Haas to join me in an endeavour which some might view as slightly insane: run a 90 minute hands-on workshop in which we’d have every attendee build a Highly Available OpenStack cloud on their laptop, from scratch.

With so many moving parts and a vast range of different hardware involved, it certainly wasn’t plain sailing, but by the power of Vagrant and VirtualBox, I think overall it was a success.

Since then, we’ve been working extremely hard to improve the workshop materials based on lessons learnt last time. So what’s new? Well, firstly the workshop environment has been upgraded from Havana to Icehouse (SUSE Cloud 4 vs. version 3 in May), and quite a lot of polish has been applied, since in May the HA code was still relatively new.

Secondly, we’ve added support for building the cloud using KVM+libvirt (although VirtualBox is still recommended for a smoother ride).

Thirdly, the documentation is way more comprehensive, and should help you avoid many common pitfalls.

Hope to see you at the workshop, and please come and say hello to us at the SUSE booth!


Managing your GitHub notifications inbox with mutt

By , October 5, 2014 1:59 pm

Like many F/OSS developers, I’m a heavy user of GitHub. This means I interact with other developers via GitHub multiple times a day. GitHub has a very nice notifications system which lets me know when there has been some activity on a project I’m collaborating on.

I’m a fan of David Allen’s GTD (“Getting Things Done”) system, and in my experience I get the best results by minimising the number of inboxes I have to look at every day. So I use another great feature of GitHub, which is the ability to have notification emails delivered directly to your email inbox. This means I don’t have to keep checking https://github.com/notifications in addition to my email inbox.

However, this means that I receive GitHub notifications in two places. Wouldn’t it be nice if, when I read them in my email inbox, GitHub could somehow notice and mark them as read at https://github.com/notifications too, so that when I look there, I don’t end up getting reminded about notifications I’ve already seen in my inbox? Happily the folks at GitHub already thought of this, and came up with a solution:

If you read a notification email, it’ll automatically be marked as read in the Notifications section. An invisible image is embedded in each mail message to enable this, which means that you must allow viewing images from notifications@github.com in order for this feature to work.

https://help.github.com/articles/configuring-notification-emails/#shared-read-state

But there’s a catch! Like many Linux geeks, I use mutt for reading and writing email. In fact, I’ve been using it since 1997 and I’m still waiting for another MUA to appear which is more powerful and lets me crunch through email faster. However mutt is primarily text-based, which means by default it doesn’t download images when displaying HTML-based email. Of course, it can be configured to do so. But do I want it to automatically open a new tab in my browser every time I encounter an HTML attachment? No! That would slow me down horribly. Even launching a terminal-based HTML viewer such as w3m or links or lynx would be too slow.

So I figured out a better solution. mutt has a nice message-hook feature where you can configure it to automatically execute mutt functions for any message matching specific criteria just before it displays the message. So we can use that to pipe the whole email to a script whenever a message is being read for the first time:

message-hook "(~N|~O) ~f notifications@github.com" "push '<pipe-message>read-github-notification\n'"

(~N|~O) matches mails which have the N flag (meaning new unread email) or O (meaning old unread email) set.

The read-github-notification script reads the email on STDIN, extracts the URL of the 1-pixel read-notification beacon <img> embedded in the HTML attachment, and sends an HTTP request for that image, so that GitHub knows the notification has been read.

This means an extra delay of 0.5 seconds or so when viewing a notification email, but for me it’s a worthwhile sacrifice.
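For the curious, the core of such a script might look roughly like the sketch below. This is a hypothetical reimplementation, not the published script, and the exact beacon URL pattern is my assumption:

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch of a read-github-notification-style script.
# Reads a GitHub notification email on STDIN, finds the tracking-pixel
# <img> URL, and fetches it so GitHub marks the notification as read.
require 'net/http'
require 'uri'

# Extract the beacon <img> URL from the raw message text.
# The /notifications/beacon/ path is an assumption about GitHub's URLs.
def beacon_url(message)
  m = message.match(%r{<img[^>]*src=["'](https://github\.com/notifications/beacon/[^"']+)["']}i)
  m && m[1]
end

# Fetch the beacon image; GitHub marks the notification read as a side effect.
def mark_read(message)
  url = beacon_url(message)
  Net::HTTP.get_response(URI(url)) if url
end

# mark_read($stdin.read)   # <- uncomment to use as a mutt pipe-message target
```

Since the script only needs to issue a single GET request, there is no need to render the HTML at all, which is what keeps the extra delay so small.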

If you want to try it, simply download the script and stick it somewhere on your $PATH, and then add the above line to your ~/.muttrc file.


Email inboxes and the GTD 2-minute rule

By , March 20, 2014 12:46 am

Today’s dose of structured procrastination resulted in something I’ve been meaning to build for quite a while: a timer to help apply the two minute rule from David Allen’s famous GTD (Getting Things Done) system to the processing of a maildir-format email inbox.

Briefly, the idea is that when processing your inbox, for each email you have a maximum of two minutes to either:

  • perform any actions required by that email, or
  • add any such actions to your TODO list, and move the email out of the inbox. (IMHO, best practice is to move it to an archive folder and have a system for rapid retrieval of the email via the TODO list item, e.g. via a hyperlink which will retrieve the email based on its Message-Id: header, using an indexing mail search engine.)

However, I find that I frequently exhibit the bad habit of fidgeting with my inbox – in other words, checking it frequently to satisfy my curiosity about what new mail is there, without actually taking any useful action according to the processing (a.k.a. clarification) step. This isn’t just a waste of time; it also increases my stress levels by making me aware of things I need to do whilst miserably failing to help get them done.

Another bad habit I have is mixing up the processing/clarification phase with the organizing and doing phases – in other words, I look at an email in my inbox, realise that it requires me to perform some actions, and then I immediately launch right into the actions without any thought as to how urgent they are or how long they will take. This is another great way of increasing stress levels when they are not urgent and could take a long time, because at least subconsciously I’m usually aware that this is just another form of procrastination.

So today I wrote this simple Ruby timer program which constantly monitors the number of emails in the given maildir-formatted folder, and shows you how much of the two minutes you have left to process the item you are currently looking at. Here’s a snippet of the output:

1:23    24 mails
1:22    24 mails
1:21    24 mails
1:20    24 mails
Processed 1 mail in 41s!
Average velocity now 57s per mail
At this rate you will hit your target of 0 mails in 21m 55s, at 2014-03-19 23:18:59 +0000
2:00    23 mails
1:59    23 mails
1:58    23 mails
1:57    23 mails

You can see that each time you process mail and remove it from the email folder, it resets the counter back to two minutes. If you exceed the two minute budget, it will start beeping annoyingly, to prod you back into adherence to the rule.

So for example if you have 30 mails in your inbox, using this timer it should take you an absolute maximum of one hour to process them all (“process” in the sense defined within David Allen’s GTD system, not to complete all associated tasks).
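The core loop behind output like the above can be sketched as follows. This is a stripped-down illustration of the idea, not the actual published program — the real one also tracks velocity and estimated finish time:

```ruby
# Simplified sketch of a GTD two-minute-rule timer for a maildir inbox
# (illustrative only; the real program linked above does more).
BUDGET = 120  # seconds allowed per mail: the two-minute rule

# A maildir stores one message per file under its new/ and cur/ subdirectories,
# so counting messages is just counting files.
def mail_count(maildir)
  %w[new cur].sum do |sub|
    Dir.children(File.join(maildir, sub)).reject { |f| f.start_with?('.') }.size
  end
end

def run_timer(maildir)
  remaining = BUDGET
  count = mail_count(maildir)
  loop do
    sleep 1
    new_count = mail_count(maildir)
    if new_count < count
      puts "Processed #{count - new_count} mail in #{BUDGET - remaining}s!"
      remaining = BUDGET            # mail left the inbox: reset the budget
    else
      remaining -= 1
      print "\a" if remaining < 0   # beep annoyingly once over budget
    end
    count = new_count
    shown = [remaining, 0].max
    printf "%d:%02d\t%d mails\n", shown / 60, shown % 60, count
  end
end
```

Polling the directory once a second is crude but entirely adequate here, since the granularity of the countdown display is one second anyway.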

Since gamification seems to be the new hip buzzword on the block, I should mention I’m already enjoying the fact that this turns the mundane chore of churning through an inbox into something of a fun game – seeing how quickly I can get through everything. And I already have an item on the TODO list for collecting statistics about each “run”, so that I can see stuff like:

  • on average how many emails I process daily
  • how often I process email
  • on average how many emails I process during each “sitting”
  • how much time I spend processing email
  • whether I’m getting faster over time

I also really like being able to see an estimate of the remaining time – I expect this will really help me decide whether I should be processing or doing. E.g. if I have deadlines looming and I know it’s going to take two hours to process my inbox, I’m more likely to consciously decide to ignore it until the work for my deadline is complete.

Other TODO items include improving the interface to give a nice big timer and/or progress bar, and the option of a GTK interface or similar. Pull requests are of course very welcome ;-)

For mutt users, this approach can work nicely in conjunction with a trick which helps focus on a single mail thread at a time.

Hope that was useful or at least interesting. If you end up using this hack, I’d love to hear about it!

