How to build an OpenStack cloud from SUSEcon’s free USB stick handouts

By Adam Spiers, December 11, 2014 3:28 pm

Once again, SUSEcon was a blast! Thanks to everyone who helped make it such a great success, especially all our customers and partners who attended.

If you attended the final Thursday keynote, you should have been given a free USB stick preloaded with a bootable SUSE Cloud appliance. And if you missed out or couldn’t attend, download a copy here! This makes it possible for anyone to build an OpenStack cloud from scratch extremely quickly and easily. (In fact, it’s almost identical to the appliance we used a few weeks ago to win the “Ruler of the Stack” competition at the OpenStack summit in Paris.)

Erin explained at a high level on stage what this appliance does, but below are some more specific technical details which may help if you haven’t yet tried it out.

The appliance can be booted on any physical or virtual 64-bit x86 machine. But before we start: if you would like to try running the appliance in a VM using either KVM or VirtualBox, there is an even easier alternative which uses Vagrant to reduce the whole setup to a one-line command. If you like the sound of that, stop reading and go here instead. However, if you want to try it on bare metal or with a different hypervisor such as VMware or Hyper-V, read on!

Requirements

You’ll need the following:

  • At least three physical or virtual 64-bit x86 machines, each with at least 2GB RAM and 8GB disk, and no valuable data on any of the attached disks:

    • one admin node running the Crowbar deployment tool which will provision the other nodes from scratch,
    • at least one controller node which will run OpenStack infrastructure services, and
    • at least one compute node which will host VM instances within the cloud. (If this compute node is a VM, then the VM instances in the cloud will have to be run either using KVM nested virtualization, or QEMU software virtualization which is slower but good enough for “kicking the tires”.)
  • A private IPv4 network which all the machines must be connected to. Setting this up is the only potentially tricky bit of the whole exercise. By default the network in question needs to be 192.168.124.0/24 with no DHCP server enabled, so:

    • if you are installing the appliance in a VM, you should be able to set up a NAT or host-only virtual network and configure your hypervisor so that it does not serve DHCP on that network, or
    • if you are installing on bare metal, ensure there is no DHCP server active on that L2 segment.
  • Another machine (physical or virtual) with a modern web browser on the same private IPv4 network.
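If you are using libvirt/KVM, one way to satisfy the “no DHCP” requirement is to define a dedicated virtual network whose XML simply omits the `<dhcp>` element. This is only a sketch of one possible setup: the network name `cloud-admin`, the bridge name, and the host address `.1` are all arbitrary assumptions, not anything the appliance requires.

```shell
# Sketch: define a libvirt NAT network on 192.168.124.0/24 with DHCP
# disabled (no <dhcp> element).  "cloud-admin" and "virbr-cloud" are
# arbitrary names; 192.168.124.1 for the host is an assumption.
cat > cloud-admin-net.xml <<'EOF'
<network>
  <name>cloud-admin</name>
  <forward mode='nat'/>
  <bridge name='virbr-cloud' stp='on' delay='0'/>
  <ip address='192.168.124.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define cloud-admin-net.xml
virsh net-start cloud-admin
virsh net-autostart cloud-admin
```

Then attach each VM's NIC to the `cloud-admin` network in your VM definitions.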

Installing the appliance

Attach the bootable USB media (physical or virtual), and boot the machine. This will automatically install a SUSE Cloud Admin Node onto the disk. Caution: this will wipe any pre-existing OS, so only use it on a spare machine or freshly-created VM, with no valuable data on any of the attached disks!

disk-destroying confirmation dialog box

After confirming you are OK to wipe all existing data on the disk, the appliance will be written to disk and then booted. Shortly after, YaST will appear, allowing you to configure:

  • which keyboard layout you want,
  • what password to use for the root user,

root password configuration dialog box

  • what hostname and domain name to use (the defaults are fine),

hostname/DNS configuration dialog box

  • the network setup (the default of 192.168.124.10/24 is recommended, otherwise you will also have to configure Crowbar prior to installation),
  • the clock and time zone, and finally
  • the NTP configuration (supplying an upstream NTP server is recommended but not required).

Logging in to the Admin Node

Log in as root (with the password you specified above) either on the console or via ssh root@192.168.124.10 if you have another machine configured to be on the same subnet.
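If that other machine is not yet on the admin subnet, you can temporarily (as root) add an address to the relevant interface. The interface name `eth0` and the `.50` address below are just example assumptions; pick any unused address on the subnet other than the admin node's `.10`.

```shell
# Assumption: eth0 is the NIC attached to the admin network, and
# 192.168.124.50 is an unused address on that subnet.
ip addr add 192.168.124.50/24 dev eth0
ssh root@192.168.124.10
```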

Press q and then y to accept the beta EULA, which highlights that the appliance is partially based on unreleased code. Please do not use it for production deployments!

Configuring Crowbar (optional)

Crowbar is very powerful and flexible in terms of network configuration. If you have other traffic on the L2 network segment (e.g. if you are using bare metal hardware and a physical network rather than a dedicated virtual network) then you should check that its default networks don’t conflict with your existing traffic. To do this, type yast crowbar:

how to launch the YaST Crowbar module

and select the Networks tab:

Crowbar network configuration dialog

From here you can examine and change the networks Crowbar will use. Some are only used when various options are selected later on, but at a minimum, admin, nova_fixed, and nova_floating will all be used. For more information, see the Networking section of the SUSE Cloud Deployment Guide.

Installing SUSE Cloud

From the shell prompt, type screen go to initiate the installation of SUSE Cloud. This will take several minutes.
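Since this runs the installation inside a GNU screen session, you can safely disconnect from the console and come back later without interrupting it:

```shell
# Detach from the running screen session with Ctrl-a d,
# then reattach to it later with:
screen -r
```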

Exploring the Crowbar web interface

On another machine on the same network as the machine running the now-installed appliance, start a browser, and navigate to http://192.168.124.10:3000/ (adjust the IP accordingly if you changed the admin network above).

Crowbar web UI

PXE-boot some other nodes

Now simply PXE-boot your other nodes. (Typically this requires PXE-booting to be enabled in the BIOS, and/or manually selecting it from the BIOS boot menu which is often accessible by hitting the F11 key or similar during boot.)
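If your nodes are VMs, the equivalent of the BIOS tweak is usually a hypervisor setting. For example, with VirtualBox you can put network boot first in the boot order from the command line; the VM name `node1` and the host-only network `vboxnet0` below are assumptions for illustration.

```shell
# Make a VirtualBox VM attempt PXE boot first, and attach its first NIC
# to the same host-only network as the admin node.
VBoxManage modifyvm "node1" --boot1 net --boot2 disk
VBoxManage modifyvm "node1" --nic1 hostonly --hostonlyadapter1 vboxnet0
```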

This will run a small inventory ramdisk image on each one to detect its hardware and report the discovery back to Crowbar, without touching the node’s local disk(s). Each node will then appear in the Crowbar web UI, and sit in an idle loop whilst awaiting task allocation via Crowbar:

Crowbar web UI with new node discovered

Clicking on that node will show the results of the automatic hardware inventorying, and give you the option to allocate the node:

viewing the newly discovered node in Crowbar's web UI

When editing the node, you can give it a more human-friendly alias (e.g. node1), and then click Allocate to install a minimal SLES OS:

editing the newly discovered node in Crowbar's web UI

A full OS will be automatically installed on the node via AutoYaST:

autoyast installation in progress

When OS installation has finished, the console looks like this:

autoyast installation completed

and then the node turns green in the web UI indicating that it’s ready to have roles assigned to it:

Crowbar's web UI showing node1 as ready

Multiple nodes can be installed via PXE/AutoYaST at the same time.

Deploying OpenStack via Crowbar barclamps

By this point you should have at least two freshly-installed nodes managed by Crowbar (excluding the admin node itself which Crowbar runs on), in which case you are ready to deploy OpenStack via Crowbar’s barclamps, which can be found via the Barclamps drop-down in Crowbar’s web interface:

navigating to the OpenStack barclamps in Crowbar's web UI

This process is relatively straightforward, and a full explanation can be found in the corresponding chapter of the SUSE Cloud 4 Deployment Guide.

However, it is also possible to automate the whole deployment from this point on, using Crowbar’s batch subcommand and an appropriately crafted .yaml file. There are three sample .yaml files in /root on the admin node. The simplest of the three is simple-cloud.yaml, which assumes a single controller node and a single compute node, with aliases controller1 and compute1 respectively:

simple 2-node cloud scenario shown by Crowbar's UI

In this case, assuming your nodes are given the above aliases, you could set up the entire OpenStack cloud with this single command run as root on the admin node:

crowbar batch --timeout 1800 build simple-cloud.yaml

starting crowbar batch build

It takes a while to apply all the barclamps, as can be seen from the timestamps whilst it’s running:

crowbar batch build on cinder

(The other two sample .yaml files describe a highly available control plane, and assume you have three nodes aliased controller1, controller2, and compute1.)
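To give a flavour of the batch file format, here is a heavily abridged, hypothetical excerpt; the exact barclamp and role names, and the alias substitution syntax, are best checked against the shipped samples in /root.

```yaml
# Hypothetical excerpt of a "crowbar batch" input file; see the sample
# .yaml files in /root on the admin node for the real thing.
proposals:
- barclamp: nova
  deployment:
    elements:
      nova-multi-controller:
      - "@@controller1@@"
      nova-multi-compute-kvm:
      - "@@compute1@@"
```

Each entry applies one barclamp proposal, with node aliases mapped onto the roles that barclamp deploys.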

Once the barclamps are all applied, they should show as green in the Crowbar UI view:

Crowbar UI showing barclamps successfully applied

Exploring OpenStack

From the main Nodes dashboard in the Crowbar web UI, click the controller1 node (or whichever one you deployed OpenStack’s Dashboard to), and you will see a couple of links to the OpenStack Dashboard (a.k.a “Horizon”):

Crowbar UI showing the node's links to the OpenStack Dashboard

Click on OpenStack Dashboard (admin) and it will take you to the OpenStack Dashboard, where you can log in as the admin user with a password of (by default) crowbar:

OpenStack Dashboard login page

OpenStack Dashboard compute overview page

Congratulations! You have set up a full OpenStack cloud from scratch! Now you can start reading the SUSE Cloud Admin and End User guides to learn more about how to use OpenStack.
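You can also poke at the cloud from the command line. Assuming admin credentials were written to /root/.openrc on the controller node (an assumption worth verifying on your deployment), something like the following should work:

```shell
# Run these on the controller node (e.g. after: ssh root@controller1).
# Assumption: admin credentials live in /root/.openrc.
source /root/.openrc
nova service-list    # OpenStack compute services and their state
nova image-list      # images available for booting instances
```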

Support

Whilst this bootable appliance is partially based on unreleased, unsupported code, we are still very interested to hear feedback from our customers and partners. So while we (obviously!) cannot offer unlimited free support for it, if you post any questions / issues to the SUSE Cloud web forum, we will try to respond on a best-effort basis. (And of course full commercial support for the released version of SUSE Cloud is available if you want it 😉 )

As we say in the SUSE world, have a lot of fun!


One Response to “How to build an OpenStack cloud from SUSEcon’s free USB stick handouts”

  1. […] By Adam Spiers: How to build an OpenStack cloud from SUSEcon’s free USB stick handouts […]
