Experimenting with OpenStack Essex on Ubuntu 12.04 LTS under VirtualBox

The best way to get an insight into OpenStack is to play with a live installation, but even OpenStack's simplest configuration requires two network interfaces per node: typically two machines, each equipped with two network cards, plus an Ethernet hub. Using VirtualBox we can set up a full OpenStack installation on a single laptop or desktop.
This approach is very appealing and a number of guides describe it. This guide is a compilation of those sources (credited below) with the necessary updates and fixes.

 

OpenStack is evolving fast, and the most recent release, Essex, came out at the end of April 2012, at about the same time Ubuntu released 12.04 LTS “Precise Pangolin” (LTS stands for Long Term Support). OpenStack's development is mostly carried out on Ubuntu, so this is the easiest platform to get it running on. Ubuntu's repositories for 12.04 provide all the packages needed to support OpenStack Essex, so a proof of concept based on this combination should be the easiest to set up.
 

 

Quick Overview of The Installation

We will install VirtualBox on whatever host OS you have (provided it runs VirtualBox, even Windows). After configuring VirtualBox's host-only network adapters, we will create a VM (named essex1) in which a fresh ISO of Ubuntu 12.04 will be installed. All subsequent actions are performed inside essex1. We will grab Kevin's script (credits below), which will do most of the installation and configuration work. Then we will download a tiny cloud-ready image, from which we will create our “cloud instance”. We will use OpenStack's Dashboard to launch this instance, and end the session by logging into the cloud instance.

 

 

 

Ingredients

Host PC

recommended 8GB RAM, at least 30GB free disk space, internet connection, VT-X enabled in BIOS.

The host OS can be any Linux, Windows or Mac version that supports VirtualBox

VirtualBox 4

I'm using 4.1.12-dfsg-2 (from “universe”)

Ubuntu 12.04 LTS Precise Pangolin ISO image

Most guides use the “server” edition; since a lot of work is performed inside the VM guest, I use (and highly recommend) the “desktop” edition.
Choose 32- or 64-bit according to your host

Kevin Jackson's OSinstall.sh script

is obtained by git-cloning Kevin's repository [inline below]

Tiny cloud image

is downloaded by another script (written by Kevin)

 

Estimated time

About an hour and a half, including VM installation and OpenStack setup (not including the Ubuntu ISO download time).
 

Step 1: VirtualBox Install & Setup - performed on Host PC

Verify that VT-x is enabled in your host PC's BIOS

Download the Ubuntu 12.04 ISO (I'm using the 64-bit desktop)

Install VirtualBox [apt-get or here]

Configure VirtualBox's Host-Only Networks

start VirtualBox [on ubuntu prior to Unity it's in Applications → Accessories]
open File → Preferences → Network tab
Add a host-only network for vboxnet0 – this will be the Public interface

set IP to 172.16.0.254, mask 255.255.0.0, DHCP disabled


Add a host-only network for vboxnet1 – this will be the Private (VLAN) interface

set IP to 11.0.0.1, mask 255.0.0.0, DHCP disabled


Note1: In Kevin's screencast, vboxnet1 is set to 10.0.0.1, which is also used by the default D-Link router (from Bezeq). There shouldn't be IP addressing issues, since VirtualBox is supposed to take care of isolation – but I had IP clashes, and therefore suggest 11.0.0.0/8 instead.


Note2: In my screenshots vboxnet0 is assigned to another test, so I use vboxnet1 and vboxnet2 instead
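If you prefer the command line, the same two host-only networks can be created with VBoxManage. This is a sketch that assumes a fresh VirtualBox install, where the first interface created is named vboxnet0 and the second vboxnet1:

```shell
# Skip gracefully on machines where VirtualBox isn't installed.
command -v VBoxManage >/dev/null 2>&1 || exit 0

# The first created interface becomes vboxnet0 (the Public side).
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.16.0.254 --netmask 255.255.0.0

# The second created interface becomes vboxnet1 (the Private/VLAN side).
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 11.0.0.1 --netmask 255.0.0.0

# VirtualBox may attach a DHCP server to a new interface; make sure it's off.
VBoxManage dhcpserver remove --ifname vboxnet0 2>/dev/null || true
VBoxManage dhcpserver remove --ifname vboxnet1 2>/dev/null || true
```

On Windows hosts the interface names differ (“VirtualBox Host-Only Ethernet Adapter #N”); check `VBoxManage list hostonlyifs` for the actual names.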

 

[Screenshot: VirtualBox host-only network configuration]

 

Step 2: Create Guest - performed in VirtualBox

click the “New” button 

Create a VM with the following settings:

Name: Essex1 (or whatever, not really important)
OS type: Linux
Version: Ubuntu (or Ubuntu 64, in accordance with the ISO downloaded above)
Memory: 1536MB
Hard Disk: accept all the defaults, size 20GB

 

Configure the newly created VM

Now modify the guest as follows (performed from the right panel in VirtualBox's main window, with the new VM selected on the left).
System tab:

Processor (optional, but recommended): increase the CPU count from 1 to 2
Acceleration: make sure VT-x and nested paging are checked

Network tab: see figure above 

Adapter 1: attached to NAT – eth0 will connect here; 

Adapter 2: attached to Host-Only Adapter, vboxnet0 - eth1 will connect here ;

Adapter 3: attached to Host-Only Adapter, vboxnet1 - eth2 will connect here;

Audio tab: may be disabled

Shared Folders: optional

if you want to copy files around (from/to the host PC and the VM) that's handy.
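The whole VM creation and configuration above can also be scripted with VBoxManage. This is a sketch assuming the 64-bit ISO; the names and sizes match the settings listed above:

```shell
command -v VBoxManage >/dev/null 2>&1 || exit 0   # skip if VirtualBox isn't present

# Create and register the VM, then apply memory/CPU/network settings.
VBoxManage createvm --name Essex1 --ostype Ubuntu_64 --register
VBoxManage modifyvm Essex1 --memory 1536 --cpus 2 --audio none \
    --nic1 nat \
    --nic2 hostonly --hostonlyadapter2 vboxnet0 \
    --nic3 hostonly --hostonlyadapter3 vboxnet1

# 20GB disk, attached to a SATA controller.
VBoxManage createhd --filename Essex1.vdi --size 20480
VBoxManage storagectl Essex1 --name SATA --add sata
VBoxManage storageattach Essex1 --storagectl SATA --port 0 --device 0 \
    --type hdd --medium Essex1.vdi
```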

 

Power the newly created VM

VirtualBox should popup a wizard to connect the ISO file as boot device.
Note: If this doesn't happen (e.g. this isn't your first attempt to boot the VM), you may use the Storage tab to attach the ISO to the virtual CD drive. Alternatively, use the VM window's Devices menu to do the same.

 

Step 3: Guest Install & Initial Configuration

Install the guest (from the ISO image), using the suggested defaults.

Create a user with a quick password such as 0000 (you'll soon have to retype it many times).
Choose eth0 as your default network interface.
when prompted to reboot, disconnect the ISO from the CD drive

 


 

 

Verify internet access after reboot (mandatory to continue)

You should have internet access through eth0. If it doesn't work, copy the setup for /etc/network/interfaces from the snippet below.
Normally, eth0 will get an IP address of 10.0.2.15

 

Configure network interfaces

Become root (from now till the end):
sudo -i
Edit /etc/network/interfaces, make it look like this:

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

# Public interface
auto eth1
iface eth1 inet static
    address 172.16.0.1
    netmask 255.255.0.0
    network 172.16.0.0
    broadcast 172.16.255.255

# Private VLAN interface
auto eth2
iface eth2 inet manual
    up ifconfig eth2 up

then run:

ifup eth1   # after this, ifconfig shows inet addr:172.16.0.1 Bcast:172.16.255.255 Mask:255.255.0.0
ifup eth2   # after this, ifconfig reports no IPv4 address for eth2

or reboot

 

Verify reachability from your host PC

ping 172.16.0.1 

 

Update && upgrade

run

apt-get update && apt-get upgrade


and reboot

 

Install GuestAdditions [optional but highly recommended]

GuestAdditions provide the ability to resize the main VM window, cut/paste between the host PC and the guest, and other productivity gains.
From the guest window's top menu, select Devices → Install Guest Additions
A popup (from the VM) will ask whether you want to autorun; accept and let the script run as root.
Reboot.
Note: after each kernel update, you may be required to reinstall GuestAdditions.

 

Install openssh-server, required when installation ISO is the “desktop” edition:

apt-get -y install openssh-server

 

Install Git, required to pull down Kevin's scripts:

apt-get -y install git

 

Take a snapshot of VirtualBox's Guest.

The guest is now installed, updated and configured with the basic network setup. If something goes wrong with the OpenStack installation, you can start over using this snapshot.
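Snapshots can be taken from the VirtualBox GUI (Machine → Take Snapshot) or from the host's command line; a sketch, assuming the VM is named Essex1 as above:

```shell
command -v VBoxManage >/dev/null 2>&1 || exit 0   # skip if VirtualBox isn't present

# Take the snapshot while the VM is powered off, for a consistent state.
VBoxManage snapshot Essex1 take base-install \
    --description "Ubuntu 12.04 updated, basic network configured"

# Later, to roll back to this point:
# VBoxManage snapshot Essex1 restore base-install
```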

 

 

 

Step 4: OpenStack installation - Automated Part

We continue as root on the guest essex1

sudo -i # and cwd is /root

Clone Kevin's repository:

git clone https://github.com/uksysadmin/OpenStackInstaller.git
cd OpenStackInstaller
git checkout essex

Run the combo installer:

./OSinstall.sh -F 172.16.1.0/24 -f 11.1.0.0/16 -s 512 -p eth2 -t demo -v qemu

Note: option '-p eth2' is missing from Kevin's screencast; adding it makes the difference.

The script displays a configuration summary and prompts for yes/no.
This is what we get:

OpenStack Essex Release: OpenStack with Keystone and Glance

OpenStack will be installed with these options:

Installation: all
Networking: VLAN (100)
Private Interface = eth2
>> Private Network: 11.1.0.0/16 1 512
Public Interface = eth1
>> Public Floating network = 172.16.1.0/24
Cloud Controller (API, Keystone + Glance) = 172.16.0.1
Virtualization Type: qemu

Note: The larger the public floating range, the longer it takes to create the entries
Stick to a /24 to create 256 entries in test environments with the -F parameter

Account Credentials

Tenancy: admin
Role: Admin
Credentials: admin:admin

 

Tenancy: demo
Role: Member, Admin
Credentials demo:demo

Are you sure you want to continue? [Y/n]

 

Take note of the account credentials for tenants 'admin' and 'demo'. 

Hit enter to start the automated part of the installation. Plenty of output, including installation of openstack components, Apache, MySQL etc.
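The script's warning about large floating ranges is simple arithmetic: a /24 prefix leaves 8 host bits, so 2^8 = 256 entries are created, while a /16 would mean 65536. A quick sketch of the calculation (the helper name cidr_size is mine):

```shell
# Number of addresses covered by a CIDR prefix: 2^(32 - prefix_length)
cidr_size() {
  echo $((1 << (32 - $1)))
}

cidr_size 24   # prints 256
cidr_size 16   # prints 65536
```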

 

Step 5: Finalize OpenStack installation

We're root on essex1.
The last screen of output lists additional commands to run manually in order to finalize the installation. We'll run them all now and rectify the errors in the next step.

 

Restarting service to finalize changes...
To set up your environment and a test VM execute the following:

Upload a test Ubuntu image:

./upload_ubuntu.sh -a admin -p openstack -t demo -C 172.16.0.1

Setting up user environment

Copy over the demorc file created in this directory to your client
Source in the demorc file:
. demorc

 

Add a keypair to your environment so you can access the guests using keys:

euca-add-keypair demo > demo.pem
chmod 0600 demo.pem

 

Set the security group defaults (iptables):

euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
euca-authorize default -P icmp -t -1:-1

 

Copy and run the commands in bold.

 

Step 6: Fix Glance

We're root on essex1 and the environment variables defined in demorc are sourced [. demorc]

Verify that the ubuntu cloud image is registered with Glance (at that point, this didn't work for me and others):

glance details

 

if you get:

ERROR [glance.registry.db.api] (ProgrammingError) (1146, "Table 'glance.images' doesn't exist")

 

Repair glance:

glance-manage version_control 0
glance-manage db_sync
restart glance-api
restart glance-registry
#if restart fails, run start instead

 

Run the upload script again (it's smart enough to skip the wget if the file was already downloaded):

./upload_ubuntu.sh -a admin -p openstack -t demo -C 172.16.0.1

Expect the output to look something like this:

ubuntu 11.10 i386 now available in Glance (7d7d0f62-35af-4f46-be9d-8659fe786ad2)

where on the first attempt above we got:

ubuntu 11.10 i386 now available in Glance ()

 

Now glance details should produce something similar to this:

root@essex1:~/OpenStackInstaller# glance details

================================================================================
URI: http://172.16.0.1:9292/v1/images/7d7d0f62-35af-4f46-be9d-8659fe786ad2
Id: 7d7d0f62-35af-4f46-be9d-8659fe786ad2
Public: Yes
Protected: No
Name: ubuntu 11.10 i386 Server
Status: active
Size: 1476395008
Disk format: ami
Container format: ami
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: 77009d2c93154ab5975127dbd48faa6d
Property 'kernel_id': 41185339-9935-454f-acb2-37afaa233e3d
Property 'distro': ubuntu 11.10
================================================================================
URI: http://172.16.0.1:9292/v1/images/41185339-9935-454f-acb2-37afaa233e3d
Id: 41185339-9935-454f-acb2-37afaa233e3d
Public: Yes
Protected: No
Name: ubuntu 11.10 i386 Kernel
Status: active
Size: 4790624
Disk format: aki
Container format: aki
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: 77009d2c93154ab5975127dbd48faa6d
Property 'distro': ubuntu 11.10

================================================================================

Update 2012-05-15: a new issue with glance details ["Response from Keystone does not contain a Glance endpoint."] – see Step 9.

 

 

Step 7: Launch an instance Using the Horizon Web UI

Start a browser and login to the Dashboard at http://172.16.0.1 with credentials ‘demo/openstack’ (the browser can run in the host PC or on essex1).

Click the Project tab, expect something like this:

 

Figure: Dashboard project Access & Security

 

Verifications:
In Access & Security:
under Keypairs you should see the keypair that euca-add-keypair created in demo.pem [compare the fingerprint]

Under Security Groups / Edit Rules, you should see the rules created by the four euca-authorize commands we manually entered above.

 

Next, go to Images & Snapshots:

You should see an entry for the cloud image “ubuntu 11.10 i386 Server” [in Kevin's updated repo the image is ubuntu 12.04 amd64 Server]

 

If all looks good, click the Launch button (to the right of the tiny image). In the popup:

  1. give it a name

  2. specify the keypair

  3. click Launch Instance

 

 

 

 

 

Step 8: Login to the OpenStack instance

You may login from the host PC or from essex1, all you need is the keypair in demo.pem and the public IP address listed in the Dashboard's Instances & Volumes tab.

It may take up to 5 minutes for the tiny instance to accept the SSH connection. Be patient if you get “Destination Host Unreachable” or “Connection refused”.
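Rather than retrying by hand, you can poll until sshd answers. This is a sketch; wait_for is a hypothetical helper of mine, and 172.16.1.1 stands in for whatever public IP the Dashboard listed:

```shell
# Retry a command up to N times, one second apart; return 0 on first success.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage against the instance (uncomment once demo.pem and the real IP are in place):
# wait_for 60 ssh -q -i demo.pem -o ConnectTimeout=5 ubuntu@172.16.1.1 true
```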

 

Make sure [!!] the keypair permissions are set correctly:

chmod 0600 demo.pem # MUST – if the permissions aren't 0600, ssh will refuse to use the key
ssh -i demo.pem ubuntu@172.16.1.1

 

BTW, no password is required for sudo -i on this instance.

 

Take a 2nd snapshot.
 

Step 9: Pending Issues 

glance and keystone bugs

Inspecting OpenStack's state on essex2 using CLI clients generally works fine. For example:

[14:37:07]root@essex2[OpenStackInstaller]
# . demorc       # source demorc to set the OSTK environment vars
 
[14:37:10]root@essex2[OpenStackInstaller]
# keystone service-list
+----------------------------------+----------+--------------+----------------------------+
|                id                |   name   |     type     |        description         |
+----------------------------------+----------+--------------+----------------------------+
| 0798614d6a0f4dbfad6d4c05867ae49f | volume   | volume       | Volume Service             |
| 6f5d88e2f89b41248f6268f356f13f18 | keystone | identity     | OpenStack Identity Service |
| 83e79cbc07ac47eb86e5c572b877ef0a | nova     | compute      | OpenStack Compute Service  |
| 9170c087443d411d9faa88277d691537 | ec2      | ec2          | EC2 Service                |
| b141b167e5114071982bd024ce961f8a | glance   | image        | OpenStack Image Service    |
| cc4961ae75e34bb9a2f163d129069ed2 | swift    | object-store | OpenStack Storage Service  |
+----------------------------------+----------+--------------+----------------------------+

 

[14:38:36]root@essex2[OpenStackInstaller]
# nova image-list
+--------------------------------------+---------------------------+--------+--------------------------+
|                  ID                  |            Name           | Status |    Server                |
+--------------------------------------+---------------------------+--------+--------------------------+
| 31cdfe12-2822-43e2-ba57-055669a38862 | ubuntu 12.04 amd64 Kernel | ACTIVE |                          |
| 7bb28da2-2979-4dc9-8845-d138c8bc2071 | ubuntu 12.04 amd64 Server | ACTIVE |                          |
+--------------------------------------+---------------------------+--------+--------------------------+
 

For glance, however, it doesn't work as expected (recall that it worked after the fix in Step 6):

[14:38:59]root@essex2[OpenStackInstaller]
# glance index
Failed to show index. Got error:
Response from Keystone does not contain a Glance endpoint.

What goes wrong here?

Keystone does have an endpoint for glance; we can see it (the URL, actually) in the output of keystone endpoint-list. So the error string is probably wrong.

We might assume glance doesn't authenticate correctly with keystone, but all the other clients do. In order to dig deeper, we need some verbose output from keystone.

This in turn opens up another issue: how to turn on verbose logging in keystone. The sample config file comes with no comments, and the logging section in keystone's wiki isn't of much help.

  

Solution to the glance issue (update 2012-10-18)

Add this to your environment, and glance (index, details, etc.) will work as expected:

export OS_TENANT_NAME=demo   #or whatever you assigned for tenant above

This is a bug in 'glance 2012.1.X' (where X may be 2, 3 or 4).

As we see above, the other OSTK CLI clients (e.g. nova, keystone) don't use OS_TENANT_NAME and don't fail when it's undefined in the environment.

It's worth noting that in setups where the username equals the tenant name, glance will not fail as seen above. The docs state that when OS_TENANT_NAME isn't defined, authentication tries a match against OS_USERNAME ("why simple when we can make it complicated?").

Consequently, the error string emitted by glance ("Response from Keystone does not contain a Glance endpoint") reveals a bug in auth.py: the returned error string is erroneous and misleading.
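Putting it together, a complete client environment for the demo tenant would look roughly like the following sketch (the credentials are those printed by the installer summary above; the auth URL and port are an assumption based on Keystone's standard Essex setup):

```shell
# Hypothetical demorc contents for the 'demo' tenant
export OS_USERNAME=demo
export OS_PASSWORD=demo                          # per the installer summary; some runs use 'openstack'
export OS_TENANT_NAME=demo                       # the variable glance 2012.1.x needs
export OS_AUTH_URL=http://172.16.0.1:5000/v2.0/
```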

 

Alternatively, you may run glance with '-T demo', as in:

glance -T demo index

 

 

swift

When clicking "Containers" in the Dashboard, you'll get:

error at /nova/containers

[Errno 111] Connection refused

That's normal, as Swift isn't installed by Kevin's script. And That's All Folks...

 

 

Credits

This write-up is largely based on Kevin Jackson's excellent blog and OSinstall.sh script, with fixes by Rob Davison and other sources.

  1. Kevin's Running OpenStack under VirtualBox – A Complete Guide (Part 1) [for Diablo]
  2. Kevin's Screencast / Video of an Install of OpenStack Essex on Ubuntu 12.04 under VirtualBox  
  3. Rob's blog with fixes to the screencast