OpenStack (Ubuntu 11.10 and OpenStack Diablo)

Introduction to OpenStack

The following links give you a feel for the architecture of OpenStack and how its components interact:

OpenStack Administration Guide: “Getting Started with OpenStack”

OpenStack Administration Guide: “Networking Options”

OpenStack Wiki: “Understanding FlatNetworking”

We recommend reading through these links as well as the OpenStack Starter Guide.

 

Our Starting Point…

This blog entry describes an OpenStack multinode installation. As the name implies, this configuration is based on 2 (or more) physical hosts. We use one of the nodes as the so-called “cloud controller” and the other one as an additional computing resource. The hostname of the cloud controller is “c2n2”, the hostname of the additional compute node is “c2n1”. Make sure that the domain name resolution works in both directions!
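
If you don’t have working DNS in your lab, a minimal sketch of the corresponding /etc/hosts entries (hostnames, IP addresses and the stratus.lab domain are the ones from our setup – adapt them to your environment):

root@c2n2:~# cat /etc/hosts
127.0.0.1       localhost
10.110.12.2     c2n2.stratus.lab   c2n2
10.110.12.1     c2n1.stratus.lab   c2n1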

The cloud controller c2n2 will host all basic nova services that are needed for providing a cloud computing environment:

  • Glance (manages images for virtual instances)
  • Compute (runs virtual instances)
  • Network (manages the virtual logical network infrastructure and network traffic within the cloud environment)
  • Scheduler (manages the distribution of virtual instances to physical hosts based on metadata they deliver)
  • Volume (manages persistent block device storage)
  • Rabbit Message Server (provides a communication channel between the various OpenStack components)

The other physical node provides solely additional computing power, so only the nova-compute (and nova-common) services will be installed on that node. In the end we have one cloud management node (c2n2), which also works as a compute node, together with the dedicated compute node (c2n1).

Our OpenStack test installation is hosted in a VLAN-separated lab environment, so our physical nodes are separated from other physical hosts by OSI layer 2 VLANs.

Each of our physical nodes is equipped with 2 network interface cards. The following picture shows our topology.

[Figure: our topology]

The initial configuration of our 2 physical hosts is as follows:

  • operating system: Ubuntu Server 11.10 (Oneiric Ocelot)
  • Manual Partitioning:
    • 1. Partition: Boot Flag, 50G, ext4, /, (UbuntuServer), sda1
    • 2. Partition: 25G, sda2
    • 3. Partition: 5G, swap, sda3
  • Main Network Interface Card: eth0
  • Local user: localadmin / localadmin (no prerequisite)
  • Grub installed in MBR
  • OpenSSH Server installed

 

Static Network Configuration

c2n2

root@c2n2:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface

auto eth0
iface eth0 inet static        
address 10.110.12.2        
netmask 255.255.255.0        
gateway 10.110.12.254 

auto eth1
iface eth1 inet static

 

c2n1

root@c2n1:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface

auto eth0
iface eth0 inet static
address 10.110.12.1
netmask 255.255.255.0
gateway 10.110.12.254 

auto eth1
iface eth1 inet static

The eth1 interfaces on both nodes are left unconfigured – they will be used as bridging devices by OpenStack.

 

More on the installation process of openstack after the break…

 

Installation

We divided our openstack installation and configuration instructions into 2 parts. In the first part we describe the installation and configuration steps for the cloud controller, in the second part for the computing node.

 

Installation of the cloud controller

We started by adding the original OpenStack project repositories to our local package sources (we had some bad experiences when we used the OpenStack packages from the official Ubuntu sources. But – hey – OpenStack is a dynamic project. Changes can and will happen often and suddenly. So give the official Ubuntu packages a try!)

In our example, we called the cloud controller “c2n2”.

Add the project package sources to the local repository list:

root@c2n2:/dev# apt-get install python-software-properties
root@c2n2:/dev# add-apt-repository ppa:openstack-release/2011.3

The python-software-properties package is a prerequisite for adding the project package sources with add-apt-repository. Project version "2011.3" refers to the current Diablo release.

After adding them, run an update and upgrade.

root@c2n2:/dev# apt-get update
root@c2n2:/dev# apt-get upgrade

Now install the bridge-utils package. The Linux bridge is used by OpenStack to create a local virtual Ethernet bridge that bridges the traffic from and to the running instances on one node. Compared to older OpenStack releases, Diablo automatically creates that virtual bridge (default: br100), so you don’t have to create one on your own.

root@c2n2:~# apt-get install bridge-utils
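
Once nova-network and nova-compute are up and the first project network has been created (see further below), you can check the automatically created bridge with brctl (just a hedged sanity check – br100 will not exist before nova has set it up):

root@c2n2:~# brctl show

The output should then list a bridge br100 with eth1 attached to it.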
Attention!
The local time on the nodes must be in sync – otherwise you will get some synchronization errors. So install an NTP server on your cloud controller so that the other nodes (in our case: c2n1) can synchronize their time from it.
root@c2n2:~# apt-get install -y ntp
Attention!
If you’re running the OpenStack cloud in a production environment, take care to constrain the number of nodes which are allowed to access the NTP server for time synchronization.

If you have a running NTP server in your network, configure it by using the “server” statement in /etc/ntp.conf. We did, because our university laboratory is highly secured 😉 This is our final /etc/ntp.conf:

root@c2n2:~# cat /etc/ntp.conf
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

driftfile /var/lib/ntp/ntp.drift

# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

# Specify one or more NTP servers.

# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
# server 0.at.pool.ntp.org
# server 1.at.pool.ntp.org
# server 2.at.pool.ntp.org
# server 3.at.pool.ntp.org
# server 1.ubuntu.pool.ntp.org
# server 2.ubuntu.pool.ntp.org
# server 3.ubuntu.pool.ntp.org
server zoidberg.lab maxpoll 5
# Use Ubuntu's ntp server as a fallback.
server ntp.ubuntu.com
server 127.127.1.0 maxpoll 5
fudge 127.127.1.0 stratum 10

# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.

# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust

# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255

# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines.  Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient

 

Message Server Installation

Install the “rabbitmq-server” message server. As described above, this service is responsible for providing a communication platform for the OpenStack services (which may run on different nodes).

root@c2n2:~# apt-get install -y rabbitmq-server

Now install some Python dependencies:

root@c2n2:~# apt-get install -y python-greenlet python-mysqldb
Attention!
Never use this default configuration of rabbitmq-server in a production environment. It’s a huge security risk, because it’s the communication aorta of OpenStack. Anyone who gains access to it can take over control of the whole cloud environment. So take care by changing default passwords, using SSL encryption and authentication for exchanging information via the channels, … But this is out of the scope of this blog post.
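
As a minimal first hardening step you could at least change the password of the default “guest” account (a sketch only – "MySecurePassword" is a placeholder, and the new password then has to be handed to nova via the rabbit_password flag in nova.conf, which we did not use in our lab setup):

root@c2n2:~# rabbitmqctl change_password guest MySecurePassword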

 

Mysql Database Installation

Install the MySQL database server. By default, OpenStack uses a sqlite database for storing the configuration and the current state of the cloud environment. In this example a MySQL database does this job for us.

root@c2n2:~# apt-get install -y mysql-server

If you want to use the default sqlite database, you can skip the “Mysql Database Installation” step.

During the installation of the MySQL server, you will be prompted for a password for the MySQL admin user “root”. Note it!

Attention!
We used the string “0815” as password. In a production environment use a complex password instead of just 0815 – a combination of letters, numbers and symbols that is harder to crack.

Remember the password – we need it later for configuring the database server and creating a database for OpenStack.
Now we have to change the bind address of the MySQL server from 127.0.0.1 to 0.0.0.0.

Attention!
In a production environment, you should select only one interface as your binding interface. In our current configuration, the MySQL server will listen on every network interface of the node for incoming requests!
root@c2n2:~# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
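
Afterwards, restart the MySQL server so that the new bind address takes effect:

root@c2n2:~# service mysql restart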

Now you have to create a database for the OpenStack cloud environment. This database must be named “nova” so that OpenStack can find and use it. (Make use of your MySQL admin password – here “0815”.)

root@c2n2:~# mysql -u root -p0815 -e 'CREATE DATABASE nova;'

Create a new database user, which will be used by OpenStack to access the “nova” database. First create the user and then grant it the privileges to manipulate the nova database.

root@c2n2:~# mysql -u root -p0815 -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%' ;"
Attention!
In a production environment you should never use the “%” wildcard for defining the nodes (by IP addresses or FQDNs) from which the “nova” database (and all of its tables [nova.*]) can be accessed. Besides specifying the full IP address or FQDN, you can use the % character for a range. For example: %.foo.com matches all hosts in the foo.com domain; 192.168.1.% matches all machines in the private network 192.168.1.0.
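
For example, a hedged sketch of a more restrictive grant that only allows access from our lab subnet (adapt the network pattern to your environment; the password would then be set for this host pattern in the same way as below):

root@c2n2:~# mysql -u root -p0815 -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'10.110.12.%';"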

Set a password for the new “novadbadmin” user:

root@c2n2:/dev# mysql -u root -p0815 -e "SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('0815');"

We used the same password as for the MySQL root user (the MySQL server admin) –> 0815. In a production environment, we encourage you to use a complex one instead.

Now check whether your database configuration was successful by trying to access the nova database using the new “novadbadmin” user:

root@c2n2:/dev# mysql -u novadbadmin -p0815 nova

(Exit: “quit”)

 

Glance Installation

In our case we use the “glance” service for managing our image templates within the cloud environment. Glance uses sqlite as its default database; we decided to use the previously installed MySQL database instead. So we have to create a database for the “glance” service, create a new user called “glancedbadmin” which will be used by the glance service to access the glance database, grant the new user the right to manage the glance database, and set a password for glancedbadmin:

root@c2n2:/dev# mysql -u root -p0815 -e 'CREATE DATABASE glance;'
root@c2n2:/dev# mysql -u root -p0815 -e 'CREATE USER glancedbadmin;'
root@c2n2:/dev# mysql -u root -p0815 -e "GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%';"
root@c2n2:/dev# mysql -u root -p0815 -e "SET PASSWORD FOR 'glancedbadmin'@'%' = PASSWORD('0815');"

The new database must be named “glance” – otherwise the glance service wouldn’t know which database it must access to store and retrieve information.

We haven’t created any tables within the glance database yet. This will be done after we’ve installed and restarted the glance service.

root@c2n2:/dev# apt-get install -y glance
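
To make glance actually use the MySQL database instead of its default sqlite file, point it to the new database and restart the glance services. A hedged sketch – the file and option names are the ones we found in the Diablo packages, so double-check them in your /etc/glance directory. Set in /etc/glance/glance-registry.conf:

sql_connection = mysql://glancedbadmin:0815@10.110.12.2/glance

and restart the services:

root@c2n2:/dev# service glance-registry restart
root@c2n2:/dev# service glance-api restart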

 

Nova-“Core” packages installation

In our case, the cloud controller node takes on all roles of the OpenStack concept. So we have to install the remaining packages:

root@c2n2:/dev# apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy
root@c2n2:/dev# apt-get install nova-doc nova-scheduler
root@c2n2:/dev# apt-get install nova-network nova-compute

During the installation process, a new system user and group “nova” will be created. All nova services run as this user. nova-compute also installs kvm/qemu.
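
Since nova-compute relies on kvm, it is worth checking that the CPU actually provides hardware virtualization support (a quick sanity check; a result of 0 means there is no VT-x/AMD-V and kvm will fall back to very slow pure emulation):

root@c2n2:/dev# egrep -c '(vmx|svm)' /proc/cpuinfo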

 

Euca Tools installation

Although the whole cloud infrastructure can also be managed by using the OpenStack tools (nova-manage, …), we will use the popular euca2ools. More emphasis will be put on using the OpenStack tools for managing the cloud infrastructure in future releases. Currently, getting an overview of all OpenStack command line tools is quite challenging – the corresponding documentation is quite rare! (For additional information regarding the use of the several OpenStack tools, use the “--help” flag.)

root@c2n2:/dev# apt-get install -y euca2ools unzip

 

Configuring the Open Stack Cloud Environment: nova.conf

Besides the official documentation on the net,

root@c2n2:~# nova-api --helpshort

will give you an overview of the meaning and default values of all usable flags in the nova.conf configuration file.

Here is the nova.conf file we used for our OpenStack environment. Since we have a VLAN-separated lab infrastructure, we cannot use the VLAN Network Manager network mode. Therefore we switched to the FlatDHCP Manager network mode. Alternatively, the FlatManager could be used. To get an overview of the different network options, please follow the links: Link to NetworkManager Options (Openstack.org). More information about the FlatDHCPManager model: 1, 2, 3

Attention!
After reading through the provided links above, you’ve hopefully noticed that the node where the nova-network service is installed is a potential single point of failure for the whole cloud environment. If the node fails, there will be no network connectivity to your running virtual instances. To mitigate this flaw, have a look at the new high availability features of OpenStack. Detailed information about them would be out of the scope of this blog entry. For more information visit: http://unchainyourbrain.com/openstack/13-networking-in-nova
root@c2n2:~# cat /etc/nova/nova.conf 

# Set network type: FlatDHCPManager
--network_manager=nova.network.manager.FlatDHCPManager

The IPv4 and IPv6 network stacks are used by default. To avoid any IPv6 related problems, we decided that OpenStack should only use the IPv4 stack.

# Sets whether to use IPv6 addresses
--use_ipv6=false

The following abstract describes the network architecture we used for our Open Stack environment. As already mentioned, we use the FlatDHCPManager:

# Network Configuration
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--flat_interface=eth1
--flat_injected=False

We use eth1 as “flat_interface”. This means that this network interface of the hosting node (in our case e.g. c2n2) is used for bridging traffic from and to the virtual instances to the flat network or to the public network (e.g. the internet) via a virtual Ethernet bridge. This virtual bridge, whose name is br100 by default (“flat_network_bridge”), will be created automatically on startup of the nova-compute service. So, compared to older versions of OpenStack, it no longer has to be created manually. The “flat_injected” flag is used in conjunction with the direct injection of IPv6 addresses into the virtual instances during startup – therefore we deactivated it.

--public_interface=eth0
# Outgoing IP for Instances with fixed IP Range
--routing_source_ip=10.110.12.2

The flag “public_interface” denotes the interface that will be used to access your cloud environment from the “public” (or from any other part of your enterprise network); by default it is set to eth0. The “routing_source_ip” flag configures the IP address that will be used as the source IP (Source NAT) for the outgoing internet traffic of the virtual instances.

#Block of IP addresses that are fixed IPs
--fixed_range=192.168.0.0/16

OpenStack distinguishes between your normal local logical network and its logical flat network. If you’re using only one NIC, make sure that these two network types are logically separated through different logical IP networks. As already mentioned, the defined flat interface can be seen as the entry point to the flat network (if you have 2 NICs – as we have in our case). All virtual instances are connected to the flat network via the virtual bridges on the compute nodes. The “fixed_range” flag defines the IP address pool of this flat network.
In our case, every subnetwork defined for the different tenants of OpenStack must be within the 192.168.0.0/16 network.

# Top-Level Directory for Maintaining nova's state
--state_path=/var/lib/nova

The following flags configure the locations (nodes) where the several nova services reside.

# These indicate where the nova-api services are installed
--rabbit_host=10.110.12.2
--glance_api_servers=10.110.12.2:9292
--s3_host=10.110.12.2
--ec2_host=10.110.12.2
--osapi_host=10.110.12.2
--network_host=10.110.12.2
# For Debugging
--verbose

We are using the MySQL database instead of the default sqlite. Therefore we have to configure the connection string with the database host and the username and password of the nova database user.

# Database Connect
--sql_connection=mysql://novadbadmin:0815@10.110.12.2/nova
# Imageservice: Glance (per default)
--image_service=nova.image.glance.GlanceImageService
# Other
--logdir=/var/log/nova
--lock_path=/var/lock/nova
--force_dhcp_release=True

If the “use_deprecated_auth” flag is set, the Cactus method of authentication is used, with the novarc
file containing the credentials. Otherwise you are authenticated via the OpenStack identity service “keystone”. Also keep in mind that, according to the latest documentation, “with the Diablo release, the nova-manage command has been deprecated and you must specify if you want to use it by using the --use_deprecated_auth flag in nova.conf.” Somewhere else on the huge OpenStack site they state something different 😉

--use_deprecated_auth
--verbose

OpenStack uses iSCSI to connect to remote logical storage units before they get attached to the running virtual instances as persistent block storage devices. Since the ietadm program is part of Ubuntu, use it instead of tgtadm – otherwise you will get socket errors.

--iscsi_helper=ietadm

OpenStack uses the following IP prefix to look for possible iSCSI shared LUNs.

--iscsi_ip_prefix=10.110.12.2

Finally our complete nova.conf File:

root@c2n2:/tmp# cat /etc/nova/nova.conf
# Set network type
--network_manager=nova.network.manager.FlatDHCPManager

# Sets whether to use IPv6 addresses
--use_ipv6=false

# DHCP Bridge Information
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--flat_interface=eth1
--public_interface=eth0
--flat_injected=False
# --flat_network_dhcp_start=192.168.4.1

# Top-Level Directory for Maintaining nova's state
--state_path=/var/lib/nova

# These indicate where the nova-api services are installed
--rabbit_host=10.110.12.2
--glance_api_servers=10.110.12.2:9292
--s3_host=10.110.12.2
--ec2_host=10.110.12.2
--osapi_host=10.110.12.2
--network_host=10.110.12.2

#Block of IP addresses that are fixed IPs
--fixed_range=192.168.0.0/16

# Number of addresses in each private subnet
# --network_size=8

# For Debugging
--verbose

# Database Connect
--sql_connection=mysql://novadbadmin:0815@10.110.12.2/nova

# Imageservice: Glance (per default)
--image_service=nova.image.glance.GlanceImageService

# Outgoing IP for Instances with fixed IP Range
--routing_source_ip=10.110.12.2

# Other
--logdir=/var/log/nova
--lock_path=/var/lock/nova
--force_dhcp_release=True
--use_deprecated_auth
--iscsi_helper=ietadm
--verbose

#ISCSI
--iscsi_ip_prefix=10.110.12.2

 

Nova Volume Configuration (iSCSI)

Overview

As written before, nova-volume is similar to Amazon’s Elastic Block Storage (EBS). The nova-volume service is based on iSCSI and LVM.
You can create volumes and attach them to instances. It is important to know that a volume can only be attached to one instance at a time. On the server where you store the volumes, you need to install and start the iSCSI target service. The nova-volume service is responsible for managing the provided volumes (in our case it runs on the cloud controller node c2n2).
On the other servers (compute nodes), where the instances that should use a volume are running, you need to install an iSCSI initiator, which opens a connection to the iSCSI target and establishes a session for the use of the volume.

 

Installation of iSCSI-Target

On the controller you need to install the lvm2 package and the iSCSI-Target:

root@c2n2:~# apt-get install lvm2 iscsitarget

After the installation of both packages, you have to enable the iSCSI target by modifying “/etc/default/iscsitarget” and setting the flag “ISCSITARGET_ENABLE” to true, so that the daemon gets started by the startup script:

root@c2n2:~# vi /etc/default/iscsitarget
ISCSITARGET_ENABLE=true

If the error “FATAL: Module iscsi_trgt not found” occurs, you need to install “iscsitarget-dkms”. DKMS (“Dynamic Kernel Module Support”) is needed if the kernel does not contain the iSCSI target module’s source, so you have to build and install the kernel module with:

root@c2n2:~# apt-get install iscsitarget-dkms

 

Create a LVM Volume Group

Before you’re able to work with volumes, you have to create a logical volume group, which includes all LVM volumes that can be used collectively for block storage. This volume group will be shared afterwards via iSCSI.

For that purpose, the OpenStack documentation recommends reserving an extra partition during the operating system installation. If you forgot to create one – don’t hesitate, you can get the same result by using a container file as a loop device. Therefore we describe both options below. Choose one of them:

Creating a LVM Volume Group by using a partition

For that purpose, we reserved /dev/sda2. Create a new LVM physical volume:

root@c2n2:~# pvcreate /dev/sda2

 

Create a new volume group named “nova-volumes”
root@c2n2:~# vgcreate nova-volumes /dev/sda2
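
You can verify that the volume group exists and has the expected size (roughly the 25 GB of our sda2 partition):

root@c2n2:~# vgdisplay nova-volumes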

Creating a LVM Volume Group by using a container file as loop device

Create a container file (size 10 G):

root@c2n2:~# dd if=/dev/zero of=container_file bs=1G count=10

Look for a free loop device with

root@c2n2:~# losetup -f

and create one with

root@c2n2:~# losetup /dev/loop0 container_file

using the previously created container file. Now continue with the steps described in the previous option.
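
In other words, a short sketch assuming losetup -f returned /dev/loop0 as used above:

root@c2n2:~# pvcreate /dev/loop0
root@c2n2:~# vgcreate nova-volumes /dev/loop0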

 

Configuration of the iSCSI-Target

Attention:
There is also the “tgtd” (Target framework daemon), which can handle iSCSI too. This service was also running and caused conflicts (because it listens on the same port) when starting the iSCSI target daemon.
If you are running Ubuntu, check whether the tgtd service is already running (“ps aux | grep tgtd”) and listening on port “3260”.

root@c2n2:~# lsof -i -a | grep 3260
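
If tgtd turns out to be the process occupying the port, a hedged way to get it out of the way before starting iscsitarget (the corresponding service is usually called “tgt” on Ubuntu):

root@c2n2:~# service tgt stop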

After installation you need to start the iSCSI-Target service with the service-command:

root@c2n2:~# service iscsitarget start

It is very important to configure the “--iscsi_ip_prefix” flag in “/etc/nova/nova.conf” to tell nova-volume on which host the iSCSI target is running. You have the possibility to specify one specific host, like we did with “10.110.12.2”, or a whole IP range, and the iSCSI helper finds the target for you. If you specify a whole IP range, you need to add a trailing “.” – for example “10.110.12.”.
For configuring the iSCSI helper you need to put “ietadm” behind the “--iscsi_helper” flag.
This flag is important because there is also the “tgtadm” tool, the “Linux SCSI Target Framework Administration Utility”, which is part of “tgtd” and did not work for us.

root@c2n2:~# cat /etc/nova/nova.conf
...
# Other
--logdir=/var/log/nova
--lock_path=/var/lock/nova
--force_dhcp_release=True
--use_deprecated_auth
--iscsi_helper=ietadm
--verbose 

#ISCSI
--iscsi_ip_prefix=10.110.12.2

After starting the service, it should be listening on port “3260”. To check this, you can execute the following command:

root@c2n2:~# lsof -i -a | grep 3260

After running the command you should see the following service (binary in /usr/sbin/ietd) running for IPv4 and IPv6. “ietd” stands for “iSCSI enterprise target daemon”.

root@c2n2:~# lsof -i -a | grep 3260
ietd      17150     root    7u  IPv4  191849      0t0  TCP *:3260 (LISTEN)
ietd      17150     root    8u  IPv6  191850      0t0  TCP *:3260 (LISTEN)
Hint!
In our case, the controller node also acts as a compute node. Therefore the iSCSI initiator has to be installed and started on the cloud controller as well (see the following description).

 

Installation and configuration of iSCSI-Initiator

On the compute nodes you need to install the iSCSI initiator, which is used to establish an iSCSI session to the nova-volume service. The initiator is contained in the “open-iscsi” package.

root@c2n2:~# apt-get install open-iscsi

After installation you need to start the iSCSI-Initiator Service:

root@c2n2:~# service open-iscsi start

 

Discover and Test your iSCSI-target

For test purposes you can create a new volume and manually connect to it by using the iSCSI-initiator client program.
First, create a new volume using the “euca-create-volume” command, with options for the zone in which it should be created (see “euca-describe-availability-zones”) and the size it should have:

root@c2n2:~# euca-create-volume -s 1 -z nova
VOLUME  vol-00000011    1       creating (proj, None, None, None)       2011-12-08T17:30:17Z

After creating it, you can list the volume using “euca-describe-volumes”; the status should be “available”:

root@c2n2:~# euca-describe-volumes
VOLUME  vol-00000011     1              nova    available (proj, c2n2, None, None)      2011-12-08T17:30:17Z

To discover the targets offered by your iSCSI target, you can use the following command:

root@c2n2:~# iscsiadm -m discovery -t st -p 10.110.12.2
192.168.122.1:3260,1 iqn.2010-10.org.openstack:volume-00000011
10.110.12.2:3260,1 iqn.2010-10.org.openstack:volume-00000011
192.168.3.1:3260,1 iqn.2010-10.org.openstack:volume-00000011
192.168.4.1:3260,1 iqn.2010-10.org.openstack:volume-00000011
169.254.169.254:3260,1 iqn.2010-10.org.openstack:volume-00000011

Attention!
As you can see above, the iSCSI-target is running on all interfaces of our cloud-controller.

In production environments, this should be avoided by configuring the daemon to listen only on specific interfaces.

As mentioned before, the iSCSI initiator is also installed on our cloud-controller (see the hint above).
Now you can test your iSCSI installation by creating a connection to the target with your initiator, running the “iscsiadm” command.
We added options for the mode (“-m node”), the target volume from the discovery output (“--targetname iqn.2010-10.org.openstack:volume-00000011”) and the server running iSCSI (“--portal 10.110.12.2:3260”).

root@c2n2:~# iscsiadm -m node --targetname "iqn.2010-10.org.openstack:volume-00000011" --portal "10.110.12.2:3260" --login
Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-00000011, portal: 10.110.12.2,3260]
Login to [iface: default, target: iqn.2010-10.org.openstack:volume-00000011, portal: 10.110.12.2,3260]: successful

For listing all active iSCSI sessions, use the following command:

root@c2n2:~# iscsiadm -m session
tcp: [1] 10.110.12.2:3260,1 iqn.2010-10.org.openstack:volume-00000011

To close the connection to the target, type the following command (important: the “--logout” option at the end):

root@c2n2:~# iscsiadm -m node --targetname "iqn.2010-10.org.openstack:volume-00000011" --portal "10.110.12.2:3260" --logout
Logging out of session [sid: 1, target: iqn.2010-10.org.openstack:volume-00000011, portal: 10.110.12.2,3260]
Logout of [sid: 1, target: iqn.2010-10.org.openstack:volume-00000011, portal: 10.110.12.2,3260]: successful

 

Finalizing the OpenStack Cloud Environment Configuration on the Cloud Controller

First of all try to restart all installed services by using:

root@c2n2:/dev# service nova-volume restart; service nova-vncproxy restart; service nova-api restart; service nova-ajax-console-proxy restart
root@c2n2:/dev# service nova-doc restart; service nova-scheduler restart
root@c2n2:/dev# service nova-network restart; service nova-compute restart

Make sure, that all services are running properly.

Now initialize the Nova Database by using the command

root@c2n2:/etc/nova# nova-manage db sync

If you encounter an error in your log file like this one:

/usr/lib/python2.7/dist-packages/migrate/changeset/schema.py:124: MigrateDeprecationWarning: Passing a Column object to alter_column is deprecated. Just pass in keyword parameters instead. MigrateDeprecationWarning

Make sure that all services are running before you use the db sync command. In our case the nova-network service wasn’t running, therefore the networks table couldn’t be created in the nova database.

Next, create an OpenStack project admin user. This admin user will be responsible for managing the resources of a project (users, volumes, networks, etc.). We call this user “projadmin” and assign it the OpenStack built-in “admin” role.

root@c2n2:/etc/nova# nova-manage user admin projadmin
export EC2_ACCESS_KEY=1fc4e2b5-4982-4024-9462-2034ccef4c3c
export EC2_SECRET_KEY=f213fbba-2940-4f06-a571-e29d948e8f46

Now we create a new OpenStack cloud project (in the latest OpenStack release, Diablo, these are also called tenants) named “proj”. Our new user “projadmin” will be the admin user for the new project “proj”.

root@c2n2:/etc/nova# nova-manage project create proj projadmin

Now assign a project-specific network range to the new project. This network range represents the private IP addresses of the virtual instances of a project (comparable to the private IP addresses of Amazon instances). These project-specific network ranges are used by OpenStack (iptables) to separate the logical networks of different projects (and therefore also the groups of virtual instances which are connected to those logical networks). So we created one network with 256 possible addresses within the 192.168.4.0/24 block.

root@c2n2:/etc/nova# nova-manage network create projnet 192.168.4.0/24 1 256
root@c2n2:~# nova-manage floating create --ip_range=10.110.12.224/27
Attention!
If you ask yourself: hm… I created a new virtual network for the instances. Now I want to assign this network to one of my projects – how can I do that? Have a look at the options the “nova-manage network create” command provides by using “nova-manage network create --help”. Oh, there are options to define a project ID for associating a network with a project – great! Try it and nothing will happen. It seems to us that they simply forgot to implement that function. A workaround is provided here: https://bugs.launchpad.net/nova/+bug/901899

With “nova-manage floating create” you define a range of public IP addresses which can be assigned to running instances. Floating IP addresses in OpenStack are comparable to Elastic IPs in Amazon’s cloud.
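
A hedged sketch of how such a floating IP is later handed out to a running instance with the euca2ools (the returned address and the instance ID below are just placeholders from our range):

root@c2n2:~# euca-allocate-address
ADDRESS 10.110.12.225
root@c2n2:~# euca-associate-address -i i-0000001d 10.110.12.225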

The euca2ools need the credentials of our projadmin user and some additional information to manage a project. We get them by using the following commands:

root@c2n2:~# nova-manage project zipfile proj projadmin ~/novacreds.zip
root@c2n2:~# unzip novacreds.zip
Archive:  novacreds.zip
extracting: novarc
extracting: pk.pem
extracting: cert.pem
extracting: cacert.pem
root@c2n2:~# ls
cacert.pem  cert.pem  novacreds.zip  novarc  nova.zip  pk.pem

The euca2ools use shell environment variables to store the user credentials and other information. To set them, execute:

root@c2n2:~# source novarc

If you want, restart all nova-related services by using the following command:

root@c2n2:~# restart libvirt-bin; restart nova-network; restart nova-compute; restart nova-api; restart nova-scheduler; restart nova-volume; restart glance-api; restart glance-registry

and check whether your services are running:

root@c2n2:/tmp# euca-describe-availability-zones verbose
AVAILABILITYZONE        nova    available
AVAILABILITYZONE        |- c2n2
AVAILABILITYZONE        | |- nova-network       enabled :-) 2011-12-09 23:06:54
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2011-12-09 23:06:51
AVAILABILITYZONE        | |- nova-scheduler     enabled :-) 2011-12-09 23:06:52
AVAILABILITYZONE        | |- nova-volume        enabled :-) 2011-12-09 23:06:53

 

Installation of the compute node

Install the bridge utils:

root@c2n1:~# apt-get install -y bridge-utils

Install and configure ntp so that c2n2 will act as the ntp server for c2n1:

root@c2n1:~# apt-get install -y ntp

and add the following line to /etc/ntp.conf:

server 10.110.12.2 maxpoll 5

Restart the ntp service with

root@c2n1:~# /etc/init.d/ntp restart

and check whether the time synchronization works properly:

root@c2n1:~# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*c2n2.stratus.la 10.203.50.250    4 u    5   32  377    0.137    0.007   0.008

Now install the OpenStack Packages needed for the compute node:

root@c2n1:~# apt-get install -y nova-common python-nova nova-compute

Copy /etc/nova/nova.conf from our cloud controller node c2n2 to /etc/nova/nova.conf on our compute node c2n1.

root@c2n1:~# scp root@c2n2.stratus.lab:/etc/nova/nova.conf /etc/nova/nova.conf
root@c2n2.stratus.lab's password:
nova.conf                                     100% 1262     1.2KB/s   00:00

Restart the nova-compute service:

root@c2n1:~# restart nova-compute

Check whether the new node has been successfully integrated into our existing cloud infrastructure:

root@c2n2:/tmp# euca-describe-availability-zones verbose
AVAILABILITYZONE        nova    available
AVAILABILITYZONE        |- c2n1
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2011-12-09 23:06:52
AVAILABILITYZONE        |- c2n2
AVAILABILITYZONE        | |- nova-network       enabled :-) 2011-12-09 23:06:54
AVAILABILITYZONE        | |- nova-compute       enabled :-) 2011-12-09 23:06:51
AVAILABILITYZONE        | |- nova-scheduler     enabled :-) 2011-12-09 23:06:52
AVAILABILITYZONE        | |- nova-volume        enabled :-) 2011-12-09 23:06:53

 

Creating a custom image

Creating a virtual instance image is quite easy. All you need is a node where kvm is installed and your favourite operating system as an ISO file (ubuntu.iso). We used the resource-efficient minimal installation option of Ubuntu Server 11.10.

First of all we have to create a hard disk image with

root@c2n2:~# kvm-img create -f raw server.img 1.5G
Hint!
1.5 GB should be enough for our purposes. The basic installation of the Ubuntu server takes around 930 MB.

Now we can start the minimal installation of Ubuntu Server 11.10 using the previously created server.img file:

root@c2n2:~# kvm -m 256 -cdrom ubuntu.iso -hda server.img -boot d -net nic -net user -nographic -vnc :0
Hint!
For the German keyboard layout add the option: -k de

Use

vncviewer c2n2.stratus.lab:0

to get graphical access to the installation dialog of Ubuntu Server 11.10. On the start screen press F4 and select the “Install a minimal virtual machine” option. Then put the focus back on “Install Ubuntu Server” and hit enter. Configure the following partition scheme:

  • 1.5 GB, ext4, /, boot flag (this will be our / partition, which we’re going to extract from server.img afterwards)

During the installation process you’ll be asked whether you want to install “Basic Ubuntu Server” – don’t choose this option (otherwise a list of packages that are nearly useless for our purposes will be installed). In the same dialog window choose the option below it: “openssh-server”.

After the installation procedure has completed, restart your VM (we call it ubuntuserverimage) and install at least:

root@ubuntuserverimage:~# apt-get install cloud-init

Now install and configure all packages and servers you need in your VM image.

Within your VM: remove the network persistence rules from /etc/udev/rules.d as their presence will result in the network interface in the instance coming up as an interface other than eth0.

root@ubuntuserverimage:~# rm -rf /etc/udev/rules.d/70-persistent-net.rules

Use scp to copy the kernel and the initrd image from your VM to any local directory. You’ll need them later when you publish your VM image into the OpenStack cloud environment.

root@ubuntuserverimage:~# scp /vmlinuz root@10.110.12.2:~/UbuntuServerImage
root@ubuntuserverimage:~# scp /initrd.img root@10.110.12.2:~/UbuntuServerImage

Based on the instance type chosen on startup (OpenStack calls them flavours), the disk sizes of the instances can vary dynamically – the original UUID of the disk can become invalid. Instead of the UUID we have to use the file system label as a partition identifier.

Edit /etc/fstab and change the following line of the root partition from

root@ubuntuserverimage:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/sda1 during installation
UUID=c535e1a1-2be9-4fb7-8ccd-bac3d9aa5afb /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda3 during installation

to

LABEL=uec-rootfs    /    ext4        defaults    0   0

Now we have to extract the pure ext4 partition of the server.img file. We just want to have our own individually installed operating system on our cloud instance image. We do this by attaching the server.img image file as a loop device.

root@c2n2:~# losetup -f  server.img
root@c2n2:~# losetup -a

You will then see output like this, showing that the image acts as a local loop device:

root@c2n2:~# losetup -a
[...]
/dev/loop1: [0801]:1048591 (/tmp/server.img)

Now we need the partition table for extracting the ext4 (/) partition out of the server.img image:

root@c2n2:~# fdisk -l /dev/loop2

Disk /dev/loop2: 1610 MB, 1610612736 bytes
175 heads, 43 sectors/track, 418 cylinders, total 3145728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000202f8

      Device Boot      Start         End      Blocks   Id  System
/dev/loop2p1   *        2048     3143679     1570816   83  Linux

So our partition starts at sector 2048. We have to multiply this value by the sector size (512 bytes) to get the starting offset for dd: 2048 * 512 = 1048576.

root@c2n2:~# losetup -f -o 1048576 server.img
root@c2n2:~# losetup -a
[...]
/dev/loop1: [0801]:1048591 (/tmp/server.img), offset 1048576

and now we extract our ext4 partition into a new image.

root@c2n2:~# dd if=/dev/loop1 of=serverfinal.img

We don’t need the loop device anymore, so let’s destroy it 😉

root@c2n2:~# sudo losetup -d /dev/loop1

We have to change the filesystem label to be compatible with our changes in /etc/fstab:

root@c2n2:~# tune2fs -L uec-rootfs serverfinal.img
tune2fs 1.41.14 (22-Dec-2010)

Now we’re able to publish our first image:

root@c2n2:/tmp# uec-publish-image -t image --kernel-file vmlinuz --ramdisk-file initrd.img amd64 serverfinal.img max

The publishing process takes a while. If it has finished correctly, you will see the following output or something similar:

root@c2n2:~# euca-describe-images
IMAGE   ami-0000001d    max/vsmallfinal.img.manifest.xml                available       private         x86_64  machine aki-0000001a    ari-0000001b instance-store
IMAGE   ari-0000001b    max/initrd.img.manifest.xml            available       private         x86_64  ramdisk                 instance-store
IMAGE   aki-0000001a    max/vmlinuz.manifest.xml               available       private         x86_64  kernel                  instance-store

 

Instance Management

Starting and  connecting to instances

To start an instance using a published image, you need to run the “euca-describe-images” command first to list all your images:

root@c2n2:~# euca-describe-images
IMAGE   ami-0000001d    max/vsmallfinal.img.manifest.xml                available       private         x86_64  machine aki-0000001a    ari-0000001b instance-store
IMAGE   ari-0000001b    max/initrd.img.manifest.xml            available       private         x86_64  ramdisk                 instance-store
IMAGE   aki-0000001a    max/vmlinuz.manifest.xml               available       private         x86_64  kernel                  instance-store

As you can see above, there is a custom image (“vsmallfinal.img”) in the bucket “max” with a ramdisk and a kernel.
To start this image (create an instance), you have to specify the ID (for the custom image “ami-0000001d”).
It is important to add the keyfile (“-k mykey”) without the extension of the keyfile. The key gets injected into the instance so that you can connect to the started instance via ssh.
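
If you haven’t created the keypair yet, a short sketch of how to do that with the euca2ools (the name “mykey” matches the commands below):

root@c2n2:~# euca-add-keypair mykey > mykey.priv
root@c2n2:~# chmod 600 mykey.priv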

root@c2n2:~# euca-run-instances ami-0000001d -k mykey -t m1.tiny
RESERVATION     r-yk0nd2np      proj    default
INSTANCE        i-0000001d      ami-0000001d                    pending mykey (proj, None)      0               m1.tiny 2011-12-09T23:39:01Z    unknown zone  aki-0000001a    ari-0000001b
Attention!
If you add the extension of the key-file you will get following error: “KeypairNotFound: Keypair %(keypair_name)s not found for user %(user_id)s”

With the following command you can check which instances are running. As you can see, the instance is in status “pending”, which changes after some seconds to “running”:

root@c2n2:~# euca-describe-instances
RESERVATION     r-yk0nd2np      proj    default
INSTANCE        i-0000001d      ami-0000001d    192.168.4.7     192.168.4.7     running mykey (proj, c2n2)      0               m1.tiny 2011-12-09T23:39:01Z  nova    aki-0000001a    ari-0000001b

To connect to the started instance, run the following command and add the “-i” option to specify the keyfile that should be used. This time it is important that you add the extension of the keyfile (“.priv”).

root@c2n2:~# ssh -i mykey.priv root@192.168.4.7

 

Stopping and terminating instances

You have to distinguish between stopping and terminating an instance. To shut an instance down completely you have to use the terminate command from the euca2ools.

If you want to terminate an instance, you have to run “euca-terminate-instances” and add the instance ID of the running machine.

root@c2n2:~# euca-describe-instances
RESERVATION     r-yk0nd2np      proj    default
INSTANCE        i-0000001d      ami-0000001d    192.168.4.7     192.168.4.7     running mykey (proj, c2n2)      0               m1.tiny 2011-12-09T23:39:01Z  nova    aki-0000001a    ari-0000001b

In the above example the running instance has the ID “i-0000001d”. To terminate that instance run the following command:

root@c2n2:~# euca-terminate-instances i-0000001d

You have to check on your own whether the instance has stopped, using “euca-describe-instances”.

If you only want to stop an instance, run the following command:

root@c2n2:~# euca-stop-instances i-0000001d

 

Volume Management

You have the possibility to attach or detach a persistent block storage volume to an instance as described above. To do so, you have to create a volume with the euca2ools.
First, create a new volume using the “euca-create-volume” command, with options for the zone in which it should be created (see “euca-describe-availability-zones”) and the size it should have.

Creating Volumes

root@c2n2:~# euca-create-volume -s 1 -z nova
VOLUME  vol-00000011    1       creating (proj, None, None, None)       2011-12-08T17:30:17Z

After creating it, you can list the volume using “euca-describe-volumes”; the status should be “available”:

root@c2n2:~# euca-describe-volumes
VOLUME  vol-00000011     1              nova    available (proj, c2n2, None, None)      2011-12-08T17:30:17Z

To attach the volume to the instance, you need to specify the instance ID (“-i i-0000001b”) of the running instance, the local device name inside the guest VM (“-d /dev/sdh”) and the volume ID (“vol-00000011”):

root@c2n2:~# euca-attach-volume -i i-0000001b -d /dev/sdh vol-00000011
VOLUME  vol-00000011

To check whether the volume is used as expected, you can run the euca2ools again; you should see that the state of the attached volume has changed to “in-use”.

root@c2n2:~# euca-describe-volumes
VOLUME  vol-00000011     1              nova    in-use (proj, c2n2, i-0000001b[c2n1], /dev/sdh)    2011-12-08T17:30:17Z

Now you can connect to the instance over ssh and check the attachment by running “dmesg | tail”. To use the disk, you have to run “fdisk /dev/sdh”.
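
Inside the instance, a minimal hedged example of putting a filesystem directly on the new volume and mounting it (the device name may differ, e.g. /dev/vdb when virtio is used – check “dmesg | tail”; partition it with fdisk first if you prefer a partition table):

root@instance:~# mkfs.ext4 /dev/sdh
root@instance:~# mkdir -p /mnt/volume
root@instance:~# mount /dev/sdh /mnt/volume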

 

Creating snapshots of volumes

You can create snapshots of existing volumes and create new volumes out of these snapshots and attach them to other instances.

root@c2n2:~# euca-describe-volumes
VOLUME  vol-00000011     1              nova    in-use (proj, c2n2, i-0000001b[c2n1], /dev/sdh)    2011-12-08T17:30:17Z

Attention:
The most important thing is that you cannot create a snapshot of a volume if it is marked as in use:

As you can see below, the volume is in use and snapshot-creation results in an error:

root@c2n2:~# euca-create-snapshot vol-00000011
ApiError: Volume status must be available

To create a new snapshot you don’t have to terminate the instance, just stop it:

root@c2n2:~# euca-stop-instances i-0000001b

After this, you can create the snapshot without any errors:

root@c2n2:~# euca-create-snapshot vol-00000011
SNAPSHOT        snap-00000004   vol-00000011    creating        2011-12-08T19:19:39Z       0%

Check the state of the snapshot using “euca-describe-snapshots”:

root@c2n2:~# euca-describe-snapshots
SNAPSHOT        snap-00000004   vol-00000011    available       2011-12-08T

Now you have the possibility to create a new volume from an existing snapshot using the “--snapshot” option:

root@c2n2:~# euca-create-volume -z nova --snapshot snap-00000004
VOLUME  vol-00000012    1       snap-00000004   creating (proj, None, None, None)  2011-12-08T19:30:29Z

After creation, there is a new volume with a reference to the base snapshot:

root@c2n2:~# euca-describe-volumes
VOLUME  vol-00000011     1              nova    available (proj, c2n2, None, None)      2011-12-08T17:30:17Z
VOLUME  vol-00000012     1      snap-00000004   nova    creating (proj, c2n2, None, None)       2011-12-08T19:30:29Z
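
The new volume can then be attached to another (running) instance just like any other volume (a sketch – the instance ID is a placeholder):

root@c2n2:~# euca-attach-volume -i i-0000001d -d /dev/sdh vol-00000012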
