Nov 11 2014
 

It took me several re-stacks, but I finally got a working devstack with the Ironic and Heat services running. I mostly followed the directions here, but I ended up making a few changes to the local.conf file.

Start with cloning the repo and creating the stack user:

$ git clone https://github.com/openstack-dev/devstack.git devstack
$ sudo ./devstack/tools/create-stack-user.sh 

Now, switch to the ‘stack’ user and clone again:

$ sudo su stack
$ cd ~
$ git clone https://github.com/openstack-dev/devstack.git devstack
$ cd devstack

Create a local.conf file in the devstack directory:

[[local|localrc]]

# Credentials
DATABASE_PASSWORD=secrete
ADMIN_PASSWORD=secrete
SERVICE_PASSWORD=secrete
SERVICE_TOKEN=secrete
RABBIT_PASSWORD=secrete

# Enable Ironic API and Ironic Conductor
enable_service ironic
enable_service ir-api
enable_service ir-cond

# Enable Neutron which is required by Ironic and disable nova-network.
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,ironic,ir-api,ir-cond
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,horizon

# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_VM_SSH_PORT=22
IRONIC_BAREMETAL_BASIC_OPS=True

# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1024
IRONIC_VM_SPECS_DISK=10

# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0

VIRT_DRIVER=ironic

# By default, DevStack creates a 10.0.0.0/24 network for instances.
# If this overlaps with the hosts network, you may adjust with the
# following.
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256

# Log all output to files
LOGFILE=$HOME/devstack.log
SCREEN_LOGDIR=$HOME/logs
IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs

Now your devstack is ready for setup. Simply run:

./stack.sh

After devstack finishes running, you should see something similar to:

Horizon is now available at http://10.0.0.1/
Keystone is serving at http://10.0.0.1:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: secrete
This is your host ip: 10.0.0.1

Source the credentials for the demo user:

$ source ~/devstack/openrc

Create an ssh key:

$ nova keypair-add test > test.pem
$ chmod 600 test.pem

Check the available flavors; note that baremetal is one of them:

$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 551 | baremetal | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Select the image and boot the first instance on a bare metal node:

$ image=$(nova image-list | egrep "$DEFAULT_IMAGE_NAME"'[^-]' | awk '{ print $2 }')

$ nova boot --flavor baremetal --image $image --key_name test my-first-metal
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | scheduling                                                     |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | 99wcXpnQFkck                                                   |
| config_drive                         |                                                                |
| created                              | 2014-11-12T04:56:42Z                                           |
| flavor                               | baremetal (551)                                                |
| hostId                               |                                                                |
| id                                   | 53094f5d-5f48-4059-9547-9e297a5f324b                           |
| image                                | cirros-0.3.2-x86_64-uec (183b8227-10a0-4cf1-afd9-c4ff1272dc41) |
| key_name                             | test                                                           |
| metadata                             | {}                                                             |
| name                                 | my-first-metal                                                 |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | BUILD                                                          |
| tenant_id                            | c6cf561507d2425480279f2209209b77                               |
| updated                              | 2014-11-12T04:56:43Z                                           |
| user_id                              | ad3318ff89014b7a8bbf6d589806e501                               |
+--------------------------------------+----------------------------------------------------------------+
$ nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| 53094f5d-5f48-4059-9547-9e297a5f324b | my-first-metal | ACTIVE | -          | Running     | private=10.1.0.4 |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
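Once the server shows ACTIVE, you can log into it with the key created earlier. A quick sketch; the sed pattern and the cirros login are assumptions based on the CirrOS image and the private=<ip> column format shown above:

```shell
# Pull the fixed IP out of `nova list` and ssh in with the test key.
# The CirrOS image's default user is `cirros`; the sed pattern assumes
# the `private=<ip>` format shown in the table above.
ip=$(nova list | grep my-first-metal | sed -n 's/.*private=\([0-9.]*\).*/\1/p')
ssh -i test.pem cirros@"$ip"
```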

To see all the “bare metal” nodes and what is going on with them, use the admin user:

. ~/devstack/openrc admin admin 

There were 3 “bare metal” nodes created for us via the local.conf configuration. Since this is devstack, the nodes appeared for us automatically, courtesy of virtualization magic.

Before any nodes are built, all the “bare metal” instances will be powered off and not provisioned. Remember to use the admin user for all the ironic commands.

$ ironic node-list
+--------------------------------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | None          | power off   | None               | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None          | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None          | power off   | None               | False       |
+--------------------------------------+---------------+-------------+--------------------+-------------+

Once the instance above is booted, this will change; keep an eye on the “Provisioning State” and “Power State” columns:

$ ironic node-list
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | 53094f5d-5f48-4059-9547-9e297a5f324b | power off   | deploying          | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None                                 | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None                                 | power off   | None               | False       |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
$ ironic node-list
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | 53094f5d-5f48-4059-9547-9e297a5f324b | power on    | wait call-back     | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None                                 | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None                                 | power off   | None               | False       |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
$ ironic node-list
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | 53094f5d-5f48-4059-9547-9e297a5f324b | power on    | active             | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None                                 | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None                                 | power off   | None               | False       |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
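Rather than re-running node-list by hand, you can poll until the deploy finishes. A rough sketch; the grep pattern is loose and the interval arbitrary:

```shell
# Wait for a node to reach the "active" provisioning state.
# Tighten the pattern (e.g. match on the node UUID) if several
# nodes are deploying at once.
until ironic node-list | grep -q ' active '; do
    sleep 10
done
echo "deploy finished"
```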

Remember to check the official documentation for devstack + ironic: http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html#deploying-ironic-with-devstack

Happy stacking!
-eglute

 Posted at 11:50 pm
May 15 2013
 

I just started looking at Ceph and wanted to set it up to be used with DevStack. The Ceph website has great documentation, so if you are looking for how to set up OpenStack and Ceph for production use, check it out! However, if you just want to get DevStack with Ceph running, follow along with this post.

I started out following directions on how to run Ceph and DevStack on EC2, but since I am using a Rackspace Cloud Server, some of the instructions were not relevant to me. Also, since everything is running on one server, I am not using Ceph auth.

Install DevStack

The server: Ubuntu 12.04, 4GB RAM. After creating it through the control panel, log in, do the usual updates, and create a new user. DevStack will create a user for you, but I find that things run much smoother if I create the user myself.

apt-get update
apt-get install git
groupadd stack
useradd -g stack -s /bin/bash -d /opt/stack -m stack
visudo

In the sudoers file, add this line:

%stack ALL=(ALL:ALL) NOPASSWD: ALL

This allows the stack user to use sudo without a password.

sudo su stack

As the stack user, install DevStack:

git clone git://github.com/openstack-dev/devstack.git
cd devstack

Create a localrc file:

FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=password

DEST=/opt/stack
LOGFILE=stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True

Run the install script:

./stack.sh

Go get some coffee, and when all the scripts complete, log in to Horizon and make sure creating instances works (create one!).

Install Ceph

Now that DevStack is installed, it is time to install Ceph. I followed the “5 Minute Quick Start” with a couple of small changes and omissions.

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph

Create Ceph configuration file:

sudo vi /etc/ceph/ceph.conf

and add the following:

[global]

# For version 0.55 and beyond, you must explicitly enable
# or disable authentication with "auth" entries in [global].

auth cluster required = none
auth service required = none
auth client required = none

[osd]
osd journal size = 1000

#The following assumes ext4 filesystem.
filestore xattr use omap = true
# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following settings and replace the values
# in braces with appropriate values, or leave the following settings
# commented out to accept the default values. You must specify the
# --mkfs option with mkcephfs in order for the deployment script to
# utilize the following settings, and you must define the 'devs'
# option for each osd instance; see below.

#osd mkfs type = {fs-type}
#osd mkfs options {fs-type} = {mkfs options} # default for xfs is "-f"
#osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"

# For example, for ext4, the mount option might look like this:

#osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]

host = {hostname}
mon addr = {IP}:6789

[osd.0]
host = {hostname}

# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following setting for each OSD and specify
# a path to the device if you use mkcephfs with the --mkfs option.

#devs = {path-to-device}

[osd.1]
host = {hostname}
#devs = {path-to-device}

Note that you will need to change the hostname and IP address to match your own. If you want to use Ceph auth, you will need to change “none” to “cephx”.

Create directories for Ceph daemons:

sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a

Deploy Ceph, generate user key, start Ceph:

cd /etc/ceph
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
sudo service ceph -a start
sudo ceph health
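The cluster can take a little while to settle after starting, so “ceph health” may not report OK on the first try. A small wait loop helps (the interval is arbitrary):

```shell
# Poll `ceph health` until the cluster reports HEALTH_OK.
until sudo ceph health | grep -q HEALTH_OK; do
    sleep 5
done
```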

When the “ceph health” command returns “HEALTH_OK”, Ceph is ready to be used. Create the volumes and images pools:

ceph osd pool create volumes 128
ceph osd pool create images 128

Install client libraries:

sudo apt-get install python-ceph

Setup pool permissions, create users and keyrings:

ceph auth get-or-create client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.images mon 'allow r' osd 'allow rwx pool=images'
sudo useradd glance
sudo useradd cinder
ceph auth get-or-create client.images | sudo tee /etc/ceph/ceph.client.images.keyring
sudo chown glance:glance /etc/ceph/ceph.client.images.keyring
ceph auth get-or-create client.volumes | sudo tee /etc/ceph/ceph.client.volumes.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
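As a quick sanity check, both clients should now show up in the auth database. This assumes entry names start at the beginning of the line in “ceph auth list” output:

```shell
# Both client.volumes and client.images should be listed.
ceph auth list 2>/dev/null | grep -E '^client\.(volumes|images)'
```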

Edit the glance configuration file, setting the default store to rbd:

sudo vi /etc/glance/glance-api.conf
default_store = rbd

[ ... ]

# ============ RBD Store Options =============================

# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client. section
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# RADOS user to authenticate as (only applicable if using cephx)
rbd_store_user = images

# RADOS pool in which images are stored
rbd_store_pool = images

# Images will be chunked into objects of this size (in megabytes).
# For best performance, this should be a power of two
rbd_store_chunk_size = 8

Add the following lines to the cinder configuration:

sudo vi /etc/cinder/cinder.conf
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes

Configure DevStack to Use Ceph

At this point, both Ceph and DevStack are configured. However, since the configuration files for glance and cinder were changed, all glance and cinder services need to be restarted.

To restart the services in DevStack, re-join the screen session, select each service, then stop and restart it.
Rejoin screen:

screen -r

If you get something like this:

Cannot open your terminal '/dev/pts/0' - please check.

This means you do not have permission to access it. A simple fix:

sudo chmod 777 /dev/pts/0

After re-joining screen, press control-a, followed by ” (a double quote). This will present you with all the running services:

Num Name                    Flags

  0 shell                       $
  1 key                      $(L)
  2 horizon                  $(L)
  3 g-reg                    $(L)
  4 g-api                    $(L)
  5 n-api                    $(L)
  6 n-cond                   $(L)
  7 n-cpu                    $(L)
  8 n-crt                    $(L)
  9 n-net                    $(L)
 10 n-sch                    $(L)
 11 n-novnc                  $(L)
 12 n-xvnc                   $(L)
 13 n-cauth                  $(L)
 14 n-obj                    $(L)
 15 c-api                    $(L)
 16 c-vol                    $(L)
 17 c-sch                    $(L)

Use the up and down arrows to select the service to restart, and press enter. Type control-c to stop the service, then press the up arrow once to bring back the previous command, and enter to start it up again. Rinse and repeat.

Services that need to be restarted:

  3 g-reg                    $(L)
  4 g-api                    $(L)
 15 c-api                    $(L)
 16 c-vol                    $(L)
 17 c-sch                    $(L)

If you are looking for some general DevStack, logging, and screen tips, check out this blog: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/

Use Ceph

Now your DevStack will use Ceph! Go to Horizon and, under Project/Volumes, create a couple of new volumes and attach them to your VM. On the command line, list the volumes:

rbd ls -p volumes

You should see something similar to this:

volume-c74332e9-1c97-4ee9-be15-c8bdf0103910
volume-e69fa2df-b9e5-4ab2-8664-a5af2bf14098
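Each image name embeds the Cinder volume ID, so you can match them up against “cinder list”. One way to strip the prefix:

```shell
# Strip the `volume-` prefix to recover the Cinder volume IDs.
rbd ls -p volumes | sed 's/^volume-//'
```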

This is all you should need to get going with Ceph and DevStack.

Useful resources:

DevStack: http://devstack.org/
Ceph: http://ceph.com/docs/master/
Ceph + DevStack on EC2: http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
Ceph 5 Minute Quick Start: http://ceph.com/docs/master/start/quick-start/
Ceph and OpenStack: http://ceph.com/docs/master/rbd/rbd-openstack/
Some good DevStack logging and screen tips: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/

Have fun with it!

-eglute

 Posted at 3:30 pm