Jan 13, 2015

Every time I am involved in something related to OpenStack, it is amazing to see how great the OpenStack community is. Today is the second day of the OpenStack elections, and it is very exciting to be part of them. Over the last couple of days, people who do not know me in person have written to me, talked to me at work, and shown their support in other ways.

Check out my candidate profile here: http://www.openstack.org/community/members/profile/3106

I know the OpenStack community is very active; however, this time we are asked to be even more involved than usual: not only do you need to vote, you also need to make sure that your friends and colleagues do not forget to vote.

HOW TO VOTE: If you are an eligible voter, you should have received an email with the subject “OpenStack Foundation – 2015 Individual Director Election” from secretary@openstack.org. This email includes your unique voting link. If you did not receive an email, please contact secretary@openstack.org.

Vote for Egle!

See all the candidates running for the elections: http://www.openstack.org/election/2015-individual-director-election/CandidateList

-eglute

Nov 11, 2014

It took me several re-stacks, but I finally got a working devstack with the Ironic and Heat services running. I mostly followed the directions here, but I ended up making a few changes in the local.conf file.

Start with cloning the repo and creating the stack user:

$ git clone https://github.com/openstack-dev/devstack.git devstack
$ sudo ./devstack/tools/create-stack-user.sh 

Now, switch to the ‘stack’ user and clone again:

$ sudo su stack
$ cd ~
$ git clone https://github.com/openstack-dev/devstack.git devstack
$ cd devstack

Create a local.conf file in the devstack directory:

[[local|localrc]]

# Credentials
DATABASE_PASSWORD=secrete
ADMIN_PASSWORD=secrete
SERVICE_PASSWORD=secrete
SERVICE_TOKEN=secrete
RABBIT_PASSWORD=secrete

# Enable Ironic API and Ironic Conductor
enable_service ironic
enable_service ir-api
enable_service ir-cond

# Enable Neutron which is required by Ironic and disable nova-network.
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas
ENABLED_SERVICES+=,g-api,g-reg
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
ENABLED_SERVICES+=,ironic,ir-api,ir-cond
ENABLED_SERVICES+=,heat,h-api,h-api-cfn,h-api-cw,h-eng
ENABLED_SERVICES+=,horizon

# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_VM_SSH_PORT=22
IRONIC_BAREMETAL_BASIC_OPS=True

# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1024
IRONIC_VM_SPECS_DISK=10

# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0

VIRT_DRIVER=ironic

# By default, DevStack creates a 10.0.0.0/24 network for instances.
# If this overlaps with the hosts network, you may adjust with the
# following.
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256

# Log all output to files
LOGFILE=$HOME/devstack.log
SCREEN_LOGDIR=$HOME/logs
IRONIC_VM_LOG_DIR=$HOME/ironic-bm-logs

Now your devstack is ready for setup; simply run:

./stack.sh

After devstack finishes running, you should see something similar to:

Horizon is now available at http://10.0.0.1/
Keystone is serving at http://10.0.0.1:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: secrete
This is your host ip: 10.0.0.1

Source the credentials for the demo user:

$ source ~/devstack/openrc

Create an ssh key:

$ nova keypair-add test > test.pem
$ chmod 600 test.pem

Check the available flavors; note that baremetal is one of them:

$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 551 | baremetal | 1024      | 10   | 0         |      | 1     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Boot the first instance using Ironic:

$ image=$(nova image-list | egrep "$DEFAULT_IMAGE_NAME"'[^-]' | awk '{ print $2 }')

$ nova boot --flavor baremetal --image $image --key_name test my-first-metal
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | scheduling                                                     |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | 99wcXpnQFkck                                                   |
| config_drive                         |                                                                |
| created                              | 2014-11-12T04:56:42Z                                           |
| flavor                               | baremetal (551)                                                |
| hostId                               |                                                                |
| id                                   | 53094f5d-5f48-4059-9547-9e297a5f324b                           |
| image                                | cirros-0.3.2-x86_64-uec (183b8227-10a0-4cf1-afd9-c4ff1272dc41) |
| key_name                             | test                                                           |
| metadata                             | {}                                                             |
| name                                 | my-first-metal                                                 |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| security_groups                      | default                                                        |
| status                               | BUILD                                                          |
| tenant_id                            | c6cf561507d2425480279f2209209b77                               |
| updated                              | 2014-11-12T04:56:43Z                                           |
| user_id                              | ad3318ff89014b7a8bbf6d589806e501                               |
+--------------------------------------+----------------------------------------------------------------+
stack@egle-node:~$ nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks         |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| 53094f5d-5f48-4059-9547-9e297a5f324b | my-first-metal | ACTIVE | -          | Running     | private=10.1.0.4 |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
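
Once the instance is ACTIVE, you should be able to ssh into it with the key created earlier. This is a quick check, assuming the default cirros user and the IP shown by nova list above; if the host cannot reach the fixed network directly, run the same command inside the qdhcp namespace with sudo ip netns exec:

$ ssh -i test.pem cirros@10.1.0.4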

To see all the “bare metal” nodes and what is going on with them, use the admin user:

. ~/devstack/openrc admin admin 

Three “bare metal” nodes were created for us based on the local.conf configuration. Since this is devstack, the nodes appeared automatically: they are really virtual machines posing as bare metal.

Before any nodes are built, all the “bare metal” instances will be powered off and unprovisioned. Remember to use the admin user for all the ironic commands.

$ ironic node-list
+--------------------------------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | None          | power off   | None               | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None          | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None          | power off   | None               | False       |
+--------------------------------------+---------------+-------------+--------------------+-------------+

Once the instance above is booted, this will change; pay attention to the “Provisioning State” and “Power State” columns:

$ ironic node-list
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | 53094f5d-5f48-4059-9547-9e297a5f324b | power off   | deploying          | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None                                 | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None                                 | power off   | None               | False       |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
stack@egle-node:~/devstack$ ironic node-list
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | 53094f5d-5f48-4059-9547-9e297a5f324b | power on    | wait call-back     | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None                                 | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None                                 | power off   | None               | False       |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
$ ironic node-list
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+
| 321122bf-a187-456b-9232-d34ec2279941 | 53094f5d-5f48-4059-9547-9e297a5f324b | power on    | active             | False       |
| a636cb12-3491-4abb-91a1-05ebd7137e8a | None                                 | power off   | None               | False       |
| 1af459b4-ba3a-4356-8d7a-c2e7ce70a900 | None                                 | power off   | None               | False       |
+--------------------------------------+--------------------------------------+-------------+--------------------+-------------+

Remember to check the official documentation for devstack + ironic: http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html#deploying-ironic-with-devstack
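
Since Heat was also enabled in local.conf, you can quickly confirm it came up. This is just a smoke test; the list will be empty until you create a stack:

$ heat stack-list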

Happy stacking!
-eglute

Mar 19, 2014

Last week I was troubleshooting neutron issues, and one of my colleagues showed me a cool new tool to help debug some of them.
Here is the link to the tool and its current documentation: https://github.com/openstack/neutron/tree/master/neutron/debug.

Basic usage:

Get a neutron network UUID:

neutron net-list

Add a probe on the network:

neutron-debug --config-file <config file for l3 agent> probe-create <network UUID>
neutron-debug --config-file /etc/neutron/l3_agent.ini probe-create 81abae05-657d-4dc9-ad30-c42f5b7d0c75

List network namespaces:

$ ip netns
 qprobe-43d574f6-ee34-43d8-a3b4-4c7e686312cb
 qrouter-7f660048-332b-4d39-85f9-05b5854f64ad
 qdhcp-97eebde0-c20b-496a-a086-77c95b587291
 qdhcp-81abae05-657d-4dc9-ad30-c42f5b7d0c75

List probes:

neutron-debug --config-file /etc/neutron/l3_agent.ini probe-list

Use the newly created probe to ping all the things on the network (if it is not working, you need to update neutron-debug):

neutron-debug --config-file /etc/neutron/l3_agent.ini ping-all 43d574f6-ee34-43d8-a3b4-4c7e686312cb
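
You can also run commands from inside the probe's namespace directly with ip netns, which is handy for testing connectivity to a single address (a hedged example; substitute your own qprobe namespace from the ip netns output and a real target IP):

sudo ip netns exec qprobe-43d574f6-ee34-43d8-a3b4-4c7e686312cb ping -c 4 10.0.0.2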

Delete the probe:

neutron-debug --config-file /etc/neutron/l3_agent.ini probe-delete 43d574f6-ee34-43d8-a3b4-4c7e686312cb

Clean up network namespaces:

$ neutron-netns-cleanup --force --config-file=/etc/neutron/dhcp_agent.ini --config-file=/etc/neutron/neutron.conf
$ neutron-netns-cleanup --force --config-file=/etc/neutron/l3_agent.ini --config-file=/etc/neutron/neutron.conf

-eglute

Feb 3, 2014

OpenStack services have very powerful command line interfaces with lots of different options. I went looking for a good command line cheat sheet and did not find many options, so I decided to create one myself. This is not a replacement for reading the excellent OpenStack documentation; rather, it is a short summary of some basic commands. Comments, corrections, and suggestions are welcome!

OpenStack command line cheat sheet PDF version.

Keystone (Identity Service)

# List all users
keystone user-list

# List identity service catalog
keystone catalog

# Discover keystone endpoints
keystone discover

# List all services in service catalog
keystone service-list

# Create new user
keystone user-create --name <name> --tenant-id <tenant-id> --pass <password> --email <email> --enabled true

# Create new tenant
keystone tenant-create --name <name> --description <description> --enabled true
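
For example, to create a tenant and then a user in it (the values here are illustrative; substitute your own, and use the tenant ID returned by tenant-create):

keystone tenant-create --name demo2 --description "Demo tenant" --enabled true
keystone user-create --name jdoe --tenant-id <tenant-id> --pass secrete --email jdoe@example.com --enabled true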

Nova (Compute Service)

# List instances, notice status of instance
nova list

# List images
nova image-list

# List flavors
nova flavor-list

# Boot an instance using flavor and image names (if names are unique)
nova boot --image <image> --flavor <flavor> <instance name>
nova boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny MyFirstInstance

# Login to instance
ip netns
sudo ip netns exec <namespace> ssh <user@server, or use a key>
sudo ip netns exec qdhcp-6021a3b4-8587-4f9c-8064-0103885dfba2 ssh cirros@10.0.0.2

# if you are on devstack, the password is "cubswin:)" without the quotes

# Show details of instance
nova show <name>
nova show MyFirstInstance

# View console log of instance
nova console-log MyFirstInstance

# Pause, suspend, stop, rescue, resize, rebuild, reboot an instance
# Pause
nova pause <name>
nova pause volumeTwoImage

# Unpause
nova unpause <name>

# Suspend
nova suspend <name>

# Unsuspend
nova resume <name>

# Stop
nova stop <name>

# Start
nova start <name>

# Rescue
nova rescue <name>

# Resize
nova resize <name> <flavor>
nova resize my-pem-server m1.small
nova resize-confirm server1

# Rebuild
nova rebuild <name> <image>
nova rebuild newtinny cirros-qcow2

# Reboot
nova reboot <name>
nova reboot newtinny

# Inject user data and files into an instance
nova boot --user-data ./userdata.txt MyUserdataInstance
nova boot --user-data userdata.txt --image cirros-qcow2 --flavor m1.tiny MyUserdataInstance2

# To validate the file is there, ssh into the instance and look under /var/lib/cloud
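
For reference, userdata.txt can be as simple as a shell script that cloud-init executes on first boot (a minimal sketch):

#!/bin/sh
echo "Hello from userdata" > /tmp/hello.txt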

# Inject a keypair into an instance and access the instance with that keypair
# Create keypair
nova keypair-add test > test.pem
chmod 600 test.pem

# Boot
nova boot --image cirros-0.3.0-x86_64 --flavor m1.small --key_name test my-first-server

# ssh into instance
sudo ip netns exec qdhcp-98f09f1e-64c4-4301-a897-5067ee6d544f ssh -i test.pem cirros@10.0.0.4

# Set metadata on an instance
nova meta volumeTwoImage set newmeta='my meta data'

# Create an instance snapshot
nova image-create volumeTwoImage snapshotOfVolumeImage
nova image-show snapshotOfVolumeImage

# Manage security groups
# Add rules to the default security group allowing ping and ssh
# between instances in the default security group
nova secgroup-add-group-rule default default icmp -1 -1
nova secgroup-add-group-rule default default tcp 22 22
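
The group rules above only allow traffic between instances in the default group. To allow ping and ssh from anywhere, add CIDR-based rules instead (a hedged example; 0.0.0.0/0 opens the rule to all source addresses):

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0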

Glance (Image Service)

# List images you can access
glance image-list

# Delete specified image
glance image-delete <image>

# Describe a specific image
glance image-show <image>

# Update an image
glance image-update <image>

# Manage images
# Kernel image
glance image-create --name "cirros-threepart-kernel" --disk-format aki --container-format aki --is-public True --file ~/images/cirros-0.3.1~pre4-x86_64-vmlinuz

# Ramdisk image
glance image-create --name "cirros-threepart-ramdisk" --disk-format ari --container-format ari --is-public True --file ~/images/cirros-0.3.1~pre4-x86_64-initrd

# 3-part image
glance image-create --name "cirros-threepart" --disk-format ami --container-format ami --is-public True --property kernel_id=$KID --property ramdisk_id=$RID --file ~/images/cirros-0.3.1~pre4-x86_64-blank.img

# Register a qcow2 image
glance image-create --name "cirros-qcow2" --disk-format qcow2 --container-format bare --is-public True --file ~/images/cirros-0.3.1~pre4-x86_64-disk.img

Neutron (Networking Service)

# Create network
neutron net-create <name>
neutron net-create my-network

# Create a subnet
neutron subnet-create <network name> <cidr>
neutron subnet-create my-network 10.0.0.0/29

# List network and subnet
neutron net-list
neutron subnet-list

# Examine details of network and subnet
neutron net-show <id or name of network>
neutron subnet-show <id or name of subnet>
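
To give the new network outside connectivity, you typically also attach it to a router (a sketch, assuming an external network named public already exists; substitute your real subnet ID):

neutron router-create my-router
neutron router-interface-add my-router <subnet id>
neutron router-gateway-set my-router public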

Cinder (Block Storage)

# Manage volumes and volume snapshots
# Create a new volume
cinder create <size in GB> --display-name <name>
cinder create 1 --display-name MyFirstVolume

# Boot an instance and attach to volume
nova boot --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance

# List volumes, notice status of volume
cinder list

# Attach volume to instance after instance is active, and volume is available
nova volume-attach <instance-id> <volume-id> auto
nova volume-attach MyVolumeInstance <volume-id> auto

# Log into the instance and list storage devices
sudo fdisk -l

# On the instance, make filesystem on volume
sudo mkfs.ext3 /dev/vdb

# Create a mountpoint
sudo mkdir /myspace

# Mount the volume at the mountpoint
sudo mount /dev/vdb /myspace

# Create a file on the volume
sudo touch /myspace/helloworld.txt
sudo ls /myspace

# Unmount the volume
sudo umount /myspace
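
When you are done with the volume, detach it from the instance and delete it (run these from the host, not the instance; a volume must be detached before it can be deleted):

nova volume-detach MyVolumeInstance <volume-id>
cinder delete <volume-id>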

Swift (Object Store)

# Displays information for the account, container, or object
swift stat
swift stat <account>
swift stat <container>
swift stat <object>

# List containers
swift list

# Create a container
swift post mycontainer

# Upload file to a container
swift upload <container name> <file name>
swift upload mycontainer myfile.txt

# List objects in container
swift list mycontainer

# Download object from container
swift download <container name> <file name>

# Upload with chunks, for large file
swift upload -S <size> <container name> <file name>
swift upload -S 64 container largeFile
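
Finally, to clean up, swift delete removes a single object or, given only the container name, the container and everything in it:

# Delete an object from a container, then the empty container itself
swift delete mycontainer myfile.txt
swift delete mycontainer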

Happy stacking!

-eglute

May 15, 2013

I just started looking at Ceph and wanted to set it up to be used with DevStack. The Ceph website has great documentation, so if you are looking for how to set up OpenStack and Ceph for production use, check it out! However, if you just want to get DevStack with Ceph running, follow along with this post.

I started out following the directions for running Ceph and DevStack on EC2, but since I am using a Rackspace Cloud Server, some of the instructions were not relevant to me. Also, since everything is running on one server, I am not using Ceph auth.

Install DevStack

The server: Ubuntu 12.04, 4GB RAM. After creating it through the control panel, log in, do the usual updates, and create a new user. DevStack will create a user for you, but I find that things run much more smoothly if I create the user myself.

apt-get update
apt-get install git
groupadd stack
useradd -g stack -s /bin/bash -d /opt/stack -m stack
visudo

In the sudoers file, add this line:

%stack ALL=(ALL:ALL) NOPASSWD: ALL

This will allow the stack user to use sudo without a password.

sudo su stack

As the stack user, install DevStack:

git clone git://github.com/openstack-dev/devstack.git
cd devstack

Create localrc file:

FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=password

DEST=/opt/stack
LOGFILE=stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True

Run the install script:

./stack.sh

Go get some coffee, and when all the scripts complete, log in to Horizon and make sure creating instances works (create one!).

Install Ceph

Now that devstack is installed, it is time to install Ceph. I followed the “5 Minute Quick Start” with a couple of small changes and omissions.

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph

Create Ceph configuration file:

sudo vi /etc/ceph/ceph.conf

and add the following:

[global]

# For version 0.55 and beyond, you must explicitly enable
# or disable authentication with "auth" entries in [global].

auth cluster required = none
auth service required = none
auth client required = none

[osd]
osd journal size = 1000

#The following assumes ext4 filesystem.
filestore xattr use omap = true
# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following settings and replace the values
# in braces with appropriate values, or leave the following settings
# commented out to accept the default values. You must specify the
# --mkfs option with mkcephfs in order for the deployment script to
# utilize the following settings, and you must define the 'devs'
# option for each osd instance; see below.

#osd mkfs type = {fs-type}
#osd mkfs options {fs-type} = {mkfs options} # default for xfs is "-f"
#osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"

# For example, for ext4, the mount option might look like this:

#osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]

host = {hostname}
mon addr = {IP}:6789

[osd.0]
host = {hostname}

# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following setting for each OSD and specify
# a path to the device if you use mkcephfs with the --mkfs option.

#devs = {path-to-device}

[osd.1]
host = {hostname}
#devs = {path-to-device}

Note that you will need to change the hostname and IP address to match your own. If you want to use Ceph auth, you will need to change “none” to “cephx”.

Create directories for Ceph daemons:

sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a

Deploy Ceph, generate user key, start Ceph:

cd /etc/ceph
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
sudo service ceph -a start
sudo ceph health

When the “ceph health” command returns “HEALTH_OK”, Ceph is ready to be used. Create the volumes and images pools:

ceph osd pool create volumes 128
ceph osd pool create images 128
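
Verify that both new pools exist before moving on (the output will also include the default pools):

ceph osd lspools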

Install client libraries:

sudo apt-get install python-ceph

Setup pool permissions, create users and keyrings:

ceph auth get-or-create client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.images mon 'allow r' osd 'allow rwx pool=images'
sudo useradd glance
sudo useradd cinder
ceph auth get-or-create client.images | sudo tee /etc/ceph/ceph.client.images.keyring
sudo chown glance:glance /etc/ceph/ceph.client.images.keyring
ceph auth get-or-create client.volumes | sudo tee /etc/ceph/ceph.client.volumes.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring

Edit glance configuration file:

sudo vi /etc/glance/glance-api.conf
default_store = rbd

[ ... ]

# ============ RBD Store Options =============================

# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client. section
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# RADOS user to authenticate as (only applicable if using cephx)
rbd_store_user = images

# RADOS pool in which images are stored
rbd_store_pool = images

# Images will be chunked into objects of this size (in megabytes).
# For best performance, this should be a power of two
rbd_store_chunk_size = 8

Add the following lines to the cinder configuration:

sudo vi /etc/cinder/cinder.conf
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes

Configure DevStack to Use Ceph

At this point, both Ceph and DevStack are configured. However, since the configuration files for glance and cinder were changed, restart all glance and cinder services.

To restart the services in DevStack, re-join the screen session, switch to each service, then stop and restart it.
Rejoin screen:

screen -r

If you get something like this:

Cannot open your terminal '/dev/pts/0' - please check.

This means you do not have permission to use it. A simple fix:

sudo chmod 777 /dev/pts/0

After re-joining screen, press Ctrl-a followed by ” (quotation mark). This will present you with all the running services:

Num Name                    Flags

  0 shell                       $
  1 key                      $(L)
  2 horizon                  $(L)
  3 g-reg                    $(L)
  4 g-api                    $(L)
  5 n-api                    $(L)
  6 n-cond                   $(L)
  7 n-cpu                    $(L)
  8 n-crt                    $(L)
  9 n-net                    $(L)
 10 n-sch                    $(L)
 11 n-novnc                  $(L)
 12 n-xvnc                   $(L)
 13 n-cauth                  $(L)
 14 n-obj                    $(L)
 15 c-api                    $(L)
 16 c-vol                    $(L)
 17 c-sch                    $(L)

Use the up and down arrows to select the service to restart, and press Enter. Press Ctrl-c to stop the service, then press the up arrow once to bring up the previous command, and Enter to start it up again. Rinse and repeat.

Services that need to be restarted:

  3 g-reg                    $(L)
  4 g-api                    $(L)
 15 c-api                    $(L)
 16 c-vol                    $(L)
 17 c-sch                    $(L)

If you are looking for some general DevStack, logging, and screen tips, check out this blog: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/

Use Ceph

Now your DevStack will use Ceph! Go to Horizon and, under Project/Volumes, create a couple of new volumes and attach them to your VM. On the command line, list the volumes:

rbd ls -p volumes

You should see something similar to this:

volume-c74332e9-1c97-4ee9-be15-c8bdf0103910
volume-e69fa2df-b9e5-4ab2-8664-a5af2bf14098

This is all you should need to get going with Ceph and DevStack.

Useful resources:

DevStack: http://devstack.org/
Ceph: http://ceph.com/docs/master/
Ceph + DevStack on EC2: http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
Ceph 5 Minute Quick Start: http://ceph.com/docs/master/start/quick-start/
Ceph and OpenStack: http://ceph.com/docs/master/rbd/rbd-openstack/
Some good DevStack logging and screen tips: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/

Have fun with it!

-eglute

Apr 24, 2013

This is a post about how helpful people are in the OpenStack community. If you had visited the lobby of the Hilton Executive tower in Portland last Wednesday, you would have seen me and a couple of other people making the cloud of USB sticks.

Last Thursday, I gave my first ever conference talk/tutorial at the OpenStack Summit. The goal of my tutorial was to show how to use Razor and how easy it is to install OpenStack with it. There were a couple of issues I had to solve for this tutorial: what is the fastest way to get people a running Razor on their systems, and how to do so over a not very reliable internet connection.

To solve the first issue, I created a VM that had all the necessary components installed on it, everything from Razor to the Chef server. Just to give some perspective, it takes just over an hour to download and set up all the major components, including loading the cookbooks for the OpenStack installation and uploading them to the Chef server. Considering that my talk needed to be under 1 hour, a live install would not have been a good option.

After I got to the conference, I realized that there was no way I could share my 3.6 GB VM over the intertubes! I probably should have timed downloading the VM over regular (not infinite work bandwidth) internet to realize it was a bad idea anyway. Luckily for me, I got some help. First, I talked to the Rackspace Private Cloud training team, and they provided me with wireless routers for setting up a local network to share the VM internally. The day of the talk, the super helpful @Thediopter configured the routers for me, in record time! This local network allowed me to serve the file off my laptop while I was giving the talk.

Since I was not sure the wireless local network was going to work, I decided that I needed to put my VM on USB sticks. I knew that eNovance was giving out USB drives at their vendor booth, so I shared my problem with them, and they happily supplied me with a lot of USB drives. Going by the Suse booth, I noticed that they too had USB drives, and I asked if I could have some for my talk. They gladly supplied me with additional drives.

By now, I had 45 drives and a limited amount of time to copy my VM to them. Surely there is a better way to copy something to USB drives than one at a time. As I was walking by the Piston booth, I was asked about the USB sticks. I repeated my story of needing to make lots of USB sticks, and what do you know? I walked away with a 10 port USB hub!

Back at the hotel lobby, the baking of the USB sticks began. My coworkers helped me out with a little script to run on my mac to do the copying, and all I have to say is, I am not sure where I would have been without them!

So, if you ever find yourself with a mac, a USB hub, and lots of USB drives, this is how you copy to all of them on the command line. First, make sure you are root, then:

for i in `jot 10 2`; do asr --noverify --erase --noprompt --source /dev/disk1s1 --target /dev/disk${i}s1 & done

What this script does: it copies from /dev/disk1s1 to all the other disks (the USB drives), erasing each destination first and skipping verification. This process may take a while, but it is still faster than copying one at a time.
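
Before running it, double check the disk numbering so you do not overwrite the wrong device. On a mac you can list all attached disks; the loop above assumes the source image is disk1 and the USB drives enumerate as disk2 through disk11:

diskutil list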

So, thanks again to everyone that helped out in the making of the USB cloud!

[Image: usbcloud1]

Curious what was on the USB drives? This.

-eglute

Nov 5, 2012

If you are following the cloudscape, you may have noticed that OpenStack is getting a lot of attention right now. If you have never even heard of OpenStack, the cookbook by Kevin Jackson is not the right place for you to start; for the very green, I would suggest http://devstack.org/.

For those who want something a little more advanced, I would recommend picking up a copy of the OpenStack Cloud Computing Cookbook. Whether you are building your very own private cloud or maintaining one, this book is right for you.

Recipes start with setting up a sandbox environment on VirtualBox, followed by an Essex install on Ubuntu Precise (12.04). After the basic install, the book covers installing, configuring, and administering all of the components of OpenStack. Chapters 2 and 3 cover the compute and keystone components. Chapter 4 starts out with the setup of a Swift (storage component) sandbox environment, and chapters 5 and 6 are more Swift recipes. Glance, Nova, Horizon, and Networking get the next four chapters, while chapters 11 and 12 cover practical details like installing OpenStack on bare metal (MAAS) and monitoring. The last chapter delves into troubleshooting, logging, submitting bug reports, and getting help from the community.

What this book is not: an in-depth explanation of OpenStack components. It is also not OpenStack for Dummies.  However, if you just want to get things working, this is a great reference book.

 
