I just started looking at Ceph, and wanted to set it up to be used with DevStack. The Ceph website has great documentation, so if you are looking for how to set up OpenStack and Ceph for production use, check it out! However, if you just want to get DevStack with Ceph running, follow along with this post.
I started out following the directions for Ceph and DevStack on EC2, but since I am using a Rackspace Cloud Server, some of the instructions were not relevant to me. Also, since everything is running on one server, I am not using Ceph auth.
Install DevStack
The server: Ubuntu 12.04, 4GB RAM. After creating it through the control panel, log in, do the usual updates, and create a new user. DevStack will create a user for you, but I find that things run much more smoothly if I create the user myself.
apt-get update
apt-get install git
groupadd stack
useradd -g stack -s /bin/bash -d /opt/stack -m stack
visudo
In the sudoers file, add this line:
%stack ALL=(ALL:ALL) NOPASSWD: ALL
This will allow the stack user to use sudo without a password.
sudo su stack
As the stack user, install DevStack:
git clone git://github.com/openstack-dev/devstack.git
cd devstack
Create a localrc file:
FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=password
DEST=/opt/stack
LOGFILE=stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
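A quick aside on the network values: FIXED_NETWORK_SIZE should agree with the size of FIXED_RANGE. A minimal sanity-check sketch (the numbers are just the ones from the localrc above):

```shell
# A /24 prefix leaves 32 - 24 = 8 host bits, i.e. 2^8 = 256 addresses,
# which is why FIXED_NETWORK_SIZE is 256 for a /24 FIXED_RANGE.
prefix=24
size=$(( 1 << (32 - prefix) ))
echo "$size"    # 256
```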
Run the install script:
./stack.sh
Go get some coffee, and when all the scripts complete, log in to Horizon and make sure creating instances works (create one!).
Install Ceph
Now that DevStack is installed, it is time to install Ceph. I followed the “5 Minute Quick Start” with a couple of small changes and omissions.
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph
Create the Ceph configuration file:
sudo vi /etc/ceph/ceph.conf
and add the following:
[global]
    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].
    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]
    osd journal size = 1000

    # The following assumes ext4 filesystem.
    filestore xattr use omap = true

    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following settings and replace the values
    # in braces with appropriate values, or leave the following settings
    # commented out to accept the default values. You must specify the
    # --mkfs option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.
    #osd mkfs type = {fs-type}
    #osd mkfs options {fs-type} = {mkfs options}   # default for xfs is "-f"
    #osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"
    # For example, for ext4, the mount option might look like this:
    #osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]
    host = {hostname}
    mon addr = {IP}:6789

[osd.0]
    host = {hostname}
    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following setting for each OSD and specify
    # a path to the device if you use mkcephfs with the --mkfs option.
    #devs = {path-to-device}

[osd.1]
    host = {hostname}
    #devs = {path-to-device}
Note that you will need to change the hostname and IP address to match your own. If you want to use Ceph auth, you will need to change “none” to “cephx”.
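As a sketch of finding the values to drop in (hedged: `hostname -I` assumes a typical Linux host and simply takes the first address listed):

```shell
# Print the values to substitute for {hostname} and {IP} in ceph.conf.
HOST=$(hostname)
IP=$(hostname -I | awk '{print $1}')
echo "host = $HOST"
echo "mon addr = $IP:6789"
```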
Create the directories for the Ceph daemons:
sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a
Deploy Ceph, generate user key, start Ceph:
cd /etc/ceph
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
sudo service ceph -a start
sudo ceph health
When the “ceph health” command returns “HEALTH_OK”, Ceph is ready to be used. Create the pools that cinder and glance will use:
ceph osd pool create volumes 128
ceph osd pool create images 128
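The 128 placement groups are not arbitrary. A common rule of thumb (my assumption, not something the quick start spells out) is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two. With the two OSDs and default two replicas in this setup:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up
# to the next power of two.
osds=2
replicas=2
target=$(( osds * 100 / replicas ))    # 100
pgs=1
while [ "$pgs" -lt "$target" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"    # 128
```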
Install the client libraries:
sudo apt-get install python-ceph
Set up pool permissions, and create users and keyrings:
ceph auth get-or-create client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.images mon 'allow r' osd 'allow rwx pool=images'
sudo useradd glance
sudo useradd cinder
ceph auth get-or-create client.images | sudo tee /etc/ceph/ceph.client.images.keyring
sudo chown glance:glance /etc/ceph/ceph.client.images.keyring
ceph auth get-or-create client.volumes | sudo tee /etc/ceph/ceph.client.volumes.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
Edit the glance configuration file:
sudo vi /etc/glance/glance-api.conf
default_store = rbd
[ ... ]
# ============ RBD Store Options =============================
# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client.<USER> section
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# RADOS user to authenticate as (only applicable if using cephx)
rbd_store_user = images
# RADOS pool in which images are stored
rbd_store_pool = images
# Images will be chunked into objects of this size (in megabytes).
# For best performance, this should be a power of two
rbd_store_chunk_size = 8
Add the following lines to the cinder configuration:
sudo vi /etc/cinder/cinder.conf
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes
Configure DevStack to Use Ceph
At this point, both Ceph and DevStack are configured. However, since the configuration files for glance and cinder were changed, all glance and cinder services need to be restarted.
To restart the services in DevStack, re-join the screen session, bring up each service's window, stop the service, and start it again.
Re-join screen:
screen -r
If you get something like this:
Cannot open your terminal '/dev/pts/0' - please check.
This means you do not have permission to access the terminal. A simple (if rather permissive) fix:
sudo chmod 777 /dev/pts/0
After re-joining screen, press Ctrl-a followed by ” (double quotation mark). This will present you with all the running services:
Num Name     Flags
  0 shell    $
  1 key      $(L)
  2 horizon  $(L)
  3 g-reg    $(L)
  4 g-api    $(L)
  5 n-api    $(L)
  6 n-cond   $(L)
  7 n-cpu    $(L)
  8 n-crt    $(L)
  9 n-net    $(L)
 10 n-sch    $(L)
 11 n-novnc  $(L)
 12 n-xvnc   $(L)
 13 n-cauth  $(L)
 14 n-obj    $(L)
 15 c-api    $(L)
 16 c-vol    $(L)
 17 c-sch    $(L)
Use the up and down arrows to select a service to restart, and press Enter. Press Ctrl-c to stop the service, then press the up arrow once to bring back the previous command, and Enter to start it up again. Rinse and repeat.
Services that need to be restarted:
  3 g-reg    $(L)
  4 g-api    $(L)
 15 c-api    $(L)
 16 c-vol    $(L)
 17 c-sch    $(L)
If you are looking for some general DevStack, logging, and screen tips, check out this blog: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/
Use Ceph
Now your DevStack will use Ceph! Go to Horizon. Under Project/Volumes, create a couple of new volumes and attach them to your VM. On the command line, list the volumes:
rbd ls -p volumes
You should see something similar to this:
volume-c74332e9-1c97-4ee9-be15-c8bdf0103910 volume-e69fa2df-b9e5-4ab2-8664-a5af2bf14098
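The naming is predictable: cinder stores each volume as an RBD image named volume-&lt;cinder volume ID&gt;, so you can map a volume you see in Horizon straight to its object in the pool. A trivial sketch using the first ID from the listing above:

```shell
# Cinder's RBD backend names images "volume-<volume id>".
VOLUME_ID="c74332e9-1c97-4ee9-be15-c8bdf0103910"
RBD_NAME="volume-${VOLUME_ID}"
echo "$RBD_NAME"    # volume-c74332e9-1c97-4ee9-be15-c8bdf0103910
```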
This is all you should need to get going with Ceph and DevStack.
Useful resources:
DevStack: http://devstack.org/
Ceph: http://ceph.com/docs/master/
Ceph + DevStack on EC2: http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
Ceph 5 Minute Quick Start: http://ceph.com/docs/master/start/quick-start/
Ceph and OpenStack: http://ceph.com/docs/master/rbd/rbd-openstack/
Some good DevStack logging and screen tips: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/
Have fun with it!
-eglute