May 15, 2013
 

I just started looking at Ceph, and wanted to set it up to be used with DevStack. The Ceph website has great documentation, so if you are looking for how to set up OpenStack and Ceph for production use, check it out! However, if you just want to get DevStack with Ceph running, follow along with this post.

I started out following the directions for Ceph and DevStack on EC2, but since I am using a Rackspace Cloud Server, some of the instructions were not relevant to me. Also, since everything is running on one server, I am not using Ceph auth.

Install DevStack

The server: Ubuntu 12.04, 4GB RAM. After creating it through the control panel, log in, do the usual updates, and create a new user. DevStack will create a user for you, but I find that things run much more smoothly if I create the user myself.

apt-get update
apt-get install git
groupadd stack
useradd -g stack -s /bin/bash -d /opt/stack -m stack
visudo

In the sudoers file, add this line:

%stack ALL=(ALL:ALL) NOPASSWD: ALL

This will allow the stack user to use sudo without a password.

sudo su stack

As the stack user, install DevStack:

git clone git://github.com/openstack-dev/devstack.git
cd devstack

Create the localrc file:

FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=password

DEST=/opt/stack
LOGFILE=stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True

Run the install script:

./stack.sh

Go get some coffee, and when all the scripts complete, log in to Horizon and make sure creating instances works (create one!).
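
If you prefer to verify from the command line instead of Horizon, a minimal sketch looks like this (the cirros image name below is an assumption; check the output of nova image-list for the exact name DevStack uploaded):

cd ~/devstack
source openrc admin admin
nova image-list
nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec test-vm
nova list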

Install Ceph

Now that DevStack is installed, it is time to install Ceph. I followed the “5 Minute Quick Start” with a couple of small changes and omissions.

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph
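
To confirm that the packages installed correctly, check the version (the version number you see will depend on when you install):

ceph --version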

Create the Ceph configuration file:

sudo vi /etc/ceph/ceph.conf

and add the following:

[global]

# For version 0.55 and beyond, you must explicitly enable
# or disable authentication with "auth" entries in [global].

auth cluster required = none
auth service required = none
auth client required = none

[osd]
osd journal size = 1000

# The following assumes an ext4 filesystem.
filestore xattr use omap = true
# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following settings and replace the values
# in braces with appropriate values, or leave the following settings
# commented out to accept the default values. You must specify the
# --mkfs option with mkcephfs in order for the deployment script to
# utilize the following settings, and you must define the 'devs'
# option for each osd instance; see below.

#osd mkfs type = {fs-type}
#osd mkfs options {fs-type} = {mkfs options} # default for xfs is "-f"
#osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"

# For example, for ext4, the mount option might look like this:

#osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]

host = {hostname}
mon addr = {ip-address}:6789

[osd.0]
host = {hostname}

# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following setting for each OSD and specify
# a path to the device if you use mkcephfs with the --mkfs option.

#devs = {path-to-device}

[osd.1]
host = {hostname}
#devs = {path-to-device}

Note that you will need to change the hostname and IP address to match your own. If you want to use Ceph auth, you will need to change “none” to “cephx”.
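
For example, if your server's hostname is devstack-ceph and its IP address is 10.1.2.3 (both hypothetical values), the monitor section would look like this:

[mon.a]

host = devstack-ceph
mon addr = 10.1.2.3:6789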

Create directories for the Ceph daemons:

sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a

Deploy Ceph, generate the user key, and start Ceph:

cd /etc/ceph
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
sudo service ceph -a start
sudo ceph health

When the “ceph health” command returns “HEALTH_OK”, Ceph is ready to be used. Create the volumes and images pools:

ceph osd pool create volumes 128
ceph osd pool create images 128
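
To verify that the pools were created, list them; the output should include the new volumes and images pools alongside the default pools:

ceph osd lspools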

Install the client libraries:

sudo apt-get install python-ceph

Set up pool permissions, and create users and keyrings:

ceph auth get-or-create client.volumes mon 'allow r' osd 'allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.images mon 'allow r' osd 'allow rwx pool=images'
sudo useradd glance
sudo useradd cinder
ceph auth get-or-create client.images | sudo tee /etc/ceph/ceph.client.images.keyring
sudo chown glance:glance /etc/ceph/ceph.client.images.keyring
ceph auth get-or-create client.volumes | sudo tee /etc/ceph/ceph.client.volumes.keyring
sudo chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
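
To double-check the users and their capabilities, list the auth entries and look for client.volumes and client.images with the caps set above:

ceph auth list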

Edit the Glance configuration file:

sudo vi /etc/glance/glance-api.conf
default_store = rbd

[ ... ]

# ============ RBD Store Options =============================

# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client. section
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# RADOS user to authenticate as (only applicable if using cephx)
rbd_store_user = images

# RADOS pool in which images are stored
rbd_store_pool = images

# Images will be chunked into objects of this size (in megabytes).
# For best performance, this should be a power of two
rbd_store_chunk_size = 8

Add the following lines to the Cinder configuration:

sudo vi /etc/cinder/cinder.conf
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes
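
Note that the RBD driver path depends on your OpenStack release. If c-api or c-vol later fails with an RBDDriver import error (see the first comment below), try the newer module path instead:

volume_driver=cinder.volume.drivers.rbd.RBDDriver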

Configure DevStack to Use Ceph

At this point, both Ceph and DevStack are configured. However, since the configuration files for Glance and Cinder were changed, all Glance and Cinder services need to be restarted.

To restart the services in DevStack, re-join the screen session, bring each service's window to the front, stop the service, and start it again.
Rejoin screen:

screen -r

If you get something like this:

Cannot open your terminal '/dev/pts/0' - please check.

This means you do not have permission on it. A simple fix:

sudo chmod 777 /dev/pts/0
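
Alternatively, a common workaround that avoids changing permissions is to allocate a new pseudo-terminal owned by the stack user and attach from there:

script /dev/null
screen -r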

After re-joining screen, press Ctrl-a followed by ” (a double quotation mark). This will present you with all the running services:

Num Name                    Flags

  0 shell                       $
  1 key                      $(L)
  2 horizon                  $(L)
  3 g-reg                    $(L)
  4 g-api                    $(L)
  5 n-api                    $(L)
  6 n-cond                   $(L)
  7 n-cpu                    $(L)
  8 n-crt                    $(L)
  9 n-net                    $(L)
 10 n-sch                    $(L)
 11 n-novnc                  $(L)
 12 n-xvnc                   $(L)
 13 n-cauth                  $(L)
 14 n-obj                    $(L)
 15 c-api                    $(L)
 16 c-vol                    $(L)
 17 c-sch                    $(L)

Use the up and down arrows to select the service to restart, and press Enter. Press Ctrl-c to stop the service, then press the up arrow once to bring back the previous command, and press Enter to start it up again. Rinse and repeat (see the illustration after the list below).

Services that need to be restarted:

  3 g-reg                    $(L)
  4 g-api                    $(L)
 15 c-api                    $(L)
 16 c-vol                    $(L)
 17 c-sch                    $(L)
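
As an illustration, restarting c-vol inside its screen window looks roughly like this: press Ctrl-c to stop it, then the up arrow recalls something along these lines (the exact command and path depend on your DevStack checkout, so treat this as a placeholder), and Enter starts it again:

cinder-volume --config-file /etc/cinder/cinder.conf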

If you are looking for some general DevStack, logging, and screen tips, check out this blog: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/

Use Ceph

Now your DevStack will use Ceph! Go to Horizon. Under Project/Volumes, create a couple of new volumes and attach them to your VM. On the command line, list the volumes:

rbd ls -p volumes

You should see something similar to this:

volume-c74332e9-1c97-4ee9-be15-c8bdf0103910
volume-e69fa2df-b9e5-4ab2-8664-a5af2bf14098
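
You can do the same checks entirely from the command line. A minimal sketch, assuming DevStack's openrc is in place and that cirros.qcow2 is just a placeholder for an image file you have on disk:

source ~/devstack/openrc admin admin
cinder create --display-name ceph-test 1
cinder list
rbd ls -p volumes
glance image-create --name cirros-rbd --disk-format qcow2 --container-format bare --file cirros.qcow2
rbd ls -p images

The new volume should show up in the volumes pool as another volume-<uuid> object, and the uploaded image should show up in the images pool.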

This is all you should need to get going with Ceph and DevStack.

Useful resources:

DevStack: http://devstack.org/
Ceph: http://ceph.com/docs/master/
Ceph + DevStack on EC2: http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
Ceph 5 Minute Quick Start: http://ceph.com/docs/master/start/quick-start/
Ceph and OpenStack: http://ceph.com/docs/master/rbd/rbd-openstack/
Some good DevStack logging and screen tips: http://vmartinezdelacruz.com/logging-and-debugging-in-openstack/

Have fun with it!

-eglute


  2 Responses to “DevStack and Ceph Tutorial”

  1. The Cinder conf update mentioned above causes the c-api restart to fail with an RBDDriver import error. Is this an expected error?

  2. I am trying to install devstack with ceph by using the following plugin in the localrc file:
    # Enable ceph DevStack plugin
    enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph

    And it comes up fine.
    The problem is that when I reboot the server, I lose all the ceph configuration.

    All my ceph commands stop working and I am getting the following errors:

    adminx@cephcontrail:~$ sudo ceph status
    2016-03-17 15:55:55.489590 7fa34c7c8700 0 -- :/3530400219 >> 192.168.57.64:6789/0 pipe(0x7fa34805d050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa348059c50).fault
    2016-03-17 15:55:58.489293 7fa34c6c7700 0 -- :/3530400219 >> 192.168.57.64:6789/0 pipe(0x7fa33c000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa33c004ef0).fault
    ^CError connecting to cluster: InterruptedOrTimeoutError
    adminx@cephcontrail:~$
    adminx@cephcontrail:~$
    adminx@cephcontrail:~$
    adminx@cephcontrail:~$ sudo ceph mon stat
    2016-03-17 15:56:05.009688 7fa050226700 0 -- :/3529225905 >> 192.168.57.64:6789/0 pipe(0x7fa04c05d050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa04c059c90).fault
    2016-03-17 15:56:08.010524 7fa050125700 0 -- :/3529225905 >> 192.168.57.64:6789/0 pipe(0x7fa040000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa040004ef0).fault
    ^CError connecting to cluster: InterruptedOrTimeoutError

    The following file system disappears from mount:
    adminx@cephos2:~/devstack$ sudo mount | grep ceph

    Before reboot, I was getting the following output:

    /var/lib/ceph/drives/images/ceph.img on /var/lib/ceph type xfs (rw,noatime,nodiratime,nobarrier,logbufs=8)

    And all the following ceph monitor and osd related files disappear after the reboot:

    Before reboot, I was getting the following output:

    adminx@cephos2:~/devstack$ ls -lrt /var/lib/ceph/
    total 0
    drwxr-xr-x 2 root root 6 Mar 16 12:01 radosgw
    drwxr-xr-x 2 root root 6 Mar 16 12:01 mds
    drwxr-xr-x 2 root root 32 Mar 16 12:01 tmp
    drwxr-xr-x 3 root root 25 Mar 16 12:01 mon
    drwxr-xr-x 2 root root 25 Mar 16 12:01 bootstrap-osd
    drwxr-xr-x 2 root root 25 Mar 16 12:01 bootstrap-rgw
    drwxr-xr-x 2 root root 25 Mar 16 12:01 bootstrap-mds
    drwxr-xr-x 3 root root 19 Mar 16 12:01 osd

    adminx@cephos2:~/devstack$ ls -lrt /var/lib/ceph/mon/ceph-cephos2/
    total 4
    -rw-r--r-- 1 root root 77 Mar 16 12:01 keyring
    -rw-r--r-- 1 root root 0 Mar 16 12:01 upstart
    drwxr-xr-x 2 root root 128 Mar 16 12:01 store.db

    adminx@cephos2:~$ ls -lrt /var/lib/ceph/osd
    total 0
    drwxr-xr-x 3 root root 163 Mar 16 12:01 ceph-0
    adminx@cephos2:~$
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/mon
    total 0
    drwxr-xr-x 3 root root 49 Mar 16 12:01 ceph-cephos2
    adminx@cephos2:~$
    adminx@cephos2:~$
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/mds/
    total 0
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/osd/ceph-0/
    total 102436
    -rw-r--r-- 1 root root 53 Mar 16 12:01 superblock
    -rw-r--r-- 1 root root 4 Mar 16 12:01 store_version
    -rw-r--r-- 1 root root 37 Mar 16 12:01 fsid
    -rw-r--r-- 1 root root 2 Mar 16 12:01 whoami
    -rw-r--r-- 1 root root 6 Mar 16 12:01 ready
    -rw-r--r-- 1 root root 21 Mar 16 12:01 magic
    -rw-r--r-- 1 root root 37 Mar 16 12:01 ceph_fsid
    -rw-r--r-- 1 root root 56 Mar 16 12:01 keyring
    -rw-r--r-- 1 root root 0 Mar 16 12:01 upstart
    drwxr-xr-x 92 root root 4096 Mar 16 12:31 current
    -rw-r–r– 1 root root 104857600 Mar 17 15:41 journal
    adminx@cephos2:~$
    adminx@cephos2:~$
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/mon/ceph-cephos2/
    total 4
    -rw-r--r-- 1 root root 77 Mar 16 12:01 keyring
    -rw-r--r-- 1 root root 0 Mar 16 12:01 upstart
    drwxr-xr-x 2 root root 230 Mar 17 16:00 store.db

    After reboot, all the above files are gone

    It seems like I need to modify my /etc/fstab file to make it persistent and make some other ceph-related changes so that it stays after the reboot.

    Would you please suggest anything to make it persistent?
