Merge branch 'leseb-doc-rbd-havana'
Reviewed-by: Josh Durgin <[email protected]>
jdurgin committed Dec 26, 2013
2 parents 21a64c1 + 8b0b32b commit e244be1
Showing 1 changed file with 111 additions and 41 deletions.
doc/rbd/rbd-openstack.rst
@@ -34,28 +34,41 @@ technology stack.
| OSDs | | Monitors |
+------------------------+ +------------------------+

.. important:: To use Ceph Block Devices with OpenStack, you must have
access to a running Ceph Storage Cluster.
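
A quick way to confirm that the cluster is reachable from the node where you
will run the OpenStack services (this assumes a valid ``ceph.conf`` and an
authorized keyring are already present on that node)::

    ceph health

The command should report ``HEALTH_OK`` before you continue.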

Three parts of OpenStack integrate with Ceph's block devices:

- **Images**: OpenStack Glance manages images for VMs. Images
are immutable. OpenStack treats images as binary blobs and
downloads them accordingly.

- **Volumes**: Volumes are block devices. OpenStack uses volumes
to boot VMs, or to attach volumes to running VMs. OpenStack manages
volumes using Cinder services.

- **Guest Disks**: Guest disks are guest operating system disks.
By default, when you boot a virtual machine, its disk appears as a file
on the filesystem of the hypervisor (usually under
``/var/lib/nova/instances/<uuid>/``). Prior to OpenStack Havana, the only
way to boot a VM in Ceph was to use the boot-from-volume functionality of
Cinder. Now it is possible to boot every virtual machine directly inside
Ceph without using Cinder, which is handy because it lets you easily
perform maintenance operations with the live-migration process. If your
hypervisor dies, it is also convenient to trigger ``nova evacuate`` and
run the virtual machine elsewhere almost seamlessly, as sketched below.
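
For example, once the guest disks live in Ceph, both operations boil down to
the usual Nova commands; a rough sketch, where the instance ID and host names
are placeholders::

    # Move a running instance to another hypervisor
    nova live-migration {instance-uuid} {target-compute-host}

    # Rebuild instances from a dead hypervisor onto a healthy one
    nova evacuate --on-shared-storage {instance-uuid} {target-compute-host}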

You can use OpenStack Glance to store images in a Ceph Block Device, and you
can use Cinder to boot a VM using a copy-on-write clone of an image.

The instructions below detail the setup for Glance, Cinder and Nova, although
they do not have to be used together. You may store images in Ceph block devices
while running VMs using a local disk, or vice versa.

.. tip:: This document describes using Ceph Block Devices with OpenStack Havana.
For earlier versions of OpenStack see
`Block Devices and OpenStack (Dumpling)`_.

.. index:: pools; OpenStack
@@ -69,6 +82,7 @@ your Ceph cluster is running, then create the pools. ::

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
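
If you want to double-check the result, listing the pools and their placement
group counts is a quick sanity check::

    ceph osd lspools
    ceph osd pool get volumes pg_num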

See `Create a Pool`_ for detail on specifying the number of placement groups for
your pools, and `Placement Groups`_ for details on the number of placement
@@ -81,21 +95,21 @@ groups you should set for your pools.
Configure OpenStack Ceph Clients
================================

The nodes running ``glance-api``, ``cinder-volume``, ``nova-compute`` and ``cinder-backup`` act as Ceph clients. Each
requires the ``ceph.conf`` file::

ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf

Install Ceph client packages
----------------------------

On the ``glance-api`` node, you'll need the Python bindings for ``librbd``::

sudo apt-get install python-ceph
sudo yum install python-ceph

On the ``nova-compute``, ``cinder-backup`` and ``cinder-volume`` nodes, use both
the Python bindings and the client command line tools::

sudo apt-get install ceph-common
sudo yum install ceph
@@ -107,39 +121,51 @@ Setup Ceph Client Authentication
If you have `cephx authentication`_ enabled, create a new user for Nova/Cinder
and Glance. Execute the following::

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

Add the keyrings for ``client.cinder``, ``client.glance``, and ``client.cinder-backup`` to the appropriate
nodes and change their ownership::

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-cinder-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

Nodes running ``nova-compute`` need the keyring file for the ``nova-compute`` process. They
also need to store the secret key of the ``client.cinder`` user in ``libvirt``. The libvirt
process needs it to access the cluster while attaching a block device from Cinder.

Create a temporary copy of the secret
key on the nodes running ``nova-compute``::

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

Then, on the compute nodes, add the secret key to ``libvirt`` and remove the
temporary copy of the key::

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml

Save the uuid of the secret for configuring ``nova-compute`` later.

.. important:: You don't necessarily need the same UUID on every compute node. However, from a platform consistency perspective, it's better to keep the same UUID everywhere.
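
To verify that ``libvirt`` stored the secret correctly, you can list the
defined secrets and read the value back; the UUID below is the example one
used throughout this guide::

    sudo virsh secret-list
    sudo virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337

The second command should print the same base64 key that
``ceph auth get-key client.cinder`` returns.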

.. _cephx authentication: ../../rados/operations/authentication


@@ -153,13 +179,13 @@ Glance can use multiple back ends to store images. To use Ceph block devices by
default, edit ``/etc/glance/glance-api.conf`` and add::

default_store=rbd
rbd_store_user=glance
rbd_store_pool=images

If you want to enable copy-on-write cloning of images into
volumes, also add::

show_image_direct_url=True

Note that this exposes the back end location via Glance’s API, so the endpoint
with this option enabled should not be publicly accessible.
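
As a quick check that Glance is really writing to RBD, you can upload a small
image and then list the ``images`` pool. The image name and file below are
only examples, and images intended for copy-on-write cloning should be
uploaded in raw format::

    qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
    glance image-create --name cirros --disk-format raw --container-format bare --is-public True --file cirros.raw
    rbd ls images --id glance

The last command should show an RBD image named after the new Glance image ID.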
@@ -174,34 +200,78 @@ edit ``/etc/cinder/cinder.conf`` by adding::

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
glance_api_version=2

If you're using `cephx authentication`_, also configure the user and
uuid of the secret you added to ``libvirt`` as documented earlier::

rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
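
To confirm that Cinder is placing volumes in the right pool, create a small
test volume and list the ``volumes`` pool (the name and size are arbitrary)::

    cinder create --display-name test-volume 1
    rbd ls volumes --id cinder

You should see an RBD image called ``volume-<uuid>`` matching the ID of the
new Cinder volume.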


Configuring Cinder Backup
-------------------------

OpenStack Cinder Backup requires a specific daemon (``cinder-backup``), so don't
forget to install it. On your Cinder Backup node, edit ``/etc/cinder/cinder.conf``
and add::

backup_driver=cinder.backup.drivers.ceph
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
restore_discard_excess_bytes=true
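
Once the ``cinder-backup`` daemon is running, you can exercise it by backing
up an existing volume and checking the ``backups`` pool; the volume ID is a
placeholder::

    cinder backup-create {volume-id}
    rbd ls backups --id cinder-backup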


Configuring Nova
----------------

In order to boot all the virtual machines directly into Ceph, Nova must be
configured. On every compute node, edit ``/etc/nova/nova.conf`` and add::

libvirt_images_type=rbd
libvirt_images_rbd_pool=volumes
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
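
After restarting ``nova-compute`` (see below), a freshly booted instance
should get its disk inside the ``volumes`` pool instead of under
``/var/lib/nova/instances``. A rough way to verify this, with the flavor and
image as placeholders::

    nova boot --flavor m1.tiny --image {image-id} test-rbd-vm
    rbd ls volumes --id cinder

With the RBD image backend, the instance disk typically appears as an RBD
image named after the instance UUID, for example ``{instance-uuid}_disk``.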

It is also good practice to disable file injection. While booting an
instance, Nova usually attempts to open the rootfs of the virtual machine
and inject data such as passwords and SSH keys directly into the filesystem.
It is better to rely on the metadata service and ``cloud-init`` instead.
On every compute node, edit ``/etc/nova/nova.conf`` and add::

libvirt_inject_password=false
libvirt_inject_key=false
libvirt_inject_partition=-2
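
With injection disabled, credentials and keys reach the guest through the
metadata service instead. For illustration, assuming the image ships with
``cloud-init`` and a key pair named ``mykey`` already exists, you can pass an
SSH key and user data at boot time::

    nova boot --flavor m1.tiny --image {image-id} --key-name mykey --user-data my-cloud-config.yaml test-vm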


Restart OpenStack
=================

To activate the Ceph block device driver and load the block device pool name
into the configuration, you must restart OpenStack. For Debian-based systems,
execute these commands on the appropriate nodes::

sudo glance-control api restart
sudo service nova-compute restart
sudo service cinder-volume restart
sudo service cinder-backup restart

For Red Hat-based systems, execute::

sudo service openstack-glance-api restart
sudo service openstack-nova-compute restart
sudo service openstack-cinder-volume restart
sudo service openstack-cinder-backup restart

Once OpenStack is up and running, you should be able to create a volume
and boot from it.


Booting from a Block Device
@@ -224,4 +294,4 @@ instance, choosing the image that you created the volume from, and selecting
'boot from volume' and the volume you created.
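
A command-line sketch of the same workflow, with IDs, names and sizes as
placeholders::

    cinder create --image-id {glance-image-id} --display-name boot-volume 10
    nova boot --flavor m1.small --block-device-mapping vda={volume-id}:::0 boot-from-volume-vm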

.. _qemu-img: ../qemu-rbd/#running-qemu-with-rbd
.. _Block Devices and OpenStack (Dumpling): http://ceph.com/docs/dumpling/rbd/rbd-openstack
