
Removed "Ceph Development Status" per Bryan
Modified title syntax per Tommi
Modified paragraph width to 80-chars per Dan
Moved "Build from Source" out of Install
Renamed create_cluster to config-cluster
Added config-ref with configuration reference tables
Added a toc ref for man/1/obsync per Dan
Removed redundant sections from Ops
Deleted "Why use Ceph" and "Introduction to Storage Clusters"



Signed-off-by: John Wilkins <[email protected]>
John Wilkins committed May 3, 2012
1 parent ec99775 commit d49c3d2
Showing 40 changed files with 352 additions and 889 deletions.
122 changes: 63 additions & 59 deletions doc/create_cluster/ceph_conf.rst → doc/config-cluster/ceph_conf.rst
@@ -1,23 +1,27 @@
========================
Ceph Configuration Files
========================
When you start the Ceph service, the initialization process activates a series of daemons that run in the background.
The hosts in a typical RADOS cluster run at least one of three processes or daemons:
==========================
Ceph Configuration Files
==========================
When you start the Ceph service, the initialization process activates a series
of daemons that run in the background. The hosts in a typical RADOS cluster run
at least one of three processes or daemons:

- RADOS (``ceph-osd``)
- Monitor (``ceph-mon``)
- Metadata Server (``ceph-mds``)

Each process or daemon looks for a ``ceph.conf`` file that provides their configuration settings.
The default ``ceph.conf`` locations in sequential order include:
Each process or daemon looks for a ``ceph.conf`` file that provides its
configuration settings. The default ``ceph.conf`` locations, in sequential
order, include:

1. ``$CEPH_CONF`` (*i.e.,* the path following the ``$CEPH_CONF`` environment variable)
1. ``$CEPH_CONF`` (*i.e.,* the path specified by
   the ``$CEPH_CONF`` environment variable)
2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
3. /etc/ceph/ceph.conf
3. ``/etc/ceph/ceph.conf``
4. ``~/.ceph/config``
5. ``./ceph.conf`` (*i.e.,* in the current working directory)
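
For example, to point a Ceph command at a non-default configuration file, you
can use the ``-c`` argument (the path here is illustrative)::

    ceph -c /srv/ceph/mycluster.conf health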

The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you have installed the Ceph packages on the OSD Cluster hosts, you need to create
The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
have installed the Ceph packages on the OSD Cluster hosts, you need to create
a ``ceph.conf`` file to configure your OSD cluster.
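
As a minimal sketch, a small cluster's ``ceph.conf`` might look like the
following (the host names and address are illustrative, not defaults)::

    [global]
    auth supported = cephx

    [mon.a]
    host = mon-host-1
    mon addr = 10.0.0.1:6789

    [osd.0]
    host = osd-host-1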

Creating ``ceph.conf``
@@ -29,7 +33,8 @@ The ``ceph.conf`` file defines:
- Paths to Hosts
- Runtime Options

You can add comments to the ``ceph.conf`` file by preceding comments with a semi-colon (;). For example::
You can add comments to the ``ceph.conf`` file by preceding comments with
a semi-colon (;). For example::

; <--A semi-colon precedes a comment
; A comment may be anything, and always follows a semi-colon on each line.
@@ -45,8 +50,6 @@ in a RADOS cluster.
+=================+==============+==============+=================+=================================================+
| All Modules | All | ``[global]`` | N/A | Settings affect all instances of all daemons. |
+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
| Groups | Group | ``[group]`` | Alphanumeric | Settings affect all instances within the group |
+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
| RADOS | ``ceph-osd`` | ``[osd]`` | Numeric | Settings affect RADOS instances only. |
+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
| Monitor | ``ceph-mon`` | ``[mon]`` | Alphanumeric | Settings affect monitor instances only. |
@@ -56,8 +59,10 @@ in a RADOS cluster.

Metavariables
~~~~~~~~~~~~~
The configuration system supports certain 'metavariables,' which are typically used in ``[global]`` or process/daemon settings.
If metavariables occur inside a configuration value, Ceph expands them into a concrete value--similar to how Bash shell expansion works.
The configuration system supports certain 'metavariables,' which are typically
used in ``[global]`` or process/daemon settings. If metavariables occur inside
a configuration value, Ceph expands them into a concrete value--similar to how
Bash shell expansion works.

There are a few different metavariables:

@@ -74,46 +79,41 @@ There are a few different metavariables:
+--------------+----------------------------------------------------------------------------------------------------------+
| ``$name`` | Expands to ``$type.$id``. |
+--------------+----------------------------------------------------------------------------------------------------------+
| ``$cluster`` | Expands to the cluster name. Useful when running multiple clusters on the same hardware. |
+--------------+----------------------------------------------------------------------------------------------------------+
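
For example, a single ``[osd]`` entry can use metavariables to generate
per-instance values (the paths here are illustrative)::

    [osd]
    osd data = /srv/ceph/osd$id
    keyring = /etc/ceph/keyring.$name

For ``osd.1``, ``$id`` expands to ``1`` and ``$name`` expands to ``osd.1``, so
the data path becomes ``/srv/ceph/osd1``.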

Global Settings
~~~~~~~~~~~~~~~
The Ceph configuration file supports a hierarchy of settings, where child settings inherit the settings of the parent.
Global settings affect all instances of all processes in the cluster. Use the ``[global]`` setting for values that
are common for all hosts in the cluster. You can override each ``[global]`` setting by:
The Ceph configuration file supports a hierarchy of settings, where child
settings inherit the settings of the parent. Global settings affect all
instances of all processes in the cluster. Use the ``[global]`` setting for
values that are common for all hosts in the cluster. You can override each
``[global]`` setting by:

1. Changing the setting in a particular ``[group]``.
2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` )

Overriding a global setting affects all child processes, except those that you specifically override.

For example::
Overriding a global setting affects all child processes, except those that
you specifically override. For example::

[global]
; Enable authentication between hosts within the cluster.
auth supported = cephx
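
As a sketch of the override hierarchy, a value set in ``[global]`` may be
restated under a process type or a single instance (the ``debug ms`` values
here are illustrative)::

    [global]
    debug ms = 0

    [osd]
    ; overrides [global] for all OSDs
    debug ms = 1

    [osd.1]
    ; overrides [osd] for osd.1 only
    debug ms = 5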

Group Settings
~~~~~~~~~~~~~~
Group settings affect all instances of all processes in a group. Use the ``[group]`` setting for values that
are common for all hosts in a group within the cluster. Each group must have a name. For example::

[group primary]
addr = 10.9.8.7

[group secondary]
addr = 6.5.4.3

auth supported = cephx

Process/Daemon Settings
~~~~~~~~~~~~~~~~~~~~~~~
You can specify settings that apply to a particular type of process. When you specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a particular instance,
the setting will apply to all OSDs, monitors or metadata daemons respectively.
You can specify settings that apply to a particular type of process. When you
specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
particular instance, the setting will apply to all OSDs, monitors or metadata
daemons respectively.

Instance Settings
~~~~~~~~~~~~~~~~~
You may specify settings for particular instances of an daemon. You may specify an instance by entering its type, delimited by a period (.) and
by the instance ID. The instance ID for an OSD is always numeric, but it may be alphanumeric for monitors and metadata servers. ::
You may specify settings for particular instances of a daemon. Specify an
instance by entering the daemon type, followed by a period (.) and the
instance ID. The instance ID for an OSD is always numeric, but it may be
alphanumeric for monitors and metadata servers. ::

[osd.1]
; settings affect osd.1 only.
@@ -124,14 +124,17 @@ by the instance ID. The instance ID for an OSD is always numeric, but it may be

``host`` and ``addr`` Settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The `Hardware Recommendations <../install/hardware_recommendations>`_ section provides some hardware guidelines for configuring the cluster.
It is possible for a single host to run multiple daemons. For example, a single host with multiple disks or RAIDs may run one ``ceph-osd``
for each disk or RAID. Additionally, a host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon on the same host. Ideally, you will have
a host for a particular type of process. For example, one host may run ``ceph-osd`` daemons, another host may run a ``ceph-mds`` daemon,
and other hosts may run ``ceph-mon`` daemons.

Each host has a name identified by the ``host`` setting, and a network location (i.e., domain name or IP address) identified by the ``addr`` setting.
For example::
The `Hardware Recommendations <../hardware_recommendations>`_ section
provides some hardware guidelines for configuring the cluster. It is possible
for a single host to run multiple daemons. For example, a single host with
multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
Additionally, a host may run both a ``ceph-mon`` and a ``ceph-osd`` daemon.
Ideally, you will dedicate a host to a particular type of process. For example,
one host may run ``ceph-osd`` daemons, another host may run a ``ceph-mds``
daemon, and other hosts may run ``ceph-mon`` daemons.

Each host has a name identified by the ``host`` setting, and a network location
(i.e., domain name or IP address) identified by the ``addr`` setting. For example::

[osd.1]
host = hostNumber1
@@ -143,10 +146,12 @@ For example::

Monitor Configuration
~~~~~~~~~~~~~~~~~~~~~
Ceph typically deploys with 3 monitors to ensure high availability should a monitor instance crash. An odd number of monitors (3) ensures
that the Paxos algorithm can determine which version of the cluster map is the most accurate.
Ceph typically deploys with 3 monitors to ensure high availability should a
monitor instance crash. An odd number of monitors (3) ensures that the Paxos
algorithm can determine which version of the cluster map is the most accurate.

.. note:: You may deploy Ceph with a single monitor, but if the instance fails, the lack of a monitor may interrupt data service availability.
.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
the lack of a monitor may interrupt data service availability.

Ceph monitors typically listen on port ``6789``.
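
A monitor entry therefore typically includes the port in its address (the host
name and IP address are illustrative)::

    [mon.a]
    host = mon-host-1
    mon addr = 10.0.0.1:6789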

@@ -156,17 +161,16 @@ Example Configuration File
.. literalinclude:: demo-ceph.conf
:language: ini


Configuration File Deployment Options
-------------------------------------
The most common way to deploy the ``ceph.conf`` file in a cluster is to have all hosts share the same configuration file.

You may create a ``ceph.conf`` file for each host if you wish, or specify a particular ``ceph.conf`` file for a subset of hosts within the cluster. However, using per-host ``ceph.conf``
configuration files imposes a maintenance burden as the cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file on the Administration host and then copies that
file to each OSD Cluster host.

The current cluster deployment script, ``mkcephfs``, does not make copies of the ``ceph.conf``. You must copy the file manually.


The most common way to deploy the ``ceph.conf`` file in a cluster is to have
all hosts share the same configuration file.

You may create a ``ceph.conf`` file for each host if you wish, or specify a
particular ``ceph.conf`` file for a subset of hosts within the cluster.
However, using per-host ``ceph.conf`` configuration files imposes a
maintenance burden as the cluster grows. In a typical deployment, an
administrator creates a ``ceph.conf`` file on the Administration host and then
copies that file to each OSD Cluster host.

The current cluster deployment script, ``mkcephfs``, does not make copies of the
``ceph.conf``. You must copy the file manually.
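
A manual copy might use ``scp``, for example (the host name is illustrative)::

    scp /etc/ceph/ceph.conf osd-host-1:/etc/ceph/ceph.conf
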
File renamed without changes.
@@ -1,8 +1,9 @@
============================
Deploying Ceph Configuration
============================
Ceph's current deployment script does not copy the configuration file you created from the Administration host
to the OSD Cluster hosts. Copy the configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
==============================
Deploying Ceph Configuration
==============================
Ceph's current deployment script does not copy the configuration file you
created from the Administration host to the OSD Cluster hosts. Copy the
configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.

::
52 changes: 52 additions & 0 deletions doc/config-cluster/file_system_recommendations.rst
@@ -0,0 +1,52 @@
=========================================
Hard Disk and File System Recommendations
=========================================

Ceph aims for data safety, which means that when the application receives notice
that data was written to the disk, that data was actually written to the disk.
For old kernels (<2.6.33), disable the write cache if the journal is on a raw
disk. Newer kernels should work fine.

Use ``hdparm`` to disable write caching on the hard disk::

$ hdparm -W 0 /dev/hda
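
To verify the write-cache state afterward, run ``hdparm -W`` without a value
(the device name is illustrative)::

    $ hdparm -W /dev/hda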


Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
system for:

- Internal object state
- Snapshot metadata
- RADOS Gateway Access Control Lists (ACLs).

Ceph OSDs rely heavily upon the stability and performance of the underlying file
system. The underlying file system must provide sufficient capacity for XATTRs.
File system candidates for Ceph include B tree and B+ tree file systems such as:

- ``btrfs``
- ``XFS``

If you are using ``ext4``, enable XATTRs. ::

filestore xattr use omap = true

.. warning:: XATTR limits.

The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit
for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45
or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file
system choice if you intend to deploy the RADOS Gateway or use snapshots on
versions earlier than 0.45.

.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production.

The Ceph team believes that the best performance and stability will come from
``btrfs``. The ``btrfs`` file system has internal transactions that keep the
local data set in a consistent state. This makes OSDs based on ``btrfs`` simple
to deploy, while providing scalability not currently available from block-based
file systems. The 64 KB XATTR limit for ``xfs`` is enough to accommodate RBD
snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice file
system of the Ceph team in the long run, but ``xfs`` is currently more stable
than ``btrfs``. If you only plan to use RADOS and ``rbd`` without snapshots and
without ``radosgw``, the ``ext4`` file system should work just fine.

29 changes: 29 additions & 0 deletions doc/config-cluster/index.rst
@@ -0,0 +1,29 @@
===============================
Configuring a Storage Cluster
===============================
Ceph can run with a cluster containing thousands of Object Storage Devices
(OSDs). A minimal system will have at least two OSDs for data replication. To
configure OSD clusters, you must provide settings in the configuration file.
Ceph provides default values for many settings, which you can override in the
configuration file. Additionally, you can make runtime modification to the
configuration using command-line utilities.
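
For example, a runtime change might be injected with the ``ceph`` tool (the
daemon ID and debug setting are illustrative, and the exact syntax may vary by
version)::

    ceph osd tell 0 injectargs '--debug-osd 20'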

When Ceph starts, it activates up to three daemons:

- ``ceph-osd`` (mandatory)
- ``ceph-mon`` (mandatory)
- ``ceph-mds`` (mandatory for cephfs only)

Each process, daemon or utility loads the host's configuration file. A process
may have information about more than one daemon instance (*i.e.,* multiple
contexts). A daemon or utility only has information about a single daemon
instance (a single context).

.. note:: Ceph can run on a single host for evaluation purposes.

.. toctree::

file_system_recommendations
Configuration <ceph_conf>
Deploy Config <deploying_ceph_conf>
deploying_ceph_with_mkcephfs
@@ -1,6 +1,6 @@
======================================
Metadata Server Configuration Settings
======================================
===================
MDS Configuration
===================

+-----------------------------------+-------------------------+------------+------------------------------------------------+
| Setting | Type | Default | Description |
@@ -155,4 +155,4 @@ Metadata Server Configuration Settings
+-----------------------------------+-------------------------+------------+------------------------------------------------+


// make it (mds_session_timeout - mds_beacon_grace |
// make it (mds_session_timeout - mds_beacon_grace |
@@ -1,6 +1,6 @@
==============================
Monitor Configuration Settings
==============================
=======================
Monitor Configuration
=======================



@@ -60,4 +60,4 @@ Monitor Configuration Settings
| ``mon slurp timeout`` | Double | 10.0 | |
+-----------------------------------+----------------+---------------+-----------------------------------------------------------+

inactive | unclean | or stale (see doc/control.rst under dump stuck for more info |
inactive | unclean | or stale (see doc/control.rst under dump stuck for more info |
@@ -1,6 +1,6 @@
==========================
osd Configuration Settings
==========================
===================
OSD Configuration
===================

+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
| Setting | Type | Default Value | Description |
@@ -148,4 +148,4 @@ osd Configuration Settings
| ``osd op complaint time`` | Float | 30 | // how old in secs makes op complaint-worthy |
+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
| ``osd command max records`` | 32-bit Int | 256 | |
+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
+-----------------------------------------+---------------------+-----------------------+------------------------------------------------+
16 changes: 6 additions & 10 deletions doc/config.rst
@@ -1,16 +1,12 @@
=========================
Configuration reference
Configuration Reference
=========================

.. todo:: write me

OSD (RADOS)
===========

Monitor
=======

MDS
===

.. toctree::
:maxdepth: 1

config-ref/mon-config
config-ref/osd-config
config-ref/mds-config
5 changes: 0 additions & 5 deletions doc/create_cluster/deploying_with_chef.rst

This file was deleted.

