c* -> ceph-*
Hopefully I didn't miss too much...

Signed-off-by: Sage Weil <[email protected]>
liewegas committed Sep 22, 2011
1 parent 97aa1aa commit 6f8f140
Showing 83 changed files with 364 additions and 364 deletions.
8 changes: 4 additions & 4 deletions README
@@ -18,11 +18,11 @@ $ make
 A quick summary of binaries that will be built in src/
 
 daemons:
-cmon -- monitor daemon. handles cluster state and configuration
+ceph-mon -- monitor daemon. handles cluster state and configuration
         information.
-cosd -- storage daemon. stores objects on a given block device.
-cmds -- metadata daemon. handles file system namespace.
-cfuse -- fuse client.
+ceph-osd -- storage daemon. stores objects on a given block device.
+ceph-mds -- metadata daemon. handles file system namespace.
+ceph-fuse -- fuse client.
 
 tools:
 ceph -- send management commands to the monitor cluster.
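
Since the rename is purely mechanical, updating init scripts and shell habits is a one-for-one substitution (likewise cconf -> ceph-conf, csyn -> ceph-syn, crun -> ceph-run, and so on). A sketch of a typical invocation before and after; the daemon id "a" and the config path are illustrative, not part of this commit:

# before this commit:
cmon -i a -c /etc/ceph/ceph.conf
# after this commit:
ceph-mon -i a -c /etc/ceph/ceph.conf
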
36 changes: 18 additions & 18 deletions ceph.spec.in
@@ -158,17 +158,17 @@ fi
 %doc README COPYING
 %{_bindir}/ceph
 %{_bindir}/cephfs
-%{_bindir}/cconf
-%{_bindir}/cclsinfo
+%{_bindir}/ceph-conf
+%{_bindir}/ceph-clsinfo
 %{_bindir}/crushtool
 %{_bindir}/monmaptool
 %{_bindir}/osdmaptool
 %{_bindir}/cauthtool
-%{_bindir}/csyn
-%{_bindir}/crun
-%{_bindir}/cmon
-%{_bindir}/cmds
-%{_bindir}/cosd
+%{_bindir}/ceph-syn
+%{_bindir}/ceph-run
+%{_bindir}/ceph-mon
+%{_bindir}/ceph-mds
+%{_bindir}/ceph-osd
 %{_bindir}/crbdnamer
 %{_bindir}/librados-config
 %{_bindir}/rados
@@ -192,26 +192,26 @@ fi
 %{_sysconfdir}/bash_completion.d/radosgw_admin
 %{_sysconfdir}/bash_completion.d/rbd
 %config(noreplace) %{_sysconfdir}/logrotate.d/ceph
-%{_mandir}/man8/cmon.8*
-%{_mandir}/man8/cmds.8*
-%{_mandir}/man8/cosd.8*
+%{_mandir}/man8/ceph-mon.8*
+%{_mandir}/man8/ceph-mds.8*
+%{_mandir}/man8/ceph-osd.8*
 %{_mandir}/man8/mkcephfs.8*
-%{_mandir}/man8/crun.8*
-%{_mandir}/man8/csyn.8*
+%{_mandir}/man8/ceph-run.8*
+%{_mandir}/man8/ceph-syn.8*
 %{_mandir}/man8/crushtool.8*
 %{_mandir}/man8/osdmaptool.8*
 %{_mandir}/man8/monmaptool.8*
-%{_mandir}/man8/cconf.8*
+%{_mandir}/man8/ceph-conf.8*
 %{_mandir}/man8/ceph.8*
 %{_mandir}/man8/cephfs.8*
 %{_mandir}/man8/mount.ceph.8*
 %{_mandir}/man8/radosgw.8*
 %{_mandir}/man8/radosgw_admin.8*
 %{_mandir}/man8/rados.8*
 %{_mandir}/man8/rbd.8*
-%{_mandir}/man8/cauthtool.8*
-%{_mandir}/man8/cdebugpack.8*
-%{_mandir}/man8/cclsinfo.8.gz
+%{_mandir}/man8/ceph-authtool.8*
+%{_mandir}/man8/ceph-debugpack.8*
+%{_mandir}/man8/ceph-clsinfo.8.gz
 %{_mandir}/man8/librados-config.8.gz
 %{python_sitelib}/rados.py
 %{python_sitelib}/rados.pyc
@@ -228,8 +228,8 @@ fi
 %files fuse
 %defattr(-,root,root,-)
 %doc COPYING
-%{_bindir}/cfuse
-%{_mandir}/man8/cfuse.8*
+%{_bindir}/ceph-fuse
+%{_mandir}/man8/ceph-fuse.8*
 
 %files devel
 %defattr(-,root,root,-)
4 changes: 2 additions & 2 deletions debian/ceph-fuse.install
@@ -1,2 +1,2 @@
-usr/bin/cfuse
-usr/share/man/man8/cfuse.8
+usr/bin/ceph-fuse
+usr/share/man/man8/ceph-fuse.8
30 changes: 15 additions & 15 deletions debian/ceph.install
@@ -1,33 +1,33 @@
 usr/bin/ceph
 usr/bin/cephfs
-usr/bin/cconf
-usr/bin/cclsinfo
+usr/bin/ceph-conf
+usr/bin/ceph-clsinfo
 usr/bin/crushtool
 usr/bin/monmaptool
 usr/bin/osdmaptool
-usr/bin/crun
-usr/bin/cmon
-usr/bin/cmds
-usr/bin/cosd
+usr/bin/ceph-run
+usr/bin/ceph-mon
+usr/bin/ceph-mds
+usr/bin/ceph-osd
 usr/bin/cauthtool
-usr/bin/cdebugpack
+usr/bin/ceph-debugpack
 sbin/mkcephfs
 usr/lib/ceph/ceph_common.sh
 usr/lib/rados-classes/*
 usr/share/doc/ceph/sample.ceph.conf
 usr/share/doc/ceph/sample.fetch_config
-usr/share/man/man8/cmon.8
-usr/share/man/man8/cmds.8
-usr/share/man/man8/cosd.8
+usr/share/man/man8/ceph-mon.8
+usr/share/man/man8/ceph-mds.8
+usr/share/man/man8/ceph-osd.8
 usr/share/man/man8/mkcephfs.8
-usr/share/man/man8/crun.8
+usr/share/man/man8/ceph-run.8
 usr/share/man/man8/crushtool.8
 usr/share/man/man8/osdmaptool.8
 usr/share/man/man8/monmaptool.8
-usr/share/man/man8/cconf.8
+usr/share/man/man8/ceph-conf.8
 usr/share/man/man8/ceph.8
 usr/share/man/man8/cephfs.8
-usr/share/man/man8/cauthtool.8
-usr/share/man/man8/cclsinfo.8
-usr/share/man/man8/cdebugpack.8
+usr/share/man/man8/ceph-authtool.8
+usr/share/man/man8/ceph-clsinfo.8
+usr/share/man/man8/ceph-debugpack.8
 etc/bash_completion.d/ceph
42 changes: 21 additions & 21 deletions doc/architecture.rst
@@ -21,30 +21,30 @@ overhead.
 Monitor cluster
 ===============
 
-``cmon`` is a lightweight daemon that provides a consensus for
+``ceph-mon`` is a lightweight daemon that provides a consensus for
 distributed decisionmaking in a Ceph/RADOS cluster.
 
 It also is the initial point of contact for new clients, and will hand
 out information about the topology of the cluster, such as the
 ``osdmap``.
 
-You normally run 3 ``cmon`` daemons, on 3 separate physical machines,
+You normally run 3 ``ceph-mon`` daemons, on 3 separate physical machines,
 isolated from each other; for example, in different racks or rows.
 
 You could run just 1 instance, but that means giving up on high
 availability.
 
-You may use the same hosts for ``cmon`` and other purposes.
+You may use the same hosts for ``ceph-mon`` and other purposes.
 
-``cmon`` processes talk to each other using a Paxos_\-style
+``ceph-mon`` processes talk to each other using a Paxos_\-style
 protocol. They discover each other via the ``[mon.X] mon addr`` fields
 in ``ceph.conf``.
 
 .. todo:: What about ``monmap``? Fact check.
 
-Any decision requires the majority of the ``cmon`` processes to be
+Any decision requires the majority of the ``ceph-mon`` processes to be
 healthy and communicating with each other. For this reason, you never
-want an even number of ``cmon``\s; there is no unambiguous majority
+want an even number of ``ceph-mon``\s; there is no unambiguous majority
 subgroup for an even number.
 
 .. _Paxos: http://en.wikipedia.org/wiki/Paxos_algorithm
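
As a concrete picture of the ``[mon.X] mon addr`` fields mentioned above, a minimal three-monitor ceph.conf fragment could look like this; the hostnames and addresses are invented for the example, and 6789 is only the conventional monitor port:

# hypothetical hosts alpha/beta/gamma, one monitor each
[mon.a]
        host = alpha
        mon addr = 192.168.0.10:6789
[mon.b]
        host = beta
        mon addr = 192.168.0.11:6789
[mon.c]
        host = gamma
        mon addr = 192.168.0.12:6789
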
@@ -58,9 +58,9 @@ subgroup for an even number.
 RADOS
 =====
 
-``cosd`` is the storage daemon that provides the RADOS service. It
-uses ``cmon`` for cluster membership, services object read/write/etc
-request from clients, and peers with other ``cosd``\s for data
+``ceph-osd`` is the storage daemon that provides the RADOS service. It
+uses ``ceph-mon`` for cluster membership, services object read/write/etc
+request from clients, and peers with other ``ceph-osd``\s for data
 replication.
 
 The data model is fairly simple on this level. There are multiple
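
The rados tool that appears in the packaging lists above exercises this object model directly. A rough sketch; the pool and object names are invented examples:

rados mkpool mypool
rados -p mypool put greeting /etc/motd     # store a file as object "greeting"
rados -p mypool ls                         # list objects in the pool
rados -p mypool get greeting /tmp/greeting # read the object back into a file
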
@@ -77,7 +77,7 @@ metadata to store file owner etc.
 
 .. todo:: Verify that metadata is unordered.
 
-Underneath, ``cosd`` stores the data on a local filesystem. We
+Underneath, ``ceph-osd`` stores the data on a local filesystem. We
 recommend using Btrfs_, but any POSIX filesystem that has extended
 attributes should work (see :ref:`xattr`).
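
A quick probe to confirm that a candidate filesystem supports the extended attributes ceph-osd relies on; the path is an example, and setfattr/getfattr come from the Linux attr package:

touch /data/osd0/xattr-probe
setfattr -n user.test -v 1 /data/osd0/xattr-probe   # errors out if xattrs are unsupported
getfattr -n user.test /data/osd0/xattr-probe        # should echo back user.test="1"
rm /data/osd0/xattr-probe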

@@ -96,37 +96,37 @@ Ceph filesystem
 ===============
 
 The Ceph filesystem service is provided by a daemon called
-``cmds``. It uses RADOS to store all the filesystem metadata
+``ceph-mds``. It uses RADOS to store all the filesystem metadata
 (directories, file ownership, access modes, etc), and directs clients
 to access RADOS directly for the file contents.
 
 The Ceph filesystem aims for POSIX compatibility, except for a few
 chosen differences. See :doc:`/appendix/differences-from-posix`.
 
-``cmds`` can run as a single process, or it can be distributed out to
+``ceph-mds`` can run as a single process, or it can be distributed out to
 multiple physical machines, either for high availability or for
 scalability.
 
-For high availability, the extra ``cmds`` instances can be `standby`,
-ready to take over the duties of any failed ``cmds`` that was
+For high availability, the extra ``ceph-mds`` instances can be `standby`,
+ready to take over the duties of any failed ``ceph-mds`` that was
 `active`. This is easy because all the data, including the journal, is
 stored on RADOS. The transition is triggered automatically by
-``cmon``.
+``ceph-mon``.
 
-For scalability, multiple ``cmds`` instances can be `active`, and they
+For scalability, multiple ``ceph-mds`` instances can be `active`, and they
 will split the directory tree into subtrees (and shards of a single
 busy directory), effectively balancing the load amongst all `active`
 servers.
 
 Combinations of `standby` and `active` etc are possible, for example
-running 3 `active` ``cmds`` instances for scaling, and one `standby`.
+running 3 `active` ``ceph-mds`` instances for scaling, and one `standby`.
 
-To control the number of `active` ``cmds``\es, see
+To control the number of `active` ``ceph-mds``\es, see
 :doc:`/ops/manage/grow/mds`.
 
 .. topic:: Status as of 2011-09:
 
-   Multiple `active` ``cmds`` operation is stable under normal
+   Multiple `active` ``ceph-mds`` operation is stable under normal
    circumstances, but some failure scenarios may still cause
    operational issues.
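
To make the `active`/`standby` split concrete: with three ``ceph-mds`` daemons defined in ceph.conf but a single active rank configured, the surplus daemons wait as standbys, and one of them takes over if the active daemon fails. A sketch with invented names; in this era the number of active ranks was controlled by the ``max mds`` setting, and the ops document referenced above is authoritative:

# three mds daemons on hypothetical hosts; only "max mds" of them are active
[mds.alpha]
        host = alpha
[mds.beta]
        host = beta
[mds.gamma]
        host = gamma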

@@ -166,14 +166,14 @@ virtualization. This is done with the command-line tool ``rbd`` (see
 The latter is also useful in non-virtualized scenarios.
 
 Internally, RBD stripes the device image over multiple RADOS objects,
-each typically located on a separate ``cosd``, allowing it to perform
+each typically located on a separate ``ceph-osd``, allowing it to perform
 better than a single server could.
 
 
 Client
 ======
 
-.. todo:: cephfs, cfuse, librados, libceph, librbd
+.. todo:: cephfs, ceph-fuse, librados, libceph, librbd
 
 
 .. todo:: Summarize how much Ceph trusts the client, for what parts (security vs reliability).
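
A minimal rbd session matching the description above; the image name is invented and --size is given in megabytes:

rbd create myimage --size 1024   # 1 GB image in the default "rbd" pool
rbd ls                           # list images
rbd info myimage                 # reports size and object striping layout
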
4 changes: 2 additions & 2 deletions doc/dev/object-store.rst
@@ -40,8 +40,8 @@
 
 "objecter" -> "OSDMap"
 
-"cosd" -> "PG"
-"cosd" -> "ObjectStore"
+"ceph-osd" -> "PG"
+"ceph-osd" -> "ObjectStore"
 
 "crushtool" -> "CrushWrapper"
 
6 changes: 3 additions & 3 deletions doc/glossary.rst
@@ -20,13 +20,13 @@
 Object
     .. todo:: write me
 
-cosd
+ceph-osd
     .. todo:: write me
 
-cmon
+ceph-mon
     .. todo:: write me
 
-cmds
+ceph-mds
     Ceph MDS, the actual daemon blahblah
 
 radosgw
4 changes: 2 additions & 2 deletions doc/index.rst
@@ -61,10 +61,10 @@ mature next.
 The Ceph filesystem is functionally fairly complete, but has not been
 tested well enough at scale and under load yet. Multi-master MDS is
 still problematic and we recommend running just one active MDS
-(standbys are ok). If you have problems with ``kclient`` or ``cfuse``,
+(standbys are ok). If you have problems with ``kclient`` or ``ceph-fuse``,
 you may wish to try the other option; in general, ``kclient`` is
 expected to be faster (but be sure to use the latest Linux kernel!)
-while ``cfuse`` provides better stability by not triggering kernel
+while ``ceph-fuse`` provides better stability by not triggering kernel
 crashes.
 
 As individual systems mature enough, we move to improving their
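
For reference, the two clients mentioned above attach differently; a sketch with an invented monitor address:

# kernel client (kclient):
mount -t ceph 192.168.0.10:6789:/ /mnt/ceph
# FUSE client, using the binary renamed by this commit:
ceph-fuse -m 192.168.0.10:6789 /mnt/ceph
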
10 changes: 5 additions & 5 deletions doc/man/8/cclsinfo.rst
@@ -1,19 +1,19 @@
 ===========================================
-cclsinfo -- show class object information
+ceph-clsinfo -- show class object information
 ===========================================
 
-.. program:: cclsinfo
+.. program:: ceph-clsinfo
 
 Synopsis
 ========
 
-| **cclsinfo** [ *options* ] ... *filename*
+| **ceph-clsinfo** [ *options* ] ... *filename*
 
 Description
 ===========
 
-**cclsinfo** can show name, version, and architecture information
+**ceph-clsinfo** can show name, version, and architecture information
 about a specific class object.
 
 
@@ -36,7 +36,7 @@ Options
 Availability
 ============
 
-**cclsinfo** is part of the Ceph distributed file system. Please
+**ceph-clsinfo** is part of the Ceph distributed file system. Please
 refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
 information.
 
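A plausible invocation, pointing at one of the class objects installed under usr/lib/rados-classes/ in the packaging lists above; the exact .so name is an example:

ceph-clsinfo /usr/lib/rados-classes/libcls_rbd.so   # prints the class's name, version, and architecture
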
(Diff truncated: the remaining 74 of the 83 changed files are not shown.)
