doc: s/osd/OSD/ if not part of a command
First attempt to unify usage of OSD over rst files.

Signed-off-by: Danny Al-Gaaf <[email protected]>
dalgaaf committed Mar 8, 2014
1 parent e666019 commit 72ee338
Showing 10 changed files with 38 additions and 38 deletions.
2 changes: 1 addition & 1 deletion doc/dev/config.rst
@@ -28,7 +28,7 @@ How do we find the configuration file? Well, in order, we check:
Each stanza of the configuration file describes the key-value pairs that will be in
effect for a particular subset of the daemons. The "global" stanza applies to
everything. The "mon", "osd", and "mds" stanzas specify settings to take effect
-for all monitors, all osds, and all mds servers, respectively. A stanza of the
+for all monitors, all OSDs, and all mds servers, respectively. A stanza of the
form mon.$name, osd.$name, or mds.$name gives settings for the monitor, OSD, or
MDS of that name, respectively. Configuration values that appear later in the
file win over earlier ones.
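
For illustration (not part of the patch), a minimal ``ceph.conf`` following the stanza scheme described above might look like this; the option names are typical examples, not prescriptions::

    [global]
        auth supported = cephx
    [osd]
        osd journal size = 1024
    [osd.0]
        host = node1
    [mon.a]
        host = node1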
4 changes: 2 additions & 2 deletions doc/dev/osd-class-path.rst
@@ -7,10 +7,10 @@
2011-12-05 17:41:00.994075 7ffe8b5c3760 librbd: failed to assign a block name for image
create error: error 5: Input/output error

-This usually happens because your osds can't find ``cls_rbd.so``. They
+This usually happens because your OSDs can't find ``cls_rbd.so``. They
search for it in ``osd_class_dir``, which may not be set correctly by
default (http://tracker.newdream.net/issues/1722).

Most likely it's looking in ``/usr/lib/rados-classes`` instead of
``/usr/lib64/rados-classes`` - change ``osd_class_dir`` in your
-``ceph.conf`` and restart the osds to fix it.
+``ceph.conf`` and restart the OSDs to fix it.
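
As a sketch of that fix (assuming the standard ``[osd]`` section; the restart command depends on your init system, e.g. ``service ceph restart osd.0`` on sysvinit installs)::

    [osd]
        osd class dir = /usr/lib64/rados-classes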
20 changes: 10 additions & 10 deletions doc/dev/osd_internals/erasure_coding/pgbackend.rst
@@ -12,7 +12,7 @@ coding as failure recovery mechanisms.

Much of the existing PG logic, particularly that for dealing with
peering, will be common to each. With both schemes, a log of recent
-operations will be used to direct recovery in the event that an osd is
+operations will be used to direct recovery in the event that an OSD is
down or disconnected for a brief period of time. Similarly, in both
cases it will be necessary to scan a recovered copy of the PG in order
to recover an empty OSD. The PGBackend abstraction must be
@@ -35,7 +35,7 @@ and erasure coding which PGBackend must abstract over:
acting set positions.
5. Selection of a pgtemp for backfill may differ between replicated
and erasure coded backends.
-6. The set of necessary osds from a particular interval required to
+6. The set of necessary OSDs from a particular interval required to
   continue peering may differ between replicated and erasure
coded backends.
7. The selection of the authoritative log may differ between replicated
@@ -115,7 +115,7 @@ the last interval which went active in order to minimize the number of
divergent objects.

The difficulty is that the current code assumes that as long as it has
-an info from at least 1 osd from the prior interval, it can complete
+an info from at least 1 OSD from the prior interval, it can complete
peering. In order to ensure that we do not end up with an
unrecoverably divergent object, a K+M erasure coded PG must hear from at
least K of the replicas of the last interval to serve writes. This ensures
@@ -140,8 +140,8 @@ PGBackend interfaces:
PGTemp
------

-Currently, an osd is able to request a temp acting set mapping in
-order to allow an up-to-date osd to serve requests while a new primary
+Currently, an OSD is able to request a temp acting set mapping in
+order to allow an up-to-date OSD to serve requests while a new primary
is backfilled (and for other reasons). An erasure coded pg needs to
be able to designate a primary for these reasons without putting it
in the first position of the acting set. It also needs to be able
@@ -161,7 +161,7 @@ Client Reads
------------

Reads with the replicated strategy can always be satisfied
-synchronously out of the primary osd. With an erasure coded strategy,
+synchronously out of the primary OSD. With an erasure coded strategy,
the primary will need to request data from some number of replicas in
order to satisfy a read. The perform_read() interface for PGBackend
therefore will be async.
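
A toy sketch of why the interface must be async (illustrative Python, not the actual PGBackend API; ``ec_read_ready`` is a hypothetical helper): with a K+M code, a read cannot complete until K chunk replies have arrived::

    # Illustrative only: a replicated read is served from one copy, but an
    # erasure coded read must first gather K of the K+M chunks to decode.
    def ec_read_ready(replies, k):
        """Return the chunks to decode from, or None while still waiting."""
        return replies[:k] if len(replies) >= k else None

    print(ec_read_ready(['c0', 'c1'], k=3))        # None: keep waiting
    print(ec_read_ready(['c0', 'c1', 'c2'], k=3))  # enough to reconstruct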
@@ -179,7 +179,7 @@ With the replicated strategy, all replicas of a PG are
interchangeable. With erasure coding, different positions in the
acting set have different pieces of the erasure coding scheme and are
not interchangeable. Worse, crush might cause chunk 2 to be written
-to an osd which happens already to contain an (old) copy of chunk 4.
+to an OSD which happens already to contain an (old) copy of chunk 4.
This means that the OSD and PG messages need to work in terms of a
type like pair<shard_t, pg_t> in order to distinguish different pg
chunks on a single OSD.
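
As an illustration of such a type (a hypothetical Python analogue, not the real C++ definition), keying state by (shard, pg) lets two chunks of one PG coexist on the same OSD::

    from collections import namedtuple

    # A toy pair<shard_t, pg_t>: the shard id disambiguates chunks of the
    # same placement group stored on a single OSD.
    ShardPG = namedtuple('ShardPG', ['shard', 'pgid'])

    store = {}
    store[ShardPG(shard=2, pgid='1.a3')] = 'chunk 2'
    store[ShardPG(shard=4, pgid='1.a3')] = 'old chunk 4'  # same PG, same OSD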
@@ -293,7 +293,7 @@ Backfill
See `Issue #5856`_. For the most part, backfill itself should behave similarly between
replicated and erasure coded pools with a few exceptions:

-1. We probably want to be able to backfill multiple osds concurrently
+1. We probably want to be able to backfill multiple OSDs concurrently
with an erasure coded pool in order to cut down on the read
overhead.
2. We probably want to avoid having to place the backfill peers in the
@@ -302,7 +302,7 @@ replicated and erasure coded pools with a few exceptions:

For 2, we don't really need to place the backfill peer in the acting
set for replicated PGs anyway.
-For 1, PGBackend::choose_backfill() should determine which osds are
+For 1, PGBackend::choose_backfill() should determine which OSDs are
backfilled in a particular interval.

Core changes:
@@ -315,7 +315,7 @@ Core changes:

PGBackend interfaces:

-- choose_backfill(): allows the implementation to determine which osds
+- choose_backfill(): allows the implementation to determine which OSDs
should be backfilled in a particular interval.

.. _Issue #5856: http://tracker.ceph.com/issues/5856
12 changes: 6 additions & 6 deletions doc/dev/peering.rst
@@ -35,7 +35,7 @@ Concepts
to [3,1,2] and osd.3 becomes the primary.

*current interval* or *past interval*
-a sequence of osd map epochs during which the *acting set* and *up
+a sequence of OSD map epochs during which the *acting set* and *up
 set* for a particular PG do not change

*primary*
@@ -95,7 +95,7 @@ Concepts
*up_thru*
before a primary can successfully complete the *peering* process,
it must inform a monitor that is alive through the current
-osd map epoch by having the monitor set its *up_thru* in the osd
+OSD map epoch by having the monitor set its *up_thru* in the osd
map. This helps peering ignore previous *acting sets* for which
peering never completed after certain sequences of failures, such as
the second interval below:
@@ -135,7 +135,7 @@ process:
of many placement groups.

Before a primary successfully completes the *peering*
-process, the osd map must reflect that the OSD was alive
+process, the OSD map must reflect that the OSD was alive
and well as of the first epoch in the *current interval*.

Changes can only be made after successful *peering*.
@@ -157,11 +157,11 @@ The high level process is for the current PG primary to:

2. generate a list of *past intervals* since *last epoch started*.
Consider the subset of those for which *up_thru* was greater than
-the first interval epoch by the last interval epoch's osd map; that is,
+the first interval epoch by the last interval epoch's OSD map; that is,
the subset for which *peering* could have completed before the *acting
set* changed to another set of OSDs.

-Successfull *peering* will require that we be able to contact at
+Successful *peering* will require that we be able to contact at
least one OSD from each of *past interval*'s *acting set*.

3. ask every node in that list for its *PG info*, which includes the most
@@ -213,7 +213,7 @@ The high level process is for the current PG primary to:
my own (*authoritative history*) ... which may involve deciding
to delete divergent objects.

-b) await acknowledgement that they have persisted the PG log entries.
+b) await acknowledgment that they have persisted the PG log entries.

9. at this point all OSDs in the *acting set* agree on all of the meta-data,
and would (in any future *peering*) return identical accounts of all
6 changes: 3 additions & 3 deletions doc/dev/placement-group.rst
@@ -58,12 +58,12 @@ something like this in pseudocode::
locator = object_name
obj_hash = hash(locator)
pg = obj_hash % num_pg
-osds_for_pg = crush(pg) # returns a list of osds
+osds_for_pg = crush(pg) # returns a list of OSDs
primary = osds_for_pg[0]
replicas = osds_for_pg[1:]

If you want to understand the crush() part in the above, imagine a
-perfectly spherical datacenter in a vacuum ;) that is, if all osds
+perfectly spherical datacenter in a vacuum ;) that is, if all OSDs
have weight 1.0, and there is no topology to the data center (all OSDs
are on the top level), and you use defaults, etc, it simplifies to
consistent hashing; you can think of it as::
Expand All @@ -76,7 +76,7 @@ consistent hashing; you can think of it as::
r = hash(pg)
chosen = all_osds[ r % len(all_osds) ]
if chosen in result:
-# osd can be picked only once
+# OSD can be picked only once
continue
result.append(chosen)
return result
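
Note that, as visible above, the retry appears to recompute the same ``r`` on every pass, so a collision would loop forever; real CRUSH varies the hash input per attempt. A corrected toy version of the same idea (illustrative only, assuming ``num_replicas <= len(all_osds)``)::

    def toy_crush(pg, all_osds, num_replicas):
        result, attempt = [], 0
        while len(result) < num_replicas:
            r = hash((pg, attempt))   # vary the hash input each attempt
            attempt += 1
            chosen = all_osds[r % len(all_osds)]
            if chosen in result:
                continue              # OSD can be picked only once
            result.append(chosen)
        return result

    # Output varies per process (Python salts hash); real CRUSH is deterministic.
    print(toy_crush('1.a3', ['osd.0', 'osd.1', 'osd.2'], 2))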
2 changes: 1 addition & 1 deletion doc/dev/rbd-layering.rst
@@ -277,5 +277,5 @@ A new clone method will be added, which takes the same arguments as
create except size (size of the parent image is used).

Instead of expanding the rbd_info struct, we will break the metadata
-retrieval into several api calls. Right now, the only users of
+retrieval into several API calls. Right now, the only users of
rbd_stat() other than 'rbd info' only use it to retrieve image size.
12 changes: 6 additions & 6 deletions doc/rados/operations/control.rst
@@ -91,18 +91,18 @@ or delete them if they were just created. ::
OSD Subsystem
=============

-Query osd subsystem status. ::
+Query OSD subsystem status. ::

ceph osd stat

-Write a copy of the most recent osd map to a file. See
+Write a copy of the most recent OSD map to a file. See
`osdmaptool`_. ::

ceph osd getmap -o file

.. _osdmaptool: ../../man/8/osdmaptool

-Write a copy of the crush map from the most recent osd map to
+Write a copy of the crush map from the most recent OSD map to
file. ::

ceph osd getcrushmap -o file
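
For example, to fetch both maps and inspect them offline (``osdmaptool --print`` and ``crushtool -d`` usage as documented in their man pages; paths illustrative)::

    ceph osd getmap -o /tmp/osdmap
    osdmaptool /tmp/osdmap --print
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt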
@@ -160,7 +160,7 @@ Remove the given OSD(s). ::

ceph osd rm [{id}...]

-Query the current max_osd parameter in the osd map. ::
+Query the current max_osd parameter in the OSD map. ::

ceph osd getmaxosd

@@ -269,11 +269,11 @@ Sends a scrub command to OSD ``{osd-num}``. To send the command to all OSDs, use

ceph osd scrub {osd-num}

-Sends a repair command to osdN. To send the command to all osds, use ``*``. ::
+Sends a repair command to OSD.N. To send the command to all OSDs, use ``*``. ::

ceph osd repair N

-Runs a simple throughput benchmark against osdN, writing ``TOTAL_BYTES``
+Runs a simple throughput benchmark against OSD.N, writing ``TOTAL_BYTES``
in write requests of ``BYTES_PER_WRITE`` each. By default, the test
writes 1 GB in total in 4-MB increments. ::

2 changes: 1 addition & 1 deletion doc/rados/operations/pg-concepts.rst
@@ -84,7 +84,7 @@ of the following terms:
*up_thru*
Before a *Primary* can successfully complete the *Peering* process,
it must inform a monitor that is alive through the current
-osd map *Epoch* by having the monitor set its *up_thru* in the osd
+OSD map *Epoch* by having the monitor set its *up_thru* in the osd
map. This helps *Peering* ignore previous *Acting Sets* for which
*Peering* never completed after certain sequences of failures, such as
the second interval below:
12 changes: 6 additions & 6 deletions doc/rados/troubleshooting/troubleshooting-osd.rst
@@ -412,8 +412,8 @@ on the monitor, while marking themselves ``up``. We call this scenario
If something is causing OSDs to 'flap' (repeatedly getting marked ``down`` and
then ``up`` again), you can force the monitors to stop the flapping with::

-ceph osd set noup # prevent osds from getting marked up
-ceph osd set nodown # prevent osds from getting marked down
+ceph osd set noup # prevent OSDs from getting marked up
+ceph osd set nodown # prevent OSDs from getting marked down

These flags are recorded in the osdmap structure::

@@ -426,9 +426,9 @@ You can clear the flags with::
ceph osd unset nodown

Two other flags are supported, ``noin`` and ``noout``, which prevent
-booting OSDs from being marked ``in`` (allocated data) or down
-ceph-osds from eventually being marked ``out`` (regardless of what the
-current value for ``mon osd down out interval`` is).
+booting OSDs from being marked ``in`` (allocated data) or protect OSDs
+from eventually being marked ``out`` (regardless of what the current value for
+``mon osd down out interval`` is).
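
A common use of these flags (illustrative): set ``noout`` before planned maintenance so that stopped OSDs are not marked ``out`` and no rebalancing starts, then clear it afterwards::

    ceph osd set noout      # before taking OSDs down for maintenance
    ceph osd unset noout    # once the OSDs are back up and in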

.. note:: ``noup``, ``noout``, and ``nodown`` are temporary in the
sense that once the flags are cleared, the action they were blocking
@@ -454,4 +454,4 @@ current value for ``mon osd down out interval`` is).
.. _unsubscribe from the ceph-users email list: mailto:[email protected]
.. _Inktank: http://inktank.com
.. _OS recommendations: ../../../install/os-recommendations
-.. _ceph-devel: [email protected]
+.. _ceph-devel: [email protected]
4 changes: 2 additions & 2 deletions doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -203,7 +203,7 @@ data, but it is ``down``. The full range of possible states include::

* already probed
* querying
-* osd is down
+* OSD is down
* not queried (yet)

Sometimes it simply takes some time for the cluster to query possible
@@ -286,4 +286,4 @@ in the `Pool, PG and CRUSH Config Reference`_ for details.
.. _check: ../../operations/placement-groups#get-the-number-of-placement-groups
.. _here: ../../configuration/pool-pg-config-ref
.. _Placement Groups: ../../operations/placement-groups
-.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
+.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
