
doc: Multiple spelling fixes
I ran a lot of the docs through aspell and found a number of spelling problems.

Signed-off-by: Bryan Stillwell <[email protected]>
bstillwell-godaddy committed Aug 9, 2018
1 parent ebaa806 commit 791b00d
Showing 14 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion doc/install/install-ceph-gateway.rst
@@ -264,7 +264,7 @@ system-wide value. You can also set it for each instance in your Ceph
configuration file.

Once you have changed your bucket sharding configuration in your Ceph
-configuration file, restart your gateway. On Red Hat Enteprise Linux execute::
+configuration file, restart your gateway. On Red Hat Enterprise Linux execute::

sudo systemctl restart ceph-radosgw.service

2 changes: 1 addition & 1 deletion doc/install/manual-freebsd-deployment.rst
@@ -59,7 +59,7 @@ Current implementation works on ZFS pools
* Some cache and log (ZIL) can be attached.
Please note that this is different from the Ceph journals. Cache and log are
totally transparent for Ceph, and help the filesystem to keep the system
-consistant and help performance.
+consistent and help performance.
Assuming that ada2 is an SSD::

gpart create -s GPT ada2
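
For context, attaching an SSD to a ZFS pool as log (ZIL) and cache devices takes only a few more commands; a minimal sketch, assuming a pool named ``osd0`` and GPT labels chosen purely for illustration, could look like::

   gpart add -t freebsd-zfs -l osd0.log -s 4G ada2
   gpart add -t freebsd-zfs -l osd0.cache ada2
   zpool add osd0 log gpt/osd0.log
   zpool add osd0 cache gpt/osd0.cache
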
2 changes: 1 addition & 1 deletion doc/man/8/ceph-dencoder.rst
@@ -68,7 +68,7 @@ Commands

.. option:: count_tests

-Print the number of built-in test instances of the previosly
+Print the number of built-in test instances of the previously
selected type that **ceph-dencoder** is able to generate.

.. option:: select_test <n>
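
A hypothetical session exercising these options (the type name ``MOSDOp``, the test index, and the output path are arbitrary choices, not taken from the man page) might look like::

   ceph-dencoder type MOSDOp count_tests
   ceph-dencoder type MOSDOp select_test 1 encode export /tmp/mosdop.bin
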
2 changes: 1 addition & 1 deletion doc/man/8/ceph-kvstore-tool.rst
@@ -15,7 +15,7 @@ Synopsis
Description
===========

-:program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipule
+:program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipulate
leveldb/rocksdb's data (like OSD's omap) offline.

Commands
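
As a rough illustration of such offline manipulation, listing the keys of a filestore OSD's omap store (the store path below is an assumption) might look like::

   ceph-kvstore-tool rocksdb /var/lib/ceph/osd/ceph-0/current/omap list
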
2 changes: 1 addition & 1 deletion doc/man/8/ceph-volume.rst
@@ -85,7 +85,7 @@ Usage::
Optional Arguments:

* [-h, --help] show the help message and exit
-* [--auto-detect-objectstore] Automatically detect the objecstore by inspecting
+* [--auto-detect-objectstore] Automatically detect the objectstore by inspecting
the OSD
* [--bluestore] bluestore objectstore (default)
* [--filestore] filestore objectstore
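
These flags appear to belong to an activate-style subcommand; a sketch of one possible invocation (assuming ``lvm activate``, with a placeholder OSD id and fsid) would be::

   ceph-volume lvm activate --auto-detect-objectstore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8
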
2 changes: 1 addition & 1 deletion doc/man/8/crushtool.rst
@@ -120,7 +120,7 @@ pools; it only runs simulations by mapping values in the range

.. option:: --show-utilization

-Displays the expected and actual utilisation for each device, for
+Displays the expected and actual utilization for each device, for
each number of replicas. For instance::

device 0: stored : 951 expected : 853.333
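
A minimal invocation that produces this kind of report (the map file name and replica count are placeholders) would be::

   crushtool -i crush.map --test --show-utilization --num-rep 3
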
4 changes: 2 additions & 2 deletions doc/rados/api/librados.rst
@@ -75,8 +75,8 @@ In the end, you will want to close your IO context and connection to RADOS with
rados_shutdown(cluster);


-Asychronous IO
-==============
+Asynchronous IO
+===============

When doing lots of IO, you often don't need to wait for one operation
to complete before starting the next one. `Librados` provides
8 changes: 4 additions & 4 deletions doc/rados/operations/health-checks.rst
@@ -286,7 +286,7 @@ DEVICE_HEALTH_IN_USE
____________________

One or more devices is expected to fail soon and has been marked "out"
-of the cluster based on ``mgr/devicehalth/mark_out_threshold``, but it
+of the cluster based on ``mgr/devicehealth/mark_out_threshold``, but it
is still participating in one more PGs. This may be because it was
only recently marked "out" and data is still migrating, or because data
cannot be migrated off for some reason (e.g., the cluster is nearly
@@ -335,7 +335,7 @@ Detailed information about which PGs are affected is available from::
ceph health detail

In most cases the root cause is that one or more OSDs is currently
-down; see the dicussion for ``OSD_DOWN`` above.
+down; see the discussion for ``OSD_DOWN`` above.

The state of specific problematic PGs can be queried with::

@@ -392,7 +392,7 @@ OSD_SCRUB_ERRORS
________________

Recent OSD scrubs have uncovered inconsistencies. This error is generally
-paired with *PG_DAMANGED* (see above).
+paired with *PG_DAMAGED* (see above).

See :doc:`pg-repair` for more information.

@@ -419,7 +419,7 @@ ___________

The number of PGs in use in the cluster is below the configurable
threshold of ``mon_pg_warn_min_per_osd`` PGs per OSD. This can lead
-to suboptimizal distribution and balance of data across the OSDs in
+to suboptimal distribution and balance of data across the OSDs in
the cluster, and similar reduce overall performance.

This may be an expected condition if data pools have not yet been
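
Several of the checks quoted above reference tunable thresholds; a hedged example of inspecting the cluster and raising ``mgr/devicehealth/mark_out_threshold`` (the value, four weeks in seconds, is arbitrary) is::

   ceph health detail
   ceph config set mgr mgr/devicehealth/mark_out_threshold 2419200
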
2 changes: 1 addition & 1 deletion doc/rados/operations/monitoring.rst
@@ -128,7 +128,7 @@ log.
Monitoring Health Checks
========================

-Ceph continously runs various *health checks* against its own status. When
+Ceph continuously runs various *health checks* against its own status. When
a health check fails, this is reflected in the output of ``ceph status`` (or
``ceph health``). In addition, messages are sent to the cluster log to
indicate when a check fails, and when the cluster recovers.
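
In day-to-day monitoring this usually comes down to a couple of commands, for example::

   ceph status
   ceph health detail
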
8 changes: 4 additions & 4 deletions doc/rados/troubleshooting/troubleshooting-mon.rst
@@ -200,7 +200,7 @@ What if the state is ``probing``?
multi-monitor cluster, the monitors will stay in this state until they
find enough monitors to form a quorum -- this means that if you have 2 out
of 3 monitors down, the one remaining monitor will stay in this state
-indefinitively until you bring one of the other monitors up.
+indefinitely until you bring one of the other monitors up.

If you have a quorum, however, the monitor should be able to find the
remaining monitors pretty fast, as long as they can be reached. If your
@@ -337,7 +337,7 @@ Can I increase the maximum tolerated clock skew?
This value is configurable via the ``mon-clock-drift-allowed`` option, and
although you *CAN* it doesn't mean you *SHOULD*. The clock skew mechanism
is in place because clock skewed monitor may not properly behave. We, as
-developers and QA afficcionados, are comfortable with the current default
+developers and QA aficionados, are comfortable with the current default
value, as it will alert the user before the monitors get out hand. Changing
this value without testing it first may cause unforeseen effects on the
stability of the monitors and overall cluster healthiness, although there is
@@ -402,7 +402,7 @@ or::
Recovery using healthy monitor(s)
---------------------------------

-If there is any survivers, we can always `replace`_ the corrupted one with a
+If there is any survivors, we can always `replace`_ the corrupted one with a
new one. And after booting up, the new joiner will sync up with a healthy
peer, and once it is fully sync'ed, it will be able to serve the clients.

@@ -527,7 +527,7 @@ You have quorum

ceph tell mon.* config set debug_mon 10/10

-No quourm
+No quorum

Use the monitor's admin socket and directly adjust the configuration
options::
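
A hedged sketch of adjusting an option through a monitor's admin socket when there is no quorum (the monitor id ``a`` is a placeholder) would be::

   ceph daemon mon.a config set debug_mon 10/10
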
4 changes: 2 additions & 2 deletions doc/rados/troubleshooting/troubleshooting-osd.rst
@@ -381,7 +381,7 @@ Currently, we recommend deploying clusters with XFS.

We recommend against using btrfs or ext4. The btrfs filesystem has
many attractive features, but bugs in the filesystem may lead to
-performance issues and suprious ENOSPC errors. We do not recommend
+performance issues and spurious ENOSPC errors. We do not recommend
ext4 because xattr size limitations break our support for long object
names (needed for RGW).

@@ -477,7 +477,7 @@ Events from the OSD after stuff has been given to local disk
- op_applied: The op has been write()'en to the backing FS (ie, applied in
memory but not flushed out to disk) on the primary
- sub_op_applied: op_applied, but for a replica's "subop"
-- sub_op_committed: op_commited, but for a replica's subop (only for EC pools)
+- sub_op_committed: op_committed, but for a replica's subop (only for EC pools)
- sub_op_commit_rec/sub_op_apply_rec from <X>: the primary marks this when it
hears about the above, but for a particular replica <X>
- commit_sent: we sent a reply back to the client (or primary OSD, for sub ops)
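
These event names show up in the per-op dumps available from the OSD admin socket, for instance (``osd.0`` is a placeholder)::

   ceph daemon osd.0 dump_ops_in_flight
   ceph daemon osd.0 dump_historic_ops
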
2 changes: 1 addition & 1 deletion doc/rados/troubleshooting/troubleshooting-pg.rst
@@ -569,7 +569,7 @@ rule needs, ``--rule`` is the value of the ``ruleset`` field
displayed by ``ceph osd crush rule dump``. The test will try mapping
one million values (i.e. the range defined by ``[--min-x,--max-x]``)
and must display at least one bad mapping. If it outputs nothing it
-means all mappings are successfull and you can stop right there: the
+means all mappings are successful and you can stop right there: the
problem is elsewhere.

The CRUSH rule can be edited by decompiling the crush map::
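
A sketch of the whole decompile/edit/recompile/test round trip (file names and the rule number are placeholders) might be::

   ceph osd getcrushmap -o crush.map
   crushtool -d crush.map -o crush.txt
   # edit crush.txt, then recompile and re-run the mapping test
   crushtool -c crush.txt -o crush.new
   crushtool -i crush.new --test --rule 0 --num-rep 3 --show-bad-mappings --min-x 1 --max-x 1000000
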
4 changes: 2 additions & 2 deletions doc/start/documenting-ceph.rst
@@ -10,7 +10,7 @@ instructions will help the Ceph project immensely.

The Ceph documentation source resides in the ``ceph/doc`` directory of the Ceph
repository, and Python Sphinx renders the source into HTML and manpages. The
-http://ceph.com/docs link currenly displays the ``master`` branch by default,
+http://ceph.com/docs link currently displays the ``master`` branch by default,
but you may view documentation for older branches (e.g., ``argonaut``) or future
branches (e.g., ``next``) as well as work-in-progress branches by substituting
``master`` with the branch name you prefer.
@@ -188,7 +188,7 @@ To build the documentation on Debian/Ubuntu, Fedora, or CentOS/RHEL, execute::

admin/build-doc

-To scan for the reachablity of external links, execute::
+To scan for the reachability of external links, execute::

admin/build-doc linkcheck

6 changes: 3 additions & 3 deletions doc/start/kube-helm.rst
@@ -2,8 +2,8 @@
Installation (Kubernetes + Helm)
================================

-The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environement.
-This documentation assumes a Kubernetes environement is available.
+The ceph-helm_ project enables you to deploy Ceph in a Kubernetes environment.
+This documentation assumes a Kubernetes environment is available.

Current limitations
===================
@@ -157,7 +157,7 @@ Run the helm install command to deploy Ceph::
NAME TYPE
ceph-rbd ceph.com/rbd

-The output from helm install shows us the different types of ressources that will be deployed.
+The output from helm install shows us the different types of resources that will be deployed.

A StorageClass named ``ceph-rbd`` of type ``ceph.com/rbd`` will be created with ``ceph-rbd-provisioner`` Pods. These
will allow a RBD to be automatically provisioned upon creation of a PVC. RBDs will also be formatted when mapped for the first
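
To confirm that the StorageClass described above was created after the deploy, one could run::

   kubectl get storageclass ceph-rbd
   kubectl describe storageclass ceph-rbd
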
