Merge pull request ceph#46919 from jsoref/spelling-docs
doc: Fix many spelling errors
anthonyeleven authored Jul 4, 2022
2 parents 3dbf673 + 8abce15 commit cf1415a
Showing 74 changed files with 139 additions and 113 deletions.
2 changes: 1 addition & 1 deletion doc/_themes/ceph/layout.html

@@ -55,7 +55,7 @@
 <script src="{{ pathto('_static/js/html5shiv.min.js', 1) }}"></script>
 <![endif]-->
 {%- if not embedded %}
-{# XXX Sphinx 1.8.0 made this an external js-file, quick fix until we refactor the template to inherert more blocks directly from sphinx #}
+{# XXX Sphinx 1.8.0 made this an external js-file, quick fix until we refactor the template to inherit more blocks directly from sphinx #}
 {% if sphinx_version >= "1.8.0" %}
 <script type="text/javascript" id="documentation_options" data-url_root="{{ url_root }}" src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
 {%- for scriptfile in script_files %}
2 changes: 1 addition & 1 deletion doc/architecture.rst

@@ -603,7 +603,7 @@ name the Ceph OSD Daemons specifically (e.g., ``osd.0``, ``osd.1``, etc.), but
 rather refer to them as *Primary*, *Secondary*, and so forth. By convention,
 the *Primary* is the first OSD in the *Acting Set*, and is responsible for
 coordinating the peering process for each placement group where it acts as
-the *Primary*, and is the **ONLY** OSD that that will accept client-initiated
+the *Primary*, and is the **ONLY** OSD that will accept client-initiated
 writes to objects for a given placement group where it acts as the *Primary*.

 When a series of OSDs are responsible for a placement group, that series of
2 changes: 1 addition & 1 deletion doc/ceph-volume/lvm/batch.rst

@@ -131,7 +131,7 @@ If one requires a different sizing policy for wal, db or journal devices,

 Implicit sizing
 ---------------
-Scenarios in which either devices are under-comitted or not all data devices are
+Scenarios in which either devices are under-committed or not all data devices are
 currently ready for use (due to a broken disk for example), one can still rely
 on `ceph-volume` automatic sizing.
 Users can provide hints to `ceph-volume` as to how many data devices should have
2 changes: 1 addition & 1 deletion doc/cephadm/operations.rst

@@ -348,7 +348,7 @@ CEPHADM_CHECK_KERNEL_LSM
 Each host within the cluster is expected to operate within the same Linux
 Security Module (LSM) state. For example, if the majority of the hosts are
 running with SELINUX in enforcing mode, any host not running in this mode is
-flagged as an anomaly and a healtcheck (WARNING) state raised.
+flagged as an anomaly and a healthcheck (WARNING) state raised.

 CEPHADM_CHECK_SUBSCRIPTION
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
2 changes: 1 addition & 1 deletion doc/cephadm/services/monitoring.rst

@@ -65,7 +65,7 @@ steps below:
 ceph orch apply alertmanager

 #. Deploy Prometheus. A single Prometheus instance is sufficient, but
-   for high availablility (HA) you might want to deploy two:
+   for high availability (HA) you might want to deploy two:

    .. prompt:: bash #

2 changes: 1 addition & 1 deletion doc/cephadm/services/osd.rst

@@ -565,7 +565,7 @@ To include disks equal to or greater than 40G in size:
 Sizes don't have to be specified exclusively in Gigabytes(G).

-Other units of size are supported: Megabyte(M), Gigabyte(G) and Terrabyte(T).
+Other units of size are supported: Megabyte(M), Gigabyte(G) and Terabyte(T).
 Appending the (B) for byte is also supported: ``MB``, ``GB``, ``TB``.


2 changes: 1 addition & 1 deletion doc/cephfs/mantle.rst

@@ -224,7 +224,7 @@ in the MDS Map. The balancer pulls the Lua code from RADOS synchronously. We do
 this with a timeout: if the asynchronous read does not come back within half
 the balancing tick interval the operation is cancelled and a Connection Timeout
 error is returned. By default, the balancing tick interval is 10 seconds, so
-Mantle will use a 5 second second timeout. This design allows Mantle to
+Mantle will use a 5 second timeout. This design allows Mantle to
 immediately return an error if anything RADOS-related goes wrong.

 We use this implementation because we do not want to do a blocking OSD read
2 changes: 1 addition & 1 deletion doc/dev/blkin.rst

@@ -55,7 +55,7 @@ Create tracing session, enable tracepoints and start trace::
 lttng enable-event --userspace osd:*
 lttng start

-Perform some ceph operatin::
+Perform some Ceph operation::

 rados bench -p ec 5 write

8 changes: 4 additions & 4 deletions doc/dev/ceph_krb_auth.rst

@@ -86,7 +86,7 @@ Authorization
 Auditing
 Auditing takes the results from both *authentication and authorization* and
-records them into an audit log. The audit log records records all actions
+records them into an audit log. The audit log records all actions
 taking by/during the authentication and authorization for later review by
 the administrators. While authentication and authorization are preventive
 systems (in which unauthorized access is prevented), auditing is a reactive

@@ -584,8 +584,8 @@ In order to configure connections (from Ceph nodes) to the KDC:

 Given that the *keytab client file* is/should already be copied and available at the
-Kerberos client (Ceph cluster node), we should be able to athenticate using it before
-going forward: ::
+Kerberos client (Ceph cluster node), we should be able to authenticate using it before
+continuing: ::

 # kdestroy -A && kinit -k -t /etc/gss_client_mon1.ktab -f 'ceph/[email protected]' && klist -f
 Ticket cache: KEYRING:persistent:0:0

@@ -1030,7 +1030,7 @@ In order to get a new MIT KDC Server running:

 6. Name Resolution
-   As mentioned earlier, Kerberos *relies heavly on name resolution*. Most of
+   As mentioned earlier, Kerberos *relies heavily on name resolution*. Most of
    the Kerberos issues are usually related to name resolution, since Kerberos
    is *very picky* on both *systems names* and *host lookups*.

8 changes: 4 additions & 4 deletions doc/dev/cephadm/developing-cephadm.rst

@@ -127,7 +127,7 @@ main advantages:
 an almost "real" environment.
 - Safe and isolated. Does not depend of the things you have installed in
 your machine. And the vms are isolated from your environment.
-- Easy to work "dev" environment. For "not compilated" software pieces,
+- Easy to work "dev" environment. For "not compiled" software pieces,
 for example any mgr module. It is an environment that allow you to test your
 changes interactively.

@@ -137,7 +137,7 @@ Complete documentation in `kcli installation <https://kcli.readthedocs.io/en/lat
 but we suggest to use the container image approach.

 So things to do:
-- 1. Review `requeriments <https://kcli.readthedocs.io/en/latest/#libvirt-hypervisor-requisites>`_
+- 1. Review `requirements <https://kcli.readthedocs.io/en/latest/#libvirt-hypervisor-requisites>`_
 and install/configure whatever is needed to meet them.
 - 2. get the kcli image and create one alias for executing the kcli command
 ::

@@ -282,8 +282,8 @@ of the cluster.
 create loopback devices capable of holding osds.
 .. note:: Each osd will require 5GiB of space.

-After bootstraping the cluster you can go inside the seed box in which you'll be
-able to run cehpadm commands::
+After bootstrapping the cluster you can go inside the seed box in which you'll be
+able to run cephadm commands::

 box -v cluster sh
 [root@8d52a7860245] cephadm --help
2 changes: 1 addition & 1 deletion doc/dev/cephadm/host-maintenance.rst

@@ -58,7 +58,7 @@ The list below shows some of these additional daemons.

 By using the --check option first, the Admin can choose whether to proceed. This
 workflow is obviously optional for the CLI user, but could be integrated into the
-UI workflow to help less experienced Administators manage the cluster.
+UI workflow to help less experienced administrators manage the cluster.

 By adopting this two-phase approach, a UI based workflow would look something
 like this.
2 changes: 1 addition & 1 deletion doc/dev/cephx_protocol.rst

@@ -102,7 +102,7 @@ we'll assume that we are in that state.
 The message C sends to A in phase I is build in ``CephxClientHandler::build_request()`` (in
 ``auth/cephx/CephxClientHandler.cc``). This routine is used for more than one purpose.
 In this case, we first call ``validate_tickets()`` (from routine
-``CephXTicektManager::validate_tickets()`` which lives in ``auth/cephx/CephxProtocol.h``).
+``CephXTicketManager::validate_tickets()`` which lives in ``auth/cephx/CephxProtocol.h``).
 This code runs through the list of possible tickets to determine what we need, setting values
 in the ``need`` flag as necessary. Then we call ``ticket.get_handler()``. This routine
 (in ``CephxProtocol.h``) finds a ticket of the specified type (a ticket to perform
8 changes: 4 additions & 4 deletions doc/dev/continuous-integration.rst

@@ -211,7 +211,7 @@ Uploading Dependencies

 To ensure that prebuilt packages are available by the jenkins agents, we need to
 upload them to either ``apt-mirror.front.sepia.ceph.com`` or `chacra`_. To upload
-packages to the former would require the help our our lab administrator, so if we
+packages to the former would require the help of our lab administrator, so if we
 want to maintain the package repositories on regular basis, a better choice would be
 to manage them using `chacractl`_. `chacra`_ represents packages repositories using
 a resource hierarchy, like::

@@ -230,9 +230,9 @@ branch
 ref
 a unique id of a given version of a set packages. This id is used to reference
 the set packages under the ``<project>/<branch>``. It is a good practice to
-version the packaging recipes, like the ``debian`` directory for building deb
-packages and the ``spec`` for building rpm packages, and use the sha1 of the
-packaging receipe for the ``ref``. But you could also use a random string for
+version the packaging recipes, like the ``debian`` directory for building DEB
+packages and the ``spec`` for building RPM packages, and use the SHA1 of the
+packaging recipe for the ``ref``. But you could also use a random string for
 ``ref``, like the tag name of the built source tree.

 distro
2 changes: 1 addition & 1 deletion doc/dev/crimson/error-handling.rst

@@ -143,7 +143,7 @@ signature::
 std::cout << "oops, the optimistic path generates a new error!";
 return crimson::ct_error::input_output_error::make();
 },
-// we have a special handler to delegate the handling up. For conveience,
+// we have a special handler to delegate the handling up. For convenience,
 // the same behaviour is available as single argument-taking variant of
 // `safe_then()`.
 ertr::pass_further{});
2 changes: 1 addition & 1 deletion doc/dev/crimson/osd.rst

@@ -22,7 +22,7 @@ osd
 .. describe:: waiting_for_healthy

 If an OSD daemon is able to connected to its heartbeat peers, and its own
-internal hearbeat does not fail, it is considered healthy. Otherwise, it
+internal heartbeat does not fail, it is considered healthy. Otherwise, it
 puts itself in the state of `waiting_for_healthy`, and check its own
 reachability and internal heartbeat periodically.

4 changes: 2 additions & 2 deletions doc/dev/crimson/poseidonstore.rst

@@ -83,7 +83,7 @@ Towards an object store highly optimized for CPU consumption, three design choic
 * **PoseidonStore uses hybrid update strategies for different data size, similar to BlueStore.**

 As we discussed, both in-place and out-of-place update strategies have their pros and cons.
-Since CPU is only bottlenecked under small I/O workloads, we chose update-in-place for small I/Os to mininize CPU consumption
+Since CPU is only bottlenecked under small I/O workloads, we chose update-in-place for small I/Os to minimize CPU consumption
 while choosing update-out-of-place for large I/O to avoid double write. Double write for small data may be better than host-GC overhead
 in terms of CPU consumption in the long run. Although it leaves GC entirely up to SSDs,

@@ -230,7 +230,7 @@ Crash consistency
 #. Crash occurs right after writing Data blocks

 - Data partition --> | Data blocks |
-- We don't need to care this case. Data is not alloacted yet in reality. The blocks will be reused.
+- We don't need to care this case. Data is not allocated yet. The blocks will be reused.
 #. Crash occurs right after WAL

 - Data partition --> | Data blocks |
4 changes: 2 additions & 2 deletions doc/dev/deduplication.rst

@@ -94,8 +94,8 @@ Regarding how to use, please see ``osd_internals/manifest.rst``
 Usage Patterns
 ==============

-The different Ceph interface layers present potentially different oportunities
-and costs for deduplication and tiering in general.
+Each Ceph interface layer presents unique opportunities and costs for
+deduplication and tiering in general.

 RadosGW
 -------
File renamed without changes.
10 changes: 5 additions & 5 deletions doc/dev/developer_guide/basic-workflow.rst

@@ -43,7 +43,7 @@ no tracker issue exists, create one. There is only one case in which you do not
 have to create a Redmine tracker issue: the case of minor documentation changes.

 Simple documentation cleanup does not require a corresponding tracker issue.
-Major documenatation changes do require a tracker issue. Major documentation
+Major documentation changes do require a tracker issue. Major documentation
 changes include adding new documentation chapters or files, and making
 substantial changes to the structure or content of the documentation.

@@ -220,7 +220,7 @@ upstream repository.

 The second command (git checkout -b fix_1) creates a "bugfix branch" called
 "fix_1" in your local working copy of the repository. The changes that you make
-in order to fix the bug will be commited to this branch.
+in order to fix the bug will be committed to this branch.

 The third command (git push -u origin fix_1) pushes the bugfix branch from
 your local working repository to your fork of the upstream repository.

@@ -479,13 +479,13 @@ This consists of two parts:

 Using a browser extension to auto-fill the merge message
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-If you use a browser for merging Github PRs, the easiest way to fill in
-the merge message is with the `"Ceph Github Helper Extension"
+If you use a browser for merging GitHub PRs, the easiest way to fill in
+the merge message is with the `"Ceph GitHub Helper Extension"
 <https://github.com/tspmelo/ceph-github-helper>`_ (available for `Chrome
 <https://chrome.google.com/webstore/detail/ceph-github-helper/ikpfebikkeabmdnccbimlomheocpgkmn>`_
 and `Firefox <https://addons.mozilla.org/en-US/firefox/addon/ceph-github-helper/>`_).

-After enabling this extension, if you go to a Github PR page, a vertical helper
+After enabling this extension, if you go to a GitHub PR page, a vertical helper
 will be displayed at the top-right corner. If you click on the user silhouette button
 the merge message input will be automatically populated.

10 changes: 5 additions & 5 deletions doc/dev/developer_guide/dash-devel.rst

@@ -31,7 +31,7 @@ introduced in this chapter are based on a so called ``vstart`` environment.

 .. note::

-  Every ``vstart`` environment needs Ceph `to be compiled`_ from its Github
+  Every ``vstart`` environment needs Ceph `to be compiled`_ from its GitHub
   repository, though Docker environments simplify that step by providing a
   shell script that contains those instructions.

@@ -54,7 +54,7 @@ You can read more about vstart in `Deploying a development cluster`_.
 Additional information for developers can also be found in the `Developer
 Guide`_.

-.. _Deploying a development cluster: https://docs.ceph.com/docs/master/dev/dev_cluster_deployement/
+.. _Deploying a development cluster: https://docs.ceph.com/docs/master/dev/dev_cluster_deployment/
 .. _Developer Guide: https://docs.ceph.com/docs/master/dev/quick_guide/

 Host-based vs Docker-based Development Environments

@@ -96,7 +96,7 @@ based on vstart. Those are:

 `ceph-dev`_ is an exception to this rule as one of the options it provides
 is `build-free`_. This is accomplished through a Ceph installation using
-RPM system packages. You will still be able to work with a local Github
+RPM system packages. You will still be able to work with a local GitHub
 repository like you are used to.

@@ -1781,7 +1781,7 @@ To specify the grafana dashboard properties such as title, uid etc we can create

 local dashboardSchema(title, uid, time_from, refresh, schemaVersion, tags,timezone, timepicker)

-To add a graph panel we can spcify the graph schema in a local function such as -
+To add a graph panel we can specify the graph schema in a local function such as -

 ::

@@ -2340,7 +2340,7 @@ If that checker failed, it means that the current Pull Request is modifying the
 Ceph API and therefore:

 #. The versioned OpenAPI specification should be updated explicitly: ``tox -e openapi-fix``.
-#. The team @ceph/api will be requested for reviews (this is automated via Github CODEOWNERS), in order to asses the impact of changes.
+#. The team @ceph/api will be requested for reviews (this is automated via GitHub CODEOWNERS), in order to asses the impact of changes.

 Additionally, Sphinx documentation can be generated from the OpenAPI
 specification with ``tox -e openapi-doc``.
2 changes: 1 addition & 1 deletion doc/dev/developer_guide/debugging-gdb.rst

@@ -24,7 +24,7 @@ Attaching gdb to the process::

 .. note::
    It is recommended to compile without any optimizations (``-O0`` gcc flag)
-   in order to avoid elimintaion of intermediate values.
+   in order to avoid elimination of intermediate values.

 Stopping for breakpoints while debugging may cause timeouts, so the following
 configuration options are suggested::
2 changes: 1 addition & 1 deletion doc/dev/developer_guide/running-tests-locally.rst

@@ -93,7 +93,7 @@ vstart_runner.py can take the following options -
 --interactive drops a Python shell when a test fails
 --log-ps-output logs ps output; might be useful while debugging
 --teardown tears Ceph cluster down after test(s) has finished
-              runnng
+              running
 --kclient use the kernel cephfs client instead of FUSE
 --brxnet=<net/mask> specify a new net/mask for the mount clients' network
               namespace container (Default: 192.168.0.0/16)
@@ -9,7 +9,7 @@ Test Run`_.
 Viewing Test Results
 --------------------

-When a teuthology run has been completed successfully, use `pulpito`_ dasboard
+When a teuthology run has been completed successfully, use `pulpito`_ dashboard
 to view the results::

 http://pulpito.front.sepia.ceph.com/<job-name>/<job-id>/
@@ -144,7 +144,7 @@ teuthology-describe
 documentation and better understanding of integration tests.

 Tests can be documented by embedding ``meta:`` annotations in the yaml files
-used to define the tests. The results can be seen in the `teuthology-desribe
+used to define the tests. The results can be seen in the `teuthology-describe
 usecases`_

 Since this is a new feature, many yaml files have yet to be annotated.

@@ -581,5 +581,5 @@ test will be first.
 .. _Sepia Lab: https://wiki.sepia.ceph.com/doku.php
 .. _teuthology repository: https://github.com/ceph/teuthology
 .. _teuthology framework: https://github.com/ceph/teuthology
-.. _teuthology-desribe usecases: https://gist.github.com/jdurgin/09711d5923b583f60afc
+.. _teuthology-describe usecases: https://gist.github.com/jdurgin/09711d5923b583f60afc
 .. _ceph-deploy man page: ../../../../man/8/ceph-deploy
@@ -16,7 +16,7 @@ Ceph binaries must be built for your branch before you can use teuthology to run

 #. To ensure that the build process has been initiated, confirm that the branch
    name has appeared in the list of "Latest Builds Available" at `Shaman`_.
-   Soon after you start the build process, the testing infrastructrure adds
+   Soon after you start the build process, the testing infrastructure adds
    other, similarly-named builds to the list of "Latest Builds Available".
    The names of these new builds will contain the names of various Linux
    distributions of Linux and will be used to test your build against those

@@ -110,7 +110,7 @@ run), and ``--subset`` (used to reduce the number of tests that are triggered).

 .. _teuthology_testing_qa_changes:

-Testing QA changes (without re-building binaires)
+Testing QA changes (without re-building binaries)
 *************************************************

 If you are making changes only in the ``qa/`` directory, you do not have to

@@ -273,8 +273,8 @@ a branch named ``feature-x`` should be named ``wip-$yourname-feature-x``, where
 ``$yourname`` is replaced with your name. Identifying your branch with your
 name makes your branch easily findable on Shaman and Pulpito.

-If you are using one of the stable branches (for example, nautilis, mimic,
-etc.), include the name of that stable branch in your ceph-ci branch name.
+If you are using one of the stable branches (`quincy`, `pacific`, etc.), include
+the name of that stable branch in your ceph-ci branch name.
 For example, the ``feature-x`` PR branch should be named
 ``wip-feature-x-nautilus``. *This is not just a convention. This ensures that your branch is built in the correct environment.*

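Fixes like the ones in this commit are mechanical enough to detect automatically. As an illustration (not part of the commit itself), here is a minimal Python sketch that flags a handful of the misspellings corrected above; the ``TYPOS`` table and ``find_typos`` helper are hypothetical, and in practice a dedicated tool such as ``codespell`` is the better choice:

```python
import re

# A few misspelling -> correction pairs taken from this commit's diff.
TYPOS = {
    "inherert": "inherit",
    "comitted": "committed",
    "healtcheck": "healthcheck",
    "availablility": "availability",
    "Terrabyte": "Terabyte",
    "hearbeat": "heartbeat",
}

def find_typos(text: str) -> list:
    """Return (misspelling, suggestion) pairs found in *text*."""
    hits = []
    for wrong, right in TYPOS.items():
        # Match whole words only, so e.g. "heartbeat" is not flagged.
        if re.search(r"\b" + re.escape(wrong) + r"\b", text):
            hits.append((wrong, right))
    return hits
```

Running such a check in CI is what keeps a 74-file spelling cleanup like this one from recurring.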