doc/cephadm/upgrade: Add doc for mds upgrade without reducing mds_mds to 1.

Signed-off-by: Dhairya Parmar <[email protected]>
dparmar18 committed Aug 9, 2022
1 parent c3fb65a commit c1ff3c7
1 changed file: doc/cephadm/upgrade.rst (25 additions, 0 deletions)
The automated upgrade process follows Ceph best practices. For example:
Starting the upgrade
====================

.. note::

   A `Staggered Upgrade`_ of the mons/mgrs may be necessary to gain
   access to this new feature.

By default, cephadm reduces ``max_mds`` to ``1`` during an upgrade. This can be
disruptive for large-scale CephFS deployments because the cluster cannot quickly
reduce the number of active MDS daemons to ``1``, and a single active MDS cannot
easily handle the load of all clients, even for a short time. Therefore, to
upgrade MDS daemons without reducing ``max_mds``, set the ``fail_fs`` option to
``true`` (the default value is ``false``) prior to initiating the upgrade:

.. prompt:: bash #

ceph config set mgr mgr/orchestrator/fail_fs true
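
To confirm that the option has taken effect before starting the upgrade, the
value can be read back with ``ceph config get`` (shown here only as a quick
sanity check; it should print ``true``):

.. prompt:: bash #

   ceph config get mgr mgr/orchestrator/fail_fs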

With this option set, the upgrade will:

#. Fail the CephFS file systems, bringing the active MDS daemon(s) to the
   ``up:standby`` state.

#. Upgrade the MDS daemons safely.

#. Bring the CephFS file systems back up, restoring the active MDS
   daemon(s) from ``up:standby`` to ``up:active``.
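
The steps above can be observed while the upgrade runs. For example, the
overall upgrade progress and the current MDS states can be watched with the
standard status commands (a sketch; the exact output depends on your cluster):

.. prompt:: bash #

   ceph orch upgrade status
   ceph fs status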

Before you use cephadm to upgrade Ceph, verify that all hosts are currently online and that your cluster is healthy by running the following command:

.. prompt:: bash #
