docs: use mkdocs and restructure docs
Signed-off-by: Alexander Trost <[email protected]>
galexrt committed May 18, 2022
1 parent bb58123 commit 7443144
Showing 93 changed files with 2,226 additions and 2,224 deletions.
31 changes: 31 additions & 0 deletions .docs/macros/includes/main.py
@@ -0,0 +1,31 @@
#!/usr/bin/python3

from pygit2 import Repository
import re

"""
GitHub branch/tag URL replacer
"""

regex = r"(github\.com/.+/rook/.+)/master/"
subst = "\\1/%s/"

def define_env(env):

    repo = Repository('.')
    if repo is not None:
        target = repo.head.shorthand

    env.variables['current_branch'] = target

def on_post_page_macros(env):
    """
    Replace the branch/tag in the rook GitHub file and directory links pointing to `master`
    with the correct one that is currently active.
    """

    target = env.variables['current_branch']
    if target == 'master':
        return

    env.raw_markdown = re.sub(regex, subst % target, env.raw_markdown, 0)
8 changes: 8 additions & 0 deletions .docs/overrides/main.html
@@ -0,0 +1,8 @@
{% extends "base.html" %}

{% block outdated %}
This document is for a development version of Rook.
<a href="{{ '../' ~ base_url }}">
<strong>Click here to go to the latest release documentation.</strong>
</a>
{% endblock %}
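The `mkdocs.yml` that wires these pieces together is part of this commit but is not shown in this excerpt. Purely as a hedged illustration of how the overrides directory, the macros module, and mike-driven versioning typically fit together (all names and values below are assumptions, not taken from the commit), a minimal configuration might look like:

```yaml
# Hypothetical sketch of a mkdocs.yml -- the real file added by this commit is not shown above.
site_name: Rook
docs_dir: Documentation
theme:
  name: material
  custom_dir: .docs/overrides        # picks up overrides/main.html and its "outdated" banner block
plugins:
  - search
  - macros:
      module_name: .docs/macros/includes/main   # loads define_env/on_post_page_macros from main.py
  - awesome-pages                    # honors the .pages navigation files added under Documentation/
extra:
  version:
    provider: mike                   # mike's version selector is what triggers the outdated banner
```

With mike-based versioning, mkdocs-material renders the `outdated` block above whenever the reader is viewing a docs version that is not flagged as the latest release.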
1 change: 1 addition & 0 deletions .github/workflows/linters.yaml
@@ -47,6 +47,7 @@ jobs:
pip install pylint
pip install pylint --upgrade
pip install requests
pip install pygit2
- name: Lint Python files
run: pylint $(git ls-files '*.py') -E
5 changes: 5 additions & 0 deletions .gitignore
@@ -10,3 +10,8 @@
# OLM related stuff
deploy/olm/deploy/*
deploy/olm/templates/*

# mkdocs + mike
site/
public/
__pycache__/
8 changes: 8 additions & 0 deletions Documentation/.pages
@@ -0,0 +1,8 @@
nav:
- Getting-Started
- Helm-Charts
- Storage-Configuration
- CRDs
- Troubleshooting
- Upgrade
- Contributing
8 changes: 8 additions & 0 deletions Documentation/CRDs/.pages
@@ -0,0 +1,8 @@
title: Custom Resources
nav:
- ceph-cluster-crd.md
- Block-Storage
- Shared-Filesystem
- Object-Storage
- ceph-client-crd.md
- ...
@@ -1,23 +1,19 @@
---
title: Block Pool CRD
weight: 2700
indent: true
---
{% include_relative branch.liquid %}

# Ceph Block Pool CRD

Rook allows creation and customization of storage pools through the custom resource definitions (CRDs). The following settings are available for pools.

## Samples
## Examples

### Replicated

For optimal performance, while also adding redundancy, this sample will configure Ceph to make three full copies of the data on multiple nodes.

> **NOTE**: This sample requires *at least 1 OSD per node*, with each OSD located on *3 different nodes*.
!!! note
This sample requires *at least 1 OSD per node*, with each OSD located on *3 different nodes*.

Each OSD must be located on a different node, because the [`failureDomain`](ceph-pool-crd.md#spec) is set to `host` and the `replicated.size` is set to `3`.
Each OSD must be located on a different node, because the [`failureDomain`](ceph-block-pool-crd.md#spec) is set to `host` and the `replicated.size` is set to `3`.

```yaml
apiVersion: ceph.rook.io/v1
@@ -33,6 +29,7 @@ spec:
```
#### Hybrid Storage Pools
Hybrid storage is a combination of two different storage tiers. For example, SSD and HDD.
This helps improve the read performance of the cluster by placing, for example, the first copy of the data on the higher-performance tier (SSD or NVMe) and the remaining replicated copies on the lower-cost tier (HDDs).
@@ -54,15 +51,18 @@ spec:
primaryDeviceClass: ssd
secondaryDeviceClass: hdd
```
> **IMPORTANT**: The device classes `primaryDeviceClass` and `secondaryDeviceClass` must have at least one OSD associated with them or else the pool creation will fail.
!!! important
The device classes `primaryDeviceClass` and `secondaryDeviceClass` must have at least one OSD associated with them or else the pool creation will fail.

### Erasure Coded

This sample will lower the overall storage capacity requirement, while also adding redundancy by using [erasure coding](#erasure-coding).

> **NOTE**: This sample requires *at least 3 bluestore OSDs*.
!!! note
This sample requires *at least 3 bluestore OSDs*.

The OSDs can be located on a single Ceph node or spread across multiple nodes, because the [`failureDomain`](ceph-pool-crd.md#spec) is set to `osd` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`).
The OSDs can be located on a single Ceph node or spread across multiple nodes, because the [`failureDomain`](ceph-block-pool-crd.md#spec) is set to `osd` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`).

```yaml
apiVersion: ceph.rook.io/v1
@@ -81,7 +81,7 @@ spec:
High performance applications typically will not use erasure coding due to the performance overhead of creating and distributing the chunks in the cluster.

When creating an erasure-coded pool, it is highly recommended to create the pool when you have **bluestore OSDs** in your cluster
(see the [OSD configuration settings](ceph-cluster-crd.md#osd-configuration-settings). Filestore OSDs have
(see the [OSD configuration settings](../ceph-cluster-crd.md#osd-configuration-settings)). Filestore OSDs have
[limitations](http://docs.ceph.com/docs/master/rados/operations/erasure-code/#erasure-coding-with-overwrites) that are unsafe and lower performance.

### Mirroring
@@ -124,10 +124,8 @@ This secret can then be fetched like so:

```console
kubectl get secret -n rook-ceph pool-peer-token-replicapool -o jsonpath='{.data.token}'|base64 -d
eyJmc2lkIjoiOTFlYWUwZGQtMDZiMS00ZDJjLTkxZjMtMTMxMWM5ZGYzODJiIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFEN1psOWZ3V1VGRHhBQWdmY0gyZi8xeUhYeGZDUTU5L1N0NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjEwLjEwMS4xOC4yMjM6MzMwMCx2MToxMC4xMDEuMTguMjIzOjY3ODldIn0=
```
>```
>eyJmc2lkIjoiOTFlYWUwZGQtMDZiMS00ZDJjLTkxZjMtMTMxMWM5ZGYzODJiIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFEN1psOWZ3V1VGRHhBQWdmY0gyZi8xeUhYeGZDUTU5L1N0NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjEwLjEwMS4xOC4yMjM6MzMwMCx2MToxMC4xMDEuMTguMjIzOjY3ODldIn0=
>```

The secret must be decoded. The result will be another base64-encoded blob that you will import into the destination cluster:

@@ -193,20 +191,21 @@ stretched) then you will have 2 replicas per datacenter where each replica ends
* `erasureCoded`: Settings for an erasure-coded pool. If specified, `replicated` settings must not be specified. See below for more details on [erasure coding](#erasure-coding).
* `dataChunks`: Number of chunks to divide the original object into
* `codingChunks`: Number of coding chunks to generate
* `failureDomain`: The failure domain across which the data will be spread. This can be set to a value of either `osd` or `host`, with `host` being the default setting. A failure domain can also be set to a different type (e.g. `rack`), if the OSDs are created on nodes with the supported [topology labels](ceph-cluster-crd.md#osd-topology). If the `failureDomain` is changed on the pool, the operator will create a new CRUSH rule and update the pool.
* `failureDomain`: The failure domain across which the data will be spread. This can be set to a value of either `osd` or `host`, with `host` being the default setting. A failure domain can also be set to a different type (e.g. `rack`), if the OSDs are created on nodes with the supported [topology labels](../ceph-cluster-crd.md#osd-topology). If the `failureDomain` is changed on the pool, the operator will create a new CRUSH rule and update the pool.
If a `replicated` pool of size `3` is configured and the `failureDomain` is set to `host`, all three copies of the replicated data will be placed on OSDs located on `3` different Ceph hosts. This case is guaranteed to tolerate a failure of two hosts without a loss of data. Similarly, a failure domain set to `osd`, can tolerate a loss of two OSD devices.

If erasure coding is used, the data and coding chunks are spread across the configured failure domain.

> **NOTE**: Neither Rook, nor Ceph, prevent the creation of a cluster where the replicated data (or Erasure Coded chunks) can be written safely. By design, Ceph will delay checking for suitable OSDs until a write request is made and this write can hang if there are not sufficient OSDs to satisfy the request.
!!! caution
Neither Rook, nor Ceph, prevent the creation of a cluster where the replicated data (or Erasure Coded chunks) cannot be written safely. By design, Ceph will delay checking for suitable OSDs until a write request is made, and this write can hang if there are not sufficient OSDs to satisfy the request.
* `deviceClass`: Sets up the CRUSH rule for the pool to distribute data only on the specified device class. If left empty or unspecified, the pool will use the cluster's default CRUSH root, which usually distributes data over all OSDs, regardless of their class.
* `crushRoot`: The root in the crush map to be used by the pool. If left empty or unspecified, the default root will be used. Creating a crush hierarchy for the OSDs currently requires the Rook toolbox to run the Ceph tools described [here](http://docs.ceph.com/docs/master/rados/operations/crush-map/#modifying-the-crush-map).
* `enableRBDStats`: Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false. For more info see the [ceph documentation](https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics).
* `name`: The name of Ceph pools is based on the `metadata.name` of the CephBlockPool CR. Some built-in Ceph pools
require names that are incompatible with K8s resource names. These special pools can be configured
by setting this `name` to override the name of the Ceph pool that is created instead of using the `metadata.name` for the pool.
Only the following pool names are supported: `device_health_metrics`, `.nfs`, and `.mgr`. See the example
[builtin mgr pool](https://github.com/rook/rook/blob/{{ branchName }}/deploy/examples/pool-builtin-mgr.yaml).
[builtin mgr pool](https://github.com/rook/rook/blob/master/deploy/examples/pool-builtin-mgr.yaml).

* `parameters`: Sets any [parameters](https://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values) listed to the given pool
* `target_size_ratio:` gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool, for more info see the [ceph documentation](https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size)
@@ -229,7 +228,9 @@
* `quotas`: Set byte and object quotas. See the [ceph documentation](https://docs.ceph.com/en/latest/rados/operations/pools/#set-pool-quotas) for more info.
* `maxSize`: quota in bytes as a string with quantity suffixes (e.g. "10Gi")
* `maxObjects`: quota in objects as an integer
> **NOTE**: A value of 0 disables the quota.

!!! note
A value of 0 disables the quota.
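
Purely as an illustration of the two quota fields just described (this YAML is not part of the commit's diff and its names and values are assumptions), a `CephBlockPool` with quotas might look like:

```yaml
# Illustrative sketch only -- names and values are assumptions, not taken from the diff.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  quotas:
    maxSize: "10Gi"      # byte quota, expressed with a quantity suffix
    maxObjects: 1000000  # object-count quota; a value of 0 would disable it
```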

### Add specific pool properties

@@ -273,4 +274,4 @@ The `failureDomain` must also be taken into account when determining the number

If you do not have a sufficient number of hosts or OSDs for unique placement, the pool can still be created, but writing to the pool will hang.

Rook currently only configures two levels in the CRUSH map. It is also possible to configure other levels such as `rack` with by adding [topology labels](ceph-cluster-crd.md#osd-topology) to the nodes.
Rook currently only configures two levels in the CRUSH map. It is also possible to configure other levels such as `rack` by adding [topology labels](../ceph-cluster-crd.md#osd-topology) to the nodes.
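As a hedged sketch of what such labels could look like on a node (the keys shown are the standard Kubernetes zone label and Rook's rack-level topology label; the node name and values are made up for illustration):

```yaml
# Illustrative node labels for CRUSH topology -- node name and values are assumptions.
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    topology.kubernetes.io/zone: zone-a   # standard Kubernetes zone label
    topology.rook.io/rack: rack-1         # Rook topology label that adds a `rack` level
```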
@@ -1,14 +1,8 @@
---
title: RADOS Namespace CRD
weight: 3610
indent: true
title: Block Pool RADOS Namespace CRD
---

{% include_relative branch.liquid %}

This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](quickstart.md)

# CephBlockPoolRadosNamespace CRD
This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](../../Getting-Started/quickstart.md)

RADOS currently uses pools both for data distribution (pools are sharded into
PGs, which map to OSDs) and as the granularity for security (capabilities can
@@ -1,11 +1,6 @@
---
title: RBD Mirror CRD
weight: 3500
indent: true
---
{% include_relative branch.liquid %}

# Ceph RBDMirror CRD

Rook allows creation and updating of rbd-mirror daemon(s) through the custom resource definitions (CRDs).
RBD images can be asynchronously mirrored between two Ceph clusters.
@@ -27,7 +22,7 @@ spec:
### Prerequisites
This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](quickstart.md)
This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](../../Getting-Started/quickstart.md)
## Settings
@@ -41,7 +36,7 @@ If any setting is unspecified, a suitable default will be used automatically.
### RBDMirror Settings

* `count`: The number of rbd mirror instances to run.
* `placement`: The rbd mirror pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, and `podAntiAffinity` similar to placement defined for daemons configured by the [cluster CRD](https://github.com/rook/rook/blob/{{ branchName }}/deploy/examples/cluster.yaml)..
* `placement`: The rbd mirror pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, and `podAntiAffinity` similar to placement defined for daemons configured by the [cluster CRD](https://github.com/rook/rook/blob/master/deploy/examples/cluster.yaml).
* `annotations`: Key value pair list of annotations to add.
* `labels`: Key value pair list of labels to add.
* `resources`: The resource requirements for the rbd mirror pods.
@@ -50,4 +45,4 @@ If any setting is unspecified, a suitable default will be used automatically.
### Configuring mirroring peers

Configure mirroring peers individually for each CephBlockPool. Refer to the
[CephBlockPool documentation](ceph-pool-crd.md#mirroring) for more detail.
[CephBlockPool documentation](ceph-block-pool-crd.md#mirroring) for more detail.
4 changes: 4 additions & 0 deletions Documentation/CRDs/Object-Storage/.pages
@@ -0,0 +1,4 @@
nav:
- ceph-object-store-crd.md
- ceph-object-store-user-crd.md
- ...
@@ -1,18 +1,14 @@
---
title: Object Multisite CRDs
weight: 2825
indent: true
---

# Ceph Object Multisite CRDs

The following CRDs enable Ceph object stores to isolate or replicate data via multisite. For more information on multisite, visit the [ceph-object-multisite](/Documentation/ceph-object-multisite.md) documentation.
The following CRDs enable Ceph object stores to isolate or replicate data via multisite. For more information on multisite, visit the [Ceph Object Multisite CRDs documentation](../../Storage-Configuration/Object-Storage-RGW/ceph-object-multisite.md).

## Ceph Object Realm CRD

Rook allows creation of a realm in a ceph cluster for object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object store realms.

### Sample
### Example

```yaml
apiVersion: ceph.rook.io/v1
@@ -42,7 +38,7 @@ spec:

Rook allows creation of zone groups in a ceph cluster for object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object store zone groups.

### Sample
### Example

```yaml
apiVersion: ceph.rook.io/v1
@@ -69,7 +65,7 @@ spec:

Rook allows creation of zones in a ceph cluster for object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object store zones.

### Sample
### Example

```yaml
apiVersion: ceph.rook.io/v1
@@ -99,7 +95,7 @@ spec:

### Pools

The pools allow all of the settings defined in the Pool CRD spec. For more details, see the [Pool CRD](ceph-pool-crd.md) settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster.
The pools allow all of the settings defined in the Pool CRD spec. For more details, see the [Pool CRD](../Block-Storage/ceph-block-pool-crd.md) settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster.
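The full zone sample for this file is collapsed above; as a rough, illustrative sketch (not copied from the diff, and assuming the zone exposes `metadataPool`/`dataPool` sections like the other object-store CRDs), pool settings that satisfy the sizes described here could look like:

```yaml
# Illustrative pool settings only -- not taken from the collapsed sample above.
metadataPool:
  failureDomain: host
  replicated:
    size: 3           # requires at least three hosts
dataPool:
  failureDomain: host
  erasureCoded:
    dataChunks: 2     # 2 data + 1 coding chunks -> at least three devices
    codingChunks: 1
```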

#### Spec
