Merge PR ceph#30914 into master
* refs/pull/30914/head:
	ceph: Add doc for deploying cephfs-nfs cluster using rook

Reviewed-by: Sidharth Anupkrishnan <[email protected]>
Reviewed-by: Ramana Raja <[email protected]>
Reviewed-by: Patrick Donnelly <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
batrick committed Nov 11, 2019
2 parents b1f4b2c + aca5c81 commit cadc79b
doc/cephfs/nfs.rst: 192 additions & 0 deletions
@@ -79,3 +79,195 @@ Current limitations

- Per running ganesha daemon, FSAL_CEPH can only export one Ceph file system
although multiple directories in a Ceph file system may be exported.

Exporting over NFS clusters deployed using rook
===============================================

This tutorial assumes you have a kubernetes cluster deployed. If not, `minikube
<https://kubernetes.io/docs/setup/learning-environment/minikube/>`_ can be used
to set up a single-node cluster. This tutorial uses minikube.
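
A minimal sketch of starting such a single-node cluster; the resource values
below are only illustrative, not requirements::

# Start a single-node kubernetes cluster (memory in MB, disk size as stated).
minikube start --memory=4096 --disk-size=40g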

.. note:: The configuration in this tutorial should not be used in a real
production cluster. For the sake of simplicity, the security
aspects of Ceph are overlooked in this setup.

`Rook <https://rook.io/docs/rook/master/ceph-quickstart.html>`_ Setup And Cluster Deployment
--------------------------------------------------------------------------------------------

Clone the rook repository::

git clone https://github.com/rook/rook.git

Deploy the rook operator::

cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml

.. note:: A Nautilus release or the latest Ceph image should be used.

Before proceeding, check that the pods are running::

kubectl -n rook-ceph get pod
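
One way to block until all pods in the namespace report ready (a sketch; the
timeout value is arbitrary)::

kubectl -n rook-ceph wait --for=condition=Ready pod --all --timeout=300s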


.. note::
To troubleshoot a particular pod, use::

kubectl describe -n rook-ceph pod <pod-name>

If using a minikube cluster, change **dataDirHostPath** to **/data/rook** in the
cluster-test.yaml file. This ensures that data persists across reboots.
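
One way to make that edit non-interactively; this sketch assumes the example
file's default value is /var/lib/rook::

sed -i 's|dataDirHostPath: /var/lib/rook|dataDirHostPath: /data/rook|' cluster-test.yaml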

Deploy the ceph cluster::

kubectl create -f cluster-test.yaml

To interact with the Ceph daemons, deploy the toolbox::

kubectl create -f ./toolbox.yaml

Exec into the rook-ceph-tools pod::

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

Check that you have one Ceph monitor, manager, and OSD running, and that the cluster is healthy::

[root@minikube /]# ceph -s
cluster:
id: 3a30f44c-a9ce-4c26-9f25-cc6fd23128d0
health: HEALTH_OK

services:
mon: 1 daemons, quorum a (age 14m)
mgr: a(active, since 13m)
osd: 1 osds: 1 up (since 13m), 1 in (since 13m)

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 5.0 GiB used, 11 GiB / 16 GiB avail
pgs:

.. note:: A single monitor should never be used in a real production deployment,
as it is a single point of failure.

Create a Ceph File System
-------------------------
Using the ceph-mgr volumes module, we will create a Ceph file system::

[root@minikube /]# ceph fs volume create myfs

By default, the replicated size of a pool is 3. Since we are using only one OSD, this causes errors. Fix this by setting the replicated size of both pools to 1::

[root@minikube /]# ceph osd pool set cephfs.myfs.meta size 1
[root@minikube /]# ceph osd pool set cephfs.myfs.data size 1
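
The new replicated size can be confirmed with::

[root@minikube /]# ceph osd pool ls detail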

.. note:: The replicated size should never be less than 3 in a real production deployment.

Check the cluster status again::

[root@minikube /]# ceph -s
cluster:
id: 3a30f44c-a9ce-4c26-9f25-cc6fd23128d0
health: HEALTH_OK

services:
mon: 1 daemons, quorum a (age 27m)
mgr: a(active, since 27m)
mds: myfs:1 {0=myfs-a=up:active} 1 up:standby-replay
osd: 1 osds: 1 up (since 56m), 1 in (since 56m)

data:
pools: 2 pools, 24 pgs
objects: 22 objects, 2.2 KiB
usage: 5.1 GiB used, 11 GiB / 16 GiB avail
pgs: 24 active+clean

io:
client: 639 B/s rd, 1 op/s rd, 0 op/s wr

Create a NFS-Ganesha Server Cluster
-----------------------------------
Add Storage for NFS-Ganesha Servers to prevent recovery conflicts::

[root@minikube /]# ceph osd pool create nfs-ganesha 64
pool 'nfs-ganesha' created
[root@minikube /]# ceph osd pool set nfs-ganesha size 1
[root@minikube /]# ceph orchestrator nfs add mynfs nfs-ganesha ganesha

Here we have created an NFS-Ganesha cluster called "mynfs" in the "ganesha"
namespace, backed by the "nfs-ganesha" OSD pool.

Scale out the NFS-Ganesha cluster::

[root@minikube /]# ceph orchestrator nfs update mynfs 2
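
To see the ganesha server pods from outside the toolbox, one possible check
(the label selector assumes rook's usual app=rook-ceph-nfs labelling)::

kubectl -n rook-ceph get pod -l app=rook-ceph-nfs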

Configure NFS-Ganesha Exports
-----------------------------
Initially, rook creates a ClusterIP service for the dashboard. With this service
type, only pods within the same kubernetes cluster can access it.

Expose the Ceph dashboard port::

kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-mgr-dashboard
kubectl get service -n rook-ceph rook-ceph-mgr-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard NodePort 10.108.183.148 <none> 8443:31727/TCP 117m

This changes the service type to NodePort, which makes the dashboard reachable
from outside the kubernetes cluster.
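
With the NodePort shown above, the dashboard can then be reached from the host,
for example (31727 comes from the output above and will differ per deployment)::

curl -k https://$(minikube ip):31727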

Create a JSON file describing the export for the dashboard::

$ cat ~/export.json
{
"cluster_id": "mynfs",
"path": "/",
"fsal": {"name": "CEPH", "user_id":"admin", "fs_name": "myfs", "sec_label_xattr": null},
"pseudo": "/cephfs",
"tag": null,
"access_type": "RW",
"squash": "no_root_squash",
"protocols": [4],
"transports": ["TCP"],
"security_label": true,
"daemons": ["mynfs.a", "mynfs.b"],
"clients": []
}

.. note:: Don't use this JSON file in a real production deployment, as the
ganesha servers are given client admin access rights here.

Download and run this `script
<https://raw.githubusercontent.com/ceph/ceph/master/src/pybind/mgr/dashboard/run-backend-rook-api-request.sh>`_
to pass the JSON file contents to the dashboard. The dashboard creates the
NFS-Ganesha export file based on this JSON file::

./run-backend-rook-api-request.sh POST /api/nfs-ganesha/export "$(cat <json-file-path>)"
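
For example, with the JSON file created earlier::

./run-backend-rook-api-request.sh POST /api/nfs-ganesha/export "$(cat ~/export.json)"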

Expose the NFS Servers::

kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-mynfs-a
kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-mynfs-b
kubectl get services -n rook-ceph rook-ceph-nfs-mynfs-a rook-ceph-nfs-mynfs-b
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-nfs-mynfs-a NodePort 10.101.186.111 <none> 2049:31013/TCP 72m
rook-ceph-nfs-mynfs-b NodePort 10.99.216.92 <none> 2049:31587/TCP 63m

.. note:: Ports are chosen at random by kubernetes from a certain range. A
specific port number can be set in the nodePort field of the spec.
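
If a fixed port is preferred, one way to pin it is a JSON patch; this sketch
assumes the NFS port is the first entry in the service's port list::

kubectl -n rook-ceph patch service rook-ceph-nfs-mynfs-a --type=json \
-p '[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 31013}]'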

Testing access to NFS Servers
-----------------------------
Open a root shell on the host and mount one of the NFS servers::

mkdir -p /mnt/rook
mount -t nfs -o port=31013 $(minikube ip):/cephfs /mnt/rook

Normal file operations can be performed on /mnt/rook if the mount is successful.
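
For example, a quick sanity check of the mount::

df -h /mnt/rook
touch /mnt/rook/testfile
ls -l /mnt/rook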

.. note:: If minikube is used, the VM host is the only client for the servers.
In a real kubernetes cluster, multiple hosts can be used as clients as
long as the kubernetes cluster's node IP addresses are accessible to
them.
