Ceph Filesystem
===============

The Ceph Filesystem (CephFS) is a POSIX-compliant filesystem that uses a Ceph Storage Cluster to store its data. CephFS uses the same Ceph Storage Cluster as Ceph Block Devices, Ceph Object Storage with its S3 and Swift APIs, and native bindings (librados).

.. note::

   If you are evaluating CephFS for the first time, please review the best practices for deployment: :doc:`/cephfs/best-practices`

.. ditaa::
            +-----------------------+  +------------------------+
            |                       |  |      CephFS FUSE       |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |  CephFS Kernel Object |  |     CephFS Library     |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |                       |  |        librados        |
            +-----------------------+  +------------------------+

            +---------------+ +---------------+ +---------------+
            |      OSDs     | |      MDSs     | |    Monitors   |
            +---------------+ +---------------+ +---------------+

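Because all of these interfaces share one Ceph Storage Cluster, the pools backing them are visible from a single admin node. The following is only a quick illustration and assumes an already-deployed cluster with a CephFS file system::

    ceph osd lspools   # every pool in the cluster, regardless of which client uses it
    ceph fs ls         # the metadata and data pools backing each CephFS file system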

Using CephFS
------------

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in your Ceph Storage Cluster.

.. raw:: html

   <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>

Step 1: Metadata Server
~~~~~~~~~~~~~~~~~~~~~~~

To run the Ceph Filesystem, you need a running Ceph Storage Cluster with at least one :term:`Ceph Metadata Server`.

.. toctree::
        :maxdepth: 1

        Provision/Add/Remove MDS(s) <add-remove-mds>
        MDS failover and standby configuration <standby>
        MDS Configuration Settings <mds-config-ref>
        Client Configuration Settings <client-config-ref>
        Journaler Configuration <journaler>
        Manpage ceph-mds <../../man/8/ceph-mds>
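
One possible sketch of this step, assuming a cluster administered with ``ceph-deploy`` and a placeholder MDS host named ``node1``::

    # provision an MDS daemon on the chosen host
    ceph-deploy mds create node1

    # confirm the new daemon is visible; it remains in standby until a
    # file system exists for it to serve
    ceph mds stat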

Step 2: Mount CephFS
~~~~~~~~~~~~~~~~~~~~

Once you have a healthy Ceph Storage Cluster with at least one Ceph Metadata Server, you may create and mount your Ceph Filesystem. Ensure that your client has network connectivity and the proper authentication keyring.

.. toctree::
        :maxdepth: 1

        Create a CephFS file system <createfs>
        Mount CephFS with the Kernel Driver <kernel>
        Mount CephFS as FUSE <fuse>
        Mount CephFS in fstab <fstab>
        Use the CephFS Shell <cephfs-shell>
        Supported Features of Kernel Driver <kernel-features>
        Manpage ceph-fuse <../../man/8/ceph-fuse>
        Manpage mount.ceph <../../man/8/mount.ceph>
        Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>
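
As a rough end-to-end sketch (the pool names, placement-group counts, monitor host ``mon1``, and mount points below are placeholders; see the pages above for details and authentication options)::

    # create the data and metadata pools, then the file system itself
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 64
    ceph fs new cephfs cephfs_metadata cephfs_data

    # mount with the kernel driver ...
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # ... or with the FUSE client
    sudo ceph-fuse -m mon1:6789 /mnt/cephfs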


Additional Details
------------------

.. toctree::
    :maxdepth: 1

    Deployment best practices <best-practices>
    MDS States <mds-states>
    Administrative commands <administration>
    Understanding MDS Cache Size Limits <cache-size-limits>
    POSIX compatibility <posix>
    Experimental Features <experimental-features>
    CephFS Quotas <quota>
    Using Ceph with Hadoop <hadoop>
    cephfs-journal-tool <cephfs-journal-tool>
    File layouts <file-layouts>
    Client eviction <eviction>
    Handling full filesystems <full>
    Health messages <health-messages>
    Troubleshooting <troubleshooting>
    Disaster recovery <disaster-recovery>
    Client authentication <client-auth>
    Upgrading old filesystems <upgrading>
    Configuring directory fragmentation <dirfrags>
    Configuring multiple active MDS daemons <multimds>
    Export over NFS <nfs>
    Application best practices <app-best-practices>
    Scrub <scrub>
    LazyIO <lazyio>

.. toctree::
    :hidden:

    Advanced: Metadata repair <disaster-recovery-experts>

For developers
--------------

.. toctree::
    :maxdepth: 1

    Client's Capabilities <capabilities>
    libcephfs <../../api/libcephfs-java/>
    Mantle <mantle>