doc: describe CephFS max_file_size
Add a description of max_file_size to the CephFS admin docs.

Thanks to John Spray <[email protected]> on ceph-users for this
information.

Signed-off-by: Ken Dreyer <[email protected]>
ktdreyer committed May 26, 2017
1 parent 5b40557 commit 02753cd
Showing 1 changed file with 32 additions and 0 deletions.
doc/cephfs/administration.rst
@@ -42,6 +42,38 @@ creation of multiple filesystems use ``ceph fs flag set enable_multiple true``.
fs rm_data_pool <filesystem name> <pool name/id>


Settings
--------

::

    fs set <fs name> max_file_size <size in bytes>

CephFS has a configurable maximum file size, which is 1TB by default.
You may wish to raise this limit if you expect to store large files
in CephFS. The value is a 64-bit field.
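
For example, to raise the limit on a filesystem named ``cephfs`` to 4TB
(a minimal sketch; the filesystem name and target size are only
illustrative)::

    ceph fs set cephfs max_file_size 4398046511104

The current value is reported in the MDS map output of
``ceph fs get <fs name>``.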

Setting ``max_file_size`` to 0 does not disable the limit; it simply
restricts clients to creating empty files.


Maximum file sizes and performance
----------------------------------

CephFS enforces the maximum file size limit at the point of appending to
files or setting their size. It does not affect how anything is stored.
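
For example, with the default 1TB limit in place, a client attempt to
extend a file beyond that size is rejected at write or truncate time,
typically with a "File too large" (``EFBIG``) error (a minimal sketch;
the mount point ``/mnt/cephfs`` is assumed)::

    # Typically fails with "File too large" (EFBIG): 2TB exceeds the default 1TB limit
    truncate -s 2T /mnt/cephfs/bigfile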

When users create a file of an enormous size (without necessarily
writing any data to it), some operations (such as deletes) force the MDS
to perform a large number of operations in order to check whether any of
the RADOS objects that could exist within the range implied by the file
size actually exist.

The ``max_file_size`` setting prevents users from creating files that
appear to be, for example, exabytes in size, which would cause load on
the MDS as it tries to enumerate the objects during operations like
stats or deletes.


Daemons
-------
