Merge pull request ceph#1754 from nereocystis/hardware-to-glossary
doc: Include links from hardware-recommendations to glossary
Sage Weil committed May 2, 2014
2 parents c6ada53 + c879e89 commit 331869a
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions doc/start/hardware-recommendations.rst
@@ -24,8 +24,8 @@ CPU

Ceph metadata servers dynamically redistribute their load, which is CPU
intensive. So your metadata servers should have significant processing power
-(e.g., quad core or better CPUs). Ceph OSDs run the RADOS service, calculate
-data placement with CRUSH, replicate data, and maintain their own copy of the
+(e.g., quad core or better CPUs). Ceph OSDs run the :term:`RADOS` service, calculate
+data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
cluster map. Therefore, OSDs should have a reasonable amount of processing power
(e.g., dual core processors). Monitors simply maintain a master copy of the
cluster map, so they are not CPU intensive. You must also consider whether the
@@ -344,4 +344,4 @@ configurations for Ceph OSDs, and a lighter configuration for monitors.
.. _Argonaut v. Bobtail Performance Preview: http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/
.. _Bobtail Performance - I/O Scheduler Comparison: http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
.. _Mapping Pools to Different Types of OSDs: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
-.. _OS Recommendations: ../os-recommendations
+.. _OS Recommendations: ../os-recommendations
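For readers unfamiliar with the markup being introduced here: `:term:` is the Sphinx role that cross-references an entry defined in a `glossary` directive elsewhere in the documentation tree. A minimal sketch of how the two sides fit together (the definition text below is paraphrased for illustration, not copied from the Ceph glossary):

```rst
.. glossary::

   RADOS
      Reliable Autonomic Distributed Object Store, the object store
      underlying a Ceph cluster.

   CRUSH
      Controlled Replication Under Scalable Hashing, the algorithm
      Ceph uses to compute data placement.
```

With entries like these defined, any occurrence of :term:`RADOS` or :term:`CRUSH` in the rendered docs becomes a hyperlink to the corresponding glossary definition, which is the effect this commit adds to the hardware recommendations page.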
