From c879e895da494b14bd03d45131704dccda518d76 Mon Sep 17 00:00:00 2001
From: Kevin Dalley
Date: Thu, 1 May 2014 17:04:43 -0700
Subject: [PATCH] doc: Include links from hardware-recommendations to glossary

Included :term: in parts of hardware-recommendations so that glossary
links appear.

Signed-off-by: Kevin Dalley
---
 doc/start/hardware-recommendations.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 58d2f437d1d63..ffbc37a58900f 100644
--- a/doc/start/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
@@ -24,8 +24,8 @@ CPU
 
 Ceph metadata servers dynamically redistribute their load, which is CPU
 intensive. So your metadata servers should have significant processing power
-(e.g., quad core or better CPUs). Ceph OSDs run the RADOS service, calculate
-data placement with CRUSH, replicate data, and maintain their own copy of the
+(e.g., quad core or better CPUs). Ceph OSDs run the :term:`RADOS` service, calculate
+data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
 cluster map. Therefore, OSDs should have a reasonable amount of processing
 power (e.g., dual core processors). Monitors simply maintain a master copy of the
 cluster map, so they are not CPU intensive. You must also consider whether the
@@ -344,4 +344,4 @@ configurations for Ceph OSDs, and a lighter configuration for monitors.
 .. _Argonaut v. Bobtail Performance Preview: http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/
 .. _Bobtail Performance - I/O Scheduler Comparison: http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/
 .. _Mapping Pools to Different Types of OSDs: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
-.. _OS Recommendations: ../os-recommendations
\ No newline at end of file
+.. _OS Recommendations: ../os-recommendations