mgr/influx: Use Queue to store points which need to be written
This allows us to multiplex the data being sent to Influx, as we
have a configurable number of workers sending data to Influx.

The main bottleneck for the performance seems to be fetching all
the perf counters using this code:

    self.get_all_perf_counters()

On a larger cluster, for example one with 2000 OSDs, this can take
about 20s, whereas flushing to Influx only takes 5s.

A 2000 OSD cluster generates about 100k data points on every run;
prior to using a Queue these would all be sent to Influx in series,
which took over 15 seconds to complete.

Python Six is used in the code to make sure it is compatible with
both Python 2 and 3.
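
The queue-and-workers approach described above can be sketched roughly
as follows. This is a simplified illustration, not the actual module
code: `send_points`, `worker`, and the `write_batch` callback are
hypothetical names, and it targets Python 3 only (the commit notes the
real code uses Six for Python 2/3 compatibility).

```python
import threading
from queue import Queue

BATCH_SIZE = 5000  # matches the documented default batch_size


def worker(q, write_batch):
    """Drain batches from the queue and write each one to InfluxDB."""
    while True:
        batch = q.get()
        if batch is None:  # sentinel: time to shut down
            q.task_done()
            break
        write_batch(batch)
        q.task_done()


def send_points(points, write_batch, num_threads=5):
    """Split points into batches and fan them out over worker threads."""
    q = Queue()
    threads = [threading.Thread(target=worker, args=(q, write_batch))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    # Enqueue the points in fixed-size batches.
    for i in range(0, len(points), BATCH_SIZE):
        q.put(points[i:i + BATCH_SIZE])
    # One sentinel per worker so every thread eventually exits.
    for _ in threads:
        q.put(None)
    q.join()
    for t in threads:
        t.join()
```

With this shape, the slow serial write is replaced by several workers
writing batches concurrently, which is where the speedup over the old
one-point-at-a-time path comes from.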

Signed-off-by: Wido den Hollander <[email protected]>
wido committed Aug 21, 2018
1 parent 19f2525 commit 25e7d31
Showing 2 changed files with 182 additions and 76 deletions.
4 changes: 3 additions & 1 deletion doc/mgr/influx.rst
@@ -50,11 +50,13 @@ For example, a typical configuration might look like this:
Additional optional configuration settings are:

-:interval: Time between reports to InfluxDB. Default 5 seconds.
+:interval: Time between reports to InfluxDB. Default 30 seconds.
 :database: InfluxDB database name. Default "ceph". You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.
 :port: InfluxDB server port. Default 8086
 :ssl: Use https connection for InfluxDB server. Use "true" or "false". Default false
 :verify_ssl: Verify https cert for InfluxDB server. Use "true" or "false". Default true
+:threads: How many worker threads should be spawned for sending data to InfluxDB. Default is 5
+:batch_size: How big batches of data points should be when sending to InfluxDB. Default is 5000

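The two new options added by this commit can be tuned like any other
mgr module option. A hypothetical example (the values shown are
illustrative, and the exact command depends on the Ceph release; older
releases use `ceph config-key set` instead):

```shell
# Spawn 10 sender threads and send points in batches of 10000
ceph config set mgr mgr/influx/threads 10
ceph config set mgr mgr/influx/batch_size 10000
```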
---------
Debugging
