## Overview

Legend supports a wide variety of metric types for creating panels for "components" like SQS, Promtail, Loki, EC2, and many more. Refer to [sample_input.yaml](../sample_input.yaml) to see all the supported metrics.

But if your use case requires a metric component type that is not currently supported by Legend, you can refer to the following guide to add metrics to Legend.

## Metrics library

All the metrics plotted per component are part of the metrics library, which lives within Legend at `legend/metrics_library/metrics`.
Each component has an associated metrics file in the metrics library, named in the format
`<component>_metrics.yaml`. The metrics file is a Jinja2 template that is rendered to
a YAML file.
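
For illustration only, the library might contain files such as the following (the component names are taken from the examples above; the exact file names in the repository may differ):

```
legend/metrics_library/metrics/
├── sqs_metrics.yaml
├── promtail_metrics.yaml
├── loki_metrics.yaml
└── ec2_metrics.yaml
```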

## Spec

The input file has to be written based on the data source of the metrics.
Each component is separated by a `row` panel in Grafana, and the panels within each row
consist of one or more `targets`. These are the actual graphs that are plotted in Grafana.

Legend uses [grafonnet-lib](https://github.com/grafana/grafonnet-lib/tree/v0.1.0) internally to generate the
dashboard's JSON and apply it to Grafana.

### Basic spec

```yaml
alert_config: # Configuration to add notification channels and tags to alerts
  tags: # Tags for your alerts
    key: value
references: # Reference link to the component's documentation
description: # Short description of what the component does
```
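
A filled-in sketch of the basic spec, with purely illustrative values (the tag key, reference URL, and description below are assumptions, not defaults that Legend ships with):

```yaml
alert_config:
  tags:
    team: payments                              # illustrative tag key/value
references: https://docs.example.com/my-component   # illustrative documentation link
description: Queue powering asynchronous order processing   # illustrative description
```
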
### Panels
```yaml
panels: # The metrics to be plotted
  - title: # Title of the panel
    type: # Type of the panel in Grafana. Currently, only 'graph' is supported in Legend
    description: # Describe what the panel represents
    targets:
      {% for dimension in data %} # The 'dimensions' dict from the input file is passed to the targets
      - metric: django_http_responses_total_by_status_total{job=~"{{ dimension.job }}"} # Metric to be plotted. You can use Jinja2 templating to fill in the vars passed in the input file
        legend: # Legend to be displayed in the panel. Optional.
        ref_no: 1 # Reference number used in the alert config. Do not confuse this with the ref_id that Grafana creates: 'ref_no' is internal, but when the dashboard is created Grafana assigns a ref id (from A to Z). Legend maps the ref_no to that ref_id and sets the appropriate alert rule on the metric
      - metric: # Second metric
        legend: '{{ '{{instance}}' }}'
        ref_no: 2 # Incremental ref_no, used to associate alerts with this particular metric
      {% endfor %}
    alert_config: # Alert config
      priority: # Priority of the alert. Must be one of P1-P5. This is configured as a tag in Grafana with the key:value og_priority:<priority>. This priority is automatically associated with the alert/incident in Opsgenie
      message: # Alerting message
      rule: # Alerting rule, follows the alerting rules from Grafana
        for_duration: 5m # Sample
        evaluate_every: 10s # Sample
      condition_query: # The list of condition queries, evaluated per target. Follows the same format as described in Grafana alerts
        - OR,avg,1,now,5m,gt,20 # The first condition is automatically converted to 'WHEN' when the alert is being configured in Grafana. The ref_no of the target must be filled in the third field to reference which target has to be evaluated against this rule
        - OR,avg,2,now,5m,gt,30

```
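
For context, the `data` that the `{% for dimension in data %}` loop iterates over comes from the `dimensions` the input file defines for the component. The following is a minimal, purely illustrative sketch; everything apart from the `dimensions` key is an assumption, so mirror an existing component in [sample_input.yaml](../sample_input.yaml) for the exact structure:

```yaml
# Hypothetical input snippet - key names other than 'dimensions' are illustrative
components:
  django:
    dimensions:
      - job: my-django-app   # rendered into {{ dimension.job }} in the template above
```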

### Additional Spec

```yaml
formatY1: # The format of the data on the Y1 axis; follows the Grafana standard (sample: Bps, bytes, s, percent)
labelY1: # The label to put on the Y1 graph panel (sample: bytes/sec, bytes, seconds, percent)
```
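
As a sketch only, assuming these fields are set per panel alongside the keys described in the Panels section (the panel title and values below are illustrative):

```yaml
panels:
  - title: Request latency   # illustrative panel
    type: graph
    formatY1: s              # Grafana unit for seconds
    labelY1: seconds
```
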
## Adding metrics for new components
* If you are adding metrics for a new component, please follow the spec described above, name the Jinja2 file of your metric using the convention `<component name>_metrics.j2`, and update the docs [here](docs/enabling-monitoring.md).
* The variables and the component spec have to be added to the [metrics_schema](../legend/metrics_library/metrics_schema.py), and the configuration has to be imported into the [schema](../legend/metrics_library/schema.py) - this will enable input validation.
* In [sample_input.yaml](../sample_input.yaml), add the component with a basic/sample configuration - this enables testing, ensures backward compatibility, and makes adoption easy.

