Remove the use of metric.reporters in OAuth metrics, use `strimzi.metric.reporters` instead (strimzi#193)

* Remove the use of `metric.reporters` in OAuth metrics, use `strimzi.metric.reporters` instead.
JmxReporter has to be listed explicitly to be instantiated.

Signed-off-by: Marko Strukelj <[email protected]>

* Fix Matcher.match() to Matcher.find()

Signed-off-by: Marko Strukelj <[email protected]>

* Use `strimzi.oauth.metric.reporters` as a config option name

Signed-off-by: Marko Strukelj <[email protected]>

* Fix unused reporters ArrayList.

Signed-off-by: Marko Strukelj <[email protected]>

* Addressed PR comments, implemented using JmxReporter by default

Signed-off-by: Marko Strukelj <[email protected]>

* Shorten the Travis CI build by removing kafka-3.2.3 testsuite run from it. Address README comments.

Signed-off-by: Marko Strukelj <[email protected]>

---------

Signed-off-by: Marko Strukelj <[email protected]>
mstruk authored Jul 7, 2023
1 parent fc97c59 commit a0c4c34
Showing 22 changed files with 491 additions and 184 deletions.
11 changes: 6 additions & 5 deletions .travis/build.sh
@@ -74,13 +74,14 @@ elif [[ "$arch" != 'ppc64le' ]]; then
EXIT=$?
exitIfError

clearDockerEnv
mvn -e -V -B clean install -f testsuite -Pkafka-3_2_3 -DfailIfNoTests=false -Dtest=\!KeycloakKRaftAuthorizationTests
EXIT=$?
exitIfError

# Excluded by default to not exceed Travis job timeout
if [ "$SKIP_DISABLED" == "false" ]; then

clearDockerEnv
mvn -e -V -B clean install -f testsuite -Pkafka-3_2_3 -DfailIfNoTests=false -Dtest=\!KeycloakKRaftAuthorizationTests
EXIT=$?
exitIfError

clearDockerEnv
mvn -e -V -B clean install -f testsuite -Pkafka-3_1_2 -DfailIfNoTests=false -Dtest=\!KeycloakKRaftAuthorizationTests,\!KeycloakZKAuthorizationTests
EXIT=$?
66 changes: 59 additions & 7 deletions README.md
@@ -1225,21 +1225,73 @@ Configuring the metrics

By default, the gathering and exporting of metrics is disabled. Metrics are available to get an insight into the performance and failures during token validation, authorization operations and client authentication to the authorization server. You can also monitor the authorization server requests by background services such as refreshing of JWKS keys and refreshing of grants when `KeycloakAuthorizer` is used.

You can enable metrics for token validation on the Kafka broker or for client authentication on the client by setting the following JAAS option to `true`:
You can enable metrics for token validation and `KeycloakAuthorizer` on the Kafka broker, or for client authentication on the client, by setting the following JAAS option to `true`:
- `oauth.enable.metrics` (e.g.: "true")

You can enable metrics for `KeycloakAuthorizer` by setting an analogous option in Kafka broker's `server.properties` file:
You can also enable metrics only for `KeycloakAuthorizer` by setting an analogous option in the Kafka broker's `server.properties` file:
- `strimzi.authorization.enable.metrics` (e.g.: "true")

If `OAUTH_ENABLE_METRICS` env variable is set or if `oauth.enable.metrics` system property is set, that will both also enable the metrics for `KeycloakAuthorizer`.
If the `OAUTH_ENABLE_METRICS` env variable or the `oauth.enable.metrics` system property is set, that also enables the metrics for `KeycloakAuthorizer` (as well as for token validation and client authentication).
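
For illustration, a minimal `server.properties` sketch showing where each of these options goes. Only the metrics-related lines are shown - a real listener also needs the usual OAuth validation options and callback handler described elsewhere in this README, and the listener name `client` is just a placeholder:
```
# Enable metrics only for the KeycloakAuthorizer
strimzi.authorization.enable.metrics=true

# Enable metrics for token validation on a listener by adding the JAAS option
# 'oauth.enable.metrics' to that listener's 'sasl.jaas.config'
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
    oauth.enable.metrics="true" ;
```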

If `oauth.config.id` is specified in JAAS configuration of the listener or the client, it will be available in MBean / metric name as `contextId` attribute. If not specified, it will be calculated from JAAS configuration for the validator or default to `client` in client JAAS config, or `keycloak-authorizer` for KeycloakAuthorizer metrics.
OAuth metrics ignore the Kafka `metric.reporters` option in order to avoid automatically instantiating duplicate instances of reporters. Most reporters expect to be singleton objects and may not function properly in multiple copies.
Instead, the `strimzi.oauth.metric.reporters` option is used to list the reporters that support multiple copies and should be instantiated for the purpose of OAuth metrics integration:
- `strimzi.oauth.metric.reporters` (e.g.: "org.apache.kafka.common.metrics.JmxReporter,org.some.package.SomeMetricReporter", use ',' as a separator to enable multiple reporters.)

Metrics are exposed through JMX managed beans. They can also be exposed as Prometheus metrics by using the Prometheus JMX Exporter agent, and mapping the JMX metrics names to prometheus metrics names.
If this configuration option is not set and OAuth metrics are enabled for some component, then a new instance of `org.apache.kafka.common.metrics.JmxReporter` will automatically be instantiated to provide JMX integration for OAuth metrics.
However, if `strimzi.oauth.metric.reporters` is set, then only the reporters specified in the list will be instantiated and integrated. Setting the option to an empty string results in no reporters being instantiated.
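
A minimal sketch of the three possible settings, shown in system property form (e.g. appended to `KAFKA_OPTS`); `org.some.package.SomeMetricReporter` is just a placeholder class:
```
# Option not set at all: a new JmxReporter instance is used by default

# Explicit list: only the listed reporters are instantiated
-Dstrimzi.oauth.metric.reporters=org.apache.kafka.common.metrics.JmxReporter,org.some.package.SomeMetricReporter

# Empty value: no reporters are instantiated, not even JmxReporter
-Dstrimzi.oauth.metric.reporters=
```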

The OAuth metrics also honor the `metric.reporters`, `metrics.num.samples`, `metrics.recording.level` and `metrics.sample.window.ms` configurations of Kafka runtimes. When OAuth metrics are enabled the OAuth layer creates duplicate instances of `JmxReporter` and the configured `MetricReporter`s since at the moment there is no other way to integrate with the existing metrics system of Kafka runtimes.
The configuration option `strimzi.oauth.metric.reporters` on the Kafka broker has to be configured as an env variable or a system property. Setting it inside `server.properties` does not work reliably because of the multiple pluggability mechanisms that can be used (authorizer, authentication callback handler, inter-broker client).
Some of these mechanisms receive a filtered view of `server.properties`, where only configuration recognised by Kafka makes it through. However, this is a global OAuth metrics configuration, initialized on first use by any of the components, using the configuration provided to that component.
Specifically, the inter-broker client using OAUTHBEARER might be the first to trigger OAuth metrics initialisation on the broker, and it does not see this config option.

When OAuth metrics are enabled, managed beans are registered on demand, containing the attributes that are easily translated into Prometheus metrics.
In order to reliably configure `strimzi.oauth.metric.reporters` one of the following options should be used when starting a Kafka broker:
- `STRIMZI_OAUTH_METRIC_REPORTERS` env variable
- `strimzi.oauth.metric.reporters` env variable or system property

At the moment there is no way to integrate with the existing Kafka metrics / reporters objects already instantiated in the different Kafka runtimes (producer, consumer, broker, ...).
When OAuth metrics are enabled, the OAuth layer has to create its own copies of the metric reporters.

NOTE: In OAuth versions preceding 0.13.0 the `metric.reporters` configuration option was used to configure reporters, which were consequently instantiated twice.
The `metric.reporters` option is no longer used.

The Kafka options that control sampling are honored: `metrics.num.samples`, `metrics.recording.level` and `metrics.sample.window.ms`.
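
For example, these standard Kafka options can be set in the broker's `server.properties` (the values below are purely illustrative):
```
metrics.num.samples=3
metrics.recording.level=DEBUG
metrics.sample.window.ms=15000
```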

### Example for Kafka broker:
```
# Enable OAuth metrics for all listeners, Keycloak authorizer, and inter-broker clients:
export OAUTH_ENABLE_METRICS=true
# Use a custom metric reporter rather than the default JmxReporter
export STRIMZI_OAUTH_METRIC_REPORTERS=org.some.package.SomeMetricReporter
bin/kafka-server-start.sh config/server.properties
```

### Example for Kafka client:
```
# Show the content of client properties file
cat ~/client.properties
...
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
oauth.token.endpoint.uri="https://server/token-endpoint" oauth.client.id="clientId" oauth.client.secret="client-secret" oauth.enable.metrics="true";
strimzi.oauth.metric.reporters=org.some.package.SomeMetricReporter
...
# Start the client
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic my-topic --producer.config=$HOME/client.properties
```

### Simplest example for Kafka broker:
```
# Enable OAuth metrics for all listeners, Keycloak authorizer, and inter-broker clients:
# With no 'strimzi.oauth.metric.reporters' specified 'org.apache.kafka.common.metrics.JmxReporter' will be used automatically
export OAUTH_ENABLE_METRICS=true
bin/kafka-server-start.sh config/server.properties
```

A common use case is for metrics to be exposed through JMX managed beans. They can then also be exposed as Prometheus metrics by using the Prometheus JMX Exporter agent and mapping the JMX metric names to Prometheus metric names.
If `oauth.config.id` is specified in the JAAS configuration of the listener or the client, it will be available in the MBean / metric name as the `contextId` attribute. If not specified, it is calculated from the JAAS configuration of the validator, or defaults to `client` in the client JAAS config, or to `keycloak-authorizer` for `KeycloakAuthorizer` metrics.

When `JmxReporter` is enabled, managed beans are registered on demand, containing the attributes that are easily translated into Prometheus metrics.

Each registered MBean contains two counter variables - `count` and `totalTimeMs`.
It also contains three gauge variables - `minTimeMs`, `maxTimeMs` and `avgTimeMs`. These are measured within the configured sample time window.
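
As an illustration, a minimal Prometheus JMX Exporter configuration sketch. It assumes the OAuth MBeans are registered under the `strimzi.oauth` JMX domain - verify the actual MBean names in your deployment (for example with `jconsole`) and adjust the filter accordingly:
```
lowercaseOutputName: true
lowercaseOutputLabelNames: true
whitelistObjectNames:
  # Assumed JMX domain of the OAuth metrics MBeans
  - "strimzi.oauth:*"
rules:
  # Expose all attributes (count, totalTimeMs, minTimeMs, maxTimeMs, avgTimeMs) using default naming
  - pattern: ".*"
```
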
2 changes: 2 additions & 0 deletions examples/kubernetes/kafka-oauth-authz-metrics-client.yaml
@@ -46,6 +46,8 @@ spec:
env:
- name: OAUTH_ENABLE_METRICS
value: "true"
- name: STRIMZI_OAUTH_METRIC_REPORTERS
value: org.apache.kafka.common.metrics.JmxReporter
- name: SECRET
valueFrom:
secretKeyRef:
17 changes: 10 additions & 7 deletions examples/kubernetes/kafka-oauth-single-authz-metrics.yaml
@@ -5,7 +5,7 @@ metadata:
name: my-cluster
spec:
kafka:
version: 3.3.2
version: 3.4.0
replicas: 1
listeners:
- name: plain
@@ -68,12 +68,14 @@ spec:
env:
- name: OAUTH_ENABLE_METRICS
value: "true"
# - name: KAFKA_DEBUG
# value: "y"
# - name: DEBUG_SUSPEND_FLAG
# value: "y"
# - name: JAVA_DEBUG_PORT
# value: "5005"
- name: STRIMZI_OAUTH_METRIC_REPORTERS
value: org.apache.kafka.common.metrics.JmxReporter
#- name: KAFKA_DEBUG
# value: "y"
#- name: DEBUG_SUSPEND_FLAG
# value: "n"
#- name: JAVA_DEBUG_PORT
# value: "5005"
jmxOptions: {}

zookeeper:
@@ -94,6 +96,7 @@ spec:
entityOperator:
topicOperator: {}
userOperator: {}

---
kind: ConfigMap
apiVersion: v1
@@ -0,0 +1,17 @@
/*
* Copyright 2017-2023, Strimzi authors.
* License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html).
*/
package io.strimzi.kafka.oauth.metrics;

import io.strimzi.kafka.oauth.common.Config;

/**
* Configuration that can be specified as ENV vars, System properties or in <code>server.properties</code> configuration file,
* but not as part of the JAAS configuration.
*/
public class GlobalConfig extends Config {

/** The name of the 'strimzi.oauth.metric.reporters' config option */
public static final String STRIMZI_OAUTH_METRIC_REPORTERS = "strimzi.oauth.metric.reporters";
}
@@ -24,23 +24,47 @@
import org.apache.kafka.common.utils.Time;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

import static io.strimzi.kafka.oauth.metrics.GlobalConfig.STRIMZI_OAUTH_METRIC_REPORTERS;
import static org.apache.kafka.clients.CommonClientConfigs.CLIENT_ID_CONFIG;
import static org.apache.kafka.clients.CommonClientConfigs.METRICS_NUM_SAMPLES_CONFIG;
import static org.apache.kafka.clients.CommonClientConfigs.METRICS_RECORDING_LEVEL_CONFIG;
import static org.apache.kafka.clients.CommonClientConfigs.METRICS_SAMPLE_WINDOW_MS_CONFIG;

/**
* The singleton for handling a cache of all the Sensors to prevent unnecessary redundant re-registrations.
* There is a one-to-one mapping between a SensorKey and a Sensor, and one-to-one mapping between a Sensor and an MBean name.
*
* MBeans are registered as requested by JmxReporter attached to the Metrics object.
* There is a one-to-one mapping between a <code>SensorKey</code> and a <code>Sensor</code>, and one-to-one mapping between a <code>Sensor</code> and an <code>MBean</code> name.
* <p>
* MBeans are registered as requested by <code>JmxReporter</code> attached to the <code>Metrics</code> object.
* The <code>JmxReporter</code> either has to be explicitly configured using a config option <code>strimzi.oauth.metric.reporters</code>,
* or if that config option is not set, a new instance is configured by default.
* <p>
* Since OAuth instantiates its own <code>Metrics</code> object it also has to instantiate reporters to attach them to this <code>Metrics</code> object.
* To prevent double instantiation of <code>MetricReporter</code> objects that require to be singleton, all <code>MetricReporter</code> objects
* to be integrated with <code>OAuthMetrics</code> have to be separately instantiated.
* <p>
* Example 1:
* <pre>
* strimzi.oauth.metric.reporters=org.apache.kafka.common.metrics.JmxReporter,org.some.package.SomeMetricReporter
* </pre>
* The above will instantiate and integrate with OAuth metrics the JmxReporter instance, and a SomeMetricReporter instance.
* <p>
* Example 2:
* <pre>
* strimzi.oauth.metric.reporters=
* </pre>
* The above will not instantiate and integrate any metric reporters with OAuth metrics, not even JmxReporter.
* <p>
* Note: On the Kafka broker it is best to use <code>STRIMZI_OAUTH_METRIC_REPORTERS</code> env variable or <code>strimzi.oauth.metric.reporters</code> system property,
* rather than a <code>server.properties</code> global configuration option.
*/
public class OAuthMetrics {

@@ -57,9 +81,14 @@ public class OAuthMetrics {
*
* @param configMap Configuration properties
*/
@SuppressWarnings("unchecked")
OAuthMetrics(Map<String, ?> configMap) {
this.configMap = configMap;
this.config = new Config(configMap);

// Make sure to add the resolved 'strimzi.oauth.metric.reporters' configuration to the config map
((Map<String, Object>) configMap).put(STRIMZI_OAUTH_METRIC_REPORTERS, config.getValue(STRIMZI_OAUTH_METRIC_REPORTERS));
this.configMap = configMap;

this.metrics = initKafkaMetrics();
}

@@ -90,26 +119,47 @@ private Metrics initKafkaMetrics() {

private List<MetricsReporter> initReporters() {
AbstractConfig kafkaConfig = initKafkaConfig();
List<MetricsReporter> reporters = kafkaConfig.getConfiguredInstances(CommonClientConfigs.METRIC_REPORTER_CLASSES_CONFIG,
MetricsReporter.class);

if (configMap.get(STRIMZI_OAUTH_METRIC_REPORTERS) != null) {
return kafkaConfig.getConfiguredInstances(STRIMZI_OAUTH_METRIC_REPORTERS, MetricsReporter.class);
}
JmxReporter reporter = new JmxReporter();
reporter.configure(configMap);

reporters.add(reporter);
return reporters;
return Collections.singletonList(reporter);
}

private AbstractConfig initKafkaConfig() {
ConfigDef configDef = new ConfigDef()
.define(CommonClientConfigs.METRIC_REPORTER_CLASSES_CONFIG,
ConfigDef.Type.LIST,
Collections.emptyList(),
new ConfigDef.NonNullValidator(),
ConfigDef.Importance.LOW,
CommonClientConfigs.METRIC_REPORTER_CLASSES_DOC);

return new AbstractConfig(configDef, configMap);
ConfigDef configDef = addMetricReporterToConfigDef(new ConfigDef(), STRIMZI_OAUTH_METRIC_REPORTERS);
return new AbstractConfig(configDef, toMapOfStringValues(configMap));
}

private ConfigDef addMetricReporterToConfigDef(ConfigDef configDef, String name) {
return configDef.define(name,
ConfigDef.Type.LIST,
Collections.emptyList(),
new ConfigDef.NonNullValidator(),
ConfigDef.Importance.LOW,
CommonClientConfigs.METRIC_REPORTER_CLASSES_DOC);
}

private Map<String, String> toMapOfStringValues(Map<String, ?> configMap) {
HashMap<String, String> result = new HashMap<>();
for (Map.Entry<String, ?> ent: configMap.entrySet()) {
Object val = ent.getValue();
if (val == null) {
continue;
}
if (val instanceof Class) {
result.put(ent.getKey(), ((Class<?>) val).getCanonicalName());
} else if (val instanceof List) {
String stringVal = ((List<?>) val).stream().map(String::valueOf).collect(Collectors.joining(","));
if (!stringVal.isEmpty()) {
result.put(ent.getKey(), stringVal);
}
} else {
result.put(ent.getKey(), String.valueOf(ent.getValue()));
}
}
return result;
}

private KafkaMetricsContext createKafkaMetricsContext() {
@@ -39,6 +39,13 @@ public static synchronized void configure(Map<String, ?> configs) {
}
}

/**
* Close any configured Services so they can be reinitialised again
*/
public static synchronized void close() {
services = null;
}

/**
* Get a configured singleton instance
*
@@ -93,4 +93,31 @@ public static void logStart(String msg) {
System.out.println("======== " + msg);
System.out.println();
}

public static int findFirstMatchingInLog(List<String> log, String regex) {
int lineNum = 0;
Pattern pattern = Pattern.compile(regex);
for (String line: log) {
if (pattern.matcher(line).find()) {
return lineNum;
}
lineNum++;
}
return -1;
}

public static boolean checkLogForRegex(List<String> log, String regex) {
return findFirstMatchingInLog(log, regex) != -1;
}

public static int countLogForRegex(List<String> log, String regex) {
int count = 0;
Pattern pattern = Pattern.compile(regex);
for (String line: log) {
if (pattern.matcher(line).find()) {
count += 1;
}
}
return count;
}
}
@@ -2,7 +2,7 @@
* Copyright 2017-2022, Strimzi authors.
* License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html).
*/
package io.strimzi.testsuite.oauth.auth.metrics;
package io.strimzi.testsuite.oauth.common.metrics;

import org.apache.kafka.common.metrics.KafkaMetric;
import org.slf4j.Logger;
19 changes: 14 additions & 5 deletions testsuite/keycloak-auth-tests/docker-compose.yml
@@ -45,7 +45,7 @@ services:
- "5006:5006"
volumes:
- ${PWD}/../docker/target/kafka/libs:/opt/kafka/libs/strimzi
- ${PWD}/target/test-classes:/opt/kafka/libs/strimzi/reporters
- ${PWD}/../common/target/classes:/opt/kafka/libs/strimzi/reporters
- ${PWD}/../docker/kafka/config:/opt/kafka/config/strimzi
- ${PWD}/../docker/kafka/scripts:/opt/kafka/strimzi
command:
@@ -66,9 +66,6 @@ services:
- KAFKA_INTER_BROKER_LISTENER_NAME=INTROSPECT
- KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=OAUTHBEARER

# Common settings for all the listeners
- OAUTH_ENABLE_METRICS=true

# username extraction from JWT token claim
- OAUTH_USERNAME_CLAIM=preferred_username
- KAFKA_PRINCIPAL_BUILDER_CLASS=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder
@@ -129,12 +126,24 @@ services:
- KAFKA_LISTENER_NAME_FORGE_OAUTHBEARER_SASL_SERVER_CALLBACK_HANDLER_CLASS=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler

# Test Metrics Reporters
- KAFKA_METRIC_REPORTERS=io.strimzi.testsuite.oauth.auth.metrics.TestMetricsReporter
- KAFKA_METRICS_CONTEXT_TEST_LABEL=testvalue
- KAFKA_METRICS_NUM_SAMPLES=3
- KAFKA_METRICS_RECORDING_LEVEL=DEBUG
- KAFKA_METRICS_SAMPLE_WINDOW_MS=15000


# OAuth metrics configuration

- OAUTH_ENABLE_METRICS=true
# When enabling metrics we also have to explicitly configure JmxReporter to have metrics available in JMX
# The following value will be available as env var STRIMZI_OAUTH_METRIC_REPORTERS
- STRIMZI_OAUTH_METRIC_REPORTERS=org.apache.kafka.common.metrics.JmxReporter,io.strimzi.testsuite.oauth.common.metrics.TestMetricsReporter

# The following value would turn into 'strimzi.oauth.metric.reporters=...' in the 'strimzi.properties' file
# However, that won't work reliably, as the value may be filtered out of the configuration seen by the component that happens to initialise OAuthMetrics
#- KAFKA_STRIMZI_OAUTH_METRIC_REPORTERS=org.apache.kafka.common.metrics.JmxReporter


# For start.sh script to know where the keycloak is listening
- KEYCLOAK_HOST=${KEYCLOAK_HOST:-keycloak}
- REALM=${REALM:-forge}