(Doc+) Flush out Slow Logs #118518

wants to merge 7 commits into base: main
4 changes: 2 additions & 2 deletions docs/reference/api-conventions.asciidoc
@@ -28,7 +28,7 @@ You can pass an `X-Opaque-Id` HTTP header to track the origin of a request in

* Response of any request that includes the header
* <<_identifying_running_tasks,Task management API>> response
* <<_identifying_search_slow_log_origin,Slow logs>>
* <<search-slow-log,Slow logs>>
* <<deprecation-logging,Deprecation logs>>

For the deprecation logs, {es} also uses the `X-Opaque-Id` value to throttle
@@ -52,7 +52,7 @@ safely generate a unique `traceparent` header for each request.
If provided, {es} surfaces the header's `trace-id` value as `trace.id` in the:

* <<logging,JSON {es} server logs>>
* <<_identifying_search_slow_log_origin,Slow logs>>
* <<search-slow-log,Slow logs>>
* <<deprecation-logging,Deprecation logs>>

For example, the following `traceparent` value would produce the following
5 changes: 5 additions & 0 deletions docs/reference/how-to/search-speed.asciidoc
@@ -567,3 +567,8 @@ also possible to update the client-side logic in order to route queries to the
relevant indices based on filters. However `constant_keyword` makes it
transparently and allows to decouple search requests from the index topology in
exchange of very little overhead.

[discrete]
=== Default search timeout

By default, search requests don't time out. You can set a timeout using the <<search-timeout,`search.default_search_timeout`>> setting.
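For example, the following sketch applies a `30s` default timeout across the cluster using the <<cluster-update-settings,update cluster settings API>>. The `30s` value is an arbitrary assumption; tune it for your workload:

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "search.default_search_timeout": "30s"
  }
}
--------------------------------------------------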
269 changes: 170 additions & 99 deletions docs/reference/index-modules/slowlog.asciidoc
@@ -1,15 +1,118 @@
[[index-modules-slowlog]]
== Slow Log
== Slow log

The slow log records database searching and indexing events that have execution durations above specified thresholds. You can use these logs to investigate, analyze, or troubleshoot your cluster's historical search and indexing performance.

Slow logs report task duration at the shard level for searches, and at the index level
for indexing, but might not encompass the full task execution time observed on the client. For example, slow logs don't surface HTTP network delays or the impact of <<task-queue-backlog,task queues>>.

Events that meet the specified threshold are emitted into <<logging,{es} logging>> under the `fileset.name` of `slowlog`. These logs can be viewed in the following locations:

* If <<monitoring-overview,{es} monitoring>> is enabled, from
{kibana-ref}/xpack-monitoring.html[Stack Monitoring]. Slow log events have a `logger` value of `index.search.slowlog` or `index.indexing.slowlog`.

* From the local {es} service's logs directory. Slow log files have a suffix of `_index_search_slowlog.json` or `_index_indexing_slowlog.json`.

[discrete]
[[slow-log-format]]
=== Slow log format

The following is an example of a search event in the slow log:

TIP: If a call was initiated with an `X-Opaque-Id` header, then the ID is automatically included in search slow logs in the `elasticsearch.slowlog.id` field. See <<x-opaque-id,X-Opaque-Id HTTP header>> for details and best practices.

[source,js]
---------------------------
{
"@timestamp": "2024-12-21T12:42:37.255Z",
"auth.type": "REALM",
"ecs.version": "1.2.0",
"elasticsearch.cluster.name": "distribution_run",
"elasticsearch.cluster.uuid": "Ui23kfF1SHKJwu_hI1iPPQ",
"elasticsearch.node.id": "JK-jn-XpQ3OsDUsq5ZtfGg",
"elasticsearch.node.name": "node-0",
"elasticsearch.slowlog.id": "tomcat-123",
"elasticsearch.slowlog.message": "[index6][0]",
"elasticsearch.slowlog.search_type": "QUERY_THEN_FETCH",
"elasticsearch.slowlog.source": "{\"query\":{\"match_all\":{\"boost\":1.0}}}",
"elasticsearch.slowlog.stats": "[]",
"elasticsearch.slowlog.took": "747.3micros",
"elasticsearch.slowlog.took_millis": 0,
"elasticsearch.slowlog.total_hits": "1 hits",
"elasticsearch.slowlog.total_shards": 1,
"event.dataset": "elasticsearch.index_search_slowlog",
"fileset.name" : "slowlog",
"log.level": "WARN",
"log.logger": "index.search.slowlog.query",
"process.thread.name": "elasticsearch[runTask-0][search][T#5]",
"service.name": "ES_ECS",
"user.name": "elastic",
"user.realm": "reserved"
}

---------------------------
// NOTCONSOLE


The following is an example of an indexing event in the slow log:

[source,js]
---------------------------
{
"@timestamp" : "2024-12-11T22:34:22.613Z",
"auth.type": "REALM",
"ecs.version": "1.2.0",
"elasticsearch.cluster.name" : "41bd111609d849fc9bf9d25b5df9ce96",
"elasticsearch.cluster.uuid" : "BZTn4I9URXSK26imlia0QA",
"elasticsearch.index.id" : "3VfGR7wRRRKmMCEn7Ii58g",
"elasticsearch.index.name": "my-index-000001",
"elasticsearch.node.id" : "GGiBgg21S3eqPDHzQiCMvQ",
"elasticsearch.node.name" : "instance-0000000001",
"elasticsearch.slowlog.id" : "RCHbt5MBT0oSsCOu54AJ",
"elasticsearch.slowlog.source": "{\"key\":\"value\"}",
"elasticsearch.slowlog.took" : "0.01ms",
"event.dataset": "elasticsearch.index_indexing_slowlog",
"fileset.name" : "slowlog",
"log.level" : "TRACE",
"log.logger" : "index.indexing.slowlog.index",
"service.name" : "ES_ECS",
"user.name": "elastic",
"user.realm": "reserved"
}

---------------------------
// NOTCONSOLE

[discrete]
[[enable-slow-log]]
=== Enable slow logging

You can enable slow logging at two levels:

* For all indices, in the <<settings,{es} `log4j2.properties` configuration file>>. This method requires a node restart.
* At the index level, using the <<indices-update-settings,update indices settings API>>. These settings are dynamic and don't require a restart.

By default, all thresholds are set to `-1`, which results in no events being logged.

Slow log thresholds can be enabled for four logging levels: `trace`, `debug`, `info`, and `warn`. You can mimic setting a log level by disabling the thresholds of the more verbose levels.
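For example, to mimic an `info` slow log level for indexing, disable the more verbose `debug` and `trace` thresholds. The following is a sketch that assumes the hypothetical index `my-index-000001`:

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
  "index.indexing.slowlog.threshold.index.debug": "-1",
  "index.indexing.slowlog.threshold.index.trace": "-1"
}
--------------------------------------------------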

To view the current slow log settings, use the <<indices-get-settings,get index settings API>>:

[source,console]
--------------------------------------------------
GET _all/_settings?expand_wildcards=all&filter_path=*.settings.index.*.slowlog
--------------------------------------------------

[discrete]
[[search-slow-log]]
=== Search Slow Log
==== Enable slow logging for search events

Search slow logs are emitted per shard. They must be enabled separately for the shard's link:https://www.elastic.co/blog/understanding-query-then-fetch-vs-dfs-query-then-fetch[query and fetch search phases].

Shard level slow search log allows to log slow search (query and fetch
phases) into a dedicated log file.
You can use the `index.search.slowlog.include.user` or `index.indexing.slowlog.include.user` settings to append `user.*` and `auth.type` fields to slow log entries. These fields contain information about the user who triggered the request.

Thresholds can be set for both the query phase of the execution, and
fetch phase, here is a sample:
The following snippet adjusts all available search slow log settings across all indices using the
<<settings,`log4j2.properties` configuration file>>:

[source,yaml]
--------------------------------------------------
@@ -22,10 +125,11 @@ index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms

index.search.slowlog.include.user: true
--------------------------------------------------

All of the above settings are _dynamic_ and can be set for each index using the
<<indices-update-settings, update indices settings>> API. For example:
The following snippet adjusts the same settings for a single index using the <<indices-update-settings,update indices settings API>>:

[source,console]
--------------------------------------------------
@@ -38,96 +142,37 @@ PUT /my-index-000001/_settings
"index.search.slowlog.threshold.fetch.warn": "1s",
"index.search.slowlog.threshold.fetch.info": "800ms",
"index.search.slowlog.threshold.fetch.debug": "500ms",
"index.search.slowlog.threshold.fetch.trace": "200ms"
}
--------------------------------------------------
// TEST[setup:my_index]

By default thresholds are disabled (set to `-1`).

The logging is done on the shard level scope, meaning the execution of a
search request within a specific shard. It does not encompass the whole
search request, which can be broadcast to several shards in order to
execute. Some of the benefits of shard level logging is the association
of the actual execution on the specific machine, compared with request
level.


The search slow log file is configured in the `log4j2.properties` file.

[discrete]
==== Identifying search slow log origin

It is often useful to identify what triggered a slow running query.
To include information about the user that triggered a slow search,
use the `index.search.slowlog.include.user` setting.

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
"index.search.slowlog.threshold.fetch.trace": "200ms",
"index.search.slowlog.include.user": true
}
--------------------------------------------------
// TEST[setup:my_index]

This will result in user information being included in the slow log.

[source,js]
---------------------------
{
"@timestamp": "2024-02-21T12:42:37.255Z",
"log.level": "WARN",
"auth.type": "REALM",
"elasticsearch.slowlog.id": "tomcat-123",
"elasticsearch.slowlog.message": "[index6][0]",
"elasticsearch.slowlog.search_type": "QUERY_THEN_FETCH",
"elasticsearch.slowlog.source": "{\"query\":{\"match_all\":{\"boost\":1.0}}}",
"elasticsearch.slowlog.stats": "[]",
"elasticsearch.slowlog.took": "747.3micros",
"elasticsearch.slowlog.took_millis": 0,
"elasticsearch.slowlog.total_hits": "1 hits",
"elasticsearch.slowlog.total_shards": 1,
"user.name": "elastic",
"user.realm": "reserved",
"ecs.version": "1.2.0",
"service.name": "ES_ECS",
"event.dataset": "elasticsearch.index_search_slowlog",
"process.thread.name": "elasticsearch[runTask-0][search][T#5]",
"log.logger": "index.search.slowlog.query",
"elasticsearch.cluster.uuid": "Ui23kfF1SHKJwu_hI1iPPQ",
"elasticsearch.node.id": "JK-jn-XpQ3OsDUsq5ZtfGg",
"elasticsearch.node.name": "node-0",
"elasticsearch.cluster.name": "distribution_run"
}

---------------------------
// NOTCONSOLE

If a call was initiated with an `X-Opaque-ID` header, then the ID is included
in Search Slow logs in the **elasticsearch.slowlog.id** field. See
<<x-opaque-id, X-Opaque-Id HTTP header>> for details and best practices.

[discrete]
[[index-slow-log]]
=== Index Slow log
==== Enable slow logging for indexing events

Indexing slow logs are emitted once per indexed document.

The indexing slow log, similar in functionality to the search slow
log. The log file name ends with `_index_indexing_slowlog.json`. Log and
the thresholds are configured in the same way as the search slowlog.
Index slowlog sample:
The following snippet adjusts all available indexing slow log settings across all indices using the
<<settings,`log4j2.properties` configuration file>>:

[source,yaml]
--------------------------------------------------
index.indexing.slowlog.threshold.index.warn: 10s
index.indexing.slowlog.threshold.index.info: 5s
index.indexing.slowlog.threshold.index.debug: 2s
index.indexing.slowlog.threshold.index.trace: 500ms

index.indexing.slowlog.source: 1000
index.indexing.slowlog.reformat: true

index.indexing.slowlog.include.user: true
--------------------------------------------------

All of the above settings are _dynamic_ and can be set for each index using the
<<indices-update-settings, update indices settings>> API. For example:

The following snippet adjusts the same settings for a single index using the <<indices-update-settings,update indices settings API>>:

[source,console]
--------------------------------------------------
@@ -137,39 +182,65 @@ PUT /my-index-000001/_settings
"index.indexing.slowlog.threshold.index.info": "5s",
"index.indexing.slowlog.threshold.index.debug": "2s",
"index.indexing.slowlog.threshold.index.trace": "500ms",
"index.indexing.slowlog.source": "1000"
"index.indexing.slowlog.source": "1000",
"index.indexing.slowlog.reformat": true,
"index.indexing.slowlog.include.user": true
}
--------------------------------------------------
// TEST[setup:my_index]

To include information about the user that triggered a slow indexing event,
use the `index.indexing.slowlog.include.user` setting.
[discrete]
===== Logging the `_source` field

By default, {es} logs the first 1000 characters of the `_source` in the slow log. You can adjust how `_source` is logged using the `index.indexing.slowlog.source` setting. Set `index.indexing.slowlog.source` to `false` or `0` to skip logging the source entirely. Set `index.indexing.slowlog.source` to `true` to log the entire source regardless of size.

The original `_source` is reformatted by default to ensure that it fits on a single log line. If preserving the original document format is important, then you can turn off reformatting by setting `index.indexing.slowlog.reformat` to `false`. This causes the source to be logged with its original formatting intact, potentially spanning multiple log lines.
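For example, the following sketch logs the complete, unreformatted `_source` for the hypothetical index `my-index-000001`:

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
  "index.indexing.slowlog.source": true,
  "index.indexing.slowlog.reformat": false
}
--------------------------------------------------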

[discrete]
[[troubleshoot-slow-log]]
=== Best practices for slow logging

Logging slow requests can be resource intensive for your {es} cluster, depending on the volume of qualifying traffic. For example, emitted logs might increase the index disk usage of your <<monitoring-overview,{es} monitoring>> cluster. To reduce the impact of slow logs, consider the following:

* Enable slow logs against specific indices rather than across all indices.
* Set high thresholds to reduce the number of logged events.
* Enable slow logs only when troubleshooting.

If you aren't sure how to start investigating traffic issues, consider setting a high `warn` threshold of `30s` at the index level using the <<indices-update-settings,update indices settings API>>:

* Enable for search requests:
+
[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
PUT _all/_settings
{
"index.search.slowlog.include.user": true
"index.search.slowlog.threshold.fetch.warn": "30s",
"index.search.slowlog.threshold.query.warn": "30s"
}
--------------------------------------------------

* Enable for indexing requests:
+
[source,console]
--------------------------------------------------
PUT _all/_settings
{
"index.indexing.slowlog.include.user": true
"index.indexing.slowlog.threshold.index.warn": "30s"
}
--------------------------------------------------
// TEST[setup:my_index]
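Once troubleshooting is complete, you can disable slow logging again by resetting the thresholds to their default of `-1`. For example:

[source,console]
--------------------------------------------------
PUT _all/_settings
{
  "index.search.slowlog.threshold.query.warn": "-1",
  "index.search.slowlog.threshold.fetch.warn": "-1",
  "index.indexing.slowlog.threshold.index.warn": "-1"
}
--------------------------------------------------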

By default Elasticsearch will log the first 1000 characters of the _source in
the slowlog. You can change that with `index.indexing.slowlog.source`. Setting
it to `false` or `0` will skip logging the source entirely, while setting it to
`true` will log the entire source regardless of size. The original `_source` is
reformatted by default to make sure that it fits on a single log line. If preserving
the original document format is important, you can turn off reformatting by setting
`index.indexing.slowlog.reformat` to `false`, which will cause the source to be
logged "as is" and can potentially span multiple log lines.
Meeting a slow log threshold doesn't guarantee a cluster performance issue. If you do notice symptoms, slow logs can provide helpful data for diagnosing upstream traffic patterns or sources and resolving client-side issues. For example, you can use the `X-Opaque-Id` header value, the `_source` request body, or the `user.*` fields to identify the source of your issue. This is similar to troubleshooting <<task-queue-backlog,live expensive tasks>>.

If you're experiencing search performance issues, then you might also consider investigating searches flagged for long query durations using the <<search-profile,profile API>>. You can then investigate optimization options for the profiled query using the link:{kibana-ref}/xpack-profiler.html[query profiler]. This type of investigation should usually take place in a non-production environment.
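As a sketch, assuming a hypothetical index `my-index-000001` and a query copied from the slow log's `elasticsearch.slowlog.source` field, a profiled search looks like the following:

[source,console]
--------------------------------------------------
GET /my-index-000001/_search
{
  "profile": true,
  "query": {
    "match_all": {}
  }
}
--------------------------------------------------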

The index slow log file is configured in the `log4j2.properties` file.
Slow logging checks each event against the reporting threshold when the event completes. As a result, it can't report events that trigger <<circuit-breaker-errors,circuit breaker errors>>. If you suspect circuit breaker errors, then consider also enabling <<enable-audit-logging,audit logging>>, which logs events before they are executed.

[discrete]
=== Slow log levels
=== Learn more

You can mimic the search or indexing slow log level by setting appropriate
threshold making "more verbose" loggers to be switched off.
If for instance we want to simulate `index.indexing.slowlog.level: INFO`
then all we need to do is to set
`index.indexing.slowlog.threshold.index.debug` and `index.indexing.slowlog.threshold.index.trace` to `-1`.
To learn about other ways to optimize your search and indexing requests, refer to <<tune-for-search-speed,tune for search speed>> and <<tune-for-indexing-speed,tune for indexing speed>>.