- Pre-requisites
- Installation
- Start your ELK containers
- How to access ELK?
- How to check all indices in Elasticsearch?
- How to delete the index in Elasticsearch?
- Docker:
- Install Docker, either using a native package (Linux) or wrapped in a virtual machine (Windows, OS X; e.g. using Boot2Docker or Vagrant).
- A minimum of 4GB RAM assigned to Docker
- Elasticsearch alone needs at least 2GB of RAM to run.
- With Docker for Mac, the amount of RAM dedicated to Docker can be set using the UI: see How to increase docker-machine memory Mac (Stack Overflow).
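- To see how much memory is currently assigned to Docker, one quick check is the Total Memory line in docker info (a sketch; the exact output format may differ between Docker versions):
docker info | grep -i "total memory"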
- A limit on mmap counts equal to 262,144 or more (IMPORTANT)
- On Linux: use sysctl vm.max_map_count on the host to view the current value, and see the Elasticsearch documentation on virtual memory for guidance on how to change this value. Note that the limits **must be changed on the host**; they cannot be changed from within a container.
Command:
sudo sysctl -w vm.max_map_count=262144
- On Windows: start the Docker Quickstart Terminal and enter the following commands:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
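- To make the setting above survive a reboot on a Linux host, one option (a sketch; some distributions prefer a drop-in file under /etc/sysctl.d/ instead) is:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p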
Pull official images of Elasticsearch, Logstash, and Kibana from https://www.docker.elastic.co
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker pull docker.elastic.co/kibana/kibana:6.2.2
docker pull docker.elastic.co/logstash/logstash:6.2.2
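- To confirm the images were pulled, you can list the local images and filter on the registry name above:
docker images | grep docker.elastic.co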
- Start Elasticsearch container
docker run --rm -d -it --name es -p 9200:9200 -p 9300:9300 docker.elastic.co/elasticsearch/elasticsearch:6.2.2
- If you want to mount a volume for the Elasticsearch data, use the following command:
docker run --rm -d -it --name es -p 9200:9200 -p 9300:9300 -v "$PWD/your/dir/path":/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch:6.2.2
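- Once the container is up, a quick way to verify that Elasticsearch is responding (using the 9200 port mapping above; on Windows replace localhost with the Docker machine IP):
curl "http://localhost:9200/_cluster/health?pretty"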
- Start Kibana container
docker run --rm -d -it --name kibana --link es:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:6.2.2
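- If Kibana does not come up at the URL listed in the access section below, the container logs usually show whether it could reach the linked Elasticsearch container:
docker logs kibana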
- Start Logstash container
docker run -d -it --name logstash -p 5000:5000 docker.elastic.co/logstash/logstash:6.2.2 -e 'input { tcp { port => 5000 codec => "json" } } output { elasticsearch { hosts => ["localhost"] index => "indexName"} }'
- You can also provide the pipeline configuration from a logstash.conf file (a sketch of such a file is shown after the command):
docker run --rm -d -it --name logstash -p 5000:5000 -v "$PWD/path/to/config/dir":/config-dir docker.elastic.co/logstash/logstash:6.2.2 -f /config-dir/logstash.conf
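- A minimal sketch of what such a pipeline file could contain, mirroring the inline -e pipeline above (indexName and the directory are placeholders; the hosts entry must point at an address the Logstash container can actually reach):
mkdir -p "$PWD/path/to/config/dir"
cat > "$PWD/path/to/config/dir/logstash.conf" <<'EOF'
input { tcp { port => 5000 codec => "json" } }
output { elasticsearch { hosts => ["localhost"] index => "indexName" } }
EOF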
- Kibana:
- Linux: http://localhost:5601
- Windows: http://192.168.99.100:5601 (VirtualBox's IP)
- Elasticsearch:
- To check if elasticsearch is running
- Linux: http://localhost:9200
- Windows: http://192.168.99.100:9200
- To check all the indices in elasticsearch
- Linux: http://localhost:9200/_cat/indices?v
- Windows: http://192.168.99.100:9200/_cat/indices?v
- To delete an index in elasticsearch
- Linux:
curl -XDELETE "http://localhost:9200/indexName"
- Windows (in Docker quickstart terminal):
curl -XDELETE "http://192.168.99.100:9200/indexName"
- Create a file src/test/resource/logback.xml (refer to src/test/resource/logback-elk.xml).
- Add a net.logstash.logback.appender.LogstashTcpSocketAppender appender in it, and under that appender set the Logstash IP in the destination element.
- When you execute your code, all the logs will go into elasticsearch.
- You can also provide the logback configuration file at runtime by setting the system property logback.configurationFile=path\to\logback.xml. If no configuration file is provided, src/test/resource/logback.xml is used as the default.
- You can add extra fields to the documents in elasticsearch using org.slf4j.MDC.put("key", "value").
- For more information refer to src/test/resource/logback-elk.xml.
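- Before relying on the application appender, you can sanity-check the Logstash-to-Elasticsearch path by sending one JSON event straight to the TCP input on port 5000 (a sketch; nc/netcat is assumed to be installed and its flags vary between variants; on Windows replace localhost with the Docker machine IP):
echo '{"message":"hello from netcat","level":"INFO"}' | nc localhost 5000
curl "http://localhost:9200/_cat/indices?v"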