Inject config objects in elasticsearch/kibana at startup #877
Comments
Thanks for the feedback, I'm glad this template is being useful to you! 🙌 I'm 100% on board with the idea of having a better, more flexible setup mechanism in docker-elk. The current one, based on shell scripts, is very primitive, because the initial goal was only to be able to create the users required by the various stack components ("built-in users"). Since I didn't need any specific tool installed, I reused the…
Hi, thanks for the feedback. Your suggestion makes a lot of sense to me. Indeed, the choice of YAML to describe the things to inject is not a random one: long term, Ansible, the deployment tool chosen by my company, WILL be the target. My long-term goal is to have a single Ansible playbook describing a full (idempotent) installation:
This will both let developers have a 'packaged solution' to start their experiments quickly, and reduce the complexity for the OPS team responsible for running ELK in PROD (git repo + pull requests = review proposed changes, version changes, rebuild a stack from scratch, ...). Having not found a 'serious & ready to use' Ansible role for that purpose so far, it will take a bit of work to write one. Therefore, for the moment, my intermediate goal is to reduce the gap between sandboxes and the prod environment, with material (YAML description files) reusable for the long-term goal.
YAML should be fine. Probably even dropping files into certain directories should cover most cases (e.g. …
First of all, many thanks for this repository; it helped me save A LOT of time by providing a basis for a proper - reproducible - initialization of the elastic stack, as well as a good starting point for evolutions. My turn to give back the results of my ideas & explorations.
Problem description
In my elastic journey, my goal is to be able to (re)create on demand and from scratch 'sandbox' elastic stacks, configured as close as possible to my 'production' one. It means not only having the docker images running, but also having the config objects auto-recreated at startup. This involves:
Ideas & suggestions
To my mind, this involves three improvements:
1. inject elasticsearch objects via its API
For each object to inject into elasticsearch I write a `.request` file containing one Elasticsearch API call, written the same way it is presented in Dev Tools (first line for the HTTP method + path; following lines for the request body), as in `my-ilm-policy.request` below.
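The original file contents are not reproduced in this issue; a hypothetical `my-ilm-policy.request` could look like this (the policy body is an assumption, only the file format follows the convention described above):

```
PUT _ilm/policy/my-ilm-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```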
I have several files like that (ilm policies, component templates, data streams & index templates).
Then I wrote a YAML file `setup-elasticsearch.yml` describing the (ordered) list of request files to process during setup (for easy opt-in/opt-out).
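For illustration, such a list might look like this (the key and entry names are assumptions, not taken from the issue):

```yaml
# Hypothetical structure: an ordered list of .request files to replay
# against the Elasticsearch API during setup.
elasticsearch_setup:
  requests:
    - my-ilm-policy.request
    - my-component-template.request
    - my-index-template.request
    - my-data-stream.request
```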
In `setup/lib.sh` I added a basic function to parse YAML files and transform them into a bunch of (prefixed) env vars (cf. https://stackoverflow.com/a/51789677). And at the end of `entrypoint.sh` I added some code to load the YAML file, iterate over the list and make the appropriate curl requests (see the sketch below).
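A minimal sketch of what such a loop could look like. For brevity it iterates over a `requests/` directory glob instead of the parsed YAML list, and it assumes the `elasticsearch:9200` hostname and `ELASTIC_PASSWORD` variable used by docker-elk; all other names are illustrative:

```sh
#!/usr/bin/env bash
# Illustrative sketch only: replay each .request file against Elasticsearch.
set -euo pipefail

for file in requests/*.request; do
  # First line: "<METHOD> <PATH>", remaining lines: JSON body.
  read -r method path < "$file"
  body="$(tail -n +2 "$file")"

  curl -fsS -X "$method" "http://elasticsearch:9200/${path#/}" \
    -u "elastic:${ELASTIC_PASSWORD}" \
    -H 'Content-Type: application/json' \
    -d "$body"
done
```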
2. inject kibana objects via its API
Same behavior as for §1; work in progress.
It will probably need more logic, as not all APIs offer an 'upsert-like' behavior (e.g. spaces), and will thus need an additional first call to check whether the object already exists, then POST or PUT accordingly (as sketched below).
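As an illustration of that check-then-write logic, a minimal sketch for a Kibana space using the Spaces API (the space id, name, `kibana:5601` host and credentials are assumptions):

```sh
#!/usr/bin/env bash
# Illustrative sketch only: create or update a Kibana space depending on
# whether it already exists.
set -euo pipefail

space_id="my-space"
payload="{\"id\":\"${space_id}\",\"name\":\"My Space\"}"

# Probe the object first; 404 means it does not exist yet.
status="$(curl -s -o /dev/null -w '%{http_code}' \
  -u "elastic:${ELASTIC_PASSWORD}" \
  "http://kibana:5601/api/spaces/space/${space_id}")"

if [ "$status" = "404" ]; then
  curl -fsS -X POST "http://kibana:5601/api/spaces/space" \
    -u "elastic:${ELASTIC_PASSWORD}" \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d "$payload"
else
  curl -fsS -X PUT "http://kibana:5601/api/spaces/space/${space_id}" \
    -u "elastic:${ELASTIC_PASSWORD}" \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d "$payload"
fi
```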
(foreseen) `setup-kibana.yml`:
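The contents of this file are not shown in the issue; a purely hypothetical structure, mirroring `setup-elasticsearch.yml`, might be:

```yaml
# Hypothetical structure only: an ordered list of Kibana request files.
kibana_setup:
  requests:
    - my-space.request
    - my-data-view.request
```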
3. wait until the whole setup has succeeded before ingesting events
I don't want any log to be ingested by my stack until everything has been properly set up: index templates, data stream configs, etc.
As my setup makes logstash the sole shipper of data into elasticsearch, I added a dependency so that logstash only starts after the complete & successful execution of the whole 'setup' container.
in `docker-compose.yml`:
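As a sketch, assuming the container running the setup scripts is the `setup` service (as in docker-elk) and a Compose version that supports the `service_completed_successfully` condition, the dependency could look like:

```yaml
services:
  logstash:
    depends_on:
      setup:
        condition: service_completed_successfully
```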
Hope it helps.