---
title: Multiple Host
layout: wiki
permalink: /multihost
---

# Multiple Host


Going from a single host to a multiple host deployment isn't difficult: install the Moloch deb/rpm on each machine and point every node at the same Elasticsearch cluster. The biggest tasks are opening up Elasticsearch to listen on more than just localhost and getting the Elasticsearch configuration right.

Note: these instructions assume you've installed from the prebuilt deb/rpm and everything is in /data/Moloch.


## Expanding Elasticsearch

If you are using the demo install and plan on having a large Moloch cluster, you should move Elasticsearch to multiple machines. We no longer provide detailed instructions, since Elastic now publishes plenty of good tutorials. If Elasticsearch runs on dedicated machines, give it up to half of physical memory, capped at about 30G so the JVM can still use compressed object pointers. You can read more about how many nodes you need in the FAQ.
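As a sketch, on a dedicated 64G machine the heap cap would be expressed through the standard JVM heap flags that Elasticsearch reads from its `jvm.options` file (the 30g value is an illustration, not a required setting):

```
# jvm.options — give Elasticsearch up to half of RAM, capped near 30G.
# Set min and max to the same value to avoid heap resizing pauses.
-Xms30g
-Xmx30g
```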

At a high level you will want to:

{: .mb-0 }

* Change your current cluster from listening on 127.0.0.1 (localhost) to 0.0.0.0
* Add more Elasticsearch nodes to the cluster
* Exclude the old demo node from shard allocation
* Wait for all the shards to move to the new nodes
* Shut down the old demo node
* Set up iptables on the Elasticsearch machines, since by default there is NO protection
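The listening and drain-the-old-node steps above can be sketched as follows. The host `eshost` and node name `demo-node-1` are placeholders, and the exact setting keys can vary between Elasticsearch versions:

```
# elasticsearch.yml on each node: listen on all interfaces instead of localhost
#   network.host: 0.0.0.0

# Exclude the old demo node from shard allocation so its shards drain
# to the new nodes ("demo-node-1" is a placeholder node name).
curl -XPUT 'http://eshost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d '{
    "transient": { "cluster.routing.allocation.exclude._name": "demo-node-1" }
  }'

# Watch cluster health; once relocating_shards reaches 0 the old node
# holds no data and can be shut down.
curl 'http://eshost:9200/_cluster/health?pretty'
```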

Note: make sure you set `gateway.recover_after_nodes` and `gateway.expected_nodes` to the total number of DATA nodes.
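For example, for a cluster with three data nodes, the corresponding lines in `elasticsearch.yml` would be (the value 3 is illustrative):

```yaml
# elasticsearch.yml — both values are the total number of DATA nodes
gateway.recover_after_nodes: 3
gateway.expected_nodes: 3
```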


## Capture/Viewer nodes

Adding multiple capture nodes is easy! Just install the prebuilt deb/rpm package on each machine. It is best to use a system like Ansible or Chef so you can use the same config.ini file everywhere and push it out to each of the sensors. As long as all the capture and viewer nodes talk to the same Elasticsearch cluster, they will show up in the same UI.
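A minimal shared config.ini might look like the sketch below. The hostname, interface names, and the `sensor2` node name are placeholders; settings in `[default]` apply to every node, and a section named after a capture node overrides them for that node only:

```ini
[default]
# All sensors point at the same Elasticsearch cluster (placeholder host)
elasticsearch=http://escluster.example.com:9200
interface=eth0
pcapDir=/data/moloch/raw

# Per-node override: section name matches the capture node's name
[sensor2]
interface=eth1
```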

If you set up a separate Elasticsearch cluster for each Moloch cluster, you can merge the results into a single view by using a multiviewer.