A "transport" for Pino is some other tool into which the output of Pino is piped. Consider the following example:
```js
var split = require('split2')
var pump = require('pump')
var through = require('through2')

var myTransport = through.obj(function (chunk, enc, cb) {
  // do whatever you want here!
  console.log(chunk)
  cb()
})

pump(process.stdin, split(JSON.parse), myTransport)
```
The above defines our "transport" as the file `my-transport-process.js`.
Now we can get the log data by running:

```
node my-app-which-logs-stuff-to-stdout.js | node my-transport-process.js
```
Using transports in the same process causes unnecessary load and slows down Node's single-threaded event loop.
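For instance, here is a minimal sketch of a transport process that only passes error-level logs through; the filename `my-error-transport.js` and the level cutoff are illustrative (50 is pino's default "error" level):

```js
// my-error-transport.js (hypothetical) -- print only error-level logs
var split = require('split2')
var pump = require('pump')
var through = require('through2')

var errorsOnly = through.obj(function (chunk, enc, cb) {
  if (chunk.level >= 50) { // 50 is pino's default "error" level
    console.log(JSON.stringify(chunk))
  }
  cb()
})

pump(process.stdin, split(JSON.parse), errorsOnly)
```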
If you write a transport, let us know and we will add a link here!
pino-couch uploads each log line as a CouchDB document.

```
$ node yourapp.js | pino-couch -U https://your-server -d mylogs
```
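To verify the uploads, you could list the most recent documents in the database; a minimal sketch using Node's built-in http module, assuming CouchDB on its default port 5984 and the `mylogs` database from the example above:

```js
// list a few of the uploaded log documents from the "mylogs" database
var http = require('http')

http.get('http://localhost:5984/mylogs/_all_docs?limit=5&include_docs=true', function (res) {
  var body = ''
  res.on('data', function (chunk) { body += chunk })
  res.on('end', function () {
    console.log(JSON.parse(body).rows)
  })
})
```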
pino-elasticsearch uploads the log lines in bulk to Elasticsearch, to be displayed in Kibana. It is extremely simple to use and set up:

```
$ node yourapp.js | pino-elasticsearch
```

This assumes Elasticsearch is running on localhost.
If you wish to connect to an external Elasticsearch instance (recommended for production):

- Check that `network.host` is defined in your `elasticsearch.yml` configuration file. See the Elasticsearch Network Settings documentation for more details.
- Launch:

```
$ node yourapp.js | pino-elasticsearch --host 192.168.1.42
```

This assumes Elasticsearch is running on 192.168.1.42.
If you wish to connect to AWS Elasticsearch:

```
$ node yourapp.js | pino-elasticsearch --host https://your-url.us-east-1.es.amazonaws.com --port 443 -c ./aws_config.json
```

Then head to your Kibana instance and create an index pattern on `pino`, the default index for pino-elasticsearch.
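To sanity-check that log lines are arriving, you can query the index directly; a minimal sketch using Node's built-in http module, assuming Elasticsearch is reachable on localhost:9200 and the default `pino` index:

```js
// fetch five documents from the "pino" index
var http = require('http')

http.get('http://localhost:9200/pino/_search?size=5', function (res) {
  var body = ''
  res.on('data', function (chunk) { body += chunk })
  res.on('end', function () {
    console.log(JSON.parse(body).hits.hits)
  })
})
```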
pino-mq will take all messages received on process.stdin and send them over a message bus using JSON serialization. This is more of a transform for pino messages: you will need some processing on the other end of the queue(s) to consume the messages and store them in a backend. It is useful for:

- moving backpressure from your application to the broker
- shifting the message-processing pressure to another component
```
node app.js | pino-mq -u "amqp://guest:guest@localhost/" -q "pino-logs"
```

or (recommended):

```
node app.js | pino-mq -c pino-mq.json
```
You can generate a sample configuration file by running:

```
pino-mq -g
```

For full documentation of the command line switches and the pino-mq.json format, read the pino-mq README.
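What runs on the other end of the queue is up to you. As a minimal sketch, assuming a RabbitMQ broker and the amqplib client (neither is mandated by pino-mq), a consumer for the "pino-logs" queue above might look like:

```js
// a hypothetical consumer for the "pino-logs" queue
var amqp = require('amqplib')

amqp.connect('amqp://guest:guest@localhost/')
  .then(function (conn) { return conn.createChannel() })
  .then(function (ch) {
    return ch.assertQueue('pino-logs').then(function () {
      return ch.consume('pino-logs', function (msg) {
        var log = JSON.parse(msg.content.toString())
        // store the log in your backend of choice here
        console.log(log.level, log.msg)
        ch.ack(msg)
      })
    })
  })
```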
pino-socket is a transport that will forward logs to an IPv4 UDP or TCP socket.
As an example, use socat to fake a listener:

```
$ socat -v udp4-recvfrom:6000,fork exec:'/bin/cat'
```
Then run an application that uses pino for logging:

```
$ node yourapp.js | pino-socket -p 6000
```
You should see the logs from your application on both consoles.
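If socat is not available, a minimal sketch of an equivalent UDP listener using Node's built-in dgram module:

```js
// listen on UDP port 6000 and echo each received log line
var dgram = require('dgram')
var server = dgram.createSocket('udp4')

server.on('message', function (msg) {
  process.stdout.write(msg)
})

server.bind(6000)
```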
You can also use pino-socket to upload logs to Logstash via:

```
$ node yourapp.js | pino-socket -a 127.0.0.1 -p 5000 -m tcp
```
Assuming your Logstash is running on the same host and configured as follows:

```
input {
  tcp {
    port => 5000
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
  }
}
```
See https://www.elastic.co/guide/en/kibana/current/setup.html to learn how to set up Kibana.
If you are a Docker fan, you can use https://github.com/deviantony/docker-elk to set up an ELK stack.
pino-syslog is a transport, really a "transform," that converts pino's logs to RFC3164 compatible log messages. pino-syslog does not forward the logs anywhere; it merely rewrites the messages to stdout. But in combination with pino-socket you can relay logs to a syslog server:

```
$ node yourapp.js | pino-syslog | pino-socket -a syslog.example.com
```
Example output for the "hello world" log:

```
<134>Apr 1 16:44:58 MacBook-Pro-3 none[94473]: {"pid":94473,"hostname":"MacBook-Pro-3","level":30,"msg":"hello world","time":1459529098958,"v":1}
```