From the root directory of sails:
$ BENCHMARK=true mocha test/benchmarks
To get a more detailed report with millisecond timings for each benchmark, run:
$ BENCHMARK=true mocha test/benchmarks -v
These tests are related to benchmarking the performance of different parts of Sails. For now, our benchmark tests should be "integration" or "acceptance" tests. By that, I mean they should measure a specific "user action" (e.g. running `sails new`, running `sails lift`, sending an HTTP request to a dummy endpoint, connecting a Socket.io client, etc.).
Feature-wide benchmarks are the "lowest-hanging fruit", if you will. We'll spend much less development time, and still get valuable benchmarks that will give us ongoing data on Sails performance. This way, we'll know where to start writing lower-level benchmarks to identify choke-points.
- Pick what you want to test.
- Whatever you choose does not have to be atomic (see the examples above). In an ideal world, we would have benchmarks for every single function in our apps, but that is not how things work today.
- Write a benchmark test that isolates that functionality (the hard part).
- Then see how many milliseconds it takes (the easy part).
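The steps above can be sketched as a bare-bones timing harness. This is only an illustration, not Sails' actual benchmark setup; `fakeSailsNew` is a hypothetical stand-in for whatever user action you pick:

```javascript
// Minimal benchmark sketch: isolate one "user action" and time it in ms.

function benchmark(label, action, done) {
  var start = process.hrtime();         // high-resolution start time
  action(function (err) {
    var diff = process.hrtime(start);   // [seconds, nanoseconds] elapsed
    var ms = diff[0] * 1e3 + diff[1] / 1e6;
    done(err, { label: label, ms: ms });
  });
}

// Dummy action standing in for a real user action (e.g. running `sails new`):
function fakeSailsNew(cb) {
  setTimeout(cb, 25);
}

benchmark('sails new (fake)', fakeSailsNew, function (err, result) {
  if (err) throw err;
  console.log(result.label + ': ' + result.ms.toFixed(1) + 'ms');
});
```

In a real suite, the `done` callback would feed the timing into the reporter instead of logging it directly.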
Advice from Felix Geisendörfer (@felixge)
- First of all, keep in mind that our problems are definitely not the same as Felix's, and we must remember to follow his own advice:

> [What]...does not work is taking performance advise [sic] from strangers...

That said, he's got some great ideas:
- Benchmark-Driven Optimization
- I also highly recommend this talk on optimization and benchmarking (slides).
Here are the most important things we need to benchmark:
- Bootstrap
  - `sails.load()` (programmatic)
  - `sails.lift()` (programmatic) and `sails lift` (CLI)
  - `sails load`
  - `sails new` and `sails generate *` (could be pulled into a generic generator suite, like adapters)
- Router
  - private Sails requests via `sails.emit('request')`
  - HTTP requests to the HTTP server
  - HTTP file uploads to the HTTP server
  - connections to the socket.io server
  - socket emissions to the socket.io server
  - socket broadcasts FROM the socket.io server (pubsub hook)
Thankfully, the ORM is already covered by the benchmarks in Waterline core and its generic adapter tests.
Each benchmark should be measured along these dimensions:
- Execution time
- Memory usage
- Low concurrency (c1k)
- High-moderate concurrency (c10k)
- Every permutation of the core hook configuration
- With different configuration options set
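A sketch of capturing the first two dimensions (execution time and memory usage) around a single action. The `measure` helper and the dummy action are hypothetical, and heap deltas are only approximate, since GC can run mid-measurement:

```javascript
// Sketch: record execution time and heap growth around a benchmarked action.

function measure(action) {
  var heapBefore = process.memoryUsage().heapUsed;
  var start = process.hrtime();
  action();
  var diff = process.hrtime(start);
  return {
    ms: diff[0] * 1e3 + diff[1] / 1e6,
    // Bytes of heap growth; approximate, since GC may run during the action.
    heapDelta: process.memoryUsage().heapUsed - heapBefore
  };
}

// Example: a dummy CPU-bound action standing in for a real Sails user action.
var result = measure(function () {
  var arr = [];
  for (var i = 0; i < 1e5; i++) { arr.push(i * 2); }
});
console.log(result.ms.toFixed(2) + 'ms, heap delta ' + result.heapDelta + ' bytes');
```

Averaging over many runs (and forcing GC with `node --expose-gc` between them) would make the memory numbers far more trustworthy.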
Some important things to consider when benchmarking Node.js / Express-based apps in general:
- Keep in mind that, unless you use the cluster module, or spin up multiple instances of the server, you're testing performance on one CPU. Most production servers, cloud or not, have more than one CPU available. This may or may not be relevant, depending on the benchmark and whether it is CPU-intensive.
- Be sure to configure `maxSockets`, since most of the requests in a benchmark test are likely to originate from the same source.
Sources:
Don't know the best route here yet, but here are some links for reference. Would love to hear your ideas!