Merge pull request grafana#130 from loadimpact/feature/readme
WIP updated README
Emily Ekberg authored Feb 27, 2017
2 parents ff78ca0 + ed57858 commit b7ddeb7
Showing 3 changed files with 46 additions and 132 deletions.
178 changes: 46 additions & 132 deletions README.md
k6
=========
![](logo.png)

**k6** is a modern load testing tool, building on [Load Impact](https://loadimpact.com/)'s years of experience; it is the next generation of Load Impact's load generator. It provides a clean, approachable scripting API, distributed and cloud execution, and orchestration via a REST API.

It features a modern codebase built on [Go](https://golang.org/) and integrates ES6, the latest iteration of JavaScript, as a scripting language. This is how load testing should look in the 21st century.

[![](demo.gif)](https://asciinema.org/a/cbohbo6pbkxjwo1k8x0gkl7py)

The simplest possible load script would be something along these lines:

```es6
// The script API is provided as ES6 modules, no global namespace pollution.
// If you prefer the older style of doing things, you may also use require().
import http from "k6/http";

// Export your test code as a 'default' function.
export default function() {
    // Make an HTTP request; this will yield a variety of metrics, e.g. 'request_duration'.
    http.get("http://test.loadimpact.com/");
}
```

To run it, simply do...

```
$ k6 run script.js
Welcome to k6 v0.4.2!
execution: local
output: -
script: script.js
↳ duration: 10s
↳ vus: 10, max: 10
web ui: http://127.0.0.1:6565/
done [==========================================================] 10s / 10s
http_req_blocked: avg=19.57µs, max=14.9ms, med=1.28µs, min=808ns, p90=2.27µs, p95=7.1µs
http_req_connecting: avg=3.25µs, max=7.57ms, med=0s, min=0s, p90=0s, p95=0s
http_req_duration: avg=5.26ms, max=31.48ms, med=4.3ms, min=2.25ms, p90=7.69ms, p95=12.84ms
http_req_looking_up: avg=9.12µs, max=7.3ms, med=0s, min=0s, p90=0s, p95=0s
http_req_receiving: avg=121.95µs, max=13.84ms, med=69.3µs, min=38.57µs, p90=113.79µs, p95=140.04µs
http_req_sending: avg=18.27µs, max=4.92ms, med=12.09µs, min=6.12µs, p90=22.15µs, p95=28µs
http_req_waiting: avg=5.1ms, max=30.39ms, med=4.18ms, min=2.17ms, p90=7.33ms, p95=12.22ms
http_reqs: 17538
runs: 17538
$
```
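
The 10-second duration and 10 VUs shown above are defaults that can also be set from the script itself, by exporting an `options` structure, just like the thresholds example in the Scripting section below does. A minimal sketch (the `vus` and `duration` option names are assumed here rather than taken from this README):

```js
import http from "k6/http";

// Run options for this script: 10 concurrent VUs for 30 seconds
// (the "vus" and "duration" option names are an assumption, not documented here).
export let options = {
    vus: 10,
    duration: "30s",
};

export default function() {
    http.get("http://test.loadimpact.com/");
}
```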

Scripting
---------

k6 bundles a number of useful APIs that allow you to control the flow of your scripts, for both load and functional test execution, e.g.:

```es6
import http from "k6/http";
import { check } from "k6";
import { Trend } from "k6/metrics";

// Define our threshold within a global options structure.
export let options = {
    thresholds: {
        request_duration: ["avg<100"],
    }
};

// Create our Trend metric.
var myTrend = new Trend("request_duration");

// Export our test code as a 'default' function.
export default function() {
    var r = http.get("https://httpbin.org");
    // Add the response time to our Trend metric.
    myTrend.add(r.timings.duration);
    // Assert for functional correctness.
    check(r, {
        "status is 200": (r) => r.status === 200,
        "body size 1234 bytes": (r) => r.body.length === 1234
    });
};
```

The above code can be run both as a load test and as a functional test, and will:

* create a Trend metric named "request_duration", referred to in the code using the variable name myTrend
* define a threshold for the Trend metric. This threshold says that the load test should fail if the average value of the Trend metric goes below 100. This means that if, at any time during the load test, the currently computed average of all sample values added to myTrend is less than 100, then the whole load test will be marked as failed.
* create a default function that will be executed repeatedly by all VUs in the load test. This function makes an HTTP request and adds the HTTP duration (`response.timings.duration`) to the Trend metric, while also asserting that the response is an HTTP 200 (`response.status`) and that the HTTP body has the expected size (`response.body.length`).

*For more information, see the [Getting Started Guide](tutorials/getting-started.md) and [Metrics Management Reference](tutorials/metrics-management.md)*

Installation
------------

### Mac

```bash
brew tap loadimpact/k6
brew install k6
```

### Docker

```bash
docker pull loadimpact/k6
```

### Other Platforms

Grab a prebuilt binary from [the Releases page](https://github.com/loadimpact/k6/releases).

There are a couple of other ways to set up k6:

### The simplest way to get started is to use our Docker image

```sh
docker pull loadimpact/k6
docker run --rm --net=host -v $(pwd)/myscript.js:/myscript.js loadimpact/k6 run /myscript.js
```

It's recommended to run k6 with `--net=host` as it slightly improves network throughput, and causes container ports to be accessible on the host without explicit exposure. Note that this means opting out of the network isolation normally provided to containers; refer to [the Docker manual](https://docs.docker.com/v1.8/articles/networking/#how-docker-networks-a-container) for more information.

### You can also build k6 from source

This requires a working Go environment (Go 1.7 or later - [set up](https://golang.org/doc/install)). You will also need git, make, Node.js and a couple of other dependencies that you can install like this: `npm i -g bower ember-cli`. When you have all the prerequisites, you can build k6:

```sh
go get -u github.com/loadimpact/k6
```

(If you opt for cloning or updating through `git pull` rather than `go get` for whatever reason, note that you also need to do `git submodule update --init` to get the JS dependencies up to date.)

### Step-by-step guide to build k6 from source

Following the steps below exactly should result in a working k6 executable. The only thing you need is [Docker](https://docker.com/), or you may try with a clean Ubuntu 14.04 installation, in which case you can skip the first docker command below. First we set up our build environment:

```sh
docker run -it ubuntu:14.04 /bin/bash
apt-get update
apt-get install -y git make nodejs-legacy npm curl
curl https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz | tar -C /usr/local -xzf -
export GOROOT=/usr/local/go
export PATH=$PATH:$GOROOT/bin
export GOPATH=$HOME/go
mkdir $GOPATH
npm install -g bower ember-cli
```

*(quick coffee break opportunity here)*

Then we're ready to build k6:

```sh
go get -d -u github.com/loadimpact/k6
cd $GOPATH/src/github.com/loadimpact/k6
make
```

*(last chance for coffee)*

You should now have a k6 binary in your current working directory.

Introduction
------------

k6 works with the concept of **virtual users** (VUs), which run scripts - they're essentially glorified, parallel `while(true)` loops. Scripts are written using JavaScript, as ES6 modules, which allows you to break larger tests into smaller pieces, or make reusable pieces as you like.

Scripts must contain, at the very least, a `default` function - this defines the entry point for your VUs, similar to the `main()` function in many other languages:

```js
export default function() {
    // do things here...
}
```

*"Why not just run my script normally, from top to bottom?"*, you might ask - the answer is: we do, but code **inside** and **outside** your `default` function can do different things.

Code inside `default` is called "VU code", and is run over and over for as long as the test is running. Code outside of it is called "init code", and is run only once per VU.

VU code can make HTTP requests, emit metrics, and generally do everything you'd expect a load test to do - with a few important exceptions: you can't load anything from your local filesystem, or import any other modules. This all has to be done from init code.

There are two reasons for this. The first is, of course, performance.

If you read a file from disk on every single script iteration, it'd be needlessly slow; even if you cache the contents of the file and any imported modules, it'd mean the *first run* of the script would be much slower than all the others. Worse yet, if you have a script that imports or loads things based on things that can only be known at runtime, you'd get slow iterations thrown in every time you load something new.

But there's another, more interesting reason. By forcing all imports and file reads into the init context, we design for distributed execution. We know which files will be needed, so we distribute only those files. We know which modules will be imported, so we can bundle them up from the get-go. And, tying into the performance point above, the other nodes don't even need writable filesystems - everything can be kept in memory.

As an added bonus, you can use this to reuse data between iterations (but only for the same VU):

```js
var counter = 0;

export default function() {
    counter++;
}
```

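To make the init/VU split concrete, here is a small sketch: the imports and the URL list are init code, evaluated once per VU, while the exported `default` function is VU code that runs on every iteration. The URLs are just placeholders; `http` and `check` are the same modules used in the Scripting example above.

```js
// Init code: imports and one-time setup, run once per VU.
import http from "k6/http";
import { check } from "k6";

// Also init code: this list is built once, then reused by every iteration of this VU.
var urls = [
    "http://test.loadimpact.com/",
    "http://test.loadimpact.com/news.php",
];

// VU code: runs over and over for as long as the test is running.
export default function() {
    for (var i = 0; i < urls.length; i++) {
        var res = http.get(urls[i]);
        check(res, {
            "status is 200": (r) => r.status === 200,
        });
    }
}
```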

Usage
-----

A VU, as noted above, is essentially a glorified `while (true)` loop that runs a script over and over and reports the stats or errors generated.

Let's say you've written a script called `myscript.js` (you can copy the one from the top of this page), and you want to run it with 100 VUs for 30 seconds. You'd do something like this:

```sh
k6 run -u 100 -d 30s myscript.js
```

The first thing you might notice is that the duration is written "30s", not "30". This is because we're using Go's duration notation, which means `90s`, `1m30s`, `24h` and `2d` are all valid durations, and much more readable than if you had to convert everything to seconds.

The second thing you might notice (or maybe not, if you're just reading this) is k6 saying this when it starts:

```
web ui: http://127.0.0.1:6565/
```

This is the address of a built-in HTTP server serving a full-fledged web UI where you can view realtime statistics and errors. The default behaviour is to shut everything down (including the web UI) once a test execution has completed, but there is a `-l`/`--linger` flag you can pass to `k6 run` that will cause it to stay running until killed (e.g. with CTRL-C). This can be useful if you want to view the results of the test in the web UI.

But the web UI is not the only thing this HTTP server does. It also exposes a REST API on the same port for controlling test execution, which you can call yourself with an HTTP client of your choice (curl, httpie, ...), or using the command line wrappers - essentially every k6 command aside from `run` wraps an API call. For example, this will scale the running test down to 50 VUs:

```sh
k6 scale 50
```

This is quite a powerful feature when combined with options like `-d 0` / `--duration 0`, which causes the test to run indefinitely until told otherwise. You're fully in control of how your test is executed!

*For more information, see the [tutorials](tutorials/getting-started.md)*

Development Setup
-----------------

```sh
go get -u github.com/loadimpact/k6
```

The only catch is that if you want the web UI available, it has to be built separately; this requires a working Node.js installation.

First, install the `ember-cli` and `bower` tools if you don't have them already:

```sh
npm install -g ember-cli bower
```

Then build the UI:

```sh
cd $GOPATH/src/github.com/loadimpact/k6/web
npm install && bower install
ember build
```
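
Once the UI assets are built, rebuilding the k6 binary itself is an ordinary Go build. A sketch of a typical development loop, assuming a standard GOPATH layout with `$GOPATH/bin` on your `PATH`:

```sh
# Reinstall k6 from your working copy (standard GOPATH layout assumed).
go install github.com/loadimpact/k6

# Smoke-test the freshly built binary against a local script.
k6 run myscript.js
```
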
Binary file added demo.gif
Binary file added logo.png
