Execution Guide
A benchmark is considered a collection of frameworks (test applications) that are executed on the client.
To run a benchmark on a particular device, visit the home page of a running Axiom instance. (see the Deployment Guide if one does not exist)
- Select the Run Benchmark button.
- Select the test applications to be executed. These options are populated by the TestProfiles.json configuration file, discussed in further detail in the Test Development guide. In this instance, we have two test applications, called React and Vue, that are available for testing. Select both of them, because we're adventurous. Click Submit.
- The next page may pass quickly - this is where the actual test execution occurs. From a layman's perspective, little seems to happen: the progress bar advances as the tests execute, and the screen may flash with text or other elements, but many tests stress the javascript engine with no UI changes. See the How It Works section below for more technical details.
- Once the tests are finished executing, the client is taken to a page showing the results.
Here, we go into detail about the process that runs between the server and the client during test execution.
This process begins when the client goes to the configuration page. This page simply serves as a URL generator: it provides the available test applications, allows the user to select them, and forwards the user to the run URL with those selections specified. For example:
https://axiom-benchmark.herokuapp.com/benchmark/run?react=on&vue=on
We see that the react and vue test applications are turned on. These parameters are saved in a urlParams object and will be passed with the benchmark_request that happens in the Client Requests Benchmark section below.
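The conversion from query string to urlParams object could be sketched as follows. This is a hypothetical illustration, not the actual code from benchmark_client.js; it assumes the standard URLSearchParams API, which is available in modern browsers and Node.js:

```javascript
// Hypothetical sketch: turn the page's query string into a plain
// urlParams object like the one benchmark_client.js sends to the server.
function buildUrlParams(queryString) {
  var urlParams = {};
  new URLSearchParams(queryString).forEach(function(value, key) {
    urlParams[key] = value;
  });
  return urlParams;
}

var urlParams = buildUrlParams('react=on&vue=on');
// urlParams is now { react: 'on', vue: 'on' }
```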
Once the client lands on the execution page, the web page loads a client side javascript file that will handle execution logic, located at /public/javascript/benchmark_client.js. If we view this file, the first thing (after function definitions) is creating a socket:
```javascript
var socket = io();
```
This socket, created using the Socket.IO javascript library, with no options specified, creates a socket communication between the client and the socket server, which is mounted at the root of the site in app.js.
When the client calls var socket = io(), the server receives the incoming connection request and spins up a new Benchmark Agent for the client in /benchmark_server/BenchmarkAgentManager.js:
```javascript
// when a new client connects, dispatch a new benchmark agent.
io.on('connection', function(socket) {
    var newAgent = new BenchmarkAgent(socket, completion);
}.bind(this));
```
This Benchmark Agent will handle test execution for that particular client at the end of that new socket for the duration that it is connected. We now wait for a benchmark_request from the client.
Once the client creates the connection to the server, we're almost ready to send the urlParams to the server. We first attach the client's userAgent string, so that the server can identify the client. Optionally, other information could be injected here as well. This urlParams object is then passed as parameters to a benchmark_request message:
```javascript
socket.on('connect', function() {
    log('connected to server.');
    urlParams.userAgent = navigator.userAgent;
    socket.emit('benchmark_request', urlParams);
});
```
When the server receives the benchmark_request, several important steps happen. We validate the request, making sure that the requested test applications are in fact available. The user agent string is interpreted using the user agent API and custom user agent rules. With this information, we create the benchmark in the database. This is done early on as opposed to the end of the tests to provide flexibility to future versions of Axiom where incomplete benchmarks may be useful information.
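The validation step could be reduced to a pure function like the following. This is a hypothetical sketch, not the server's actual code; it assumes the list of available applications comes from the server's configuration (e.g. TestProfiles.json):

```javascript
// Hypothetical validation sketch: keep only requested test applications
// that are actually available on the server; non-application entries
// (like userAgent) and unknown names are dropped.
function validateRequest(urlParams, availableApps) {
  return Object.keys(urlParams).filter(function(key) {
    return urlParams[key] === 'on' && availableApps.indexOf(key) !== -1;
  });
}

var requested = validateRequest(
  { react: 'on', vue: 'on', userAgent: 'Mozilla/5.0' },
  ['react', 'vue']
);
// requested is ['react', 'vue']
```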
This step completes when the server calls loadNextFramework(), which starts with the first framework.
A framework_load message serves multiple purposes: it tells the client to clean up the previous framework, and load the resources for the incoming framework.
Server side, we send a framework_load to the client. Parameters include the URLs for the javascript and HTML files necessary to run the framework. More information can be found in the Test Development Guide and in the source code.
Client side, when a **framework_load** is received, the resources loaded from the previous framework are discarded if present, and the resources in the parameters are fetched and loaded into the browser. The UUT object is the primary result of these operations - see the Test Development Guide for more information about the UUT object.
```javascript
//loads a new framework script.
//the framework script should generate a UUT to be unloaded later.
socket.on('framework_load', function(params) {
    //reset bench
    resetUUT();
    $('#testbench').empty();
    //load next framework
});
```
When this process is complete, we reply to the server with a framework_ready. When the server gets framework_ready, we can begin the testing process with a test_request.
A **test_request** is sent by the server, requesting, as the name implies, that the client run a particular test. The client receives the **test_request** and passes the received parameters to UUT.runTest:
```javascript
//executes the requested test on the loaded framework
socket.on('test_request', function(params) {
    var callback = function(result) {
        socket.emit('test_result', result);
    };
    window.UUT.runTest(params, callback);
});
```
This function is defined for all frameworks automatically by Axiom, and triggers the function whose name is provided by the **function** parameter, defined on the UUT object by the test application:
(simplified from source code for brevity)
```javascript
window.UUT.runTest = function (params, callback) {
    window.UUT[params.function](callback);
};
```
This is the main point of contact between the benchmark client and the test application. The appropriate function, defined in UUT, is triggered by a message that originated from the server. The callback, defined earlier in this subsection, is passed to the test function to call when it has completed its test routine.
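The test application's side of this contract could look like the following. This is a hypothetical example (the function name renderList and the result fields are invented for illustration - see the Test Development Guide for the real contract); it uses a plain object instead of window.UUT so it runs standalone:

```javascript
// Hypothetical test application: it defines named test functions on the
// UUT object, each of which calls the provided callback with a result.
var UUT = {};

UUT.renderList = function(callback) {
  var start = Date.now();
  // ...exercise the framework under test here...
  callback({ test: 'renderList', duration: Date.now() - start });
};

// Axiom's runTest then dispatches by function name:
UUT.runTest = function(params, callback) {
  UUT[params.function](callback);
};

UUT.runTest({ function: 'renderList' }, function(result) {
  // result.test is 'renderList'
});
```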
Once the test is completed and the test application has passed the results to the callback, the client sends a **test_result** message with the results attached as parameters. The server ingests these results for that test in the current framework, and the single test is complete.
When the server receives a test result, one of three things will happen:
- If another test in this framework is scheduled, we send another test_request. See the Test Request / Test Result section for more information.
- If the result was for the last test in the framework, and another framework is scheduled, we send another framework_load command for the next framework. See the Framework Load / Framework Ready section for more information.
- If the result was for the last test in the framework, and no more frameworks are scheduled, we move to complete the benchmark. See the next section for more information.
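The three-way decision above can be reduced to a pure function. This is a hypothetical sketch of the scheduling logic, not the server's actual implementation:

```javascript
// Hypothetical sketch of the server's decision on each test_result:
// send another test, load the next framework, or finish the benchmark.
function nextAction(testIndex, testCount, frameworkIndex, frameworkCount) {
  if (testIndex + 1 < testCount) {
    return 'test_request';     // more tests remain in this framework
  }
  if (frameworkIndex + 1 < frameworkCount) {
    return 'framework_load';   // another framework is scheduled
  }
  return 'benchmark_done';     // everything is complete
}
```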
When all tests in all scheduled frameworks are complete, the server moves to complete the benchmark. This is accomplished with a benchmark_done message:
```javascript
done() {
    var data = {
        'id' : this.id
    };
    this.socket.emit('benchmark_done', data);
}
```
We pass on the result id to the client, so that the client can automatically redirect to the results page for its particular benchmark run:
(simplified from source code for brevity)
```javascript
socket.on('benchmark_done', function(params) {
    var url = window.location.origin + '/report?benchmark=' + params.id;
    window.location.href = url;
});
```
A few other supporting messages are exchanged during the benchmark process.
The client can receive periodic progress updates in the form of benchmark_progress messages, to keep the user informed and prevent them from exiting because of a perceived hang:
(simplified from source code for brevity)
```javascript
socket.on('benchmark_progress', function(params) {
    $('#progressBar').val(params.progress);
});
```
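Server side, the progress value might be computed as the fraction of completed tests across all scheduled frameworks. This is a hypothetical sketch; the actual calculation lives in the server source:

```javascript
// Hypothetical sketch: progress as a percentage of tests completed,
// suitable for sending as { progress: ... } in a benchmark_progress message.
function progressPercent(completedTests, totalTests) {
  if (totalTests === 0) return 0;
  return Math.round((completedTests / totalTests) * 100);
}

// e.g. this.socket.emit('benchmark_progress', { progress: progressPercent(3, 12) });
```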