This repo is used for testing the throughput performance of two QUIC implementations, QUINN and MSQUIC, against TCP. The tests can be used not only to measure the maximal achievable throughput of each protocol/implementation, but also to study the effect of different disturbances. Each server and client pair works in a simple request-response scenario, where the client's download throughput is measured from the time needed to read the server's entire response.
The size of the server response matters: it should be large, both to measure resource usage (which is only updated every second) and to reach the maximally achievable throughput. When using disturbances, there are different degrees which can be changed by the user (see "Get Started") and which are applied on the server's interface during the simulation. The results are stored in output/log files which can be used to measure the effectiveness of each protocol/implementation. Furthermore, the results can be displayed graphically in plots (see "Get Started").
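The throughput computation itself is simple: bytes read divided by elapsed time. A minimal sketch with hypothetical numbers (not taken from the repo):

```shell
#!/bin/sh
# Hypothetical example: a 1.25 GB server response read in 10 seconds.
BYTES=1250000000
ELAPSED=10

# throughput [Mbit/s] = bytes * 8 / seconds / 1e6
awk -v b="$BYTES" -v s="$ELAPSED" \
    'BEGIN { printf "%.2f Mbit/s\n", b * 8 / s / 1000000 }'
# prints: 1000.00 Mbit/s
```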
The simulation relies on several tools; here is a list of the tools your Linux system needs to run these tests.
- sshpass - Used for SSH login with a password to forward commands to server and client.
- screen - Used to run the server and the monitoring in the background.
- lsof - Needed to find the PIDs of the servers.
- jq - Used for storing the results in .json files.
- pidstat - Used for monitoring per-process CPU and memory usage.
- python3 - Needed for processing, computing and plotting the results.
- scp - Copies the certificates for authorization.
- tc - Simulates delay, packet loss and reordering.
- cargo - Needed to build and run the Rust implementations.
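Before starting, you can quickly verify that all of these tools are on your PATH. A small helper sketch (not part of the repo):

```shell
#!/bin/sh
# Print the availability of every tool the simulation needs.
for tool in sshpass screen lsof jq pidstat python3 scp tc cargo; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "ok:      $tool"
    else
        echo "missing: $tool"
    fi
done
```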
To use the MSQUIC API in the MSQUIC server and client you need to build MSQUIC on each device first; the build instructions are here: https://github.com/microsoft/msquic/blob/main/docs/BUILD.md
To ensure that your system can locate the necessary msquic header files and libraries when compiling and linking the msquic server and client, you need to configure a couple of environment variables.
nano ~/.bashrc
Add the following lines to the file. C_INCLUDE_PATH must point to the directory containing the msquic header files (look for the file msquic.h); it tells the compiler where to find the necessary headers when compiling your code. LIBRARY_PATH and LD_LIBRARY_PATH must point to the directory containing the msquic libraries (look for the file libmsquic.so); they direct the linker to the libraries during the build and the loader to them at run time. Remember the LD_LIBRARY_PATH, as it will be needed in simulation.sh later on as well.
export C_INCLUDE_PATH=...:$C_INCLUDE_PATH
export LIBRARY_PATH=...:$LIBRARY_PATH
export LD_LIBRARY_PATH=...:$LD_LIBRARY_PATH
Apply the changes to your current shell (new shells will pick them up automatically):
source ~/.bashrc
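To confirm the variables point at the right directories, a small check like the following can help. The helper name and usage are illustrative, not part of the repo:

```shell
#!/bin/sh
# Illustrative helper: search the colon-separated C_INCLUDE_PATH for a header.
header_in_path() {
    header=$1
    old_ifs=$IFS; IFS=':'
    for dir in $C_INCLUDE_PATH; do
        if [ -n "$dir" ] && [ -f "$dir/$header" ]; then
            IFS=$old_ifs
            echo "found: $dir/$header"
            return 0
        fi
    done
    IFS=$old_ifs
    echo "not found: $header"
    return 1
}

# On a correctly configured system this prints the msquic.h location.
header_in_path msquic.h || echo "adjust C_INCLUDE_PATH in ~/.bashrc"
```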
To use the msquic server you need a key and certificate; the file names matter, so use the command below to create a private key and a corresponding self-signed certificate inside thisRepo/msquic.
openssl req -nodes -new -x509 -keyout serverKey.pem -out serverCert.pem
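The command above prompts interactively for the certificate fields. If you prefer a non-interactive run, the same key and certificate names can be generated with an explicit subject and validity; the subject value below is only a placeholder:

```shell
# Non-interactive variant: same file names, placeholder subject.
openssl req -nodes -new -x509 -days 365 \
    -subj "/CN=msquic-test-server" \
    -keyout serverKey.pem -out serverCert.pem

# Quick sanity check of the generated certificate.
openssl x509 -in serverCert.pem -noout -subject
```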
The bash script build.sh performs all the steps needed to build the server and client for each implementation. Building the MSQUIC server and client requires a finished MSQUIC build and the updated environment variables; Rust and gcc are also necessary. First you have to make the file executable:
chmod +x build.sh
Then the build is done automatically when executing the shell script:
./build.sh
In the first lines of the main script (simulation.sh) you have to specify some data about the server and client, e.g. IPs, passwords, ports, paths, ... This is necessary to send data between server and client and to connect to the remote devices with sshpass. This step only needs to be repeated when the data changes.
In the root folder you will find the shell script simulation.sh; first make it executable:
chmod +x simulation.sh
Then you can use the flags listed below to simulate the different scenarios. If you want to change the disturbance sizes or other specific values, look at the functions starting with run_...; there the ranges of the disturbances can easily be changed. By default the iteration count for a scenario is ten; if you want to change it, you can do so in the simulating_scenario() function.
- -u Run undisturbed scenario
- -d Run scenario with delay
- -r Run scenario with packet reordering
- -l Run scenario with packet loss
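The flag dispatch can be pictured roughly like this. This is a hedged sketch with stubbed scenario functions; the real simulation.sh may differ:

```shell
#!/bin/sh
# Sketch of the flag dispatch; the run_* bodies are stubs standing in for
# the real scenario functions in simulation.sh.
run_undisturbed() { echo "running undisturbed scenario"; }
run_delay()       { echo "running delay scenario"; }
run_reordering()  { echo "running reordering scenario"; }
run_loss()        { echo "running packet-loss scenario"; }

while getopts "udrl" opt; do
    case $opt in
        u) run_undisturbed ;;
        d) run_delay ;;
        r) run_reordering ;;
        l) run_loss ;;
        *) echo "usage: $0 [-u] [-d] [-r] [-l]" >&2; exit 1 ;;
    esac
done
```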
After the tests you will have multiple JSON files containing the result data. For each protocol/implementation you get the mean and standard deviation of the throughput and of the CPU/RAM usage. The JSON files without the ending _result are the logs, which store every iteration for each protocol, i.e. ten iterations for each of the three protocols. The files with the ending _result contain the means over all iterations of a scenario. They can be used to create the plots, which are produced by the python files named plot... in the root directory. These scripts take the result files as input and store the plots as .png in the root directory as well. For example, the plot which displays the packet loss percentage on the x-axis and the throughput of each protocol on the y-axis can be created like this:
python3 plot_loss.py packetloss_0_result.json ... packetloss_5_result.json
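Since jq is already a dependency, the log files can also be queried directly. Assuming a log file is a JSON array of iterations that each carry a throughput field (the shape and field name here are assumptions, check your own logs), the mean could be computed like this:

```shell
# Hypothetical log shape: [{"throughput": ...}, ...]. Field name is assumed.
printf '[{"throughput": 90}, {"throughput": 110}]' > example_log.json
jq '[.[] | .throughput] | add / length' example_log.json
rm example_log.json
```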
Examples of the result files and plots can be found in the folder sample_results.