# IREE Benchmark Suites Tool

This directory contains the tools to run the IREE benchmark suites and generate reports. See the IREE documentation for more information about the benchmark suites.

## Benchmark Tools

The `run_benchmarks_on_android.py` and `run_benchmarks_on_linux.py` scripts run the benchmark suites on Android devices (via `adb`) and on Linux machines, respectively.

The available arguments can be shown with `--help`. Some common usages are listed below. Here we assume the following (a concrete setup is sketched after this list):

- `IREE_BUILD_DIR="/path/to/IREE build root dir"`: this should contain the `benchmark_suites` directory built with the target `iree-benchmark-suites`.
- `IREE_NORMAL_TOOL_DIR="/path/to/IREE tool dir"`: usually `"$IREE_BUILD_DIR/tools"`.
- `IREE_TRACED_TOOL_DIR="/path/to/IREE tool dir built with IREE_ENABLE_RUNTIME_TRACING=ON"`: see the IREE documentation for details about `IREE_ENABLE_RUNTIME_TRACING`.
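As a concrete setup, the variables might look like the sketch below; the paths are placeholders, so substitute your own build locations:

```sh
# Hypothetical paths: substitute your own build locations.
export IREE_BUILD_DIR="$HOME/iree-build"                    # contains benchmark_suites/
export IREE_NORMAL_TOOL_DIR="$IREE_BUILD_DIR/tools"
export IREE_TRACED_TOOL_DIR="$HOME/iree-build-traced/tools" # built with IREE_ENABLE_RUNTIME_TRACING=ON

# List all available arguments:
./run_benchmarks_on_linux.py --help
```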

### Run all benchmarks

```sh
./run_benchmarks_on_linux.py \
  --normal_benchmark_tool_dir=$IREE_NORMAL_TOOL_DIR \
  --output=results.json $IREE_BUILD_DIR
```

### Run all benchmarks and capture Tracy traces

```sh
./run_benchmarks_on_linux.py \
  --normal_benchmark_tool_dir=$IREE_NORMAL_TOOL_DIR \
  --traced_benchmark_tool_dir=$IREE_TRACED_TOOL_DIR \
  --trace_capture_tool=/path/to/iree-tracy-capture \
  --capture_tarball=captured_tracy_files.tar.gz \
  --output=results.json $IREE_BUILD_DIR
```

### Run selected benchmarks with filters

```sh
./run_benchmarks_on_linux.py \
  --normal_benchmark_tool_dir=$IREE_NORMAL_TOOL_DIR \
  --model_name_regex="MobileBertSquad" \
  --driver_filter_regex="dylib" \
  --mode_regex="4-threads" \
  --output=results.json $IREE_BUILD_DIR
```
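The Android runner is invoked the same way. Note that this sketch assumes `run_benchmarks_on_android.py` accepts the same core flags as the Linux runner (verify with `--help`) and that the target device is reachable over `adb`:

```sh
# Assumption: run_benchmarks_on_android.py takes the same core flags as the
# Linux runner; confirm with --help. The device must be visible to adb.
./run_benchmarks_on_android.py \
  --normal_benchmark_tool_dir=$IREE_NORMAL_TOOL_DIR \
  --output=android_results.json $IREE_BUILD_DIR
```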

## Generating Benchmark Reports

The tools in this directory are mainly designed for benchmark automation pipelines. The `post_benchmarks_as_pr_comment.py` script posts benchmark reports as pull request comments, and `upload_benchmarks_to_dashboard.py` uploads results to the benchmark dashboard.
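Both scripts should document their arguments with `--help`; this is an assumption consistent with the runner scripts above, so check each script for its exact flags:

```sh
# Assumption: these scripts expose --help like the runner scripts above.
./post_benchmarks_as_pr_comment.py --help
./upload_benchmarks_to_dashboard.py --help
```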

If you want to generate a comparison report locally, use the `diff_local_benchmarks.py` script to compare two result JSON files and generate a report. For example:

```sh
./diff_local_benchmarks.py --base before.json --target after.json > report.md
```
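To sanity-check a results file before diffing, Python's standard-library `json.tool` module can pretty-print it; this makes no assumption about the result schema beyond it being valid JSON:

```sh
# Pretty-print the first lines of a results file; json.tool ships with
# Python, so no extra dependencies are needed.
python3 -m json.tool results.json | head -n 20
```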