Run markdownlint on some files under docs/developers/*. (iree-org#15168)

Progress on iree-org#15116 (prep for
moving some/all of these files into `docs/website/`)

Notable formatting changes applied by following the lint rules:

* No skipping header levels, e.g. `## h2 <br> #### h4`, with no `### h3`
* No loose links, e.g. `www.google.com` --> `<www.google.com>`
* Wrap lines at 80 characters
* Set language on all code blocks (or explicitly say just "text")
* Indentation, spacing, and line break standardization
* Drop leading `$` characters from "shell" codeblocks (ehhhh)
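
For instance, that last rule rewrites a shell code block so the copy-pasteable
command no longer carries a prompt character (the command itself is just an
illustration):

```text
Before:
    ```shell
    $ iree-compile input.mlir -o out.vmfb
    ```

After:
    ```shell
    iree-compile input.mlir -o out.vmfb
    ```
```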
ScottTodd authored Oct 19, 2023
1 parent b13037b commit c232aeb
Showing 28 changed files with 298 additions and 226 deletions.
10 changes: 9 additions & 1 deletion build_tools/scripts/run_markdownlint.sh
@@ -15,13 +15,21 @@ declare -a included_files_patterns=(
# All .md files (disabled while we decide how rigorously to apply lint checks)
# "./**/*.md"

# Just .md files for the user-facing website.
# .md files for the website.
"./docs/website/**/*.md"

# Some developer documentation .md files that we may move to the website.
"./docs/developers/debugging/*.md"
"./docs/developers/developing_iree/*.md"
"./docs/developers/get_started/*.md"
)

declare -a excluded_files_patterns=(
"**/third_party/**"
"**/node_modules/**"

# Exclude generated files.
"./docs/website/docs/reference/mlir-dialects/!(index).md"
)

# ${excluded_files_patterns} is expanded into
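
The `!(index).md` entry above relies on bash extended globbing (extglob). A
small, hypothetical demo of how such a pattern excludes a single file (the
file names here are invented for illustration):

```shell
#!/bin/bash
# Demo of an extglob pattern like "!(index).md": it matches every .md file
# in a directory except index.md.
set -euo pipefail
demo_dir=$(mktemp -d)
touch "${demo_dir}/index.md" "${demo_dir}/getting_started.md" "${demo_dir}/debugging.md"
# "bash -O extglob" enables extended globbing before the pattern is parsed.
matches=$(bash -O extglob -c 'printf "%s\n" "$1"/!(index).md' _ "${demo_dir}")
echo "${matches}"
```

Each matching path is printed on its own line; `index.md` never appears.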
5 changes: 5 additions & 0 deletions docs/.markdownlint.yml
@@ -90,3 +90,8 @@ link-fragments: false
# https://github.com/DavidAnson/markdownlint/issues/40 may add one.
#
# TLDR: Add your links inline, as in [text](www.example.com).

# Blockquotes are used for alerts in some flavors of markdown.
# Allow weird formatting inside them.
# https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#alerts
no-blanks-blockquote: false
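
For reference, a GitHub-style alert uses blockquote formatting like the
following, which this exception is meant to allow:

```markdown
> [!NOTE]
> Useful information that users should know,
> even when skimming content.
```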
17 changes: 9 additions & 8 deletions docs/developers/debugging/compile_time_regressions.md
@@ -44,7 +44,8 @@ Building the compiler from source and using
specific commits in IREE, though it typically won't let you step through changes
in submodules (e.g. MLIR updates in `third_party/llvm-project/`).

__Tip__: [Configure ccache](../developing_iree/ccache.md) if you'll be rebuilding the compiler while bisecting
**Tip**: [Configure ccache](../developing_iree/ccache.md) if you'll be
rebuilding the compiler while bisecting

A manual workflow with `git bisect` looks like this:
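
To make the shape of that loop concrete, here is a hypothetical, self-contained
toy version (a marker file stands in for a real build-and-time script; nothing
below is IREE-specific):

```shell
#!/bin/bash
# Toy bisect: build a small repository where a "regression" lands partway
# through history, then let git find the first bad commit. In a real session
# the "git bisect run" command would rebuild the compiler and time it.
set -euo pipefail
repo=$(mktemp -d)
cd "${repo}"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
for i in 1 2 3 4 5; do
  # The "regression" lands at commit 4.
  if [ "${i}" -ge 4 ]; then echo slow > marker; else echo fast > marker; fi
  git add marker
  git commit -q --allow-empty -m "commit ${i}"
done
# Bisect between the oldest (good) and newest (bad) commits.
git bisect start HEAD HEAD~4 > /dev/null
# Exit 0 marks a commit good, nonzero marks it bad.
git bisect run sh -c 'grep -q fast marker' | tee bisect_log.txt
git bisect reset > /dev/null
```

The log ends by naming the first bad commit, which here is "commit 4".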

@@ -82,7 +83,7 @@ git bisect run run_bisect.sh

Other sample scripts:

<details><summary>Compile executable sources individually with a timeout:</summary>
#### Compile executable sources individually with a timeout

```bash
#!/bin/bash
@@ -153,8 +154,6 @@ for SOURCE in "${SOURCES[@]}"; do
done
```
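
For reference, an abbreviated, runnable sketch of the same pattern (the
compile command and source names are placeholders, not real IREE invocations):

```shell
#!/bin/bash
# Compile each source with a per-file timeout and collect the ones that
# fail or exceed the limit. COMPILE_CMD and the .mlir names are
# placeholders; "true" stands in so the sketch runs as-is.
COMPILE_CMD=${COMPILE_CMD:-true}
TIMEOUT_SECONDS=10
SOURCES=(dispatch_0.mlir dispatch_1.mlir)
TIMED_OUT=()
for SOURCE in "${SOURCES[@]}"; do
  echo "Compiling ${SOURCE}..."
  # COMPILE_CMD is intentionally unquoted so it may carry its own flags.
  if ! timeout "${TIMEOUT_SECONDS}" ${COMPILE_CMD} "${SOURCE}"; then
    TIMED_OUT+=("${SOURCE}")
  fi
done
echo "Sources over the limit: ${TIMED_OUT[*]:-none}"
```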

</details>

## Profiling and tracing

If you want to understand _why_ the compiler is fast or slow, or if you want to
@@ -166,6 +165,7 @@ options.
The `-mlir-timing` flag enables
[Pass Timing](https://mlir.llvm.org/docs/PassManagement/#pass-timing)
instrumentation. Once the compiler finishes running, this prints a report like

```shell
===-------------------------------------------------------------------------===
... Pass execution timing report ...
@@ -194,10 +194,10 @@ are outlier passes that take significantly longer to run than others.

Here are some previous analyses for inspiration:

* https://github.com/openxla/iree/issues/12033
* https://github.com/openxla/iree/issues/12035
* https://github.com/openxla/iree/issues/12183
* https://github.com/openxla/iree/issues/13189
* <https://github.com/openxla/iree/issues/12033>
* <https://github.com/openxla/iree/issues/12035>
* <https://github.com/openxla/iree/issues/12183>
* <https://github.com/openxla/iree/issues/13189>

Example slow trace:

@@ -218,6 +218,7 @@ point. For compile time regressions, it helps to snapshot the IR at a few key
phases and look for differences between fast compilation and slow compilation.

Here is one useful flag combination:

```shell
--mlir-disable-threading \
--mlir-elide-elementsattrs-if-larger=8 \
@@ -2,7 +2,7 @@ This doc describes tips about triaging correctness issues. Feel free to reach out
to @hanhanW or ask questions on Discord if you need help or tips on triaging
correctness issues.

# Decouple the reproduce from integrations.
# Decouple the reproduce from integrations

## TF integration tests

Expand All @@ -17,7 +17,7 @@ for tracking this.
Follow [README](https://github.com/iree-org/iree-samples#readme) to run the model.
The MLIR files will be generated. You'll find the saved file in the log. E.g.,

```
``` shell
[ RUN ] MobilenetV2Int8Test.test_compile_tflite
I0401 17:27:04.084272 140182373025024 test_util.py:119] Setting up for IREE
I0401 17:27:04.085064 140182373025024 binaries.py:218] Invoke IREE Pipeline:
Expand Down
31 changes: 24 additions & 7 deletions docs/developers/debugging/lldb_on_android.md
@@ -1,36 +1,50 @@
This doc shows how to use LLDB to debug native binaries on Android. For a more complete explanation, see the [official LLDB documentation on remote debugging](https://lldb.llvm.org/use/remote.html).
This doc shows how to use LLDB to debug native binaries on Android. For a more
complete explanation, see the
[official LLDB documentation on remote debugging](https://lldb.llvm.org/use/remote.html).

# Debugging with LLDB on Android

## Prerequisites

We assume the following setup:
1. [Android NDK is installed](https://developer.android.com/ndk/downloads) and the `ANDROID_NDK` environment variable is set to the installation path.
1. Your Android device connected and configured for [`adb`](https://developer.android.com/studio/command-line/adb).
1. The Android binary of interest is already compiled and the command to run it (in `adb shell`) is `<your-binary> [program args...]`. This does *not* have to be a proper Android app with a manifest, etc.

1. [Android NDK is installed](https://developer.android.com/ndk/downloads) and
the `ANDROID_NDK` environment variable is set to the installation path.
1. Your Android device connected and configured for
[`adb`](https://developer.android.com/studio/command-line/adb).
1. The Android binary of interest is already compiled and the command to run it
(in `adb shell`) is `<your-binary> [program args...]`. This does *not* have
to be a proper Android app with a manifest, etc.

## Running Manually

1. Push the toolchain files, including `lldb-server`, to your device:

```shell
adb shell "mkdir -p /data/local/tmp/tools"
adb push "$ANDROID_NDK"/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.6/lib/linux/aarch64/* /data/local/tmp/tools
```

You may need to adjust the clang toolchain version to match the one in your NDK. You can find it with `find "$ANDROID_NDK/toolchains/llvm/prebuilt" -name lldb-server`.
You may need to adjust the clang toolchain version to match the one in your
NDK. You can find it with
`find "$ANDROID_NDK/toolchains/llvm/prebuilt" -name lldb-server`.

1. Set up port forwarding. We are going to use port 5039 but you are free to
pick a different one:

1. Set up port forwarding. We are going to use port 5039 but you are free to pick a different one:
```shell
adb forward tcp:5039 tcp:5039
```

1. Start an `lldb-server` in a new interactive adb shell:

```shell
adb shell
/data/local/tmp/tools/lldb-server platform --listen '*:5039' --server
```

1. Launch `lldb`, connect to the server and run the binary:

```shell
lldb -o 'platform select remote-android' \
-o 'platform connect connect://:5039' \
@@ -41,4 +55,7 @@ We assume the following setup:

You can either use the system `lldb` or a prebuilt under `"$ANDROID_NDK"/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.6/lib/linux/<your-host-arch>`.

Explanation: each `-o` (short for `--one-shot`) tells lldb to execute a command on startup. You can run those manually in the lldb shell, if you prefer. Then, we tell lldb which working directory to use, where to find the executable, and what command line arguments to use.
Explanation: each `-o` (short for `--one-shot`) tells lldb to execute a
command on startup. You can run those manually in the lldb shell, if you
prefer. Then, we tell lldb which working directory to use, where to find the
executable, and what command line arguments to use.
1 change: 0 additions & 1 deletion docs/developers/debugging/releases.md
@@ -55,7 +55,6 @@ ones:
export PATH=/opt/python/cp39-cp39/bin:$PATH
```


Build core installation:

```shell
2 changes: 1 addition & 1 deletion docs/developers/debugging/tf_integrations_test_repro.md
@@ -1,4 +1,4 @@
# Debugging failures in TF/TFLite integration tests.
# Debugging failures in TF/TFLite integration tests

These are steps to reproduce/address failures in TF/TFLite integration tests.
These instructions are most stable on Linux, though they may work with a few
62 changes: 31 additions & 31 deletions docs/developers/developing_iree/benchmark_suites.md
@@ -4,10 +4,10 @@ IREE Benchmarks Suites is a collection of benchmarks for IREE developers to
track performance improvements/regressions during development.

The benchmark suites are run for each commit on the main branch and the results
are uploaded to https://perf.iree.dev for regression analysis (for the current
are uploaded to <https://perf.iree.dev> for regression analysis (for the current
supported targets). On pull requests, users can add labels `benchmarks:*` to
trigger the benchmark runs. The results will be compared with
https://perf.iree.dev and post in the comments.
<https://perf.iree.dev> and post in the comments.

Information about the definitions of the benchmark suites can be found in the
[IREE Benchmark Suites Configurations](/build_tools/python/benchmark_suites/iree/README.md).
@@ -31,19 +31,19 @@ The available presets are:

Execution benchmarks:

- `android-cpu`: benchmarks for mobile CPUs
- `android-gpu`: benchmarks for mobile GPUs
- `cuda`: benchmarks for CUDA with a small model set
- `cuda-large`: benchmarks for CUDA with a large model set
- `vulkan-nvidia`: benchmarks for Vulkan on NVIDIA graphics cards
- `x86_64`: benchmarks for x86_64 CPUs with a small model set
- `x86_64-large`: benchmarks for x86_64 with a large model set
- `android-cpu`: benchmarks for mobile CPUs
- `android-gpu`: benchmarks for mobile GPUs
- `cuda`: benchmarks for CUDA with a small model set
- `cuda-large`: benchmarks for CUDA with a large model set
- `vulkan-nvidia`: benchmarks for Vulkan on NVIDIA graphics cards
- `x86_64`: benchmarks for x86_64 CPUs with a small model set
- `x86_64-large`: benchmarks for x86_64 with a large model set

Compilation benchmarks (to collect compilation statistics, such as module
sizes):

- `comp-stats`: compilation benchmarks with a small model set
- `comp-stats-large`: compilation benchmark with a large model set
- `comp-stats`: compilation benchmarks with a small model set
- `comp-stats-large`: compilation benchmark with a large model set

Note that `*-large` presets will download and build a few hundred GBs of
artifacts.
@@ -121,15 +121,15 @@ build_tools/benchmarks/run_benchmarks_on_linux.py \

Note that:

- `<target_device_name>` selects a benchmark group targets a specific device:
- Common options:
- `c2-standard-16` for x86_64 CPU benchmarks.
- `a2-highgpu-1g` for NVIDIA GPU benchmarks.
- All device names are defined under
- `<target_device_name>` selects a benchmark group targets a specific device:
- Common options:
- `c2-standard-16` for x86_64 CPU benchmarks.
- `a2-highgpu-1g` for NVIDIA GPU benchmarks.
- All device names are defined under
[build_tools/python/e2e_test_framework/device_specs](/build_tools/python/e2e_test_framework/device_specs).
- To run x86_64 benchmarks, right now `--cpu_uarch` needs to be provided and
- To run x86_64 benchmarks, right now `--cpu_uarch` needs to be provided and
only `CascadeLake` is available currently.
- To build traced benchmark tools, see
- To build traced benchmark tools, see
[Profiling with Tracy](/docs/developers/developing_iree/profiling_with_tracy.md).

Filters can be used to select the benchmarks:
@@ -198,16 +198,16 @@ build_tools/benchmarks/diff_local_benchmarks.py \
Each benchmark has its benchmark ID in the benchmark suites, you will see a
benchmark ID at:

- In the serie's URL of https://perf.iree.dev
- Execution benchmark: `https://perf.iree.dev/serie?IREE?<benchmark_id>`
- Compilation benchmark:
- In the serie's URL of <https://perf.iree.dev>
- Execution benchmark: `https://perf.iree.dev/serie?IREE?<benchmark_id>`
- Compilation benchmark:
`https://perf.iree.dev/serie?IREE?<benchmark_id>-<metric_id>`
- In `benchmark_results.json` and `compile_stats_results.json`
- Execution benchmark result has a field `run_config_id`
- Compilation benchmark result has a field `gen_config_id`
- In PR benchmark summary or the markdown generated by
- In `benchmark_results.json` and `compile_stats_results.json`
- Execution benchmark result has a field `run_config_id`
- Compilation benchmark result has a field `gen_config_id`
- In PR benchmark summary or the markdown generated by
`diff_local_benchmarks.py`, each benchmark has the link to its
https://perf.iree.dev URL, which includes the benchmark ID.
<https://perf.iree.dev> URL, which includes the benchmark ID.

If you don't have artifacts locally, see
[Fetching Benchmark Artifacts from CI](#fetching-benchmark-artifacts-from-ci) to
@@ -247,14 +247,14 @@ build_tools/benchmarks/benchmark_helper.py dump-cmds \

## Fetching Benchmark Artifacts from CI

#### 1. Find the corresponding CI workflow run
### 1. Find the corresponding CI workflow run

On the commit of the benchmark run, you can find the list of the workflow jobs
by clicking the green check mark. Click any job that starts with `CI /`:

![image](https://user-images.githubusercontent.com/2104162/234647960-3df9d0f0-a34a-47ad-bda8-095ae44de865.png)

#### 2. Get URLs of GCS artifacts
### 2. Get URLs of GCS artifacts

On the CI page, click `Summary` on the top-left to open the summary page. Scroll
down and the links to artifacts are listed in a section titled "Artifact Links".
@@ -263,11 +263,11 @@ steps:

![image](https://user-images.githubusercontent.com/2104162/234716421-3a69b6ad-211d-4e39-8f9e-a4f22f91739d.png)

#### 3. Fetch the benchmark artifacts
### 3. Fetch the benchmark artifacts

To fetch files from the GCS URL, the gcloud CLI tool
(https://cloud.google.com/sdk/docs/install) can list the directory contents and
download files (see https://cloud.google.com/sdk/gcloud/reference/storage for
(<https://cloud.google.com/sdk/docs/install>) can list the directory contents and
download files (see <https://cloud.google.com/sdk/gcloud/reference/storage> for
more usages). If you want to use CI artifacts to reproduce benchmarks locally,
see
[Find Compile and Run Commands to Reproduce Benchmarks](#find-compile-and-run-commands-to-reproduce-benchmarks).