
update speed-testing docs
Chilicyy committed Sep 5, 2022
1 parent e1596de commit e98bc81
Showing 1 changed file with 4 additions and 2 deletions.
docs/Test_speed.md
@@ -31,11 +31,13 @@ To get inference speed with TensorRT in FP16 mode on T4, you can follow the step
First, export the PyTorch model to ONNX format using the following command:

```shell
-python deploy/ONNX/export_onnx.py --weights yolov6n.pt --device 0 --batch [1 or 32]
+python deploy/ONNX/export_onnx.py --weights yolov6n.pt --device 0 --simplify --batch [1 or 32]
```
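The bracketed `--batch [1 or 32]` is a placeholder: the benchmark is run once per batch size, producing one static-batch ONNX model each. As a minimal sketch (the helper name is hypothetical, for illustration only), the two concrete invocations can be generated like this:

```python
def export_cmd(batch: int, weights: str = "yolov6n.pt") -> str:
    """Build the export command line for one static batch size.

    Hypothetical helper; the flags mirror the documented command above.
    """
    return (f"python deploy/ONNX/export_onnx.py --weights {weights} "
            f"--device 0 --simplify --batch {batch}")

# One export per batch size used in the speed benchmark.
for b in (1, 32):
    print(export_cmd(b))
```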

Second, build a TensorRT inference engine and test its speed using `trtexec`:

```shell
-trtexec --onnx=yolov6n.onnx --workspace=1024 --avgRuns=1000 --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --fp16
+trtexec --explicitBatch --fp16 --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --buildOnly --workspace=1024 --onnx=yolov6n.onnx --saveEngine=yolov6n.trt
+trtexec --fp16 --avgRuns=1000 --workspace=1024 --loadEngine=yolov6n.trt
```
```
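The second `trtexec` run averages latency over 1000 runs and prints a timing summary. A small sketch for turning that summary into a throughput number, assuming the usual `GPU Compute Time: ... mean = X ms` line in the `trtexec` log (check your version's output format):

```python
import re

def mean_latency_ms(log: str) -> float:
    """Pull the mean GPU compute latency (in ms) out of a trtexec log.

    Assumes a summary line of the form
    'GPU Compute Time: min = ..., max = ..., mean = 0.50 ms, ...'.
    """
    match = re.search(r"GPU Compute Time:.*mean = ([\d.]+) ms", log)
    if match is None:
        raise ValueError("no 'GPU Compute Time' summary found in log")
    return float(match.group(1))

def throughput_fps(batch: int, latency_ms: float) -> float:
    """Images per second for a static-batch engine at the given latency."""
    return batch * 1000.0 / latency_ms

# Example with a made-up summary line (not real T4 numbers).
sample = "GPU Compute Time: min = 0.48 ms, max = 0.61 ms, mean = 0.50 ms"
lat = mean_latency_ms(sample)
print(lat)                      # 0.5
print(throughput_fps(32, lat))  # 64000.0
```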
