Commit

feat: update readme for deployment
Chilicyy committed Jun 17, 2022
1 parent be1b4f5 commit e3edaf6
Showing 7 changed files with 19 additions and 22 deletions.
11 changes: 3 additions & 8 deletions README.md
Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@ YOLOv6 is composed of the following methods:
## Coming soon

- [ ] YOLOv6 m/l/x model.
- [ ] Deployment for OPENVINO/MNN/TNN/NCNN...
- [ ] Deployment for MNN/TNN/NCNN/CoreML...


## Quick Start
@@ -73,13 +73,8 @@ python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --ta

### Deployment

Export as ONNX Format

```shell
python deploy/export_onnx.py --weights yolov6s.pt --device 0  # or --weights yolov6n.pt
```

* [ONNX](./deploy/ONNX)
* [OpenVINO](./deploy/OpenVINO)

### Tutorials

14 changes: 14 additions & 0 deletions deploy/ONNX/README.md
@@ -0,0 +1,14 @@
## Export ONNX Model

### Check requirements
```shell
pip install "onnx>=1.10.0"
```

### Export script
```shell
python deploy/ONNX/export_onnx.py --weights yolov6s.pt --img 640 --batch 1
```
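The flags above follow a conventional argparse-style CLI. As a rough sketch of how such an interface is wired up (the defaults and help strings below are illustrative, not the script's actual definitions):

```python
import argparse

def build_parser():
    # Hypothetical mirror of the documented export flags;
    # defaults here are illustrative only.
    p = argparse.ArgumentParser(description="Export a YOLOv6 checkpoint to ONNX")
    p.add_argument("--weights", type=str, default="yolov6s.pt",
                   help="path to the .pt checkpoint")
    p.add_argument("--img", type=int, default=640,
                   help="input image size baked into the exported graph")
    p.add_argument("--batch", type=int, default=1,
                   help="batch size baked into the exported graph")
    return p

args = build_parser().parse_args(["--weights", "yolov6s.pt",
                                  "--img", "640", "--batch", "1"])
```

The fixed `--img` and `--batch` values matter because an ONNX graph exported this way typically has static input shapes, so downstream runtimes see exactly one tensor shape.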

### Download
11 changes: 0 additions & 11 deletions deploy/export_onnx.py → deploy/ONNX/export_onnx.py
@@ -1,6 +1,5 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# https://github.com/ultralytics/yolov5/blob/master/export.py
import argparse
import time
import sys
@@ -58,16 +57,6 @@

y = model(img) # dry run

# TorchScript export
try:
LOGGER.info('\nStarting to export TorchScript...')
export_file = args.weights.replace('.pt', '.torchscript.pt') # filename
ts = torch.jit.trace(model, img)
ts.save(export_file)
LOGGER.info(f'TorchScript export success, saved as {export_file}')
except Exception as e:
LOGGER.info(f'TorchScript export failure: {e}')

# ONNX export
try:
LOGGER.info('\nStarting to export ONNX...')
Empty file added deploy/OpenVINO/README.md
Empty file.
@@ -1,6 +1,5 @@
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# https://github.com/ultralytics/yolov5/blob/master/export.py
import argparse
import time
import sys
2 changes: 1 addition & 1 deletion docs/Test_speed.md
@@ -31,7 +31,7 @@ To get inference speed with TensorRT in FP16 mode on T4, you can follow the ste
First, export pytorch model as onnx format using the following command:

```shell
python deploy/export_onnx.py --weights yolov6n.pt --device 0 --batch [1 or 32]
python deploy/ONNX/export_onnx.py --weights yolov6n.pt --device 0 --batch [1 or 32]
```

Second, generate an inference trt engine and test speed using `trtexec`:
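As a sketch, such a `trtexec` run might look like the following (`trtexec` ships with TensorRT; the file names assume the export step above, and exact flags vary across TensorRT versions):

```shell
# Build an FP16 engine from the exported ONNX file and benchmark it.
# trtexec prints latency and throughput statistics at the end of the run.
trtexec --onnx=yolov6n.onnx \
        --saveEngine=yolov6n.trt \
        --fp16
```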
2 changes: 1 addition & 1 deletion docs/Train_custom_data.md
@@ -125,6 +125,6 @@ python tools/infer.py --weights output_dir/name/weights/best_ckpt.pt --source im
Export as ONNX Format

```shell
python deploy/export_onnx.py --weights output_dir/name/weights/best_ckpt.pt --device 0
python deploy/ONNX/export_onnx.py --weights output_dir/name/weights/best_ckpt.pt --device 0
```
