diff --git a/README.md b/README.md
index ed1bf494..2145e612 100644
--- a/README.md
+++ b/README.md
@@ -76,8 +76,8 @@ python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --ta
 Export as ONNX Format
 
 ```shell
-python tools/export_onnx.py --weights yolov6s.pt --device 0
-                                      yolov6n.pt
+python deploy/export_onnx.py --weights yolov6s.pt --device 0
+                                       yolov6n.pt
 ```
 
diff --git a/tools/export_onnx.py b/deploy/export_onnx.py
similarity index 100%
rename from tools/export_onnx.py
rename to deploy/export_onnx.py
diff --git a/docs/Test_speed.md b/docs/Test_speed.md
index a7b15823..4982186b 100644
--- a/docs/Test_speed.md
+++ b/docs/Test_speed.md
@@ -31,7 +31,7 @@ To get inference speed with TensorRT in FP16 mode on T4, you can follow the ste
 First, export pytorch model as onnx format using the following command:
 
 ```shell
-python tools/export_onnx.py --weights yolov6n.pt --device 0 --batch [1 or 32]
+python deploy/export_onnx.py --weights yolov6n.pt --device 0 --batch [1 or 32]
 ```
 
 Second, generate an inference trt engine and test speed using `trtexec`:
diff --git a/docs/Train_custom_data.md b/docs/Train_custom_data.md
index db487437..f07fce40 100644
--- a/docs/Train_custom_data.md
+++ b/docs/Train_custom_data.md
@@ -125,6 +125,6 @@ python tools/infer.py --weights output_dir/name/weights/best_ckpt.pt --source im
 Export as ONNX Format
 
 ```shell
-python tools/export_onnx.py --weights output_dir/name/weights/best_ckpt.pt --device 0
+python deploy/export_onnx.py --weights output_dir/name/weights/best_ckpt.pt --device 0
 ```
 
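The `docs/Test_speed.md` hunk above references a second step (building a TensorRT engine and benchmarking with `trtexec`) that is not part of this patch. A minimal sketch of that step is shown below; the file names `yolov6n.onnx` and `yolov6n.engine` and the `--workspace` size are assumptions, and the exact flags available depend on your TensorRT version.

```shell
# Sketch only: build an FP16 TensorRT engine from the exported ONNX model
# and report latency/throughput. File names and workspace size are assumed;
# adjust them to match your export output and GPU.
trtexec --onnx=yolov6n.onnx \
        --saveEngine=yolov6n.engine \
        --fp16 \
        --workspace=4096
```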