Merge branch 'main' into TRTC++
shensheng272 authored Aug 19, 2022
2 parents 46b0b28 + 19f37a4 commit 5206e9e
Showing 65 changed files with 3,769 additions and 169 deletions.
7 changes: 7 additions & 0 deletions .gitignore
@@ -7,7 +7,9 @@ __pycache__/
# C extensions

# Distribution / packaging

.Python
videos/
build/
runs/
weights/
@@ -108,3 +110,8 @@ venv.bak/

#user scripts
*.sh

# model files
*.onnx
*.pt
*.engine
33 changes: 27 additions & 6 deletions README.md
@@ -58,7 +58,22 @@ python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256
```

- conf: select config file to specify network/optimizer/hyperparameters
- data: prepare the [COCO](http://cocodataset.org) dataset, download the [YOLO-format COCO labels](https://github.com/meituan/YOLOv6/releases/download/0.1.0/coco2017labels.zip), and specify the dataset paths in data.yaml
- make sure your dataset structure is as follows (a layout check is sketched after the tree):
```
├── coco
│ ├── annotations
│ │ ├── instances_train2017.json
│ │ └── instances_val2017.json
│ ├── images
│ │ ├── train2017
│ │ └── val2017
│ ├── labels
│ │ ├── train2017
│ │ └── val2017
│ ├── LICENSE
│ └── README.txt
```
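For convenience, here is a minimal Python sketch that verifies this layout before launching training. The `check_coco_layout` helper and the `coco` root path are illustrative assumptions, not part of the repo:

```python
from pathlib import Path

def check_coco_layout(root="coco"):
    # Verify the directory tree shown above exists before training starts.
    root = Path(root)
    expected = [
        root / "annotations" / "instances_train2017.json",
        root / "annotations" / "instances_val2017.json",
        root / "images" / "train2017",
        root / "images" / "val2017",
        root / "labels" / "train2017",
        root / "labels" / "val2017",
    ]
    missing = [str(p) for p in expected if not p.exists()]
    if missing:
        raise FileNotFoundError("dataset layout incomplete, missing: " + ", ".join(missing))

check_coco_layout()
```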


### Evaluation
@@ -74,33 +89,35 @@ python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --tas
If your training process is interrupted, you can resume training with:
```
# single GPU training.
python tools/train.py --resume
# multi GPU training.
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --resume
```
You can also specify a checkpoint path for the `--resume` parameter:
```
# remember to replace /path/to/your/checkpoint/path with the checkpoint path from which you want to resume training.
--resume /path/to/your/checkpoint/path
```
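For illustration, here is a hedged sketch of how a dual-mode `--resume` flag can be parsed — bare to pick up the latest run, or with an explicit path. This is not the actual tools/train.py logic, and the default checkpoint location is an assumption:

```python
import argparse
from pathlib import Path

import torch

parser = argparse.ArgumentParser()
# nargs='?' lets --resume be passed bare (resume latest) or with a path.
parser.add_argument("--resume", nargs="?", const=True, default=False,
                    help="resume the most recent run, or resume from a given checkpoint")
args = parser.parse_args()

if args.resume:
    ckpt_path = (Path(args.resume) if isinstance(args.resume, str)
                 else Path("runs/train/exp/weights/last_ckpt.pt"))  # assumed default location
    ckpt = torch.load(ckpt_path, map_location="cpu")
    print(f"resuming from {ckpt_path}, epoch {ckpt.get('epoch', '?')}")
```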

### Deployment

* [ONNX](./deploy/ONNX)
* [OpenCV Python/C++](./deploy/ONNX/OpenCV)
* [OpenVINO](./deploy/OpenVINO)
* [Partial Quantization](./tools/partial_quantization)
* [TensorRT](./deploy/TensorRT)

### Tutorials

* [Train custom data](./docs/Train_custom_data.md)
* [Test speed](./docs/Test_speed.md)
* [Tutorial of RepOpt for YOLOv6](./docs/tutorial_repopt.md)

## Benchmark


| Model | Size | mAP<sup>val<br/>0.5:0.95 | Speed<sup>V100<br/>fp16 b32 <br/>(ms) | Speed<sup>V100<br/>fp32 b32 <br/>(ms) | Speed<sup>T4<br/>trt fp16 b1 <br/>(fps) | Speed<sup>T4<br/>trt fp16 b32 <br/>(fps) | Params<br/><sup> (M) | FLOPs<br/><sup> (G) |
| :-------------- | ----------- | :----------------------- | :------------------------------------ | :------------------------------------ | ---------------------------------------- | ----------------------------------------- | --------------- | -------------- |
| [**YOLOv6-n**](https://github.com/meituan/YOLOv6/releases/download/0.1.0/yolov6n.pt) | 416<br/>640 | 30.8<br/>35.0 | 0.3<br/>0.5 | 0.4<br/>0.7 | 1100<br/>788 | 2716<br/>1242 | 4.3<br/>4.3 | 4.7<br/>11.1 |
| [**YOLOv6-tiny**](https://github.com/meituan/YOLOv6/releases/download/0.1.0/yolov6t.pt) | 640 | 41.3 | 0.9 | 1.5 | 425 | 602 | 15.0 | 36.7 |
Expand All @@ -109,11 +126,15 @@ Your can also specify a checkpoint path to `--resume` parameter by

- Comparisons of the mAP and speed of different object detectors are tested on the [COCO val2017](https://cocodataset.org/#download) dataset.
- Refer to [Test speed](./docs/Test_speed.md) tutorial to reproduce the speed results of YOLOv6.
- Params and FLOPs of YOLOv6 are estimated on the deployed models.
- Speed results of other methods are tested in our environment using the official codebase and models when they are not reported in the corresponding official release.

## Third-party resources
* YOLOv6 NCNN Android app demo: [ncnn-android-yolov6](https://github.com/FeiGeChuanShu/ncnn-android-yolov6) from [FeiGeChuanShu](https://github.com/FeiGeChuanShu)
* YOLOv6 ONNXRuntime/MNN/TNN C++: [YOLOv6-ORT](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ort/cv/yolov6.cpp), [YOLOv6-MNN](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/mnn/cv/mnn_yolov6.cpp) and [YOLOv6-TNN](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/tnn/cv/tnn_yolov6.cpp) from [DefTruth](https://github.com/DefTruth)
* YOLOv6 TensorRT Python: [yolov6-tensorrt-python](https://github.com/Linaom1214/tensorrt-python/blob/main/yolov6/trt.py) from [Linaom1214](https://github.com/Linaom1214)
* YOLOv6 TensorRT Windows C++: [yolort](https://github.com/zhiqwang/yolov5-rt-stack/tree/main/deployment/tensorrt-yolov6) from [Wei Zeng](https://github.com/Wulingtian)
* YOLOv6 Quantization and Auto Compression Example: [YOLOv6-ACT](https://github.com/PaddlePaddle/PaddleSlim/tree/develop/example/auto_compression/pytorch_yolov6) from [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)
* [YOLOv6 web demo](https://huggingface.co/spaces/nateraw/yolov6) on [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/nateraw/yolov6)
* Tutorial: [How to train YOLOv6 on a custom dataset](https://blog.roboflow.com/how-to-train-yolov6-on-a-custom-dataset/) <a href="https://colab.research.google.com/drive/1YnbqOinBZV-c9I7fk_UL6acgnnmkXDMM"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
* Demo of YOLOv6 inference on Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mahdilamb/YOLOv6/blob/main/inference.ipynb)
Binary file added assets/image3.jpg
Binary file added assets/train_batch.jpg
Binary file added assets/voc_loss_curve.jpg
Binary file added assets/yolov5s.jpg
Binary file added assets/yolov6s.jpg
Binary file added assets/yoloxs.jpg
54 changes: 54 additions & 0 deletions configs/repopt/yolov6s_hs.py
@@ -0,0 +1,54 @@
# YOLOv6s model
model = dict(
    type='YOLOv6s',
    pretrained=None,
    depth_multiple=0.33,
    width_multiple=0.50,
    backbone=dict(
        type='EfficientRep',
        num_repeats=[1, 6, 12, 18, 6],
        out_channels=[64, 128, 256, 512, 1024],
    ),
    neck=dict(
        type='RepPAN',
        num_repeats=[12, 12, 12, 12],
        out_channels=[256, 128, 128, 256, 256, 512],
    ),
    head=dict(
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        anchors=1,
        strides=[8, 16, 32],
        iou_type='siou'
    )
)

solver = dict(
    optim='SGD',
    lr_scheduler='Cosine',
    lr0=0.01,
    lrf=0.01,
    momentum=0.937,
    weight_decay=0.0005,
    warmup_epochs=3.0,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1
)

data_aug = dict(
    hsv_h=0.015,
    hsv_s=0.7,
    hsv_v=0.4,
    degrees=0.0,
    translate=0.1,
    scale=0.5,
    shear=0.0,
    flipud=0.0,
    fliplr=0.5,
    mosaic=1.0,
    mixup=0.0,
)

# Choose the Rep-block by training mode, choices=["repvgg", "hyper_search", "repopt"]
training_mode='hyper_search'
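These configs are plain Python modules made of dicts. As a rough sketch of one way such a file can be loaded (the repo ships its own config utility; this `importlib`-based loader is only illustrative):

```python
import importlib.util

def load_config(path):
    # Execute the config file as a module and collect its top-level names.
    spec = importlib.util.spec_from_file_location("config", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return {k: v for k, v in vars(module).items() if not k.startswith("_")}

cfg = load_config("configs/repopt/yolov6s_hs.py")
print(cfg["model"]["backbone"]["type"])  # EfficientRep
print(cfg["training_mode"])              # hyper_search
```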
55 changes: 55 additions & 0 deletions configs/repopt/yolov6s_opt.py
@@ -0,0 +1,55 @@
# YOLOv6s model
model = dict(
    type='YOLOv6s',
    pretrained=None,
    scales='./assets/yolov6s_scale.pt',
    depth_multiple=0.33,
    width_multiple=0.50,
    backbone=dict(
        type='EfficientRep',
        num_repeats=[1, 6, 12, 18, 6],
        out_channels=[64, 128, 256, 512, 1024],
    ),
    neck=dict(
        type='RepPAN',
        num_repeats=[12, 12, 12, 12],
        out_channels=[256, 128, 128, 256, 256, 512],
    ),
    head=dict(
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        anchors=1,
        strides=[8, 16, 32],
        iou_type='siou'
    )
)

solver = dict(
    optim='SGD',
    lr_scheduler='Cosine',
    lr0=0.01,
    lrf=0.01,
    momentum=0.937,
    weight_decay=0.0005,
    warmup_epochs=3.0,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1
)

data_aug = dict(
    hsv_h=0.015,
    hsv_s=0.7,
    hsv_v=0.4,
    degrees=0.0,
    translate=0.1,
    scale=0.5,
    shear=0.0,
    flipud=0.0,
    fliplr=0.5,
    mosaic=1.0,
    mixup=0.0,
)

# Choose the Rep-block by training mode, choices=["repvgg", "hyper_search", "repopt"]
training_mode='repopt'
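The only differences from yolov6s_hs.py are the `scales` checkpoint and `training_mode='repopt'`. As a hedged illustration of how `training_mode` could steer block construction (the names and branch layout here are assumptions, not the repo's actual RepOpt wiring):

```python
import torch.nn as nn

def make_block(training_mode, in_ch, out_ch):
    # Illustrative mapping from training mode to a conv-block flavor.
    if training_mode == "repvgg":
        # Train-time multi-branch block, fused into one conv for deployment.
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                             nn.BatchNorm2d(out_ch), nn.ReLU())
    if training_mode in ("hyper_search", "repopt"):
        # RepOpt moves the re-parameterization into the optimizer (via the
        # searched scales), so the architecture stays a plain conv stack.
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())
    raise ValueError(f"unknown training_mode: {training_mode}")

block = make_block("repopt", 64, 128)
```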
2 changes: 0 additions & 2 deletions configs/yolov6_tiny.py
@@ -18,9 +18,7 @@
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        begin_indices=24,
        anchors=1,
        out_indices=[17, 20, 23],
        strides=[8, 16, 32],
        iou_type='ciou'
    )
2 changes: 0 additions & 2 deletions configs/yolov6_tiny_finetune.py
@@ -18,9 +18,7 @@
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        begin_indices=24,
        anchors=1,
        out_indices=[17, 20, 23],
        strides=[8, 16, 32],
        iou_type='ciou'
    )
2 changes: 0 additions & 2 deletions configs/yolov6n.py
@@ -18,9 +18,7 @@
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        begin_indices=24,
        anchors=1,
        out_indices=[17, 20, 23],
        strides=[8, 16, 32],
        iou_type='ciou'
    )
2 changes: 0 additions & 2 deletions configs/yolov6n_finetune.py
@@ -18,9 +18,7 @@
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        begin_indices=24,
        anchors=1,
        out_indices=[17, 20, 23],
        strides=[8, 16, 32],
        iou_type='ciou'
    )
2 changes: 0 additions & 2 deletions configs/yolov6s.py
@@ -18,9 +18,7 @@
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        begin_indices=24,
        anchors=1,
        out_indices=[17, 20, 23],
        strides=[8, 16, 32],
        iou_type='siou'
    )
2 changes: 0 additions & 2 deletions configs/yolov6s_finetune.py
@@ -18,9 +18,7 @@
        type='EffiDeHead',
        in_channels=[128, 256, 512],
        num_layers=3,
        begin_indices=24,
        anchors=1,
        out_indices=[17, 20, 23],
        strides=[8, 16, 32],
        iou_type='siou'
    )
11 changes: 11 additions & 0 deletions data/voc.yaml
@@ -0,0 +1,11 @@
# Please ensure that your custom dataset is placed in the same parent directory as YOLOv6_DIR
train: VOCdevkit/voc_07_12/images/train # train images
val: VOCdevkit/voc_07_12/images/val # val images
test: VOCdevkit/voc_07_12/images/val # test images (optional)

# Whether this is the COCO dataset; only the COCO dataset should set this to True.
is_coco: False
# Classes
nc: 20 # number of classes
names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
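A quick sanity check for this file might look like the following sketch (assumes PyYAML is installed; the field names match the YAML above):

```python
import yaml

with open("data/voc.yaml") as f:
    data = yaml.safe_load(f)

# nc must agree with the names list, otherwise label decoding breaks.
assert data["nc"] == len(data["names"]), "nc does not match number of class names"
print(f"{data['nc']} classes, train images: {data['train']}")
```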
92 changes: 92 additions & 0 deletions deploy/ONNX/OpenCV/README.md
@@ -0,0 +1,92 @@
# Object Detection using YOLOv5/YOLOv6/YOLOX and OpenCV DNN (Python/C++)

## 0. Install Dependencies
```
OpenCV >= 4.5.4
```
Only **OpenCV >= 4.5.4** can read ONNX model files with the dnn module.

## 1. Usage
Change the working directory to `/path/to/YOLOv6/deploy/ONNX/OpenCV`
### 1.1 Python

- YOLOv5&YOLOv6:
```Python
python yolo.py --model /path/to/onnx/yolov5n.onnx --img /path/to/sample.jpg --classesFile /path/to/coco.names
yolov5s.onnx
yolov5m.onnx
yolov6n.onnx
yolov6s.onnx
yolov6t.onnx
```
- YOLOX:
```Python
python yolox.py --model /path/to/onnx/yolox_nano.onnx --img /path/to/sample.jpg --classesFile /path/to/coco.names
yolox_tiny.onnx
yolox_s.onnx
yolox_m.onnx
```
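Both scripts build on the same OpenCV dnn calls. Below is a minimal sketch of the core inference steps — letterboxing, confidence filtering, and NMS are simplified away, so treat it as an outline of what yolo.py does rather than a drop-in replacement:

```python
import cv2

net = cv2.dnn.readNetFromONNX("yolov6n.onnx")  # requires OpenCV >= 4.5.4
img = cv2.imread("sample.jpg")
# These exports expect a 640x640 RGB input scaled to [0, 1].
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward()  # (1, num_boxes, 5 + num_classes) for YOLOv5/YOLOv6 heads
print(outputs.shape)
```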

### 1.2 CMake C++ Linux YOLOv5
```C++ Linux
cd yolov5 // modify CMakeLists.txt
mkdir build
cd build
cmake ..
make
./yolov5 /path/to/onnx/yolov5n.onnx /path/to/sample.jpg /path/to/coco.names
yolov5s.onnx
yolov5m.onnx
```

### 1.3 CMake C++ Linux YOLOv6
```C++ Linux
cd yolov6 // modify CMakeLists.txt
mkdir build
cd build
cmake ..
make
./yolov6 /path/to/onnx/yolov6n.onnx /path/to/sample.jpg /path/to/coco.names
yolov6t.onnx
yolov6s.onnx
```

### 1.4 CMake C++ Linux YOLOX
```C++ Linux
cd yolox // modify CMakeLists.txt
mkdir build
cd build
cmake ..
make
./yolox /path/to/onnx/yolox_nano.onnx /path/to/sample.jpg /path/to/coco.names
yolox_tiny.onnx
yolox_s.onnx
yolox_m.onnx
```

## 2. Result
| Model | Speed CPU b1(ms) Python | Speed CPU b1(ms) C++ | mAP<sup>val 0.5:0.95</sup> | params(M) | FLOPs(G) |
| :-- | :-: | :-: | :-: | :-: | :-: |
| **YOLOv5n** | 116.47 | 118.89 | 28.0 | 1.9 | 4.5 |
| **YOLOv5s** | 200.53 | 202.22 | 37.4 | 7.2 | 16.5 |
| **YOLOv5m** | 294.98 | 291.86 | 45.4 | 21.2 | 49.0 |
| | | | | | |
| **YOLOv6-n** | 66.88 | 69.96 | 35.0 | 4.3 | 4.7 |
| **YOLOv6-tiny** | 133.15 | 137.59 | 41.3 | 15.0 | 36.7 |
| **YOLOv6-s** | 164.44 | 163.38 | 43.1 | 17.2 | 44.2 |
| | | | | | |
| **YOLOX-Nano** | 81.06 | 86.75 | 25.8@416 | 0.91 | 1.08@416 |
| **YOLOX-tiny** | 129.72 | 144.19 | 32.8@416 | 5.06 | 6.45@416 |
| **YOLOX-s** | 180.86 | 169.96 | 40.5 | 9.0 | 26.8 |
| **YOLOX-m** | 336.34 | 357.91 | 47.2 | 25.3 | 73.8 |

**Note**:
- All ONNX models are converted from the official GitHub repositories ([Google Drive](https://drive.google.com/drive/folders/1Nw6M_Y6XLASyB0RxhSI2z_QRtt70Picl?usp=sharing)).
- Speed is measured with [dnn::Net::getPerfProfile](https://docs.opencv.org/4.5.5/db/d30/classcv_1_1dnn_1_1Net.html); we report the average inference time over 300 runs in the same environment.
- The mAP/params/FLOPs values are taken from the official GitHub repositories.
- Test environment: macOS 11.4 with a 2.6 GHz 6-core Intel Core i7 and 16 GB memory.
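For reference, a sketch of how the per-inference time can be read out with getPerfProfile (assumes a `net` that has already run `forward()`, as in the Python section above):

```python
import cv2

# getPerfProfile returns total ticks spent in the last forward pass.
t, _ = net.getPerfProfile()
print("inference time: %.2f ms" % (t * 1000.0 / cv2.getTickFrequency()))
```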

### Visualization
<div align="left"> <img src="../../../assets/yolov5s.jpg" width="1000"></div>
<div align="left"> <img src="../../../assets/yolov6s.jpg" width="1000"></div>
<div align="left"> <img src="../../../assets/yoloxs.jpg" width="1000"></div>