
YOLOv6

Implementation of paper - YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications

Introduction

YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.

YOLOv6 provides a series of models for various industrial scenarios (N/T/S/M/L), whose architectures vary with model size for a better accuracy-speed trade-off. Bag-of-freebies methods, such as self-distillation and more training epochs, are introduced to further improve performance. For industrial deployment, we adopt quantization-aware training (QAT) with channel-wise distillation and graph optimization to pursue extreme performance.

YOLOv6-N hits 35.9% AP on the COCO dataset at 1234 FPS on a T4 GPU. YOLOv6-S reaches 43.5% AP at 495 FPS, and the quantized YOLOv6-S model achieves 43.3% AP at an accelerated 869 FPS on T4. YOLOv6-T/M/L also perform well, showing higher accuracy than other detectors at similar inference speeds.

What's New

  • Release M/L models and update N/T/S models with enhanced performance. ⭐️ Benchmark
  • 2x faster training time.
  • Fix the performance degradation when evaluating on 640x640 inputs.
  • Customized quantization methods. πŸš€ Quantization Tutorial

Quick Start

Install

git clone https://github.com/meituan/YOLOv6
cd YOLOv6
pip install -r requirements.txt
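
Optionally, create and activate a virtual environment before installing, to keep YOLOv6's dependencies isolated (standard Python tooling, not specific to this repository):

python -m venv yolov6-env
source yolov6-env/bin/activate
pip install -r requirements.txt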

Inference

First, download a pretrained model from the YOLOv6 release

Second, run inference with tools/infer.py

python tools/infer.py --weights yolov6s.pt --source img.jpg / imgdir / video.mp4
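
The --source argument accepts a single image, a directory of images, or a video file. Spelled out as separate commands (the file and directory names here are placeholders):

python tools/infer.py --weights yolov6s.pt --source img.jpg    # single image
python tools/infer.py --weights yolov6s.pt --source imgdir     # directory of images
python tools/infer.py --weights yolov6s.pt --source video.mp4  # video file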

Training

Single GPU

python tools/train.py --batch 32 --conf configs/yolov6s.py --data data/coco.yaml --device 0

Multi GPUs (DDP mode recommended)

python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7
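
Note: recent PyTorch releases deprecate torch.distributed.launch in favor of torchrun. Assuming tools/train.py picks up the local rank from the environment (worth verifying before relying on it), an equivalent launch would be:

torchrun --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7
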
Reproduce our results on COCO

For nano model

python -m torch.distributed.launch --nproc_per_node 4 tools/train.py \
    --batch 128 \
    --conf configs/yolov6n.py \
    --data data/coco.yaml \
    --epoch 400 \
    --device 0,1,2,3 \
    --name yolov6n_coco

For s/tiny model

# for yolov6t, use --conf configs/yolov6t.py and --name yolov6t_coco
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py \
    --batch 256 \
    --conf configs/yolov6s.py \
    --data data/coco.yaml \
    --epoch 400 \
    --device 0,1,2,3,4,5,6,7 \
    --name yolov6s_coco

For m/l model

# Step 1: Train a base model
# for yolov6l, use --conf configs/yolov6l.py and --name yolov6l_coco
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py \
    --batch 256 \
    --conf configs/yolov6m.py \
    --data data/coco.yaml \
    --epoch 300 \
    --device 0,1,2,3,4,5,6,7 \
    --name yolov6m_coco

# Step 2: Self-distillation training, using the Step 1 checkpoint as the teacher
# for yolov6l, use --conf configs/yolov6l.py, --batch 128, and the yolov6l_coco teacher checkpoint
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py \
    --batch 256 \
    --conf configs/yolov6m.py \
    --data data/coco.yaml \
    --epoch 300 \
    --device 0,1,2,3,4,5,6,7 \
    --distill \
    --teacher_model_path runs/train/yolov6m_coco/weights/best_ckpt.pt \
    --name yolov6m_coco
							
  • conf: select a config file to specify the network, optimizer, and hyperparameters.
  • data: prepare the COCO dataset with YOLO-format labels and specify the dataset paths in data.yaml (see the sketch after the directory tree below).
  • Make sure your dataset is structured as follows:
β”œβ”€β”€ coco
β”‚   β”œβ”€β”€ annotations
β”‚   β”‚   β”œβ”€β”€ instances_train2017.json
β”‚   β”‚   └── instances_val2017.json
β”‚   β”œβ”€β”€ images
β”‚   β”‚   β”œβ”€β”€ train2017
β”‚   β”‚   └── val2017
β”‚   β”œβ”€β”€ labels
β”‚   β”‚   β”œβ”€β”€ train2017
β”‚   β”‚   └── val2017
β”‚   β”œβ”€β”€ LICENSE
β”‚   └── README.txt
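
A minimal data/coco.yaml matching this layout might look like the sketch below; the exact key names should be checked against the data/coco.yaml shipped with the repository:

train: ../coco/images/train2017  # path to training images
val: ../coco/images/val2017      # path to validation images
is_coco: True                    # dataset follows COCO evaluation conventions
nc: 80                           # number of classes
names: ['person', 'bicycle', 'car']  # ...and the remaining 77 COCO class names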

Evaluation

Reproduce mAP on COCO val2017 dataset with 640Γ—640 resolution

python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --task val --reproduce_640_eval
Resume training

If your training process is interrupted, you can resume training by

# multi GPU training.
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --resume

You can also pass a checkpoint path to the --resume parameter:

# replace /path/to/your/checkpoint/path with the checkpoint you want to resume training from.
--resume /path/to/your/checkpoint/path
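
For example (the run name and checkpoint filename below are hypothetical; substitute a checkpoint from one of your own runs):

python tools/train.py --resume runs/train/yolov6s_coco/weights/last_ckpt.pt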

Deployment

Tutorials

Benchmark

| Model | Size | mAP val 0.5:0.95 | Speed T4, trt fp16 b1 (fps) | Speed T4, trt fp16 b32 (fps) | Params (M) | FLOPs (G) |
| :------------ | :-: | :----------------------: | :--: | :--: | :---: | :---: |
| YOLOv6-N | 640 | 35.9 (300e) / 36.3 (400e) | 802 | 1234 | 4.3 | 11.1 |
| YOLOv6-T | 640 | 40.3 (300e) / 41.1 (400e) | 449 | 659 | 15.0 | 36.7 |
| YOLOv6-S | 640 | 43.5 (300e) / 43.8 (400e) | 358 | 495 | 17.2 | 44.2 |
| YOLOv6-M | 640 | 49.5 | 179 | 233 | 34.3 | 82.2 |
| YOLOv6-L-ReLU | 640 | 51.7 | 113 | 149 | 58.5 | 144.0 |
| YOLOv6-L | 640 | 52.5 | 98 | 121 | 58.5 | 144.0 |
Legacy models
| Model | Size | mAP val 0.5:0.95 | Speed V100, fp16 b32 (ms) | Speed V100, fp32 b32 (ms) | Speed T4, trt fp16 b1 (fps) | Speed T4, trt fp16 b32 (fps) | Params (M) | FLOPs (G) |
| :------- | :-: | :--: | :-: | :-: | :--: | :--: | :--: | :--: |
| YOLOv6-N | 416 | 30.8 | 0.3 | 0.4 | 1100 | 2716 | 4.3 | 4.7 |
| YOLOv6-N | 640 | 35.0 | 0.5 | 0.7 | 788 | 1242 | 4.3 | 11.1 |
| YOLOv6-T | 640 | 41.3 | 0.9 | 1.5 | 425 | 602 | 15.0 | 36.7 |
| YOLOv6-S | 640 | 43.1 | 1.0 | 1.7 | 373 | 520 | 17.2 | 44.2 |
  • mAP and speed results are evaluated on the COCO val2017 dataset with an input resolution of 640Γ—640.
  • Refer to the Test speed tutorial to reproduce the speed results of YOLOv6.
  • Params and FLOPs of YOLOv6 are estimated on deployed models.
  • For the N/T/S models, we use a longer training schedule (more epochs).
  • For the M/L/L-ReLU models, we adopt self-distillation to further improve performance.

Third-party resources
