
English | 简体中文

YOLOv6-Segmentation

Implementation of instance segmentation based on the YOLOv6 v4.0 code. YOLOv6 is a single-stage object detection framework dedicated to industrial applications.


What's New

Object Detection Benchmark

| Model | Size | mAP val (0.5:0.95) | Speed T4 TRT FP16 b1 (fps) | Speed T4 TRT FP16 b32 (fps) | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-N | 640 | 37.5 | 779 | 1187 | 4.7 | 11.4 |
| YOLOv6-S | 640 | 45.0 | 339 | 484 | 18.5 | 45.3 |
| YOLOv6-M | 640 | 50.0 | 175 | 226 | 34.9 | 85.8 |
| YOLOv6-L | 640 | 52.8 | 98 | 116 | 59.6 | 150.7 |
| YOLOv6-N6 | 1280 | 44.9 | 228 | 281 | 10.4 | 49.8 |
| YOLOv6-S6 | 1280 | 50.3 | 98 | 108 | 41.4 | 198.0 |
| YOLOv6-M6 | 1280 | 55.2 | 47 | 55 | 79.6 | 379.5 |
| YOLOv6-L6 | 1280 | 57.2 | 26 | 29 | 140.4 | 673.4 |
Table Notes
  • All checkpoints are trained with self-distillation, except for the YOLOv6-N6/S6 models, which are trained for 300 epochs without distillation.
  • mAP and speed are evaluated on the COCO val2017 dataset at an input resolution of 640×640 for P5 models and 1280×1280 for P6 models.
  • Speed is tested with TensorRT 7.2 on a T4 GPU.
  • Refer to the Test speed tutorial to reproduce the speed results of YOLOv6.
  • Params and FLOPs of YOLOv6 are estimated on deployed models.

Segmentation Benchmark

| Model | Size | mAP box (50-95) | mAP mask (50-95) | Speed T4 TRT FP16 b1 (fps) | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-N | 640 | 35.3 | 31.2 | 645 | 4.9 | 7.0 |
| YOLOv6-S | 640 | 44.0 | 38.0 | 292 | 19.6 | 27.7 |
| YOLOv6-M | 640 | 48.2 | 41.3 | 148 | 37.1 | 54.3 |
| YOLOv6-L | 640 | 51.1 | 43.7 | 93 | 63.6 | 95.5 |
| YOLOv6-X | 640 | 52.2 | 44.8 | 47 | 119.1 | 175.5 |

Table Notes

  • All checkpoints are trained from scratch on COCO for 300 epochs without distillation.
  • mAP and speed are evaluated on the COCO val2017 dataset at an input resolution of 640×640.
  • Speed is tested with TensorRT 8.5 on a T4 GPU, without post-processing.

Quick Start

Install
git clone https://github.com/meituan/YOLOv6
cd YOLOv6
git checkout yolov6-seg
pip install -r requirements.txt
Training

Single GPU

python tools/train.py --batch 8 --conf configs/yolov6s_seg_finetune.py --data data/coco.yaml --device 0

Multi-GPU (DDP mode recommended)

python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 64 --conf configs/yolov6s_seg_finetune.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7
  • fuse_ab: not supported in the current version.
  • conf: selects the config file specifying the network, optimizer, and hyperparameters. We recommend applying yolov6n/s/m/l_finetune.py when training on your custom dataset; a sketch of such a config follows the directory layout below.
  • data: prepare the dataset and specify its paths in data.yaml (COCO images with YOLO-format labels); a data.yaml sketch also follows below.
  • Make sure your dataset is structured as follows:
├── coco
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   └── instances_val2017.json
│   ├── images
│   │   ├── train2017
│   │   └── val2017
│   ├── labels
│   │   ├── train2017
│   │   └── val2017
│   ├── LICENSE
│   └── README.txt
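
For orientation, YOLOv6 config files are plain Python modules. The sketch below shows the general shape of a finetune config; the field names and values are assumptions carried over from the detection configs (e.g. yolov6s_finetune.py), so check configs/yolov6s_seg_finetune.py for the authoritative contents.

# Minimal sketch of a YOLOv6-style finetune config (assumed field names;
# not the shipped configs/yolov6s_seg_finetune.py).
model = dict(
    type='YOLOv6s',
    pretrained='./weights/yolov6s_seg.pt',  # hypothetical checkpoint path
    depth_multiple=0.33,                    # scales network depth
    width_multiple=0.50,                    # scales channel widths
)
solver = dict(
    optim='SGD',
    lr_scheduler='Cosine',
    lr0=0.0032,             # initial learning rate
    lrf=0.12,               # final LR fraction for the cosine schedule
    momentum=0.843,
    weight_decay=0.00036,
    warmup_epochs=2.0,
)
data_aug = dict(
    hsv_h=0.0138,           # HSV hue jitter
    hsv_s=0.664,            # HSV saturation jitter
    hsv_v=0.464,            # HSV value jitter
    degrees=0.373,          # rotation range
    translate=0.245,
    scale=0.898,
    mosaic=1.0,             # mosaic augmentation probability
    mixup=0.243,            # mixup augmentation probability
)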
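
Similarly, here is a minimal data.yaml sketch, assuming the seg branch keeps the detection repo's field names (train/val paths, is_coco, nc, names); adjust the paths and class names to your dataset:

train: ../coco/images/train2017      # path to training images
val: ../coco/images/val2017          # path to validation images
is_coco: True                        # set True only for the COCO dataset
nc: 80                               # number of classes
names: ['person', 'bicycle', 'car']  # class names (truncated here for brevity)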
Evaluation

Reproduce mAP on the COCO val2017 dataset at 640×640 resolution:

python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s_seg.pt --task val
Inference

First, download a pretrained model from the YOLOv6 release page, or use your own trained model.

Second, run inference with tools/infer.py:

python tools/infer.py --weights yolov6s_seg.pt --source img.jpg / imgdir / video.mp4

If you want to run inference on a local or web camera, run:

python tools/infer.py --weights yolov6s_seg.pt --webcam --webcam-addr 0

webcam-addr can be a local camera id or an RTSP address. If you want to evaluate or run inference with a solo-head model, remember to add the --issolo parameter (see the example below).
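
For example, with a hypothetical solo-head checkpoint (the filename here is illustrative):

python tools/infer.py --weights yolov6s_seg_solo.pt --source img.jpg --issolo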

Deployment
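
The detection branch of YOLOv6 ships export scripts under deploy/; assuming this seg branch keeps the same layout, an ONNX export would look like the following (flags taken from the detection repo's deploy/ONNX/export_onnx.py, so verify them against this branch):

python deploy/ONNX/export_onnx.py --weights yolov6s_seg.pt --img 640 --batch 1 --simplify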
Tutorials
Third-party resources

If you have any questions, feel free to join our WeChat group for discussion.
