Remove trailing whitespace
zhiqwang committed Jun 22, 2022
1 parent 688a4e1 commit 3dd63ef
Showing 36 changed files with 237 additions and 253 deletions.
7 changes: 7 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,7 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: check-yaml
      - id: end-of-file-fixer
      - id: trailing-whitespace
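With these hooks in place, the same whitespace cleanup can be reproduced locally before committing. A minimal usage sketch (the pre-commit CLI commands are standard; running them against this repository is the only assumption):

```shell
pip install pre-commit
pre-commit install            # register the hooks as a git pre-commit hook
pre-commit run --all-files    # apply check-yaml, end-of-file-fixer and trailing-whitespace to every file
```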
8 changes: 4 additions & 4 deletions README.md
@@ -28,23 +28,23 @@ YOLOv6 is composed of the following methods:
```shell
git clone https://github.com/meituan/YOLOv6
cd YOLOv6
pip install -r requirements.txt
```

### Inference

First, download a pretrained model from the YOLOv6 release

Second, run inference with `tools/infer.py`

```shell
python tools/infer.py --weights yolov6s.pt --source [img.jpg / imgdir]
                                yolov6n.pt
```

### Training

Single GPU

```shell
python tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0
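# Multi-GPU (DDP) training is not shown in this hunk; a hedged sketch only, assuming
# tools/train.py runs under torch.distributed.launch and accepts a comma-separated --device list:
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s.py --data data/coco.yaml --device 0,1,2,3,4,5,6,7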
10 changes: 5 additions & 5 deletions configs/yolov6_tiny.py
@@ -2,7 +2,7 @@
model = dict(
    type='YOLOv6t',
    pretrained=None,
    depth_multiple=0.25,
    width_multiple=0.50,
    backbone=dict(
        type='EfficientRep',
@@ -35,12 +35,12 @@
    weight_decay=0.0005,
    warmup_epochs=3.0,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1
)

data_aug = dict(
    hsv_h=0.015,
    hsv_s=0.7,
    hsv_v=0.4,
    degrees=0.0,
    translate=0.1,
8 changes: 4 additions & 4 deletions configs/yolov6_tiny_finetune.py
@@ -2,7 +2,7 @@
model = dict(
    type='YOLOv6t',
    pretrained='./weights/yolov6t.pt',
    depth_multiple=0.25,
    width_multiple=0.50,
    backbone=dict(
        type='EfficientRep',
@@ -24,7 +24,7 @@
        strides=[8, 16, 32],
        iou_type='ciou'
    )
)

solver = dict(
    optim='SGD',
@@ -35,9 +35,9 @@
    weight_decay=0.00036,
    warmup_epochs=2.0,
    warmup_momentum=0.5,
    warmup_bias_lr=0.05
)

data_aug = dict(
    hsv_h=0.0138,
    hsv_s=0.664,
10 changes: 5 additions & 5 deletions configs/yolov6n.py
@@ -2,7 +2,7 @@
model = dict(
    type='YOLOv6n',
    pretrained=None,
    depth_multiple=0.33,
    width_multiple=0.25,
    backbone=dict(
        type='EfficientRep',
@@ -35,12 +35,12 @@
    weight_decay=0.0005,
    warmup_epochs=3.0,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1
)

data_aug = dict(
    hsv_h=0.015,
    hsv_s=0.7,
    hsv_v=0.4,
    degrees=0.0,
    translate=0.1,
@@ -50,4 +50,4 @@
    fliplr=0.5,
    mosaic=1.0,
    mixup=0.0,
)
6 changes: 3 additions & 3 deletions configs/yolov6n_finetune.py
@@ -24,7 +24,7 @@
        strides=[8, 16, 32],
        iou_type='ciou'
    )
)

solver = dict(
    optim='SGD',
@@ -35,9 +35,9 @@
    weight_decay=0.00036,
    warmup_epochs=2.0,
    warmup_momentum=0.5,
    warmup_bias_lr=0.05
)

data_aug = dict(
    hsv_h=0.0138,
    hsv_s=0.664,
6 changes: 3 additions & 3 deletions configs/yolov6s.py
@@ -35,12 +35,12 @@
    weight_decay=0.0005,
    warmup_epochs=3.0,
    warmup_momentum=0.8,
    warmup_bias_lr=0.1
)

data_aug = dict(
    hsv_h=0.015,
    hsv_s=0.7,
    hsv_v=0.4,
    degrees=0.0,
    translate=0.1,
8 changes: 4 additions & 4 deletions configs/yolov6s_finetune.py
@@ -2,7 +2,7 @@
model = dict(
    type='YOLOv6s',
    pretrained='./weights/yolov6s.pt',
    depth_multiple=0.33,
    width_multiple=0.50,
    backbone=dict(
        type='EfficientRep',
@@ -24,7 +24,7 @@
        strides=[8, 16, 32],
        iou_type='siou'
    )
)

solver = dict(
    optim='SGD',
@@ -35,9 +35,9 @@
    weight_decay=0.00036,
    warmup_epochs=2.0,
    warmup_momentum=0.5,
    warmup_bias_lr=0.05
)

data_aug = dict(
    hsv_h=0.0138,
    hsv_s=0.664,
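A hedged usage sketch for this fine-tune config: the `--conf`, `--data` and `--device` flags mirror the README training command above, while the batch size and the checkpoint location under `./weights/` are assumptions taken from the `pretrained` field:

```shell
# assumes yolov6s.pt has been downloaded to ./weights/, as referenced by pretrained=
python tools/train.py --batch 128 --conf configs/yolov6s_finetune.py --data data/coco.yaml --device 0
```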
1 change: 0 additions & 1 deletion data/coco.yaml
@@ -16,4 +16,3 @@ names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', '
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush' ]

4 changes: 2 additions & 2 deletions deploy/ONNX/README.md
@@ -2,13 +2,13 @@

### Check requirements
```shell
pip install onnx>=1.10.0
```

### Export script
```shell
python deploy/ONNX/export_onnx.py --weights yolov6s.pt --img 640 --batch 1

```

### Download
14 changes: 7 additions & 7 deletions deploy/ONNX/export_onnx.py
@@ -3,12 +3,12 @@
import argparse
import time
import sys
import os
import torch
import torch.nn as nn
import onnx

ROOT = os.getcwd()
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))

@@ -21,9 +21,9 @@

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='./yolov6s.pt', help='weights path')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size')  # height, width
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True')
    parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
@@ -34,9 +34,9 @@

    # Check device
    cuda = args.device != 'cpu' and torch.cuda.is_available()
    device = torch.device('cuda:0' if cuda else 'cpu')
    assert not (device.type == 'cpu' and args.half), '--half only compatible with GPU export, i.e. use --device 0'
    # Load PyTorch model
    model = load_checkpoint(args.weights, map_location=device, inplace=True, fuse=True)  # load FP32 model
    for layer in model.modules():
        if isinstance(layer, RepVGGBlock):
@@ -55,7 +55,7 @@
                m.act = SiLU()
        elif isinstance(m, EffiDeHead):
            m.inplace = args.inplace

    y = model(img)  # dry run

    # ONNX export
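After the ONNX export command from the README above completes, the resulting file can be sanity-checked with the same `onnx` package the script imports (a generic check, not part of this repository's tooling):

```shell
python -c "import onnx; m = onnx.load('yolov6s.onnx'); onnx.checker.check_model(m); print('yolov6s.onnx is well-formed')"
```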
4 changes: 2 additions & 2 deletions deploy/OpenVINO/README.md
@@ -3,13 +3,13 @@
### Check requirements
```shell
pip install --upgrade pip
pip install openvino-dev
```

### Export script
```shell
python deploy/OpenVINO/export_openvino.py --weights yolov6s.pt --img 640 --batch 1

```

### Download
14 changes: 7 additions & 7 deletions deploy/OpenVINO/export_openvino.py
@@ -3,13 +3,13 @@
import argparse
import time
import sys
import os
import torch
import torch.nn as nn
import onnx
import subprocess

ROOT = os.getcwd()
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))

@@ -22,9 +22,9 @@

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='./yolov6s.pt', help='weights path')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='image size')  # height, width
    parser.add_argument('--batch-size', type=int, default=1, help='batch size')
    parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
    parser.add_argument('--inplace', action='store_true', help='set Detect() inplace=True')
    parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
@@ -35,9 +35,9 @@

    # Check device
    cuda = args.device != 'cpu' and torch.cuda.is_available()
    device = torch.device('cuda:0' if cuda else 'cpu')
    assert not (device.type == 'cpu' and args.half), '--half only compatible with GPU export, i.e. use --device 0'
    # Load PyTorch model
    model = load_checkpoint(args.weights, map_location=device, inplace=True, fuse=True)  # load FP32 model
    for layer in model.modules():
        if isinstance(layer, RepVGGBlock):
@@ -56,7 +56,7 @@
                m.act = SiLU()
        elif isinstance(m, EffiDeHead):
            m.inplace = args.inplace

    y = model(img)  # dry run

    # ONNX export
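The `subprocess` import suggests the script shells out to OpenVINO's Model Optimizer on the exported ONNX file. A hedged sketch of the equivalent manual step — the `mo` entry point ships with `openvino-dev`, while the output directory name here is an assumption:

```shell
mo --input_model yolov6s.onnx --output_dir yolov6s_openvino_model
```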
5 changes: 2 additions & 3 deletions docs/Test_speed.md
@@ -8,7 +8,7 @@ Download the models you want to test from the latest release.

## 1. Prepare testing environment

Refer to README, install packages corresponding to CUDA, CUDNN and TensorRT version.

Here, we use Torch1.8.0 inference on V100 and TensorRT 7.2 on T4.

@@ -31,12 +31,11 @@ To get inference speed with TensorRT in FP16 mode on T4, you can follow the ste
First, export pytorch model as onnx format using the following command:

```shell
python deploy/ONNX/export_onnx.py --weights yolov6n.pt --device 0 --batch [1 or 32]
```

Second, generate an inference trt engine and test speed using `trtexec`:

```
trtexec --onnx=yolov6n.onnx --workspace=1024 --avgRuns=1000 --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw
```
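`trtexec` can also serialize the engine it builds so later speed runs skip the rebuild; `--saveEngine` is a standard trtexec option rather than something this document prescribes:

```shell
trtexec --onnx=yolov6n.onnx --saveEngine=yolov6n.trt --workspace=1024 --avgRuns=1000 --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw
```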
