This repo is built on top of https://github.com/biubug6/Pytorch_Retinaface

The train loop was moved to PyTorch Lightning, which adds:
- Distributed training
- fp16
- Synchronized BatchNorm
- Support for various loggers like W&B or Neptune.ml
Hyperparameters that were scattered across the code were moved to the config at retinaface/config.
Augmentations => Albumentations

Color transforms that were implemented manually were replaced by the Albumentations library.
Todo:
- Horizontal Flip is not implemented in Albumentations
- Spatial transforms like rotations or transpose are not implemented yet.
Color transforms are defined in the config.
In order to track the progress, the mAP metric is calculated on the validation set.
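The exact metric implementation lives in the repo's code; as a rough illustration, average precision for a single image at a fixed IoU threshold can be sketched as follows (the helper names here are mine, not the repo's, and this is the simple non-interpolated AP, not the full WIDER FACE evaluation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


def average_precision(predictions, ground_truth, iou_threshold=0.5):
    """AP for one image: predictions are (score, box) pairs, ground_truth is boxes."""
    matched = set()
    tp = fp = 0
    ap = 0.0
    # Walk predictions from most to least confident; each ground-truth box
    # may be matched at most once.
    for score, box in sorted(predictions, key=lambda p: p[0], reverse=True):
        candidates = [
            (iou(box, gt), i) for i, gt in enumerate(ground_truth) if i not in matched
        ]
        best = max(candidates, default=(0.0, None))
        if best[0] >= iou_threshold:
            matched.add(best[1])
            tp += 1
            ap += tp / (tp + fp)  # precision at this recall point
        else:
            fp += 1
    return ap / len(ground_truth) if ground_truth else 0.0
```

mAP is then the mean of such per-class AP values, usually averaged over several IoU thresholds.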
The pipeline expects labels in the format:
```json
[
  {
    "file_name": "0--Parade/0_Parade_marchingband_1_849.jpg",
    "annotations": [
      {
        "x_min": 449,
        "y_min": 330,
        "width": 122,
        "height": 149,
        "landmarks": [
          488.906,
          373.643,
          0.0,
          542.089,
          376.442,
          0.0,
          515.031,
          412.83,
          0.0,
          485.174,
          425.893,
          0.0,
          538.357,
          431.491,
          0.0,
          0.82
        ]
      }
    ]
  }
]
```
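Such a label file can be read with nothing but the standard library. The sketch below assumes the flat `landmarks` list holds five (x, y, flag) triples followed by one trailing value (my interpretation of the layout, not something the repo documents):

```python
import json

# Inline stand-in for a label file in the format shown above.
labels = json.loads("""[
  {
    "file_name": "0--Parade/0_Parade_marchingband_1_849.jpg",
    "annotations": [
      {
        "x_min": 449, "y_min": 330, "width": 122, "height": 149,
        "landmarks": [488.906, 373.643, 0.0, 542.089, 376.442, 0.0,
                      515.031, 412.83, 0.0, 485.174, 425.893, 0.0,
                      538.357, 431.491, 0.0, 0.82]
      }
    ]
  }
]""")

for image in labels:
    for ann in image["annotations"]:
        # Convert the box to (x_min, y_min, x_max, y_max).
        box = (ann["x_min"], ann["y_min"],
               ann["x_min"] + ann["width"], ann["y_min"] + ann["height"])
        # Group the first 15 values into five (x, y, flag) triples.
        flat = ann["landmarks"]
        points = [tuple(flat[i:i + 3]) for i in range(0, 15, 3)]
        print(image["file_name"], box, points)
```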
```
python retinaface/train.py -h
usage: train.py [-h] -c CONFIG_PATH

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_PATH, --config_path CONFIG_PATH
                        Path to the config.
```
```
python retinaface/inference.py -h
usage: inference.py [-h] -i INPUT_PATH -c CONFIG_PATH -o OUTPUT_PATH [-v]
                    [-g NUM_GPUS] [-m MAX_SIZE] [-b BATCH_SIZE]
                    [-j NUM_WORKERS]
                    [--confidence_threshold CONFIDENCE_THRESHOLD]
                    [--nms_threshold NMS_THRESHOLD] -w WEIGHT_PATH
                    [--keep_top_k KEEP_TOP_K] [--world_size WORLD_SIZE]
                    [--local_rank LOCAL_RANK] [--fp16]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_PATH, --input_path INPUT_PATH
                        Path with images.
  -c CONFIG_PATH, --config_path CONFIG_PATH
                        Path to config.
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        Path to save jsons.
  -v, --visualize       Visualize predictions.
  -g NUM_GPUS, --num_gpus NUM_GPUS
                        The number of GPUs to use.
  -m MAX_SIZE, --max_size MAX_SIZE
                        Resize the largest side to this number.
  -b BATCH_SIZE, --batch_size BATCH_SIZE
                        batch_size
  -j NUM_WORKERS, --num_workers NUM_WORKERS
                        num_workers
  --confidence_threshold CONFIDENCE_THRESHOLD
                        confidence_threshold
  --nms_threshold NMS_THRESHOLD
                        nms_threshold
  -w WEIGHT_PATH, --weight_path WEIGHT_PATH
                        Path to weights.
  --keep_top_k KEEP_TOP_K
                        keep_top_k
  --world_size WORLD_SIZE
                        number of nodes for distributed training
  --local_rank LOCAL_RANK
                        node rank for distributed training
  --fp16                Use fp16
```
Weights for the model are provided together with the config.
```
python -m torch.distributed.launch --nproc_per_node=<num_gpus> retinaface/inference.py <parameters>
```
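Under `torch.distributed.launch`, each spawned process receives its own rank, and the input images can be sharded across processes so that each GPU handles a disjoint subset. A minimal round-robin sharding sketch (the function name is mine, not from the repo):

```python
def shard_for_rank(paths, world_size, rank):
    """Give each distributed process every world_size-th file, starting at its rank."""
    return paths[rank::world_size]


images = ["a.jpg", "b.jpg", "c.jpg", "d.jpg", "e.jpg"]
# With two processes, rank 0 and rank 1 split the list without overlap.
print(shard_for_rank(images, 2, 0))  # ['a.jpg', 'c.jpg', 'e.jpg']
print(shard_for_rank(images, 2, 1))  # ['b.jpg', 'd.jpg']
```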