We provide the training and testing code for SAN, implemented in PyTorch.
- Download 300W-Style and AFLW-Style from Google Drive, and extract the downloaded files into `~/datasets/`.
- In the 300W-Style and AFLW-Style directories, the `Original` sub-directory contains the original images from 300-W and AFLW.
- The sketch, light, and gray style images are used to analyze the effect of image style variance on facial landmark detection.
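The relation between a gray-style image and its original can be sketched with the standard BT.601 luma conversion. This is only an illustrative approximation, not the pipeline that produced the shipped style images:

```python
def rgb_to_gray_pixel(pixel):
    """Map one (R, G, B) pixel to a gray-style equivalent using
    the ITU-R BT.601 luma weights."""
    r, g, b = pixel
    y = round(0.299 * r + 0.587 * g + 0.114 * b)
    return (y, y, y)

def to_gray_style(image):
    """Apply the conversion to a whole image given as rows of pixels."""
    return [[rgb_to_gray_pixel(p) for p in row] for row in image]
```

A gray input is left unchanged, so the conversion is idempotent, which matches the intuition that the gray style removes only color information.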
```
~/datasets/
├── 300W-Style/
│   ├── 300W-Gray     : 300W (afw, helen, ibug, lfpw)
│   ├── 300W-Light    : 300W (afw, helen, ibug, lfpw)
│   ├── 300W-Sketch   : 300W (afw, helen, ibug, lfpw)
│   ├── 300W-Original : 300W (afw, helen, ibug, lfpw)
│   └── Bounding_Boxes
└── AFLW-Style/
    ├── aflw-Gray     : 0, 2, 3
    ├── aflw-Light    : 0, 2, 3
    ├── aflw-Sketch   : 0, 2, 3
    ├── aflw-Original : 0, 2, 3
    └── annotation    : 0, 2, 3
```
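After extraction, it can be worth sanity-checking that the layout above is in place before running the preparation scripts. A small hypothetical helper (the directory names come from the listing above; this script is not part of the repo):

```python
import os

def check_datasets(root=os.path.expanduser("~/datasets")):
    """Return the list of expected style sub-directories that are missing
    under `root`; an empty list means the layout looks correct."""
    expected = {
        "300W-Style": ["300W-Gray", "300W-Light", "300W-Sketch",
                       "300W-Original", "Bounding_Boxes"],
        "AFLW-Style": ["aflw-Gray", "aflw-Light", "aflw-Sketch",
                       "aflw-Original", "annotation"],
    }
    missing = []
    for style_dir, subdirs in expected.items():
        for sub in subdirs:
            path = os.path.join(root, style_dir, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```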
```
cd cache_data
python aflw_from_mat.py
python generate_300W.py
```
The generated list files will be saved into `./cache_data/lists/300W` and `./cache_data/lists/AFLW`.
```
python crop_pic.py
```
The above command pre-crops the face images and saves them into `./cache_data/cache/300W` and `./cache_data/cache/AFLW`.
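Conceptually, pre-cropping extracts each face region from its bounding box, usually with some surrounding context. A simplified sketch — the box format and the `margin` expansion here are assumptions for illustration, not the exact behavior of `crop_pic.py`:

```python
def crop_with_margin(image, box, margin=0.2):
    """Crop `image` (rows of pixels) to bounding box (x1, y1, x2, y2),
    expanded by `margin` times the box size on each side and clipped
    to the image borders."""
    h, w = len(image), len(image[0])
    x1, y1, x2, y2 = box
    mx = int((x2 - x1) * margin)
    my = int((y2 - y1) * margin)
    x1, y1 = max(0, x1 - mx), max(0, y1 - my)
    x2, y2 = min(w, x2 + mx), min(h, y2 + my)
    return [row[x1:x2] for row in image[y1:y2]]
```

Expanding the box before cropping keeps landmarks near the face contour inside the crop even when the detector's box is tight.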
- Step-1: cluster images into different groups, for example: `sh scripts/300W/300W_Cluster.sh 0,1 GTB 3`.
- Step-2: use `sh scripts/300W/300W_CYCLE_128.sh 0,1 GTB` or `sh scripts/300W/300W_CYCLE_128.sh 0,1 DET` to train SAN on 300-W.
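Step-1 assigns each training image to a style group. As a toy illustration of the idea only (not the repo's clustering script), a 1-D k-means over a hypothetical per-image brightness statistic:

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Toy 1-D k-means: group scalar `values` around k centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # random initial centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:                     # assign to nearest center
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else centers[i]   # recompute means
                   for i, g in enumerate(groups)]
    return centers, groups
```

In SAN the features being clustered are learned image representations rather than a single scalar, but the grouping step follows the same assign-then-recenter loop.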
- Step-1: cluster images into different groups, for example: `sh scripts/AFLW/AFLW_Cluster.sh 0,1 GTB 3`.
- Step-2: use `sh scripts/AFLW/AFLW_CYCLE_128.FULL.sh` or `sh scripts/AFLW/AFLW_CYCLE_128.FRONT.sh` to train SAN on AFLW.
Please cite the following paper in your publications if it helps your research:
```
@inproceedings{dong2018san,
  title={Style Aggregated Network for Facial Landmark Detection},
  author={Dong, Xuanyi and Yan, Yan and Ouyang, Wanli and Yang, Yi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018},
}
```
To ask questions or report problems, please open an issue on the issue tracker.