
Style Aggregated Network for Facial Landmark Detection, CVPR 2018

stoneyang-face/SAN

We provide the training and testing code for SAN, implemented in PyTorch.

Preparation

Dependencies

Download the datasets

  • Download 300W-Style and AFLW-Style from Google Drive, and extract the downloaded files into ~/datasets/.
  • In 300W-Style and AFLW-Style directories, the Original sub-directory contains the original images from 300-W and AFLW.
  • The sketch, light, and gray style images are used to analyze the image style variance in facial landmark detection.

Figure 1. Our 300W-Style and AFLW-Style datasets. There are four styles: original, sketch, light, and gray.
300W-Style Directory
  • 300W-Gray : 300W afw helen ibug lfpw
  • 300W-Light : 300W afw helen ibug lfpw
  • 300W-Sketch : 300W afw helen ibug lfpw
  • 300W-Original : 300W afw helen ibug lfpw
  • Bounding_Boxes
AFLW-Style Directory
  • aflw-Gray : 0 2 3
  • aflw-Light : 0 2 3
  • aflw-Sketch : 0 2 3
  • aflw-Original : 0 2 3
  • annotation: 0 2 3

Generate lists for training and evaluation

cd cache_data
python aflw_from_mat.py
python generate_300W.py

The generated list files will be saved into ./cache_data/lists/300W and ./cache_data/lists/AFLW.
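The list-generation scripts above are repo-specific. As a rough illustration of what such a list file contains, here is a minimal hypothetical generator; the directory layout, file extensions, and "image annotation" line format are assumptions for illustration, not the actual output of aflw_from_mat.py or generate_300W.py:

```python
import os

def write_list(image_dir, ann_dir, out_path):
    """Pair every image with its same-named annotation file and write
    one 'image_path annotation_path' line per sample.
    Hypothetical sketch: the real scripts also parse landmark data."""
    with open(out_path, "w") as f:
        for name in sorted(os.listdir(image_dir)):
            stem, ext = os.path.splitext(name)
            if ext.lower() not in (".jpg", ".png"):
                continue  # skip non-image files
            ann = os.path.join(ann_dir, stem + ".pts")
            if os.path.exists(ann):  # keep only annotated images
                f.write(f"{os.path.join(image_dir, name)} {ann}\n")
```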

Prepare images for training the style-aggregated face generation module

python crop_pic.py

The above command pre-crops the face images and saves them into ./cache_data/cache/300W and ./cache_data/cache/AFLW.
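Conceptually, the cropping step amounts to expanding each face bounding box by a margin and cutting out that region. A minimal NumPy sketch of the idea; the margin value and the array-based interface are assumptions, not crop_pic.py's actual code:

```python
import numpy as np

def crop_face(img, box, margin=0.2):
    """Crop a face from an HxWxC image array.

    box: (x1, y1, x2, y2) face bounding box; the box is enlarged by
    `margin` times its width/height on each side, clipped to the image.
    """
    h, w = img.shape[:2]
    x1, y1, x2, y2 = box
    bw, bh = x2 - x1, y2 - y1
    x1 = max(0, int(x1 - margin * bw))
    y1 = max(0, int(y1 - margin * bh))
    x2 = min(w, int(x2 + margin * bw))
    y2 = min(h, int(y2 + margin * bh))
    return img[y1:y2, x1:x2]
```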

Training and Evaluation

300-W

  • Step-1 : cluster images into different groups, for example sh scripts/300W/300W_Cluster.sh 0,1 GTB 3.
  • Step-2 : use sh scripts/300W/300W_CYCLE_128.sh 0,1 GTB or sh scripts/300W/300W_CYCLE_128.sh 0,1 DET to train SAN on 300-W.

AFLW

  • Step-1 : cluster images into different groups, for example sh scripts/AFLW/AFLW_Cluster.sh 0,1 GTB 3.
  • Step-2 : use sh scripts/AFLW/AFLW_CYCLE_128.FULL.sh or sh scripts/AFLW/AFLW_CYCLE_128.FRONT.sh to train SAN on AFLW.
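In both pipelines, Step-1 partitions the training images into style groups (the trailing 3 in the cluster commands is the number of groups). A bare-bones k-means sketch of that idea on plain feature vectors; the repo's scripts cluster learned style features, not raw vectors like this:

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Assign each feature vector to the nearest of k centroids,
    then recompute centroids; repeat. Returns per-sample labels."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # pairwise distances: (num_samples, k)
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():  # keep empty clusters unchanged
                centers[j] = feats[labels == j].mean(axis=0)
    return labels
```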

Normalization

Figure 2. We use the distance between the outer corners of the eyes, i.e., the 37th and 46th points, for normalization.
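This inter-ocular normalization can be written as a short function. A sketch assuming 68-point annotations stored as (68, 2) arrays, so the 37th and 46th landmarks are indices 36 and 45 with 0-based indexing:

```python
import numpy as np

def normalized_error(pred, gt):
    """Mean point-to-point error divided by the inter-ocular distance
    (distance between the outer eye corners, landmarks 37 and 46)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    inter_ocular = np.linalg.norm(gt[36] - gt[45])
    per_point = np.linalg.norm(pred - gt, axis=1)
    return per_point.mean() / inter_ocular
```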

Citation

Please cite the following paper in your publications if it helps your research:

@inproceedings{dong2018san,
   title={Style Aggregated Network for Facial Landmark Detection},
   author={Dong, Xuanyi and Yan, Yan and Ouyang, Wanli and Yang, Yi},
   booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
   year={2018},
}

Contact

To ask questions or report issues, please open an issue on the issues tracker.
