By Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj and Le Song
The repository contains the entire pipeline (including all the preprocessing steps) for deep face recognition with SphereFace. The recognition pipeline contains three major steps: face detection, face alignment, and face recognition.
SphereFace is a recently proposed face recognition method. It was initially described in an arXiv technical report and then published in CVPR 2017. To facilitate face recognition research, we give an example of training on CASIA-WebFace and testing on LFW.

The provided network prototxt example is a 28-layer CNN, the same architecture used in Center Face. To fully reproduce the results in the paper, you need to make some small modifications to the network architecture according to the SphereFace paper.
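For reference, the core idea of SphereFace is the angular softmax (A-Softmax) loss, which (with normalized class weights and zeroed biases) penalizes the angle between a deep feature and its class weight with a multiplicative angular margin m. A sketch of the loss as given in the paper (see the paper for the precise formulation and the choice of m):

$$
L_{\mathrm{ang}} = \frac{1}{N}\sum_{i} -\log\frac{e^{\lVert x_i\rVert\,\psi(\theta_{y_i,i})}}{e^{\lVert x_i\rVert\,\psi(\theta_{y_i,i})} + \sum_{j\neq y_i} e^{\lVert x_i\rVert\cos(\theta_{j,i})}},\qquad
\psi(\theta) = (-1)^k\cos(m\theta) - 2k,\quad \theta\in\left[\tfrac{k\pi}{m},\tfrac{(k+1)\pi}{m}\right],\ k\in\{0,\dots,m-1\},
$$

where $\theta_{j,i}$ is the angle between feature $x_i$ and the $j$-th class weight.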
SphereFace is released under the MIT License (refer to the LICENSE file for details).
If you find SphereFace useful in your research, please consider citing:
```
@inproceedings{liu2017sphereface,
  author    = {Liu, Weiyang and Wen, Yandong and Yu, Zhiding and Li, Ming and Raj, Bhiksha and Song, Le},
  title     = {SphereFace: Deep Hypersphere Embedding for Face Recognition},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2017}
}
```
- July 20, 2017
  - This repository was built.
  - To be updated: our pretrained models, some intermediate results, and extracted features will be released soon.
- Requirements for `Matlab`
- Requirements for `Caffe` and `matcaffe` (see: Caffe installation instructions)
- Requirements for `MTCNN` (see: MTCNN - face detection & alignment) and `Pdollar toolbox` (see: Piotr's Image & Video Matlab Toolbox)
- Clone the SphereFace repository. We'll call the directory that you cloned SphereFace into `SPHEREFACE_ROOT`.

  ```Shell
  git clone --recursive https://github.com/wy1iu/sphereface.git
  ```

- Build Caffe and matcaffe.

  ```Shell
  cd $SPHEREFACE_ROOT/tools/caffe-sphereface
  # Now follow the Caffe installation instructions here:
  # http://caffe.berkeleyvision.org/installation.html
  # If you're experienced with Caffe and have all of the requirements installed
  # and your Makefile.config in place, then simply do:
  make all -j8 && make matcaffe
  ```

After successfully completing the installation, you'll be ready to run all the following experiments.
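To quickly verify that matcaffe was built correctly, a minimal smoke test like the one below (run from the Matlab Command Window, started in `$SPHEREFACE_ROOT`) should print the Caffe version without errors; the relative path is a sketch based on this repository's layout.

```Matlab
% Minimal matcaffe smoke test (a sketch; run from $SPHEREFACE_ROOT).
addpath('tools/caffe-sphereface/matlab');   % directory where matcaffe was built
caffe.set_mode_cpu();                       % CPU is enough for a quick check
disp(caffe.version());                      % should print the Caffe version
```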
Note 1: In this part, we assume you are in the directory $SPHEREFACE_ROOT/preprocess/
- Download the training set (`CASIA-WebFace`) and test set (`LFW`) and place them in `$SPHEREFACE_ROOT/preprocess/data/`.

  ```Shell
  mv /your_path/CASIA_WebFace data/
  ./code/get_lfw.sh
  tar xvf data/lfw.tgz -C data/
  ```

  Please make sure that the directory `data/` contains the two datasets (a quick sanity check is sketched after this list).

- Detect faces and facial landmarks in the CASIA-WebFace and LFW datasets using `MTCNN` (see: MTCNN - face detection & alignment).

  ```Matlab
  % In Matlab Command Window
  run code/face_detect_demo.m
  ```

  This will create a file `dataList.mat` in the directory `result/` (you can peek at its contents with the snippet after this list).

- Align faces to a canonical pose using a similarity transformation (the idea is sketched after this list).

  ```Matlab
  % In Matlab Command Window
  run code/face_align_demo.m
  ```

  This will create two folders (`CASIA-WebFace-112X96` and `lfw-112X96`) in the directory `result/`, containing the aligned face images.
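Before running the detection step, you can double-check from the Matlab Command Window that both datasets ended up under `data/`; the folder names below are assumptions, so adjust them to whatever the download produced.

```Matlab
% Sanity check that both datasets are in place (a sketch; folder names assumed).
assert(exist('data/CASIA-WebFace', 'dir') == 7, 'CASIA-WebFace not found under data/');
assert(exist('data/lfw',           'dir') == 7, 'lfw not found under data/');
```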
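To confirm that the detection step produced something sensible, you can inspect `dataList.mat`; the snippet below only lists the variables stored in it rather than assuming their names, since those are whatever `face_detect_demo.m` saves.

```Matlab
% Peek at the detection results saved by code/face_detect_demo.m (a sketch).
S = load('result/dataList.mat');
disp(fieldnames(S));   % variables stored in dataList.mat
```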
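For reference, the alignment step estimates a similarity transformation that maps the five MTCNN landmarks of each face onto a fixed set of canonical coordinates for a 112x96 crop. The sketch below illustrates the idea; the canonical coordinates are the values commonly used for 112x96 crops and are an assumption here, so check `code/face_align_demo.m` for the exact ones used in this repository.

```Matlab
% Similarity-transform alignment of one face (a sketch).
% Inputs: img            - the source image containing the face
%         facial5points  - 5x2 MTCNN landmarks, one [x y] row per point
coord5points = [30.2946, 51.6963;   % left eye          (assumed canonical points)
                65.5318, 51.5014;   % right eye
                48.0252, 71.7366;   % nose tip
                33.5493, 92.3655;   % left mouth corner
                62.7299, 92.2041];  % right mouth corner

tform   = cp2tform(facial5points, coord5points, 'similarity');
cropImg = imtransform(img, tform, 'XData', [1 96], 'YData', [1 112], 'Size', [112 96]);
```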
Note 2: In this part, we assume you are in the directory $SPHEREFACE_ROOT/train/
- Get a list of training images and labels.

  ```Shell
  mv ../preprocess/result/CASIA-WebFace-112X96 data/
  ```
  ```Matlab
  % In Matlab Command Window
  run code/get_list.m
  ```

  We move the aligned face images from the preprocess folder to the train folder, and create a list `CASIA-WebFace-112X96.txt` in the directory `data/` for training (the expected list format is sketched after this list).

- Train the sphereface model.

  ```Shell
  ./code/sphereface/sphereface_train.sh 0,1
  ```

  We obtain a trained model `sphereface_model_iter_28000.caffemodel` and the corresponding log file `sphereface.log` in the directory `result/sphereface/`.
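The training list is expected to follow the standard Caffe `ImageData` format, one `<image path> <label>` pair per line; this is an assumption here, so check `code/get_list.m` for the exact format it writes. A quick way to peek at it:

```Matlab
% Print the first few entries of the training list (a sketch).
fid = fopen('data/CASIA-WebFace-112X96.txt', 'r');
for i = 1:3
    disp(fgetl(fid));
end
fclose(fid);
```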
Note 3: In this part, we assume you are in the directory $SPHEREFACE_ROOT/test/
- Get the pair list of LFW (view 2).

  ```Shell
  mv ../preprocess/result/lfw-112X96 data/
  ./code/get_pairs.sh
  ```

  Make sure that `pairs.txt` is in the directory `data/`.

- Extract deep features and test on LFW (a minimal feature-extraction sketch is given after this list).

  ```Matlab
  % In Matlab Command Window
  run code/evaluation.m
  ```

  Finally we get the accuracy on LFW.
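For orientation, `code/evaluation.m` extracts a deep feature for each aligned LFW image with matcaffe and compares pairs by feature similarity. The sketch below shows the general pattern; the deploy prototxt name, the example image path, and the input preprocessing (BGR channel order, zero-centering, scaling) are assumptions here, so treat `code/evaluation.m` as the reference.

```Matlab
% Minimal feature-extraction sketch with matcaffe (not the exact evaluation code).
addpath('../tools/caffe-sphereface/matlab');
caffe.set_mode_gpu();
net = caffe.Net('../train/code/sphereface_deploy.prototxt', ...                      % assumed deploy prototxt
                '../train/result/sphereface/sphereface_model_iter_28000.caffemodel', 'test');

img = imread('data/lfw-112X96/AJ_Cook/AJ_Cook_0001.jpg');   % an aligned 112x96 face (example path)
img = single(img(:, :, [3 2 1])) - 127.5;                    % RGB -> BGR, zero-center (assumed preprocessing)
img = permute(img, [2 1 3]) * 0.0078125;                     % HxWxC -> WxHxC for Caffe, scale (assumed)

res  = net.forward({img});
feat = res{1};                                               % the network output is used as the deep feature

% A pair of faces is compared by the similarity of their features, e.g. cosine:
% score = dot(feat1, feat2) / (norm(feat1) * norm(feat2));
caffe.reset_all();
```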
Questions can also be left as issues in the repository. We will be happy to answer them.