A flexible, effective and fast network for cross-view gait recognition. It is consistent with the results reported in *GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition*.
We achieve Rank@1 = 95.0% on CASIA-B and Rank@1 = 87.1% on OU-MVLP.
- Python 3.6
- PyTorch 0.4+
- GPU
Note that our code was tested with PyTorch 0.4.
Download CASIA-B Dataset
ATTENTION
- Organize the directory as:
  `your_dataset_path/resolutions/dataset_names/subject_ids/walking_conditions/views`,
  e.g. `gaitdata/64/CASIA-B/001/nm-01/000/`.
  (We will update the code to be more compatible. A quick sanity-check sketch follows this list.)
- You should cut and align the raw silhouettes yourself. Our experiments use the alignment method from this paper.
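As a quick sanity check of the layout above, the short sketch below walks one subject's folder and lists the view directories it finds. The `gaitdata` root and the helper name `list_views` are illustrative and not part of the repository.

```python
# Sanity-check sketch for the expected layout (names below are illustrative):
# your_dataset_path/resolutions/dataset_names/subject_ids/walking_conditions/views
import os

def list_views(dataset_root, resolution, dataset, subject, condition):
    """Return the view folders found for one (subject, walking condition) pair."""
    seq_dir = os.path.join(dataset_root, resolution, dataset, subject, condition)
    if not os.path.isdir(seq_dir):
        raise FileNotFoundError('Expected directory not found: %s' % seq_dir)
    return sorted(d for d in os.listdir(seq_dir)
                  if os.path.isdir(os.path.join(seq_dir, d)))

if __name__ == '__main__':
    # E.g. gaitdata/64/CASIA-B/001/nm-01/ should contain view folders such as 000, 018, ...
    print(list_views('gaitdata', '64', 'CASIA-B', '001', 'nm-01'))
```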
Furthermore, you can also test our code on the OU-MVLP dataset. The number of channels and the training batch size are slightly different for this dataset. For more details, please refer to our paper.
In `config.py`, you might want to change the following settings (a minimal sketch follows the list):
- `WORK_PATH` path to save/load checkpoints
- `CUDA_VISIBLE_DEVICES` indices of GPUs
- `dataset_path` (necessary) root path of the dataset (for the above example, it is "gaitdata")
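For orientation only, here is a minimal sketch of how these three settings could look. The values are placeholders, and the actual `config.py` in the repository may nest them differently (e.g. `dataset_path` inside a data sub-dictionary), so edit the existing file rather than replacing it.

```python
# Minimal sketch of the settings discussed above -- placeholder values, not defaults.
conf = {
    "WORK_PATH": "./work",          # checkpoints are saved to / loaded from here
    "CUDA_VISIBLE_DEVICES": "0,1",  # indices of the GPUs to use
    "dataset_path": "gaitdata",     # (necessary) root path of the dataset
}
```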
Train a model by running `python train.py`.
Use the trained model to extract features by running `python test.py`, with the following options (an example invocation follows the list):
- `--iter` iteration of the checkpoint to load
- `--batch_size` batch size of the parallel test
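For example, a typical evaluation call might be `python test.py --iter 80000 --batch_size 1`; the iteration number here is only illustrative and should point at a checkpoint you have actually trained.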
It will output Rank@1 of all three walking conditions.
Note that the test is parallelizable. To conduct a faster evaluation, you can use `--batch_size` to change the batch size for the test.