
Commit

Remove the dependency on Synchronized-BatchNorm-PyTorch; Update README
AbnerHqC committed Dec 6, 2018
1 parent 9376f40 commit 61d18ca
Showing 2 changed files with 69 additions and 18 deletions.
81 changes: 66 additions & 15 deletions README.md
@@ -1,22 +1,34 @@
# GaitSet

A flexible, effective and fast network for cross-view gait recognition.
It is consistent with the results in [GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition](https://arxiv.org/abs/1811.06186)
[GaitSet](https://arxiv.org/abs/1811.06186) is a **flexible**, **effective** and **fast** network for cross-view gait recognition.

We achieved **Rank@1=95.0%** on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp)
and **Rank@1=87.1%** on [OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
#### Flexible
The input of GaitSet is a set of silhouettes.

## What's new
- Update the organization of the dataset directory. See [Dataset & Preparation](#dataset--preparation).
You might have to change your `dataset_path` in `config.py`.
- Add a new arg (cache) in both training and test. See [Train](#train) & [Test](#test--evaluation)
- There are **NO constraints** on an input,
which means it can contain **any number** of **non-consecutive** silhouettes filmed under **different viewpoints**
with **different walking conditions**.

- As the input is a set, the **permutation** of the elements in the input
will **NOT change** the output at all.
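
This invariance comes from aggregating the set with a symmetric (order-independent) function. The toy sketch below illustrates the idea with a frame-wise max standing in for the set aggregation; it is an illustration only, not the model itself.

```python
import torch

frames = torch.randn(5, 16)           # 5 frame features, 16 dimensions each (toy sizes)
shuffled = frames[torch.randperm(5)]  # the same set of features in a different order

# A symmetric aggregation, e.g. an element-wise max over the set dimension,
# produces the same output no matter how the frames are ordered.
assert torch.equal(frames.max(dim=0)[0], shuffled.max(dim=0)[0])
```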

#### Effective
It achieves **Rank@1=95.0%** on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp)
and **Rank@1=87.1%** on [OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html),
excluding identical-view cases.

#### Fast
With 8 NVIDIA 1080TI GPUs, it takes only **7 minutes** to conduct an evaluation on
[OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html), which contains 133,780 sequences
with an average of 70 frames per sequence.

## Prerequisites

- Python 3.6
- PyTorch 0.4+
- GPU


## Getting started
### Installation

@@ -25,18 +37,19 @@ You might have to change your `dataset_path` in `config.py`.
- Install [cuDNN7.0](https://developer.nvidia.com/cudnn)
- Install [PyTorch](http://pytorch.org/)

Note that our code was tested with PyTorch 0.4
Note that our code was tested with [PyTorch 0.4](http://pytorch.org/)

### Dataset & Preparation
Download [CASIA-B Dataset](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp)

**\*\*\*ATTENTION\*\*\***
**!!! ATTENTION !!! ATTENTION !!! ATTENTION !!!**
- Organize the directory as:
`your_dataset_path/subject_ids/walking_conditions/views`.
E.g. `CASIA-B/001/nm-01/000/`.
- You should cut and align the raw silhouette by yourself. Our experiments use the align method in
- You need to cut and align the raw silhouettes in the dataset before you can use it.
Our experiments used the alignment method in
[this paper](https://ipsjcva.springeropen.com/articles/10.1186/s41074-018-0039-6).
- The resolution of the sample should be **$64\times64$**
- Every input sample **MUST be resized to 64x64**
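
As a quick sanity check before training, the layout and frame size can be verified with a short script along the lines of the sketch below (the helper name `check_dataset_layout` and the use of Pillow are illustrative assumptions, not part of this repository):

```python
import os
from PIL import Image

def check_dataset_layout(dataset_path):
    """Walk dataset_path/subject_id/walking_condition/view and report frames that are not 64x64."""
    for subject in sorted(os.listdir(dataset_path)):
        for condition in sorted(os.listdir(os.path.join(dataset_path, subject))):
            for view in sorted(os.listdir(os.path.join(dataset_path, subject, condition))):
                view_dir = os.path.join(dataset_path, subject, condition, view)
                for frame in sorted(os.listdir(view_dir)):
                    with Image.open(os.path.join(view_dir, frame)) as img:
                        if img.size != (64, 64):
                            print('Unexpected size {} for {}'.format(
                                img.size, os.path.join(view_dir, frame)))

check_dataset_layout('your_dataset_path')  # e.g. the directory that contains 001/nm-01/000/
```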

Furthermore, you can also test our code on the [OU-MVLP Dataset](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
The number of channels and the training batch size are slightly different for this dataset.
@@ -45,10 +58,10 @@ For more detail, please refer to [our paper](https://arxiv.org/abs/1811.06186).
### Configuration

In `config.py`, you might want to change the following settings:
- `dataset_path` **(NECESSARY)** root path of the dataset
(for the above example, it is "gaitdata")
- `WORK_PATH` path to save/load checkpoints
- `CUDA_VISIBLE_DEVICES` indices of GPUs
- `dataset_path` (necessary) root path of the dataset
(for the above example, it is "gaitdata")
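
For orientation, the settings listed above might look roughly like the sketch below; the values are illustrative assumptions, and this is not the repository's actual `config.py`:

```python
# Rough sketch of the settings described above -- adjust the paths for your machine.
WORK_PATH = "./work"                        # where checkpoints are saved and loaded
CUDA_VISIBLE_DEVICES = "0,1,2,3"            # indices of the GPUs to use
dataset_path = "/data/CASIA-B-silhouettes"  # root containing subject_ids/walking_conditions/views
```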

### Train
Train a model by
@@ -60,7 +73,7 @@ This will accelerate the training.
**Note that** if this arg is set to FALSE, samples will NOT be kept in memory
even if they have been used in former iterations. #Default: TRUE
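
Conceptually, the cache keeps each loaded sample in memory so that later iterations can reuse it instead of rereading it from disk. A minimal sketch of that idea follows; the names are illustrative, and this is not the repository's actual loader:

```python
import os

import numpy as np
from PIL import Image

_sample_cache = {}  # sequence directory -> stacked frames, kept across iterations

def load_silhouette_sequence(seq_dir, cache=True):
    """Load all frames of one sequence as a (num_frames, height, width) array."""
    if cache and seq_dir in _sample_cache:
        # Reuse the copy loaded in a former iteration instead of touching the disk.
        return _sample_cache[seq_dir]
    frames = [np.asarray(Image.open(os.path.join(seq_dir, f)).convert('L'))
              for f in sorted(os.listdir(seq_dir))]
    data = np.stack(frames)
    if cache:
        _sample_cache[seq_dir] = data
    return data
```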

### Test & Evaluation
### Evaluation
Use the trained model to extract features by
```bash
python test.py
```
@@ -73,3 +86,41 @@ This might accelerate the testing. #Default: FALSE
It will output Rank@1 of all three walking conditions.
Note that the test is **parallelizable**.
To conduct a faster evaluation, you could use `--batch_size` to change the batch size for the test.
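
For intuition about what the reported number means, Rank@1 between a probe set and a gallery set can be computed roughly as in the sketch below (the standard protocol in outline, not the repository's evaluation code):

```python
import numpy as np

def rank1_accuracy(probe_feat, probe_label, gallery_feat, gallery_label):
    """Fraction of probes whose nearest gallery feature belongs to the same subject.

    probe_feat, gallery_feat: (N, D) and (M, D) feature arrays;
    probe_label, gallery_label: (N,) and (M,) subject-id arrays.
    """
    # Euclidean distance between every probe and every gallery sample.
    dist = np.linalg.norm(probe_feat[:, None, :] - gallery_feat[None, :, :], axis=2)
    nearest = gallery_label[np.argmin(dist, axis=1)]
    return float(np.mean(nearest == probe_label))
```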

### Transformation
A script for transforming a set of silhouettes into a discriminative representation will be released soon.


## Authors & Contributors
GaitSet is authored by
[Hanqing Chao](https://www.linkedin.com/in/hanqing-chao-9aa42412b/),
[Yiwei He](https://www.linkedin.com/in/yiwei-he-4a6a6bbb/),
[Junping Zhang](http://www.pami.fudan.edu.cn/~jpzhang/)
and Jianfeng Feng from Fudan University.
[Junping Zhang](http://www.pami.fudan.edu.cn/~jpzhang/)
is the corresponding author.
The code is developed by
[Hanqing Chao](https://www.linkedin.com/in/hanqing-chao-9aa42412b/)
and [Yiwei He](https://www.linkedin.com/in/yiwei-he-4a6a6bbb/).
Currently, it is being maintained by
[Hanqing Chao](https://www.linkedin.com/in/hanqing-chao-9aa42412b/)
and Kun Wang.


## Citation
Please cite this paper in your publications if it helps your research:
```
@inproceedings{chao2019gaitset,
  author    = {Chao, Hanqing and He, Yiwei and Zhang, Junping and Feng, Jianfeng},
  booktitle = {AAAI},
  title     = {{GaitSet}: Regarding Gait as a Set for Cross-View Gait Recognition},
  year      = {2019}
}
```
Link to paper:
- [GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition](https://arxiv.org/abs/1811.06186)


## License
GaitSet is freely available for non-commercial use, and may be redistributed under these conditions.
For commercial queries, contact [Junping Zhang](http://www.pami.fudan.edu.cn/~jpzhang/).
6 changes: 3 additions & 3 deletions model/model.py
@@ -7,10 +7,10 @@

import numpy as np
import torch
import torch.nn as nn
import torch.autograd as autograd
import torch.optim as optim
import torch.utils.data as tordata
from sync_batchnorm import DataParallelWithCallback

from .network import TripletLoss, SetNet
from .utils import TripletSampler
@@ -55,9 +55,9 @@ def __init__(self,
self.img_size = img_size

self.encoder = SetNet(self.hidden_dim).float()
self.encoder = DataParallelWithCallback(self.encoder)
self.encoder = nn.DataParallel(self.encoder)
self.triplet_loss = TripletLoss(self.P * self.M, self.hard_or_full_trip, self.margin).float()
self.triplet_loss = DataParallelWithCallback(self.triplet_loss)
self.triplet_loss = nn.DataParallel(self.triplet_loss)
self.encoder.cuda()
self.triplet_loss.cuda()

