This repository contains the PyTorch implementation of ASENet_V2+GD proposed in our paper [Toward a More Robust Fine-grained Fashion Retrieval], accepted at MIPR 2023.
- PyTorch 1.1.0
- CUDA 10.1.168
- Python 3.6.2
We use Anaconda to create our experimental environment. You can rebuild it with the following commands:
```sh
conda create -n {your_env_name} python=3.6
conda activate {your_env_name}
pip install -r requirements.txt
...
conda deactivate
```
We provide our dataset splits and dataset descriptions as a set of meta files. Download them with the following commands:
```sh
wget -c -P data/ http://www.maryeon.com/file/meta_data.tar.gz
cd data/
tar -zxvf meta_data.tar.gz
```
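If `wget` is unavailable, the same download-and-unpack step can be done from Python with the standard library. This is only a sketch equivalent to the shell commands above; the URL comes from those commands, while the function name and the lack of resume support (`wget -c`) are our own simplifications.

```python
import os
import tarfile
import urllib.request

# URL taken from the wget command above.
META_URL = "http://www.maryeon.com/file/meta_data.tar.gz"


def download_and_extract(url: str, dest_dir: str = "data") -> None:
    """Download a .tar.gz archive (no resume support) and unpack it into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    archive = os.path.join(dest_dir, os.path.basename(url))
    if not os.path.exists(archive):
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest_dir)


if __name__ == "__main__":
    download_and_extract(META_URL)
```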
As the full FashionAI dataset has not been publicly released, we use its early version from the FashionAI Global Challenge 2018. Sign in and download the data first; then uncompress it into the FashionAI directory:
```sh
unzip fashionAI_attributes_train1.zip -d {your_project_path}/data/FashionAI
unzip fashionAI_attributes_train2.zip -d {your_project_path}/data/FashionAI
```
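The extraction can also be scripted in Python, which is convenient if `unzip` is not installed. This is a sketch with the same effect as the commands above; the archive names come from those commands, and `unzip_all` is a hypothetical helper of ours.

```python
import os
import zipfile


def unzip_all(archives, dest_dir):
    """Extract each zip archive into dest_dir (same effect as running unzip per archive)."""
    os.makedirs(dest_dir, exist_ok=True)
    for path in archives:
        with zipfile.ZipFile(path) as zf:
            zf.extractall(dest_dir)


if __name__ == "__main__":
    unzip_all(
        ["fashionAI_attributes_train1.zip", "fashionAI_attributes_train2.zip"],
        os.path.join("data", "FashionAI"),
    )
```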
You can train the model on the FashionAI dataset (the default) with:
```sh
python main.py
```
When training finishes, two snapshots are saved for testing: the model with the highest performance on the validation set and the model from the latest epoch. You can load either one and evaluate it on the test set:
```sh
python main.py --test --resume runs/{your_exp_name}/xx.pth.tar
```
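If you do not remember which snapshot in a run directory is the most recent, a small helper can locate it before passing it to `--resume`. This is a sketch under our own assumptions: the exact snapshot filenames depend on the training script, and we only assume they end in `.pth.tar` as in the command above.

```python
import glob
import os


def latest_snapshot(run_dir):
    """Return the most recently modified *.pth.tar file in run_dir, or None if none exist."""
    snapshots = glob.glob(os.path.join(run_dir, "*.pth.tar"))
    return max(snapshots, key=os.path.getmtime) if snapshots else None
```

Typical usage would be `python main.py --test --resume $(python -c "...")` or calling `latest_snapshot("runs/{your_exp_name}")` from a driver script.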
If this work helps your research, please consider citing:
Xiao, L., Zhang, X. and Yamasaki, T., 2023, August. Toward a More Robust Fine-Grained Fashion Retrieval. In 2023 IEEE 6th International Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 1-4). IEEE.
```
@inproceedings{Ling_MIPR2023,
  title={Toward a More Robust Fine-grained Fashion Retrieval},
  author={Ling Xiao and Xiaofeng Zhang and Toshihiko Yamasaki},
  booktitle={IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR)},
  pages={1-4},
  year={2023}
}
```