Paper: https://arxiv.org/abs/2311.18405
The code upload is incomplete.
Image-based virtual try-on enables users to virtually try on different garments by altering the original clothes in their photographs. Generative Adversarial Networks (GANs) dominate research on image-based virtual try-on, but have not resolved problems such as unnatural garment deformation and blurry generation quality. Recently, diffusion models have emerged with surprising performance across various image generation tasks. While the generative quality of diffusion models is impressive, achieving controllability poses a significant challenge when applying them to virtual try-on tasks, and their many denoising iterations limit their potential for real-time applications. In this paper, we propose a Controllable Accelerated virtual Try-on method with a Diffusion Model, called CAT-DM. To enhance controllability, a basic diffusion-based virtual try-on network is designed that uses ControlNet to introduce additional control conditions and improves the feature extraction of garment images. For acceleration, CAT-DM initiates the reverse denoising process from an implicit distribution generated by a pre-trained GAN-based model. Compared with previous diffusion-based try-on methods, CAT-DM not only retains the pattern and texture details of the in-shop garment but also reduces the number of sampling steps without compromising generation quality. Extensive experiments demonstrate the superiority of CAT-DM over both GAN-based and diffusion-based methods in producing more realistic images and accurately reproducing garment patterns.
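The core acceleration idea, in brief: instead of starting the reverse diffusion from pure Gaussian noise, the sampler starts from a noised version of a coarse try-on image produced by a pre-trained GAN model, so only a few denoising steps are needed. The snippet below is a simplified, illustrative sketch of this idea; `gan_tryon`, `denoise_step`, and the cosine noise schedule are hypothetical placeholders, not the repository's actual interfaces.

```python
import math
import torch

def truncated_sample(gan_tryon, denoise_step, person, garment,
                     t_start=200, T=1000, n_steps=2):
    """Start reverse denoising from a noised GAN prediction instead of pure noise."""
    x0_coarse = gan_tryon(person, garment)                   # coarse try-on result from a pre-trained GAN
    noise = torch.randn_like(x0_coarse)
    alpha_bar = math.cos((t_start / T) * math.pi / 2) ** 2   # assumed cosine noise schedule
    x_t = math.sqrt(alpha_bar) * x0_coarse + math.sqrt(1.0 - alpha_bar) * noise  # q(x_t | x_0)
    for t in torch.linspace(t_start, 0, n_steps).long():     # only a few reverse steps
        x_t = denoise_step(x_t, t, person, garment)          # one conditional denoising step
    return x_t
```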
Our experiments were conducted on two NVIDIA GeForce RTX 4090 graphics cards, each with 24 GB of video memory. Please note that the model cannot be trained on graphics cards with less video memory than the RTX 4090.
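If you are unsure whether your GPUs qualify, the following optional snippet (our own helper, not part of the repository) prints the memory of each visible CUDA device:

```python
import torch

# Print the total memory of each visible CUDA device; training needs roughly 24 GB per GPU.
for i in range(torch.cuda.device_count()):
    gib = torch.cuda.get_device_properties(i).total_memory / 1024**3
    status = "ok" if gib >= 23 else "likely too small for training"
    print(f"GPU {i}: {gib:.1f} GiB ({status})")
```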
- Clone the repository:
  ```bash
  git clone https://github.com/zengjianhao/CAT-DM
  ```
- A suitable `conda` environment named `CAT-DM` can be created and activated with:
  ```bash
  cd CAT-DM
  conda env create -f environment.yaml
  conda activate CAT-DM
  ```
- If you want to change the name of the created environment, modify the `name` field in both `environment.yaml` and `setup.py`.
- Make sure that `conda` is installed on your computer.
- If a network error occurs, try updating the environment with `conda env update -f environment.yaml`.
- Install xFormers:
  ```bash
  git clone https://github.com/facebookresearch/xformers.git
  cd xformers
  git submodule update --init --recursive
  pip install -r requirements.txt
  pip install -U xformers
  cd ..
  rm -rf xformers
  ```
- Open `src/taming-transformers/taming/data/utils.py`, delete `from torch._six import string_classes`, and change `elif isinstance(elem, string_classes):` to `elif isinstance(elem, str):` (a small helper script for this edit is sketched below).
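If you prefer not to edit the file by hand, a script along these lines (our own helper, run from the repository root) applies the same two changes:

```python
from pathlib import Path

# Apply the two edits described above to taming-transformers.
path = Path("src/taming-transformers/taming/data/utils.py")
text = path.read_text()
text = text.replace("from torch._six import string_classes\n", "")
text = text.replace("elif isinstance(elem, string_classes):",
                    "elif isinstance(elem, str):")
path.write_text(text)
```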
- Download the VITON-HD dataset.
- Create a folder `datasets`.
- Put the VITON-HD dataset into this folder and rename it to `VITON-HD`.
- Generate the mask images:
  ```bash
  # Generate the train dataset mask images
  python tools/viton_mask.py datasets/VITON-HD/train datasets/VITON-HD/train/mask
  # Generate the test dataset mask images
  python tools/viton_mask.py datasets/VITON-HD/test datasets/VITON-HD/test/mask
  ```
- Download the DressCode dataset.
- Create a folder `datasets` (if it does not already exist).
- Put the DressCode dataset into this folder and rename it to `DressCode`.
- Generate the mask images and the agnostic images:
  ```bash
  # Generate the dresses dataset mask images and the agnostic images
  python tools/dresscode_mask.py datasets/DressCode/dresses datasets/DressCode/dresses/mask
  # Generate the lower_body dataset mask images and the agnostic images
  python tools/dresscode_mask.py datasets/DressCode/lower_body datasets/DressCode/lower_body/mask
  # Generate the upper_body dataset mask images and the agnostic images
  python tools/dresscode_mask.py datasets/DressCode/upper_body datasets/DressCode/upper_body/mask
  ```
The `datasets` folder should be organized as follows:

```
datasets
├── VITON-HD
│   ├── test
│   │   ├── agnostic-mask
│   │   ├── mask
│   │   ├── cloth
│   │   ├── image
│   │   ├── image-densepose
│   │   ├── ...
│   ├── test_pairs.txt
│   ├── train
│   │   ├── agnostic-mask
│   │   ├── mask
│   │   ├── cloth
│   │   ├── image
│   │   ├── image-densepose
│   │   ├── ...
│   └── train_pairs.txt
├── DressCode
│   ├── dresses
│   │   ├── dense
│   │   ├── images
│   │   ├── mask
│   │   ├── ...
│   ├── lower_body
│   │   ├── dense
│   │   ├── images
│   │   ├── mask
│   │   ├── ...
│   ├── upper_body
│   │   ├── dense
│   │   ├── images
│   │   ├── mask
│   │   ├── ...
│   ├── test_pairs_paired.txt
│   ├── test_pairs_unpaired.txt
│   ├── train_pairs.txt
│   └── ...
```
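As an optional sanity check (our own snippet, not part of the repository), you can verify that the expected sub-folders from the layout above are present before training:

```python
from pathlib import Path

# Check that the sub-folders shown in the layout above exist under datasets/.
expected = {
    "VITON-HD/train": ["agnostic-mask", "mask", "cloth", "image", "image-densepose"],
    "VITON-HD/test": ["agnostic-mask", "mask", "cloth", "image", "image-densepose"],
    "DressCode/dresses": ["dense", "images", "mask"],
    "DressCode/lower_body": ["dense", "images", "mask"],
    "DressCode/upper_body": ["dense", "images", "mask"],
}
root = Path("datasets")
for parent, subdirs in expected.items():
    for sub in subdirs:
        folder = root / parent / sub
        print(f"{folder}: {'ok' if folder.is_dir() else 'MISSING'}")
```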
PS: When we conducted our experiments, VITON-HD had not yet released the `agnostic-mask` images. We used our own implemented `mask`, so if you use VITON-HD's `agnostic-mask`, the generated results may differ.
- Download the Paint-by-Example model.
- Put the Paint-by-Example model into the folder `checkpoints` and rename it to `pbe.ckpt`.
- Make the ControlNet model (see the sketch after these commands for the idea behind the changed input dimension):
  - VITON-HD:
    ```bash
    python tools/add_control.py checkpoints/pbe.ckpt checkpoints/pbe-dim6.ckpt configs/train-viton.yaml
    ```
  - DressCode:
    ```bash
    python tools/add_control.py checkpoints/pbe.ckpt checkpoints/pbe-dim5.ckpt configs/train-dresscode.yaml
    ```
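Judging from the output names (`pbe-dim6.ckpt`, `pbe-dim5.ckpt`), `tools/add_control.py` prepares a ControlNet-style checkpoint whose control input has 6 channels for VITON-HD and 5 for DressCode. The sketch below only illustrates the usual zero-padding trick for widening a convolution's input channels without disturbing pretrained behavior; the checkpoint key and channel count are assumptions, and the script itself is the authoritative implementation.

```python
import torch

def extend_input_channels(weight: torch.Tensor, new_in_channels: int) -> torch.Tensor:
    """Zero-pad a conv weight of shape [out, in, kH, kW] to accept more input channels."""
    out_ch, in_ch, kh, kw = weight.shape
    extended = torch.zeros(out_ch, new_in_channels, kh, kw, dtype=weight.dtype)
    extended[:, :in_ch] = weight   # keep the pretrained weights for the original channels
    return extended                # extra channels start at zero, so outputs are initially unchanged

# Hypothetical usage: widen the first conv of a copied control branch to 6 input channels.
state = torch.load("checkpoints/pbe.ckpt", map_location="cpu")["state_dict"]
key = "control_model.input_blocks.0.0.weight"  # assumed key name, for illustration only
if key in state:
    state[key] = extend_input_channels(state[key], new_in_channels=6)
```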
- Train on VITON-HD:
  ```bash
  bash train-viton.sh
  ```
- Train on DressCode:
  ```bash
  bash train-dresscode.sh
  ```
- Test on VITON-HD:
  - Download the pretrained model and directly generate the try-on results:
    ```bash
    bash test-viton.sh
    ```
  - Poisson Blending (a minimal illustration of this step is sketched after this list)
- Test on DressCode:
  - Download the pretrained model and directly generate the try-on results:
    ```bash
    bash test-dresscode.sh
    ```
  - Poisson Blending
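The Poisson Blending step pastes the generated garment region back into the original photo so that untouched areas (face, background) are preserved exactly. Below is a minimal illustration using OpenCV's `seamlessClone`; the repository may implement this step differently, and the file names are placeholders.

```python
import cv2
import numpy as np

generated = cv2.imread("result.png")                  # try-on image produced by the model
original = cv2.imread("person.png")                   # original person photo
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # garment-region mask (white = replace)

ys, xs = np.where(mask > 0)
center = (int(xs.mean()), int(ys.mean()))             # center of the region to blend
blended = cv2.seamlessClone(generated, original, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)
```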
```bibtex
@article{zeng2023cat,
  title={CAT-DM: Controllable Accelerated Virtual Try-on with Diffusion Model},
  author={Zeng, Jianhao and Song, Dan and Nie, Weizhi and Tian, Hongshuo and Wang, Tongtong and Liu, Anan},
  journal={arXiv preprint arXiv:2311.18405},
  year={2023}
}
```