Factorized Diffusion Autoencoder for Unsupervised Disentangled Representation Learning (AAAI 2024)
[paper] [supplementary material]

This repository contains the code for training and evaluation on Cars3d, Shapes3d and mpi3d_real_complex.

Dependencies

The implementation is based on PyTorch 1.13.1+cu117 and was run on a single NVIDIA RTX 3090 GPU. The required packages are listed in requirements.txt.

pip install -r requirements.txt
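
For reference, a possible environment setup is sketched below. The conda environment name fdae and the explicit PyTorch install command are assumptions for illustration; if requirements.txt already pins torch for your platform, the separate install step can be skipped.

conda create -n fdae python=3.9 -y   # hypothetical environment name
conda activate fdae
# CUDA 11.7 build of PyTorch 1.13.1, matching the version noted above
pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirements.txt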

Prepare data

Model training and testing

We provide examples of training in train.sh. After training, both the model checkpoint and the results of the evaluation metrics DCI, FactorVAE, and MIG are saved in the ./logs folder. To compute the mean and standard deviation over multiple runs, run collect_result.py (after modifying its model_list variable); the aggregated results are written to ./exp_results.
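
A minimal sketch of this workflow is shown below; train.sh and collect_result.py are the scripts shipped with this repository, while the example model_list entries are hypothetical placeholders.

# Train a model; checkpoints and DCI / FactorVAE / MIG scores are written to ./logs
bash train.sh

# Aggregate multiple runs: first edit model_list in collect_result.py, e.g.
#   model_list = ['logs/run_seed0', 'logs/run_seed1']   # hypothetical run names
python collect_result.py   # mean and std of the metrics are written to ./exp_results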

Visualization

image_sample.sh provides examples of visualizing masks and generating images by swapping content codes and mask codes.
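
A typical invocation might simply be the following; any arguments configured inside image_sample.sh (checkpoint paths, output directories) are defined by the script itself and are not listed here.

# Visualize masks and generate images with swapped content/mask codes
bash image_sample.sh   # outputs are written to the locations configured in the script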

Acknowledgement

Our code is based on openai/consistency_models, and the evaluation code is adapted from xrenaa/DisCo.

Citation

If you find this method or code useful, please cite:

@inproceedings{wu2024fdae,
  title={Factorized Diffusion Autoencoder for Unsupervised Disentangled Representation Learning},
  author={Wu, Ancong and Zheng, Wei-Shi},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2024},
}
