A collection of Variational AutoEncoders (VAEs) implemented in PyTorch with a focus on reproducibility. The aim of this project is to provide quick and simple working examples for many of the cool VAE models out there. All the models are trained on the CelebA dataset for consistency and comparison. The architectures of all the models are kept as similar as possible, with the same layers, except in cases where the original paper necessitates a radically different architecture.
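What all of these models share is the VAE training objective: a reconstruction term plus a KL term, made differentiable through the reparameterization trick. The sketch below is illustrative only (it is not code from this repo) and uses plain Python for clarity; the actual implementations use PyTorch tensors.

```python
import math
import random

def reparameterize(mu, log_var, rng=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    The reparameterization trick moves the randomness into eps, so the
    sampling step stays differentiable w.r.t. mu and log_var in an
    autograd framework.
    """
    rng = rng or random.Random(0)
    sigma = math.exp(0.5 * log_var)
    return mu + sigma * rng.gauss(0.0, 1.0)

def kl_to_standard_normal(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)) for a single latent dimension.

    Summed over latent dimensions, this is the regularization term in the
    standard VAE loss (the negative ELBO).
    """
    return -0.5 * (1.0 + log_var - mu ** 2 - math.exp(log_var))
```

For a standard Gaussian posterior with `mu = 0` and `log_var = 0`, the KL term is exactly zero, and it grows as the posterior drifts away from the prior.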
- Python >= 3.5
- PyTorch >= 1.3
- PyTorch Lightning >= 0.5.3
```shell
$ git clone https://github.com/AntixK/PyTorch-VAE
$ cd PyTorch-VAE
$ pip install -r requirements.txt
```

Usage:

```shell
$ cd PyTorch-VAE
$ python run.py -c configs/<config-file-name.yaml>
```
Config file template:

```yaml
model_params:
  name: "<name of VAE model>"
  in_channels: 3
  latent_dim:

exp_params:
  data_path: "<path to the celebA dataset>"
  img_size: 64    # Models are designed to work for this size
  batch_size: 64  # Better to have a square number
  LR: 0.005

trainer_params:
  gpus: 1
  max_nb_epochs: 50

logging_params:
  save_dir: "logs/"
  name: "<experiment name>"
  manual_seed:
```
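The internals of `run.py` are not reproduced here, but a runner for this kind of config might look like the following minimal sketch. It assumes PyYAML (`yaml`) is available; the `build_parser`/`load_config` names are illustrative, not the repo's actual functions.

```python
import argparse

import yaml  # PyYAML; assumed to be installed via requirements.txt

def build_parser():
    # Mirrors the `-c` flag used in `python run.py -c configs/<config-file-name.yaml>`
    parser = argparse.ArgumentParser(description="Generic runner for VAE experiments")
    parser.add_argument("-c", "--config", dest="filename",
                        default="configs/vae.yaml",
                        help="path to the experiment config file")
    return parser

def load_config(path):
    # Returns nested dicts matching the template above:
    # model_params, exp_params, trainer_params, logging_params
    with open(path) as f:
        return yaml.safe_load(f)
```

`config["model_params"]["name"]` would then select the model class to instantiate, while `exp_params` and `trainer_params` feed the data module and the PyTorch Lightning trainer.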
| Model | Paper | Reconstruction | Samples |
|---|---|---|---|
| VAE | Link | | |
| WAE - MMD (RBF Kernel) | Link | | |
| WAE - MMD (IMQ Kernel) | Link | | |
| Beta-VAE | Link | | |
| IWAE (5 Samples) | Link | | |
| DFCVAE | Link | | |
- VanillaVAE
- Conditional VAE
- Gamma VAE
- Beta VAE
- Beta TC-VAE
- DFC VAE
- InfoVAE (MMD-VAE)
- WAE-MMD
- AAE
- TwoStageVAE
- VAE-GAN
- Vamp VAE
- HVAE (VAE with Vamp Prior)
- IWAE
- VLAE
- FactorVAE
- PixelVAE
- VQVAE
- StyleVAE
If you have trained a better model using these implementations by fine-tuning the hyperparameters in the config file, I would be happy to include your results (along with your config file) in this repo, citing your name 😊.