A collection of Variational AutoEncoders (VAEs) implemented in PyTorch with a focus on reproducibility. The aim of this project is to provide a quick and simple working example for many of the cool VAE models out there.
- Python >= 3.5
- PyTorch >= 1.3
- PyTorch Lightning >= 0.5.3
Installation:

```shell
$ git clone https://github.com/AntixK/PyTorch-VAE
$ cd PyTorch-VAE
$ pip install -r requirements.txt
```

Usage:

```shell
$ cd PyTorch-VAE
$ python run.py -c configs/<config-file-name.yaml>
```
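Configs are YAML files passed to `run.py`. A minimal sketch of what such a file might contain is shown below; the key names here are illustrative assumptions, so check the files shipped in the `configs/` directory for the exact schema expected by each model.

```yaml
# Hypothetical config sketch -- actual key names may differ per model.
model_params:
  name: VanillaVAE        # which model class to instantiate
  latent_dim: 128         # size of the latent space

exp_params:
  dataset: celeba
  batch_size: 64
  LR: 0.005               # learning rate

trainer_params:
  gpus: 1
  max_epochs: 50
```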
| Model | Paper | Reconstruction | Samples |
|---|---|---|---|
| VAE | Link | | |
| WAE - MMD (RBF Kernel) | Link | | |
| WAE - MMD (IMQ Kernel) | Link | | |
| Beta-VAE | Link | | |
| IWAE (5 Samples) | Link | | |
- VanillaVAE
- Conditional VAE
- Gamma VAE
- Beta VAE
- DFC VAE
- InfoVAE (MMD-VAE)
- WAE-MMD
- AAE
- TwoStageVAE
- VAE-GAN
- Vamp VAE
- HVAE (VAE with Vamp Prior)
- IWAE
- VLAE
- FactorVAE
- PixelVAE
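All of the models above share the same basic skeleton: an encoder that outputs the mean and log-variance of an approximate posterior, a reparameterized sample from that posterior, and a decoder trained with a reconstruction term plus a KL divergence term (the ELBO). The sketch below is a minimal, self-contained illustration of that pattern, not the repo's actual implementation; the layer sizes and MLP architecture are arbitrary choices for the example.

```python
import torch
from torch import nn
from torch.nn import functional as F


class VanillaVAE(nn.Module):
    """Minimal VAE: encoder -> (mu, log_var) -> reparameterize -> decoder."""

    def __init__(self, in_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)   # posterior mean
        self.fc_var = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def reparameterize(self, mu, log_var):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, sigma
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + eps * std

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.fc_mu(h), self.fc_var(h)
        z = self.reparameterize(mu, log_var)
        return self.decoder(z), mu, log_var


def vae_loss(recon, x, mu, log_var):
    # ELBO = reconstruction term + KL(q(z|x) || N(0, I))
    recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon_loss + kld


# Smoke test on a random batch of 8 flattened 28x28 "images"
x = torch.rand(8, 784)
model = VanillaVAE()
recon, mu, log_var = model(x)
loss = vae_loss(recon, x, mu, log_var)
```

Variants such as Beta-VAE simply reweight the KL term, while IWAE replaces this single-sample bound with a tighter multi-sample importance-weighted estimate.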
If you have trained a better model using these implementations by fine-tuning the hyperparameters in the config file, I would be happy to include your result (along with your config file) in this repo, citing your name.