Figure 1: Interpolation in the latent space of PCAAE for ellipse images. The first axis encodes the ellipse area, the second axis the horizontal and vertical radii, and the third axis the two diagonal directions.
Figure 2: Interpolation in the latent space of PCAAE for the latent space of PGAN. The first axis encodes hair colour, the second axis head pose, and the third axis gender.
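To make the figures above concrete, here is a minimal sketch of how such interpolations can be produced: one latent coordinate is swept over a range while the others are held fixed, and each point is decoded into an image. The `decoder` argument is a hypothetical stand-in for a trained PCAAE decoder (or the PCAAE-plus-PGAN generator in the case of Figure 2); it is not the exact API of this repository.

```python
import torch

def interpolate_axis(decoder, z, axis, low=-3.0, high=3.0, steps=8):
    """Decode a strip of images obtained by varying a single latent component.

    `decoder` is a hypothetical callable mapping latent codes (N, d) to images;
    `z` is a fixed latent code of shape (1, d); `axis` is the PCAAE axis to sweep.
    """
    frames = []
    for value in torch.linspace(low, high, steps):
        z_i = z.clone()
        z_i[:, axis] = value          # move only along the chosen axis
        frames.append(decoder(z_i))   # all other latent components stay fixed
    return torch.cat(frames, dim=0)   # (steps, C, H, W) strip of images
```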
[Preprint] [Preprint] [Project Page]
We first introduce the state-of-the-art GAN model used in this work: ProgressiveGAN, or PGAN (Karras et al., ICLR 2018). This model achieves high-quality face synthesis by progressively growing an unconditional GAN. For more details about this model, please refer to the original paper as well as the official implementation.
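As an aside, a pretrained PGAN can be loaded through PyTorch Hub, which is a convenient way to obtain the latent codes and face images that PCAAE reorganises in Figure 2. The snippet below is a sketch based on the `facebookresearch/pytorch_GAN_zoo` hub entry; the model name `celebAHQ-512` and the helper methods `buildNoiseData`/`test` belong to that package, not to this repository.

```python
import torch

# Load a pretrained Progressive GAN (PGAN) from the pytorch_GAN_zoo hub entry.
use_gpu = torch.cuda.is_available()
pgan = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                      model_name='celebAHQ-512', pretrained=True, useGPU=use_gpu)

# Sample latent codes and generate faces; PCAAE is then trained on this latent space.
noise, _ = pgan.buildNoiseData(4)
with torch.no_grad():
    images = pgan.test(noise)   # tensor of generated face images
```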
In addition, we introduce the state-of-the-art disentangled VAEs used for comparison in this work (a minimal sketch of the β-VAE objective is given after the list):
- Standard VAE loss from Auto-Encoding Variational Bayes
- β-VAEH from β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
- β-VAEB from Understanding disentangling in β-VAE
- FactorVAE from Disentangling by Factorising
- β-TCVAE from Isolating Sources of Disentanglement in Variational Autoencoders
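For reference, here is a minimal sketch of the β-VAE objective (the β-VAEH variant above): the standard VAE evidence lower bound with the KL divergence weighted by β, so that β = 1 recovers the standard VAE loss. The function assumes a Gaussian encoder returning `mu` and `logvar`; it is an illustration, not the implementation used in the comparison.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """Reconstruction term plus beta-weighted KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_hat, x, reduction='sum')                    # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())     # closed-form Gaussian KL
    return recon + beta * kl                                         # beta = 1: standard VAE
```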
We also compare our method with StyleGAN.
We demonstrate our model in a [Colab Jupyter Notebook].
Alternatively, you can test our model and the compared methods by running:
python demo/test_pca_ae_ellipses.py
python demo/test_pca_pgan.py
@article{pham2022pca,
title={PCA-AE: Principal Component Analysis Autoencoder for Organising the Latent Space of Generative Networks},
author={Pham, Chi-Hieu and Ladjal, Sa{\"\i}d and Newson, Alasdair},
journal={Journal of Mathematical Imaging and Vision},
pages={1--17},
year={2022},
publisher={Springer}
}
@article{ladjal2019pca,
title={A PCA-like autoencoder},
author={Ladjal, Sa{\"\i}d and Newson, Alasdair and Pham, Chi-Hieu},
journal={arXiv preprint arXiv:1904.01277},
year={2019}
}