Add "Progressive Growing of GANs" (ProGAN) model #1105
base: master
Conversation
Thanks for your contribution. Have you been able to reproduce the FID number on the ProgGAN datasets?
Sorry for the late response. I've only tested it on our private dataset; I need help evaluating it on CelebA.
I am trying to run metrics using the original implementation of Progressive Growing of GANs by tkarras, but I am not able to. Has anyone had any luck, or can anyone help me with it?
Hey! You use two generators, with one being the running average of the other, and you set the Adam parameter beta1 to 0. Why this choice? Why not use the same generator for training and testing?
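For reference, the running-average trick mentioned above can be sketched in a few lines. This is a minimal pure-Python sketch (parameters are stand-in dicts rather than real network weights, and the `decay=0.999` value is an assumption taken from the ProGAN paper's reported smoothing, not from this PR's code):

```python
def update_ema(ema_params, train_params, decay=0.999):
    """Update the test-time generator's parameters as an exponential
    running average of the training generator's parameters."""
    for name, value in train_params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params

# Toy example: dicts standing in for the two generators' weights.
train_g = {"w": 1.0}   # generator being trained
avg_g = {"w": 0.0}     # averaged copy used for evaluation
for _ in range(3):
    update_ema(avg_g, train_g)
# avg_g["w"] drifts slowly toward train_g["w"]: after k steps it
# equals 1 - decay**k when train_g is held fixed at 1.0.
```

The averaged copy changes much more slowly than the trained generator, which smooths out the oscillations of adversarial training and typically gives better samples at test time.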
May I ask why, in the ProGanModel, you do generation from torch.randn and not from the source image A? It also seems that you don't even save image A as input. This contradicts the whole cycle consistency of CycleGAN, since there's no loss between the source image A and the reconstruction of A.
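The distinction at issue here is unconditional versus image-conditioned generation. A minimal sketch of the unconditional side (the latent dimension of 512 and the helper name are illustrative assumptions, not the PR's actual code):

```python
import random

def sample_latent(dim=512):
    """Unconditional GAN input: a random latent vector drawn from a
    standard normal distribution (torch.randn in the PR), rather than
    a source image as in CycleGAN."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

z = sample_latent()
# A CycleGAN-style generator would instead take image_A as input and
# be trained with a cycle-consistency loss between image_A and its
# reconstruction; ProGAN has no such pairing, so no image A is saved.
```

So the behavior the comment points out is inherent to ProGAN being an unconditional model: there is no source image to reconstruct, only a sampled latent code.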
We will leave it as is. It might introduce too many changes to add a new GAN to the current repo. But if anyone is interested in progressive gans, you can refer to this PR. |
I've decided to add a new model to the framework - an implementation of the paper "Progressive Growing of GANs": https://arxiv.org/abs/1710.10196.
It's basically a GAN that generates images from random noise, with support for high-resolution images achieved by training the G and D nets progressively. I've decided to post a PR because I found it easy to implement inside this repo's framework, so it fits very well (despite the name of the repo :)
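The progressive training mentioned above means both networks start at a tiny resolution and are grown by adding layers as training proceeds. A minimal sketch of the resolution schedule (the 4x4-to-1024x1024 progression follows the paper; the function name is an illustrative assumption):

```python
def resolution_schedule(start=4, final=1024):
    """Progressive growing: train G and D at a low resolution first,
    then repeatedly double the resolution (adding layers to both
    networks) until the target resolution is reached."""
    res = start
    while res <= final:
        yield res
        res *= 2

# The paper's 4x4 -> 1024x1024 progression:
print(list(resolution_schedule()))  # [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

At each doubling, the new layers are faded in gradually rather than switched on at once, which keeps training stable as the networks grow.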
Please note that this PR contains changes from PR #1090 with NVIDIA Apex, because larger batch sizes are preferred.