
Experiment_with_conditional_DCGAN

Experiment with DCGAN by adding conditional context information

In this project, I used the code from the DCGAN example in the PyTorch tutorials as the base skeleton for my work. Without any changes to the code, the model takes a dataset and trains and stores a discriminator and a generator.
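For orientation, here is a minimal sketch of one training step in that unmodified skeleton, following the adversarial loop of the PyTorch DCGAN example; `netD`, `netG`, and the optimizers are assumed to be defined elsewhere:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

def train_step(netD, netG, optD, optG, real, nz=100, device="cpu"):
    b = real.size(0)
    # --- update discriminator: maximize log D(x) + log(1 - D(G(z))) ---
    optD.zero_grad()
    out_real = netD(real).view(-1)
    loss_real = criterion(out_real, torch.ones(b, device=device))
    noise = torch.randn(b, nz, 1, 1, device=device)
    fake = netG(noise)
    out_fake = netD(fake.detach()).view(-1)  # detach so only D is updated here
    loss_fake = criterion(out_fake, torch.zeros(b, device=device))
    (loss_real + loss_fake).backward()
    optD.step()
    # --- update generator: maximize log D(G(z)) ---
    optG.zero_grad()
    out = netD(fake).view(-1)
    loss_g = criterion(out, torch.ones(b, device=device))
    loss_g.backward()
    optG.step()
```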

As for the generator, it generates random images similar to the data it is fed. Beyond generating only random images, I want to control the content of an image based on its class label (the datasets I used for this project were MNIST hand-written digits and CIFAR-10). The idea of using a conditional GAN is based on Conditional Generative Adversarial Nets. In addition to using just a one-hot embedding for the context information, I also used pre-trained GloVe word vectors to embed the context information.
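A minimal sketch of this conditioning, assuming a one-hot label embedding concatenated with the noise vector at the generator input; the names here (`CondGenerator`, `nz`, `n_classes`) are illustrative rather than taken from the repository, and a GloVe lookup table could replace the embedding:

```python
import torch
import torch.nn as nn

nz, n_classes, ngf, nc = 100, 10, 64, 3

class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Initialized to the identity so it behaves like a one-hot lookup
        self.embed = nn.Embedding(n_classes, n_classes)
        self.embed.weight.data = torch.eye(n_classes)
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz + n_classes, ngf * 4, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # z: (N, nz, 1, 1); labels: (N,) integer class ids
        c = self.embed(labels).view(-1, n_classes, 1, 1)
        return self.main(torch.cat([z, c], dim=1))

z = torch.randn(8, nz, 1, 1)
labels = torch.randint(0, n_classes, (8,))
fake = CondGenerator()(z, labels)  # (8, 3, 32, 32), matching CIFAR-10
```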

The results seem reasonable with the one-hot embedding, while the 300-d word embedding may be too complex, so further experiments should involve dropout regularization.

Other than the embedding method, to fold in the concatenated context information I tried both additional MLP layers and additional deconv layers for further training; both options are sketched below.
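An illustrative sketch of the two options, assuming the context information arrives as a dense label embedding (e.g. a 300-d GloVe vector); the module names are hypothetical:

```python
import torch
import torch.nn as nn

emb_dim, ngf = 300, 64

# Option 1: an extra MLP projects the embedding down before it is
# concatenated with the noise vector at the generator input.
label_mlp = nn.Sequential(
    nn.Linear(emb_dim, 64),
    nn.ReLU(True),
)

# Option 2: an extra transposed-conv (deconv) layer turns the embedding
# into a spatial feature map that is concatenated with intermediate
# generator feature maps instead.
label_deconv = nn.Sequential(
    nn.ConvTranspose2d(emb_dim, ngf, 4, 1, 0, bias=False),  # (N, emb_dim, 1, 1) -> (N, ngf, 4, 4)
    nn.BatchNorm2d(ngf),
    nn.ReLU(True),
)

e = torch.randn(8, emb_dim)                       # a batch of label embeddings
flat = label_mlp(e)                               # (8, 64), ready to concat with z
spatial = label_deconv(e.view(8, emb_dim, 1, 1))  # (8, 64, 4, 4), ready to concat with feature maps
```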

To run the code, you can use a command like:

```
python cdcgan.py --dataset cifar10 --dataroot cifar10 --outf cifar10_out_emb_0.9_decov --cuda --emb --niter 250 --beta1 0.9
```
