Why is in_channels=1 for the G? #57
Comments
No.
Got it, thank you.
Hi, if I use a grayscale image to train the kernel, do I need to modify the model (the input images have only 1 channel), or can I just run train.py directly?
It has been a while since I ran this code. As I recall, you don't need to modify the Generator. Maybe the Discriminator, but try it and see if it fails.
Thanks for the reply. I think that when the image is grayscale, I can train the model directly without modification, because the image is converted to RGB by the read_image() function in "util.py". Is that right?
You are probably right. The "quick & dirty" way that will definitely work is to duplicate the single channel you have into 3 identical channels, and then everything should work.
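A minimal sketch of that "quick & dirty" channel duplication, assuming NumPy and Pillow are available (read_grayscale_as_rgb and 'input.png' are illustrative names, not part of the repo):
'''
import numpy as np
from PIL import Image

def read_grayscale_as_rgb(path):
    """Load a single-channel image and stack it into 3 identical channels, shape (H, W, 3)."""
    gray = np.array(Image.open(path).convert('L'), dtype=np.float32) / 255.
    # Duplicate the single channel so the rest of the RGB pipeline is unchanged
    return np.stack([gray, gray, gray], axis=-1)

img = read_grayscale_as_rgb('input.png')  # img.shape == (H, W, 3)
'''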
def get_top_left(self, size, for_g, idx): Hi, I see that "center" represents the chosen pixel's index in the flattened vector, and (row, col) is the corresponding coordinate in the input image. But I am confused about the meaning of "top" and "left". Can you explain what they mean, or how you get the 64×64 crop from (row, col)?
This is a sketchy and not-so-elegant implementation, I agree!
I got it, thank you very much! |
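For anyone hitting the same question, here is a rough sketch of the idea behind those variables, not the repo's exact get_top_left code: the flat index is mapped back to a (row, col) coordinate, and "top"/"left" are simply the upper-left corner of the crop taken around it, clamped so the crop stays inside the image.
'''
import numpy as np

def crop_around_flat_index(image, center, size=64):
    """Illustrative only: map a flat pixel index to (row, col), then take a size x size crop."""
    h, w = image.shape[:2]
    row, col = divmod(center, w)                  # flat index -> 2D coordinate
    top = min(max(row - size // 2, 0), h - size)  # clamp the crop inside the image
    left = min(max(col - size // 2, 0), w - size)
    return image[top:top + size, left:left + size]
'''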
Hi, thanks for sharing the code, which is very clear. However, I am confused about the in_channels of the Generator in "network.py". If the input is an RGB image, shouldn't in_channels be 3?
'''
class Generator(nn.Module):
    def __init__(self, conf):
        super(Generator, self).__init__()
        struct = conf.G_structure
        # First layer - Converting RGB image to latent space
        self.first_layer = nn.Conv2d(in_channels=1, out_channels=conf.G_chan, kernel_size=struct[0], bias=False)
'''
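A quick way to check which channel count such a first layer accepts is to pass a dummy tensor through it. In this hypothetical snippet, the 64 output channels and the kernel size of 7 are placeholders standing in for conf.G_chan and struct[0], not values taken from the repo's config:
'''
import torch
import torch.nn as nn

# Stand-in for the quoted first layer; out_channels and kernel_size are placeholders
first_layer = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=7, bias=False)

x = torch.randn(1, 1, 64, 64)      # (batch, channels, H, W) must match in_channels
print(first_layer(x).shape)        # a 1-channel input goes through fine

# first_layer(torch.randn(1, 3, 64, 64))  # a 3-channel input would raise a RuntimeError
'''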