Hi, thanks for sharing this great work, which is very clear. However, I am confused about why there needs to be a flip operation at the end of the calc_curr_k function in KernelGAN.py. Looking forward to your reply. Thank you very much.
def calc_curr_k(self):
    """given a generator network, the function calculates the kernel it is imitating"""
    delta = torch.Tensor([1.]).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
    for ind, w in enumerate(self.G.parameters()):
        curr_k = F.conv2d(delta, w, padding=self.conf.G_kernel_size - 1) if ind == 0 else F.conv2d(curr_k, w)
    self.curr_k = curr_k.squeeze().flip([0, 1])
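(Editor's note: for context, here is a minimal sketch, not code from the repo, assuming a toy, purely linear two-layer "generator" with single-channel 3x3 weights. It shows what calc_curr_k relies on: with no nonlinearities, stacked convolutions collapse into one equivalent kernel, which can be read off by feeding a delta through the layers.)

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    w1 = torch.randn(1, 1, 3, 3)                 # layer 1 weight (out, in, kH, kW)
    w2 = torch.randn(1, 1, 3, 3)                 # layer 2 weight

    # Effective kernel size of two stacked 3x3 layers is 3 + 3 - 1 = 5,
    # so pad the delta by 5 - 1 = 4 (mirroring padding=G_kernel_size-1 above).
    delta = torch.zeros(1, 1, 1, 1)
    delta[0, 0, 0, 0] = 1.
    k = F.conv2d(delta, w1, padding=4)
    k = F.conv2d(k, w2)                          # raw impulse response, still flipped
    k = k.flip([-2, -1])                         # un-flip, as calc_curr_k does

    x = torch.randn(1, 1, 32, 32)
    out_stacked = F.conv2d(F.conv2d(x, w1), w2)  # image pushed through both layers
    out_single = F.conv2d(x, k)                  # image correlated with the collapsed kernel
    print(torch.allclose(out_stacked, out_single, atol=1e-5))  # True (only with the flip)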
Hope I understand your question. If so: in order to extract the kernel of G, we pass a delta (1 in the center, 0 everywhere else) through G.
Because F.conv2d actually computes cross-correlation, the impulse response we get back is the kernel flipped (rotated 180 degrees), so we must un-flip it.
Try it even with a 1-layer G and you will notice the kernel flipping.
Hope that helps
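(Editor's note: a minimal sketch of the flipping effect described in the reply; not code from the repo. F.conv2d implements cross-correlation, so the impulse response of a conv layer is its weight rotated by 180 degrees, and .flip([0, 1]) undoes that rotation.)

    import torch
    import torch.nn.functional as F

    w = torch.arange(9.).reshape(1, 1, 3, 3)            # a single 3x3 conv "layer"
    delta = torch.zeros(1, 1, 1, 1)
    delta[0, 0, 0, 0] = 1.                              # the delta impulse

    response = F.conv2d(delta, w, padding=2).squeeze()  # pad so the full 3x3 response survives
    print(response)                                     # w rotated by 180 degrees
    print(response.flip([0, 1]))                        # flipping both spatial axes recovers w
    print(torch.equal(response.flip([0, 1]), w.squeeze()))  # True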