Image width and height are not equal #45
I think it should work out of the box:

```python
import torch
import models  # from the PerceptualSimilarity repo

model = models.PerceptualLoss(model='net-lin', net='vgg', use_gpu=False, spatial=False)
H, W = 160, 64
dummy = torch.zeros(1, 3, H, W)
print(model(dummy, dummy).item())
```
Thanks for your answer; it does work in the example you provided. After testing with the test_network.py script settings, I found that this problem is mainly caused by the net/spatial parameters:

```python
model = models.PerceptualLoss(model='net-lin', net='alex', use_gpu=False, spatial=True)
H, W = 160, 64
dummy = torch.zeros(1, 3, H, W)
```

The error information is as follows:
Got it. I pushed a change to do resizing in both dimensions. Let me know if it works. Keep in mind that the features in each layer will not necessarily be well-aligned in these cases.
The issue still persists for me. I tried compute_dists_pair.py and got the same error when the image height and width were different, but everything worked fine once I resized the images to the same height and width. The error looks like this:
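For anyone hitting this in the meantime, the resize workaround described above can be sketched as follows. This is only a stopgap, not the repo's own fix: the tensor shapes and the target size `S` here are arbitrary illustrations, and the final `model(...)` call (commented out) assumes the `models.PerceptualLoss` API used earlier in this thread.

```python
import torch
import torch.nn.functional as F

# A hypothetical image pair with unequal height and width (the failing case).
H, W = 160, 64
img0 = torch.zeros(1, 3, H, W)
img1 = torch.zeros(1, 3, H, W)

# Workaround: resample both images to a common square size before
# computing the distance. Any common (S, S) works; bilinear resampling
# keeps the images comparable.
S = 64
img0_sq = F.interpolate(img0, size=(S, S), mode='bilinear', align_corners=False)
img1_sq = F.interpolate(img1, size=(S, S), mode='bilinear', align_corners=False)

assert img0_sq.shape == img1_sq.shape == (1, 3, S, S)

# The square pair can then be passed to the model as usual:
# dist = model(img0_sq, img1_sq)  # no shape mismatch, even with spatial=True
```

Note that resizing changes aspect ratio, so the resulting distance is measured between distorted images; for pairs that are already the same (non-square) size, the maintainer's pushed change above is the better path.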
The two images must be the same size.
It seems that the code can't compute the metric on images with unequal width and height. Could it be extended to handle images of arbitrary sizes?