small input size #4
Comments
Convolutional layers can take spatial input of any size. The sizes of the feature maps will simply be scaled appropriately. You can pass any spatial size input to the metric.*

*Above 16x16, so that the conv5 layer will be at least 1x1. I wouldn't recommend using anything below 64x64, though.
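As a rough illustration of that footnote (a sketch with a hypothetical helper, not part of this repo): each of the four max-pools before the conv5 block halves the spatial resolution, so the conv5 feature size is roughly the input size divided by 16.

```python
def vgg_stage_sizes(h):
    """Spatial extent of the conv1..conv5 stages of VGG16 for an h x h input.
    Each stage after conv1 follows a 2x2, stride-2 max-pool, so the size halves."""
    sizes = [h]
    for _ in range(4):
        h //= 2          # stride-2 max-pool (floor division)
        sizes.append(h)
    return sizes

print(vgg_stage_sizes(224))  # [224, 112, 56, 28, 14]
print(vgg_stage_sizes(64))   # [64, 32, 16, 8, 4]
print(vgg_stage_sizes(16))   # [16, 8, 4, 2, 1]  -> conv5 features are just 1x1
```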
I am trying to independently replicate the LPIPS metric in Keras, initially focusing on uncalibrated VGG. Following the README I got test_network.py working, but I am a little confused by the three example images ex_ref.png, ex_p0.png, and ex_p1.png and how they are processed.

Each of these images is 64x64, and in test_network.py they are passed to the VGG network without scaling. But the native input size of VGG is 224x224, and the PyTorch models documentation clearly states that inputs are expected to be that size (or larger). Notably, when provided with 224x224 inputs, the layer sizes are:
However, when they are left at 64x64 without scaling, the layer sizes are smaller at each stage:
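A quick way to see both sets of sizes side by side (a sketch using torchvision's stock VGG16; the relu tap indices are my own labels, not anything taken from this repo):

```python
import torch
import torchvision.models as models

# Sketch: print the feature-map shapes at the relu layers commonly tapped for a
# perceptual metric (relu1_2 .. relu5_3) for 224x224 vs 64x64 inputs.
# Weights don't affect shapes, so an untrained VGG16 is fine here.
features = models.vgg16().features[:30].eval()   # stop before the final max-pool
taps = {3: "relu1_2", 8: "relu2_2", 15: "relu3_3", 22: "relu4_3", 29: "relu5_3"}

for size in (224, 64):
    x = torch.randn(1, 3, size, size)
    print(f"input {size}x{size}:")
    with torch.no_grad():
        for i, layer in enumerate(features):
            x = layer(x)
            if i in taps:
                print(f"  {taps[i]}: {tuple(x.shape[1:])}")   # (channels, H, W)
```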
I'm not familiar with PyTorch internals, so it's not clear to me how to interpret this behaviour when porting this to Keras. So my questions are: