Relation between scaling weights of paper and implementation #59
Hello @richzhang,

In the LPIPS paper, the 1x1 scaling convolution is applied to the difference of the activations before squaring. In the implementation, however, the difference of the activations is squared first and then scaled. Is this a mistake? If so, is it in the paper or in the implementation?

---

richzhang: Yes, the weights in the implementation correspond to w^2 in the paper. Fig 10 is also plotting w^2.

OP: Thanks @richzhang! Therefore, if I am not mistaken, the convolution could even be performed after the averaging. [equations rendered as images in the original]

richzhang: You cannot collapse the channel direction before multiplying by w (which is scaling each channel). In other words, any of these are fine: [equations rendered as images in the original] Hope that makes sense.

OP: Ah yes, sure, my notation isn't very accurate. The norm and MSE should be spatial only.

richzhang: Great, yes that seems correct then! (Also add a sum over the channel direction, in front of w_l.)

OP: Oh, my bad, I wanted to write a dot product, not an element-wise product. In fact, it means that we can drop the [equation rendered as an image in the original].

OP: Thank you, you can close the issue 👍
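The equivalences discussed above can be checked numerically. The sketch below (a standalone NumPy illustration, not the LPIPS code itself; the tensor shapes and variable names are assumptions) shows that squaring then weighting by w² matches the paper's "weight by w, then square", that the per-channel weighting commutes with the spatial average, and that collapsing the channel axis before weighting does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature-difference tensor for one layer: (channels, H, W).
d = rng.standard_normal((8, 4, 4))
w = rng.uniform(size=8)  # per-channel scaling weights (the paper's w)

# Paper form: scale each channel by w, take the squared L2 norm over
# channels, then average spatially.
paper = np.mean(np.sum((w[:, None, None] * d) ** 2, axis=0))

# Implementation form: square first, then apply a 1x1 "conv" with
# weights w**2 (i.e. a per-channel weighted sum), then average spatially.
impl = np.mean(np.sum((w ** 2)[:, None, None] * d ** 2, axis=0))
assert np.allclose(paper, impl)

# Spatial averaging commutes with the per-channel scaling, so the
# weighting can also be applied after the spatial mean.
after_avg = np.sum((w ** 2) * np.mean(d ** 2, axis=(1, 2)))
assert np.allclose(paper, after_avg)

# But summing out the channel axis BEFORE multiplying by w is wrong:
# the cross-channel terms no longer cancel.
wrong = np.mean(np.sum(d, axis=0) ** 2) * np.sum(w ** 2)
assert not np.allclose(paper, wrong)
```

This is exactly why the stored weights equal w² rather than w: squaring the difference first moves the weights outside the square, and the two forms agree channel-by-channel.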