
retPerLayer=True not working as expected #70

Closed
paudom opened this issue May 26, 2021 · 5 comments

@paudom

paudom commented May 26, 2021

While trying different parameters of the LPIPS class, I found that with retPerLayer=True the values returned by the forward method are not what I would expect.
I think the problem comes from the following lines:

107.   val = res[0]
108.   for l in range(1,self.L):
109.       val += res[l]

After debugging I noticed the following. Before the for loop on line 108 runs:

res = [tensor([[[[0.1195]]]...ackward1>), tensor([[[[0.0963]]]...ackward1>), tensor([[[[0.1021]]]...ackward1>), tensor([[[[0.1396]]]...ackward1>), tensor([[[[0.1210]]]...ackward1>)]

so val starts at 0.1195, but since val and res[0] are the same tensor object, each in-place += on line 109 also updates res[0], so at the end I get the following:

res = [tensor([[[[0.5785]]]...ackward1>), tensor([[[[0.0963]]]...ackward1>), tensor([[[[0.1021]]]...ackward1>), tensor([[[[0.1396]]]...ackward1>), tensor([[[[0.1210]]]...ackward1>)]
val = 0.5785

This could maybe be solved by initializing val to 0 at line 107:

107.   val = 0
108.   for l in range(self.L):
109.       val += res[l]
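For illustration, here is a minimal standalone reproduction of the aliasing (the values are made up; this assumes PyTorch tensors, where += is an in-place operation):

```python
import torch

# Per-layer distances as lpips would produce them (made-up values).
res = [torch.tensor([[[[0.1195]]]]),
       torch.tensor([[[[0.0963]]]]),
       torch.tensor([[[[0.1021]]]])]

# Buggy pattern: val is the SAME tensor object as res[0], so the
# in-place += below also mutates res[0].
val = res[0]
for l in range(1, len(res)):
    val += res[l]
print(val is res[0])   # True
print(res[0].item())   # ~0.3179 -- res[0] now holds the full sum

# Safe pattern: starting from 0 makes the first addition create a
# fresh tensor, so the per-layer entries are left untouched.
res = [torch.tensor([[[[0.1195]]]]),
       torch.tensor([[[[0.0963]]]]),
       torch.tensor([[[[0.1021]]]])]
val = 0
for l in range(len(res)):
    val += res[l]
print(res[0].item())   # 0.1195 -- unchanged
```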

I'm trying to use the outputs of only some layers instead of the accumulation over all the layers.
Thanks in advance!

@richzhang
Owner

retPerLayer simply returns the layer-wise results in addition to the accumulated result: https://github.com/richzhang/PerceptualSimilarity/blob/master/lpips/lpips.py#L119. Try looking at the 2nd output.
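For example, something like this (a quick sketch; the net choice and image sizes are arbitrary):

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')

# LPIPS expects inputs scaled to [-1, 1].
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

# With retPerLayer=True the forward pass returns a pair:
# the accumulated distance and a list of per-layer distances.
val, per_layer = loss_fn(img0, img1, retPerLayer=True)
print(val.item())                     # accumulated distance
print([d.item() for d in per_layer])  # one entry per layer
```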

@paudom
Author

paudom commented Aug 19, 2021

What I was saying is that when the layer-wise results are returned, the first position res[0] holds the accumulated value instead of the value of the first layer.

@richzhang
Owner

Could you look at the 2nd output, which has a list containing every layer?

@paudom
Author

paudom commented Aug 20, 2021

That's exactly what I'm referring to. The second output is a list, yes. But the first element of that list is exactly the same as the accumulated value! res[0] == val, and that is not correct: the first element should be the output of the first layer, not the accumulated value.

@richzhang
Owner

Yikes, you're totally right. I fixed it and uploaded a new version to pypi. Thanks!
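A quick way to check whether the installed version includes the fix (a sketch; for typical random inputs no single layer should carry the whole distance):

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1

val, per_layer = loss_fn(img0, img1, retPerLayer=True)

# In a fixed version the accumulated value equals the sum of the
# per-layer distances, and per_layer[0] is only the first layer's
# contribution rather than the whole sum.
assert torch.allclose(val, sum(per_layer))
assert not torch.allclose(per_layer[0], val)
print("looks fixed")
```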

@paudom paudom closed this as completed Aug 26, 2021