About code error #9
Comments
I met this problem too...
@haolin512900 Have you solved this error?
Thank you!
------------------ Original email ------------------
From: ***@***.***
Sent: Monday, March 21, 2022, 10:50 PM
To: ***@***.***
Cc: ***@***.***; ***@***.***
Subject: Re: [Vious/LBAM_Pytorch] About code error (#9)

Replace "G_loss.backward()"
with "loss1 = G_loss.detach_().requires_grad_(True)
loss1.backward()"
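A caveat worth noting about that workaround, shown here as a minimal sketch with a toy scalar standing in for the real generator loss (the weight `w` and the loss expression are illustrative assumptions, not the LBAM code): `detach_()` removes `G_loss` from the autograd graph, so the RuntimeError disappears, but the subsequent `backward()` runs on a disconnected leaf and no gradient reaches the model parameters.

```python
import torch

# Toy stand-in for a generator weight and its loss (not the LBAM code).
w = torch.ones(1, requires_grad=True)
G_loss = (w * 3.0).sum()

# The suggested workaround: detach in place, then re-enable grad on the copy.
loss1 = G_loss.detach_().requires_grad_(True)
loss1.backward()

# The error is silenced, but no gradient reaches w: the detached loss
# is a new leaf, disconnected from the graph that produced it.
print(w.grad)      # None
print(loss1.grad)  # tensor([1.])
```

So this silences the exception rather than fixing the underlying in-place operation; the generator would stop training.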
Hi, thank you for this great project!
The code works up to PyTorch 1.4, but there seems to be a problem with PyTorch 1.6. The error is as follows:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1024, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Can you update the code for PyTorch 1.5 or 1.6? 😂
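For reference, this class of error can be reproduced outside the LBAM code with a tiny toy graph (the tensors below are illustrative, not from the repository): an op like sigmoid saves its output for the backward pass, and an in-place modification bumps that tensor's version counter, so `backward()` fails the version check. The anomaly-detection hint from the error message is also enabled here.

```python
import torch

torch.autograd.set_detect_anomaly(True)  # reports the forward op that produced the bad tensor

a = torch.ones(3, requires_grad=True)
b = a.sigmoid()  # sigmoid saves its output tensor for the backward pass
b.add_(1.0)      # in-place op bumps b's version counter, invalidating the saved tensor

err_msg = ""
try:
    b.sum().backward()
except RuntimeError as e:
    err_msg = str(e)

print(err_msg.splitlines()[0])
```

PyTorch 1.5 tightened these version checks, which is why code that trained under 1.4 can start raising this error under 1.5/1.6; the usual fix is to replace the offending in-place op (e.g. `add_`, `relu(inplace=True)`) with its out-of-place form rather than detaching the loss.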