```
collector: MultiaSyncDataCollector()
init seed: 42, final seed: 971637020
  7%|████████▌ | 36800/500000 [20:23<4:48:42, 26.74it/s]
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/frank/Projects/rl_dev/examples/dreamer/dreamer.py", line 359, in main
    scaler2.scale(actor_loss_td["loss_actor"]).backward()
  File "/home/frank/anaconda3/envs/rl_dev/lib/python3.9/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/frank/anaconda3/envs/rl_dev/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: masked_scatter_: expected self and source to have same dtypes but got Half and Float
```
System info
Describe the characteristic of your environment:
Describe how the library was installed (pip, source, ...)
Describe the bug
When using `autocast` in the dreamer example, a RuntimeError is raised:

```
RuntimeError: masked_scatter_: expected self and source to have same dtypes but got Half and Float
```

Unfortunately, this seems to be a bug in PyTorch itself (pytorch/pytorch#81876).
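The failure mode can be reproduced in isolation. This is a minimal sketch with illustrative tensors (not the dreamer example's actual tensors): under autocast, some intermediates come out as float16 (Half) while others remain float32 (Float), and `masked_scatter_` requires both operands to share a dtype.

```python
import torch

# Hypothetical stand-ins for the mismatched tensors:
dest = torch.zeros(4, dtype=torch.half)   # as if produced under autocast
src = torch.ones(4, dtype=torch.float)    # stayed float32
mask = torch.tensor([True, False, True, False])

try:
    dest.masked_scatter_(mask, src)       # mixed Half/Float -> RuntimeError
    raised = False
except RuntimeError as err:
    raised = True
    print(err)
```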
To Reproduce
Run the dreamer example. The whole output is quoted at the top of this report.
Reason and Possible fixes
Maybe we should disable `autocast` until this bug is fixed by torch?

Checklist
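Until then, a possible interim workaround can be sketched as follows (variable names are illustrative stand-ins, not the dreamer example's actual code): either cast the source tensor to the destination dtype before scattering, or run the offending region with autocast disabled so both tensors stay float32.

```python
import torch

# Hypothetical stand-ins for the mismatched tensors in the actor loss:
dest = torch.zeros(4, dtype=torch.half)
src = torch.ones(4, dtype=torch.float)
mask = torch.tensor([True, False, True, False])

# (a) cast the source to the destination dtype before scattering
dest.masked_scatter_(mask, src.to(dest.dtype))

# (b) or compute the scatter in a region with autocast disabled, so no
#     tensor gets downcast to Half there
with torch.autocast(device_type="cpu", enabled=False):
    out = torch.zeros(4).masked_scatter_(mask, src)

print(dest.dtype, out.dtype)
```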