[MPS] Unregister put_() op due to lack of implementation (pytorch#94231)
Currently, `put_()` is not implemented on the MPS backend, so this patch unregisters it and adds it to the TestConsistency blocklist.
Pull Request resolved: pytorch#94231
Approved by: https://github.com/kulinseth
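
For context, `put_()` scatters values from `source` into `self` at the flattened positions given by `index`. A minimal sketch of the behavior on CPU, plus what this patch implies for MPS tensors (the MPS snippet is illustrative only; the exact error text comes from the dispatcher and may vary):

import torch

# put_() treats `self` as 1-D: each entry of `index` is a flat offset
# into `self`, and the matching entry of `source` is written there.
x = torch.zeros(2, 3)
x.put_(torch.tensor([0, 4]), torch.tensor([1.0, 2.0]))
# x is now:
# tensor([[1., 0., 0.],
#         [0., 2., 0.]])

# With accumulate=True, values are added instead of overwritten.
x.put_(torch.tensor([0]), torch.tensor([5.0]), accumulate=True)
# x[0, 0] is now 6.0

# After this patch, calling put_ on an MPS tensor no longer dispatches
# to a nonexistent MPS kernel; the dispatcher reports the op as not
# implemented for the backend instead (illustrative, not run here):
# y = torch.zeros(2, 3, device="mps")
# y.put_(torch.tensor([0], device="mps"), torch.tensor([1.0], device="mps"))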
razarmehr authored and pytorchmergebot committed Feb 7, 2023
1 parent bc6d54f commit bc8a378
Showing 2 changed files with 4 additions and 1 deletion.
aten/src/ATen/native/native_functions.yaml (1 addition, 1 deletion)
@@ -7434,7 +7434,7 @@
 - func: put_(Tensor(a!) self, Tensor index, Tensor source, bool accumulate=False) -> Tensor(a!)
   variants: method
   dispatch:
-    CPU, CUDA, MPS: put_
+    CPU, CUDA: put_
   autogen: put.out
 
 - func: put(Tensor self, Tensor index, Tensor source, bool accumulate=False) -> Tensor
test/test_mps.py (3 additions, 0 deletions)
@@ -8701,6 +8701,9 @@ class TestConsistency(TestCase):
         # count_nonzero returns wrong results for these dtypes
         'nonzero': [torch.uint8, torch.float16],
 
+        # failures due to lack of op implementation on MPS backend
+        'put': ['torch.bool', 'torch.float16', 'torch.float32', 'torch.int16', 'torch.int32', 'torch.int64', 'torch.uint8'],
+
         # These were moved from ALLOWLIST to BLOCK as they are not working
         # locally
         'tile': ['torch.float16', 'torch.float32', 'torch.int16', 'torch.int32', 'torch.int64', 'torch.uint8'],
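
For reference, a hypothetical sketch of how a per-op dtype blocklist like the entry above is typically consumed when a consistency test iterates over ops and dtypes (names are illustrative, not the actual test_mps.py harness):

import torch

# Hypothetical blocklist mapping op names to dtype strings that should
# be skipped on MPS; mirrors the shape of the entry added above.
BLOCKLIST = {
    'put': ['torch.bool', 'torch.float16', 'torch.float32',
            'torch.int16', 'torch.int32', 'torch.int64', 'torch.uint8'],
}

def should_skip(op_name: str, dtype: torch.dtype) -> bool:
    """Return True if (op, dtype) is known to fail on the MPS backend."""
    blocked = BLOCKLIST.get(op_name, [])
    return str(dtype) in blocked

# Example: the consistency loop would skip this combination.
assert should_skip('put', torch.float16)
assert not should_skip('nonzero', torch.float32)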
