Make einsum a leaf #1364
Comments
@bdhirsh Do you think this is possible?
(Oh, it seems that we raced on commenting and you're already on this thread)
@ezyang in case you have some insight 😄
we need to write a backward kernel for einsum. Do you have an einsum_backward op? We could stub one in and just not have an implementation in PT proper
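For context, a minimal sketch of what such a backward could compute for a simple two-operand contraction like `ij,jk->ik`: each input gradient is itself an einsum of the output gradient with the other operand. The `einsum_backward_2op` name and signature are made up here, and a general kernel would also need to handle repeated and summed-out indices.

```cpp
// Hypothetical einsum_backward sketch for a simple two-operand contraction
// such as "ij,jk->ik". Not an existing PyTorch or PyTorch/XLA op.
#include <ATen/ATen.h>
#include <string>
#include <vector>

std::vector<at::Tensor> einsum_backward_2op(const at::Tensor& grad_out,
                                            const std::string& equation,
                                            const at::Tensor& a,
                                            const at::Tensor& b) {
  // Split "ij,jk->ik" into per-operand and output index strings.
  auto arrow = equation.find("->");
  std::string lhs = equation.substr(0, arrow);
  std::string out_spec = equation.substr(arrow + 2);
  auto comma = lhs.find(',');
  std::string a_spec = lhs.substr(0, comma);
  std::string b_spec = lhs.substr(comma + 1);

  // The gradient of each operand contracts the output gradient with the
  // other operand, e.g. for "ij,jk->ik": grad_a = einsum("ik,jk->ij", ...).
  at::Tensor grad_a =
      at::einsum(out_spec + "," + b_spec + "->" + a_spec, {grad_out, b});
  at::Tensor grad_b =
      at::einsum(a_spec + "," + out_spec + "->" + b_spec, {a, grad_out});
  return {grad_a, grad_b};
}
```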
I didn't find anything by doing a quick search, will check with xla team regarding einsum backward. |
So there isn't an existing einsum_backward op.
@ezyang If you could make …
I don't think we need to do anything in PyTorch; just add einsum to the autograd list in xla_native_functions.yaml and then implement the custom autograd function the same as the other ops. We could upstream them, but this is probably easiest.
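If I'm reading the codegen setup right, that would look roughly like this in xla_native_functions.yaml (the surrounding entries are only for illustration):

```yaml
# xla_native_functions.yaml (sketch): ops listed under `autograd:` get a
# custom autograd function on the XLA side instead of an upstream derivative.
autograd:
  - max_pool2d
  - max_pool3d
  - einsum
```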
Oh OK. It seems like …
@steventk-g You can use https://github.com/pytorch/xla/blob/master/torch_xla/csrc/aten_autograd_ops.h#L10 as an example to write both the forward and backward parts for einsum.
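For example, a rough sketch of the shape that could take, following the pattern of the existing functions in aten_autograd_ops.h. The `EinsumAutogradFunction` name, the two-operand restriction, and the `einsum_backward_2op` helper it declares are assumptions for illustration, not the actual implementation:

```cpp
// Hypothetical sketch modeled on the custom autograd functions declared in
// torch_xla/csrc/aten_autograd_ops.h; names and bodies are illustrative only.
#include <torch/torch.h>
#include <string>
#include <vector>

// Assumed helper returning {grad_a, grad_b} for a two-operand equation
// (see the einsum_backward_2op sketch in the earlier comment).
std::vector<at::Tensor> einsum_backward_2op(const at::Tensor& grad_out,
                                            const std::string& equation,
                                            const at::Tensor& a,
                                            const at::Tensor& b);

struct EinsumAutogradFunction
    : public torch::autograd::Function<EinsumAutogradFunction> {
  static torch::Tensor forward(torch::autograd::AutogradContext* ctx,
                               std::string equation,
                               torch::Tensor a,
                               torch::Tensor b) {
    ctx->saved_data["equation"] = equation;
    ctx->save_for_backward({a, b});
    // The real op would call the XLA einsum lowering directly; at::einsum
    // stands in here for illustration.
    return at::einsum(equation, {a, b});
  }

  static torch::autograd::variable_list backward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::variable_list grad_output) {
    std::string equation = ctx->saved_data["equation"].toStringRef();
    auto saved = ctx->get_saved_variables();
    auto grads =
        einsum_backward_2op(grad_output[0], equation, saved[0], saved[1]);
    // One slot per forward argument; the non-tensor `equation` slot gets
    // an undefined tensor.
    return {torch::Tensor(), grads[0], grads[1]};
  }
};
```

The XLA einsum entry point would then route through something like `EinsumAutogradFunction::apply(equation, a, b)` rather than calling the lowering directly.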
After #3843, we will need changes to support (1) einsum on more than 2 inputs and (2) einsum on equations like …
#1225
XLA has an optimized einsum implementation that we can use. Requires a change in upstream.