Optimize reduce(reshape_1D) #5748
Conversation
When reducing a 1D tensor, the order of elements doesn't matter. This allows us to use a more relaxed version of reshape.
    return failure();
  }
  rewriter.modifyOpInPlace(reshapeOp,
                           [&]() { reshapeOp.setAllowReorder(true); });
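For context, the diff fragment above only shows the final in-place update. Below is a minimal sketch of what the full pattern could look like, assuming Triton's `ReshapeOp`/`ReduceOp` and the `getAllowReorder`/`setAllowReorder` accessors plus standard MLIR pattern boilerplate; the actual match and guards in the PR may differ.

```cpp
#include "mlir/IR/PatternMatch.h"
#include "triton/Dialect/Triton/IR/Dialect.h"

namespace {
// Hypothetical pattern name; the real pass may organize this differently.
struct RelaxReshapeBeforeReduce
    : public mlir::OpRewritePattern<mlir::triton::ReshapeOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::triton::ReshapeOp reshapeOp,
                  mlir::PatternRewriter &rewriter) const override {
    // Already relaxed: nothing to do.
    if (reshapeOp.getAllowReorder())
      return mlir::failure();
    // Only applies when the reshape produces a 1D tensor.
    auto resultTy = mlir::cast<mlir::RankedTensorType>(
        reshapeOp.getResult().getType());
    if (resultTy.getRank() != 1)
      return mlir::failure();
    // Every user must be a reduce, so the element order is unobservable.
    for (mlir::Operation *user : reshapeOp->getUsers())
      if (!mlir::isa<mlir::triton::ReduceOp>(user))
        return mlir::failure();
    // Element order doesn't matter for the reduction, so allow reordering.
    rewriter.modifyOpInPlace(reshapeOp,
                             [&]() { reshapeOp.setAllowReorder(true); });
    return mlir::success();
  }
};
} // namespace
```

Restricting the match to reduce-only users is exactly what the review thread below debates: creating a second, relaxed reshape for mixed users would lift that restriction, at the cost of the extra pressure discussed later.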
SGTM, but is there any harm in creating two reshape ops if there are non-reduce users? I think with linear layouts an allow-reorder reshape should always be a no-op.
Oh, and this pattern works for HistogramOp as well.
SGTM, but is there any harm in creating two reshape ops if there are non-reduce users? I think with linear layouts an allow-reorder reshape should always be a no-op.
The potential harm is extra pressure if this makes the tensor exist with different layouts. We kind of already have this problem when doing rematerialization in the layout propagation pass, but to be conservative I would prefer avoiding it until it is proven to help.
Oh, and this pattern works for HistogramOp as well.
Good point.
The potential harm is extra pressure if this makes the tensor exist with different layouts.
Hrm okay. It's a bit annoying that we are just papering over a flaw in the layout propagation pass, but hopefully we can resolve that at some point.
Well, if we create two reshapes, with and without allow_reorder, there is likely to be extra pressure whatever we do in layout propagation. What I meant is that layout propagation already does some of this, but it is not great at it.
I might be over-cautious, but we can revisit based on findings.