[TorchToLinalg] Direct lowering from Torch to Linalg for torch.aten.convolution_backward #4384
Conversation
@zjgarvey hey! May I ask you to take a look when you're available? Thank you in advance for the review.
zjgarvey
left a comment
Nice! This is an excellent start.
We need to keep the existing decomposition for other backends. I have a few other comments for you to look at, but that's the biggest blocker right now.
a-sidorova
left a comment
@zjgarvey thank you for the review! I have applied your comments in the latest commit. Could you please take a look at the changes one more time?
zjgarvey
left a comment
A few more comments before we merge. Sorry it took me a while to get back to this.
projects/pt1/python/torch_mlir_e2e_test/configs/jit_importer_backend.py
a-sidorova
left a comment
@zjgarvey thank you for reviewing one more time! I applied your comments. May I ask you to take a look again?
projects/pt1/python/torch_mlir_e2e_test/configs/jit_importer_backend.py
zjgarvey
left a comment
I think this is good, with the caveat that you revert the commit changing `OpTy::create` -> `rewriter.create`, since `rewriter.create` is indeed deprecated now. It's not a huge deal, since we will eventually need to go through the whole codebase and update this, but it would be good to do before merging.
Additionally, feel free to restore the llvm_unreachable if you think it's appropriate.
Thank you for the review! I reverted both commits. May I ask you to relaunch CI, please? It should pass now.
https://github.com/llvm/torch-mlir/actions/runs/20088891516/job/57707427422?pr=4384#step:8:4063 I think this is a quick thing to fix.
My bad, I missed this error among the other earlier errors. Fixed in 24b0df0. Could you please relaunch CI one more time? 😃
I just did, but we need you to get triage access so you can trigger builds. I'll ask around.
I'll add her to triage. I should have done that a while ago.
@zjgarvey thanks! I pushed a commit with updates to the xfail sets; CI was launched automatically.
There appears to be a hang in the fx importer tests. If you know of any tests that would cause a hang, you might need to add them to the crashing set instead of the xfail set.
Description:
- Direct lowering of `torch.aten.convolution_backward` from Torch to Linalg. Enabled this pass by default. The pass generates `linalg.generic` ops instead of `linalg.conv_<>` for better lowering.
- `DecomposeAtenConvolutionBackwardOp` from `Torch/Transforms/DecomposeComplexOps.cpp`.
- `convolution_backward.mlir`. Also added more test cases for better test coverage.

Issue: