
Conversation

a-sidorova commented Nov 20, 2025

Description:

  • Added a direct lowering pass for torch.aten.convolution_backward from Torch to Linalg and enabled it by default. The lowering generates linalg.generic ops instead of the named linalg.conv_<> ops for better downstream lowering (a sketch of the op's semantics follows this list).
  • Removed the previous decomposition DecomposeAtenConvolutionBackwardOp from Torch/Transforms/DecomposeComplexOps.cpp.
  • Added new lit tests for backward convolution in a separate file, convolution_backward.mlir, along with additional cases for better coverage.
  • Added new e2e tests for backward convolution to improve coverage.
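
For context, the op being lowered computes the gradients of a convolution with respect to its input, weight, and bias. The following minimal PyTorch sketch shows its semantics; the shapes, strides, and the autograd cross-check are illustrative assumptions, not taken from this PR:

```python
# Minimal sketch of the op this pass lowers. Shapes and the autograd
# comparison below are illustrative assumptions.
import torch

x = torch.randn(2, 3, 8, 8, requires_grad=True)   # NCHW input
w = torch.randn(4, 3, 3, 3, requires_grad=True)   # OIHW weight
y = torch.nn.functional.conv2d(x, w, stride=1, padding=1)
grad_out = torch.randn_like(y)

# Reference gradients via autograd.
gx_ref, gw_ref = torch.autograd.grad(y, (x, w), grad_out)

# The same gradients via the backward op that this pass lowers to linalg.generic.
gx, gw, _ = torch.ops.aten.convolution_backward(
    grad_out, x, w,
    None,                 # bias_sizes (no bias in this example)
    [1, 1],               # stride
    [1, 1],               # padding
    [1, 1],               # dilation
    False,                # transposed
    [0, 0],               # output_padding
    1,                    # groups
    [True, True, False],  # output_mask: (grad_input, grad_weight, grad_bias)
)
assert torch.allclose(gx, gx_ref, atol=1e-5)
assert torch.allclose(gw, gw_ref, atol=1e-5)
```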

Issue:

a-sidorova force-pushed the feature/linalg_conv_bwd branch from 4f1cb20 to 8e2b616 on November 21, 2025 13:31
a-sidorova marked this pull request as ready for review November 21, 2025 13:36
@a-sidorova

@zjgarvey hey! May I ask you to take a look when you're available? Thank you in advance for the review.

zjgarvey self-requested a review November 21, 2025 19:36
zjgarvey left a comment

Nice! This is an excellent start.

We need to keep the existing decomposition for other backends. I have a few other comments for you to look at, but that's the biggest blocker right now.

a-sidorova left a comment

@zjgarvey thank you for the review! I have addressed your comments in the latest commit. Could you please take a look at the changes one more time?

a-sidorova requested a review from zjgarvey November 25, 2025 05:09
a-sidorova force-pushed the feature/linalg_conv_bwd branch from b20062d to 30dac4b on November 25, 2025 08:19
a-sidorova force-pushed the feature/linalg_conv_bwd branch from 30dac4b to 31eac08 on November 25, 2025 08:25
zjgarvey left a comment

A few more comments before we merge. Sorry it took a while to get back to this.

a-sidorova force-pushed the feature/linalg_conv_bwd branch from 31eac08 to 3a30a55 on December 9, 2025 10:08
a-sidorova left a comment

@zjgarvey thank you for another review! I have addressed your comments; may I ask you to take a look again?

a-sidorova requested a review from zjgarvey December 9, 2025 11:15
zjgarvey left a comment

I think this is good with the caveat that you revert the commit changing OpTy::create -> rewriter.create as rewriter.create is indeed deprecated now. It's not a huge deal since we will need to go through the whole codebase and update this eventually, but would be good to do this before merging.

Additionally, feel free to restore the llvm_unreachable if you think it's appropriate.

@a-sidorova

I think this is good with the caveat that you revert the commit changing OpTy::create -> rewriter.create as rewriter.create is indeed deprecated now. It's not a huge deal since we will need to go through the whole codebase and update this eventually, but would be good to do this before merging.

Additionally, feel free to restore the llvm_unreachable if you think it's appropriate.

Thank you for the review! I reverted both commits. May I ask you to relaunch the CI, please? It should pass now.

@zjgarvey

https://github.com/llvm/torch-mlir/actions/runs/20088891516/job/57707427422?pr=4384#step:8:4063

I think this is a quick thing to fix.

@a-sidorova

https://github.com/llvm/torch-mlir/actions/runs/20088891516/job/57707427422?pr=4384#step:8:4063

I think this is a quick thing to fix.

My bad, I missed this error among the other earlier ones. Fixed in 24b0df0. Could you please relaunch the CI one more time? 😃

@sjain-stanford

Could you please relaunch the CI one more time? 😃

I just did, but we need you to get triage access so you can trigger builds. I'll ask around.

@zjgarvey

I'll add her to triage. Should have done that a while ago.

@a-sidorova

I'll add her to triage. Should have done that a while ago.

@zjgarvey thanks! I pushed a commit updating the xfail sets; CI was launched automatically.

@zjgarvey

There appears to be a hang on the fx importer tests. If you know of any that would cause a hang, you might need to add them to the crashing set instead of xfail.

@a-sidorova

a-sidorova commented Dec 12, 2025

@zjgarvey I corrected the xfail and crashing test sets for the e2e configs in 3d1fa85; a dilated backward convolution test was hanging somewhere. CI has now passed successfully and we can merge this PR into main.

zjgarvey self-requested a review December 12, 2025 15:31
zjgarvey merged commit 3cebce2 into llvm:main Dec 12, 2025
3 checks passed