Conversation

@gsprochette (Collaborator)
Description

This PR unpins the torch version from "torch==2.7.0" to "torch>=2.7.0".

The version was pinned mainly to match the torch version the stable-fast wheels were built against, so merging this PR will make stable-fast installs unstable (via the stable-fast and full extras) until we upload the new wheels (currently in testing).

Related Issue

Fixes #(issue number)

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Ran the test suite on torch 2.9 and verified that we get the same results as before.

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Additional Notes

@cursor (bot) left a comment

```diff
 dependencies = [
-    "torch==2.7.0",
     "torchvision==0.22.0",
+    "torch>=2.7.0",
```

Bug: Unpinned torch conflicts with torch2.7-specific gptqmodel wheel

Unpinning torch to >=2.7.0 creates a version mismatch with the gptqmodel==4.0.0.dev0+cu126torch2.7 dependency (in the gptq extra). The gptqmodel wheel's version string explicitly indicates it was built for torch 2.7 (the torch2.7 suffix), but users may now install torch 2.8+ while getting a gptqmodel binary compiled against torch 2.7's ABI. Because the torch version and the gptqmodel extension are then binary-incompatible, using GPTQ functionality can cause runtime errors or crashes.
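One way to surface this mismatch early, rather than crashing inside GPTQ code, would be a runtime guard. The sketch below is illustrative only: it assumes gptqmodel keeps encoding its build target in the local version segment (e.g. `4.0.0.dev0+cu126torch2.7`), and the helper names are hypothetical, not part of any existing API.

```python
# Hedged sketch: detect a torch/gptqmodel ABI mismatch from the wheel's
# version string. Assumes the "torchX.Y" suffix convention shown above.
import re


def wheel_torch_target(wheel_version: str) -> "str | None":
    """Extract the torch major.minor a wheel was built against, if encoded."""
    match = re.search(r"torch(\d+\.\d+)", wheel_version)
    return match.group(1) if match else None


def abi_compatible(wheel_version: str, torch_version: str) -> bool:
    """True if the wheel's torch target matches the installed torch major.minor."""
    target = wheel_torch_target(wheel_version)
    if target is None:
        return True  # no target encoded in the version string; nothing to check
    # Drop any local segment ("+cu126") and keep only major.minor.
    installed = ".".join(torch_version.split("+")[0].split(".")[:2])
    return target == installed
```

Such a check could raise a clear error (or warning) at import time instead of letting users hit an opaque extension crash.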


@gsprochette (Collaborator, Author)

@ParagEkbote you looked at unpinning the torch version in another PR, would you like to review this?
One flaw of this PR is that if torch isn't pinned we can't pin torchao either, but the pyproject.toml file doesn't really allow pinning torchao conditionally on the torch version, so we'll just have to warn the user and rely on them?
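The "warn the user" approach could look like the sketch below: a hand-maintained table of known-good torch/torchao pairings checked at import time. The table contents and function name are assumptions for illustration; the real pairings would need to come from the torchao release notes.

```python
# Hedged sketch of an import-time compatibility warning, assuming a
# hand-maintained table of torch -> torchao pairings. The pairings below
# are illustrative placeholders, not verified version requirements.
import warnings

# Hypothetical known-good pairings: torch (major, minor) -> torchao version.
KNOWN_GOOD = {
    (2, 7): "0.12.0",
    (2, 8): "0.13.0",
    (2, 9): "0.14.1",
}


def check_torchao_pairing(torch_version: str, torchao_version: str) -> bool:
    """Return True if the installed pair is known-good, else warn and return False."""
    # Drop any local segment ("+cu126") and parse major.minor.
    major, minor = (int(p) for p in torch_version.split("+")[0].split(".")[:2])
    expected = KNOWN_GOOD.get((major, minor))
    if expected is None or torchao_version != expected:
        warnings.warn(
            f"torchao=={torchao_version} has not been validated against "
            f"torch=={torch_version} (expected torchao=={expected}); "
            "quantization may misbehave.",
            stacklevel=2,
        )
        return False
    return True
```

This keeps the install flexible while still telling users exactly which combination was tested.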

@ParagEkbote (Contributor)

ParagEkbote commented Dec 17, 2025

> @ParagEkbote you looked at unpinning the torch version in another PR, would you like to review this? One flaw of this PR is that if torch isn't pinned we can't pin torchao either, but the pyproject.toml file doesn't really allow pinning torchao conditionally on the torch version, so we'll just have to warn the user and rely on them?

In this PR, why are we aiming to unpin the torch version for future builds, given that torch has platform-specific constraints? Would it be simpler to specify a minimum version of torchao (>0.12.0) and bump the torch pin on every minor release?

We could also define pytorch extras in optional dependencies for the users who are willing to try out torch==2.8.0 or newer versions:

```toml
[project.optional-dependencies]
torch29 = [
  "torch==2.9.0",
  "torchao==0.14.1",
]
torch28 = [
  "torch==2.8.0",
  "torchao==0.13.0",
]
```

Another point to consider: if we unpin torchao itself, it will lead to breakages, since the upstream API has changed compared to the quantizer defined in pruna.

WDYT?

cc: @gsprochette

3 participants