feat!: Remove ORTModule, torch-ort, eager mode, and lazy tensor #28058
Draft
MaanavD wants to merge 3 commits into microsoft:main from
Conversation
BREAKING CHANGE: Remove all ORTModule, torch-ort, eager mode, and lazy
tensor code from ONNX Runtime. The onnxruntime-training package with
ORTModule was deprecated over a year ago.
What's removed:
- orttraining/orttraining/python/training/ortmodule/ (ORTModule frontend)
- orttraining/orttraining/python/training/amp/ (mixed precision)
- orttraining/orttraining/python/training/experimental/ (experimental features)
- orttraining/orttraining/python/training/optim/ (PyTorch optimizers)
- orttraining/orttraining/python/training/ort_triton/ (Triton integration)
- orttraining/orttraining/python/training/utils/ (ORTModule utilities)
- orttraining/orttraining/lazy_tensor/ (LazyTensor backend)
- orttraining/orttraining/models/ (sample training models)
- orttraining/orttraining/core/ (most of it, except checkpoint_common
and training_op_defs needed by training_api/training_ops)
- onnxruntime/core/eager/ (eager execution)
- onnxruntime/python/torch_cpp_extensions/ (aten op executor)
- orttraining/orttraining/training_ops/{cpu,cuda}/torch/ (PythonOp kernels)
- All ORTModule docs, tests, and CI references
- CMake: ENABLE_TRAINING, ENABLE_TRAINING_TORCH_INTEROP, ENABLE_LAZY_TENSOR
- cmake/onnxruntime_training.cmake (entire file)
What's kept:
- training_api (C/C++ on-device training API)
- training.api Python bindings
- training.onnxblock
- training.artifacts
- training_ops (gradient kernels)
- --enable_training_apis and --enable_training_ops build flags
- onnxruntime-training PyPI package name (for training_api)
--enable_training now prints a deprecation warning and redirects to
--enable_training_apis + --enable_training_ops.
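The flag redirect described above can be illustrated with a small sketch. This is not the actual `build.py` code, only a hedged model of the pattern: a deprecated flag warns and enables its two replacement flags.

```python
import argparse
import warnings

# Sketch (hypothetical, not ONNX Runtime's real build.py): a deprecated
# --enable_training flag that redirects to --enable_training_apis and
# --enable_training_ops, mirroring the behavior this PR describes.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--enable_training", action="store_true",
    help="DEPRECATED: use --enable_training_apis and --enable_training_ops",
)
parser.add_argument("--enable_training_apis", action="store_true")
parser.add_argument("--enable_training_ops", action="store_true")


def normalize(args):
    """Map the deprecated flag onto its replacements, warning once."""
    if args.enable_training:
        warnings.warn(
            "--enable_training is deprecated; enabling "
            "--enable_training_apis and --enable_training_ops instead.",
            DeprecationWarning,
        )
        args.enable_training_apis = True
        args.enable_training_ops = True
    return args


args = normalize(parser.parse_args(["--enable_training"]))
```

With this mapping, old build invocations keep working while users are nudged toward the two narrower flags.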
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add stub __init__.py files for the ortmodule, amp, optim, experimental, ort_triton, and utils packages. These raise ImportError with a clear message explaining the removal and suggesting pinning onnxruntime-training<=1.26. The stub packages are also registered in setup.py so they ship in the wheel.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Restore all deleted source files so they remain browsable in the repository. The code is not compiled (CMake excludes it) and not shipped in packages (setup.py only lists stub packages).

- Restore 353 source files (Python, C++, docs, tests)
- __init__.py stubs still raise ImportError with deprecation message
- Add deprecation banners to ORTModule documentation

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
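The stub-module pattern from these commits can be sketched in isolation. The stub body below is taken from the amp diff in this PR; the `load_stub` helper is hypothetical scaffolding that executes the stub source the way the import machinery would execute an `__init__.py`.

```python
import types

# Stub body as shipped for onnxruntime.training.amp (per this PR's diff):
# the module raises ImportError on import with a pinning hint.
STUB_SOURCE = '''
raise ImportError(
    "onnxruntime.training.amp has been removed as of v1.27. "
    "It was part of the deprecated ORTModule frontend. "
    "Pin onnxruntime-training<=1.26 if you still need it."
)
'''


def load_stub(name, source):
    """Hypothetical helper: run stub source as a module body.

    Mimics importing a package whose __init__.py immediately raises,
    so the ImportError surfaces to the caller.
    """
    module = types.ModuleType(name)
    exec(compile(source, f"{name}/__init__.py", "exec"), module.__dict__)


try:
    load_stub("onnxruntime.training.amp", STUB_SOURCE)
    message = ""
except ImportError as exc:
    message = str(exc)
```

Because the raise happens at module-body level, any import form (`import onnxruntime.training.amp`, `from onnxruntime.training import amp`) fails with the same message.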
amp/__init__.py:

@@ -1 +1,5 @@
- from .loss_scaler import DynamicLossScaler, LossScaler  # noqa: F401
+ raise ImportError(
+     "onnxruntime.training.amp has been removed as of v1.27. "
+     "It was part of the deprecated ORTModule frontend. "
+     "Pin onnxruntime-training<=1.26 if you still need it."
+ )
(no newline at end of file)

experimental/__init__.py:

@@ -1 +1,5 @@
- from .gradient_graph._gradient_graph_tools import export_gradient_graph  # noqa: F401
+ raise ImportError(

ortmodule/__init__.py:

- # ORTModule must be loaded only after all validation passes
- from .ortmodule import ORTModule  # noqa: E402, F401
+ raise ImportError(
+     "ORTModule has been removed from onnxruntime-training as of v1.27. "
+     "There is no replacement. Pin onnxruntime-training<=1.26 if you still need it."
+ )
(no newline at end of file)

utils/__init__.py:

-     "torch_nvtx_range_push",
-     "unflatten_data_using_schema",
- ]
+ raise ImportError(
+     "onnxruntime.training.utils has been removed as of v1.27. "
+     "It was part of the deprecated ORTModule frontend. "
+     "Pin onnxruntime-training<=1.26 if you still need it."
+ )
(no newline at end of file)
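For downstream users, the practical consequence of these stubs is that imports fail loudly. A minimal defensive-import sketch (not from the PR; the variable names are illustrative) shows how code can detect the removal. Note that `ModuleNotFoundError` is a subclass of `ImportError`, so the same guard also covers environments where onnxruntime is not installed at all.

```python
# Hedged sketch: guard an import of the removed onnxruntime.training.amp
# package. The except branch runs both when the v1.27 stub raises
# ImportError and when onnxruntime is absent entirely.
try:
    from onnxruntime.training import amp  # stubbed to raise as of v1.27

    HAVE_ORT_AMP = True
    AMP_REMOVAL_REASON = ""
except ImportError as exc:
    HAVE_ORT_AMP = False
    AMP_REMOVAL_REASON = str(exc)  # carries the stub's pinning hint, if any
```

Code that previously relied on `amp.DynamicLossScaler` would branch on `HAVE_ORT_AMP` or surface `AMP_REMOVAL_REASON` to the user.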
BREAKING CHANGE: Disable ORTModule / torch-ort (keep source for reference)
onnxruntime-training with ORTModule was deprecated over a year ago, but the deprecation was never enforced in code. This PR disables all ORTModule code from building and shipping, while keeping the source files in the repository for reference.

Approach: keep but disable

- `from onnxruntime.training.ortmodule import ORTModule` raises `ImportError` with a clear message

What's disabled

- `onnxruntime.training.ortmodule` and all sub-packages
- `training.amp`, `training.optim`, `training.experimental`, `training.ort_triton`, `training.utils`
- `lazy_tensor/`, `models/`, `core/agent/`, `core/framework/torch/`, `core/session/`, `core/optimizer/`, PythonOp kernels
- `ENABLE_TRAINING`, `ENABLE_TRAINING_TORCH_INTEROP`, `ENABLE_LAZY_TENSOR`

What's unchanged

- `training_api` — C/C++ on-device training API
- `training.api` — Python bindings
- `training.onnxblock` / `training.artifacts`
- `training_ops` — gradient kernels
- `onnxruntime-training` PyPI package name

Deprecation UX
All 5 ORTModule docs have deprecation banners added.
Build flag changes
- `--enable_training` now prints a deprecation warning and redirects to `--enable_training_apis` + `--enable_training_ops`

Verification needed

- Build with `--enable_training_apis --enable_training_ops` succeeds
- `import onnxruntime` works
- `from onnxruntime.training.api import *` works
- `from onnxruntime.training.ortmodule import ORTModule` raises a clear `ImportError`