Commit 0ba24a3

Remove FP16_Optimizer patch for DeepSpeed
DeepSpeed already includes the same FusedAdam FP16_Optimizer, originally from NVIDIA/apex: https://github.com/deepspeedai/DeepSpeed/blob/master/deepspeed/runtime/fp16/fused_optimizer.py

Currently this line only produces a warning:

```
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/onnxruntime/training/optim/_modifier_registry.py:56: UserWarning: Skip modifying optimizer because of optimizer name not found in the registry: accelerate.utils.deepspeed.DeepSpeedOptimizerWrapper
```

In other words, the FP16 optimizer from onnxruntime (https://github.com/microsoft/onnxruntime/blob/main/orttraining/orttraining/python/training/optim/fp16_optimizer.py) never actually wraps the DeepSpeed fused Adam optimizer, so the line is redundant.
1 parent 3adbe7c commit 0ba24a3
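
As a minimal sketch of why the removed call was a no-op (illustrative only, not part of the commit): any optimizer whose class is not in onnxruntime's modifier registry is expected to be handed back unchanged, with exactly the warning quoted above. Under Accelerate + DeepSpeed, `self.optimizer` is an `accelerate.utils.deepspeed.DeepSpeedOptimizerWrapper`, which falls into that path. The plain SGD optimizer below is only a stand-in to trigger the same code path without a full DeepSpeed setup.

```python
# Illustrative sketch: assumes the onnxruntime-training package is installed.
# A plain torch optimizer is not in the modifier registry either, so it takes
# the same "Skip modifying optimizer" path as DeepSpeedOptimizerWrapper.
import torch
from onnxruntime.training.optim.fp16_optimizer import FP16_Optimizer

opt = torch.optim.SGD(torch.nn.Linear(4, 4).parameters(), lr=0.1)
maybe_wrapped = FP16_Optimizer(opt)  # expected: emits the UserWarning and skips modification
print(maybe_wrapped is opt)          # expected: True, the optimizer comes back unmodified
```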

File tree

1 file changed (+0, -6 lines)


optimum/onnxruntime/trainer.py (-6 lines)
```diff
@@ -572,12 +572,6 @@ def _inner_training_loop(
                 )
             self.model = unwrap_model(model)
 
-            # ORT optimized FP16 optimizer for Deepspeed training
-            if self.is_deepspeed_enabled and args.fp16:
-                from onnxruntime.training.optim.fp16_optimizer import FP16_Optimizer
-
-                self.optimizer = FP16_Optimizer(self.optimizer)
-
         if self.is_fsdp_enabled:
             self.model = model
 
```
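
For context on why no ORT-side wrapping is needed, here is a hedged sketch (not from the commit) of checking that DeepSpeed already performs the FP16 wrapping itself when fp16 is enabled. The toy model and config below are assumptions for illustration, and the script would typically be run under a DeepSpeed launcher (e.g. `deepspeed script.py`).

```python
# Hedged sketch: with fp16 enabled, DeepSpeed resolves the "Adam" optimizer to
# FusedAdam and wraps it in its own FP16_Optimizer
# (deepspeed/runtime/fp16/fused_optimizer.py), so no extra wrapping is required.
import torch
import deepspeed

model = torch.nn.Linear(16, 16)
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "fp16": {"enabled": True},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
print(type(optimizer))  # expected: deepspeed.runtime.fp16.fused_optimizer.FP16_Optimizer
```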