```diff
@@ -59,7 +62,7 @@ The [export](https://huggingface.co/docs/optimum/exporters/overview) and optimiz
 
 ### OpenVINO
 
-This requires to install the OpenVINO extra by doing `pip install optimum[openvino,nncf]`
+This requires to install the OpenVINO extra by doing `pip install --upgrade-strategy eager optimum[openvino,nncf]`
 
 To load a model and run inference with OpenVINO Runtime, you can just replace your `AutoModelForXxx` class with the corresponding `OVModelForXxx` class. To load a PyTorch checkpoint and convert it to the OpenVINO format on-the-fly, you can set `export=True` when loading your model.
 
```
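For reference, the drop-in replacement described in this hunk looks roughly like the sketch below. It assumes the `OVModelForSequenceClassification` class from `optimum.intel`; the checkpoint id is purely illustrative and not part of this diff.

```python
# Minimal sketch of swapping an AutoModelForXxx class for its OVModelForXxx counterpart.
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO Runtime inference through Optimum."))
```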
```diff
@@ -82,7 +85,7 @@ You can find more examples in the [documentation](https://huggingface.co/docs/op
 
 ### Neural Compressor
 
-This requires to install the Neural Compressor extra by doing `pip install optimum[neural-compressor]`
+This requires to install the Neural Compressor extra by doing `pip install --upgrade-strategy eager optimum[neural-compressor]`
 
 Dynamic quantization can be applied on your model:
 
```
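The "dynamic quantization" line this hunk touches refers to the post-training dynamic quantization flow in `optimum-intel`. A hedged sketch, assuming the `INCQuantizer` API and `PostTrainingQuantConfig` from `neural_compressor` (exact names may vary between versions):

```python
# Sketch of post-training dynamic quantization with Intel Neural Compressor.
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Dynamic quantization requires no calibration dataset.
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(quantization_config=quantization_config, save_directory="quantized_model")
```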
````diff
@@ -167,7 +170,7 @@ We support many providers:
 
 ### Habana
 
-This requires to install the Habana extra by doing `pip install optimum[habana]`
+This requires to install the Habana extra by doing `pip install --upgrade-strategy eager optimum[habana]`
 
 ```diff
 - from transformers import Trainer, TrainingArguments
````
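The `diff` block that begins at the end of this hunk is cut off by the hunk context; it illustrates replacing `transformers.Trainer` with the Habana equivalents. A hedged sketch of that pattern, assuming the `GaudiTrainer` and `GaudiTrainingArguments` classes from `optimum-habana` with illustrative model and Gaudi configuration values:

```python
# Sketch of the Trainer -> GaudiTrainer drop-in replacement (illustrative values).
from transformers import AutoModelForSequenceClassification
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,                               # run on Habana Gaudi devices
    use_lazy_mode=True,                            # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # illustrative Gaudi configuration
)

# GaudiTrainer is used exactly like transformers.Trainer; train/eval datasets
# would be passed in the same way before calling trainer.train().
trainer = GaudiTrainer(
    model=model,
    args=training_args,
)
```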