README.md (+2 -3)
@@ -72,7 +72,7 @@ Below are examples of how to use OpenVINO and its [NNCF](https://docs.openvino.a
#### Export:
- It is possible to export your model to the [OpenVINO IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) format with the CLI :
+ It is also possible to export your model to the [OpenVINO IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) format with the CLI:
```plain
optimum-cli export openvino --model gpt2 ov_model
```
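Once exported, the IR model can be loaded back for inference. A minimal sketch, assuming the `ov_model` directory produced by the command above; `OVModelForCausalLM` mirrors the usual `transformers` Auto classes:

```python
# Load the OpenVINO IR model exported by the CLI command above
# and run text generation with it.
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = OVModelForCausalLM.from_pretrained("ov_model")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```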
@@ -223,15 +223,14 @@ To load your IPEX model, you can just replace your `AutoModelForXxx` class with
For more details, please refer to the [documentation](https://intel.github.io/intel-extension-for-pytorch/#introduction).
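To illustrate the class swap described in the hunk context above, a minimal sketch of loading a model through an `IPEXModelForXxx` class; the model name is chosen for illustration:

```python
# Swap the transformers Auto class for the corresponding IPEX class;
# optimizations from intel-extension-for-pytorch are applied on load.
from transformers import AutoTokenizer, pipeline
from optimum.intel import IPEXModelForCausalLM

model = IPEXModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Hello, my name is")[0]["generated_text"])
```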
## Running the examples
- Check out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) directory to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.
+ Check out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) and [`notebooks`](https://github.com/huggingface/optimum-intel/tree/main/notebooks) directories to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.
Do not forget to install requirements for every example: