For more details, please refer to the [documentation](https://intel.github.io/intel-extension-for-pytorch/#introduction).
## Running the examples
Check out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) and [`notebooks`](https://github.com/huggingface/optimum-intel/tree/main/notebooks) directories to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.
Do not forget to install requirements for every example:
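A minimal sketch of the usual workflow, assuming each example directory ships its own `requirements.txt` (the folder placeholder is illustrative):

```shell
# Replace <example-folder> with the example you want to run;
# each example directory provides its own requirements.txt.
cd <example-folder>
pip install -r requirements.txt
```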
`float16`, `bfloat16` or `float32`: load in the specified dtype, ignoring the model's `config.torch_dtype` if one exists. If not specified, the model will be loaded in `float32`.
trust_remote_code (`bool`, *optional*):
    Allows the use of custom modeling code hosted in the model repository. This option should only be set for repositories you trust and whose code you have read, as it will execute arbitrary code present in the model repository on your local machine.
file_name (`str`, *optional*):
    The file name of the model to load. Overrides the default file name, allowing the model to be loaded under a different name.
"""
if use_auth_token is not None:
    warnings.warn(
        "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
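The deprecation handling above can be sketched as a small standalone pattern (the helper name and its exact behavior here are illustrative, not the actual Transformers implementation):

```python
import warnings


def resolve_token(token=None, use_auth_token=None):
    # Illustrative sketch of the deprecation pattern shown above:
    # warn when the old `use_auth_token` argument is used and fold
    # its value into the new `token` argument.
    if use_auth_token is not None:
        warnings.warn(
            "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. "
            "Please use `token` instead.",
            FutureWarning,
        )
        if token is not None:
            raise ValueError("Cannot pass both `token` and `use_auth_token`.")
        token = use_auth_token
    return token
```

Callers using the new argument pass through silently, while callers on the old argument still work but see a `FutureWarning`.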