Commit d31cfde

Add upgrade strategy installation instruction in doc (huggingface#1228)
* add upgrade strategy
* fix type
* add furiosa
1 parent 43e8004 commit d31cfde

2 files changed (+20 −15)

README.md (+12 −9)
@@ -16,21 +16,24 @@ If you'd like to use the accelerator-specific features of 🤗 Optimum, you can

 | Accelerator | Installation |
 |:-----------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------|
-| [ONNX Runtime](https://onnxruntime.ai/docs/) | `python -m pip install optimum[onnxruntime]` |
-| [Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `python -m pip install optimum[neural-compressor]`|
-| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `python -m pip install optimum[openvino,nncf]` |
-| [Habana Gaudi Processor (HPU)](https://habana.ai/training/) | `python -m pip install optimum[habana]` |
+| [ONNX Runtime](https://onnxruntime.ai/docs/) | `pip install --upgrade-strategy eager optimum[onnxruntime]` |
+| [Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `pip install --upgrade-strategy eager optimum[neural-compressor]`|
+| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `pip install --upgrade-strategy eager optimum[openvino,nncf]` |
+| [Habana Gaudi Processor (HPU)](https://habana.ai/training/) | `pip install --upgrade-strategy eager optimum[habana]` |
+| [FuriosaAI](https://www.furiosa.ai/) | `pip install --upgrade-strategy eager optimum[furiosa]` |
+
+The `--upgrade-strategy eager` option is needed to ensure the different packages are upgraded to the latest possible version.

 To install from source:

 ```bash
 python -m pip install git+https://github.com/huggingface/optimum.git
 ```

-For the accelerator-specific features, append `#egg=optimum[accelerator_type]` to the above command:
+For the accelerator-specific features, append `optimum[accelerator_type]` to the above command:

 ```bash
-python -m pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]
+python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git
 ```

 ## Accelerated Inference

@@ -59,7 +62,7 @@ The [export](https://huggingface.co/docs/optimum/exporters/overview) and optimiz

 ### OpenVINO

-This requires to install the OpenVINO extra by doing `pip install optimum[openvino,nncf]`
+This requires to install the OpenVINO extra by doing `pip install --upgrade-strategy eager optimum[openvino,nncf]`

 To load a model and run inference with OpenVINO Runtime, you can just replace your `AutoModelForXxx` class with the corresponding `OVModelForXxx` class. To load a PyTorch checkpoint and convert it to the OpenVINO format on-the-fly, you can set `export=True` when loading your model.

@@ -82,7 +85,7 @@ You can find more examples in the [documentation](https://huggingface.co/docs/op

 ### Neural Compressor

-This requires to install the Neural Compressor extra by doing `pip install optimum[neural-compressor]`
+This requires to install the Neural Compressor extra by doing `pip install --upgrade-strategy eager optimum[neural-compressor]`

 Dynamic quantization can be applied on your model:

@@ -167,7 +170,7 @@ We support many providers:

 ### Habana

-This requires to install the Habana extra by doing `pip install optimum[habana]`
+This requires to install the Habana extra by doing `pip install --upgrade-strategy eager optimum[habana]`

 ```diff
 - from transformers import Trainer, TrainingArguments
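
For reference, every new table entry follows the same pattern. The sketch below shows the ONNX Runtime variant; the quotes are an addition here (they keep shells such as zsh from expanding the square brackets) and are not part of the commit.

```bash
# Install Optimum together with the ONNX Runtime extra.
# --upgrade-strategy eager is the option this commit recommends so that the
# related Hugging Face packages are upgraded to the latest possible versions;
# the quotes only stop shells like zsh from globbing the brackets.
pip install --upgrade-strategy eager "optimum[onnxruntime]"
```

Some setups also pair the eager strategy with `-U`/`--upgrade`; that flag is likewise not part of this commit.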

docs/source/installation.mdx (+8 −6)
@@ -22,20 +22,22 @@ If you'd like to use the accelerator-specific features of 🤗 Optimum, you can

 | Accelerator | Installation |
 |:-----------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------|
-| [ONNX runtime](https://onnxruntime.ai/docs/) | `python -m pip install optimum[onnxruntime]` |
-| [Intel Neural Compressor (INC)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `python -m pip install optimum[neural-compressor]`|
-| [Intel OpenVINO](https://docs.openvino.ai/latest/index.html) | `python -m pip install optimum[openvino,nncf]` |
-| [Habana Gaudi Processor (HPU)](https://habana.ai/training/) | `python -m pip install optimum[habana]` |
+| [ONNX runtime](https://onnxruntime.ai/docs/) | `pip install --upgrade-strategy eager optimum[onnxruntime]`|
+| [Intel Neural Compressor (INC)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `pip install --upgrade-strategy eager optimum[neural-compressor]` |
+| [Intel OpenVINO](https://docs.openvino.ai/latest/index.html) | `pip install --upgrade-strategy eager optimum[openvino,nncf]` |
+| [Habana Gaudi Processor (HPU)](https://habana.ai/training/) | `pip install --upgrade-strategy eager optimum[habana]` |
+| [FuriosaAI](https://www.furiosa.ai/) | `pip install --upgrade-strategy eager optimum[furiosa]` |

+The `--upgrade-strategy eager` option is needed to ensure the different packages are upgraded to the latest possible version.

 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you can install the base library from source as follows:

 ```bash
 python -m pip install git+https://github.com/huggingface/optimum.git
 ```

-For the accelerator-specific features, you can install them by appending `#egg=optimum[accelerator_type]` to the `pip` command, e.g.
+For the accelerator-specific features, you can install them by appending `optimum[accelerator_type]` to the `pip` command, e.g.

 ```bash
-python -m pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]
+python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git
 ```
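
The move away from `#egg=` applies to every extra, not just `onnxruntime`. As an illustration that is not part of the commit, the same source-install pattern with the Habana extra would read:

```bash
# Install Optimum from the main branch together with the Habana extra,
# using the "package[extra]@VCS-URL" direct-reference form adopted in this commit.
python -m pip install "optimum[habana]@git+https://github.com/huggingface/optimum.git"
```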
