Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `AutoModelForXxx` class with the corresponding `ORTModelForXxx` class. To load your PyTorch model and convert it to ONNX on-the-fly, you can set `export=True`:
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

# Any text-classification checkpoint on the Hub works here
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = pipe("He never went out without a book under his arm")
```
More information on all the supported `ORTModelForXxx` classes can be found in our [documentation](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort).
### Diffusers models

Once your model has been [exported to the ONNX format](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), you can load it by replacing the `DiffusionPipeline` class with the corresponding `ORTDiffusionPipeline` class:

```python
from optimum.onnxruntime import ORTDiffusionPipeline

# Illustrative identifier; point this at your own ONNX export
model_id = "path/to/your/onnx-stable-diffusion"
pipeline = ORTDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

## Stable Diffusion XL

Before using `ORTStableDiffusionXLPipeline`, make sure to have `diffusers` and `invisible-watermark` installed. You can install the libraries as follows:

```bash
pip install diffusers
pip install invisible-watermark>=0.2.0
```

### Text-to-Image

Here is an example of how you can load an SDXL ONNX model from [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and run inference using ONNX Runtime:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
base = ORTStableDiffusionXLPipeline.from_pretrained(model_id)  # add export=True if the checkpoint is not yet in ONNX format
prompt = "sailing ship in storm by Leonardo da Vinci"
image = base(prompt).images[0]
```
## Converting your model to ONNX on-the-fly

In case your model wasn't already [converted to ONNX](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model), [`~optimum.onnxruntime.ORTModel`] includes a method to convert it on-the-fly. Simply pass `export=True` to the [`~optimum.onnxruntime.ORTModel.from_pretrained`] method, and your model will be loaded and converted to ONNX on-the-fly:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> # Load the model from the hub and export it to the ONNX format
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
```

### Image-to-Image

Here is an example of how you can load a PyTorch SDXL model, convert it to ONNX on-the-fly, and run inference using ONNX Runtime for *image-to-image*.
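A minimal sketch, assuming the SDXL refiner checkpoint as the image-to-image model and a local file as the initial image:

```python
from diffusers.utils import load_image
from optimum.onnxruntime import ORTStableDiffusionXLImg2ImgPipeline

# export=True converts the PyTorch checkpoint to ONNX on first load
model_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
pipeline = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, export=True)

# Any RGB image can serve as the starting point (illustrative local file)
init_image = load_image("generated_image.png").convert("RGB")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt, image=init_image).images[0]
```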
### Refining the image output

The image can be refined by making use of a model like [stabilityai/stable-diffusion-xl-refiner-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0). In this case, you only have to output the latents from the base model and pass them to the refiner.
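A minimal sketch of this base-plus-refiner flow, using the checkpoints from the examples above (variable names are illustrative):

```python
from optimum.onnxruntime import (
    ORTStableDiffusionXLImg2ImgPipeline,
    ORTStableDiffusionXLPipeline,
)

base = ORTStableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
refiner = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0", export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"

# Ask the base model for latents instead of a decoded image
latents = base(prompt, output_type="latent").images
# Let the refiner turn the latents into the final image
image = refiner(prompt, image=latents).images[0]
```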
## Pushing your model to the Hub

You can also call `push_to_hub` directly on your model to upload it to the [Hub](https://hf.co/models).
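A minimal sketch; the repository name is illustrative, and the exact `push_to_hub` signature may differ between `optimum` versions:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load a model and convert it to ONNX on-the-fly
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

# Save the converted model locally, then push it to the Hub
model.save_pretrained("a_local_path_for_converted_model")
model.push_to_hub("a_local_path_for_converted_model", repository_id="my-onnx-repo", use_auth_token=True)
```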
## Latent Consistency Models

Here is an example of how you can load a Latent Consistency Model (LCM) from [SimianLuo/LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) and run inference using ONNX Runtime.
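A minimal sketch, assuming the dedicated `ORTLatentConsistencyModelPipeline` class is available in your installed version of Optimum (the checkpoint is converted to ONNX on first load with `export=True`):

```python
from optimum.onnxruntime import ORTLatentConsistencyModelPipeline

model_id = "SimianLuo/LCM_Dreamshaper_v7"
pipeline = ORTLatentConsistencyModelPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"
# LCMs need only a handful of denoising steps
images = pipeline(prompt, num_inference_steps=4, guidance_scale=8.0).images
```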