docs/source/bettertransformer/tutorials/convert.mdx (+3 -3)
@@ -45,7 +45,7 @@ Sometimes you can directly load your model on your GPU devices using `accelerate
## Step 2: Set your model on your preferred device
-If you did not used`device_map="auto"` to load your model (or if your model does not support `device_map="auto"`), you can manually set your model to a GPU:
+If you did not use `device_map="auto"` to load your model (or if your model does not support `device_map="auto"`), you can manually set your model to a GPU:
```python
>>> model = model.to(0) # or model.to("cuda:0")
```
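For context on the hunk above, here is a minimal end-to-end sketch of loading a model, converting it with `BetterTransformer.transform`, and then moving it to a GPU as in the snippet; the checkpoint name is a placeholder for illustration, not something taken from this diff:

```python
>>> from transformers import AutoModel
>>> from optimum.bettertransformer import BetterTransformer

>>> # Hypothetical checkpoint, used only for illustration
>>> model = AutoModel.from_pretrained("bert-base-uncased")

>>> # Convert the model to its BetterTransformer version
>>> model = BetterTransformer.transform(model)

>>> # Manually place the converted model on the first GPU, as in the snippet above
>>> model = model.to(0)  # or model.to("cuda:0")
```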
@@ -92,7 +92,7 @@ You can also use `transformers.pipeline` as usual and pass the converted model d
>>>...
```
-Please refer to the [official documentation of `pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines) for further usage. If you face into any issue, do not hesitate to open an isse on GitHub!
+Please refer to the [official documentation of `pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines) for further usage. If you run into any issue, do not hesitate to open an issue on GitHub!
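Since the elided context in this hunk only shows `>>>...`, here is a minimal sketch of passing a BetterTransformer-converted model to `transformers.pipeline`; the task and checkpoint name are assumptions made for illustration, not content from the diff:

```python
>>> from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline
>>> from optimum.bettertransformer import BetterTransformer

>>> # Hypothetical checkpoint, used only for illustration
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
>>> model = BetterTransformer.transform(model)

>>> # The converted model can be passed directly to the pipeline
>>> pipe = pipeline("fill-mask", model=model, tokenizer=tokenizer)
>>> pipe("I am a student at [MASK] University.")
```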
## Training compatibility
@@ -113,4 +113,4 @@ model = BetterTransformer.transform(model)
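The training hunk above ends at `model = BetterTransformer.transform(model)`. As a rough sketch of how a training setup might wrap that call, the checkpoint, the training loop, and the `BetterTransformer.reverse` step below are assumptions for illustration, not part of this diff:

```python
>>> from transformers import AutoModelForSequenceClassification
>>> from optimum.bettertransformer import BetterTransformer

>>> # Hypothetical checkpoint, used only for illustration
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
>>> model = BetterTransformer.transform(model)

>>> # ... run your usual training loop or `Trainer` on `model` here ...

>>> # Convert back to the canonical transformers model before saving,
>>> # assuming the installed optimum version exposes `BetterTransformer.reverse`
>>> model = BetterTransformer.reverse(model)
>>> model.save_pretrained("my-finetuned-model")
```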