@@ -171,6 +171,12 @@ def pipeline(
The task defining which pipeline will be returned. Currently accepted tasks are:

- `"text-generation"`: will return a [`TextGenerationPipeline`].
+ - `"fill-mask"`: will return a [`FillMaskPipeline`].
+ - `"question-answering"`: will return a [`QuestionAnsweringPipeline`].
+ - `"image-classification"`: will return an [`ImageClassificationPipeline`].
+ - `"text-classification"`: will return a [`TextClassificationPipeline`].
+ - `"token-classification"`: will return a [`TokenClassificationPipeline`].
+ - `"audio-classification"`: will return an [`AudioClassificationPipeline`].

model (`str` or [`PreTrainedModel`], *optional*):
The model that will be used by the pipeline to make predictions. This can be a model identifier or an
@@ -185,7 +191,7 @@ def pipeline(
is not specified or not a string, then the default tokenizer for `config` is loaded (if it is a string).
However, if `config` is also not given or not a string, then the default tokenizer for the given `task`
will be loaded.
- accelerator (`str`, *optional*, defaults to `"ipex"` ):
+ accelerator (`str`, *optional*):
The optimization backends, choose from ["ipex", "inc", "openvino"].
use_fast (`bool`, *optional*, defaults to `True`):
Whether or not to use a Fast tokenizer if possible (a [`PreTrainedTokenizerFast`]).