@@ -221,11 +221,11 @@ class OVWeightQuantizationConfig(OVQuantizationConfigBase):
                 - A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved
                     using the [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`.
         dataset (`str or List[str]`, *optional*):
-            The dataset used for data-aware compression or quantization with NNCF. You can provide your own dataset
-            in a list of strings or just use the one from the list ['wikitext2','c4','c4-new'] for language models
-            or ['conceptual_captions','laion/220k-GPT4Vision-captions-from-LIVIS','laion/filtered-wit'] for diffusion models .
-            Alternatively, you can provide data objects via `calibration_dataset` argument
-            of `OVQuantizer.quantize()` method.
+            The dataset used for data-aware compression with NNCF. For language models you can provide your own dataset
+            in a list of strings or just use the one from the list ['wikitext2','c4','c4-new']. For diffusion models it
+            must be one of ['conceptual_captions', 'laion/220k-GPT4Vision-captions-from-LIVIS', 'laion/filtered-wit'].
+            Alternatively, you can provide data objects via `calibration_dataset` argument of `OVQuantizer.quantize()`
+            method.
         ratio (`float`, defaults to 1.0):
             The ratio between baseline and backup precisions (e.g. 0.9 means 90% of layers quantized to INT4_ASYM
             and the rest to INT8_ASYM).
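To see how the documented `dataset` and `ratio` arguments come together, here is a minimal usage sketch. It assumes the config is consumed through `OVModelForCausalLM.from_pretrained` via its `quantization_config` argument, as in the optimum-intel weight-compression examples; the model id is only a placeholder, and keyword availability may vary across optimum-intel versions.

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Data-aware 4-bit weight compression calibrated on the built-in "wikitext2" dataset.
# ratio=0.9 asks NNCF to put roughly 90% of layers in INT4 and keep the rest in INT8.
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    dataset="wikitext2",
    ratio=0.9,
)

# "meta-llama/Llama-2-7b-hf" is a placeholder model id used for illustration only.
model = OVModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=quantization_config,
)
```

As the new docstring text notes, passing a dataset name here is interchangeable with supplying prepared data objects through the `calibration_dataset` argument of `OVQuantizer.quantize()` when you need full control over calibration samples.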