1 parent b114fdd commit 75b794b
docs/source/optimization_ov.mdx
@@ -74,7 +74,7 @@ model = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
> **NOTE:** `load_in_8bit` is enabled by default for models larger than 1 billion parameters.
-For the 4-bit weight quantization you can use yhe `quantization_config` to specify the optimization parameters, for example:
+For the 4-bit weight quantization you can use the `quantization_config` to specify the optimization parameters, for example:
```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
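# (The hunk is truncated at the import above. A minimal sketch of how the
# example likely continues, assuming the `bits` and `ratio` parameters of
# `OVWeightQuantizationConfig` and the `quantization_config` argument of
# `from_pretrained`; `model_id` is a placeholder, not taken from the diff.)
model_id = "HuggingFaceH4/zephyr-7b-beta"  # hypothetical model id for illustration

# Quantize weights to 4 bits; `ratio` sets the fraction of layers quantized
# to 4 bits, with the remainder kept at 8 bits to preserve accuracy.
quantization_config = OVWeightQuantizationConfig(bits=4, ratio=0.8)
model = OVModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
```

A mixed 4-/8-bit split via `ratio` is a common way to recover accuracy that full 4-bit quantization can lose, at the cost of a slightly larger model.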