
Commit 75b794b

typo

1 parent b114fdd commit 75b794b

1 file changed: +1 -1 lines changed


docs/source/optimization_ov.mdx (+1 -1)
@@ -74,7 +74,7 @@ model = OVModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
 
 > **NOTE:** `load_in_8bit` is enabled by default for models larger than 1 billion parameters.
 
-For the 4-bit weight quantization you can use yhe `quantization_config` to specify the optimization parameters, for example:
+For the 4-bit weight quantization you can use the `quantization_config` to specify the optimization parameters, for example:
 
 ```python
 from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
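For context, the corrected sentence introduces a 4-bit weight quantization example in the docs. A minimal sketch of what such a call might look like with `OVWeightQuantizationConfig` follows; the model id and the parameter values (`bits`, `sym`, `group_size`, `ratio`) are illustrative assumptions, not part of this commit:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Hypothetical model id used only for illustration.
model_id = "helenai/gpt2-ov"

# Assumed settings: 4-bit asymmetric weights, group size 128,
# and 80% of the layers compressed to 4 bit (the rest stay in 8 bit).
quantization_config = OVWeightQuantizationConfig(
    bits=4,
    sym=False,
    group_size=128,
    ratio=0.8,
)

# Weights are compressed while the model is being loaded/exported to OpenVINO.
model = OVModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config)
```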
