
Commit fa63e40: "Update description" (parent: 569fe61)

2 files changed: +8, -6 lines

optimum/intel/openvino/configuration.py (+4, -3)

@@ -865,9 +865,10 @@ def __init__(
     (1) weights of weighted layers to the precision given in the `weight_quantization_config`, and
     (2) weights and activations of other possible layers; precision is given in the `full_quantization_config`.
-    By default, all weighted layers are quantized in the first step. This leaves only non-weighted layers for the second step.
-    If some layers are instructed to be ignored in the first step with `weight_quantization_config.ignored_scope` parameter,
-    weights and activations of these layers are fully quantized to the precision given in the `full_quantization_config`.
+    By default, weights of all weighted layers are quantized in the first step. In the second step activations of
+    weighted and non-weighted layers are quantized. If some layers are instructed to be ignored in the first step
+    with `weight_quantization_config.ignored_scope` parameter, both weights and activations of these layers are
+    quantized to the precision given in the `full_quantization_config`.

     Args:
         weight_quantization_config (`OVWeightQuantizationConfig` or `dict`):
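The selection rule described by the updated docstring can be simulated with a small self-contained sketch. Note this is an illustration of the described behavior only; `select_layers` and the `(name, has_weights)` layer tuples are hypothetical and not part of the optimum-intel API:

```python
# Hypothetical simulation of the two-step mixed-quantization selection
# described in the docstring above. Not the optimum-intel implementation.

def select_layers(layers, ignored_scope):
    """Partition layers according to the described rules.

    Step 1: weights of every weighted layer NOT in `ignored_scope`
            get the `weight_quantization_config` precision.
    Step 2: activations of all layers are quantized; weighted layers
            that were ignored in step 1 get both weights and
            activations at the `full_quantization_config` precision.
    """
    step1_weights = []        # weights -> weight_quantization_config
    step2_weights = []        # weights -> full_quantization_config
    step2_activations = []    # activations -> full_quantization_config
    for name, has_weights in layers:
        if has_weights:
            if name in ignored_scope:
                step2_weights.append(name)
            else:
                step1_weights.append(name)
        step2_activations.append(name)
    return step1_weights, step2_weights, step2_activations


layers = [("linear1", True), ("relu", False), ("linear2", True)]
wq, fq, aq = select_layers(layers, ignored_scope={"linear2"})
print(wq)  # ['linear1']
print(fq)  # ['linear2']
print(aq)  # ['linear1', 'relu', 'linear2']
```

Ignoring `linear2` in step 1 moves both its weights and its activations under the full-quantization precision, which is exactly the behavioral change the new docstring wording makes explicit.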

optimum/intel/openvino/quantization.py (+4, -3)

@@ -1170,9 +1170,10 @@ def _mixed_quantization(
     (1) weights of weighted layers to the precision given in the `quantization_config.weight_quantization_config`, and
     (2) weights and activations of other possible layers; precision is given in the `quantization_config.full_quantization_config`.
-    By default, all weighted layers are quantized in the first step. This leaves only non-weighted layers for the second step.
-    If some weighted layers are instructed to be ignored in the first step with `weight_quantization_config.ignored_scope` parameter,
-    weights and activations of these layers are fully quantized to the precision given in the `quantization_config.full_quantization_config`.
+    By default, weights of all weighted layers are quantized in the first step. In the second step activations of
+    weighted and non-weighted layers are quantized. If some layers are instructed to be ignored in the first step
+    with `weight_quantization_config.ignored_scope` parameter, both weights and activations of these layers are
+    quantized to the precision given in the `full_quantization_config`.

     Args:
         model (`openvino.runtime.Model`):
