Commit a9958d0

Ensure that mul operators with shared initializer will not be absorbed in SmoothQuant (#2063)
Signed-off-by: duansheng.liu <44742794+duanshengliu@users.noreply.github.com>
1 parent 25d1af8 commit a9958d0

File tree

1 file changed (+3, −0 lines)


neural_compressor/adaptor/ox_utils/smooth_quant.py

```diff
@@ -295,6 +295,9 @@ def mul(node, scale):  # pragma: no cover
                 return False
             for inp in node.input:
                 if self.model.get_initializer(inp) is not None:
+                    # Ensure that mul operators with shared initializer will not be absorbed.
+                    if self.model.get_initializer_share_num(inp) > 1:
+                        return False
                     key = node.input[0].split("_smooth_output")[0]
                     tensor = self.model.get_initializer(inp)
                     new_tensor = (
```
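For intuition, here is a minimal, self-contained sketch of the check this commit adds. The `Node` and `Model` classes below are hypothetical stand-ins, not the actual neural-compressor API; only the share-count logic mirrors `get_initializer_share_num`. The point: an initializer consumed by more than one node must not absorb the smoothing scale, because folding the scale into it would silently rescale every other consumer too.

```python
# Hypothetical minimal model structures (NOT the neural-compressor classes);
# they exist only to illustrate the shared-initializer check.

class Node:
    def __init__(self, name, inputs):
        self.name = name
        self.input = inputs  # list of input tensor names


class Model:
    def __init__(self, nodes, initializers):
        self.nodes = nodes
        self.initializers = set(initializers)

    def get_initializer_share_num(self, name):
        # Count how many nodes consume the initializer `name`.
        if name not in self.initializers:
            return 0
        return sum(name in node.input for node in self.nodes)


def can_absorb(model, node):
    # A Mul may only absorb the smoothing scale if none of its
    # initializer inputs is shared with another node: rewriting a
    # shared weight would change the other consumers as well.
    for inp in node.input:
        if inp in model.initializers and model.get_initializer_share_num(inp) > 1:
            return False
    return True


mul_a = Node("mul_a", ["x", "shared_w"])
mul_b = Node("mul_b", ["y", "shared_w"])   # second consumer of shared_w
mul_c = Node("mul_c", ["z", "private_w"])
model = Model([mul_a, mul_b, mul_c], ["shared_w", "private_w"])

print(can_absorb(model, mul_a))  # False: "shared_w" has two consumers
print(can_absorb(model, mul_c))  # True: "private_w" is used only here
```

With the pre-commit behavior (no share check), `mul_a` would have been treated as absorbable, corrupting `mul_b`'s effective weight.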

0 commit comments