Commit cf9ad2f

clean code

Signed-off-by: Kaihui-intel <kaihui.tang@intel.com>

1 parent: d249af4

File tree

1 file changed: 0 additions, 2 deletions


test/3x/torch/quantization/weight_only/test_transformers.py

@@ -252,6 +252,4 @@ def test_vlm(self):
         # phi-3-vision-128k-instruct
         model_name = "microsoft/Phi-3-vision-128k-instruct"
         woq_model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config, attn_implementation='eager')
-
-        from intel_extension_for_pytorch.nn.modules import WeightOnlyQuantizedLinear
         assert isinstance(woq_model.model.layers[0].self_attn.o_proj, WeightOnlyQuantizedLinear), "quantizaion failed."
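For context, the removed in-function import is redundant: the assert that follows still references WeightOnlyQuantizedLinear after the removal, so the name is presumably already imported at module level in the test file. Below is a minimal, self-contained sketch of the test pattern this hunk sits in, assuming neural-compressor's transformers-like API; the RtnConfig construction and the import locations are assumptions, and only the model load and the assert come from the diff itself:

    from intel_extension_for_pytorch.nn.modules import WeightOnlyQuantizedLinear
    from neural_compressor.transformers import AutoModelForCausalLM, RtnConfig

    # Hypothetical weight-only quantization config; the actual test builds
    # woq_config earlier in test_vlm, outside this hunk.
    woq_config = RtnConfig(bits=4)

    # Passing quantization_config quantizes the model at load time.
    model_name = "microsoft/Phi-3-vision-128k-instruct"
    woq_model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=woq_config,
        attn_implementation="eager",
    )

    # Weight-only quantization swaps nn.Linear modules for ipex's
    # WeightOnlyQuantizedLinear, so checking a layer's module type
    # verifies that the conversion actually happened.
    assert isinstance(
        woq_model.model.layers[0].self_attn.o_proj,
        WeightOnlyQuantizedLinear,
    ), "quantization failed."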
