Commit c8a6528

Update optimum/gptq/quantizer.py
Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
1 parent c446522 commit c8a6528

File tree

1 file changed (+1, -1 lines)


optimum/gptq/quantizer.py (+1, -1)

@@ -702,7 +702,7 @@ def tmp(_, input, output):
         model = self.post_init_model(model)

         torch.cuda.empty_cache()
-        if hasattr(torch, "xpu"):
+        if hasattr(torch, "xpu") and torch.xpu.is_available():
             torch.xpu.empty_cache()
         return model
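The one-line change guards the XPU cache flush with `torch.xpu.is_available()`: on PyTorch builds that ship the `torch.xpu` module without a working Intel GPU runtime, `hasattr(torch, "xpu")` alone passes, so the extra check avoids calling `empty_cache()` when no XPU device is usable. A minimal sketch of the resulting pattern (the helper name is hypothetical, not part of Optimum):

```python
import torch


def empty_device_caches() -> None:
    """Release cached allocator memory on whichever accelerator is present.

    `torch.cuda.empty_cache()` is a no-op when CUDA was never initialized,
    so it is safe to call unconditionally. For XPU, `hasattr` guards older
    PyTorch versions that lack the module, and `is_available()` guards
    builds that expose `torch.xpu` without a usable Intel GPU.
    """
    torch.cuda.empty_cache()
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
```

This mirrors the patched condition: both checks must pass before `torch.xpu.empty_cache()` runs, so the helper degrades to a no-op on CPU-only machines.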