Commit 4dd2e44

fix flash attn check
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Parent: cbc232b

1 file changed, 1 insertion(+), 1 deletion(-)

optimum/exporters/ipex/modeling_utils.py
@@ -628,7 +628,7 @@ def postprocess_attention_output(self, attn_output):
         return attn_output

     # Maybe removed after torch 2.6 released
-    def has_flash_attn(query):
+    def has_flash_attn(self, query):
         if query.device.type == "cpu":
             return is_torch_version(">", "2.4.99")
         elif query.device.type == "xpu":
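The change itself is a one-line fix: has_flash_attn is defined inside an attention class but was missing the self parameter, so calling it as a bound method (e.g. self.has_flash_attn(query)) would fail with a TypeError about too many positional arguments. Below is a minimal, self-contained sketch of the idea; the class name, the call site, and the version-check stand-in are assumptions for illustration only, and the XPU branch (truncated in this diff) is omitted.

import torch
from packaging import version


def is_torch_version(operation: str, target: str) -> bool:
    # Stand-in for the version-comparison helper used in modeling_utils.py;
    # only the ">" comparison that appears in the diff is implemented here.
    assert operation == ">"
    return version.parse(torch.__version__.split("+")[0]) > version.parse(target)


class IPEXAttentionSketch:
    # Hypothetical minimal class: the real attention class lives in
    # optimum/exporters/ipex/modeling_utils.py and is not shown in this diff.

    # Maybe removed after torch 2.6 released
    def has_flash_attn(self, query):
        # Pre-fix, the signature was has_flash_attn(query); calling it as
        # self.has_flash_attn(query) then raises
        # "TypeError: has_flash_attn() takes 1 positional argument but 2 were given",
        # because attribute access on the instance yields a bound method.
        if query.device.type == "cpu":
            return is_torch_version(">", "2.4.99")
        # The XPU branch body is truncated in the diff and therefore omitted here.
        return False


attn = IPEXAttentionSketch()
query = torch.randn(1, 8, 16, 64)   # CPU tensor, so query.device.type == "cpu"
print(attn.has_flash_attn(query))   # True on torch > 2.4.99 (i.e. 2.5+)

Keeping has_flash_attn as an instance method matches call sites of the form self.has_flash_attn(query); marking it as a @staticmethod would have been an alternative way to resolve the same mismatch.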
