Refine codegen #1424

Merged — 1 commit merged into main on Mar 5, 2025
Conversation

guangyey (Contributor) commented Mar 3, 2025
Motivation

Following the comments here, this PR intends to refine the codegen-related code and remove redundant code.

@guangyey guangyey force-pushed the guangyey/refine branch 5 times, most recently from 803902e to 805d96f Compare March 3, 2025 10:06
@guangyey guangyey requested a review from EikanWang March 3, 2025 10:11
guangyey (Contributor, Author) commented Mar 3, 2025

@EikanWang This PR only intends to refine the codegen and remove redundant code. I will open a dedicated PR to install the header files once this PR lands.

Comment on lines 18 to 22
  if(WIN32)
    set(FILE_DISPLAY_CMD type)
    # replace forward slash with back slash for compatibility with 'type' command on Windows
-   string(REPLACE "/" "\\" RegisterXPU_PATH_BACKSLASH "${RegisterXPU_PATH}")
-   string(REPLACE "/" "\\" XPUFallback_PATH_BACKSLASH "${XPUFallback_PATH}")
-   set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_PATH_BACKSLASH} ">>" ${RegisterXPU_PATH_BACKSLASH})
+   string(REPLACE "/" "\\" RegisterXPU_GENERATED_BACKSLASH "${RegisterXPU_GENERATED}")
+   string(REPLACE "/" "\\" XPUFallback_TEMPLATE_BACKSLASH "${XPUFallback_TEMPLATE}")
+   set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE_BACKSLASH} ">>" ${RegisterXPU_GENERATED_BACKSLASH})
  else()
    set(FILE_DISPLAY_CMD cat)
-   set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_PATH} ">>" ${RegisterXPU_PATH})
+   set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE} ">>" ${RegisterXPU_GENERATED})
  endif()
Contributor (review comment) — suggested replacing the manual backslash replacement with file(TO_NATIVE_PATH):
  if(WIN32)
    set(FILE_DISPLAY_CMD type)
  else()
    set(FILE_DISPLAY_CMD cat)
  endif()
  file(TO_NATIVE_PATH "${RegisterXPU_GENERATED}" RegisterXPU_GENERATED_FILE)
  file(TO_NATIVE_PATH "${XPUFallback_TEMPLATE_BACKSLASH}" XPUFallback_TEMPLATE_FILE)
  set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE} ">>" ${RegisterXPU_GENERATED})
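
Putting the suggestion together, the whole branch could collapse to something like the sketch below. This is only an illustration under the assumption that the native-path variables are meant to feed REGISTER_FALLBACK_CMD; it is not the code that was merged, and the *_FILE variable names are placeholders.

  # Pick the platform's file-display command: 'type' on Windows, 'cat' elsewhere.
  if(WIN32)
    set(FILE_DISPLAY_CMD type)
  else()
    set(FILE_DISPLAY_CMD cat)
  endif()

  # file(TO_NATIVE_PATH) converts the path separators for the host platform,
  # replacing the manual string(REPLACE "/" "\\" ...) calls used on Windows.
  file(TO_NATIVE_PATH "${XPUFallback_TEMPLATE}" XPUFallback_TEMPLATE_FILE)
  file(TO_NATIVE_PATH "${RegisterXPU_GENERATED}" RegisterXPU_GENERATED_FILE)

  # Append the fallback template to the generated registration file.
  set(REGISTER_FALLBACK_CMD
      ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE_FILE} ">>" ${RegisterXPU_GENERATED_FILE})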

EikanWang (Contributor) commented:

By the way, @guangyey, we do not actually want to maintain the Python scripts that post-process the generated files in torch-xpu-ops. It would be better to refine torchgen in stock PyTorch instead.

@guangyey guangyey force-pushed the guangyey/refine branch 2 times, most recently from 88c24a6 to 833e1f5 Compare March 5, 2025 05:50
guangyey (Contributor, Author) commented Mar 5, 2025

The CI failures below are unrelated to this PR:

test_transformers_xpu.py::TestTransformersXPU::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim_2_bool_xpu
test_transformers_xpu.py::TestTransformersXPU::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim_2_bool_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_12_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_12_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_12_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_12_xpu
test_linalg_xpu.py::TestLinalgXPU::test_gemm_bias_offline_tunableop_xpu_bfloat16
test_meta_xpu.py::TestMetaXPU::test_dispatch_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_bfloat16
test_meta_xpu.py::TestMetaXPU::test_dispatch_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float16
test_meta_xpu.py::TestMetaXPU::test_dispatch_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float32
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_scaled_dot_product_attention_xpu_float32
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_bfloat16
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float16
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float32

@guangyey guangyey added this pull request to the merge queue Mar 5, 2025
Merged via the queue into main with commit b4701a1 Mar 5, 2025
8 of 9 checks passed
@guangyey guangyey deleted the guangyey/refine branch March 5, 2025 08:27