Refine codegen #1424
Conversation
Force-pushed from 803902e to 805d96f.
@EikanWang This PR only intends to refine the codegen and remove redundant code. I will open a dedicated PR to install the header files once this PR lands.
Force-pushed from bb7ce08 to 53dbfaf.
cmake/Codegen.cmake (Outdated)
```diff
 if(WIN32)
   set(FILE_DISPLAY_CMD type)
   # replace forward slash with back slash for compatibility with 'type' command on Windows
-  string(REPLACE "/" "\\" RegisterXPU_PATH_BACKSLASH "${RegisterXPU_PATH}")
-  string(REPLACE "/" "\\" XPUFallback_PATH_BACKSLASH "${XPUFallback_PATH}")
-  set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_PATH_BACKSLASH} ">>" ${RegisterXPU_PATH_BACKSLASH})
+  string(REPLACE "/" "\\" RegisterXPU_GENERATED_BACKSLASH "${RegisterXPU_GENERATED}")
+  string(REPLACE "/" "\\" XPUFallback_TEMPLATE_BACKSLASH "${XPUFallback_TEMPLATE}")
+  set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE_BACKSLASH} ">>" ${RegisterXPU_GENERATED_BACKSLASH})
 else()
   set(FILE_DISPLAY_CMD cat)
-  set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_PATH} ">>" ${RegisterXPU_PATH})
+  set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE} ">>" ${RegisterXPU_GENERATED})
 endif()
```
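For context, REGISTER_FALLBACK_CMD simply appends the XPU fallback template onto the generated RegisterXPU source, using `type` on Windows (which expects backslash paths, hence the string(REPLACE) calls) and `cat` elsewhere. Purely as an illustration of that intent, and not what this PR does, the same append could be expressed with CMake's own file() commands and no platform branch:

```cmake
# Illustrative sketch only (not part of this PR): append the fallback template
# onto the generated registration source with CMake itself, so no shell
# `type`/`cat` command and no backslash conversion are needed.
file(READ "${XPUFallback_TEMPLATE}" _xpu_fallback_contents)
file(APPEND "${RegisterXPU_GENERATED}" "${_xpu_fallback_contents}")
```

Whether such a configure-time append would even be applicable depends on when RegisterXPU_GENERATED is produced, which this excerpt does not show; the PR keeps the build-time shell command.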
Suggested simplification: let file(TO_NATIVE_PATH) handle the path conversion so the Windows branch only has to pick the display command:

```cmake
if(WIN32)
  set(FILE_DISPLAY_CMD type)
else()
  set(FILE_DISPLAY_CMD cat)
endif()
file(TO_NATIVE_PATH "${RegisterXPU_GENERATED}" RegisterXPU_GENERATED_FILE)
file(TO_NATIVE_PATH "${XPUFallback_TEMPLATE}" XPUFallback_TEMPLATE_FILE)
set(REGISTER_FALLBACK_CMD ${FILE_DISPLAY_CMD} ${XPUFallback_TEMPLATE_FILE} ">>" ${RegisterXPU_GENERATED_FILE})
```
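As a quick standalone sanity check of the file(TO_NATIVE_PATH) behavior the suggestion relies on (the path below is made up for the demo and is not from Codegen.cmake):

```cmake
# Standalone script (run with `cmake -P demo.cmake`): file(TO_NATIVE_PATH)
# converts "/" to "\" on Windows hosts and leaves the path unchanged elsewhere.
set(_demo_path "C:/build/aten/RegisterXPU.cpp")
file(TO_NATIVE_PATH "${_demo_path}" _demo_native)
message(STATUS "native path: ${_demo_native}")
# Windows: C:\build\aten\RegisterXPU.cpp
# others:  C:/build/aten/RegisterXPU.cpp
```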
By the way, @guangyey, we do not actually want to maintain Python scripts in torch-xpu-ops just to post-process the generated files. It would be better to refine torchgen in stock PyTorch instead.
Force-pushed from 88c24a6 to 833e1f5.
The failures are unrelated to this PR:
test_transformers_xpu.py::TestTransformersXPU::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim_2_bool_xpu
test_transformers_xpu.py::TestTransformersXPU::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim_2_bool_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_12_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_12_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_12_xpu
test_transformers_xpu.py::TestTransformersXPU::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_12_xpu
test_linalg_xpu.py::TestLinalgXPU::test_gemm_bias_offline_tunableop_xpu_bfloat16
test_meta_xpu.py::TestMetaXPU::test_dispatch_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_bfloat16
test_meta_xpu.py::TestMetaXPU::test_dispatch_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float16
test_meta_xpu.py::TestMetaXPU::test_dispatch_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float32
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_scaled_dot_product_attention_xpu_float32
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_bfloat16
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float16
test_meta_xpu.py::TestMetaXPU::test_dispatch_symbolic_meta_outplace_nn_functional_scaled_dot_product_attention_xpu_float32
Motivation
Following the comments here, this PR intends to refine the codegen-related code and remove redundant code.