
Resolve accuracy issue from fusing prelu #29754

Open
byungilm wants to merge 6 commits into master

Conversation

@byungilm byungilm commented Mar 26, 2025

Details:

  • Blocked fusing of prelu (relu_negative_slope) into convolution when batch size > 1

Tickets:

  • CVS-163143

Signed-off-by: Min, Byungil <byungil.min@intel.com>
@byungilm byungilm requested review from a team as code owners March 26, 2025 14:05
@byungilm byungilm changed the title from "Resolve acc fusing prelu" to "Resolve accuracy issue from fusing prelu" Mar 26, 2025
@github-actions github-actions bot added the category: GPU OpenVINO GPU plugin label Mar 26, 2025
@@ -742,6 +742,12 @@ void prepare_primitive_fusing::fuse_simple_primitives(program &p) {
// prelu fusion is not implemented in oneDNN3.1 (CVS-108233)
return;
}

if (activation_func == cldnn::activation_func::relu_negative_slope &&
    input.get_output_layout().batch() > 1 && input.is_type<convolution>()) {
Contributor commented:

With a dynamic layout, calling .batch() will throw an exception, so an additional check is required (see the sketch below).
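A minimal sketch of the suggested guard, assuming cldnn::layout exposes an is_dynamic() helper in this context (that name is an assumption; the other identifiers follow the surrounding diff):

    // Query the layout once. batch() on a dynamic layout throws, so the
    // dynamic-shape check must run first (short-circuit evaluation).
    auto out_layout = input.get_output_layout();
    if (activation_func == cldnn::activation_func::relu_negative_slope &&
        !out_layout.is_dynamic() && out_layout.batch() > 1 &&
        input.is_type<convolution>()) {
        return;  // skip conv+prelu fusing for multi-batch convolutions
    }

With this ordering, dynamic layouts never reach the batch() call, while static multi-batch convolutions still skip the prelu fusion.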

Signed-off-by: Min, Byungil <byungil.min@intel.com>
@@ -3786,7 +3803,7 @@ TEST_P(post_ops_optimizations_onednn_eltw_any_sum_eltw_linear, basic) {
data("out_hi", get_mem(get_single_element_layout(p), 127)),
data("eltwise_data", get_mem(get_output_layout(p))),
convolution("conv_prim", input_info("input"), "weights", "bias", p.groups, p.stride, p.dilation, p.pad, p.pad, format::is_grouped(get_weights_layout(p).format)),
activation("activation", input_info("conv_prim"), activation_func::relu_negative_slope),
activation("activation", input_info("conv_prim"), activation_func::relu),
Contributor commented:

Could you explain why you changed the activation function from relu_negative_slope to relu?

byungilm (author) replied:

Changed the activation because some of these cases hit the new (multi-batch conv + prelu) blocking condition. After checking the test, the expected_fused count for oneDNN had to be updated accordingly:

convolution_test_params{ CASE_MUL_BATCH_CONV_U8S8_1, 4, 4, 4 },
convolution_test_params{ CASE_MUL_BATCH_CONV_U8S8_2, 4, 4, 4 },
convolution_test_params{ CASE_MUL_BATCH_CONV_S8S8_1, 4, 4, 4 },
convolution_test_params{ CASE_MUL_BATCH_CONV_S8S8_2, 4, 4, 4 },
Contributor commented:

Is it necessary to add this many test cases? Wouldn't one be enough?

byungilm (author) replied:

Reduced the test cases.

Signed-off-by: Min, Byungil <byungil.min@intel.com>
#define CASE_MUL_BATCH_CONV_U8S8_1 { 32, 15, 4, 5 }, { 32, 30, 2, 3 }, { 30, 15, 3, 3 }, { 1, 1 }, { 0, 0 }, { 1, 1 }, 1, data_types::u8, format::bfyx, data_types::i8, format::bfyx, data_types::f32, format::bfyx
#define CASE_MUL_BATCH_CONV_U8S8_2 { 20, 15, 5, 5 }, { 20, 30, 3, 3 }, { 30, 15, 3, 3 }, { 1, 1 }, { 0, 0 }, { 1, 1 }, 1, data_types::u8, format::bs_fs_yx_bsv16_fsv16, data_types::i8, format::os_is_yx_osv16_isv16, data_types::f32, format::bfyx
#define CASE_MUL_BATCH_CONV_S8S8_1 { 32, 15, 4, 5 }, { 32, 30, 2, 3 }, { 30, 15, 3, 3 }, { 1, 1 }, { 0, 0 }, { 1, 1 }, 1, data_types::i8, format::bfyx, data_types::i8, format::bfyx, data_types::f32, format::bfyx
#define CASE_MUL_BATCH_CONV_S8S8_2 { 20, 15, 5, 5 }, { 20, 30, 3, 3 }, { 30, 15, 3, 3 }, { 1, 1 }, { 0, 0 }, { 1, 1 }, 1, data_types::i8, format::bs_fs_yx_bsv16_fsv16, data_types::i8, format::os_is_yx_osv16_isv16, data_types::f32, format::bfyx
Contributor commented:

Please remove the unused macros.

@isanghao isanghao added this pull request to the merge queue Mar 28, 2025
github-merge-queue bot pushed a commit that referenced this pull request Mar 28, 2025
### Details:
 - blocked fusing condition conv+prelu

### Tickets:
 - CVS-163143

---------

Signed-off-by: Min, Byungil <byungil.min@intel.com>
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 28, 2025
@isanghao isanghao added this pull request to the merge queue Mar 28, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 28, 2025
@isanghao isanghao added this pull request to the merge queue Mar 28, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 28, 2025
@byungilm byungilm added this pull request to the merge queue Mar 28, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 29, 2025