[Keras 3 OpenVINO backend] Support gaussian_blur operation #22668

daehyun99 wants to merge 6 commits into keras-team:master

Conversation
Code Review
This pull request removes several gaussian_blur tests from the OpenVINO backend's exclusion list. However, the gaussian_blur operation has not yet been implemented in the backend, which will cause the tests to fail with a NotImplementedError. You should include the implementation of the operation in this PR to avoid breaking the CI.
Codecov Report ❌ Patch coverage is

Additional details and impacted files

```diff
@@            Coverage Diff            @@
##           master  #22668      +/-  ##
===========================================
- Coverage   83.15%  68.81%   -14.35%
===========================================
  Files         596     596
  Lines       68858   69030     +172
  Branches    10774   10795      +21
===========================================
- Hits        57259   47502    -9757
- Misses       8783   18973   +10190
+ Partials     2816    2555     -261
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
- Feat: OpenVINO backend logic for `gaussian_blur`
- Fix: Temporary workaround, since the OpenVINO backend does not fully support `bfloat16`
/gemini review
Code Review
This pull request implements the gaussian_blur operation for the OpenVINO backend and enables its corresponding tests. Feedback indicates that the global aliasing of bfloat16 to float32 in the common utilities is a breaking change and should be moved to backend-specific code. The gaussian_blur implementation requires several fixes: ensuring floating-point precision during kernel generation, correcting the 2D kernel construction for non-square shapes, and resolving type mismatches where tensors are passed to OpenVINO operations expecting Python scalars or lists. Additionally, the data_format should be standardized using existing backend utilities.
```python
    str: "string",
    # special case for string value
    "int": "int64" if config.backend() == "tensorflow" else "int32",
    "bfloat16": "float32",
```
This change globally aliases the "bfloat16" dtype string to "float32" in the common Keras utilities. This is a breaking change that affects all backends, preventing users from actually using bfloat16 even when supported (e.g., in JAX or TensorFlow). If the OpenVINO backend has specific limitations with bfloat16, those should be handled within the OpenVINO-specific implementation (e.g., in cast or convert_to_tensor), not in the shared common/dtypes.py file.
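The backend-specific handling the review asks for could look like the following sketch. The helper name `resolve_openvino_dtype` and the alias dict are hypothetical, introduced only for illustration; the real fix would live inside the OpenVINO backend's `cast` or `convert_to_tensor`:

```python
# Hypothetical sketch: alias unsupported dtypes inside the OpenVINO
# backend only, instead of mutating the shared common dtype map.
OPENVINO_DTYPE_ALIASES = {"bfloat16": "float32"}  # assumed helper dict

def resolve_openvino_dtype(dtype):
    """Map dtypes the OpenVINO backend cannot handle to a supported one."""
    dtype = str(dtype)
    return OPENVINO_DTYPE_ALIASES.get(dtype, dtype)

print(resolve_openvino_dtype("bfloat16"))  # → float32
print(resolve_openvino_dtype("float16"))   # → float16 (unaffected)
```

With this placement, JAX and TensorFlow users keep real `bfloat16` support, and only the OpenVINO backend falls back.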
```python
def _create_gaussian_kernel(kernel_size, sigma, ov_type):
    def _get_gaussian_kernel1d(size, sigma):
        x = (
            ov_opset.range(0, size, 1, output_type=ov_type).output(0)
```
The Gaussian kernel calculation should be performed in floating-point precision (e.g., f32) to avoid significant precision loss during the exp and normalization steps. If ov_type is an integral type (like u8), the range and subsequent arithmetic will produce incorrect results. The kernel should only be converted to the target ov_type after the full 2D kernel is computed and normalized.
```diff
-            ov_opset.range(0, size, 1, output_type=ov_type).output(0)
+            ov_opset.range(0, size, 1, output_type=Type.f32).output(0)
```
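To see why the floating-point requirement matters, here is a NumPy sketch of the same 1D kernel computation (NumPy stands in for the OpenVINO `range`/`exp`/`divide` graph). Computing the coordinates and the `exp` in `float32` keeps the normalization exact; doing it in an integral type like `u8` would truncate every intermediate:

```python
import numpy as np

def gaussian_kernel1d(size, sigma):
    # Centered coordinates, computed in float32 as the review suggests
    x = np.arange(size, dtype=np.float32) - (size - 1) / 2.0
    k = np.exp(-(x**2) / (2.0 * sigma**2))
    return k / k.sum()  # normalize so the kernel sums to 1

k = gaussian_kernel1d(5, 1.0)
print(k.argmax())  # → 2 (peak at the center tap)
```

Only after this normalized `float32` kernel is built should it be converted to the target `ov_type`.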
```python
    def _get_gaussian_kernel2d(size, sigma):
        kernel1d_x = _get_gaussian_kernel1d(size[0], sigma[0])
        kernel1d_y = _get_gaussian_kernel1d(size[1], sigma[1])

        kernel1d_x = ov_opset.reshape(
            kernel1d_x,
            ov_opset.constant([1, int(size[1])], Type.i32).output(0),
            False,
        ).output(0)

        kernel1d_y = ov_opset.reshape(
            kernel1d_y,
            ov_opset.constant([int(size[0]), 1], Type.i32).output(0),
            False,
        ).output(0)
        return ov_opset.multiply(kernel1d_y, kernel1d_x).output(0)
```
The 2D kernel construction logic is incorrect for non-square kernels and contains problematic `int()` calls on tensors.

- `kernel1d_x` is generated using `size[0]` (height) but then reshaped using `size[1]` (width) at line 754. This will fail if height != width.
- Calling `int()` on an `OpenVINOKerasTensor` (like `size[0]`) triggers `convert_to_numpy()`, which compiles and executes a sub-graph during model construction. This is extremely inefficient and will fail for dynamic shapes.

You should use the original `kernel_size` tuple for static dimensions, or use `-1` in `reshape` to avoid Python-side evaluation.
```diff
     def _get_gaussian_kernel2d(size, sigma):
-        kernel1d_x = _get_gaussian_kernel1d(size[0], sigma[0])
-        kernel1d_y = _get_gaussian_kernel1d(size[1], sigma[1])
-        kernel1d_x = ov_opset.reshape(
-            kernel1d_x,
-            ov_opset.constant([1, int(size[1])], Type.i32).output(0),
-            False,
-        ).output(0)
-        kernel1d_y = ov_opset.reshape(
-            kernel1d_y,
-            ov_opset.constant([int(size[0]), 1], Type.i32).output(0),
-            False,
-        ).output(0)
-        return ov_opset.multiply(kernel1d_y, kernel1d_x).output(0)
+        kernel1d_h = _get_gaussian_kernel1d(size[0], sigma[0])
+        kernel1d_w = _get_gaussian_kernel1d(size[1], sigma[1])
+        kernel1d_h = ov_opset.reshape(
+            kernel1d_h,
+            ov_opset.constant([-1, 1], Type.i32).output(0),
+            False,
+        ).output(0)
+        kernel1d_w = ov_opset.reshape(
+            kernel1d_w,
+            ov_opset.constant([1, -1], Type.i32).output(0),
+            False,
+        ).output(0)
+        return ov_opset.multiply(kernel1d_h, kernel1d_w).output(0)
```
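The corrected construction is a separable outer product: reshape the height kernel to `(H, 1)` and the width kernel to `(1, W)`, then broadcast-multiply to get `(H, W)`. A NumPy sketch (standing in for the OpenVINO `reshape`/`multiply` graph) shows this handles a non-square kernel correctly:

```python
import numpy as np

def gaussian_kernel1d(size, sigma):
    x = np.arange(size, dtype=np.float32) - (size - 1) / 2.0
    k = np.exp(-(x**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_kernel2d(size, sigma):
    # Outer product of height and width kernels: (H, 1) * (1, W) -> (H, W)
    kh = gaussian_kernel1d(size[0], sigma[0]).reshape(-1, 1)
    kw = gaussian_kernel1d(size[1], sigma[1]).reshape(1, -1)
    return kh * kw

k2d = gaussian_kernel2d((3, 5), (1.0, 2.0))
print(k2d.shape)  # → (3, 5)
```

Because each 1D kernel sums to 1, their outer product also sums to 1, so no extra normalization step is needed.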
```python
            ov_opset.constant(
                [1, 1, kernel_size[0], kernel_size[1]], Type.i32
            ).output(0),
            False,
        ).output(0)
```
ov_opset.constant expects a numpy array or a list of scalars; it does not accept OpenVINOKerasTensor objects. Since kernel_size was converted to a tensor at line 772, kernel_size[0] is a tensor, and this call will fail. You should use the original kernel_size tuple values here.
```diff
             ov_opset.constant(
-                [1, 1, kernel_size[0], kernel_size[1]], Type.i32
+                [1, 1, int(kernel_size_tuple[0]), int(kernel_size_tuple[1])], Type.i32
             ).output(0),
             False,
         ).output(0)
```
```python
            [(kernel_size[0] - 1) // 2, (kernel_size[1] - 1) // 2],
            [
                kernel_size[0] - 1 - (kernel_size[0] - 1) // 2,
                kernel_size[1] - 1 - (kernel_size[1] - 1) // 2,
            ],
```
The pads_begin and pads_end arguments for ov_opset.group_convolution must be a List[int]. Because kernel_size is a tensor, the expression (kernel_size[0] - 1) // 2 results in an OpenVINOKerasTensor, which will cause a TypeError. Use the original Python integers from the kernel_size input for these calculations.
```diff
-            [(kernel_size[0] - 1) // 2, (kernel_size[1] - 1) // 2],
-            [
-                kernel_size[0] - 1 - (kernel_size[0] - 1) // 2,
-                kernel_size[1] - 1 - (kernel_size[1] - 1) // 2,
-            ],
+            [(int(kernel_size_tuple[0]) - 1) // 2, (int(kernel_size_tuple[1]) - 1) // 2],
+            [
+                int(kernel_size_tuple[0]) - 1 - (int(kernel_size_tuple[0]) - 1) // 2,
+                int(kernel_size_tuple[1]) - 1 - (int(kernel_size_tuple[1]) - 1) // 2,
+            ],
```
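These `pads_begin`/`pads_end` expressions are the usual "same" padding split computed from plain Python ints, which is what `ov_opset.group_convolution` expects. A minimal sketch (the helper name `same_padding` is assumed for illustration):

```python
def same_padding(kernel_size):
    """Compute (pads_begin, pads_end) as List[int] for 'same' convolution."""
    kh, kw = int(kernel_size[0]), int(kernel_size[1])
    pads_begin = [(kh - 1) // 2, (kw - 1) // 2]
    # The remainder goes to pads_end, so even kernels pad asymmetrically
    pads_end = [kh - 1 - pads_begin[0], kw - 1 - pads_begin[1]]
    return pads_begin, pads_end

print(same_padding((3, 5)))  # → ([1, 2], [1, 2])
print(same_padding((4, 4)))  # → ([1, 1], [2, 2]) — asymmetric for even sizes
```

Since the inputs are Python ints, no sub-graph is compiled and the result is a plain `List[int]`, avoiding the `TypeError` described above.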
```diff
@@ -726,9 +726,119 @@ def is_valid(index, size_node):
 def gaussian_blur(
     images, kernel_size=(3, 3), sigma=(1.0, 1.0), data_format=None
```
Close this PR
Description
Contributor Agreement
Please check all boxes below before submitting your PR for review: