# Collection of Custom Operations using OpenVINO Extensibility Mechanism

This module provides a guide and implementations of several custom operations for the Intel OpenVINO runtime using its [Extensibility Mechanism](https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_Intro.html).

There are several use cases where OpenVINO custom operations are applicable:

* There is an ONNX model which contains an operation not supported by OpenVINO.
* You have a PyTorch model, which can be converted to ONNX, with an operation not supported by OpenVINO.
* You want to replace a subgraph of an ONNX model with a single custom operation supported by OpenVINO.

More specifically, here we implement custom OpenVINO operations that add support for the following native PyTorch operation:

* [torch.fft](examples/fft)

And other custom operations introduced by third-party frameworks:

* [calculate_grid](/examples/calculate_grid) and [sparse_conv](/examples/sparse_conv) from [Open3D](https://github.com/isl-org/Open3D)
* [complex_mul](/examples/complex_mul) from [DIRECT](https://github.com/NKI-AI/direct)
You can find more information about how to create and use OpenVINO Extensions to map custom operations from a framework model representation to the OpenVINO representation [here](https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_Frontend_Extensions.html).

## Build custom OpenVINO operation extension library

The C++ code implementing the custom operations is in the `user_ie_extensions` directory. You'll have to build an "extension library" from this code so that it can be loaded at runtime. The steps below describe the build process:

1. Install [OpenVINO Runtime for C++](https://docs.openvino.ai/latest/openvino_docs_install_guides_install_dev_tools.html#for-c-developers).

2. Build the library:

```bash
cd user_ie_extensions
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && cmake --build . --parallel 4
```

If you need to build only some of the operations, specify them with the `-DCUSTOM_OPERATIONS` option:
```bash
cmake .. -DCMAKE_BUILD_TYPE=Release -DCUSTOM_OPERATIONS=complex_mul\;fft
```

- Please note that an [OpenCV](https://opencv.org/) installation is required to build an extension for the [fft](examples/fft) operation. The other extensions can still be built without OpenCV.

You can also build the extension library [while building OpenVINO](../../README.md).

## Load and use custom OpenVINO operation extension library

You can use the custom OpenVINO operations by loading the extension library into the OpenVINO `Core` object at runtime and then loading the model from the ONNX file with the `read_model()` API. Here's how to do that in Python:
```python
from openvino.runtime import Core

# Create Core and register user extension
core = Core()
core.add_extension('/path/to/libuser_ov_extensions.so')

# Load model from .onnx file directly
model = core.read_model('model.onnx')
compiled_model = core.compile_model(model, 'CPU')
```
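
Once compiled, the model runs like any other OpenVINO model. Below is a minimal inference sketch continuing the snippet above; the input shape is an assumption for illustration and should match whatever your model actually expects:

```python
import numpy as np

# Example input; the shape (1, 3, 224, 224) is a placeholder,
# use the real input shape of your model
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference and fetch the first output
results = compiled_model([input_tensor])
output = results[compiled_model.output(0)]
print(output.shape)
```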

You can also convert the model to OpenVINO IR with Model Optimizer; use the extra `--extension` flag to specify the path to the extension library:

```bash
mo --input_model model.onnx --extension /path/to/libuser_ov_extensions.so
```
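
The extension library must also be registered when you later read the generated IR, because the implementation of the custom operations lives in the library rather than in the IR files. A minimal sketch, assuming the IR was saved as `model.xml`:

```python
from openvino.runtime import Core

core = Core()
# Register the extension before reading the IR; the custom operation
# implementations live in the library, not in the IR files
core.add_extension('/path/to/libuser_ov_extensions.so')

# 'model.xml' is the IR produced by the Model Optimizer command above
model = core.read_model('model.xml')
compiled_model = core.compile_model(model, 'CPU')
```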