|
1 | 1 | {
|
2 | 2 | "cells": [
|
3 | 3 | {
|
| 4 | + "attachments": {}, |
4 | 5 | "cell_type": "markdown",
|
5 | 6 | "id": "38c66e13",
|
6 | 7 | "metadata": {
|
|
9 | 10 | "source": [
|
10 | 11 | "# Convert a PaddlePaddle Model to OpenVINO™ IR\n",
|
11 | 12 | "\n",
|
12 |
| - "This notebook shows how to convert a MobileNetV3 model from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), pretrained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on a sample image using [OpenVINO Runtime](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) model with the IR model. \n", |
| 13 | + "This notebook shows how to convert a MobileNetV3 model from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), pre-trained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on a sample image, using [OpenVINO Runtime](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) model with the IR model.\n", |
13 | 14 | "\n",
|
14 | 15 | "Source of the [model](https://www.paddlepaddle.org.cn/hubdetail?name=mobilenet_v3_large_imagenet_ssld&en_category=ImageClassification)."
|
15 | 16 | ]
|
|
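To make the inference-and-comparison flow described in the cell above concrete, here is a minimal OpenVINO Runtime sketch. The IR file name `mobilenet_v3.xml` and the dummy input array are assumptions for illustration only; the notebook builds the real IR and input in later cells.

```python
# Minimal sketch of inference with OpenVINO Runtime (file name and input are assumptions).
import numpy as np
from openvino.runtime import Core

ie = Core()
model = ie.read_model(model="mobilenet_v3.xml")                  # assumed IR file name
compiled_model = ie.compile_model(model=model, device_name="CPU")
output_layer = compiled_model.output(0)

# A preprocessed NCHW float32 image would normally go here; zeros keep the sketch runnable.
input_image = np.zeros((1, 3, 224, 224), dtype=np.float32)
scores = compiled_model([input_image])[output_layer]             # shape (1, 1000)
print("Predicted ImageNet class index:", int(np.argmax(scores)))
```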
58 | 59 | ]
|
59 | 60 | },
|
60 | 61 | {
|
| 62 | + "attachments": {}, |
61 | 63 | "cell_type": "markdown",
|
62 | 64 | "id": "137ef187",
|
63 | 65 | "metadata": {},
|
|
94 | 96 | ]
|
95 | 97 | },
|
96 | 98 | {
|
| 99 | + "attachments": {}, |
97 | 100 | "cell_type": "markdown",
|
98 | 101 | "id": "f42abed1",
|
99 | 102 | "metadata": {},
|
100 | 103 | "source": [
|
101 | 104 | "## Show Inference on PaddlePaddle Model\n",
|
102 | 105 | "\n",
|
103 |
| - "In the next cell, we load the model, load and display an image, do inference on that image, and then show the top 3 prediction results." |
| 106 | + "In the next cell, we load the model, load and display an image, do inference on that image, and then show the top three prediction results." |
104 | 107 | ]
|
105 | 108 | },
|
106 | 109 | {
|
|
121 | 124 | ]
|
122 | 125 | },
|
123 | 126 | {
|
| 127 | + "attachments": {}, |
124 | 128 | "cell_type": "markdown",
|
125 | 129 | "id": "510082f7",
|
126 | 130 | "metadata": {},
|
127 | 131 | "source": [
|
128 |
| - "`classifier.predict()` takes an image file name, reads the image, preprocesses the input, then returns the class labels and scores of the image. Preprocessing the image is done behind the scenes. The classification model returns an array with floating point values for each of the 1000 ImageNet classes. The higher the value, the more confident the network is that the class number corresponding to that value (the index of that value in the network output array) is the class number for the image. \n", |
| 132 | + "`classifier.predict()` takes an image file name, reads the image, preprocesses the input, then returns the class labels and scores of the image. Preprocessing the image is done behind the scenes. The classification model returns an array with floating point values for each of the 1000 ImageNet classes. The higher the value, the more confident the network is that the class number corresponding to that value (the index of that value in the network output array) is the class number for the image.\n", |
129 | 133 | "\n",
|
130 | 134 | "To see PaddlePaddle's implementation for the classification function and for loading and preprocessing data, uncomment the next two cells."
|
131 | 135 | ]
|
|
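Because the raw network output is just a 1000-element score vector indexed by class number, extracting the top three predictions is a small NumPy exercise. The sketch below uses a random vector as a stand-in for real model output, purely to illustrate the indexing described in the cell above.

```python
# Illustration only: turn a 1000-class score vector into top-3 (class index, score) pairs.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000).astype(np.float32)        # stand-in for the model output

top3_indices = np.argsort(scores)[::-1][:3]         # indices of the three highest scores
for class_id in top3_indices:
    print(f"class {class_id}: score {scores[class_id]:.4f}")
```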
151 | 155 | ]
|
152 | 156 | },
|
153 | 157 | {
|
| 158 | + "attachments": {}, |
154 | 159 | "cell_type": "markdown",
|
155 | 160 | "id": "ec080a4d",
|
156 | 161 | "metadata": {},
|
157 | 162 | "source": [
|
158 |
| - "The `classifier.get_config()` module shows the preprocessing configuration for the model. It should show that images are normalized, resized and cropped, and that the BGR image is converted to RGB before propagating it through the network. In the next cell, we get the `classifier.predictror.preprocess_ops` property that returns list of preprocessing operations to do inference on the OpenVINO IR model using the same method. " |
| 163 | + "The `classifier.get_config()` module shows the preprocessing configuration for the model. It should show that images are normalized, resized and cropped, and that the BGR image is converted to RGB before propagating it through the network. In the next cell, we get the `classifier.predictror.preprocess_ops` property that returns list of preprocessing operations to do inference on the OpenVINO IR model using the same method." |
159 | 164 | ]
|
160 | 165 | },
|
161 | 166 | {
|
|
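For orientation, a preprocessing function along the following lines reproduces the steps that the configuration describes (resize, center crop, BGR-to-RGB conversion, normalization). The 256/224 sizes and the mean/std constants are the common ImageNet defaults, assumed here for illustration; the values reported by `classifier.get_config()` and the `preprocess_ops` list are authoritative.

```python
# Hedged sketch of ImageNet-style preprocessing (sizes and constants are assumed defaults).
import cv2
import numpy as np

def preprocess_image_sketch(path: str) -> np.ndarray:
    image = cv2.imread(path)                          # OpenCV loads images as BGR
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)    # convert to RGB, as the config describes

    # Resize the shorter side to 256, then center-crop to 224x224.
    h, w = image.shape[:2]
    scale = 256 / min(h, w)
    image = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    h, w = image.shape[:2]
    top, left = (h - 224) // 2, (w - 224) // 2
    image = image[top:top + 224, left:left + 224]

    # Normalize with the usual ImageNet mean/std and switch to NCHW layout.
    image = image.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    image = (image - mean) / std
    return np.expand_dims(image.transpose(2, 0, 1), 0)  # shape (1, 3, 224, 224)
```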
175 | 180 | ]
|
176 | 181 | },
|
177 | 182 | {
|
| 183 | + "attachments": {}, |
178 | 184 | "cell_type": "markdown",
|
179 | 185 | "id": "f5e47e1f",
|
180 | 186 | "metadata": {},
|
181 | 187 | "source": [
|
182 |
| - "It is useful to show the output of the `process_image()` function, to see the effect of cropping and resizing. Because of the normalization, the colors will look strange, and matplotlib will warn about clipping values. " |
| 188 | + "It is useful to show the output of the `process_image()` function, to see the effect of cropping and resizing. Because of the normalization, the colors will look strange, and matplotlib will warn about clipping values." |
183 | 189 | ]
|
184 | 190 | },
|
185 | 191 | {
|
|
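As a small illustration of the effect described in the cell above, displaying mean/std-normalized data with matplotlib produces odd colors and a clipping warning. The array below is random data standing in for the `process_image()` output, used only to keep the sketch self-contained.

```python
# Illustration: what displaying a normalized (mean/std-scaled) image looks like in matplotlib.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
normalized = rng.normal(0.0, 1.0, size=(224, 224, 3)).astype(np.float32)  # stand-in for process_image output

# imshow expects float RGB data in [0, 1]; values outside that range trigger the clipping
# warning and give the "strange" colors mentioned above.
plt.imshow(normalized)
plt.title("Normalized image (clipped by matplotlib)")
plt.axis("off")
plt.show()
```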
197 | 203 | ]
|
198 | 204 | },
|
199 | 205 | {
|
| 206 | + "attachments": {}, |
200 | 207 | "cell_type": "markdown",
|
201 | 208 | "id": "597bbd42-7706-4b96-a6e7-75cc391d00f4",
|
202 | 209 | "metadata": {},
|
|
221 | 228 | ]
|
222 | 229 | },
|
223 | 230 | {
|
| 231 | + "attachments": {}, |
224 | 232 | "cell_type": "markdown",
|
225 | 233 | "id": "205deec3",
|
226 | 234 | "metadata": {},
|
227 | 235 | "source": [
|
228 | 236 | "## Convert the Model to OpenVINO IR Format\n",
|
229 | 237 | "\n",
|
230 |
| - "Call the OpenVINO Model Optimizer tool to convert the PaddlePaddle model to OpenVINO IR, with FP32 precision. The models are saved to the current directory. We can add the mean values to the model with `--mean_values` and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. However, to get the exact same output as the PaddlePaddle model, it is necessary to preprocess in the image in the same way. For this tutorial, we therefore do not add the mean and scale values to the model, and we use the `process_image` function, as described in the previous section, to ensure that both the IR and the PaddlePaddle model use the same preprocessing methods. We do show how to get the mean and scale values of the PaddleGAN model, so you can add them to the Model Optimizer command if you want. See the [PyTorch/ONNX to OpenVINO](../102-pytorch-onnx-to-openvino/102-pytorch-onnx-to-openvino.ipynb) notebook for a notebook where these options are used.\n", |
| 238 | + "Call the OpenVINO Model Optimizer tool to convert the PaddlePaddle model to OpenVINO IR, with FP32 precision. The models are saved to the current directory. You can add the mean values to the model with `--mean_values` and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. However, to get the exact same output as the PaddlePaddle model, it is necessary to preprocess in the image in the same way. Therefore, for this tutorial, you do not add the mean and scale values to the model, and you use the `process_image` function, as described in the previous section, to ensure that both the IR and the PaddlePaddle model use the same preprocessing methods. It is explained how to get the mean and scale values of the PaddleGAN model, so you can add them to the Model Optimizer command if you want. See the [PyTorch/ONNX to OpenVINO](../102-pytorch-onnx-to-openvino/102-pytorch-onnx-to-openvino.ipynb) notebook, where these options are used.\n", |
231 | 239 | "\n",
|
232 | 240 | "Run `! mo --help` in a code cell to show an overview of command line options for Model Optimizer. See the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer.\n",
|
233 | 241 | "\n",
|
|
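As a concrete, hypothetical example of such a conversion call, the sketch below builds a Model Optimizer command for an exported PaddlePaddle model. The `.pdmodel` path, output directory, and input shape are assumptions, and the commented-out mean/scale values are the standard ImageNet constants, shown only to illustrate the `--mean_values`/`--scale_values` options.

```python
# Sketch of the Model Optimizer call (the .pdmodel path, shape and output dir are placeholders).
model_path = "mobilenet_v3_large/inference.pdmodel"   # assumed exported PaddlePaddle model file
output_dir = "model"

mo_command = (
    f"mo --input_model {model_path} "
    f"--input_shape [1,3,224,224] "
    f"--output_dir {output_dir}"
    # Optionally fold preprocessing into the IR instead of using process_image():
    # "--mean_values [123.675,116.28,103.53] --scale_values [58.395,57.12,57.375]"
)
print(mo_command)
# In the notebook, the command would be executed with `! {mo_command}` in a code cell.
```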
295 | 303 | ]
|
296 | 304 | },
|
297 | 305 | {
|
| 306 | + "attachments": {}, |
298 | 307 | "cell_type": "markdown",
|
299 | 308 | "id": "7d249c27",
|
300 | 309 | "metadata": {},
|
|