
Commit 1106e90

sgolebiewski-intel, Ryan Loney, adrianboguszewski, ryanloney, and paularamo authored
Changing domain in hyperlinks (openvinotoolkit#518)
* Changing domain in hyperlinks
* Changing domain in hyperlinks in md files
* Update 110-ct-segmentation-quantize.ipynb
* Update 112-pytorch-post-training-quantization-nncf.ipynb
* Update 210-ct-scan-live-inference.ipynb
* Update 302-pytorch-quantization-aware-training.ipynb
* Update 305-tensorflow-quantization-aware-training.ipynb
* Update 102-pytorch-onnx-to-openvino.ipynb
* Update 205-vision-background-removal.ipynb
* Update README.md
* Update 301-tensorflow-training-openvino-pot.ipynb
* Update 206-vision-paddlegan-anime.ipynb
* Update notebooks/301-tensorflow-training-openvino/README.md
* Update notebooks/301-tensorflow-training-openvino/301-tensorflow-training-openvino-pot.ipynb

Co-authored-by: Ryan Loney <ryanloney@gmail.com>
Co-authored-by: Ryan Loney <ryan.loney@intel.com>
Co-authored-by: Adrian Boguszewski <adrian.boguszewski@intel.com>
Co-authored-by: Paula Ramos <pjramg@gmail.com>
1 parent aa20c0d commit 1106e90

File tree

34 files changed, +57 -57 lines changed


notebooks/001-hello-world/001-hello-world.ipynb

+1 -1
@@ -9,7 +9,7 @@
 "\n",
 "A very basic introduction to OpenVINO that shows how to perform inference with an image classification model.\n",
 "\n",
-"We use a pre-trained [MobileNetV3 model](https://docs.openvinotoolkit.org/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). See the [TensorFlow to OpenVINO](../101-tensorflow-to-openvino/101-tensorflow-to-openvino.ipynb) tutorial to learn more about how OpenVINO IR model like this one is created."
+"We use a pre-trained [MobileNetV3 model](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). See the [TensorFlow to OpenVINO](../101-tensorflow-to-openvino/101-tensorflow-to-openvino.ipynb) tutorial to learn more about how OpenVINO IR model like this one is created."
 ]
 },
 {
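For orientation, the cell changed above introduces a notebook that runs classification inference on an OpenVINO IR model. A minimal sketch of that flow with the Inference Engine Python API used by these 2021-era notebooks (the IR filename and image path are placeholders, not taken from this commit):

    import cv2
    import numpy as np
    from openvino.inference_engine import IECore  # API used by the 2021.x notebooks

    ie = IECore()
    net = ie.read_network(model="v3-small_224_1.0_float.xml")  # placeholder IR path
    exec_net = ie.load_network(network=net, device_name="CPU")

    input_key = next(iter(exec_net.input_info))
    output_key = next(iter(exec_net.outputs))

    # The TF-converted MobileNetV3 takes NHWC input, so resizing and adding a batch
    # dimension is enough here; other models may need a transpose to NCHW.
    image = cv2.cvtColor(cv2.imread("coco.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
    input_image = np.expand_dims(cv2.resize(image, (224, 224)), 0)

    result = exec_net.infer(inputs={input_key: input_image})[output_key]
    print("Predicted class index:", int(np.argmax(result)))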

notebooks/003-hello-segmentation/003-hello-segmentation.ipynb

+1 -1
@@ -9,7 +9,7 @@
 "\n",
 "A very basic introduction to using segmentation models with OpenVINO.\n",
 "\n",
-"We use the pre-trained [road-segmentation-adas-0001](https://docs.openvinotoolkit.org/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark."
+"We use the pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark."
 ]
 },
 {
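The road-segmentation-adas-0001 model described above returns per-pixel scores for its four classes. A rough sketch of turning such an output into a colored class map (the output shape and colormap below are illustrative stand-ins, not values from the notebook):

    import numpy as np

    # Stand-in for the network output: (batch, classes, height, width) scores.
    result = np.random.rand(1, 4, 512, 896).astype(np.float32)

    # Pick the highest-scoring class per pixel: background, road, curb or mark.
    segmentation_mask = np.argmax(result, axis=1)  # shape (1, H, W)

    # Map class indices to RGB colors for visualization.
    colormap = np.array([[68, 1, 84], [48, 103, 141], [53, 183, 120], [199, 216, 52]], dtype=np.uint8)
    colored_mask = colormap[segmentation_mask[0]]  # shape (H, W, 3)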

notebooks/003-hello-segmentation/README.md

+1 -1
@@ -10,7 +10,7 @@ This notebook demonstrates how to do inference with segmentation model.
 
 ## Notebook Contents
 
-A very basic introduction to segmentation with OpenVINO. This notebook uses the [road-segmentation-adas-0001](https://docs.openvinotoolkit.org/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image dowloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
+A very basic introduction to segmentation with OpenVINO. This notebook uses the [road-segmentation-adas-0001](https://docs.openvino.ai/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image dowloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
 
 ## Installation Instructions

notebooks/004-hello-detection/README.md

+1 -1
@@ -10,7 +10,7 @@ This notebook demonstrates how to do inference with detection model.
 
 ## Notebook Contents
 
-A very basic introduction to detection with OpenVINO. We use the [horizontal-text-detection-0001](https://docs.openvinotoolkit.org/latest/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). It detects texts in images and returns blob of data in shape of [100, 5]. For each detection description has format [x_min, y_min, x_max, y_max, conf].
+A very basic introduction to detection with OpenVINO. We use the [horizontal-text-detection-0001](https://docs.openvino.ai/latest/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). It detects texts in images and returns blob of data in shape of [100, 5]. For each detection description has format [x_min, y_min, x_max, y_max, conf].
 
 ## Installation Instructions
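The changed line documents the model output format: a [100, 5] blob where each row is [x_min, y_min, x_max, y_max, conf]. A short sketch of filtering that blob by confidence (the array values and threshold are illustrative, not from the notebook):

    import numpy as np

    # Stand-in for the horizontal-text-detection-0001 output: 100 rows of
    # [x_min, y_min, x_max, y_max, conf].
    detections = np.zeros((100, 5), dtype=np.float32)
    detections[0] = [12, 40, 180, 90, 0.93]
    detections[1] = [200, 55, 320, 110, 0.08]

    confidence_threshold = 0.5  # illustrative value
    for x_min, y_min, x_max, y_max, conf in detections[detections[:, -1] > confidence_threshold]:
        print(f"text box ({x_min:.0f}, {y_min:.0f})-({x_max:.0f}, {y_max:.0f}), confidence {conf:.2f}")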

notebooks/101-tensorflow-to-openvino/101-tensorflow-to-openvino.ipynb

+2 -2
@@ -8,7 +8,7 @@
 "source": [
 "# Convert a TensorFlow Model to OpenVINO\n",
 "\n",
-"This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvinotoolkit.org/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO's [Intermediate Representation](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_IR_and_opsets.html) (IR) format using the [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) tool. After creating the IR, we load the model in OpenVINO's [Inference Engine](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and perform inference with a sample image. "
+"This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO's [Intermediate Representation](https://docs.openvino.ai/latest/openvino_docs_MO_DG_IR_and_opsets.html) (IR) format using the [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) tool. After creating the IR, we load the model in OpenVINO's [Inference Engine](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and perform inference with a sample image. "
 ]
 },
 {
@@ -66,7 +66,7 @@
 "\n",
 "### Convert TensorFlow Model to OpenVINO IR Format\n",
 "\n",
-"Call the OpenVINO Model Optimizer tool to convert the TensorFlow model to OpenVINO IR with FP16 precision. The models are saved to the current directory. We add the mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. The original model expects input images in RGB format. The converted model also expects images in RGB format. If you want the converted model to work with BGR images, you can use the `--reverse-input-channels` option. See the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer, including a description of the command line options. Check the [model documentation](https://docs.openvinotoolkit.org/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) for information about the model, including input shape, expected color order and mean values.\n",
+"Call the OpenVINO Model Optimizer tool to convert the TensorFlow model to OpenVINO IR with FP16 precision. The models are saved to the current directory. We add the mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. The original model expects input images in RGB format. The converted model also expects images in RGB format. If you want the converted model to work with BGR images, you can use the `--reverse-input-channels` option. See the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer, including a description of the command line options. Check the [model documentation](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) for information about the model, including input shape, expected color order and mean values.\n",
 "\n",
 "We first construct the command for Model Optimizer, and then execute this command in the notebook by prepending the command with a `!`. There may be some errors or warnings in the output. Model Optimization was succesful if the last lines of the output include `[ SUCCESS ] Generated IR version 11 model.`"
 ]
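The last context line above notes that the notebook builds the Model Optimizer command in Python and runs it with `!`. A sketch of that pattern, assuming a SavedModel directory and the mean/scale handling the cell describes (paths, input shape and exact values are illustrative, not copied from the notebook):

    from pathlib import Path

    model_path = Path("model/v3-small_224_1.0_float")  # hypothetical SavedModel directory
    ir_path = Path("model/ir")

    # Folding mean/scale values into the IR means input images need no separate normalization.
    mo_command = (
        f'mo --saved_model_dir "{model_path}" '
        f'--input_shape "[1,224,224,3]" '
        f'--mean_values "[127.5,127.5,127.5]" '
        f'--scale_values "[127.5]" '
        f'--data_type FP16 '
        f'--output_dir "{ir_path}"'
    )
    print(mo_command)
    # In a notebook cell this string is executed with: ! $mo_command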

notebooks/101-tensorflow-to-openvino/README.md

+1 -1
@@ -8,7 +8,7 @@ This tutorial explains how to convert [TensorFlow](www.tensorflow.org) models to
 
 ## Notebook Contents
 
-The notebook uses OpenVINO [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert the same [MobilenetV3](https://docs.openvinotoolkit.org/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) used in the [001-hello-world notebook](../001-hello-world/001-hello-world.ipynb).
+The notebook uses OpenVINO [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert the same [MobilenetV3](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) used in the [001-hello-world notebook](../001-hello-world/001-hello-world.ipynb).
 
 ## Installation Instructions

notebooks/102-pytorch-onnx-to-openvino/102-pytorch-onnx-to-openvino.ipynb

+4 -4
@@ -169,7 +169,7 @@
 "\n",
 "Call the OpenVINO Model Optimizer tool to convert the ONNX model to OpenVINO IR with FP16 precision. The models are saved to the current directory. We add the mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network.\n",
 "\n",
-"See the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer."
+"See the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer."
 ]
 },
 {
@@ -453,7 +453,7 @@
 "source": [
 "## Performance Comparison\n",
 "\n",
-"Measure the time it takes to do inference on twenty images. This gives an indication of performance. For more accurate benchmarking, use the [OpenVINO Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html). Note that many optimizations are possible to improve the performance. "
+"Measure the time it takes to do inference on twenty images. This gives an indication of performance. For more accurate benchmarking, use the [OpenVINO Benchmark Tool](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html). Note that many optimizations are possible to improve the performance. "
 ]
 },
 {
@@ -550,8 +550,8 @@
 "\n",
 "* [Fastseg](https://github.com/ekzhang/fastseg)\n",
 "* [PIP install openvino-dev](https://github.com/openvinotoolkit/openvino/blob/releases/2021/3/docs/install_guides/pypi-openvino-dev.md)\n",
-"* [OpenVINO ONNX support](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_ONNX_Support.html)\n",
-"* [Model Optimizer Documentation](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html)\n"
+"* [OpenVINO ONNX support](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_ONNX_Support.html)\n",
+"* [Model Optimizer Documentation](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html)\n"
 ]
 }
 ],
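The Performance Comparison cell above times inference on twenty images as a rough indicator and defers to benchmark_app for accurate numbers. A sketch of such a timing helper (the compiled network and preprocessed input come from earlier notebook cells, so they are parameters here):

    import time

    def measure_fps(exec_net, input_key, input_image, num_images=20):
        """Run synchronous inference repeatedly and return a rough frames-per-second figure."""
        start = time.perf_counter()
        for _ in range(num_images):
            exec_net.infer(inputs={input_key: input_image})
        elapsed = time.perf_counter() - start
        return num_images / elapsed

    # For more rigorous measurements the cell points to benchmark_app, e.g.
    # `benchmark_app -m fastseg.xml -d CPU -t 15` (model path illustrative).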

notebooks/102-pytorch-onnx-to-openvino/README.md

+1 -1
@@ -9,7 +9,7 @@ This notebook demonstrates how to perform inference on a PyTorch semantic segmen
 
 ## Notebook Contents
 
-The notebook uses OpenVINO [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert the open source [fastseg](https://github.com/ekzhang/fastseg/) semantic segmentation model, trained on the [CityScapes](https://www.cityscapes-dataset.com) dataset, to OpenVINO IR. It also shows how to perform segmentation inference on an image using OpenVINO [Inference Engine](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the PyTorch model with the IR model.
+The notebook uses OpenVINO [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert the open source [fastseg](https://github.com/ekzhang/fastseg/) semantic segmentation model, trained on the [CityScapes](https://www.cityscapes-dataset.com) dataset, to OpenVINO IR. It also shows how to perform segmentation inference on an image using OpenVINO [Inference Engine](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the PyTorch model with the IR model.
 
 ## Installation Instructions

notebooks/103-paddle-onnx-to-openvino/103-paddle-onnx-to-openvino-classification.ipynb

+2 -2
@@ -9,7 +9,7 @@
 "source": [
 "# Convert a PaddlePaddle Model to ONNX and OpenVINO IR\n",
 "\n",
-"This notebook shows how to convert a MobileNetV3 model from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), pretrained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on a sample image using OpenVINO's [Inference Engine](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) model with the IR model. \n",
+"This notebook shows how to convert a MobileNetV3 model from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), pretrained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on a sample image using OpenVINO's [Inference Engine](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) model with the IR model. \n",
 "\n",
 "Source of the [model](https://www.paddlepaddle.org.cn/hubdetail?name=mobilenet_v3_large_imagenet_ssld&en_category=ImageClassification)."
 ]
@@ -238,7 +238,7 @@
 "\n",
 "Call the OpenVINO Model Optimizer tool to convert the PaddlePaddle model to OpenVINO IR, with FP32 precision. The models are saved to the current directory. We can add the mean values to the model with `--mean_values` and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. However, to get the exact same output as the PaddlePaddle model, it is necessary to preprocess in the image in the same way. For this tutorial, we therefore do not add the mean and scale values to the model, and we use the `process_image` function, as described in the previous section, to ensure that both the IR and the PaddlePaddle model use the same preprocessing methods. We do show how to get the mean and scale values of the PaddleGAN model, so you can add them to the Model Optimizer command if you want. See the [PyTorch/ONNX to OpenVINO](../102-pytorch-onnx-to-openvino/102-pytorch-onnx-to-openvino.ipynb) notebook for a notebook where these options are used.\n",
 "\n",
-"Run `! mo --help` in a code cell to show an overview of command line options for Model Optimizer. See the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer.\n",
+"Run `! mo --help` in a code cell to show an overview of command line options for Model Optimizer. See the [Model Optimizer Developer Guide](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer.\n",
 "\n",
 "In the next cell, we first construct the command for Model Optimizer, and then execute this command in the notebook by prepending the command with a `!`. Model Optimization was succesful if the last lines of the output include `[ SUCCESS ] Generated IR version 11 model`."
 ]
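The cell above stresses that the IR and the PaddlePaddle model must share the same preprocessing so their outputs can be compared directly. A small sketch of such a comparison (function and variable names are hypothetical, not from the notebook):

    import numpy as np

    def compare_outputs(paddle_probs: np.ndarray, ir_probs: np.ndarray) -> None:
        """Compare class probabilities from the original model and the converted IR."""
        same_top1 = int(np.argmax(paddle_probs)) == int(np.argmax(ir_probs))
        max_diff = float(np.max(np.abs(paddle_probs - ir_probs)))
        print(f"Top-1 prediction matches: {same_top1}; max absolute difference: {max_diff:.6f}")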

notebooks/103-paddle-onnx-to-openvino/README.md

+1 -1
@@ -8,7 +8,7 @@ This notebook shows how to convert [PaddlePaddle](https://www.paddlepaddle.org.c
 
 ## Notebook Contents
 
-The notebook uses [Paddle2ONNX](https://github.com/PaddlePaddle/paddle2onnx) and OpenVINO [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert a MobileNet V3 [PaddleHub](https://github.com/PaddlePaddle/PaddleHub) model, pretrained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on an image using OpenVINO [Inference Engine](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the PaddlePaddle model with the IR model.
+The notebook uses [Paddle2ONNX](https://github.com/PaddlePaddle/paddle2onnx) and OpenVINO [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert a MobileNet V3 [PaddleHub](https://github.com/PaddlePaddle/PaddleHub) model, pretrained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on an image using OpenVINO [Inference Engine](https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) and compares the results of the PaddlePaddle model with the IR model.
 
 ## Installation Instructions

notebooks/105-language-quantize-bert/105-language-quantize-bert.ipynb

+2 -2
@@ -8,7 +8,7 @@
 },
 "source": [
 "# Quantize NLP models with OpenVINO Post-Training Optimization Tool \n",
-"This tutorial demonstrates how to apply INT8 quantization to the Natural Language Processing model known as [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)), using the [Post-Training Optimization Tool API](https://docs.openvinotoolkit.org/latest/pot_compression_api_README.html) (part of the [OpenVINO Toolkit](https://docs.openvinotoolkit.org/)). We will use a fine-tuned [HuggingFace BERT](https://huggingface.co/transformers/model_doc/bert.html) [PyTorch](https://pytorch.org/) model trained on the [Microsoft Research Paraphrase Corpus (MRPC)](https://www.microsoft.com/en-us/download/details.aspx?id=52398). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
+"This tutorial demonstrates how to apply INT8 quantization to the Natural Language Processing model known as [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)), using the [Post-Training Optimization Tool API](https://docs.openvino.ai/latest/pot_compression_api_README.html) (part of the [OpenVINO Toolkit](https://docs.openvino.ai/)). We will use a fine-tuned [HuggingFace BERT](https://huggingface.co/transformers/model_doc/bert.html) [PyTorch](https://pytorch.org/) model trained on the [Microsoft Research Paraphrase Corpus (MRPC)](https://www.microsoft.com/en-us/download/details.aspx?id=52398). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
 "\n",
 "- Download and prepare the BERT model and MRPC dataset\n",
 "- Define data loading and accuracy validation functionality\n",
@@ -695,7 +695,7 @@
 }
 },
 "source": [
-"Finally, we will measure the inference performance of OpenVINO FP32 and INT8 models. To do this, we use [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html) - OpenVINO's inference performance measurement tool.\n",
+"Finally, we will measure the inference performance of OpenVINO FP32 and INT8 models. To do this, we use [Benchmark Tool](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html) - OpenVINO's inference performance measurement tool.\n",
 "\n",
 "> NOTE: `benchmark_app` is able to measure the performance of the Intermediate Representation (IR) models only. For more accurate performance, we recommended running `benchmark_app` in a terminal/command prompt after closing other applications. Run `benchmark_app -m model.xml -d CPU` to benchmark async inference on CPU for one minute. Change `CPU` to `GPU` to benchmark on GPU. Run `benchmark_app --help` to see an overview of all command line options."
 ]
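The NOTE in the last changed cell gives the benchmark_app command directly; in the notebook it is typically assembled as a string and run with `!`. A sketch under the assumption that the IR is saved as model.xml (the path, device choice and time limit are illustrative):

    model_path = "model.xml"  # hypothetical path to the FP32 or INT8 IR
    device = "CPU"            # change to "GPU" to benchmark on GPU

    # -t caps the run at 15 seconds instead of benchmark_app's default one minute.
    benchmark_command = f"benchmark_app -m {model_path} -d {device} -api async -t 15"
    print(benchmark_command)
    # In a notebook cell: ! $benchmark_command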
