
Commit 7f7b9cb

sgolebiewski-intel and Ryan Loney authored
Proofreading the notebooks. (openvinotoolkit#583)
* Proofreading the notebooks. Providing minor grammar and stylistic changes, as well as fixing broken links.
* Fixed docs URL
* Apply suggestions from code review
* Apply suggestions from code review
* Update notebooks/utils/notebook_utils.ipynb
* Update notebooks/110-ct-segmentation-quantize/110-ct-segmentation-quantize.ipynb
* Update notebooks/110-ct-segmentation-quantize/110-ct-segmentation-quantize.ipynb
* Update notebooks/202-vision-superresolution/202-vision-superresolution-video.ipynb
* Update 202-vision-superresolution-image.ipynb
* IE to OV. Changing Inference Engine to OpenVINO Runtime
* Reverting 301 to original
* Changing intro of 301
* Update 001-hello-world.ipynb
* Update 405-paddle-ocr-webcam.ipynb

Co-authored-by: Ryan Loney <ryan.loney@intel.com>
1 parent 150ba38 commit 7f7b9cb

85 files changed (+1906 -1895 lines changed)

notebooks/001-hello-world/001-hello-world.ipynb

+7 -7

```diff
@@ -7,9 +7,9 @@
    "source": [
     "# Hello Image Classification\n",
     "\n",
-    "A very basic introduction to OpenVINO that shows how to perform inference with an image classification model.\n",
+    "This basic introduction to OpenVINO shows how to do inference with an image classification model.\n",
     "\n",
-    "We use a pre-trained [MobileNetV3 model](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). See the [TensorFlow to OpenVINO](../101-tensorflow-to-openvino/101-tensorflow-to-openvino.ipynb) tutorial to learn more about how OpenVINO IR model like this one is created."
+    "A pre-trained [MobileNetV3 model](https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../101-tensorflow-to-openvino/101-tensorflow-to-openvino.ipynb) tutorial."
    ]
   },
   {
@@ -70,13 +70,13 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# The MobileNet model expects images in RGB format\n",
+    "# The MobileNet model expects images in RGB format.\n",
     "image = cv2.cvtColor(cv2.imread(filename=\"data/coco.jpg\"), code=cv2.COLOR_BGR2RGB)\n",
     "\n",
-    "# resize to MobileNet image shape\n",
+    "# Resize to MobileNet image shape.\n",
     "input_image = cv2.resize(src=image, dsize=(224, 224))\n",
     "\n",
-    "# reshape to model input shape\n",
+    "# Reshape to model input shape.\n",
     "input_image = np.expand_dims(input_image, 0)\n",
     "plt.imshow(image);"
    ]
@@ -110,8 +110,8 @@
     "# Convert the inference result to a class name.\n",
     "imagenet_classes = open(\"utils/imagenet_2012.txt\").read().splitlines()\n",
     "\n",
-    "# The model description states that for this model, class 0 is background,\n",
-    "# so we add background at the beginning of imagenet_classes\n",
+    "# The model description states that for this model, class 0 is a background.\n",
+    "# Therefore, a background must be added at the beginning of imagenet_classes.\n",
     "imagenet_classes = ['background'] + imagenet_classes\n",
     "\n",
     "imagenet_classes[result_index]"
```

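Taken together, these hunks edit comments inside a short classification flow. For orientation, here is a minimal sketch of that whole flow; the Runtime loading lines and the model path are assumptions (they live in cells outside the hunks shown), while the preprocessing and class-lookup lines mirror the diff:

```python
import cv2
import numpy as np
from openvino.runtime import Core  # OpenVINO Runtime API (2022.x), as targeted by this commit

core = Core()
model = core.read_model(model="model/v3-small_224_1.0_float.xml")  # assumed path, from a cell not shown
compiled_model = core.compile_model(model=model, device_name="CPU")
output_layer = compiled_model.output(0)

# The MobileNet model expects images in RGB format.
image = cv2.cvtColor(cv2.imread(filename="data/coco.jpg"), code=cv2.COLOR_BGR2RGB)

# Resize to MobileNet image shape, then reshape to the model input shape (N, H, W, C).
input_image = np.expand_dims(cv2.resize(src=image, dsize=(224, 224)), 0)

# Run inference and pick the class with the highest score.
result_infer = compiled_model([input_image])[output_layer]
result_index = np.argmax(result_infer)

# Class 0 is a background, so it is prepended to the ImageNet class list.
imagenet_classes = ["background"] + open("utils/imagenet_2012.txt").read().splitlines()
print(imagenet_classes[result_index])
```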
notebooks/001-hello-world/README.md

+2 -2

```diff
@@ -1,4 +1,4 @@
-# Introduction to OpenVINO
+# Introduction to OpenVINO
 
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F001-hello-world%2F001-hello-world.ipynb)
 
@@ -12,4 +12,4 @@ This notebook demonstrates usage of [MobileNet V3](https://github.com/openvinoto
 
 ## Installation Instructions
 
-If you have not done so already, please follow the [Installation Guide](../../README.md) to install all required dependencies.
+If you have not installed all required dependencies, follow the [Installation Guide](../../README.md).
```

notebooks/002-openvino-api/002-openvino-api.ipynb

+37 -37 (large diff not rendered by default)

notebooks/002-openvino-api/README.md

+8 -8

```diff
@@ -1,20 +1,20 @@
-# OpenVINO API tutorial
+# OpenVINO API tutorial
 
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F002-openvino-api%2F002-openvino-api.ipynb)
 
 
-This notebook explains the basics of the OpenVINO Inference Engine API.
-It provides a segmentation and classification IR model and a segmentation ONNX model. You can replace these model files with own models.
+This notebook explains the basics of the OpenVINO Runtime API.
+It provides a segmentation and classification IR model and a segmentation ONNX model. The model files can be replaced with your own models.
 
-Despite the exact output being different, the process remains the same.
+Despite the exact output being different, the process remains the same.
 
 ## Notebook Contents
 
-An OpenVINO API tutorial that covers the following:
+The OpenVINO API tutorial consists of the following steps:
 
-* Load Inference Engine and Show Info
+* Loading OpenVINO Runtime and Showing Info
 * Loading a Model
-* IR Model
+* OpenVINO IR Model
 * ONNX Model
 * Getting Information about a Model
 * Model Inputs
@@ -26,4 +26,4 @@ An OpenVINO API tutorial that covers the following:
 
 ## Installation Instructions
 
-If you have not done so already, please follow the [Installation Guide](../../README.md) to install all required dependencies.
+If you have not installed all required dependencies, follow the [Installation Guide](../../README.md).
```
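For readers who want the gist of the renamed steps without opening the notebook, a minimal sketch of the flow the contents list describes is shown below, assuming the steps map onto the 2022.x `openvino.runtime` API that this commit migrates to. The model paths are placeholders, not the notebook's actual files:

```python
from openvino.runtime import Core

# Loading OpenVINO Runtime and showing info.
core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']

# Loading a model: an OpenVINO IR model (.xml + .bin) or an ONNX model.
model = core.read_model(model="model/classification.xml")  # placeholder path
# model = core.read_model(model="model/segmentation.onnx")

# Getting information about a model: model inputs and outputs.
print(model.inputs)
print(model.outputs)

# Compile the model for a device and inspect the resolved input shape.
compiled_model = core.compile_model(model=model, device_name="CPU")
input_layer = compiled_model.input(0)
print(input_layer.shape)
```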

notebooks/003-hello-segmentation/003-hello-segmentation.ipynb

+16 -16

```diff
@@ -7,9 +7,9 @@
    "source": [
     "# Hello Image Segmentation\n",
     "\n",
-    "A very basic introduction to using segmentation models with OpenVINO.\n",
+    "A very basic introduction to using segmentation models with OpenVINO.\n",
     "\n",
-    "We use the pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark."
+    "In this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark."
    ]
   },
   {
@@ -77,19 +77,19 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# The segmentation network expects images in BGR format\n",
+    "# The segmentation network expects images in BGR format.\n",
     "image = cv2.imread(\"data/empty_road_mapillary.jpg\")\n",
     "\n",
     "rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
     "image_h, image_w, _ = image.shape\n",
     "\n",
-    "# N,C,H,W = batch size, number of channels, height, width\n",
+    "# N,C,H,W = batch size, number of channels, height, width.\n",
     "N, C, H, W = input_layer_ir.shape\n",
     "\n",
-    "# OpenCV resize expects the destination size as (width, height)\n",
+    "# OpenCV resize expects the destination size as (width, height).\n",
     "resized_image = cv2.resize(image, (W, H))\n",
     "\n",
-    "# reshape to network input shape\n",
+    "# Reshape to the network input shape.\n",
     "input_image = np.expand_dims(\n",
     "    resized_image.transpose(2, 0, 1), 0\n",
     ") \n",
@@ -111,10 +111,10 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Run the inference\n",
+    "# Run the inference.\n",
     "result = compiled_model([input_image])[output_layer_ir]\n",
     "\n",
-    "# Prepare data for visualization\n",
+    "# Prepare data for visualization.\n",
     "segmentation_mask = np.argmax(result, axis=1)\n",
     "plt.imshow(segmentation_mask.transpose(1, 2, 0))"
    ]
@@ -134,17 +134,17 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Define colormap, each color represents a class\n",
+    "# Define colormap, each color represents a class.\n",
     "colormap = np.array([[68, 1, 84], [48, 103, 141], [53, 183, 120], [199, 216, 52]])\n",
     "\n",
-    "# Define the transparency of the segmentation mask on the photo\n",
+    "# Define the transparency of the segmentation mask on the photo.\n",
     "alpha = 0.3\n",
     "\n",
-    "# Use function from notebook_utils.py to transform mask to an RGB image\n",
+    "# Use function from notebook_utils.py to transform mask to an RGB image.\n",
     "mask = segmentation_map_to_image(segmentation_mask, colormap)\n",
     "resized_mask = cv2.resize(mask, (image_w, image_h))\n",
     "\n",
-    "# Create image with mask put on\n",
+    "# Create an image with mask.\n",
     "image_with_mask = cv2.addWeighted(resized_mask, alpha, rgb_image, 1 - alpha, 0)"
    ]
   },
@@ -163,19 +163,19 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Define titles with images\n",
+    "# Define titles with images.\n",
     "data = {\"Base Photo\": rgb_image, \"Segmentation\": mask, \"Masked Photo\": image_with_mask}\n",
     "\n",
-    "# Create subplot to visualize images\n",
+    "# Create a subplot to visualize images.\n",
     "fig, axs = plt.subplots(1, len(data.items()), figsize=(15, 10))\n",
     "\n",
-    "# Fill subplot\n",
+    "# Fill the subplot.\n",
     "for ax, (name, image) in zip(axs, data.items()):\n",
     "    ax.axis('off')\n",
     "    ax.set_title(name)\n",
     "    ax.imshow(image)\n",
     "\n",
-    "# Display image\n",
+    "# Display an image.\n",
     "plt.show(fig)"
    ]
   }
```
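The hunks above reference `compiled_model`, `input_layer_ir`, and `output_layer_ir`, which are created in a setup cell outside this diff. A minimal sketch of that setup, with the model path assumed, looks roughly like this:

```python
from openvino.runtime import Core  # OpenVINO Runtime API, per this commit's IE-to-OV rename

core = Core()
# Assumed path; the notebook obtains road-segmentation-adas-0001 ahead of this cell.
model = core.read_model(model="model/road-segmentation-adas-0001.xml")
compiled_model = core.compile_model(model=model, device_name="CPU")

# The layer handles used by the preprocessing, inference, and visualization cells shown above.
input_layer_ir = compiled_model.input(0)
output_layer_ir = compiled_model.output(0)
```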

notebooks/003-hello-segmentation/README.md

+3 -3

```diff
@@ -1,4 +1,4 @@
-# Introduction to Segmentation in OpenVINO
+# Introduction to Segmentation in OpenVINO
 
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F003-hello-segmentation%2F003-hello-segmentation.ipynb)
 
@@ -10,8 +10,8 @@ This notebook demonstrates how to do inference with segmentation model.
 
 ## Notebook Contents
 
-A very basic introduction to segmentation with OpenVINO. This notebook uses the [road-segmentation-adas-0001](https://docs.openvino.ai/latest/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image dowloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
+A very basic introduction to segmentation with OpenVINO. This notebook uses the [road-segmentation-adas-0001](https://docs.openvino.ai/latest/omz_models_model_road_segmentation_adas_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image downloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
 
 ## Installation Instructions
 
-If you have not done so already, please follow the [Installation Guide](../../README.md) to install all required dependencies.
+If you have not installed all required dependencies, follow the [Installation Guide](../../README.md).
```

notebooks/004-hello-detection/004-hello-detection.ipynb

+21 -21

```diff
@@ -7,9 +7,9 @@
    "source": [
     "# Hello Object Detection\n",
     "\n",
-    "A very basic introduction to using object detection models with OpenVINO.\n",
+    "A very basic introduction to using object detection models with OpenVINO.\n",
     "\n",
-    "We use the [horizontal-text-detection-0001](https://docs.openvino.ai/latest/omz_models_model_horizontal_text_detection_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the format `[x_min, y_min, x_max, y_max, conf]`, where\n",
+    "The [horizontal-text-detection-0001](https://docs.openvino.ai/latest/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
     "`(x_min, y_min)` are the coordinates of the top left bounding box corner, `(x_max, y_max)` are the coordinates of the bottom right bounding box corner and `conf` is the confidence for the predicted class."
    ]
   },
@@ -73,16 +73,16 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Text detection models expects image in BGR format\n",
+    "# Text detection models expect an image in BGR format.\n",
     "image = cv2.imread(\"data/intel_rnb.jpg\")\n",
     "\n",
-    "# N,C,H,W = batch size, number of channels, height, width\n",
+    "# N,C,H,W = batch size, number of channels, height, width.\n",
     "N, C, H, W = input_layer_ir.shape\n",
     "\n",
-    "# Resize image to meet network expected input sizes\n",
+    "# Resize the image to meet network expected input sizes.\n",
     "resized_image = cv2.resize(image, (W, H))\n",
     "\n",
-    "# Reshape to network input shape\n",
+    "# Reshape to the network input shape.\n",
     "input_image = np.expand_dims(resized_image.transpose(2, 0, 1), 0)\n",
     "\n",
     "plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB));"
@@ -103,10 +103,10 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Create inference request\n",
+    "# Create an inference request.\n",
     "boxes = compiled_model([input_image])[output_layer_ir]\n",
     "\n",
-    "# Remove zero only boxes\n",
+    "# Remove zero only boxes.\n",
     "boxes = boxes[~np.all(boxes == 0, axis=1)]"
    ]
   },
@@ -125,38 +125,38 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# For each detection, the description has the format: [x_min, y_min, x_max, y_max, conf]\n",
-    "# Image passed here is in BGR format with changed width and height. To display it in colors expected by matplotlib we use cvtColor function\n",
+    "# For each detection, the description is in the [x_min, y_min, x_max, y_max, conf] format:\n",
+    "# The image passed here is in BGR format with changed width and height. To display it in colors expected by matplotlib, use cvtColor function\n",
     "def convert_result_to_image(bgr_image, resized_image, boxes, threshold=0.3, conf_labels=True):\n",
-    "    # Define colors for boxes and descriptions\n",
+    "    # Define colors for boxes and descriptions.\n",
     "    colors = {\"red\": (255, 0, 0), \"green\": (0, 255, 0)}\n",
     "\n",
-    "    # Fetch image shapes to calculate ratio\n",
+    "    # Fetch the image shapes to calculate a ratio.\n",
     "    (real_y, real_x), (resized_y, resized_x) = bgr_image.shape[:2], resized_image.shape[:2]\n",
     "    ratio_x, ratio_y = real_x / resized_x, real_y / resized_y\n",
     "\n",
-    "    # Convert base image from bgr to rgb format\n",
+    "    # Convert the base image from BGR to RGB format.\n",
     "    rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)\n",
     "\n",
-    "    # Iterate through non-zero boxes\n",
+    "    # Iterate through non-zero boxes.\n",
     "    for box in boxes:\n",
-    "        # Pick confidence factor from last place in array\n",
+    "        # Pick a confidence factor from the last place in an array.\n",
     "        conf = box[-1]\n",
     "        if conf > threshold:\n",
-    "            # Convert float to int and multiply corner position of each box by x and y ratio\n",
-    "            # In case that bounding box is found at the top of the image, \n",
-    "            # we position upper box bar little lower to make it visible on image \n",
+    "            # Convert float to int and multiply corner position of each box by x and y ratio.\n",
+    "            # If the bounding box is found at the top of the image, \n",
+    "            # position the upper box bar little lower to make it visible on the image. \n",
     "            (x_min, y_min, x_max, y_max) = [\n",
     "                int(max(corner_position * ratio_y, 10)) if idx % 2 \n",
     "                else int(corner_position * ratio_x)\n",
     "                for idx, corner_position in enumerate(box[:-1])\n",
     "            ]\n",
     "\n",
-    "            # Draw box based on position, parameters in rectangle function are: image, start_point, end_point, color, thickness\n",
+    "            # Draw a box based on the position, parameters in rectangle function are: image, start_point, end_point, color, thickness.\n",
     "            rgb_image = cv2.rectangle(rgb_image, (x_min, y_min), (x_max, y_max), colors[\"green\"], 3)\n",
     "\n",
-    "            # Add text to image based on position and confidence\n",
-    "            # Parameters in text function are: image, text, bottom-left_corner_textfield, font, font_scale, color, thickness, line_type\n",
+    "            # Add text to the image based on position and confidence.\n",
+    "            # Parameters in text function are: image, text, bottom-left_corner_textfield, font, font_scale, color, thickness, line_type.\n",
     "            if conf_labels:\n",
     "                rgb_image = cv2.putText(\n",
     "                    rgb_image,\n",
```

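The cell above defines `convert_result_to_image()`, but the diff cuts off before the call site. Purely as an illustration, with the variable names taken from the hunks and the plotting call assumed rather than shown in this commit, the function would be used along these lines:

```python
import matplotlib.pyplot as plt

# Overlay the filtered text boxes on the original BGR image and display the result.
plt.figure(figsize=(10, 6))
plt.axis("off")
plt.imshow(convert_result_to_image(image, resized_image, boxes, conf_labels=False))
plt.show()
```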
notebooks/004-hello-detection/README.md

+3 -3

```diff
@@ -1,4 +1,4 @@
-# Introduction to Detection in OpenVINO
+# Introduction to Detection in OpenVINO
 
 [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F004-hello-detection%2F004-hello-detection.ipynb)
 
@@ -10,8 +10,8 @@ This notebook demonstrates how to do inference with detection model.
 
 ## Notebook Contents
 
-A very basic introduction to detection with OpenVINO. We use the [horizontal-text-detection-0001](https://docs.openvino.ai/latest/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). It detects texts in images and returns blob of data in shape of [100, 5]. For each detection description has format [x_min, y_min, x_max, y_max, conf].
+In this basic introduction to detection with OpenVINO, the [horizontal-text-detection-0001](https://docs.openvino.ai/latest/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects text in images and returns blob of data in shape of [100, 5]. For each detection, a description is in the [x_min, y_min, x_max, y_max, conf] format.
 
 ## Installation Instructions
 
-If you have not done so already, please follow the [Installation Guide](../../README.md) to install all required dependencies.
+If you have not installed all required dependencies, follow the [Installation Guide](../../README.md).
```
