
Commit 5e91d27 (1 parent: 133b25d)

Avoid html in navigation for table of content (openvinotoolkit#1277)

98 files changed: +2518 −3582 lines


.ci/spellcheck/.pyspelling.wordlist.txt (+1)

@@ -505,6 +505,7 @@ UNet
 Unet
 Unimodal
 unsqueeze
+Uparrow
 Upcroft
 upsample
 upsampled
notebooks/001-hello-world/001-hello-world.ipynb (+13 −20)

@@ -11,23 +11,21 @@
 "\n",
 "A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb) tutorial.",
 "\n",
-"<a id=\"0\"></a>\n",
-"Table of content:\n",
-"- [Imports](#1)\n",
-"- [Download the Model and data samples](#2)\n",
-"- [Select inference device](#3)\n",
-"- [Load the Model](#4)\n",
-"- [Load an Image](#5)\n",
-"- [Do Inference](#6)\n"
+"#### Table of content:",
+"- [Imports](#Imports-Uparrow)\n",
+"- [Download the Model and data samples](#Download-the-Model-and-data-samples-Uparrow)\n",
+"- [Select inference device](#Select-inference-device-Uparrow)\n",
+"- [Load the Model](#Load-the-Model-Uparrow)\n",
+"- [Load an Image](#Load-an-Image-Uparrow)\n",
+"- [Do Inference](#Do-Inference-Uparrow)\n"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "e4c8cbe5",
 "metadata": {},
 "source": [
-"<a id=\"1\"></a>\n",
-"## Imports [&#8657;](#0)\n"
+"## Imports [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -54,8 +52,7 @@
 "id": "fb4afb34",
 "metadata": {},
 "source": [
-"<a id=\"2\"></a>\n",
-"## Download the Model and data samples [&#8657;](#0)\n"
+"## Download the Model and data samples [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -95,8 +92,7 @@
 "id": "5d2cf255-ac39-4656-b742-ec12520f423b",
 "metadata": {},
 "source": [
-"<a id=\"3\"></a>\n",
-"## Select inference device [&#8657;](#0)\n",
+"## Select inference device [$\\Uparrow$](#Table-of-content:)\n",
 "\n",
 "select device from dropdown list for running inference using OpenVINO"
 ]
@@ -149,8 +145,7 @@
 "id": "55e49ae7",
 "metadata": {},
 "source": [
-"<a id=\"4\"></a>\n",
-"## Load the Model [&#8657;](#0)\n"
+"## Load the Model [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -172,8 +167,7 @@
 "id": "a19fc080",
 "metadata": {},
 "source": [
-"<a id=\"5\"></a>\n",
-"## Load an Image [&#8657;](#0)\n"
+"## Load an Image [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -210,8 +204,7 @@
 "id": "6be327b6",
 "metadata": {},
 "source": [
-"<a id=\"6\"></a>\n",
-"## Do Inference [&#8657;](#0)\n"
+"## Do Inference [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
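The rewritten links above no longer target hand-written numeric `<a id>` anchors; they assume the anchor that Jupyter and GitHub derive automatically from a markdown heading, where spaces in the heading text become hyphens (the `-Uparrow` suffix appears because the heading text itself ends with the `$\Uparrow$` arrow link). A minimal sketch of that slug rule, as an assumption about the renderer's behavior rather than code from this commit:

```python
def heading_to_anchor(heading: str) -> str:
    """Approximate the in-page anchor a markdown renderer derives from a
    heading title: spaces become hyphens, other characters are kept.
    (Assumed behavior; real renderers may normalize further.)"""
    return "#" + heading.replace(" ", "-")

# "Do Inference" yields the "#Do-Inference" target style used in the new TOC.
print(heading_to_anchor("Do Inference"))
```

This is also why the back-links can point at `#Table-of-content:` — the colon survives the space-to-hyphen substitution.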

notebooks/003-hello-segmentation/003-hello-segmentation.ipynb (+17 −26)

@@ -11,25 +11,23 @@
 "\n",
 "In this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/2023.0/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.",
 "\n",
-"<a id=\"0\"></a>\n",
-"Table of content:\n",
-"- [Imports](#1)\n",
-"- [Download model weights](#2)\n",
-"- [Select inference device](#3)\n",
-"- [Load the Model](#4)\n",
-"- [Load an Image](#5)\n",
-"- [Do Inference](#6)\n",
-"- [Prepare Data for Visualization](#7)\n",
-"- [Visualize data](#8)\n"
+"#### Table of content:",
+"- [Imports](#Imports-Uparrow)\n",
+"- [Download model weights](#Download-model-weights-Uparrow)\n",
+"- [Select inference device](#Select-inference-device-Uparrow)\n",
+"- [Load the Model](#Load-the-Model-Uparrow)\n",
+"- [Load an Image](#Load-an-Image-Uparrow)\n",
+"- [Do Inference](#Do-Inference-Uparrow)\n",
+"- [Prepare Data for Visualization](#Prepare-Data-for-Visualization-Uparrow)\n",
+"- [Visualize data](#Visualize-data-Uparrow)\n"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "e2f2f808",
 "metadata": {},
 "source": [
-"<a id=\"1\"></a>\n",
-"## Imports [&#8657;](#0)\n"
+"## Imports [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -54,8 +52,7 @@
 "id": "7f2028e9",
 "metadata": {},
 "source": [
-"<a id=\"2\"></a>\n",
-"## Download model weights [&#8657;](#0)\n"
+"## Download model weights [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -119,8 +116,7 @@
 "id": "4e1083df-a497-4a19-91d2-20915c18954b",
 "metadata": {},
 "source": [
-"<a id=\"3\"></a>\n",
-"## Select inference device [&#8657;](#0)\n",
+"## Select inference device [$\\Uparrow$](#Table-of-content:)\n",
 "\n",
 "select device from dropdown list for running inference using OpenVINO"
 ]
@@ -173,8 +169,7 @@
 "id": "2a340210",
 "metadata": {},
 "source": [
-"<a id=\"4\"></a>\n",
-"## Load the Model [&#8657;](#0)\n"
+"## Load the Model [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -198,8 +193,7 @@
 "id": "c18397f8",
 "metadata": {},
 "source": [
-"<a id=\"5\"></a>\n",
-"## Load an Image [&#8657;](#0)\n",
+"## Load an Image [$\\Uparrow$](#Table-of-content:)\n",
 "A sample image from the [Mapillary Vistas](https://www.mapillary.com/dataset/vistas) dataset is provided. "
 ]
 },
@@ -255,8 +249,7 @@
 "id": "df0e7703",
 "metadata": {},
 "source": [
-"<a id=\"6\"></a>\n",
-"## Do Inference [&#8657;](#0)\n"
+"## Do Inference [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -300,8 +293,7 @@
 "id": "9503a83b",
 "metadata": {},
 "source": [
-"<a id=\"7\"></a>\n",
-"## Prepare Data for Visualization [&#8657;](#0)\n"
+"## Prepare Data for Visualization [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -330,8 +322,7 @@
 "id": "49d16e71",
 "metadata": {},
 "source": [
-"<a id=\"8\"></a>\n",
-"## Visualize data [&#8657;](#0)\n"
+"## Visualize data [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {

notebooks/004-hello-detection/004-hello-detection.ipynb (+15 −23)

@@ -12,24 +12,22 @@
 "The [horizontal-text-detection-0001](https://docs.openvino.ai/2023.0/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
 "`(x_min, y_min)` are the coordinates of the top left bounding box corner, `(x_max, y_max)` are the coordinates of the bottom right bounding box corner and `conf` is the confidence for the predicted class.",
 "\n",
-"<a id=\"0\"></a>\n",
-"Table of content:\n",
-"- [Imports](#1)\n",
-"- [Download model weights](#2)\n",
-"- [Select inference device](#3)\n",
-"- [Load the Model](#4)\n",
-"- [Load an Image](#5)\n",
-"- [Do Inference](#6)\n",
-"- [Visualize Results](#7)\n"
+"#### Table of content:",
+"- [Imports](#Imports-Uparrow)\n",
+"- [Download model weights](#Download-model-weights-Uparrow)\n",
+"- [Select inference device](#Select-inference-device-Uparrow)\n",
+"- [Load the Model](#Load-the-Model-Uparrow)\n",
+"- [Load an Image](#Load-an-Image-Uparrow)\n",
+"- [Do Inference](#Do-Inference-Uparrow)\n",
+"- [Visualize Results](#Visualize-Results-Uparrow)\n"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "740bfdd8",
 "metadata": {},
 "source": [
-"<a id=\"1\"></a>\n",
-"## Imports [&#8657;](#0)\n"
+"## Imports [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -55,8 +53,7 @@
 "id": "4bd379ea",
 "metadata": {},
 "source": [
-"<a id=\"2\"></a>\n",
-"## Download model weights [&#8657;](#0)\n"
+"## Download model weights [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -119,8 +116,7 @@
 "id": "151c5b81-2cf9-41a7-95ec-8170a33de965",
 "metadata": {},
 "source": [
-"<a id=\"3\"></a>\n",
-"## Select inference device [&#8657;](#0)\n",
+"## Select inference device [$\\Uparrow$](#Table-of-content:)\n",
 "\n",
 "select device from dropdown list for running inference using OpenVINO"
 ]
@@ -173,8 +169,7 @@
 "id": "85b48949",
 "metadata": {},
 "source": [
-"<a id=\"4\"></a>\n",
-"## Load the Model [&#8657;](#0)\n"
+"## Load the Model [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -198,8 +193,7 @@
 "id": "705ce668",
 "metadata": {},
 "source": [
-"<a id=\"5\"></a>\n",
-"## Load an Image [&#8657;](#0)\n"
+"## Load an Image [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -240,8 +234,7 @@
 "id": "f9fcaba9",
 "metadata": {},
 "source": [
-"<a id=\"6\"></a>\n",
-"## Do Inference [&#8657;](#0)\n"
+"## Do Inference [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
@@ -263,8 +256,7 @@
 "id": "09dfac5d",
 "metadata": {},
 "source": [
-"<a id=\"7\"></a>\n",
-"## Visualize Results [&#8657;](#0)\n"
+"## Visualize Results [$\\Uparrow$](#Table-of-content:)\n"
 ]
 },
 {
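The same mechanical edit repeats across all 98 files: drop the bare numeric `<a id>` anchor line and retarget each heading's up-arrow link from `#0` to the `Table of content:` heading slug. A hypothetical sketch of how that per-cell rewrite could be scripted; the function name and regex here are illustration only, not the tool actually used for this commit:

```python
import re

# Matches a notebook markdown source line that is only a numeric HTML anchor,
# e.g. '<a id="1"></a>\n' (my assumption of the pattern shown in the diffs).
ANCHOR = re.compile(r'^<a id="\d+"></a>\n?$')

def rewrite_cell(source):
    """Apply the TOC-navigation rewrite to one markdown cell's source lines."""
    out = []
    for line in source:
        if ANCHOR.match(line):  # remove bare HTML anchor lines entirely
            continue
        # Swap the HTML-entity arrow link for a LaTeX arrow pointing at the
        # auto-generated "Table of content:" heading anchor.
        out.append(line.replace('[&#8657;](#0)',
                                '[$\\Uparrow$](#Table-of-content:)'))
    return out

cell = ['<a id="1"></a>\n', '## Imports [&#8657;](#0)\n']
print(rewrite_cell(cell))
```

Running this over every markdown cell of a notebook's JSON (`cell["source"]` for cells with `cell_type == "markdown"`) would reproduce the heading-side half of the change shown above.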
