
Commit 7e329c4

update links for removed notebooks in main (openvinotoolkit#1988)
1 parent ef554db commit 7e329c4

File tree: 23 files changed, +289 −75 lines


notebooks/107-speech-recognition-quantization/107-speech-recognition-quantization-data2vec.ipynb

+2 −2

@@ -8,7 +8,6 @@
 "source": [
 "# Quantize Data2Vec Speech Recognition Model using NNCF PTQ API\n",
 "\n",
-"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/speech-recognition-quantization/speech-recognition-quantization-data2vec.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "This tutorial demonstrates how to use the NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline) to optimize the speech recognition model, known as [Data2Vec](https://arxiv.org/abs/2202.03555) for the high-speed inference via OpenVINO™ Toolkit. This notebook uses a fine-tuned [data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) [PyTorch](https://pytorch.org/) model trained on the [LibriSpeech ASR corpus](https://www.openslr.org/12). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
 "\n",
 "- Download and prepare model.\n",
@@ -263,6 +262,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "id": "0bb514d4-2d00-4a8c-a858-76730c59e3f4",
 "metadata": {},
@@ -1124,4 +1124,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}
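The same two edits recur across all 23 files in this commit (dropping the deprecation note, adding an empty "attachments" key to markdown cells), so the change was almost certainly scripted. A minimal sketch of how such a pass could be written with the standard library — the helper name and the exact marker string are assumptions for illustration, not the tooling actually used:

```python
import json

# Hypothetical marker: the line we want to strip contains this phrase.
DEPRECATION_MARKER = "This branch is deprecated"

def clean_notebook(nb: dict) -> dict:
    """Drop the deprecation-note line from markdown cells and add the
    empty "attachments" key that newer nbformat writers emit."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "markdown":
            continue
        # In nbformat JSON, "source" is a list of strings, one per line.
        cell["source"] = [
            line for line in cell["source"] if DEPRECATION_MARKER not in line
        ]
        cell.setdefault("attachments", {})
    return nb

# Tiny in-memory notebook standing in for a real .ipynb file.
nb = {
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "# Title\n",
                "> **Note:** ... This branch is deprecated.\n",
                "Body text\n",
            ],
        }
    ],
    "nbformat": 4,
    "nbformat_minor": 5,
}
cleaned = clean_notebook(json.loads(json.dumps(nb)))  # round-trip = deep copy
print(cleaned["cells"][0]["source"])       # the note line is gone
print("attachments" in cleaned["cells"][0])  # True
```

For real files the same function would be applied between `json.load` and `json.dump` over each notebook path.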

notebooks/109-performance-tricks/109-latency-tricks.ipynb

+19 −3

@@ -1,12 +1,12 @@
 {
 "cells": [
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "# Performance tricks in OpenVINO for latency mode\n",
 "\n",
-"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/performance-tricks/latency-tricks.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "\n",
 "The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a latency mode. Low latency is especially desired in real-time applications when the results are needed as soon as possible after the data appears. This notebook assumes computer vision workflow and uses [YOLOv5n](https://github.com/ultralytics/yolov5) model. We will simulate a camera application that provides frames one by one.\n",
 "\n",
@@ -26,7 +26,8 @@
 "\n",
 "\n",
 "\n",
-"#### Table of contents:\n\n",
+"#### Table of contents:\n",
+"\n",
 "- [Prerequisites](#Prerequisites)\n",
 "- [Data](#Data)\n",
 "- [Model](#Model)\n",
@@ -47,6 +48,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -108,6 +110,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -169,6 +172,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -219,6 +223,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -262,6 +267,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -314,6 +320,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -424,6 +431,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -477,6 +485,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -547,6 +556,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -600,6 +610,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -651,6 +662,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -702,6 +714,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -749,6 +762,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -800,6 +814,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -873,6 +888,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -924,4 +940,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
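The latency-tricks notebook above frames the problem as a camera delivering frames one at a time, with per-frame latency as the metric. That measurement pattern can be sketched in plain Python — the `infer` stub below stands in for an OpenVINO compiled model's synchronous call and is an assumption for illustration only:

```python
import time

def infer(frame):
    """Stand-in for a compiled model's synchronous inference call."""
    time.sleep(0.001)  # pretend the model takes ~1 ms per frame
    return {"boxes": []}

frames = range(20)  # simulated camera feed: frames arrive one by one
latencies = []
for frame in frames:
    start = time.perf_counter()
    infer(frame)  # in latency mode, each frame is processed alone
    latencies.append(time.perf_counter() - start)

avg_ms = 1000 * sum(latencies) / len(latencies)
print(f"average latency: {avg_ms:.2f} ms over {len(latencies)} frames")
```

The tricks in the notebook (device selection, performance hints, shared-memory inputs) all aim to shrink each of those per-frame timings rather than the total.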

notebooks/109-performance-tricks/109-throughput-tricks.ipynb

+20 −3

@@ -1,6 +1,7 @@
 {
 "cells": [
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -11,7 +12,6 @@
 "source": [
 "# Performance tricks in OpenVINO for throughput mode\n",
 "\n",
-"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/performance-tricks/throughput-tricks.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "\n",
 "The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a throughput mode. High throughput is especially desired in applications when the results are not expected to appear as soon as possible but to lower the whole processing time. This notebook assumes computer vision workflow and uses [YOLOv5n](https://github.com/ultralytics/yolov5) model. We will simulate a video processing application that has access to all frames at once (e.g. video editing).\n",
 "\n",
@@ -28,7 +28,8 @@
 "A similar notebook focused on the latency mode is available [here](109-latency-tricks.ipynb).\n",
 "\n",
 "\n",
-"#### Table of contents:\n\n",
+"#### Table of contents:\n",
+"\n",
 "- [Prerequisites](#Prerequisites)\n",
 "- [Data](#Data)\n",
 "- [Model](#Model)\n",
@@ -50,6 +51,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -104,6 +106,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -179,6 +182,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -238,6 +242,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -290,6 +295,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -356,6 +362,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -475,6 +482,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -537,6 +545,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -621,6 +630,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -708,6 +718,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -760,6 +771,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -816,6 +828,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -875,6 +888,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -931,6 +945,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -985,6 +1000,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -1070,6 +1086,7 @@
 ]
 },
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "collapsed": false,
@@ -1126,4 +1143,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
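The throughput notebook's premise is the opposite of the latency one: the whole "video" is available up front, so inference requests can run in parallel and the metric is total frames per second. That can be mimicked with a thread pool — a rough stand-in for what OpenVINO's asynchronous request queue provides, with a stub model assumed for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def infer(frame):
    """Stand-in for one asynchronous inference request."""
    time.sleep(0.005)  # pretend the model takes ~5 ms per frame
    return frame

frames = list(range(32))  # whole "video" available at once

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:  # 4 requests in flight
    results = list(pool.map(infer, frames))      # order is preserved
elapsed = time.perf_counter() - start

fps = len(frames) / elapsed
print(f"processed {len(results)} frames, ~{fps:.0f} FPS")
```

With four workers the 32 simulated frames overlap, so the wall-clock time is well below the sequential sum of per-frame times, which is exactly the trade-off the notebook's throughput hints exploit.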

notebooks/110-ct-segmentation-quantize/110-ct-scan-live-inference.ipynb

+1 −3

@@ -7,8 +7,6 @@
 "source": [
 "# Live Inference and Benchmark CT-scan Data with OpenVINO™\n",
 "\n",
-"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/ct-segmentation-quantize/ct-scan-live-inference.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
-"\n",
 "## Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 4 \n",
 "\n",
 "This tutorial is a part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to accelerate inference on a kidney segmentation model. The [UNet](https://arxiv.org/abs/1505.04597) model is trained from scratch, and the data is from [Kits19](https://github.com/neheller/kits19).\n",
@@ -666,4 +664,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}

0 commit comments
