
Commit 1f4e336

Add redirect link to each notebook and main README (openvinotoolkit#1873)
Ticket: CVS-137485
1 parent (e2ec5ab) · commit 1f4e336

File tree: 157 files changed (+2030, -1715 lines)


README.md (+2)

@@ -6,6 +6,8 @@ English | [简体中文](README_cn.md)
 [![CI](https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/treon_precommit.yml/badge.svg?event=push)](https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/treon_precommit.yml?query=event%3Apush)
 [![CI](https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/docker.yml/badge.svg?event=push)](https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/docker.yml?query=event%3Apush)
 
+> **Note:** This branch of the notebooks repository is deprecated and will be maintained until September 30, 2024. The new branch offers a better user experience and simplifies maintenance due to significant refactoring and a more preferred folder name structure. Please use the local [`README.md`](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/README.md) file and [OpenVINO™ Notebooks at GitHub Pages](https://openvinotoolkit.github.io/openvino_notebooks/) to navigate through the content.
+
 A collection of ready-to-run Jupyter notebooks for learning and experimenting with the OpenVINO™ Toolkit. The notebooks provide an introduction to OpenVINO basics and teach developers how to leverage our API for optimized deep learning inference.
 
 🚀 Checkout interactive GitHub pages application for navigation between OpenVINO™ Notebooks content:

notebooks/001-hello-world/001-hello-world.ipynb (+6, -2)

@@ -7,6 +7,8 @@
 "source": [
 "# Hello Image Classification\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/hello-world/hello-world.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "This basic introduction to OpenVINO™ shows how to do inference with an image classification model.\n",
 "\n",
 "A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2024/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb) tutorial.\n",
@@ -317,7 +319,9 @@
 ],
 "libraries": [],
 "other": [],
-"tasks": ["Image Classification"]
+"tasks": [
+"Image Classification"
+]
 }
 },
 "widgets": {
@@ -330,4 +334,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

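The Hello Image Classification notebook patched above covers the basic OpenVINO inference flow. As a quick reference, a minimal sketch of that flow; the IR and image paths are placeholders, not taken from the commit:

```python
# Minimal OpenVINO image-classification inference sketch (paths are hypothetical).
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model/v3-small_224_1.0_float.xml")  # placeholder IR path
compiled_model = core.compile_model(model, device_name="CPU")

# Preprocess: load an image, resize to the 224x224 input size, add a batch dimension (NHWC).
image = cv2.cvtColor(cv2.imread("data/coco.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image path
input_tensor = np.expand_dims(cv2.resize(image, (224, 224)), 0)

# Run inference and report the top class index.
result = compiled_model([input_tensor])[compiled_model.output(0)]
print("Predicted class index:", int(np.argmax(result)))
```
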
notebooks/002-openvino-api/002-openvino-api.ipynb (+3, -1)

@@ -7,6 +7,8 @@
 "source": [
 "# OpenVINO™ Runtime API Tutorial\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/openvino-api/openvino-api.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "This notebook explains the basics of the OpenVINO Runtime API.\n",
 "\n",
 "The notebook is divided into sections with headers. The next cell contains global requirements for installation and imports. Each section is standalone and does not depend on any previous sections. All models used in this tutorial are provided as examples. These model files can be replaced with your own models. The exact outputs will be different, but the process is the same. \n",
@@ -1523,4 +1525,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

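For context on what the OpenVINO™ Runtime API Tutorial covers, here is a short hedged sketch of the core API surface (device discovery, model inspection, compilation); the model path is a hypothetical example:

```python
# Sketch of the core Runtime API calls: device discovery, model inspection, compilation.
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)

model = core.read_model("model/classification.xml")  # hypothetical IR file
print("Inputs:", [(port.any_name, port.shape) for port in model.inputs])
print("Outputs:", [(port.any_name, port.shape) for port in model.outputs])

compiled_model = core.compile_model(model, device_name="CPU")
# An explicit InferRequest can be created when you want to reuse it across calls:
request = compiled_model.create_infer_request()
```
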
notebooks/003-hello-segmentation/003-hello-segmentation.ipynb (+2)

@@ -7,6 +7,8 @@
 "source": [
 "# Hello Image Segmentation\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/hello-segmentation/hello-segmentation.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "A very basic introduction to using segmentation models with OpenVINO™.\n",
 "\n",
 "In this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/2024/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n",

notebooks/004-hello-detection/004-hello-detection.ipynb (+3, -1)

@@ -7,6 +7,8 @@
 "source": [
 "# Hello Object Detection\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/hello-detection/hello-detection.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "A very basic introduction to using object detection models with OpenVINO™.\n",
 "\n",
 "The [horizontal-text-detection-0001](https://docs.openvino.ai/2024/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
@@ -415,4 +417,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

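The detection output format described above (a `[100, 5]` blob of `[x_min, y_min, x_max, y_max, conf]` rows) is straightforward to post-process. A small illustrative sketch with made-up values:

```python
# Post-processing sketch for a [100, 5] detection blob of [x_min, y_min, x_max, y_max, conf] rows.
# The values below are fabricated for illustration only.
import numpy as np

boxes = np.array([
    [ 10.0,  20.0, 110.0,  60.0, 0.93],
    [200.0,  40.0, 260.0,  80.0, 0.12],  # low confidence, dropped below
    [  0.0,   0.0,   0.0,   0.0, 0.00],  # padding row
])

threshold = 0.5
kept = boxes[boxes[:, -1] > threshold]
for x_min, y_min, x_max, y_max, conf in kept:
    print(f"text box ({x_min:.0f}, {y_min:.0f})-({x_max:.0f}, {y_max:.0f}) conf={conf:.2f}")
```
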
notebooks/101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb (+3, -1)

@@ -8,6 +8,8 @@
 "source": [
 "# Convert a TensorFlow Model to OpenVINO™\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/tensorflow-classification-to-openvino/tensorflow-classification-to-openvino.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/2024/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html) and do inference with a sample image. \n",
 "\n",
 "\n",
@@ -511,4 +513,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}

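The conversion flow referenced above boils down to `ov.convert_model` plus `ov.save_model`. A hedged sketch, assuming a MobileNetV3 SavedModel directory at a placeholder path:

```python
# Sketch: TensorFlow SavedModel -> OpenVINO IR (directory and file names are placeholders).
import openvino as ov

ov_model = ov.convert_model("model/v3-small_224_1.0_float")  # TensorFlow SavedModel directory
ov.save_model(ov_model, "model/v3-small_224_1.0_float.xml")  # writes .xml + .bin, weights compressed to FP16 by default

# The converted model can also be compiled directly, without re-reading it from disk:
compiled_model = ov.Core().compile_model(ov_model, device_name="CPU")
```
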
notebooks/102-pytorch-to-openvino/102-pytorch-onnx-to-openvino.ipynb (+3, -1)

@@ -9,6 +9,8 @@
 "source": [
 "# Convert a PyTorch Model to ONNX and OpenVINO™ IR\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/pytorch-to-openvino/pytorch-onnx-to-openvino.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "This tutorial demonstrates step-by-step instructions on how to do inference on a PyTorch semantic segmentation model, using OpenVINO Runtime.\n",
 "\n",
 "First, the PyTorch model is exported in [ONNX](https://onnx.ai/) format and then converted to OpenVINO IR. Then the respective ONNX and OpenVINO IR models are loaded into OpenVINO Runtime to show model predictions.\n",
@@ -894,4 +896,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

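The ONNX route described above can be sketched as follows; the torchvision segmentation model and file names are illustrative stand-ins, not necessarily the ones used in the notebook:

```python
# Sketch: PyTorch -> ONNX -> OpenVINO IR (model choice and file names are illustrative).
import torch
import torchvision
import openvino as ov

pt_model = torchvision.models.segmentation.lraspp_mobilenet_v3_large(weights=None).eval()
dummy_input = torch.randn(1, 3, 512, 512)

# 1. Export the PyTorch model to ONNX.
torch.onnx.export(pt_model, dummy_input, "model.onnx")

# 2. Convert the ONNX file to OpenVINO IR and save it for later use.
ov_model = ov.convert_model("model.onnx")
ov.save_model(ov_model, "model.xml")
```
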
notebooks/102-pytorch-to-openvino/102-pytorch-to-openvino.ipynb (+2)

@@ -10,6 +10,8 @@
 "source": [
 "# Convert a PyTorch Model to OpenVINO™ IR\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/pytorch-to-openvino/pytorch-to-openvino.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "This tutorial demonstrates step-by-step instructions on how to do inference on a PyTorch classification model using OpenVINO Runtime.\n",
 "Starting from OpenVINO 2023.0 release, OpenVINO supports direct PyTorch model conversion without an intermediate step to convert them into ONNX format. In order, if you try to use the lower OpenVINO version or prefer to use ONNX, please check this [tutorial](../102-pytorch-to-openvino/102-pytorch-onnx-to-openvino.ipynb).\n",
 "\n",

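The direct PyTorch conversion path mentioned above (OpenVINO 2023.0+) skips ONNX entirely. A minimal sketch with an illustrative torchvision model:

```python
# Sketch: direct PyTorch -> OpenVINO conversion (no ONNX step), available since OpenVINO 2023.0.
import torch
import torchvision
import openvino as ov

pt_model = torchvision.models.resnet50(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)

# convert_model accepts a torch.nn.Module directly; example_input drives the tracing.
ov_model = ov.convert_model(pt_model, example_input=example_input)
compiled_model = ov.Core().compile_model(ov_model, device_name="CPU")
```
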
notebooks/103-paddle-to-openvino/103-paddle-to-openvino-classification.ipynb (+3, -1)

@@ -9,6 +9,8 @@
 "source": [
 "# Convert a PaddlePaddle Model to OpenVINO™ IR\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/paddle-to-openvino/paddle-to-openvino-classification.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "This notebook shows how to convert a MobileNetV3 model from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), pre-trained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on a sample image, using [OpenVINO Runtime](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html) and compares the results of the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) model with the IR model.\n",
 "\n",
 "Source of the [model](https://www.paddlepaddle.org.cn/hubdetail?name=mobilenet_v3_large_imagenet_ssld&en_category=ImageClassification).\n",
@@ -742,4 +744,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

notebooks/104-model-tools/104-model-tools.ipynb (+3, -1)

@@ -7,6 +7,8 @@
 "metadata": {},
 "source": [
 "# Working with Open Model Zoo Models\n",
+"\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/model-tools/model-tools.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "This tutorial shows how to download a model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo), convert it to OpenVINO™ IR format, show information about the model, and benchmark the model.\n",
 "\n",
 "\n",
@@ -791,4 +793,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

notebooks/105-language-quantize-bert/105-language-quantize-bert.ipynb (+2)

@@ -8,6 +8,8 @@
 },
 "source": [
 "# Quantize NLP models with Post-Training Quantization ​in NNCF\n",
+"\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/language-quantize-bert/language-quantize-bert.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "This tutorial demonstrates how to apply `INT8` quantization to the Natural Language Processing model known as [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)), using the [Post-Training Quantization API](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.html) (NNCF library). A fine-tuned [HuggingFace BERT](https://huggingface.co/transformers/model_doc/bert.html) [PyTorch](https://pytorch.org/) model, trained on the [Microsoft Research Paraphrase Corpus (MRPC)](https://www.microsoft.com/en-us/download/details.aspx?id=52398), will be used. The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
 "\n",
 "- Download and prepare the BERT model and MRPC dataset.\n",

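The NNCF post-training quantization flow used by this notebook follows a small, generic pattern: wrap calibration data in `nncf.Dataset` and call `nncf.quantize`. A hedged sketch; the IR path, input names, and the dummy calibration sample are assumptions, not taken from the notebook:

```python
# Sketch of NNCF 8-bit post-training quantization (IR path, input names, and sample data are assumptions).
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model/bert_mrpc.xml")  # hypothetical FP32 IR of the fine-tuned BERT

# A tiny stand-in calibration set; real code would iterate the MRPC validation split.
calibration_samples = [{
    "input_ids": np.zeros((1, 128), dtype=np.int64),
    "attention_mask": np.ones((1, 128), dtype=np.int64),
    "token_type_ids": np.zeros((1, 128), dtype=np.int64),
}]

def transform_fn(sample):
    # Map one dataset item to the model's input dict.
    return sample

calibration_dataset = nncf.Dataset(calibration_samples, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model/bert_mrpc_int8.xml")
```
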
notebooks/106-auto-device/106-auto-device.ipynb (+2)

@@ -9,6 +9,8 @@
 "source": [
 "# Automatic Device Selection with OpenVINO™\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/auto-device/auto-device.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "The [Auto device](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.html) (or AUTO in short) selects the most suitable device for inference by considering the model precision, power efficiency and processing capability of the available [compute devices](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html). The model precision (such as `FP32`, `FP16`, `INT8`, etc.) is the first consideration to filter out the devices that cannot run the network efficiently.\n",
 "\n",
 "Next, if dedicated accelerators are available, these devices are preferred (for example, integrated and discrete [GPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html)). [CPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html) is used as the default \"fallback device\". Keep in mind that AUTO makes this selection only once, during the loading of a model. \n",

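Using AUTO is a one-line change at compile time. A sketch, assuming a recent OpenVINO release and a placeholder IR path; the `EXECUTION_DEVICES` query at the end is an assumption about your installed version:

```python
# Sketch: compiling on AUTO, optionally steered by a performance hint (IR path is a placeholder).
import openvino as ov

core = ov.Core()
model = core.read_model("model/classification.xml")

# AUTO picks the most suitable device once, at compile time (GPU if available, CPU as fallback).
compiled_model = core.compile_model(model, device_name="AUTO")

# A hint can bias the configuration, e.g. toward low latency:
latency_compiled = core.compile_model(model, device_name="AUTO", config={"PERFORMANCE_HINT": "LATENCY"})

# Which device AUTO actually chose (property availability depends on the OpenVINO version):
print("Execution devices:", compiled_model.get_property("EXECUTION_DEVICES"))
```
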
notebooks/107-speech-recognition-quantization/107-speech-recognition-quantization-data2vec.ipynb (+3, -1)

@@ -7,6 +7,8 @@
 "metadata": {},
 "source": [
 "# Quantize Data2Vec Speech Recognition Model using NNCF PTQ API\n",
+"\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/speech-recognition-quantization/speech-recognition-quantization-data2vec.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "This tutorial demonstrates how to use the NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline) to optimize the speech recognition model, known as [Data2Vec](https://arxiv.org/abs/2202.03555) for the high-speed inference via OpenVINO™ Toolkit. This notebook uses a fine-tuned [data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) [PyTorch](https://pytorch.org/) model trained on the [LibriSpeech ASR corpus](https://www.openslr.org/12). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
 "\n",
 "- Download and prepare model.\n",
@@ -1122,4 +1124,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

notebooks/107-speech-recognition-quantization/107-speech-recognition-quantization-wav2vec2.ipynb (+3, -1)

@@ -9,6 +9,8 @@
 },
 "source": [
 "# Quantize Wav2Vec Speech Recognition Model using NNCF PTQ API\n",
+"\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/speech-recognition-quantization/speech-recognition-quantization-wav2vec2.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
 "This tutorial demonstrates how to apply `INT8` quantization to the speech recognition model, known as [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2), using the NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline). This notebook uses a fine-tuned [Wav2Vec2-Base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) [PyTorch](https://pytorch.org/) model trained on the [LibriSpeech ASR corpus](https://www.openslr.org/12). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
 "\n",
 "- Download and prepare the Wav2Vec2 model and LibriSpeech dataset.\n",
@@ -968,4 +970,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

notebooks/108-gpu-device/108-gpu-device.ipynb (+3, -1)

@@ -10,6 +10,8 @@
 "source": [
 "# Working with GPUs in OpenVINO™\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/gpu-device/gpu-device.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "\n",
 "#### Table of contents:\n",
 "\n",
@@ -1990,4 +1992,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
+}

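Discovering and targeting a GPU uses the same `Core` API shown earlier. A sketch, assuming an Intel GPU with drivers installed and a placeholder IR path:

```python
# Sketch: discovering a GPU and compiling a model on it (assumes Intel GPU drivers are installed).
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU'] with an integrated GPU

if any(name.startswith("GPU") for name in core.available_devices):
    print("GPU:", core.get_property("GPU", "FULL_DEVICE_NAME"))
    model = core.read_model("model/classification.xml")  # placeholder IR
    compiled_model = core.compile_model(model, device_name="GPU")
```
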
notebooks/109-performance-tricks/109-latency-tricks.ipynb (+2)

@@ -6,6 +6,8 @@
 "source": [
 "# Performance tricks in OpenVINO for latency mode\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/performance-tricks/latency-tricks.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a latency mode. Low latency is especially desired in real-time applications when the results are needed as soon as possible after the data appears. This notebook assumes computer vision workflow and uses [YOLOv5n](https://github.com/ultralytics/yolov5) model. We will simulate a camera application that provides frames one by one.\n",
 "\n",
 "The performance tips applied in this notebook could be summarized in the following figure. Some of the steps below can be applied to any device at any stage, e.g., `shared_memory`; some can be used only to specific devices, e.g., `INFERENCE_NUM_THREADS` to CPU. As the number of potential configurations is vast, we recommend looking at the steps below and then apply a trial-and-error approach. You can incorporate many hints simultaneously, like more inference threads + shared memory. It should give even better performance, but we recommend testing it anyway.\n",

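Of the latency tips summarized in that notebook, the simplest to illustrate are the `LATENCY` performance hint and reusing a single inference request per frame. A sketch with a placeholder model path and a synthetic frame:

```python
# Sketch: LATENCY performance hint and a single reused InferRequest (model path and frame are synthetic).
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model/yolov5n.xml")  # placeholder IR
compiled_model = core.compile_model(model, device_name="CPU", config={"PERFORMANCE_HINT": "LATENCY"})

request = compiled_model.create_infer_request()  # created once, reused for every frame
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in for a camera frame
request.infer({0: frame})
result = request.get_output_tensor(0).data
print("Output shape:", result.shape)
```
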
notebooks/109-performance-tricks/109-throughput-tricks.ipynb (+2)

@@ -11,6 +11,8 @@
 "source": [
 "# Performance tricks in OpenVINO for throughput mode\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/performance-tricks/throughput-tricks.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a throughput mode. High throughput is especially desired in applications when the results are not expected to appear as soon as possible but to lower the whole processing time. This notebook assumes computer vision workflow and uses [YOLOv5n](https://github.com/ultralytics/yolov5) model. We will simulate a video processing application that has access to all frames at once (e.g. video editing).\n",
 "\n",
 "The performance tips applied in this notebook could be summarized in the following figure. Some of the steps below can be applied to any device at any stage, e.g., batch size; some can be used only to specific devices, e.g., inference threads number to CPU. As the number of potential configurations is vast, we recommend looking at the steps below and then apply a trial-and-error approach. You can incorporate many hints simultaneously, like more inference threads + async processing. It should give even better performance, but we recommend testing it anyway.\n",

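For throughput mode, the `THROUGHPUT` hint combined with `AsyncInferQueue` keeps several requests in flight, which is the async-processing tip that notebook mentions. A sketch with a placeholder model path and synthetic frames:

```python
# Sketch: THROUGHPUT hint plus AsyncInferQueue to keep several requests in flight (all inputs synthetic).
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model/yolov5n.xml")  # placeholder IR
compiled_model = core.compile_model(model, device_name="CPU", config={"PERFORMANCE_HINT": "THROUGHPUT"})

frames = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(8)]
results = [None] * len(frames)

def on_done(request, frame_id):
    # Callback runs when a request finishes; copy the output so the request can be reused.
    results[frame_id] = request.get_output_tensor(0).data.copy()

infer_queue = ov.AsyncInferQueue(compiled_model)  # queue size defaults to an optimal number of requests
infer_queue.set_callback(on_done)
for i, frame in enumerate(frames):
    infer_queue.start_async({0: frame}, userdata=i)
infer_queue.wait_all()
```
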
notebooks/110-ct-segmentation-quantize/110-ct-scan-live-inference.ipynb (+3, -1)

@@ -7,6 +7,8 @@
 "source": [
 "# Live Inference and Benchmark CT-scan Data with OpenVINO™\n",
 "\n",
+"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/ct-segmentation-quantize/ct-scan-live-inference.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
+"\n",
 "## Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 4 \n",
 "\n",
 "This tutorial is a part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to accelerate inference on a kidney segmentation model. The [UNet](https://arxiv.org/abs/1505.04597) model is trained from scratch, and the data is from [Kits19](https://github.com/neheller/kits19).\n",
@@ -664,4 +666,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
