diff --git a/assets/hub/datvuthanh_hybridnets.ipynb b/assets/hub/datvuthanh_hybridnets.ipynb
index aa4f8f2e1c41..1517b0be49bb 100644
--- a/assets/hub/datvuthanh_hybridnets.ipynb
+++ b/assets/hub/datvuthanh_hybridnets.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "be13aca4",
+ "id": "868bea22",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9f46b3a7",
+ "id": "e9f7793f",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "f079e0a6",
+ "id": "0b81925c",
"metadata": {},
"source": [
"## Model Description\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "89b63c9a",
+ "id": "a94c6da6",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
},
{
"cell_type": "markdown",
- "id": "eae528e6",
+ "id": "398e7a1e",
"metadata": {},
"source": [
"### Citation\n",
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8feaa77f",
+ "id": "baa685ca",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
index 317b79b59395..76ee24dd1e33 100644
--- a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
+++ b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b7487376",
+ "id": "0447313a",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "10ecf357",
+ "id": "3e8191f6",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "47970bce",
+ "id": "e2787c19",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "90383ad4",
+ "id": "8b3b55f5",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9465bd28",
+ "id": "7057f347",
"metadata": {},
"outputs": [],
"source": [
@@ -99,7 +99,7 @@
},
{
"cell_type": "markdown",
- "id": "fff056e9",
+ "id": "0365ed38",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
index 8a6b2e947d9f..b5915f46ebf1 100644
--- a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
+++ b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b50b472f",
+ "id": "d46eccce",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3be6e235",
+ "id": "f1f0a6cc",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "02d0e518",
+ "id": "e848e6af",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n",
@@ -45,7 +45,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "59dfedce",
+ "id": "53da09cc",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
- "id": "f93224ee",
+ "id": "ab7181b3",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
index 4f4270a101ae..9351946aedd4 100644
--- a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
+++ b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3303b222",
+ "id": "0d1fe0d1",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5856529b",
+ "id": "e4310126",
"metadata": {},
"outputs": [],
"source": [
@@ -44,7 +44,7 @@
},
{
"cell_type": "markdown",
- "id": "67904979",
+ "id": "fcb133b4",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b7f7549c",
+ "id": "bf3bada1",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
- "id": "181f0653",
+ "id": "8e3d6b01",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
index 3d60e23bb39a..02f63c523324 100644
--- a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "fbcdcee1",
+ "id": "62f0bb78",
"metadata": {},
"source": [
"# 3D ResNet\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7b207fa6",
+ "id": "5a4113bf",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "26212949",
+ "id": "9305bf4c",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6bdf14fd",
+ "id": "82943df4",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
},
{
"cell_type": "markdown",
- "id": "2b7f754c",
+ "id": "7c6f596d",
"metadata": {},
"source": [
"#### Setup\n",
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0b698375",
+ "id": "4eb84c15",
"metadata": {
"attributes": {
"classes": [
@@ -94,7 +94,7 @@
},
{
"cell_type": "markdown",
- "id": "645803c5",
+ "id": "9ef6cd6a",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2c9b7cbd",
+ "id": "b8825f8d",
"metadata": {},
"outputs": [],
"source": [
@@ -116,7 +116,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bfb7f25c",
+ "id": "d8f5aef4",
"metadata": {},
"outputs": [],
"source": [
@@ -131,7 +131,7 @@
},
{
"cell_type": "markdown",
- "id": "53d0c3a7",
+ "id": "e1df31aa",
"metadata": {},
"source": [
"#### Define input transform"
@@ -140,7 +140,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8535ca5a",
+ "id": "e6c9aa1a",
"metadata": {},
"outputs": [],
"source": [
@@ -174,7 +174,7 @@
},
{
"cell_type": "markdown",
- "id": "9f49044c",
+ "id": "8a2882d2",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -185,7 +185,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "32bbabec",
+ "id": "3661dc2a",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
},
{
"cell_type": "markdown",
- "id": "4ab43160",
+ "id": "67dcec2a",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -206,7 +206,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "99b863ce",
+ "id": "d17f4cd9",
"metadata": {},
"outputs": [],
"source": [
@@ -231,7 +231,7 @@
},
{
"cell_type": "markdown",
- "id": "29ccb8df",
+ "id": "98de972d",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -240,7 +240,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2fab8866",
+ "id": "15d186a1",
"metadata": {},
"outputs": [],
"source": [
@@ -259,7 +259,7 @@
},
{
"cell_type": "markdown",
- "id": "b92e70d2",
+ "id": "3a7bc9fd",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
index d63253949cf4..38f1f5d89dde 100644
--- a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "cc0e5ffd",
+ "id": "07c693bd",
"metadata": {},
"source": [
"# SlowFast\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e84bc51f",
+ "id": "5fdde47c",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "212de7ae",
+ "id": "c664e911",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9650eda",
+ "id": "3280026e",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "1b8873af",
+ "id": "87782004",
"metadata": {},
"source": [
"#### Setup\n",
@@ -76,7 +76,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "585878b2",
+ "id": "933e9feb",
"metadata": {
"attributes": {
"classes": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "deeb02db",
+ "id": "de50b728",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2491cecf",
+ "id": "cfee8c23",
"metadata": {},
"outputs": [],
"source": [
@@ -117,7 +117,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "59b527bd",
+ "id": "fe2b7832",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "923e4c66",
+ "id": "e507c782",
"metadata": {},
"source": [
"#### Define input transform"
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f3b39c8c",
+ "id": "360b93a3",
"metadata": {},
"outputs": [],
"source": [
@@ -198,7 +198,7 @@
},
{
"cell_type": "markdown",
- "id": "54c3fc5b",
+ "id": "5159094f",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -209,7 +209,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e6552ef8",
+ "id": "f5832f29",
"metadata": {},
"outputs": [],
"source": [
@@ -221,7 +221,7 @@
},
{
"cell_type": "markdown",
- "id": "59f143eb",
+ "id": "25d3f0fc",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -230,7 +230,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "129da338",
+ "id": "5af721e3",
"metadata": {},
"outputs": [],
"source": [
@@ -255,7 +255,7 @@
},
{
"cell_type": "markdown",
- "id": "a7cb1292",
+ "id": "ac16d1fb",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -264,7 +264,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ba6fbc5a",
+ "id": "a5db5a70",
"metadata": {},
"outputs": [],
"source": [
@@ -283,7 +283,7 @@
},
{
"cell_type": "markdown",
- "id": "7c697608",
+ "id": "b90281ae",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
index ff8f2af538fc..d7e4ea616b48 100644
--- a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "6c9f3e90",
+ "id": "b049e897",
"metadata": {},
"source": [
"# X3D\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e2d06f4d",
+ "id": "7a156f96",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "f7c96f47",
+ "id": "65623832",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "95f0f0fd",
+ "id": "1ebc55da",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "78e2996c",
+ "id": "7cfe9127",
"metadata": {},
"source": [
"#### Setup\n",
@@ -76,7 +76,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c5097500",
+ "id": "291ec28a",
"metadata": {},
"outputs": [],
"source": [
@@ -88,7 +88,7 @@
},
{
"cell_type": "markdown",
- "id": "8b7c5660",
+ "id": "143ce6f8",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d7a4ef8f",
+ "id": "03a3bc3e",
"metadata": {},
"outputs": [],
"source": [
@@ -110,7 +110,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a553052c",
+ "id": "84a1bcb7",
"metadata": {},
"outputs": [],
"source": [
@@ -125,7 +125,7 @@
},
{
"cell_type": "markdown",
- "id": "9397b8e6",
+ "id": "1be172c4",
"metadata": {},
"source": [
"#### Define input transform"
@@ -134,7 +134,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "115c6320",
+ "id": "db3326b0",
"metadata": {},
"outputs": [],
"source": [
@@ -187,7 +187,7 @@
},
{
"cell_type": "markdown",
- "id": "45bd254a",
+ "id": "7d3a5c92",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -198,7 +198,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1c30b35a",
+ "id": "32a7eaa6",
"metadata": {},
"outputs": [],
"source": [
@@ -210,7 +210,7 @@
},
{
"cell_type": "markdown",
- "id": "0287ad94",
+ "id": "b4d945e0",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -219,7 +219,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cc7926fa",
+ "id": "b76ebc32",
"metadata": {},
"outputs": [],
"source": [
@@ -244,7 +244,7 @@
},
{
"cell_type": "markdown",
- "id": "61cdab61",
+ "id": "78620584",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -253,7 +253,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f1840fd0",
+ "id": "faff05cf",
"metadata": {},
"outputs": [],
"source": [
@@ -272,7 +272,7 @@
},
{
"cell_type": "markdown",
- "id": "a25a5cb9",
+ "id": "718bd052",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
index b33ff58cacac..20e9d4595b9f 100644
--- a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
+++ b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "670ebe1f",
+ "id": "cb63bb7d",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "80ab417c",
+ "id": "dec3093e",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
- "id": "b7da32e5",
+ "id": "a012f198",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ac9e95d4",
+ "id": "4c217198",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d1d2d820",
+ "id": "b36c6b8b",
"metadata": {},
"outputs": [],
"source": [
@@ -107,7 +107,7 @@
},
{
"cell_type": "markdown",
- "id": "ca73949b",
+ "id": "42bc5500",
"metadata": {},
"source": [
"### Model Description\n",
@@ -144,7 +144,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "446961b4",
+ "id": "bbfbc264",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/huggingface_pytorch-transformers.ipynb b/assets/hub/huggingface_pytorch-transformers.ipynb
index d18c31ac914b..5a356cb89a75 100644
--- a/assets/hub/huggingface_pytorch-transformers.ipynb
+++ b/assets/hub/huggingface_pytorch-transformers.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "81ae177e",
+ "id": "c425e65c",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eb6eabfc",
+ "id": "7ed578e2",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "c4a23d2f",
+ "id": "256f9625",
"metadata": {},
"source": [
"# Usage\n",
@@ -86,7 +86,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0f75a2ec",
+ "id": "85cc6eff",
"metadata": {
"attributes": {
"classes": [
@@ -104,7 +104,7 @@
},
{
"cell_type": "markdown",
- "id": "c1ade47a",
+ "id": "01f27dbe",
"metadata": {},
"source": [
"## Models\n",
@@ -115,7 +115,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "35020511",
+ "id": "08e4b813",
"metadata": {
"attributes": {
"classes": [
@@ -138,7 +138,7 @@
},
{
"cell_type": "markdown",
- "id": "5def93e8",
+ "id": "b12f71cd",
"metadata": {},
"source": [
"## Models with a language modeling head\n",
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0f0bd37a",
+ "id": "51d4b026",
"metadata": {
"attributes": {
"classes": [
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "cd0159d5",
+ "id": "0ffc82fe",
"metadata": {},
"source": [
"## Models with a sequence classification head\n",
@@ -183,7 +183,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9ed53563",
+ "id": "17176b0a",
"metadata": {
"attributes": {
"classes": [
@@ -206,7 +206,7 @@
},
{
"cell_type": "markdown",
- "id": "884453be",
+ "id": "ad286dbb",
"metadata": {},
"source": [
"## Models with a question answering head\n",
@@ -217,7 +217,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b712db3e",
+ "id": "b283ebec",
"metadata": {
"attributes": {
"classes": [
@@ -240,7 +240,7 @@
},
{
"cell_type": "markdown",
- "id": "8aacd5d8",
+ "id": "c5ae540e",
"metadata": {},
"source": [
"## Configuration\n",
@@ -251,7 +251,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4bdf7bac",
+ "id": "70325029",
"metadata": {
"attributes": {
"classes": [
@@ -282,7 +282,7 @@
},
{
"cell_type": "markdown",
- "id": "ee27f0dc",
+ "id": "90c993c4",
"metadata": {},
"source": [
"# Example Usage\n",
@@ -295,7 +295,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8cc323a4",
+ "id": "3acc6c12",
"metadata": {},
"outputs": [],
"source": [
@@ -311,7 +311,7 @@
},
{
"cell_type": "markdown",
- "id": "7fdf49b3",
+ "id": "659acb77",
"metadata": {},
"source": [
"## Using `BertModel` to encode the input sentence in a sequence of last layer hidden-states"
@@ -320,7 +320,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3002da44",
+ "id": "9c8f8b09",
"metadata": {},
"outputs": [],
"source": [
@@ -339,7 +339,7 @@
},
{
"cell_type": "markdown",
- "id": "184e9aeb",
+ "id": "269440fb",
"metadata": {},
"source": [
"## Using `modelForMaskedLM` to predict a masked token with BERT"
@@ -348,7 +348,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0fe24204",
+ "id": "cd811f4f",
"metadata": {},
"outputs": [],
"source": [
@@ -370,7 +370,7 @@
},
{
"cell_type": "markdown",
- "id": "e7b731e9",
+ "id": "264b86ea",
"metadata": {},
"source": [
"## Using `modelForQuestionAnswering` to do question answering with BERT"
@@ -379,7 +379,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9624778a",
+ "id": "ef35549b",
"metadata": {},
"outputs": [],
"source": [
@@ -409,7 +409,7 @@
},
{
"cell_type": "markdown",
- "id": "53ea7ec6",
+ "id": "2fe997fe",
"metadata": {},
"source": [
"## Using `modelForSequenceClassification` to do paraphrase classification with BERT"
@@ -418,7 +418,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fef48ffd",
+ "id": "20b72abc",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/hustvl_yolop.ipynb b/assets/hub/hustvl_yolop.ipynb
index 174c0e82d95b..85d7b0590f7e 100644
--- a/assets/hub/hustvl_yolop.ipynb
+++ b/assets/hub/hustvl_yolop.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "9aabbae2",
+ "id": "69a394b1",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -23,7 +23,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3da4c427",
+ "id": "72329db0",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "7adecc0f",
+ "id": "03307f09",
"metadata": {},
"source": [
"## YOLOP: You Only Look Once for Panoptic driving Perception\n",
@@ -132,7 +132,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "023e7caf",
+ "id": "68482bb6",
"metadata": {},
"outputs": [],
"source": [
@@ -148,7 +148,7 @@
},
{
"cell_type": "markdown",
- "id": "a72972f2",
+ "id": "8626677e",
"metadata": {},
"source": [
"### Citation\n",
diff --git a/assets/hub/intelisl_midas_v2.ipynb b/assets/hub/intelisl_midas_v2.ipynb
index 6ae9b1f6d527..fc4c3b06cbc6 100644
--- a/assets/hub/intelisl_midas_v2.ipynb
+++ b/assets/hub/intelisl_midas_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b2f2c602",
+ "id": "baabd884",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -32,7 +32,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "259d0eef",
+ "id": "b54a1dfb",
"metadata": {
"attributes": {
"classes": [
@@ -48,7 +48,7 @@
},
{
"cell_type": "markdown",
- "id": "2375f1d4",
+ "id": "edf38973",
"metadata": {},
"source": [
"### Example Usage\n",
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7ec5c8b3",
+ "id": "093a23e9",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "e286c9fc",
+ "id": "c0d79643",
"metadata": {},
"source": [
"Load a model (see [https://github.com/intel-isl/MiDaS/#Accuracy](https://github.com/intel-isl/MiDaS/#Accuracy) for an overview)"
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1c402dd3",
+ "id": "ed3e96fd",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "1b90c2d6",
+ "id": "51b84a2a",
"metadata": {},
"source": [
"Move model to GPU if available"
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c03108d9",
+ "id": "330f3889",
"metadata": {},
"outputs": [],
"source": [
@@ -117,7 +117,7 @@
},
{
"cell_type": "markdown",
- "id": "a0cb6378",
+ "id": "026e8e09",
"metadata": {},
"source": [
"Load transforms to resize and normalize the image for large or small model"
@@ -126,7 +126,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "793e9c28",
+ "id": "7878df74",
"metadata": {},
"outputs": [],
"source": [
@@ -140,7 +140,7 @@
},
{
"cell_type": "markdown",
- "id": "be636357",
+ "id": "d07c38cb",
"metadata": {},
"source": [
"Load image and apply transforms"
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0cc7bca0",
+ "id": "598cf090",
"metadata": {},
"outputs": [],
"source": [
@@ -161,7 +161,7 @@
},
{
"cell_type": "markdown",
- "id": "74f18555",
+ "id": "f29a6900",
"metadata": {},
"source": [
"Predict and resize to original resolution"
@@ -170,7 +170,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9b7fa74",
+ "id": "0f4cd28d",
"metadata": {},
"outputs": [],
"source": [
@@ -189,7 +189,7 @@
},
{
"cell_type": "markdown",
- "id": "e5bb10a4",
+ "id": "d37ccab2",
"metadata": {},
"source": [
"Show result"
@@ -198,7 +198,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4a38bec8",
+ "id": "6b7d663f",
"metadata": {},
"outputs": [],
"source": [
@@ -208,7 +208,7 @@
},
{
"cell_type": "markdown",
- "id": "d28847b4",
+ "id": "24b033ed",
"metadata": {},
"source": [
"### References\n",
@@ -222,7 +222,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8bdca388",
+ "id": "d1755244",
"metadata": {
"attributes": {
"classes": [
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b3a07134",
+ "id": "40e69023",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
index a9c9519365db..f74695760575 100644
--- a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
+++ b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "f56c64f8",
+ "id": "d28c437b",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2825c341",
+ "id": "c5cad086",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "eb955805",
+ "id": "ed99d3da",
"metadata": {},
"source": [
"Loads a U-Net model pre-trained for abnormality segmentation on a dataset of brain MRI volumes [kaggle.com/mateuszbuda/lgg-mri-segmentation](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation)\n",
@@ -57,7 +57,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0e997833",
+ "id": "4c95175f",
"metadata": {},
"outputs": [],
"source": [
@@ -71,7 +71,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7bf2d46d",
+ "id": "48f168dc",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
},
{
"cell_type": "markdown",
- "id": "c1d12643",
+ "id": "8e264c2b",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
index f03995ee23ed..c8d012effc72 100644
--- a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
+++ b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8d54960f",
+ "id": "dbb53349",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5a07cdea",
+ "id": "e1095550",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "1ac6e5c5",
+ "id": "65d39125",
"metadata": {},
"source": [
"### Example Usage"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e35c3062",
+ "id": "92e23f4a",
"metadata": {},
"outputs": [],
"source": [
@@ -78,7 +78,7 @@
},
{
"cell_type": "markdown",
- "id": "e433802e",
+ "id": "b4ac6bef",
"metadata": {},
"source": [
"### Model Description\n",
@@ -91,7 +91,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "654a2734",
+ "id": "15fd0711",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
index a4bf1cd82e19..1a8c83d525fd 100644
--- a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "520b2361",
+ "id": "beb104b4",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cd243319",
+ "id": "1a2f4e8f",
"metadata": {},
"outputs": [],
"source": [
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e239cb1c",
+ "id": "c46ca521",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +73,7 @@
},
{
"cell_type": "markdown",
- "id": "bb06f4e6",
+ "id": "3c20221b",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset.\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bfba0d38",
+ "id": "b7e9be81",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "a540bebd",
+ "id": "d28725f3",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "35adbdca",
+ "id": "6a9d132f",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "b7ab8e08",
+ "id": "0e58cfd4",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e00e1d49",
+ "id": "559b9b73",
"metadata": {},
"outputs": [],
"source": [
@@ -153,7 +153,7 @@
},
{
"cell_type": "markdown",
- "id": "c8547894",
+ "id": "b4bf5406",
"metadata": {},
"source": [
"Display the result."
@@ -162,7 +162,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c634c6b9",
+ "id": "15d8186b",
"metadata": {},
"outputs": [],
"source": [
@@ -176,7 +176,7 @@
},
{
"cell_type": "markdown",
- "id": "26ef5513",
+ "id": "98fd30e9",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
index 76f026279181..f3081035c75c 100644
--- a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8a2ce901",
+ "id": "febcd650",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5bb4b4bd",
+ "id": "c03b60c6",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4e01854c",
+ "id": "9d07b76f",
"metadata": {},
"outputs": [],
"source": [
@@ -82,7 +82,7 @@
},
{
"cell_type": "markdown",
- "id": "86a7ff5d",
+ "id": "f3afda4a",
"metadata": {},
"source": [
"Download and setup FastPitch generator model."
@@ -91,7 +91,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "84ac5ed0",
+ "id": "484ceb08",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
},
{
"cell_type": "markdown",
- "id": "9921cb7e",
+ "id": "62037d68",
"metadata": {},
"source": [
"Download and setup vocoder and denoiser models."
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "77c50607",
+ "id": "b01d487c",
"metadata": {},
"outputs": [],
"source": [
@@ -118,7 +118,7 @@
},
{
"cell_type": "markdown",
- "id": "a87c7a8f",
+ "id": "f09afb3b",
"metadata": {},
"source": [
"Verify that generator and vocoder models agree on input parameters."
@@ -127,7 +127,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "67685c14",
+ "id": "876b9cf0",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "f9cce5d6",
+ "id": "5eeb266a",
"metadata": {},
"source": [
"Put all models on available device."
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ec1fe7be",
+ "id": "816044d8",
"metadata": {},
"outputs": [],
"source": [
@@ -167,7 +167,7 @@
},
{
"cell_type": "markdown",
- "id": "f079e4e3",
+ "id": "9b69acb6",
"metadata": {},
"source": [
"Load text processor."
@@ -176,7 +176,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "84fbaace",
+ "id": "5d4f687e",
"metadata": {},
"outputs": [],
"source": [
@@ -185,7 +185,7 @@
},
{
"cell_type": "markdown",
- "id": "a8886ea4",
+ "id": "7f47fac9",
"metadata": {},
"source": [
"Set the text to be synthetized, prepare input and set additional generation parameters."
@@ -194,7 +194,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3e9b8e1f",
+ "id": "95db3c02",
"metadata": {},
"outputs": [],
"source": [
@@ -204,7 +204,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f42415f8",
+ "id": "85059a5e",
"metadata": {},
"outputs": [],
"source": [
@@ -214,7 +214,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2ed01e81",
+ "id": "e849f049",
"metadata": {},
"outputs": [],
"source": [
@@ -228,7 +228,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "aa2d9853",
+ "id": "813ed2e1",
"metadata": {},
"outputs": [],
"source": [
@@ -242,7 +242,7 @@
},
{
"cell_type": "markdown",
- "id": "90e75324",
+ "id": "47c87952",
"metadata": {},
"source": [
"Plot the intermediate spectorgram."
@@ -251,7 +251,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9aa1c11",
+ "id": "cfb34f45",
"metadata": {},
"outputs": [],
"source": [
@@ -265,7 +265,7 @@
},
{
"cell_type": "markdown",
- "id": "65bce652",
+ "id": "e1f37465",
"metadata": {},
"source": [
"Syntesize audio."
@@ -274,7 +274,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0cacbd1b",
+ "id": "d6a7d331",
"metadata": {},
"outputs": [],
"source": [
@@ -284,7 +284,7 @@
},
{
"cell_type": "markdown",
- "id": "ce024e90",
+ "id": "ca1367c0",
"metadata": {},
"source": [
"Write audio to wav file."
@@ -293,7 +293,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9b25e8eb",
+ "id": "7a44facb",
"metadata": {},
"outputs": [],
"source": [
@@ -303,7 +303,7 @@
},
{
"cell_type": "markdown",
- "id": "a06620ef",
+ "id": "38520f43",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
index 2d688c0f7c70..a2d6c981c49e 100644
--- a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "abafe6d9",
+ "id": "6dd4b12e",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3741dd9c",
+ "id": "c85d778f",
"metadata": {},
"outputs": [],
"source": [
@@ -45,7 +45,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bd053adb",
+ "id": "d1f0059c",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +73,7 @@
},
{
"cell_type": "markdown",
- "id": "72541389",
+ "id": "ffdbcf3e",
"metadata": {},
"source": [
"### Load Pretrained model\n",
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eae64ca5",
+ "id": "05e6d3be",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
},
{
"cell_type": "markdown",
- "id": "f9873a08",
+ "id": "5abff2f3",
"metadata": {},
"source": [
"### Prepare inference data\n",
@@ -123,7 +123,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "67e82fae",
+ "id": "47a71175",
"metadata": {},
"outputs": [],
"source": [
@@ -146,7 +146,7 @@
},
{
"cell_type": "markdown",
- "id": "c12cbd6f",
+ "id": "ce19b179",
"metadata": {},
"source": [
"### Run inference\n",
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "95322de4",
+ "id": "6e4a690f",
"metadata": {},
"outputs": [],
"source": [
@@ -168,7 +168,7 @@
},
{
"cell_type": "markdown",
- "id": "e86098ad",
+ "id": "6cd68df9",
"metadata": {},
"source": [
"### Display result"
@@ -177,7 +177,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b56160f6",
+ "id": "6574a5fc",
"metadata": {},
"outputs": [],
"source": [
@@ -191,7 +191,7 @@
},
{
"cell_type": "markdown",
- "id": "f9b9fe82",
+ "id": "327fab94",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
index a088b6c7f44d..7e48c580a718 100644
--- a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8b2b7536",
+ "id": "037b7fe5",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -44,7 +44,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a0c5ae6c",
+ "id": "2a7b35d3",
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e5ed017a",
+ "id": "2418d054",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "5e517f53",
+ "id": "e7769b06",
"metadata": {},
"source": [
"Download and setup FastPitch generator model."
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e2c38bf9",
+ "id": "29f8ce0d",
"metadata": {},
"outputs": [],
"source": [
@@ -93,7 +93,7 @@
},
{
"cell_type": "markdown",
- "id": "fd1aec5f",
+ "id": "8da4dfb8",
"metadata": {},
"source": [
"Download and setup vocoder and denoiser models."
@@ -102,7 +102,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fb242647",
+ "id": "e577f81a",
"metadata": {},
"outputs": [],
"source": [
@@ -111,7 +111,7 @@
},
{
"cell_type": "markdown",
- "id": "2032a756",
+ "id": "649f6645",
"metadata": {},
"source": [
"Verify that generator and vocoder models agree on input parameters."
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2e40804f",
+ "id": "9eeb264c",
"metadata": {},
"outputs": [],
"source": [
@@ -140,7 +140,7 @@
},
{
"cell_type": "markdown",
- "id": "b678b4d3",
+ "id": "73632d41",
"metadata": {},
"source": [
"Put all models on available device."
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c3ceee69",
+ "id": "e349991a",
"metadata": {},
"outputs": [],
"source": [
@@ -160,7 +160,7 @@
},
{
"cell_type": "markdown",
- "id": "031a40e4",
+ "id": "036a62bd",
"metadata": {},
"source": [
"Load text processor."
@@ -169,7 +169,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0a0a74c9",
+ "id": "80416462",
"metadata": {},
"outputs": [],
"source": [
@@ -178,7 +178,7 @@
},
{
"cell_type": "markdown",
- "id": "5ecafb6b",
+ "id": "928919b0",
"metadata": {},
"source": [
"Set the text to be synthetized, prepare input and set additional generation parameters."
@@ -187,7 +187,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b36eaec8",
+ "id": "03d13092",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6aa61b6e",
+ "id": "bea36a40",
"metadata": {},
"outputs": [],
"source": [
@@ -207,7 +207,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "02960423",
+ "id": "5bb42456",
"metadata": {},
"outputs": [],
"source": [
@@ -221,7 +221,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "471d77cd",
+ "id": "dc0b6a33",
"metadata": {},
"outputs": [],
"source": [
@@ -235,7 +235,7 @@
},
{
"cell_type": "markdown",
- "id": "7b51eed3",
+ "id": "20fd75be",
"metadata": {},
"source": [
"Plot the intermediate spectorgram."
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "df475a60",
+ "id": "77a0cde4",
"metadata": {},
"outputs": [],
"source": [
@@ -258,7 +258,7 @@
},
{
"cell_type": "markdown",
- "id": "8fd90fbe",
+ "id": "c69aa42b",
"metadata": {},
"source": [
"Syntesize audio."
@@ -267,7 +267,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dcd22a2d",
+ "id": "a7029235",
"metadata": {},
"outputs": [],
"source": [
@@ -277,7 +277,7 @@
},
{
"cell_type": "markdown",
- "id": "67a464e8",
+ "id": "fb5893c2",
"metadata": {},
"source": [
"Write audio to wav file."
@@ -286,7 +286,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0b5fb145",
+ "id": "a26825e8",
"metadata": {},
"outputs": [],
"source": [
@@ -296,7 +296,7 @@
},
{
"cell_type": "markdown",
- "id": "8ab51e50",
+ "id": "a5ba1778",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
index 964dcef6087c..6da0acec61a4 100644
--- a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "2b9a8e78",
+ "id": "ffd95949",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -44,7 +44,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "525961f2",
+ "id": "6701ad46",
"metadata": {},
"outputs": [],
"source": [
@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "abc8cd95",
+ "id": "20cfaed8",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "e4b02d94",
+ "id": "880a8198",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset."
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "60e8ad14",
+ "id": "e08d72d7",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "c69a137b",
+ "id": "5722da09",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -105,7 +105,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0352a55b",
+ "id": "5ea80436",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "8c1bccdd",
+ "id": "59201f1f",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probably hypothesis according to the model."
@@ -132,7 +132,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8ba57db9",
+ "id": "5aae8389",
"metadata": {},
"outputs": [],
"source": [
@@ -144,7 +144,7 @@
},
{
"cell_type": "markdown",
- "id": "5291605d",
+ "id": "df0535a4",
"metadata": {},
"source": [
"Display the result."
@@ -153,7 +153,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f89d4a20",
+ "id": "d54344c7",
"metadata": {},
"outputs": [],
"source": [
@@ -167,7 +167,7 @@
},
{
"cell_type": "markdown",
- "id": "47257054",
+ "id": "48b37162",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
index a47fba4ef2d0..0294baabdcf9 100644
--- a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a9c13ee4",
+ "id": "23d80fa9",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2489f6ab",
+ "id": "fc1e0b0b",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a1df13f7",
+ "id": "68f04d16",
"metadata": {},
"outputs": [],
"source": [
@@ -84,7 +84,7 @@
},
{
"cell_type": "markdown",
- "id": "7dbb3425",
+ "id": "02eda6d2",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset."
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "173c9599",
+ "id": "6d7522e9",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "25265ee7",
+ "id": "6e4daed4",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4890a2d1",
+ "id": "9591f996",
"metadata": {},
"outputs": [],
"source": [
@@ -133,7 +133,7 @@
},
{
"cell_type": "markdown",
- "id": "936802c8",
+ "id": "f5b9aa03",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probably hypothesis according to the model."
@@ -142,7 +142,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "59794705",
+ "id": "60a12aa6",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "9e2c7e92",
+ "id": "fc775696",
"metadata": {},
"source": [
"Display the result."
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "13833ec7",
+ "id": "63bb05f8",
"metadata": {},
"outputs": [],
"source": [
@@ -177,7 +177,7 @@
},
{
"cell_type": "markdown",
- "id": "a1b8fc2b",
+ "id": "a5bacc25",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
index 2e96f2314e52..9330b8aba510 100644
--- a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "560a73b0",
+ "id": "0e6436a0",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d48786f5",
+ "id": "5b964c04",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "efe367e3",
+ "id": "d7e95e7c",
"metadata": {},
"outputs": [],
"source": [
@@ -84,7 +84,7 @@
},
{
"cell_type": "markdown",
- "id": "89fd57bd",
+ "id": "4a8b9403",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset."
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1eb18fea",
+ "id": "6d10571e",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "bc7016b1",
+ "id": "13fd4d3d",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1ba39568",
+ "id": "445648a5",
"metadata": {},
"outputs": [],
"source": [
@@ -133,7 +133,7 @@
},
{
"cell_type": "markdown",
- "id": "c01bbe33",
+ "id": "80638077",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -142,7 +142,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ab09db61",
+ "id": "ff974f88",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "23dd6ed5",
+ "id": "bf187e40",
"metadata": {},
"source": [
"Display the result."
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2125234a",
+ "id": "b6c436e5",
"metadata": {},
"outputs": [],
"source": [
@@ -177,7 +177,7 @@
},
{
"cell_type": "markdown",
- "id": "ea5df5be",
+ "id": "96fc7256",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
index 5e4a6e91299d..dbccddd71b99 100644
--- a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1f382b45",
+ "id": "5d76ed0b",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -56,7 +56,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c5314b39",
+ "id": "0890db8a",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
},
{
"cell_type": "markdown",
- "id": "70ac1618",
+ "id": "2a9e3db0",
"metadata": {},
"source": [
"Load an SSD model pretrained on COCO dataset, as well as a set of utility methods for convenient and comprehensive formatting of input and output of the model."
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eaac935c",
+ "id": "b8232b66",
"metadata": {},
"outputs": [],
"source": [
@@ -86,7 +86,7 @@
},
{
"cell_type": "markdown",
- "id": "b9cadf5d",
+ "id": "adbb229a",
"metadata": {},
"source": [
"Now, prepare the loaded model for inference"
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ab0c6129",
+ "id": "7f295ff4",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "2de1bd97",
+ "id": "e47e5a19",
"metadata": {},
"source": [
"Prepare input images for object detection.\n",
@@ -115,7 +115,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b7dc8806",
+ "id": "606e24dd",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "436185fe",
+ "id": "4a27c20c",
"metadata": {},
"source": [
"Format the images to comply with the network input and convert them to tensor."
@@ -137,7 +137,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "00df00af",
+ "id": "135a6770",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "0e2cc838",
+ "id": "e13032aa",
"metadata": {},
"source": [
"Run the SSD network to perform object detection."
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "703bd4be",
+ "id": "9a012604",
"metadata": {},
"outputs": [],
"source": [
@@ -166,7 +166,7 @@
},
{
"cell_type": "markdown",
- "id": "d0a6cf3d",
+ "id": "68d921cd",
"metadata": {},
"source": [
"By default, raw output from SSD network per input image contains\n",
@@ -177,7 +177,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "aa1bc676",
+ "id": "524fbe2f",
"metadata": {},
"outputs": [],
"source": [
@@ -187,7 +187,7 @@
},
{
"cell_type": "markdown",
- "id": "1aee258b",
+ "id": "dfdaf21f",
"metadata": {},
"source": [
"The model was trained on COCO dataset, which we need to access in order to translate class IDs into object names.\n",
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "18f1a4dd",
+ "id": "024d5502",
"metadata": {},
"outputs": [],
"source": [
@@ -206,7 +206,7 @@
},
{
"cell_type": "markdown",
- "id": "4c4b2262",
+ "id": "c2d243fb",
"metadata": {},
"source": [
"Finally, let's visualize our detections"
@@ -215,7 +215,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "17854d27",
+ "id": "73be430c",
"metadata": {},
"outputs": [],
"source": [
@@ -240,7 +240,7 @@
},
{
"cell_type": "markdown",
- "id": "252276be",
+ "id": "1ff6b463",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
index ae30d6fd8974..91f6281a7d8f 100644
--- a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "554f3346",
+ "id": "19eedf4e",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -41,7 +41,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c3a084eb",
+ "id": "f8b2e09d",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "8f3b4656",
+ "id": "9b4d297a",
"metadata": {},
"source": [
"Load the Tacotron2 model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) and prepare it for inference:"
@@ -62,7 +62,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9a06649",
+ "id": "e42e68fe",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
- "id": "9d9669b7",
+ "id": "38259b0e",
"metadata": {},
"source": [
"Load pretrained WaveGlow model"
@@ -83,7 +83,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c647c85c",
+ "id": "3e513cfb",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "fe1c3ef3",
+ "id": "38396ac6",
"metadata": {},
"source": [
"Now, let's make the model say:"
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e2ad062c",
+ "id": "73a84670",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
},
{
"cell_type": "markdown",
- "id": "02d3105e",
+ "id": "8ba96fe1",
"metadata": {},
"source": [
"Format the input using utility methods"
@@ -122,7 +122,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ceee9594",
+ "id": "7df9ef7f",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "b9ac9cc8",
+ "id": "a3426a60",
"metadata": {},
"source": [
"Run the chained models:"
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7f202f2d",
+ "id": "4f679ce8",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "66ae8eee",
+ "id": "a5f089cd",
"metadata": {},
"source": [
"You can write it to a file and listen to it"
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5b44a7e8",
+ "id": "28bed410",
"metadata": {},
"outputs": [],
"source": [
@@ -173,7 +173,7 @@
},
{
"cell_type": "markdown",
- "id": "3e31d945",
+ "id": "f13a3bea",
"metadata": {},
"source": [
"Alternatively, play it right away in a notebook with IPython widgets"
@@ -182,7 +182,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a16dbed",
+ "id": "69fd067a",
"metadata": {},
"outputs": [],
"source": [
@@ -192,7 +192,7 @@
},
{
"cell_type": "markdown",
- "id": "aa82e09f",
+ "id": "ff906a21",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
index e890b30289bc..2f6cbac035f3 100644
--- a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "ee2d4406",
+ "id": "a0e588e9",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -39,7 +39,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4d4589b5",
+ "id": "a2998b9d",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
- "id": "9fcba93e",
+ "id": "789eee70",
"metadata": {},
"source": [
"Load the WaveGlow model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/)"
@@ -60,7 +60,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f1603593",
+ "id": "92de58d9",
"metadata": {},
"outputs": [],
"source": [
@@ -70,7 +70,7 @@
},
{
"cell_type": "markdown",
- "id": "8bacbd3a",
+ "id": "45efdf24",
"metadata": {},
"source": [
"Prepare the WaveGlow model for inference"
@@ -79,7 +79,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9475bb1c",
+ "id": "79ae9d81",
"metadata": {},
"outputs": [],
"source": [
@@ -90,7 +90,7 @@
},
{
"cell_type": "markdown",
- "id": "18eb6745",
+ "id": "95950471",
"metadata": {},
"source": [
"Load a pretrained Tacotron2 model"
@@ -99,7 +99,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0d7b6f97",
+ "id": "a537b0ac",
"metadata": {},
"outputs": [],
"source": [
@@ -110,7 +110,7 @@
},
{
"cell_type": "markdown",
- "id": "4ad72966",
+ "id": "a663ac94",
"metadata": {},
"source": [
"Now, let's make the model say:"
@@ -119,7 +119,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f0772405",
+ "id": "cf3c1496",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "597b4d0a",
+ "id": "b29f09d8",
"metadata": {},
"source": [
"Format the input using utility methods"
@@ -137,7 +137,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b15fe69b",
+ "id": "5c96535c",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "73ec1320",
+ "id": "689f582b",
"metadata": {},
"source": [
"Run the chained models"
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "da9d4593",
+ "id": "5251a6f9",
"metadata": {},
"outputs": [],
"source": [
@@ -169,7 +169,7 @@
},
{
"cell_type": "markdown",
- "id": "4e682201",
+ "id": "758ffbf5",
"metadata": {},
"source": [
"You can write it to a file and listen to it"
@@ -178,7 +178,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6247a7e0",
+ "id": "72832bbb",
"metadata": {},
"outputs": [],
"source": [
@@ -188,7 +188,7 @@
},
{
"cell_type": "markdown",
- "id": "b74eec6b",
+ "id": "a2ea94e4",
"metadata": {},
"source": [
"Alternatively, play it right away in a notebook with IPython widgets"
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f47b49c8",
+ "id": "8f744c46",
"metadata": {},
"outputs": [],
"source": [
@@ -207,7 +207,7 @@
},
{
"cell_type": "markdown",
- "id": "bfa1e91f",
+ "id": "27beab4e",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/pytorch_fairseq_roberta.ipynb b/assets/hub/pytorch_fairseq_roberta.ipynb
index b7634a445769..439595ded191 100644
--- a/assets/hub/pytorch_fairseq_roberta.ipynb
+++ b/assets/hub/pytorch_fairseq_roberta.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3dad40b4",
+ "id": "788d0a71",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4a02def3",
+ "id": "7c268cb6",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "af69fc55",
+ "id": "1c4ad3b2",
"metadata": {},
"source": [
"### Example\n",
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "303d7635",
+ "id": "8ecd2d8a",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "8a6a0183",
+ "id": "9db6170b",
"metadata": {},
"source": [
"##### Apply Byte-Pair Encoding (BPE) to input text"
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "40dc4d3d",
+ "id": "0263f30d",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "96ad75fd",
+ "id": "ba941417",
"metadata": {},
"source": [
"##### Extract features from RoBERTa"
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9c195438",
+ "id": "e62a0345",
"metadata": {},
"outputs": [],
"source": [
@@ -120,7 +120,7 @@
},
{
"cell_type": "markdown",
- "id": "35bfcd82",
+ "id": "9315f4e1",
"metadata": {},
"source": [
"##### Use RoBERTa for sentence-pair classification tasks"
@@ -129,7 +129,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f966e727",
+ "id": "563b02d4",
"metadata": {},
"outputs": [],
"source": [
@@ -151,7 +151,7 @@
},
{
"cell_type": "markdown",
- "id": "81ba9f06",
+ "id": "6a74b2f1",
"metadata": {},
"source": [
"##### Register a new (randomly initialized) classification head"
@@ -160,7 +160,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "83b993b7",
+ "id": "24026907",
"metadata": {},
"outputs": [],
"source": [
@@ -170,7 +170,7 @@
},
{
"cell_type": "markdown",
- "id": "fd2ed275",
+ "id": "4e73ae11",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/pytorch_fairseq_translation.ipynb b/assets/hub/pytorch_fairseq_translation.ipynb
index 40d7ff1ebc7f..fa102013f2eb 100644
--- a/assets/hub/pytorch_fairseq_translation.ipynb
+++ b/assets/hub/pytorch_fairseq_translation.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "4c8e9536",
+ "id": "eed5fc3b",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -37,7 +37,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "931d3561",
+ "id": "fe6cbede",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
- "id": "6d798f1c",
+ "id": "48cbffac",
"metadata": {},
"source": [
"### English-to-French Translation\n",
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ccad8399",
+ "id": "da54527b",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
- "id": "56de2e31",
+ "id": "8d2837cb",
"metadata": {},
"source": [
"### English-to-German Translation\n",
@@ -123,7 +123,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "11143284",
+ "id": "45746db2",
"metadata": {},
"outputs": [],
"source": [
@@ -142,7 +142,7 @@
},
{
"cell_type": "markdown",
- "id": "a6f4673a",
+ "id": "825dd33e",
"metadata": {},
"source": [
"We can also do a round-trip translation to create a paraphrase:"
@@ -151,7 +151,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cd1cdcb5",
+ "id": "18e2ba02",
"metadata": {},
"outputs": [],
"source": [
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "7760e0c9",
+ "id": "bec981f2",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/pytorch_vision_alexnet.ipynb b/assets/hub/pytorch_vision_alexnet.ipynb
index 008c5448b964..f99f5523b759 100644
--- a/assets/hub/pytorch_vision_alexnet.ipynb
+++ b/assets/hub/pytorch_vision_alexnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "476959cb",
+ "id": "4d34690f",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "db43dde0",
+ "id": "dbc39722",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "a5132d26",
+ "id": "b8f5c60e",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cc08defc",
+ "id": "c02a39c8",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f6ff64dc",
+ "id": "910a2e6a",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1c4c2653",
+ "id": "71c8425c",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3940b165",
+ "id": "efa48e6f",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "d6f8ebee",
+ "id": "90025897",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
index d04fd8924d6e..89e125880f38 100644
--- a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
+++ b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1f1269d4",
+ "id": "f05f98c4",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5af48e28",
+ "id": "969f3efa",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "538f054d",
+ "id": "c51ddbf4",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "15108b2e",
+ "id": "1dfbe50d",
"metadata": {},
"outputs": [],
"source": [
@@ -68,7 +68,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "58a4298f",
+ "id": "cd4c96cb",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "eeef1bfa",
+ "id": "01dfd431",
"metadata": {},
"source": [
"The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n",
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "78d079ba",
+ "id": "3a96dc7a",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "5c4f6d2c",
+ "id": "135d5987",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_densenet.ipynb b/assets/hub/pytorch_vision_densenet.ipynb
index b20ae36a382f..71b0a4cce7c4 100644
--- a/assets/hub/pytorch_vision_densenet.ipynb
+++ b/assets/hub/pytorch_vision_densenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "2310947a",
+ "id": "6789ed2e",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a9533770",
+ "id": "1f9ffff6",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "d1c886d9",
+ "id": "75c2cd89",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b0f0ee8e",
+ "id": "a7dd4a27",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c9eed5d6",
+ "id": "7def2030",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9e446049",
+ "id": "cd2e231a",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6c43e6ed",
+ "id": "8504e182",
"metadata": {},
"outputs": [],
"source": [
@@ -127,7 +127,7 @@
},
{
"cell_type": "markdown",
- "id": "53707577",
+ "id": "a0035580",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_fcn_resnet101.ipynb b/assets/hub/pytorch_vision_fcn_resnet101.ipynb
index 9374fe082fe2..a840f07aac68 100644
--- a/assets/hub/pytorch_vision_fcn_resnet101.ipynb
+++ b/assets/hub/pytorch_vision_fcn_resnet101.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "02094174",
+ "id": "c6bfef42",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3265aa13",
+ "id": "707dde8a",
"metadata": {},
"outputs": [],
"source": [
@@ -37,7 +37,7 @@
},
{
"cell_type": "markdown",
- "id": "79369d78",
+ "id": "c278044e",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d2faffda",
+ "id": "09a4ccb2",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3139ff37",
+ "id": "2dbcce32",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "42b2b9f1",
+ "id": "cb2187f7",
"metadata": {},
"source": [
"The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n",
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c0b43cf6",
+ "id": "8ba2f504",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "b120a511",
+ "id": "ac15ff3e",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_ghostnet.ipynb b/assets/hub/pytorch_vision_ghostnet.ipynb
index 43a0fc4bdf40..699bf96954f0 100644
--- a/assets/hub/pytorch_vision_ghostnet.ipynb
+++ b/assets/hub/pytorch_vision_ghostnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "f8bf1396",
+ "id": "5674d702",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dde62de1",
+ "id": "7ceabdaa",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "4cf34fbf",
+ "id": "ca60e255",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5c885bd6",
+ "id": "f771aaa4",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "209b1dcf",
+ "id": "42d9acb6",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "33e1b9fd",
+ "id": "b69c99e7",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "26ef2198",
+ "id": "b93b9632",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "15579d5e",
+ "id": "19532973",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_googlenet.ipynb b/assets/hub/pytorch_vision_googlenet.ipynb
index 687380b94de3..4e3e51b2c9fb 100644
--- a/assets/hub/pytorch_vision_googlenet.ipynb
+++ b/assets/hub/pytorch_vision_googlenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "653ba58a",
+ "id": "3b90002c",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a88841f",
+ "id": "c2e2ac53",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "4f868089",
+ "id": "25b182cd",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a45f5ed6",
+ "id": "bb6c66cf",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b36e1a87",
+ "id": "46436ec1",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "262834f2",
+ "id": "daacd46b",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eaa7338a",
+ "id": "f23e780e",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "5388f4b9",
+ "id": "41b8424d",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_hardnet.ipynb b/assets/hub/pytorch_vision_hardnet.ipynb
index 3c1c7c770375..4661e2a6719a 100644
--- a/assets/hub/pytorch_vision_hardnet.ipynb
+++ b/assets/hub/pytorch_vision_hardnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "24660a73",
+ "id": "8430c810",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f1c8db18",
+ "id": "c9bc24e8",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "7d7f0ed0",
+ "id": "d0c9eb10",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c57175d8",
+ "id": "6dedef27",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "79bf823d",
+ "id": "dcd14229",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fbead03d",
+ "id": "d148585e",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d9fc5f85",
+ "id": "519e7436",
"metadata": {},
"outputs": [],
"source": [
@@ -127,7 +127,7 @@
},
{
"cell_type": "markdown",
- "id": "f907cd48",
+ "id": "d6a84b3b",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_ibnnet.ipynb b/assets/hub/pytorch_vision_ibnnet.ipynb
index 7cab575eae1f..5b8eeb9111d9 100644
--- a/assets/hub/pytorch_vision_ibnnet.ipynb
+++ b/assets/hub/pytorch_vision_ibnnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "cdcbede3",
+ "id": "55b2c054",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "64579ff1",
+ "id": "778ec43e",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "e32117fd",
+ "id": "04a14351",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dff7873f",
+ "id": "139985fd",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fbe50e55",
+ "id": "ab642401",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6158a71a",
+ "id": "86ab4cd8",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "05b44f50",
+ "id": "192bd6fd",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "3fb192b1",
+ "id": "ad1b8c08",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_inception_v3.ipynb b/assets/hub/pytorch_vision_inception_v3.ipynb
index 3f0fe5440663..e0744e705bf9 100644
--- a/assets/hub/pytorch_vision_inception_v3.ipynb
+++ b/assets/hub/pytorch_vision_inception_v3.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "c8b3329c",
+ "id": "53b1da46",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f9a832f4",
+ "id": "25de85ce",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "79143d52",
+ "id": "1fb226bc",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "61d4f63e",
+ "id": "4545e672",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "80861a0a",
+ "id": "3c8d69d9",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7e32deb0",
+ "id": "ca694506",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d3b4f451",
+ "id": "790f622c",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "41f3553a",
+ "id": "10ac82e4",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_meal_v2.ipynb b/assets/hub/pytorch_vision_meal_v2.ipynb
index aa396f170f64..2b648e3e5f3d 100644
--- a/assets/hub/pytorch_vision_meal_v2.ipynb
+++ b/assets/hub/pytorch_vision_meal_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "77879216",
+ "id": "f5517f6b",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -27,7 +27,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "87b9c18e",
+ "id": "a9190500",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "430eb3e3",
+ "id": "3ea3bcc4",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
- "id": "54dc27d3",
+ "id": "f0ad64b6",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -65,7 +65,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e819328a",
+ "id": "900fd1fa",
"metadata": {},
"outputs": [],
"source": [
@@ -79,7 +79,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a3e748a5",
+ "id": "43835ceb",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c71c1bf2",
+ "id": "22232d84",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "061ec52c",
+ "id": "8efcc66a",
"metadata": {},
"outputs": [],
"source": [
@@ -139,7 +139,7 @@
},
{
"cell_type": "markdown",
- "id": "e4970c25",
+ "id": "0515c396",
"metadata": {},
"source": [
"### Model Description\n",
@@ -167,7 +167,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a0feae70",
+ "id": "73e4c4ab",
"metadata": {},
"outputs": [],
"source": [
@@ -181,7 +181,7 @@
},
{
"cell_type": "markdown",
- "id": "312df7cb",
+ "id": "5dd71b00",
"metadata": {},
"source": [
"@inproceedings{shen2019MEAL,\n",
diff --git a/assets/hub/pytorch_vision_mobilenet_v2.ipynb b/assets/hub/pytorch_vision_mobilenet_v2.ipynb
index 4ca50ee5998e..db3865911772 100644
--- a/assets/hub/pytorch_vision_mobilenet_v2.ipynb
+++ b/assets/hub/pytorch_vision_mobilenet_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3bf6503d",
+ "id": "cf1fe1cb",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0b97488d",
+ "id": "749a36d7",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "fd34873b",
+ "id": "801b671c",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1ef9150b",
+ "id": "d60a8dd9",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4837c4ad",
+ "id": "514b9e7d",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6365de4c",
+ "id": "4617d031",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fc20b644",
+ "id": "2895e34a",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "26377095",
+ "id": "57402cc0",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_once_for_all.ipynb b/assets/hub/pytorch_vision_once_for_all.ipynb
index 871ee6ca210e..aec66cd67a31 100644
--- a/assets/hub/pytorch_vision_once_for_all.ipynb
+++ b/assets/hub/pytorch_vision_once_for_all.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "280cf46c",
+ "id": "ca8ba23d",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -29,7 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "00d6dbb9",
+ "id": "1d7fbdda",
"metadata": {},
"outputs": [],
"source": [
@@ -45,7 +45,7 @@
},
{
"cell_type": "markdown",
- "id": "70be32a3",
+ "id": "565d2a26",
"metadata": {},
"source": [
"| OFA Network | Design Space | Resolution | Width Multiplier | Depth | Expand Ratio | kernel Size | \n",
@@ -62,7 +62,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "82bdeb07",
+ "id": "7757251d",
"metadata": {},
"outputs": [],
"source": [
@@ -77,7 +77,7 @@
},
{
"cell_type": "markdown",
- "id": "ee77baeb",
+ "id": "3f573fcb",
"metadata": {},
"source": [
"### Get Specialized Architecture"
@@ -86,7 +86,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ba970e58",
+ "id": "20dacc3a",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
- "id": "e11a1cea",
+ "id": "30a70052",
"metadata": {},
"source": [
"More models and configurations can be found in [once-for-all/model-zoo](https://github.com/mit-han-lab/once-for-all#evaluate-1)\n",
@@ -111,7 +111,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3ef6fe6d",
+ "id": "11b222c1",
"metadata": {},
"outputs": [],
"source": [
@@ -122,7 +122,7 @@
},
{
"cell_type": "markdown",
- "id": "c67f4cd6",
+ "id": "4dbc7982",
"metadata": {},
"source": [
"The model's prediction can be evalutaed by"
@@ -131,7 +131,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "823bea6f",
+ "id": "ad47ee85",
"metadata": {},
"outputs": [],
"source": [
@@ -173,7 +173,7 @@
},
{
"cell_type": "markdown",
- "id": "ec31dc6c",
+ "id": "ce5f019d",
"metadata": {},
"source": [
"### Model Description\n",
@@ -189,7 +189,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "abd6536f",
+ "id": "8cd8a5f2",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/pytorch_vision_proxylessnas.ipynb b/assets/hub/pytorch_vision_proxylessnas.ipynb
index ed3c76247eb2..b7c8c169aa65 100644
--- a/assets/hub/pytorch_vision_proxylessnas.ipynb
+++ b/assets/hub/pytorch_vision_proxylessnas.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "88a5de56",
+ "id": "b53ef600",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6636c360",
+ "id": "6bbf501e",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "ed0996c0",
+ "id": "76c09fa8",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bb45aa1c",
+ "id": "8c8f6cd9",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "25de28e4",
+ "id": "472630a6",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "57ee8ed4",
+ "id": "c2bba651",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bcd7853e",
+ "id": "cb8d7322",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "33510909",
+ "id": "2c693c1b",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnest.ipynb b/assets/hub/pytorch_vision_resnest.ipynb
index 7d82e1f15bb9..f7fafa75a152 100644
--- a/assets/hub/pytorch_vision_resnest.ipynb
+++ b/assets/hub/pytorch_vision_resnest.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1c05534d",
+ "id": "7ce7dcc2",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "da97212a",
+ "id": "50230d26",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
},
{
"cell_type": "markdown",
- "id": "999cc408",
+ "id": "2941cbc0",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6ccf320a",
+ "id": "d8d613f3",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a9379264",
+ "id": "63953c71",
"metadata": {},
"outputs": [],
"source": [
@@ -98,7 +98,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4bc1d7e1",
+ "id": "c8dfd494",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "14d5cafa",
+ "id": "44f1a118",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
},
{
"cell_type": "markdown",
- "id": "5b7febde",
+ "id": "ceaf10c1",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnet.ipynb b/assets/hub/pytorch_vision_resnet.ipynb
index cdcf88ad4d2d..7c55e2a40913 100644
--- a/assets/hub/pytorch_vision_resnet.ipynb
+++ b/assets/hub/pytorch_vision_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "dd07e4bc",
+ "id": "2edc9aaf",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dc05f4f7",
+ "id": "79a19c4a",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "6cf6a663",
+ "id": "f47953c0",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6103cc90",
+ "id": "10209e8a",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "83573f88",
+ "id": "c8b51dd4",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7fd7e165",
+ "id": "30ba4c0d",
"metadata": {},
"outputs": [],
"source": [
@@ -111,7 +111,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4be16c10",
+ "id": "9ee167a7",
"metadata": {},
"outputs": [],
"source": [
@@ -126,7 +126,7 @@
},
{
"cell_type": "markdown",
- "id": "442ec887",
+ "id": "cda0542a",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnext.ipynb b/assets/hub/pytorch_vision_resnext.ipynb
index fbb72cd3de43..cfa4fb63d0dc 100644
--- a/assets/hub/pytorch_vision_resnext.ipynb
+++ b/assets/hub/pytorch_vision_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1dbfb126",
+ "id": "68775ea6",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a4e4510f",
+ "id": "567a31c5",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "281cc00e",
+ "id": "9b094b19",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d234a89a",
+ "id": "ebdd171c",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "734b775d",
+ "id": "27e2d85b",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a10904b0",
+ "id": "b593958d",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "51a931b0",
+ "id": "62b51361",
"metadata": {},
"outputs": [],
"source": [
@@ -125,7 +125,7 @@
},
{
"cell_type": "markdown",
- "id": "5849ad6f",
+ "id": "fdfe7598",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_shufflenet_v2.ipynb b/assets/hub/pytorch_vision_shufflenet_v2.ipynb
index 76638db3bb9a..70f9822026e8 100644
--- a/assets/hub/pytorch_vision_shufflenet_v2.ipynb
+++ b/assets/hub/pytorch_vision_shufflenet_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "950a54b7",
+ "id": "446af667",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "df71f296",
+ "id": "51a7c483",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "41669c57",
+ "id": "f3315c2b",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a0a6fddc",
+ "id": "5de569a5",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "995ae222",
+ "id": "f102adfe",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "04defc28",
+ "id": "96be3951",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ee045d49",
+ "id": "59dbcac6",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "a8b4275f",
+ "id": "525abaf3",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_snnmlp.ipynb b/assets/hub/pytorch_vision_snnmlp.ipynb
index ad07a8382dad..e8a5c54ef519 100644
--- a/assets/hub/pytorch_vision_snnmlp.ipynb
+++ b/assets/hub/pytorch_vision_snnmlp.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "423c6bb9",
+ "id": "3a4488e9",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "66922103",
+ "id": "7b232697",
"metadata": {},
"outputs": [],
"source": [
@@ -37,7 +37,7 @@
},
{
"cell_type": "markdown",
- "id": "50072b65",
+ "id": "e26b7a51",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "66f4bb64",
+ "id": "9cc1433d",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "34b930d0",
+ "id": "6c03efcb",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "8c1af071",
+ "id": "086db843",
"metadata": {},
"source": [
"### Model Description\n",
@@ -121,7 +121,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "27077511",
+ "id": "58bcc451",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/pytorch_vision_squeezenet.ipynb b/assets/hub/pytorch_vision_squeezenet.ipynb
index 909db0f384bb..ba99b537377d 100644
--- a/assets/hub/pytorch_vision_squeezenet.ipynb
+++ b/assets/hub/pytorch_vision_squeezenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "41ba3085",
+ "id": "d3cceb23",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b2a625de",
+ "id": "ba1bea79",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "724d99c5",
+ "id": "3485ae1d",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "21a0dfbf",
+ "id": "e59ab561",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "44e7d2e2",
+ "id": "615bae32",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "12b4294b",
+ "id": "9348a952",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c36216a7",
+ "id": "e2d90a62",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "bb2e49ba",
+ "id": "1ca80f7a",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_vgg.ipynb b/assets/hub/pytorch_vision_vgg.ipynb
index cec8b3392a48..7425935a8edf 100644
--- a/assets/hub/pytorch_vision_vgg.ipynb
+++ b/assets/hub/pytorch_vision_vgg.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "4e9a5dc4",
+ "id": "6e2bd73f",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c6197bb8",
+ "id": "f7be3401",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
- "id": "4f2e4575",
+ "id": "8e760367",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f4affc18",
+ "id": "555ea831",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "156ecf00",
+ "id": "73faa62f",
"metadata": {},
"outputs": [],
"source": [
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ba44d494",
+ "id": "33921c73",
"metadata": {},
"outputs": [],
"source": [
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "68b742ea",
+ "id": "94b16fc8",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "b2a89730",
+ "id": "0ca09fdf",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_wide_resnet.ipynb b/assets/hub/pytorch_vision_wide_resnet.ipynb
index 0612d8f49ae1..1bbb62100629 100644
--- a/assets/hub/pytorch_vision_wide_resnet.ipynb
+++ b/assets/hub/pytorch_vision_wide_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "7ffc6028",
+ "id": "0e4d2fcb",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "59790d7d",
+ "id": "ee963fbd",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
},
{
"cell_type": "markdown",
- "id": "0e332f01",
+ "id": "03ce48eb",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "187a4923",
+ "id": "b9741f13",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2974582b",
+ "id": "76164e80",
"metadata": {},
"outputs": [],
"source": [
@@ -98,7 +98,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eb62a015",
+ "id": "f8cfc4f1",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d298a8ad",
+ "id": "48c4e653",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
},
{
"cell_type": "markdown",
- "id": "3cee8de5",
+ "id": "69f3583a",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
index 54946d1a1aae..02ca6594cce1 100644
--- a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
+++ b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "db40d259",
+ "id": "bb7b0d2f",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "458448ba",
+ "id": "f1709f01",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "067e3a1b",
+ "id": "2aa88c7c",
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +59,7 @@
},
{
"cell_type": "markdown",
- "id": "1f245d6b",
+ "id": "1ac86775",
"metadata": {},
"source": [
"### Model Description\n",
@@ -94,7 +94,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "107604bc",
+ "id": "f2b116bd",
"metadata": {},
"outputs": [],
"source": [
@@ -104,7 +104,7 @@
},
{
"cell_type": "markdown",
- "id": "89a78a02",
+ "id": "92368586",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/simplenet.ipynb b/assets/hub/simplenet.ipynb
index ca650430a5b8..c634042e79ef 100644
--- a/assets/hub/simplenet.ipynb
+++ b/assets/hub/simplenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "50eb0360",
+ "id": "6380b8b1",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d94d0129",
+ "id": "38a0db8c",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
- "id": "894673f2",
+ "id": "05e0545b",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1a361de2",
+ "id": "9bda7a40",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0da46067",
+ "id": "5fd4e468",
"metadata": {},
"outputs": [],
"source": [
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d813eec4",
+ "id": "08350fd4",
"metadata": {},
"outputs": [],
"source": [
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2457d7f9",
+ "id": "daca609f",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "e7eb90cd",
+ "id": "f82e119b",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-models_stt.ipynb b/assets/hub/snakers4_silero-models_stt.ipynb
index 6275dd5b744b..05cf400a8890 100644
--- a/assets/hub/snakers4_silero-models_stt.ipynb
+++ b/assets/hub/snakers4_silero-models_stt.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "c9736d65",
+ "id": "188eafab",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5b8bab76",
+ "id": "4071c889",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "511bde43",
+ "id": "7ed71a97",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
},
{
"cell_type": "markdown",
- "id": "1f9e3ad0",
+ "id": "92200964",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-models_tts.ipynb b/assets/hub/snakers4_silero-models_tts.ipynb
index eb41fc141f2c..ec3331095b21 100644
--- a/assets/hub/snakers4_silero-models_tts.ipynb
+++ b/assets/hub/snakers4_silero-models_tts.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "788dc27a",
+ "id": "b575c702",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -20,7 +20,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3fd7fa21",
+ "id": "90f76af2",
"metadata": {},
"outputs": [],
"source": [
@@ -32,7 +32,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b74bb048",
+ "id": "fa12f9c7",
"metadata": {},
"outputs": [],
"source": [
@@ -55,7 +55,7 @@
},
{
"cell_type": "markdown",
- "id": "141202f4",
+ "id": "65ff05e6",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-vad_vad.ipynb b/assets/hub/snakers4_silero-vad_vad.ipynb
index 9999389300d3..e71e87604e2a 100644
--- a/assets/hub/snakers4_silero-vad_vad.ipynb
+++ b/assets/hub/snakers4_silero-vad_vad.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "7e6d8442",
+ "id": "309defa1",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9045be08",
+ "id": "dcaca115",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7e1824cc",
+ "id": "a7e71a13",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
- "id": "4074dd60",
+ "id": "2bca08b9",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/ultralytics_yolov5.ipynb b/assets/hub/ultralytics_yolov5.ipynb
index 027cdffe976e..a3b29383cb1e 100644
--- a/assets/hub/ultralytics_yolov5.ipynb
+++ b/assets/hub/ultralytics_yolov5.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "af1c27df",
+ "id": "391f971b",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -29,7 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7e49f1be",
+ "id": "d01b47b3",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "7f158c19",
+ "id": "999c52f6",
"metadata": {},
"source": [
"## Model Description\n",
@@ -82,7 +82,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d182a57c",
+ "id": "b931c1b1",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
},
{
"cell_type": "markdown",
- "id": "336e53fd",
+ "id": "5be3b03d",
"metadata": {},
"source": [
"## Citation\n",
@@ -125,7 +125,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5b3ba85e",
+ "id": "7083ab2d",
"metadata": {
"attributes": {
"classes": [
@@ -150,7 +150,7 @@
},
{
"cell_type": "markdown",
- "id": "ed7dffa9",
+ "id": "431c8c8f",
"metadata": {},
"source": [
"## Contact\n",
diff --git a/assets/images/unlocking-pt-2-6-intel.png b/assets/images/unlocking-pt-2-6-intel.png
new file mode 100644
index 000000000000..94d372662a2c
Binary files /dev/null and b/assets/images/unlocking-pt-2-6-intel.png differ
diff --git a/blog/10/index.html b/blog/10/index.html
index b234cc0df272..7192c6c6b0f0 100644
--- a/blog/10/index.html
+++ b/blog/10/index.html
@@ -323,11 +323,11 @@
Featured Post
- This is part 2 of the Understanding GPU Memory blog series. Our first post Understanding GPU Memo...
+ This post is the third part of a multi-series blog focused on how to accelerate generative AI mod...
-
+
Read More
@@ -347,6 +347,25 @@