diff --git a/assets/hub/datvuthanh_hybridnets.ipynb b/assets/hub/datvuthanh_hybridnets.ipynb
index 240a6ac5254e..aa5dd789d9a7 100644
--- a/assets/hub/datvuthanh_hybridnets.ipynb
+++ b/assets/hub/datvuthanh_hybridnets.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "361ddace",
+ "id": "67796c9f",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "38e3ece1",
+ "id": "0b092f57",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "56ec9de8",
+ "id": "f51f234b",
"metadata": {},
"source": [
"## Model Description\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3293edc8",
+ "id": "23313218",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
},
{
"cell_type": "markdown",
- "id": "89ae0954",
+ "id": "a80cd907",
"metadata": {},
"source": [
"### Citation\n",
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "72f20f2e",
+ "id": "bfc3e14c",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
index f95488ebb484..c3f014860aeb 100644
--- a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
+++ b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "0a99bfd2",
+ "id": "6b2a20f1",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a6b9b88d",
+ "id": "5775dee1",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "2a675a85",
+ "id": "e3f3f828",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "129f6d1e",
+ "id": "4887ac8e",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "49d587b7",
+ "id": "e8e84ad5",
"metadata": {},
"outputs": [],
"source": [
@@ -99,7 +99,7 @@
},
{
"cell_type": "markdown",
- "id": "66e59741",
+ "id": "057b8ed2",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
index f1018ff55a14..a0df272e0054 100644
--- a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
+++ b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "5b35e586",
+ "id": "299d2d90",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cfebbc78",
+ "id": "bf1f77ef",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "26824bbc",
+ "id": "f195d4c7",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n",
@@ -45,7 +45,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "04027b3f",
+ "id": "6f67ab88",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
- "id": "3e29d60e",
+ "id": "6ed7f83d",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
index 3c02e90b254d..3a9ec571f259 100644
--- a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
+++ b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3b0e5949",
+ "id": "7f2deeee",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f9fa6a91",
+ "id": "0c3e5997",
"metadata": {},
"outputs": [],
"source": [
@@ -44,7 +44,7 @@
},
{
"cell_type": "markdown",
- "id": "acb41108",
+ "id": "0f4e5d74",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "27b35de1",
+ "id": "10f9b451",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
- "id": "8aa53c57",
+ "id": "df705417",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
index ffec82ce5c85..547d3aaa6abd 100644
--- a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "336194d6",
+ "id": "bec2c0ca",
"metadata": {},
"source": [
"# 3D ResNet\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ed499478",
+ "id": "96c73128",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "8e2ea214",
+ "id": "68fa396a",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "de4affb2",
+ "id": "92626d77",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
},
{
"cell_type": "markdown",
- "id": "5aa7e7d2",
+ "id": "c46d3c8c",
"metadata": {},
"source": [
"#### Setup\n",
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "879cf356",
+ "id": "db4ffe8e",
"metadata": {
"attributes": {
"classes": [
@@ -94,7 +94,7 @@
},
{
"cell_type": "markdown",
- "id": "f1a42748",
+ "id": "0a572bc7",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c92fc2d7",
+ "id": "84aaaabb",
"metadata": {},
"outputs": [],
"source": [
@@ -116,7 +116,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "957e1464",
+ "id": "334828f8",
"metadata": {},
"outputs": [],
"source": [
@@ -131,7 +131,7 @@
},
{
"cell_type": "markdown",
- "id": "141be6e9",
+ "id": "037fda84",
"metadata": {},
"source": [
"#### Define input transform"
@@ -140,7 +140,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0fba1d05",
+ "id": "8c11ebdb",
"metadata": {},
"outputs": [],
"source": [
@@ -174,7 +174,7 @@
},
{
"cell_type": "markdown",
- "id": "0d4224a4",
+ "id": "84f5d428",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -185,7 +185,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "41e538ae",
+ "id": "05eb47ef",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
},
{
"cell_type": "markdown",
- "id": "61cff05d",
+ "id": "0b6f1b02",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -206,7 +206,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "aae72388",
+ "id": "dff4f244",
"metadata": {},
"outputs": [],
"source": [
@@ -231,7 +231,7 @@
},
{
"cell_type": "markdown",
- "id": "6c05f910",
+ "id": "9f1fc3ee",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -240,7 +240,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "67ede4bc",
+ "id": "449a9f0b",
"metadata": {},
"outputs": [],
"source": [
@@ -259,7 +259,7 @@
},
{
"cell_type": "markdown",
- "id": "8ed7f958",
+ "id": "438faa5f",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
index 74200803716d..76002f499cfc 100644
--- a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "ac00aefc",
+ "id": "d113b047",
"metadata": {},
"source": [
"# SlowFast\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3fb8f064",
+ "id": "450e1cde",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "aa965743",
+ "id": "10bf484f",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c812dfd7",
+ "id": "7f063f4f",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "c78572f5",
+ "id": "90a7d53a",
"metadata": {},
"source": [
"#### Setup\n",
@@ -76,7 +76,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9182e035",
+ "id": "441e905e",
"metadata": {
"attributes": {
"classes": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "d522c3ac",
+ "id": "a0561274",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8e4dbd17",
+ "id": "6256b38e",
"metadata": {},
"outputs": [],
"source": [
@@ -117,7 +117,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "790e33eb",
+ "id": "bd2c1090",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "c5448e3b",
+ "id": "530cf51f",
"metadata": {},
"source": [
"#### Define input transform"
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cf0c87ea",
+ "id": "440c352d",
"metadata": {},
"outputs": [],
"source": [
@@ -198,7 +198,7 @@
},
{
"cell_type": "markdown",
- "id": "ca4b95e7",
+ "id": "c6146aba",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -209,7 +209,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "722f289f",
+ "id": "c0cce302",
"metadata": {},
"outputs": [],
"source": [
@@ -221,7 +221,7 @@
},
{
"cell_type": "markdown",
- "id": "bf4074cd",
+ "id": "2cf94887",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -230,7 +230,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5f9c4b23",
+ "id": "0fefc131",
"metadata": {},
"outputs": [],
"source": [
@@ -255,7 +255,7 @@
},
{
"cell_type": "markdown",
- "id": "4b67c94f",
+ "id": "0e6674ef",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -264,7 +264,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c8423caf",
+ "id": "4b364936",
"metadata": {},
"outputs": [],
"source": [
@@ -283,7 +283,7 @@
},
{
"cell_type": "markdown",
- "id": "941ae989",
+ "id": "b09b4513",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
index a71be67daa7d..26815b5b387e 100644
--- a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "c647b701",
+ "id": "4a925ff0",
"metadata": {},
"source": [
"# X3D\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c72c0dc7",
+ "id": "6d231948",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "71a98d04",
+ "id": "730853d1",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1b4b4521",
+ "id": "69f06ce6",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "c1d63768",
+ "id": "4744e64f",
"metadata": {},
"source": [
"#### Setup\n",
@@ -76,7 +76,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "89a94b33",
+ "id": "bf71d609",
"metadata": {},
"outputs": [],
"source": [
@@ -88,7 +88,7 @@
},
{
"cell_type": "markdown",
- "id": "9b5cb1da",
+ "id": "057f7ffa",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e4325700",
+ "id": "72f2bc7c",
"metadata": {},
"outputs": [],
"source": [
@@ -110,7 +110,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "41c45dcd",
+ "id": "3500a689",
"metadata": {},
"outputs": [],
"source": [
@@ -125,7 +125,7 @@
},
{
"cell_type": "markdown",
- "id": "323800d8",
+ "id": "b75737cc",
"metadata": {},
"source": [
"#### Define input transform"
@@ -134,7 +134,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9836d141",
+ "id": "0090b78d",
"metadata": {},
"outputs": [],
"source": [
@@ -187,7 +187,7 @@
},
{
"cell_type": "markdown",
- "id": "62b32889",
+ "id": "3219a014",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -198,7 +198,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "26fd850a",
+ "id": "ad19da69",
"metadata": {},
"outputs": [],
"source": [
@@ -210,7 +210,7 @@
},
{
"cell_type": "markdown",
- "id": "33682ec9",
+ "id": "d08c2f84",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -219,7 +219,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "210ac4b9",
+ "id": "94c5d35b",
"metadata": {},
"outputs": [],
"source": [
@@ -244,7 +244,7 @@
},
{
"cell_type": "markdown",
- "id": "2621f2d0",
+ "id": "6d0f46f2",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -253,7 +253,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1ed4292e",
+ "id": "f81911a8",
"metadata": {},
"outputs": [],
"source": [
@@ -272,7 +272,7 @@
},
{
"cell_type": "markdown",
- "id": "267043cc",
+ "id": "c87c269a",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
index 7462ddf51e0c..cec1b9313edd 100644
--- a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
+++ b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "ff135de3",
+ "id": "2dffd919",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "07c9d8f5",
+ "id": "5ac5f3e8",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
- "id": "771216ef",
+ "id": "a2ac5155",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "03dcd121",
+ "id": "070ae81d",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7aa403a0",
+ "id": "dc90e8a0",
"metadata": {},
"outputs": [],
"source": [
@@ -107,7 +107,7 @@
},
{
"cell_type": "markdown",
- "id": "2b2c68d5",
+ "id": "80678eca",
"metadata": {},
"source": [
"### Model Description\n",
@@ -144,7 +144,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a18a965",
+ "id": "3b2aaf00",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/huggingface_pytorch-transformers.ipynb b/assets/hub/huggingface_pytorch-transformers.ipynb
index 7101bc93ba9e..425417814a68 100644
--- a/assets/hub/huggingface_pytorch-transformers.ipynb
+++ b/assets/hub/huggingface_pytorch-transformers.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "29549814",
+ "id": "d7be0b07",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bed65e64",
+ "id": "ec1363b4",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "2171215f",
+ "id": "6305867c",
"metadata": {},
"source": [
"# Usage\n",
@@ -86,7 +86,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "60b70e47",
+ "id": "38441f21",
"metadata": {
"attributes": {
"classes": [
@@ -104,7 +104,7 @@
},
{
"cell_type": "markdown",
- "id": "2c744265",
+ "id": "5623bbec",
"metadata": {},
"source": [
"## Models\n",
@@ -115,7 +115,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "df1934f8",
+ "id": "f36d232b",
"metadata": {
"attributes": {
"classes": [
@@ -138,7 +138,7 @@
},
{
"cell_type": "markdown",
- "id": "1d7f4001",
+ "id": "b61127fe",
"metadata": {},
"source": [
"## Models with a language modeling head\n",
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fe881651",
+ "id": "2a950470",
"metadata": {
"attributes": {
"classes": [
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "f79e0133",
+ "id": "e8caeddb",
"metadata": {},
"source": [
"## Models with a sequence classification head\n",
@@ -183,7 +183,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "37b09b46",
+ "id": "27270594",
"metadata": {
"attributes": {
"classes": [
@@ -206,7 +206,7 @@
},
{
"cell_type": "markdown",
- "id": "9301ed76",
+ "id": "9cc619ae",
"metadata": {},
"source": [
"## Models with a question answering head\n",
@@ -217,7 +217,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a059d28b",
+ "id": "2109f05b",
"metadata": {
"attributes": {
"classes": [
@@ -240,7 +240,7 @@
},
{
"cell_type": "markdown",
- "id": "84f9506e",
+ "id": "eff7023d",
"metadata": {},
"source": [
"## Configuration\n",
@@ -251,7 +251,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7bb2daad",
+ "id": "21876e4c",
"metadata": {
"attributes": {
"classes": [
@@ -282,7 +282,7 @@
},
{
"cell_type": "markdown",
- "id": "f21310f3",
+ "id": "018926d8",
"metadata": {},
"source": [
"# Example Usage\n",
@@ -295,7 +295,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0ef0cd96",
+ "id": "cb05b5c6",
"metadata": {},
"outputs": [],
"source": [
@@ -311,7 +311,7 @@
},
{
"cell_type": "markdown",
- "id": "a4bfb11a",
+ "id": "5d541732",
"metadata": {},
"source": [
"## Using `BertModel` to encode the input sentence in a sequence of last layer hidden-states"
@@ -320,7 +320,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bb78038d",
+ "id": "89da97c1",
"metadata": {},
"outputs": [],
"source": [
@@ -339,7 +339,7 @@
},
{
"cell_type": "markdown",
- "id": "13a4627e",
+ "id": "4e7aca8e",
"metadata": {},
"source": [
"## Using `modelForMaskedLM` to predict a masked token with BERT"
@@ -348,7 +348,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "095e8694",
+ "id": "29e8f6cb",
"metadata": {},
"outputs": [],
"source": [
@@ -370,7 +370,7 @@
},
{
"cell_type": "markdown",
- "id": "daa29b13",
+ "id": "05d7971d",
"metadata": {},
"source": [
"## Using `modelForQuestionAnswering` to do question answering with BERT"
@@ -379,7 +379,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "97a51992",
+ "id": "5c457343",
"metadata": {},
"outputs": [],
"source": [
@@ -409,7 +409,7 @@
},
{
"cell_type": "markdown",
- "id": "04c59d5f",
+ "id": "38a8df6c",
"metadata": {},
"source": [
"## Using `modelForSequenceClassification` to do paraphrase classification with BERT"
@@ -418,7 +418,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "247ea495",
+ "id": "23a9fb62",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/hustvl_yolop.ipynb b/assets/hub/hustvl_yolop.ipynb
index ed19c292b701..e44112d50b8c 100644
--- a/assets/hub/hustvl_yolop.ipynb
+++ b/assets/hub/hustvl_yolop.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8317a19d",
+ "id": "d5b3e194",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -23,7 +23,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ff9aa348",
+ "id": "86e4eb12",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "8cbd94c2",
+ "id": "c5a771c9",
"metadata": {},
"source": [
"## YOLOP: You Only Look Once for Panoptic driving Perception\n",
@@ -132,7 +132,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "459662bf",
+ "id": "b1929148",
"metadata": {},
"outputs": [],
"source": [
@@ -148,7 +148,7 @@
},
{
"cell_type": "markdown",
- "id": "26c4cbe4",
+ "id": "e30c1305",
"metadata": {},
"source": [
"### Citation\n",
diff --git a/assets/hub/intelisl_midas_v2.ipynb b/assets/hub/intelisl_midas_v2.ipynb
index 5627bb068977..16d80f5291db 100644
--- a/assets/hub/intelisl_midas_v2.ipynb
+++ b/assets/hub/intelisl_midas_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8426633d",
+ "id": "a43ca894",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -32,7 +32,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9597e763",
+ "id": "be8b42c9",
"metadata": {
"attributes": {
"classes": [
@@ -48,7 +48,7 @@
},
{
"cell_type": "markdown",
- "id": "9752c95c",
+ "id": "2516401d",
"metadata": {},
"source": [
"### Example Usage\n",
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e5579e14",
+ "id": "dc105cd1",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "766a2015",
+ "id": "8eae12f5",
"metadata": {},
"source": [
"Load a model (see [https://github.com/intel-isl/MiDaS/#Accuracy](https://github.com/intel-isl/MiDaS/#Accuracy) for an overview)"
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9133b6e5",
+ "id": "630715ec",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "fc8f6519",
+ "id": "f488b399",
"metadata": {},
"source": [
"Move model to GPU if available"
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "697b4f1e",
+ "id": "1134b01d",
"metadata": {},
"outputs": [],
"source": [
@@ -117,7 +117,7 @@
},
{
"cell_type": "markdown",
- "id": "64c3ca48",
+ "id": "a0ef45e0",
"metadata": {},
"source": [
"Load transforms to resize and normalize the image for large or small model"
@@ -126,7 +126,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ce75f09f",
+ "id": "8d014222",
"metadata": {},
"outputs": [],
"source": [
@@ -140,7 +140,7 @@
},
{
"cell_type": "markdown",
- "id": "7bf2592e",
+ "id": "5928a92f",
"metadata": {},
"source": [
"Load image and apply transforms"
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cc54fd26",
+ "id": "ae781510",
"metadata": {},
"outputs": [],
"source": [
@@ -161,7 +161,7 @@
},
{
"cell_type": "markdown",
- "id": "7c034c9e",
+ "id": "f0f692e2",
"metadata": {},
"source": [
"Predict and resize to original resolution"
@@ -170,7 +170,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d1353543",
+ "id": "603b0e56",
"metadata": {},
"outputs": [],
"source": [
@@ -189,7 +189,7 @@
},
{
"cell_type": "markdown",
- "id": "0fd90d92",
+ "id": "52504f0a",
"metadata": {},
"source": [
"Show result"
@@ -198,7 +198,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "10de3f59",
+ "id": "ea6b1c85",
"metadata": {},
"outputs": [],
"source": [
@@ -208,7 +208,7 @@
},
{
"cell_type": "markdown",
- "id": "dcfee9f7",
+ "id": "73a31d5c",
"metadata": {},
"source": [
"### References\n",
@@ -222,7 +222,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b4234af6",
+ "id": "14c001eb",
"metadata": {
"attributes": {
"classes": [
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "64522ea6",
+ "id": "523f3405",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
index b6b0b93e7729..c0d9ed06d426 100644
--- a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
+++ b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "c9ec6d0c",
+ "id": "88e57741",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2406bdc3",
+ "id": "cdc17101",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "1655d9be",
+ "id": "14637f75",
"metadata": {},
"source": [
"Loads a U-Net model pre-trained for abnormality segmentation on a dataset of brain MRI volumes [kaggle.com/mateuszbuda/lgg-mri-segmentation](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation)\n",
@@ -57,7 +57,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b7669b3d",
+ "id": "ef500e8c",
"metadata": {},
"outputs": [],
"source": [
@@ -71,7 +71,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "90fe464c",
+ "id": "696f472d",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
},
{
"cell_type": "markdown",
- "id": "441d13c0",
+ "id": "7f73fbf2",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
index 34245757a347..ab0dae4980e6 100644
--- a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
+++ b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a5be3197",
+ "id": "2fef4143",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5887d04c",
+ "id": "90303f18",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "d45027d1",
+ "id": "261e5480",
"metadata": {},
"source": [
"### Example Usage"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4d487f27",
+ "id": "3f7df7a8",
"metadata": {},
"outputs": [],
"source": [
@@ -78,7 +78,7 @@
},
{
"cell_type": "markdown",
- "id": "0b2a24f1",
+ "id": "a4a58521",
"metadata": {},
"source": [
"### Model Description\n",
@@ -91,7 +91,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2dfa9892",
+ "id": "76044c97",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
index 7991ea7b1d92..8e5cb251b498 100644
--- a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b2fd4b5f",
+ "id": "dceed305",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b6c6552f",
+ "id": "de4e41fc",
"metadata": {},
"outputs": [],
"source": [
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d46aaf09",
+ "id": "9c82d743",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +73,7 @@
},
{
"cell_type": "markdown",
- "id": "3b091f5d",
+ "id": "f5f399bb",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset.\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ff1ee409",
+ "id": "3214c197",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "4d4d7794",
+ "id": "fbcbcfbd",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5ef0b8e2",
+ "id": "b6ae1f52",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "ae7d9365",
+ "id": "40d48c59",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "237daf1f",
+ "id": "4be6324c",
"metadata": {},
"outputs": [],
"source": [
@@ -153,7 +153,7 @@
},
{
"cell_type": "markdown",
- "id": "2623f061",
+ "id": "52a7b51d",
"metadata": {},
"source": [
"Display the result."
@@ -162,7 +162,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d36978a4",
+ "id": "f9c4affa",
"metadata": {},
"outputs": [],
"source": [
@@ -176,7 +176,7 @@
},
{
"cell_type": "markdown",
- "id": "865034de",
+ "id": "477a8607",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
index 28a670c43501..88b3467086d7 100644
--- a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b69956b1",
+ "id": "dcc41800",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "63022ee7",
+ "id": "9524e4eb",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "26492f35",
+ "id": "ebbd8892",
"metadata": {},
"outputs": [],
"source": [
@@ -82,7 +82,7 @@
},
{
"cell_type": "markdown",
- "id": "b42cdb80",
+ "id": "4f2320ec",
"metadata": {},
"source": [
"Download and setup FastPitch generator model."
@@ -91,7 +91,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a2dc5642",
+ "id": "28d60c00",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
},
{
"cell_type": "markdown",
- "id": "56bdb12a",
+ "id": "7161e323",
"metadata": {},
"source": [
"Download and setup vocoder and denoiser models."
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a9be5e3f",
+ "id": "b9e2f37e",
"metadata": {},
"outputs": [],
"source": [
@@ -118,7 +118,7 @@
},
{
"cell_type": "markdown",
- "id": "de5a867d",
+ "id": "28899ef3",
"metadata": {},
"source": [
"Verify that generator and vocoder models agree on input parameters."
@@ -127,7 +127,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2aca2fbb",
+ "id": "74b7d3ae",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "b1300613",
+ "id": "ae36e291",
"metadata": {},
"source": [
"Put all models on available device."
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "20de83d4",
+ "id": "ab5d22c7",
"metadata": {},
"outputs": [],
"source": [
@@ -167,7 +167,7 @@
},
{
"cell_type": "markdown",
- "id": "0538c331",
+ "id": "e2c266b2",
"metadata": {},
"source": [
"Load text processor."
@@ -176,7 +176,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "823a3f1b",
+ "id": "8f074bdd",
"metadata": {},
"outputs": [],
"source": [
@@ -185,7 +185,7 @@
},
{
"cell_type": "markdown",
- "id": "e0e7a4cc",
+ "id": "75cd3aa8",
"metadata": {},
"source": [
"Set the text to be synthetized, prepare input and set additional generation parameters."
@@ -194,7 +194,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "46e5387d",
+ "id": "5abf58c6",
"metadata": {},
"outputs": [],
"source": [
@@ -204,7 +204,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "73a4d313",
+ "id": "dfda43e9",
"metadata": {},
"outputs": [],
"source": [
@@ -214,7 +214,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "423c65bb",
+ "id": "3dda8f7e",
"metadata": {},
"outputs": [],
"source": [
@@ -228,7 +228,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1ab327f9",
+ "id": "8d3d0c66",
"metadata": {},
"outputs": [],
"source": [
@@ -242,7 +242,7 @@
},
{
"cell_type": "markdown",
- "id": "35ae7ec6",
+ "id": "f7188d79",
"metadata": {},
"source": [
"Plot the intermediate spectorgram."
@@ -251,7 +251,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6eae5fd4",
+ "id": "f304d53a",
"metadata": {},
"outputs": [],
"source": [
@@ -265,7 +265,7 @@
},
{
"cell_type": "markdown",
- "id": "247b7117",
+ "id": "d04985aa",
"metadata": {},
"source": [
"Syntesize audio."
@@ -274,7 +274,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0aed38d6",
+ "id": "515398ca",
"metadata": {},
"outputs": [],
"source": [
@@ -284,7 +284,7 @@
},
{
"cell_type": "markdown",
- "id": "64c193b2",
+ "id": "dc305d05",
"metadata": {},
"source": [
"Write audio to wav file."
@@ -293,7 +293,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "965ceae0",
+ "id": "95c3bea4",
"metadata": {},
"outputs": [],
"source": [
@@ -303,7 +303,7 @@
},
{
"cell_type": "markdown",
- "id": "cda3d6f1",
+ "id": "5c38d67a",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
index 44c54f8ec473..f46b9eda25c6 100644
--- a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "df492897",
+ "id": "02fd7517",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4b0a083d",
+ "id": "81fcec5b",
"metadata": {},
"outputs": [],
"source": [
@@ -45,7 +45,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4f74e24a",
+ "id": "e4f4cd87",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +73,7 @@
},
{
"cell_type": "markdown",
- "id": "dc50e9cf",
+ "id": "8a7cf86a",
"metadata": {},
"source": [
"### Load Pretrained model\n",
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "72de62ae",
+ "id": "e69bf50e",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
},
{
"cell_type": "markdown",
- "id": "a9fa7caa",
+ "id": "01e34391",
"metadata": {},
"source": [
"### Prepare inference data\n",
@@ -123,7 +123,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "423e7eab",
+ "id": "667ee62d",
"metadata": {},
"outputs": [],
"source": [
@@ -146,7 +146,7 @@
},
{
"cell_type": "markdown",
- "id": "9c19ae28",
+ "id": "3fee0350",
"metadata": {},
"source": [
"### Run inference\n",
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d11f974b",
+ "id": "ea0aa320",
"metadata": {},
"outputs": [],
"source": [
@@ -168,7 +168,7 @@
},
{
"cell_type": "markdown",
- "id": "771f4541",
+ "id": "cd0b461f",
"metadata": {},
"source": [
"### Display result"
@@ -177,7 +177,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "48b00455",
+ "id": "93f2e20f",
"metadata": {},
"outputs": [],
"source": [
@@ -191,7 +191,7 @@
},
{
"cell_type": "markdown",
- "id": "79367905",
+ "id": "84371066",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
index 12ae75afd71e..49852106b762 100644
--- a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "6e57797c",
+ "id": "2c55494a",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -44,7 +44,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4eea5ac2",
+ "id": "7db4c513",
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c44652f2",
+ "id": "6fb6b43a",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "94824c80",
+ "id": "76eae16f",
"metadata": {},
"source": [
"Download and setup FastPitch generator model."
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4a61fbdb",
+ "id": "a8ac96e8",
"metadata": {},
"outputs": [],
"source": [
@@ -93,7 +93,7 @@
},
{
"cell_type": "markdown",
- "id": "0b8d83cf",
+ "id": "2ed91a71",
"metadata": {},
"source": [
"Download and setup vocoder and denoiser models."
@@ -102,7 +102,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "020893d7",
+ "id": "f118e167",
"metadata": {},
"outputs": [],
"source": [
@@ -111,7 +111,7 @@
},
{
"cell_type": "markdown",
- "id": "9cf25624",
+ "id": "b11a926e",
"metadata": {},
"source": [
"Verify that generator and vocoder models agree on input parameters."
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0ae50032",
+ "id": "3ed01b27",
"metadata": {},
"outputs": [],
"source": [
@@ -140,7 +140,7 @@
},
{
"cell_type": "markdown",
- "id": "dc47faa5",
+ "id": "312fe368",
"metadata": {},
"source": [
"Put all models on available device."
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1e796df5",
+ "id": "bb57be71",
"metadata": {},
"outputs": [],
"source": [
@@ -160,7 +160,7 @@
},
{
"cell_type": "markdown",
- "id": "14f2990f",
+ "id": "0f1756e8",
"metadata": {},
"source": [
"Load text processor."
@@ -169,7 +169,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "da033a9f",
+ "id": "055bcc95",
"metadata": {},
"outputs": [],
"source": [
@@ -178,7 +178,7 @@
},
{
"cell_type": "markdown",
- "id": "4c4e0cfe",
+ "id": "40106196",
"metadata": {},
"source": [
"Set the text to be synthetized, prepare input and set additional generation parameters."
@@ -187,7 +187,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5cee0c52",
+ "id": "fe9318ad",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e0ec75c2",
+ "id": "8c590dbe",
"metadata": {},
"outputs": [],
"source": [
@@ -207,7 +207,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e378410a",
+ "id": "997b9435",
"metadata": {},
"outputs": [],
"source": [
@@ -221,7 +221,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "98fe0be7",
+ "id": "3336aae7",
"metadata": {},
"outputs": [],
"source": [
@@ -235,7 +235,7 @@
},
{
"cell_type": "markdown",
- "id": "5d06a98a",
+ "id": "374697c4",
"metadata": {},
"source": [
"Plot the intermediate spectorgram."
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b1a6bfb4",
+ "id": "fe6410ee",
"metadata": {},
"outputs": [],
"source": [
@@ -258,7 +258,7 @@
},
{
"cell_type": "markdown",
- "id": "25047d56",
+ "id": "2a72d9f1",
"metadata": {},
"source": [
"Syntesize audio."
@@ -267,7 +267,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a3df10cc",
+ "id": "294f01a3",
"metadata": {},
"outputs": [],
"source": [
@@ -277,7 +277,7 @@
},
{
"cell_type": "markdown",
- "id": "43d65eee",
+ "id": "12c73999",
"metadata": {},
"source": [
"Write audio to wav file."
@@ -286,7 +286,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1a2dc09b",
+ "id": "90f9473d",
"metadata": {},
"outputs": [],
"source": [
@@ -296,7 +296,7 @@
},
{
"cell_type": "markdown",
- "id": "f65df362",
+ "id": "fc318e09",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
index 8abf89f291cd..decb4a0a135e 100644
--- a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "53fedcca",
+ "id": "51261007",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -44,7 +44,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "42c43094",
+ "id": "814b4492",
"metadata": {},
"outputs": [],
"source": [
@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "671ff30d",
+ "id": "2298cb7e",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "5c9642f0",
+ "id": "29b46a0c",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset."
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6f2e23d4",
+ "id": "d1275a51",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "26cd267d",
+ "id": "e8aad620",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -105,7 +105,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7eba66f5",
+ "id": "faf80cee",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "13cd5fd7",
+ "id": "aaafcd4f",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probably hypothesis according to the model."
@@ -132,7 +132,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9cfbac5a",
+ "id": "5a6928bf",
"metadata": {},
"outputs": [],
"source": [
@@ -144,7 +144,7 @@
},
{
"cell_type": "markdown",
- "id": "3ebb002a",
+ "id": "bd02ad85",
"metadata": {},
"source": [
"Display the result."
@@ -153,7 +153,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2b3192d3",
+ "id": "2c9c3c47",
"metadata": {},
"outputs": [],
"source": [
@@ -167,7 +167,7 @@
},
{
"cell_type": "markdown",
- "id": "cdc752fc",
+ "id": "5c269d3b",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
index 0da7a08cc9e5..fc0ab315a1a5 100644
--- a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "56b1bb68",
+ "id": "2e13818a",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b90a02b9",
+ "id": "fbefe7f4",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "33a27d31",
+ "id": "358a570c",
"metadata": {},
"outputs": [],
"source": [
@@ -84,7 +84,7 @@
},
{
"cell_type": "markdown",
- "id": "835d2c99",
+ "id": "4b869898",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset."
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cea598c8",
+ "id": "5ee0b5b8",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "a6d2835f",
+ "id": "15eecfc1",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dc102468",
+ "id": "80bc3d09",
"metadata": {},
"outputs": [],
"source": [
@@ -133,7 +133,7 @@
},
{
"cell_type": "markdown",
- "id": "3593e896",
+ "id": "7a67672d",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probably hypothesis according to the model."
@@ -142,7 +142,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "025d5e51",
+ "id": "e0683c8e",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "a0e9d0a2",
+ "id": "b32d74d7",
"metadata": {},
"source": [
"Display the result."
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0a11b444",
+ "id": "b26dc6ef",
"metadata": {},
"outputs": [],
"source": [
@@ -177,7 +177,7 @@
},
{
"cell_type": "markdown",
- "id": "e9e4ccf9",
+ "id": "6c4346d4",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
index 12090f902803..7bc6fdbcb655 100644
--- a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b8a37dea",
+ "id": "f158119b",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "09165f51",
+ "id": "fa3af28e",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b44016dd",
+ "id": "1fd4f344",
"metadata": {},
"outputs": [],
"source": [
@@ -84,7 +84,7 @@
},
{
"cell_type": "markdown",
- "id": "6dc38aef",
+ "id": "4646010c",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset."
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "57fca7ac",
+ "id": "baee173b",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "7e95ffc2",
+ "id": "c07f9904",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e655f17f",
+ "id": "f109f2d2",
"metadata": {},
"outputs": [],
"source": [
@@ -133,7 +133,7 @@
},
{
"cell_type": "markdown",
- "id": "aa3ce0ad",
+ "id": "0ebeaee9",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -142,7 +142,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d13d9758",
+ "id": "0f3e8608",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "eebb5145",
+ "id": "a0981246",
"metadata": {},
"source": [
"Display the result."
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "39630063",
+ "id": "95c71fcd",
"metadata": {},
"outputs": [],
"source": [
@@ -177,7 +177,7 @@
},
{
"cell_type": "markdown",
- "id": "0ea98266",
+ "id": "db9ccf36",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
index ac62e0b8cacc..61573ae753d5 100644
--- a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "57b52479",
+ "id": "32678419",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -56,7 +56,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "91d7662e",
+ "id": "b5ccdd00",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
},
{
"cell_type": "markdown",
- "id": "a3ae3dd3",
+ "id": "560361f5",
"metadata": {},
"source": [
"Load an SSD model pretrained on COCO dataset, as well as a set of utility methods for convenient and comprehensive formatting of input and output of the model."
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ebbee6ae",
+ "id": "427ebb5c",
"metadata": {},
"outputs": [],
"source": [
@@ -86,7 +86,7 @@
},
{
"cell_type": "markdown",
- "id": "97662cc4",
+ "id": "7b5d81b1",
"metadata": {},
"source": [
"Now, prepare the loaded model for inference"
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9b2300d",
+ "id": "6d566a07",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "168ed8b3",
+ "id": "e4d351f8",
"metadata": {},
"source": [
"Prepare input images for object detection.\n",
@@ -115,7 +115,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dc4ea544",
+ "id": "35142436",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "e510e6da",
+ "id": "b45dfc9f",
"metadata": {},
"source": [
"Format the images to comply with the network input and convert them to tensor."
@@ -137,7 +137,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "96931b17",
+ "id": "93cb9230",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "73f5a1a3",
+ "id": "adede3db",
"metadata": {},
"source": [
"Run the SSD network to perform object detection."
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "442298e8",
+ "id": "0eea2442",
"metadata": {},
"outputs": [],
"source": [
@@ -166,7 +166,7 @@
},
{
"cell_type": "markdown",
- "id": "fd564752",
+ "id": "ba253ee0",
"metadata": {},
"source": [
"By default, raw output from SSD network per input image contains\n",
@@ -177,7 +177,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "693d401e",
+ "id": "040ae7db",
"metadata": {},
"outputs": [],
"source": [
@@ -187,7 +187,7 @@
},
{
"cell_type": "markdown",
- "id": "ed910a84",
+ "id": "37fe2fd5",
"metadata": {},
"source": [
"The model was trained on COCO dataset, which we need to access in order to translate class IDs into object names.\n",
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b3929454",
+ "id": "23f1c236",
"metadata": {},
"outputs": [],
"source": [
@@ -206,7 +206,7 @@
},
{
"cell_type": "markdown",
- "id": "0406c5c3",
+ "id": "da5cb1ff",
"metadata": {},
"source": [
"Finally, let's visualize our detections"
@@ -215,7 +215,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e01cfaa2",
+ "id": "05b4ed9c",
"metadata": {},
"outputs": [],
"source": [
@@ -240,7 +240,7 @@
},
{
"cell_type": "markdown",
- "id": "33345c5d",
+ "id": "cc2062d8",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
index 762a096dc100..6e61db9bad48 100644
--- a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "9ae6f131",
+ "id": "685a22b9",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -41,7 +41,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fb4557a5",
+ "id": "03f5c26a",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "0bb52fb1",
+ "id": "463a2f1e",
"metadata": {},
"source": [
"Load the Tacotron2 model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) and prepare it for inference:"
@@ -62,7 +62,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "78c83a25",
+ "id": "30de7ec5",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
- "id": "52628072",
+ "id": "54ec3e58",
"metadata": {},
"source": [
"Load pretrained WaveGlow model"
@@ -83,7 +83,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0dc9501d",
+ "id": "b85554f7",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "ff96f4ac",
+ "id": "1bbcb40c",
"metadata": {},
"source": [
"Now, let's make the model say:"
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "20940b35",
+ "id": "ee9a9521",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
},
{
"cell_type": "markdown",
- "id": "93acb984",
+ "id": "83ad989b",
"metadata": {},
"source": [
"Format the input using utility methods"
@@ -122,7 +122,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "071949af",
+ "id": "607b6a6b",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "3bc4aa40",
+ "id": "94bda550",
"metadata": {},
"source": [
"Run the chained models:"
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "69f39038",
+ "id": "c8dd5e6d",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "e95ea0d2",
+ "id": "48777a22",
"metadata": {},
"source": [
"You can write it to a file and listen to it"
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3493a902",
+ "id": "157c8a2e",
"metadata": {},
"outputs": [],
"source": [
@@ -173,7 +173,7 @@
},
{
"cell_type": "markdown",
- "id": "bb845373",
+ "id": "a768718e",
"metadata": {},
"source": [
"Alternatively, play it right away in a notebook with IPython widgets"
@@ -182,7 +182,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "20cf0ba5",
+ "id": "a5baf59a",
"metadata": {},
"outputs": [],
"source": [
@@ -192,7 +192,7 @@
},
{
"cell_type": "markdown",
- "id": "114da168",
+ "id": "d53fceb5",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
index d697c72dd5d0..1c6c03270ed0 100644
--- a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "cd21a314",
+ "id": "0c9ae3d8",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -39,7 +39,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6381a0d8",
+ "id": "2b3124bc",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
- "id": "f6623a9b",
+ "id": "0aa2f02b",
"metadata": {},
"source": [
"Load the WaveGlow model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/)"
@@ -60,7 +60,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c19eaab6",
+ "id": "c55dd427",
"metadata": {},
"outputs": [],
"source": [
@@ -70,7 +70,7 @@
},
{
"cell_type": "markdown",
- "id": "88c9760f",
+ "id": "e36518c9",
"metadata": {},
"source": [
"Prepare the WaveGlow model for inference"
@@ -79,7 +79,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "44e5fd57",
+ "id": "7ad66c04",
"metadata": {},
"outputs": [],
"source": [
@@ -90,7 +90,7 @@
},
{
"cell_type": "markdown",
- "id": "4c86245a",
+ "id": "21bc3db6",
"metadata": {},
"source": [
"Load a pretrained Tacotron2 model"
@@ -99,7 +99,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "17befa81",
+ "id": "39dc2a06",
"metadata": {},
"outputs": [],
"source": [
@@ -110,7 +110,7 @@
},
{
"cell_type": "markdown",
- "id": "48440b0e",
+ "id": "e4678b8c",
"metadata": {},
"source": [
"Now, let's make the model say:"
@@ -119,7 +119,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6194e202",
+ "id": "9761d5d3",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "5bb1f438",
+ "id": "0801a537",
"metadata": {},
"source": [
"Format the input using utility methods"
@@ -137,7 +137,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d7b8a466",
+ "id": "46af6c44",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "de0598ae",
+ "id": "56dd5d0f",
"metadata": {},
"source": [
"Run the chained models"
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3841cff2",
+ "id": "bd3d4b97",
"metadata": {},
"outputs": [],
"source": [
@@ -169,7 +169,7 @@
},
{
"cell_type": "markdown",
- "id": "d7fc699e",
+ "id": "92553612",
"metadata": {},
"source": [
"You can write it to a file and listen to it"
@@ -178,7 +178,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e19067f6",
+ "id": "d8c9a78f",
"metadata": {},
"outputs": [],
"source": [
@@ -188,7 +188,7 @@
},
{
"cell_type": "markdown",
- "id": "9d1c0d46",
+ "id": "c1c72265",
"metadata": {},
"source": [
"Alternatively, play it right away in a notebook with IPython widgets"
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f5514e0d",
+ "id": "47783534",
"metadata": {},
"outputs": [],
"source": [
@@ -207,7 +207,7 @@
},
{
"cell_type": "markdown",
- "id": "ee211741",
+ "id": "263021ca",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/pytorch_fairseq_roberta.ipynb b/assets/hub/pytorch_fairseq_roberta.ipynb
index 12d3ea920099..c70687677bfd 100644
--- a/assets/hub/pytorch_fairseq_roberta.ipynb
+++ b/assets/hub/pytorch_fairseq_roberta.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "efc5e548",
+ "id": "421d080a",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3008cba1",
+ "id": "e7c36e0d",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "b2812dfb",
+ "id": "6804053a",
"metadata": {},
"source": [
"### Example\n",
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5cb23891",
+ "id": "0803b7a4",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "e172c23c",
+ "id": "db3b0494",
"metadata": {},
"source": [
"##### Apply Byte-Pair Encoding (BPE) to input text"
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "097e2cb1",
+ "id": "22477fed",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "98422303",
+ "id": "3e59ef2b",
"metadata": {},
"source": [
"##### Extract features from RoBERTa"
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "87514c50",
+ "id": "bdbbeefb",
"metadata": {},
"outputs": [],
"source": [
@@ -120,7 +120,7 @@
},
{
"cell_type": "markdown",
- "id": "f964361c",
+ "id": "74dc95d9",
"metadata": {},
"source": [
"##### Use RoBERTa for sentence-pair classification tasks"
@@ -129,7 +129,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f4a1cd1e",
+ "id": "904fbebe",
"metadata": {},
"outputs": [],
"source": [
@@ -151,7 +151,7 @@
},
{
"cell_type": "markdown",
- "id": "1c6ee6cc",
+ "id": "d44af1c9",
"metadata": {},
"source": [
"##### Register a new (randomly initialized) classification head"
@@ -160,7 +160,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e1789f1b",
+ "id": "411a7e32",
"metadata": {},
"outputs": [],
"source": [
@@ -170,7 +170,7 @@
},
{
"cell_type": "markdown",
- "id": "18cf8278",
+ "id": "6d82a8ec",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/pytorch_fairseq_translation.ipynb b/assets/hub/pytorch_fairseq_translation.ipynb
index 17ef4d439204..189676c15e9d 100644
--- a/assets/hub/pytorch_fairseq_translation.ipynb
+++ b/assets/hub/pytorch_fairseq_translation.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a89652cd",
+ "id": "61e54d52",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -37,7 +37,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2404f451",
+ "id": "93db2c4d",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
- "id": "4f522619",
+ "id": "c75f4862",
"metadata": {},
"source": [
"### English-to-French Translation\n",
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0b619338",
+ "id": "e68f3b18",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
- "id": "58dfa153",
+ "id": "425c1dce",
"metadata": {},
"source": [
"### English-to-German Translation\n",
@@ -123,7 +123,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bea23e0f",
+ "id": "3533ec94",
"metadata": {},
"outputs": [],
"source": [
@@ -142,7 +142,7 @@
},
{
"cell_type": "markdown",
- "id": "bac14ef1",
+ "id": "cc5673de",
"metadata": {},
"source": [
"We can also do a round-trip translation to create a paraphrase:"
@@ -151,7 +151,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8bb607cb",
+ "id": "317bfcd0",
"metadata": {},
"outputs": [],
"source": [
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "a3a17034",
+ "id": "43c8869b",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/pytorch_vision_alexnet.ipynb b/assets/hub/pytorch_vision_alexnet.ipynb
index 68b3b70df331..1d73f373c9b3 100644
--- a/assets/hub/pytorch_vision_alexnet.ipynb
+++ b/assets/hub/pytorch_vision_alexnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "aee75859",
+ "id": "c3590188",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d82117c4",
+ "id": "83cad7ca",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "96eab0b1",
+ "id": "a674e9ef",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7d0d56b8",
+ "id": "1fc564ad",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "085acb98",
+ "id": "feacc415",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "32e05762",
+ "id": "bdf61a7d",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "31021a86",
+ "id": "e47c465d",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "059c5815",
+ "id": "cb2f406d",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
index 9fd83eb8210f..964419d51157 100644
--- a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
+++ b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "0a56e457",
+ "id": "e930d22b",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "22724d07",
+ "id": "0c333e3b",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "c651e8dc",
+ "id": "1e94634e",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5752f1b5",
+ "id": "bd399d6f",
"metadata": {},
"outputs": [],
"source": [
@@ -68,7 +68,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9365dd9",
+ "id": "e5441286",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "d306f676",
+ "id": "574ac297",
"metadata": {},
"source": [
"The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n",
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "679e36d5",
+ "id": "ae71f93c",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "225f8468",
+ "id": "cb3ecb66",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_densenet.ipynb b/assets/hub/pytorch_vision_densenet.ipynb
index a637d7d4fedb..459232b22eb0 100644
--- a/assets/hub/pytorch_vision_densenet.ipynb
+++ b/assets/hub/pytorch_vision_densenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8b6df43e",
+ "id": "7cf9a3ba",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "74c2ccdb",
+ "id": "11988169",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "9084b36d",
+ "id": "5f8ead6b",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "163a8888",
+ "id": "45263b0e",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "35157ab9",
+ "id": "ed16a7f0",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "92537410",
+ "id": "4616f4a1",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ac42c453",
+ "id": "3bc71d21",
"metadata": {},
"outputs": [],
"source": [
@@ -127,7 +127,7 @@
},
{
"cell_type": "markdown",
- "id": "d0057b86",
+ "id": "27446a14",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_fcn_resnet101.ipynb b/assets/hub/pytorch_vision_fcn_resnet101.ipynb
index 531d2cee5121..45d1ea176666 100644
--- a/assets/hub/pytorch_vision_fcn_resnet101.ipynb
+++ b/assets/hub/pytorch_vision_fcn_resnet101.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "cf83b2f4",
+ "id": "626ad083",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c4a513d4",
+ "id": "5de8ed40",
"metadata": {},
"outputs": [],
"source": [
@@ -37,7 +37,7 @@
},
{
"cell_type": "markdown",
- "id": "1de73adf",
+ "id": "d7fb2449",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f2ea473c",
+ "id": "03c2b0ba",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "be939950",
+ "id": "deeed568",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "3373629a",
+ "id": "e886eb0c",
"metadata": {},
"source": [
"The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n",
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "166276ee",
+ "id": "4a0f2f7e",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "6ad48b60",
+ "id": "e53c7877",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_ghostnet.ipynb b/assets/hub/pytorch_vision_ghostnet.ipynb
index 741f2cb7c616..f96bae4df0d6 100644
--- a/assets/hub/pytorch_vision_ghostnet.ipynb
+++ b/assets/hub/pytorch_vision_ghostnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "d1ca5a52",
+ "id": "20cd6fdd",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b52ed12f",
+ "id": "324b3b8f",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "23876653",
+ "id": "46f8b4bb",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2213057c",
+ "id": "19c3d187",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "069bf9c0",
+ "id": "85d6342d",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "78ef1947",
+ "id": "0d2131b9",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "62e26788",
+ "id": "35348308",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "23ccd321",
+ "id": "b46b1ed7",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_googlenet.ipynb b/assets/hub/pytorch_vision_googlenet.ipynb
index 5c07fa51e5ce..d13fd12cbfce 100644
--- a/assets/hub/pytorch_vision_googlenet.ipynb
+++ b/assets/hub/pytorch_vision_googlenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3bf20fff",
+ "id": "d8231d82",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4e95b51d",
+ "id": "bb9b4e3c",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "4bb3d78b",
+ "id": "40609eaf",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c707defd",
+ "id": "e058ff24",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d795220d",
+ "id": "c8863351",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8d3d9e3c",
+ "id": "3f5d9edc",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bc519870",
+ "id": "d22b86cb",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "79695f17",
+ "id": "2a7a8ed7",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_hardnet.ipynb b/assets/hub/pytorch_vision_hardnet.ipynb
index f68c4173e64f..d98475cfeab3 100644
--- a/assets/hub/pytorch_vision_hardnet.ipynb
+++ b/assets/hub/pytorch_vision_hardnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "29acd236",
+ "id": "8cc68425",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "64e45db9",
+ "id": "4e614a1f",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "9df7999e",
+ "id": "c6d35b46",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "45a7b33d",
+ "id": "aab2ca00",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f5213d6d",
+ "id": "cd407f99",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6d8448fe",
+ "id": "1fc390cf",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "45a33f0c",
+ "id": "eee04102",
"metadata": {},
"outputs": [],
"source": [
@@ -127,7 +127,7 @@
},
{
"cell_type": "markdown",
- "id": "8d1ada3f",
+ "id": "1cefd379",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_ibnnet.ipynb b/assets/hub/pytorch_vision_ibnnet.ipynb
index d30a85dcfceb..26cf1e3666c5 100644
--- a/assets/hub/pytorch_vision_ibnnet.ipynb
+++ b/assets/hub/pytorch_vision_ibnnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "e0a24487",
+ "id": "1247db78",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "988fa773",
+ "id": "617343fc",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "675e3eca",
+ "id": "f56f526b",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8495c4f9",
+ "id": "a534c491",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b4afc7a0",
+ "id": "4241510f",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2f8828ea",
+ "id": "262023d7",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "756ebfa2",
+ "id": "43b13e43",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "cbc2d89e",
+ "id": "59428e0c",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_inception_v3.ipynb b/assets/hub/pytorch_vision_inception_v3.ipynb
index a22daa0eb0c7..69f75a422ac8 100644
--- a/assets/hub/pytorch_vision_inception_v3.ipynb
+++ b/assets/hub/pytorch_vision_inception_v3.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "780bb566",
+ "id": "fcc5d81e",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0cd1ec78",
+ "id": "129798b5",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "5efae8e6",
+ "id": "579d6b80",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "48d07d0b",
+ "id": "9adfdf45",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7bb0915d",
+ "id": "7f5fd1fe",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ecacea95",
+ "id": "c3bf801d",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0ec1d157",
+ "id": "290ed617",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "f8abcf37",
+ "id": "239c2b52",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_meal_v2.ipynb b/assets/hub/pytorch_vision_meal_v2.ipynb
index 11e50daca88d..6dc4963d1c70 100644
--- a/assets/hub/pytorch_vision_meal_v2.ipynb
+++ b/assets/hub/pytorch_vision_meal_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "9c8426c3",
+ "id": "1e90ecfb",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -27,7 +27,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c8abd7d0",
+ "id": "d39d4ce4",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e854754d",
+ "id": "7f44b600",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
- "id": "9271f083",
+ "id": "1b4a7ed1",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -65,7 +65,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a9f6caba",
+ "id": "efaac451",
"metadata": {},
"outputs": [],
"source": [
@@ -79,7 +79,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "827843d4",
+ "id": "1dcafc68",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fb9719e5",
+ "id": "a1d2736b",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a35bca95",
+ "id": "f7a64718",
"metadata": {},
"outputs": [],
"source": [
@@ -139,7 +139,7 @@
},
{
"cell_type": "markdown",
- "id": "f3d29d9f",
+ "id": "36aa8253",
"metadata": {},
"source": [
"### Model Description\n",
@@ -167,7 +167,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f861060e",
+ "id": "93f19c67",
"metadata": {},
"outputs": [],
"source": [
@@ -181,7 +181,7 @@
},
{
"cell_type": "markdown",
- "id": "ab6e9c4c",
+ "id": "81591d4d",
"metadata": {},
"source": [
"@inproceedings{shen2019MEAL,\n",
diff --git a/assets/hub/pytorch_vision_mobilenet_v2.ipynb b/assets/hub/pytorch_vision_mobilenet_v2.ipynb
index 36c6c9e90898..8865d086f6c7 100644
--- a/assets/hub/pytorch_vision_mobilenet_v2.ipynb
+++ b/assets/hub/pytorch_vision_mobilenet_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3b160c25",
+ "id": "fdfd9882",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3fec733d",
+ "id": "2556e657",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "b051b510",
+ "id": "82a6eaec",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "53560c20",
+ "id": "e811f0bd",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7518785d",
+ "id": "340a13f2",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e328296b",
+ "id": "3fd48808",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "abd144fe",
+ "id": "b57e80c4",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "5a1b19bc",
+ "id": "85b32bac",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_once_for_all.ipynb b/assets/hub/pytorch_vision_once_for_all.ipynb
index b5fe184344cc..f92672a14dbc 100644
--- a/assets/hub/pytorch_vision_once_for_all.ipynb
+++ b/assets/hub/pytorch_vision_once_for_all.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "bbbb126c",
+ "id": "c178bd13",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -29,7 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "60faae2c",
+ "id": "78f94756",
"metadata": {},
"outputs": [],
"source": [
@@ -45,7 +45,7 @@
},
{
"cell_type": "markdown",
- "id": "7f32ab71",
+ "id": "efa6d81f",
"metadata": {},
"source": [
"| OFA Network | Design Space | Resolution | Width Multiplier | Depth | Expand Ratio | kernel Size | \n",
@@ -62,7 +62,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6383f953",
+ "id": "7cf285d3",
"metadata": {},
"outputs": [],
"source": [
@@ -77,7 +77,7 @@
},
{
"cell_type": "markdown",
- "id": "0a2291d0",
+ "id": "36a954a1",
"metadata": {},
"source": [
"### Get Specialized Architecture"
@@ -86,7 +86,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "73238019",
+ "id": "0d931ac9",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
- "id": "18bf0397",
+ "id": "c0df1729",
"metadata": {},
"source": [
"More models and configurations can be found in [once-for-all/model-zoo](https://github.com/mit-han-lab/once-for-all#evaluate-1)\n",
@@ -111,7 +111,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "848e0956",
+ "id": "c058d592",
"metadata": {},
"outputs": [],
"source": [
@@ -122,7 +122,7 @@
},
{
"cell_type": "markdown",
- "id": "c7c7d675",
+ "id": "a54c1b4b",
"metadata": {},
"source": [
"The model's prediction can be evalutaed by"
@@ -131,7 +131,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8f585b28",
+ "id": "773ee814",
"metadata": {},
"outputs": [],
"source": [
@@ -173,7 +173,7 @@
},
{
"cell_type": "markdown",
- "id": "2a24a7f7",
+ "id": "e6d4b834",
"metadata": {},
"source": [
"### Model Description\n",
@@ -189,7 +189,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0ddcc736",
+ "id": "95b14e4c",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/pytorch_vision_proxylessnas.ipynb b/assets/hub/pytorch_vision_proxylessnas.ipynb
index af141ca79cf4..00a4b949fe14 100644
--- a/assets/hub/pytorch_vision_proxylessnas.ipynb
+++ b/assets/hub/pytorch_vision_proxylessnas.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "dabe01f7",
+ "id": "08cb0da8",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5aa3b342",
+ "id": "bc34615d",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "b53e465b",
+ "id": "c3f93704",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "747cfe70",
+ "id": "00467b6c",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cfbb1358",
+ "id": "8da604d4",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c7e4aa21",
+ "id": "62541cff",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "72bcd177",
+ "id": "c6721b89",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "7a436cb4",
+ "id": "622e3b7e",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnest.ipynb b/assets/hub/pytorch_vision_resnest.ipynb
index 960db00335e9..f721a724103a 100644
--- a/assets/hub/pytorch_vision_resnest.ipynb
+++ b/assets/hub/pytorch_vision_resnest.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "6d0d28f7",
+ "id": "0c073d65",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a51275b1",
+ "id": "aebae9ae",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
},
{
"cell_type": "markdown",
- "id": "d1089225",
+ "id": "d70b1caf",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "da6746a0",
+ "id": "35fc1353",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4e78afe3",
+ "id": "a10c61c9",
"metadata": {},
"outputs": [],
"source": [
@@ -98,7 +98,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9d24f5b7",
+ "id": "74452e6f",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "96354faf",
+ "id": "6d572da3",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
},
{
"cell_type": "markdown",
- "id": "5296e67b",
+ "id": "dc723c02",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnet.ipynb b/assets/hub/pytorch_vision_resnet.ipynb
index 084649b6a075..6210ed7bc3c0 100644
--- a/assets/hub/pytorch_vision_resnet.ipynb
+++ b/assets/hub/pytorch_vision_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1e434c12",
+ "id": "3e2990ec",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f5c87974",
+ "id": "737b1ae1",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "5ff2b716",
+ "id": "7be5e57d",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eb6b9431",
+ "id": "aa63bb64",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8c72ba75",
+ "id": "3a7aad81",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "904257e8",
+ "id": "f28b5f4f",
"metadata": {},
"outputs": [],
"source": [
@@ -111,7 +111,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "98ff0879",
+ "id": "38ac23c8",
"metadata": {},
"outputs": [],
"source": [
@@ -126,7 +126,7 @@
},
{
"cell_type": "markdown",
- "id": "463a6d40",
+ "id": "d961a31f",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnext.ipynb b/assets/hub/pytorch_vision_resnext.ipynb
index bb53e2e24e63..8e3a699f5e84 100644
--- a/assets/hub/pytorch_vision_resnext.ipynb
+++ b/assets/hub/pytorch_vision_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "e14c84d0",
+ "id": "52aabb6e",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "09bd863d",
+ "id": "eb380751",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "b3d9e523",
+ "id": "b7660cef",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f2a1fa8f",
+ "id": "b6298805",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d99ec928",
+ "id": "4c84c13e",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "56c818f4",
+ "id": "ff837592",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "79edb573",
+ "id": "d3f96ebd",
"metadata": {},
"outputs": [],
"source": [
@@ -125,7 +125,7 @@
},
{
"cell_type": "markdown",
- "id": "fa0e9e28",
+ "id": "f97b2ac8",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_shufflenet_v2.ipynb b/assets/hub/pytorch_vision_shufflenet_v2.ipynb
index dcdf0a839093..9a0f98cdfcd8 100644
--- a/assets/hub/pytorch_vision_shufflenet_v2.ipynb
+++ b/assets/hub/pytorch_vision_shufflenet_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a24ea4d9",
+ "id": "cc4bde5a",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b2bc531a",
+ "id": "4cf5a0dd",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "0c016aea",
+ "id": "ce64c90a",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "14bbfd49",
+ "id": "6cedb155",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c1336846",
+ "id": "6f6cb4f5",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "73b680de",
+ "id": "1558dfe7",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "824fa962",
+ "id": "0a5dd44c",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "16f492a5",
+ "id": "aefa46a5",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_snnmlp.ipynb b/assets/hub/pytorch_vision_snnmlp.ipynb
index e09baf591ee3..4b8a3535e62d 100644
--- a/assets/hub/pytorch_vision_snnmlp.ipynb
+++ b/assets/hub/pytorch_vision_snnmlp.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "07c84b7f",
+ "id": "00dd5ae0",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "16d1ba7a",
+ "id": "759f3761",
"metadata": {},
"outputs": [],
"source": [
@@ -37,7 +37,7 @@
},
{
"cell_type": "markdown",
- "id": "b180cd5d",
+ "id": "1c37b4a8",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "59fc02ae",
+ "id": "1667817d",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b250104b",
+ "id": "71194138",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "dd0f47e7",
+ "id": "2ef5a4f8",
"metadata": {},
"source": [
"### Model Description\n",
@@ -121,7 +121,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "25959990",
+ "id": "41b40d01",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/pytorch_vision_squeezenet.ipynb b/assets/hub/pytorch_vision_squeezenet.ipynb
index b4f063c3fb26..4fd928313e2a 100644
--- a/assets/hub/pytorch_vision_squeezenet.ipynb
+++ b/assets/hub/pytorch_vision_squeezenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8648e506",
+ "id": "cb1f7488",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e45f4162",
+ "id": "031aad51",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "e734fbbb",
+ "id": "5b9ac81a",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "043b0655",
+ "id": "7d2f6484",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ff091ab1",
+ "id": "b16d1d7e",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "91a5e98b",
+ "id": "5509a650",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "85e4a623",
+ "id": "4909ae18",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "b20d1936",
+ "id": "99748070",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_vgg.ipynb b/assets/hub/pytorch_vision_vgg.ipynb
index 9d51e328ee9b..349b66ffa666 100644
--- a/assets/hub/pytorch_vision_vgg.ipynb
+++ b/assets/hub/pytorch_vision_vgg.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "c9cb3220",
+ "id": "74f2d290",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b89a1d01",
+ "id": "c5d31ee3",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
- "id": "d38f094f",
+ "id": "88766d01",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ff3bf136",
+ "id": "51a23985",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0a41d25a",
+ "id": "3ebf1b06",
"metadata": {},
"outputs": [],
"source": [
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d56fdad9",
+ "id": "76df14ea",
"metadata": {},
"outputs": [],
"source": [
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "de41872f",
+ "id": "c4114d8d",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "336e95c4",
+ "id": "c544c79c",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_wide_resnet.ipynb b/assets/hub/pytorch_vision_wide_resnet.ipynb
index e5eeef6c5493..fc1d499863db 100644
--- a/assets/hub/pytorch_vision_wide_resnet.ipynb
+++ b/assets/hub/pytorch_vision_wide_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "77f79299",
+ "id": "6a8ff628",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "29b15724",
+ "id": "0ae3940d",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
},
{
"cell_type": "markdown",
- "id": "c0e729fb",
+ "id": "845dc953",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8aba0b4d",
+ "id": "c786580b",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fcb49f9b",
+ "id": "fcfcc8b4",
"metadata": {},
"outputs": [],
"source": [
@@ -98,7 +98,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bcf9694e",
+ "id": "ed9fc1b8",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "864ef6e4",
+ "id": "ab050989",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
},
{
"cell_type": "markdown",
- "id": "28eeb4f8",
+ "id": "929afa5a",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
index 9751aebd35d1..0a28e4948dbc 100644
--- a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
+++ b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "2165a68f",
+ "id": "e8c5f3ec",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fc9683ac",
+ "id": "aaf48de2",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a123544",
+ "id": "477358bc",
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +59,7 @@
},
{
"cell_type": "markdown",
- "id": "3e4a853f",
+ "id": "30b458ba",
"metadata": {},
"source": [
"### Model Description\n",
@@ -94,7 +94,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4322123f",
+ "id": "6c625539",
"metadata": {},
"outputs": [],
"source": [
@@ -104,7 +104,7 @@
},
{
"cell_type": "markdown",
- "id": "32d12ba1",
+ "id": "661be97b",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/simplenet.ipynb b/assets/hub/simplenet.ipynb
index 151fc64a5250..3479f52a43e2 100644
--- a/assets/hub/simplenet.ipynb
+++ b/assets/hub/simplenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "d9326a55",
+ "id": "01714944",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "824249f6",
+ "id": "5ba3eee5",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
- "id": "e45edc22",
+ "id": "8063ac56",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "90f291e9",
+ "id": "1dc0fe6b",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "596b2f33",
+ "id": "6c46b36c",
"metadata": {},
"outputs": [],
"source": [
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "64d56587",
+ "id": "8466a648",
"metadata": {},
"outputs": [],
"source": [
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8a5eff71",
+ "id": "979540e2",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "091fb66f",
+ "id": "3dbe599e",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-models_stt.ipynb b/assets/hub/snakers4_silero-models_stt.ipynb
index 7d3b553ffa24..67128fb8ea32 100644
--- a/assets/hub/snakers4_silero-models_stt.ipynb
+++ b/assets/hub/snakers4_silero-models_stt.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8bdd0aff",
+ "id": "895ef8c0",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "821fc27c",
+ "id": "cfb86421",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9d22d976",
+ "id": "141cb700",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
},
{
"cell_type": "markdown",
- "id": "a5dbdf35",
+ "id": "51599396",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-models_tts.ipynb b/assets/hub/snakers4_silero-models_tts.ipynb
index 68ac689f7158..0ce590ad79e9 100644
--- a/assets/hub/snakers4_silero-models_tts.ipynb
+++ b/assets/hub/snakers4_silero-models_tts.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "6cc8aa2a",
+ "id": "939a7465",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -20,7 +20,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a78f68c9",
+ "id": "9da9c282",
"metadata": {},
"outputs": [],
"source": [
@@ -32,7 +32,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a7ec42ab",
+ "id": "5308a23c",
"metadata": {},
"outputs": [],
"source": [
@@ -55,7 +55,7 @@
},
{
"cell_type": "markdown",
- "id": "4c4f30dc",
+ "id": "9ff11761",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-vad_vad.ipynb b/assets/hub/snakers4_silero-vad_vad.ipynb
index 3a84aaf715db..1318c74ec505 100644
--- a/assets/hub/snakers4_silero-vad_vad.ipynb
+++ b/assets/hub/snakers4_silero-vad_vad.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "d5baea6e",
+ "id": "d0f975b5",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "976a5d42",
+ "id": "7339c387",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f5004077",
+ "id": "de4ced88",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
- "id": "eab1a1a3",
+ "id": "199992b8",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/ultralytics_yolov5.ipynb b/assets/hub/ultralytics_yolov5.ipynb
index d3b579d3a58c..110fef82a86a 100644
--- a/assets/hub/ultralytics_yolov5.ipynb
+++ b/assets/hub/ultralytics_yolov5.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "76e97349",
+ "id": "8722f086",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -29,7 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d1fbf37a",
+ "id": "1e34a8d3",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "8fb1a9c4",
+ "id": "74c22ae7",
"metadata": {},
"source": [
"## Model Description\n",
@@ -82,7 +82,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6c2e9a33",
+ "id": "f90b7378",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
},
{
"cell_type": "markdown",
- "id": "7705fd67",
+ "id": "b0746367",
"metadata": {},
"source": [
"## Citation\n",
@@ -125,7 +125,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "71d4ac3f",
+ "id": "d2e6e7c0",
"metadata": {
"attributes": {
"classes": [
@@ -150,7 +150,7 @@
},
{
"cell_type": "markdown",
- "id": "c3240ea3",
+ "id": "9824b65c",
"metadata": {},
"source": [
"## Contact\n",
diff --git a/assets/images/scaling-recommendation-2d-sparse-parallelism/fg1.png b/assets/images/scaling-recommendation-2d-sparse-parallelism/fg1.png
new file mode 100644
index 000000000000..08674e9efda3
Binary files /dev/null and b/assets/images/scaling-recommendation-2d-sparse-parallelism/fg1.png differ
diff --git a/assets/images/scaling-recommendation-2d-sparse-parallelism/fg2.png b/assets/images/scaling-recommendation-2d-sparse-parallelism/fg2.png
new file mode 100644
index 000000000000..45b60ca30c15
Binary files /dev/null and b/assets/images/scaling-recommendation-2d-sparse-parallelism/fg2.png differ
diff --git a/blog/10/index.html b/blog/10/index.html
index 8046ceabd845..a4c8c9e1adde 100644
--- a/blog/10/index.html
+++ b/blog/10/index.html
@@ -323,13 +323,11 @@
Featured Post
- Overview
+
This post is the fourth part of a multi-series blog focused on how to accelerate generative AI mo...
-
-
-
+
Read More
@@ -349,6 +347,25 @@