Commit eba421b

Moved notebooks to a separate dir, improved CI for notebooks (openvinotoolkit#118)

Squashed commit messages:

* Moved notebooks to a separate dir
* Update sanity-check.yml
* Update
* Update sanity-check.yml
* Update sanity-check.yml
* Update sanity-check.yml
* Update sanity-check.yml
* Update sanity-check.yml
* Update

1 parent e3271e9 commit eba421b

8 files changed: +49 −43 lines

.github/workflows/sanity-check.yml (+12 −6)

```diff
@@ -28,9 +28,9 @@ jobs:
       run: |
         # Ensure the directories are passed correctly to the environment variable
         if [[ "${{ github.event_name }}" == "pull_request" ]]; then
-          subproject_dirs=$(git diff --name-only origin/master..HEAD | grep '^demos/' | xargs -I{} dirname "{}" | sort -u | tr '\n' ' ')
+          subproject_dirs=$(git diff --name-only origin/master..HEAD | grep -e '^demos' -e '^notebooks' | xargs -I{} dirname "{}" | sort -u | tr '\n' ' ')
         else
-          subproject_dirs=$(find demos -mindepth 1 -maxdepth 1 -type d ! -name utils | tr '\n' ' ')
+          subproject_dirs=$(find demos notebooks -mindepth 1 -maxdepth 1 -type d ! -name utils | tr '\n' ' ')
         fi
         echo "subproject_dirs=$subproject_dirs" >> $GITHUB_ENV
     - name: Categorize subprojects
@@ -44,11 +44,11 @@ jobs:
         for dir in $subproject_dirs; do
           if [ -f "$dir/package.json" ]; then
             js+=("$dir")
-          elif [ -f "$dir/main.ipynb" ]; then
+          elif find "$dir" -maxdepth 1 -name "*.ipynb" | grep -q "."; then
             notebook+=("$dir")
-          elif grep -q "gradio" "$dir/requirements.txt"; then
+          elif [ -f "$dir/requirements.txt" ] && grep -q "gradio" "$dir/requirements.txt"; then
             gradio+=("$dir")
-          elif grep -q -- "--stream" "$dir/main.py"; then
+          elif [ -f "$dir/main.py" ] && grep -q -- "--stream" "$dir/main.py"; then
             webcam+=("$dir")
           fi
         done
@@ -90,10 +90,16 @@ jobs:
       with:
         python: ${{ matrix.python }}
         project: ${{ matrix.subproject }}
+    - name: Use downloaded video as a stream
+      shell: bash
+      run: |
+        cd ${{ matrix.subproject }}
+        # replace video_path with sample_video.mp4
+        find . -name "*.ipynb" -exec sed -E -i "s/video_path\s*=\s*(['\"]?.*?['\"]?)/video_path=\\\\\"sample_video.mp4\\\\\"\\\n\",/g" {} +
     - uses: ./.github/reusable-steps/timeouted-action
       name: Run Notebook
       with:
-        command: jupyter nbconvert --to notebook --execute main.ipynb
+        command: jupyter nbconvert --to notebook --execute *.ipynb
         project: ${{ matrix.subproject }}

     gradio:
```
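The categorization branch above can be read outside of bash. A minimal Python equivalent of the same ordering (js, notebook, gradio, webcam) with the new existence guards — the function name and "uncategorized" fallback are illustrative, not part of the workflow:

```python
from pathlib import Path

def categorize(dir_path: Path) -> str:
    """Mirror the workflow's branch order: js, notebook, gradio, webcam."""
    if (dir_path / "package.json").is_file():
        return "js"
    # Any top-level notebook counts now, not just main.ipynb
    if any(dir_path.glob("*.ipynb")):
        return "notebook"
    # Check the file exists before grepping it, as the fixed workflow does
    req = dir_path / "requirements.txt"
    if req.is_file() and "gradio" in req.read_text():
        return "gradio"
    main_py = dir_path / "main.py"
    if main_py.is_file() and "--stream" in main_py.read_text():
        return "webcam"
    return "uncategorized"
```

The existence guards matter because the old `grep -q "gradio" "$dir/requirements.txt"` errored out on directories without a requirements.txt, which notebook dirs may not have.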

demos/CONTRIBUTING.md (+1 −1)

```diff
@@ -6,7 +6,7 @@ The goal of any demo in this directory is to present OpenVINO as an optimization
 
 Rules:
 - The demo must be standalone - no dependencies to other demos (dependency to utils is ok)
-- The demo must be a Python script (preferable one) called `main.py` or a Jupyter notebook
+- The demo must be a Python script called `main.py`
 - All dependencies must be pinned to specific, stable, and tested versions and provided in the corresponding `requirements.txt` file (script) or the first code cell (notebook)
 - If the demo is visual (produces any video/image output) it must add an OpenVINO watermark to the output video/image (see utils)
 - The demo must provide a README file with the instructions on installing the environment, setting up and running (+ changing the behavior if applicable)
```
File renamed without changes.

README (file name not captured in this view, +23 −23): every line was removed and re-added with identical text, so only whitespace or line endings changed. The file reads:

# Running YOLOv8 Object Detection with ONNX and OpenVINO

In this demo, we'll perform object detection leveraging YOLOv8 with Ultralytics, and with ONNX using the OpenVINO Execution Provider for enhanced performance, to detect up to 80 different objects (e.g., birds, dogs, etc.)
This sample was modified from one of the [available Onnx Runtime Inference examples here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/python/OpenVINO_EP/yolov8_object_detection).

<p align="center">
<img src="https://github.com/user-attachments/assets/a3e35991-0c3b-47e0-a94a-c70d7b135261"/>
</p>

### Installation Instructions
- Create a virtual environment using
```sh
python -m venv <venv-name>
```
- To activate the virtual environment use
```sh
\<venv-name>\Scripts\activate
```
- Install the required dependencies via pip
```sh
pip install -r requirements.txt
```
- Now you only need a Jupyter server to start.

demos/onnxruntime_yolov8_demo/main.ipynb → notebooks/onnxruntime_yolov8/YOLOv8 Object Detection with ONNX and OpenVINO Execution Provider.ipynb

+3 −3

```diff
@@ -419,8 +419,8 @@
 "#Inference on webcam or live streams\n",
 "import cv2\n",
 "\n",
-"stream = 0 #can set to video path like /path/input.mp4\n",
-"cap = cv2.VideoCapture(stream)\n",
+"video_path = 0 #can set to video path like /path/input.mp4\n",
+"cap = cv2.VideoCapture(video_path)\n",
 "assert cap.isOpened(), \"Error reading video file\"\n",
 "w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))\n",
 "frame_count = 0\n",
@@ -471,7 +471,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.11"
+"version": "3.11.10"
 }
 },
 "nbformat": 4,
```
requirements file (file name not captured in this view, +10 −10): every line was removed and re-added with identical text, so only whitespace or line endings changed. The file pins:

--extra-index-url https://download.pytorch.org/whl/cpu

jupyterlab==4.2.5

openvino==2024.3.0
ultralytics==8.2.81
onnxruntime-openvino==1.19.0
onnx==1.17.0
onnxruntime==1.19.2
setuptools==73.0.1
