
Commit ba106c2

Committed Apr 24, 2024
Update workflow to run example tests
1 parent 88139d3 · commit ba106c2

File tree: 4 files changed, +15 −14 lines changed

.github/workflows/test_openvino_examples.yml (+10 −10)

```diff
@@ -7,11 +7,11 @@ on:
   push:
     paths:
       - '.github/workflows/test_openvino_examples.yml'
-      - 'examples/openvino/*'
+      - 'examples/openvino/**'
   pull_request:
     paths:
       - '.github/workflows/test_openvino_examples.yml'
-      - 'examples/openvino/*'
+      - 'examples/openvino/**'
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
@@ -22,9 +22,9 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: ["3.8", "3.10"]
+        python-version: ["3.8", "3.11"]
 
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
 
     steps:
     - uses: actions/checkout@v2
@@ -35,12 +35,12 @@ jobs:
 
     - name: Install dependencies
       run: |
-        pip install optimum[openvino] jstyleson nncf pytest
-        pip install -r examples/openvino/audio-classification/requirements.txt
-        pip install -r examples/openvino/image-classification/requirements.txt
-        pip install -r examples/openvino/question-answering/requirements.txt
-        pip install -r examples/openvino/text-classification/requirements.txt
+        pip install .[openvino] jstyleson pytest
+        pip install -r examples/openvino/audio-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
+        pip install -r examples/openvino/image-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
+        pip install -r examples/openvino/question-answering/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
+        pip install -r examples/openvino/text-classification/requirements.txt --extra-index-url https://download.pytorch.org/whl/cpu
 
     - name: Test examples
       run: |
-        python -m pytest examples/openvino/test_examples.py
+        python -m pytest examples/openvino/test_examples.py
```
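The path-filter change above (`examples/openvino/*` → `examples/openvino/**`) widens the workflow trigger from direct children of the directory to everything nested beneath it. GitHub Actions uses its own glob matcher, but Python's `glob` with `recursive=True` is a close stand-in for illustration (an assumption for demonstration, not the Actions implementation):

```python
# Demonstrate the difference between '*' and '**' matching:
# '*' sees only direct children; '**' (recursive) also reaches nested files.
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    nested_dir = os.path.join(root, "examples", "openvino", "audio-classification")
    os.makedirs(nested_dir)
    nested_file = os.path.join(nested_dir, "requirements.txt")
    open(nested_file, "w").close()

    single = glob.glob(os.path.join(root, "examples", "openvino", "*"))
    double = glob.glob(os.path.join(root, "examples", "openvino", "**"), recursive=True)

    print(nested_file in single)  # False: '*' matches only the subdirectory itself
    print(nested_file in double)  # True: '**' descends into it
```

With the old single-star pattern, editing a file such as a per-example `requirements.txt` would not have re-run the workflow; the double-star pattern closes that gap.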

.github/workflows/test_openvino_notebooks.yml (+2 −2)

```diff
@@ -23,9 +23,9 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: ["3.8", "3.10"]
+        python-version: ["3.8", "3.11"]
 
-    runs-on: ubuntu-20.04
+    runs-on: ubuntu-22.04
 
     steps:
     - uses: actions/checkout@v2
```

examples/openvino/question-answering/README.md (+2 −1)

````diff
@@ -13,7 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 -->
-# Question answering
+# Question nswering
 
 This folder contains [`run_qa.py`](https://github.com/huggingface/optimum/blob/main/examples/openvino/question-answering/run_qa.py), a script to fine-tune a 🤗 Transformers model on a question answering dataset while applying quantization aware training (QAT). QAT can be easily applied by replacing the Transformers [`Trainer`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#trainer) with the Optimum [`OVTrainer`].
 A `QuestionAnsweringOVTrainer` is defined in [`trainer_qa.py`](https://github.com/huggingface/optimum/blob/main/examples/openvino/question-answering/trainer_qa.py), which inherits from `OVTrainer` and is adapted to perform evaluation for question answering tasks.
@@ -47,6 +47,7 @@ python run_qa.py \
 ```
 
 ### Joint Pruning, Quantization and Distillation (JPQD) for BERT on SQuAD1.0
+
 `OVTrainer` also provides an advanced optimization workflow through NNCF, where the Transformer model can be structurally pruned along with 8-bit quantization and distillation. Below is an example that demonstrates how to jointly prune and quantize BERT-base for SQuAD 1.0 using the NNCF config `--nncf_compression_config`, distilling from a BERT-large teacher. This example closely resembles the movement sparsification work of [Lagunas et al., 2021, Block Pruning For Faster Transformers](https://arxiv.org/pdf/2109.04838.pdf). It takes about 12 hours with a single V100 GPU, and ~40% of the weights of the Transformer blocks are pruned. To launch the script on multiple GPUs, specify `--nproc-per-node=<number of GPUs>`. Note that a different batch size and other hyperparameters might be required to achieve the same results as on a single GPU.
 
 For more on how to configure movement sparsity, see the NNCF documentation [here](https://github.com/openvinotoolkit/nncf/blob/develop/nncf/experimental/torch/sparsity/movement/MovementSparsity.md).
````

notebooks/openvino/requirements.txt (+1 −1)

```diff
@@ -1,4 +1,4 @@
-optimum-intel[openvino, nncf]
+optimum-intel[openvino]
 datasets
 evaluate[evaluator]
 ipywidgets
```
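This change drops the `nncf` extra from the `optimum-intel` requirement, mirroring the workflow's switch away from a separate `nncf` install. Requirement strings like these follow PEP 508; their extras can be inspected with the `packaging` library (bundled with pip, assumed importable here):

```python
# Parse the old and new requirement lines and compare their extras.
from packaging.requirements import Requirement

old = Requirement("optimum-intel[openvino, nncf]")
new = Requirement("optimum-intel[openvino]")

print(sorted(old.extras))  # ['nncf', 'openvino']
print(sorted(new.extras))  # ['openvino']
```

Because extras only add optional dependencies, removing `nncf` here means NNCF is no longer pulled in by this file; anything still needing it must install it explicitly.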
