## Evaluate Model Performance <a name="evaluate-model-performance"></a>
### Motivation
Model performance is the amount of data that your model can process per unit of time.
{% if is_nlp %}In NLP, model performance indicates how fast your model can process text samples and generate the desired output. It is usually measured in Samples Per Second (SPS).
{% else %}
In Computer Vision, model performance indicates how fast your model can process images and generate the desired output. It is usually measured in Frames Per Second (FPS).
{% endif %}
OpenVINO uses the term Inference to denote a single execution of the network.
Inference is the stage in which a trained model is used to predict outputs for test samples; it consists of a forward pass similar to the one performed during training.
In the OpenVINO toolkit, inference is performed by the Benchmark Tool.
{% if is_nlp %} Note that the Benchmark Tool was initially developed for the Computer Vision (CV) use case and reports inference results in Frames Per Second (FPS in CV = SPS in NLP). {% endif %}
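For illustration, here is a minimal sketch of how throughput could be measured with the OpenVINO Python API (`openvino.runtime`), assuming a model with a static input shape; the model path, device, and iteration count below are placeholders, not values from this tutorial.

```python
import time

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")               # placeholder IR path
compiled_model = core.compile_model(model, "CPU")  # placeholder device

# Build a random input matching the (assumed static) input shape of the model.
input_shape = tuple(compiled_model.input(0).shape)
dummy_input = np.random.rand(*input_shape).astype(np.float32)

infer_request = compiled_model.create_infer_request()

num_samples = 100
start = time.perf_counter()
for _ in range(num_samples):
    infer_request.infer({0: dummy_input})
elapsed = time.perf_counter() - start

# Throughput is the number of processed samples per unit of time:
# FPS for Computer Vision, SPS for NLP.
print(f"Throughput: {num_samples / elapsed:.2f} samples per second")
```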
### OpenVINO Tool: Benchmark Tool
#### Main usage
Benchmark Tool estimates deep learning inference performance on supported devices.
#### Description
There are two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented).
Refer to the [documentation](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html) for more details.
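As an illustration only, the sketch below contrasts the two modes using the OpenVINO Python API (`openvino.runtime`); the model path, device, number of parallel requests, and inputs are placeholder assumptions. The Benchmark Tool automates this kind of measurement loop and reports latency and throughput for you.

```python
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
compiled_model = core.compile_model(core.read_model("model.xml"), "CPU")  # placeholders
dummy_input = np.random.rand(*tuple(compiled_model.input(0).shape)).astype(np.float32)

# Synchronous (latency-oriented): a single request blocks until the result is ready.
request = compiled_model.create_infer_request()
result = request.infer({0: dummy_input})

# Asynchronous (throughput-oriented): several requests are processed in parallel.
queue = AsyncInferQueue(compiled_model, 4)          # 4 parallel infer requests
queue.set_callback(lambda req, userdata: None)      # collect per-request results here
for i in range(16):
    queue.start_async({0: dummy_input}, i)
queue.wait_all()
```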
#### Used Command-Line Arguments
{{ CLIToolEnum.benchmark_tool.format_to_markdown_table() | safe }}
Refer to the [documentation](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html) for the full list of available command-line arguments.
> **NOTE**: In this tutorial, the values of the `-b`, `-nstreams`, and `-t` arguments for the Benchmark Tool are set according to your last profiling experiment with this model in the DL Workbench.