tools/accuracy_checker/accuracy_checker/launcher/dlsdk_launcher_readme.md
+9 −5
@@ -9,7 +9,7 @@ It is possible to specify one or more devices via `-td, --target_devices` command line argument
* `model` - path to an xml file with the model for your topology, or to a compiled executable network.
* `weights` - path to a bin file with weights for your topology (optional; the argument can be omitted if the bin file is stored in the same directory as the model xml, or if you use a compiled blob).

-**Note:**
+**Note:**
You can generate an executable blob using [compile_tool](https://docs.openvinotoolkit.org/latest/_inference_engine_tools_compile_tool_README.html).
Before evaluating an executable blob, please make sure that the selected device supports it.
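For context, a minimal sketch of how the `model` and `weights` options might appear in the launcher section of an accuracy checker config; the file names and the `classification` adapter are illustrative assumptions, not part of the diff:

```yml
launchers:
  - framework: dlsdk
    # hypothetical IR files produced for your topology
    model: sample_model.xml
    weights: sample_model.bin
    adapter: classification
```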
@@ -28,7 +28,7 @@ You can provide:
In case you want to specify additional parameters for model conversion (data_type, input_shape and so on), you can use `mo_params` for arguments with values and `mo_flags` for positional arguments like `legacy_mxnet_model`.
The full list of supported parameters can be found in the Model Optimizer Developer Guide.

-Model will be converted before every evaluation.
+Model will be converted before every evaluation.
You can provide `converted_model_dir` for saving the converted model in a specific folder; otherwise, converted models will be saved in the path provided via the `-C` command line argument or in the source model directory.

* `adapter` - the approach for converting raw output into a representation of the dataset problem; some adapters can be specific to a framework. You can find detailed instructions on how to use adapters [here](../adapters/README.md).
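To illustrate on-the-fly conversion, a sketch of a launcher entry that passes `mo_params` and `mo_flags` to the Model Optimizer; the MXNet file name, input shape, and output folder are assumptions for the example:

```yml
launchers:
  - framework: dlsdk
    # hypothetical MXNet weights, converted by Model Optimizer before evaluation
    mxnet_weights: sample-0000.params
    mo_params:
      data_type: FP16
      input_shape: "(1,3,224,224)"
    mo_flags:
      - legacy_mxnet_model
    # optional folder for storing the converted IR
    converted_model_dir: converted_models
    adapter: classification
```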
@@ -43,14 +43,18 @@ Additionally you can provide device specific parameters:
* `gpu_extensions` (path to an extension *.xml file with OpenCL kernel descriptions for gpu).
* `bitstream` for running on FPGA.

-For setting device specific flags, you can use the `-dc` or `--device_config` command line option. The device config should be represented as a YML file with a dictionary, where keys are plugin configuration keys and values are their values respectively.
+Device config contains device specific options which should be set for the Inference Engine. For setting device specific flags, you can use the `-dc` or `--device_config` command line option. The device config should be represented as a YML file with a dictionary of one of two types (a sketch of both forms follows this list):
+1. Keys are plugin configuration keys and values are their values respectively. In this case the configuration will be applied to the currently running device.
+2. Keys are supported devices and values are the plugin configuration for each device. The plugin configuration is represented as a dictionary where keys are plugin specific configuration keys and values are their values respectively.
+
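A minimal sketch of the two accepted device config layouts; the `CPU_THREADS_NUM` key and its value are illustrative assumptions, and the `---` separator only marks that these are two alternative files, of which a real device config would use one:

```yml
# Form 1: plugin configuration keys only; applied to the currently running device
CPU_THREADS_NUM: "4"
---
# Form 2: top-level keys are device names, values are per-device plugin configuration
CPU:
  CPU_THREADS_NUM: "4"
```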
Each supported device has its own set of supported configuration parameters, which can be found on the device page in the [Inference Engine development guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html).

**Note:** Since OpenVINO 2020.4, on platforms with native bfloat16 support, models will be executed in this precision by default. To disable this behaviour, you need to use a device config with the following configuration:
```yml
-ENFORCE_BF16: "NO"
+CPU:
+  ENFORCE_BF16: "NO"
```
-Device config example can be found <a href="https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/sample/disable_bfloat16_device_config.yml">here</a>()
+Device config example can be found <a href="https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/sample/disable_bfloat16_device_config.yml">here</a>

Besides that, you can launch the model in `async_mode`: enable this option and optionally provide the number of infer requests (`num_requests`) to be used in the evaluation process. By default, if `num_requests` is not provided or the value `AUTO` is used, the number of requests will be assigned automatically for the specific device.
For a multi device configuration, async mode is always used. You can provide the number of requests for each device as part of the device specification: `MULTI:device_1(num_req_1),device_2(num_req_2)`, or in the `num_requests` config section (in this case, use a comma-separated list of integers, or a single value if the number of requests is equal for all devices).
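As a sketch of how the async options above might look in a launcher config, assuming a hypothetical `MULTI:CPU,GPU` target where CPU gets 4 requests and GPU gets 2; file names and values are illustrative:

```yml
launchers:
  - framework: dlsdk
    # hypothetical IR files
    model: sample_model.xml
    weights: sample_model.bin
    adapter: classification
    async_mode: True
    # comma-separated request counts, one per device in MULTI:CPU,GPU
    num_requests: 4,2
```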