`tools/accuracy_checker/README.md` (+15 −15)
@@ -131,7 +131,7 @@ models:
      - name: dataset_name
```
Optionally, you can use a global configuration. It can be useful for avoiding duplication if you have several models that should be run on the same dataset.
- Example of global definitions file can be found [here](dataset_definitions.yml). Global definitions will be merged with evaluation config in the runtime by dataset name.
+ An example of a global definitions file can be found <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/dataset_definitions.yml">here</a>. Global definitions are merged with the evaluation config at runtime by dataset name.
Parameters of the global configuration can be overwritten by the local config (e.g. if the definitions specify a resize with destination size 224 and the local config uses a resize with size 227, the value from the local config, 227, is used as the resize parameter).
You can use the `global_definitions` field to specify the path to global definitions directly in the model config, or pass it via the command line arguments (`-d`, `--definitions`).
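To make the merge-and-override behaviour concrete, a hypothetical local config might look like the following sketch (the model name and the override value are invented for illustration):

```yaml
models:
  - name: sample_model
    datasets:
      # matched by name against the entry in the global definitions file;
      # data source, conversion and metrics are picked up from there
      - name: dataset_name
        preprocessing:
          # local value overrides the resize size declared in the definitions
          - type: resize
            size: 227
```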
@@ -141,26 +141,26 @@ Launcher is a description of how your model should be executed.
Each launcher configuration starts with setting the `framework` name. Currently *caffe*, *dlsdk*, *mxnet*, *tf*, *tf_lite*, *opencv*, and *onnx_runtime* are supported. Launcher descriptions can differ between frameworks.
Please view:
- - [how to configure Caffe launcher](accuracy_checker/launcher/caffe_launcher_readme.md).
- - [how to configure DLSDK launcher](accuracy_checker/launcher/dlsdk_launcher_readme.md).
- - [how to configure OpenCV launcher](accuracy_checker/launcher/opencv_launcher_readme.md).
- - [how to configure MXNet Launcher](accuracy_checker/launcher/mxnet_launcher_readme.md).
- - [how to configure TensorFlow Launcher](accuracy_checker/launcher/tf_launcher_readme.md).
- - [how to configure TensorFlow Lite Launcher](accuracy_checker/launcher/tf_lite_launcher_readme.md).
- - [how to configure ONNX Runtime Launcher](accuracy_checker/launcher/onnx_runtime_launcher_readme.md).
- - [how to configure PyTorch Launcher](accuracy_checker/launcher/pytorch_launcher_readme.md)
+ - [How to configure Caffe launcher](accuracy_checker/launcher/caffe_launcher_readme.md)
+ - [How to configure DLSDK launcher](accuracy_checker/launcher/dlsdk_launcher_readme.md)
+ - [How to configure OpenCV launcher](accuracy_checker/launcher/opencv_launcher_readme.md)
+ - [How to configure MXNet launcher](accuracy_checker/launcher/mxnet_launcher_readme.md)
+ - [How to configure TensorFlow launcher](accuracy_checker/launcher/tf_launcher_readme.md)
+ - [How to configure TensorFlow Lite launcher](accuracy_checker/launcher/tf_lite_launcher_readme.md)
+ - [How to configure ONNX Runtime launcher](accuracy_checker/launcher/onnx_runtime_launcher_readme.md)
+ - [How to configure PyTorch launcher](accuracy_checker/launcher/pytorch_launcher_readme.md)
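As a sketch of the idea, a minimal launcher entry for the *dlsdk* framework might look like this (file names and the adapter choice are placeholder assumptions, not taken from a real config):

```yaml
launchers:
  - framework: dlsdk
    device: CPU
    model: sample_model.xml    # converted IR model
    weights: sample_model.bin
    adapter: classification    # maps raw outputs to prediction objects
```

See the framework-specific readmes above for the exact set of supported parameters.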
### Datasets

A dataset entry describes the data on which the model should be evaluated, all required preprocessing and postprocessing/filtering steps, and the metrics that will be used for evaluation.
- If your dataset data is a well-known competition problem (COCO, Pascal VOC, ...) and/or can be potentially reused for other models
+ If your dataset is a well-known competition problem (COCO, Pascal VOC, and others) and/or can potentially be reused for other models,
it is reasonable to declare it in a global configuration file (a *definition* file). This way, your local configuration file can provide only the `name`, and all required steps will be picked up from the global one. To pass the path to this global configuration, use the `--definition` argument of the CLI.
- If you want to evaluate models using prepared config files and well-known datasets, you need to organize folders with validation datasets in a certain way. More detailed information about dataset preparation you can find in [Dataset Preparation Guide](../../datasets.md).
+ If you want to evaluate models using prepared config files and well-known datasets, you need to organize the folders with validation datasets in a certain way. More detailed information about dataset preparation can be found in the <a href="https://github.com/opencv/open_model_zoo/blob/develop/datasets.md">Dataset Preparation Guide</a>.

Each dataset must have:
@@ -175,7 +175,7 @@ And optionally:
- `segmentation_masks_source` - path to the directory where ground-truth masks for the semantic segmentation task are stored.

It must also contain data related to the annotation.
- You can convert annotation inplace using:
+ You can convert the annotation in-place using:
- `annotation_conversion`: parameters for annotation conversion
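Putting these fields together, a dataset entry might look like the following sketch (the converter name and all paths are illustrative assumptions):

```yaml
datasets:
  - name: sample_dataset
    data_source: sample/images            # directory with input images only
    annotation_conversion:
      converter: sample                   # converter-specific parameters follow
      data_dir: sample/sample_dataset
    annotation: sample_annotation.pickle  # cache for the converted annotation
    metrics:
      - type: accuracy
        top_k: 1
```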
@@ -216,8 +216,8 @@ will be picked from *definitions* file.
You may find the following instructions useful:

- [how to convert annotations](accuracy_checker/annotation_converters/README.md)
- - [how to use preprocessings](accuracy_checker/preprocessor/README.md).
- - [how to use postprocessings](accuracy_checker/postprocessor/README.md).
+ - [how to use preprocessing](accuracy_checker/preprocessor/README.md).
+ - [how to use postprocessing](accuracy_checker/postprocessor/README.md).
- [how to use metrics](accuracy_checker/metrics/README.md).
- [how to use readers](accuracy_checker/data_readers/README.md).
@@ -249,4 +249,4 @@ Typical workflow for testing new model include:
The standard Accuracy Checker validation pipeline is: Annotation Reading -> Data Reading -> Preprocessing -> Inference -> Postprocessing -> Metrics.
In some cases this can be unsuitable (e.g. if you have a sequence of models). You can customize the validation pipeline using your own evaluator.
- More details about custom evaluations can be found in [related section](accuracy_checker/evaluators/custom_evaluators/README.md).
+ More details about custom evaluations can be found in the [related section](accuracy_checker/evaluators/custom_evaluators/README.md).
`tools/accuracy_checker/accuracy_checker/annotation_converters/README.md` (+8 −8)
@@ -9,15 +9,15 @@ Process of conversion can be implemented in two ways:
* via configuration file
* via command line

- ### Describing annotation conversion in configuration file.
+ ## Describing Annotation Conversion in Configuration File

- Annotation conversion can be provided in `dataset` section your configuration file to convert annotation inplace before every evaluation.
+ Annotation conversion can be provided in the `dataset` section of your configuration file to convert the annotation in-place before every evaluation.
Each conversion configuration should contain a `converter` field filled with the selected converter name, and provide converter-specific parameters (more details in the supported converters section). All paths can be prefixed via the command line with the `-s, --source` argument.

You can additionally use optional parameters like:
* `subsample_size` - dataset subsample size. You can specify the number of ground truth objects or the dataset ratio as a percentage. Please be careful using this option; some datasets do not support subsampling. You can also specify `subsample_seed` if you want to generate the subsample with a specific random seed.
* `annotation` - path to store the converted annotation pickle file. You can use this parameter if you need to reuse the converted annotation to avoid subsequent conversions.
- * `dataset_meta` - path to store mata information about converted annotation if it is provided.
+ * `dataset_meta` - path to store meta information about the converted annotation, if it is provided.
* `analyze_dataset` - flag which allows getting statistics about the converted dataset. Supported annotations: `ClassificationAnnotation`, `DetectionAnnotation`, `MultiLabelRecognitionAnnotation`, `RegressionAnnotation`. Default value is False.

Example of usage:
@@ -35,7 +35,7 @@ Example of usage:
      dataset_meta: sample_dataset.json
```

- ### Conversing process via command line.
+ ## Converting Annotations via the Command Line

The command line for annotation conversion looks like:
@@ -49,7 +49,7 @@ You may refer to `-h, --help` for the full list of command line options. Some optional arguments:
* `-a, --annotation_name` - annotation file name.
* `-m, --meta_name` - meta info file name.

- ### Supported converters
+ ## Supported Converters

Accuracy Checker supports the following list of annotation converters and their specific parameters:
* `cifar` - converts the [CIFAR](https://www.cs.toronto.edu/~kriz/cifar.html) classification dataset to `ClassificationAnnotation`
@@ -86,7 +86,7 @@ Accuracy Checker supports following list of annotation converters and specific f
* `images_dir` - path to the directory with images, relative to the devkit root (default JPEGImages).
* `mask_dir` - path to the directory with ground truth segmentation masks, relative to the devkit root (default SegmentationClass).
* `dataset_meta_file` - path to a json file with dataset meta (e.g. label_map, color_encoding). Optional; more details in the [Customizing dataset meta](#customizing-dataset-meta) section.
- **Note: since OpenVINO 2020.4 converter behaviour changed. `data_source` parameter of dataset should contains directory for images only, if you have segmentation mask in separated location, please use `segmentation_masks_source` for specifying gt masks location.**
+ **Note**: Since OpenVINO 2020.4 the converter behaviour has changed. The `data_source` parameter of the dataset should contain the directory for images only; if you have segmentation masks in a separate location, please use `segmentation_masks_source` to specify the ground-truth masks location.
* `mscoco_detection` - converts the MS COCO dataset for the object detection task to `DetectionAnnotation`.
* `annotation_file` - path to the annotation file in json format.
* `has_background` - allows converting the dataset with/without adding a background_label. Accepted values are True or False (default is False).
@@ -307,11 +307,11 @@ Accuracy Checker supports following list of annotation converters and specific f
* `images_dir` - path to the images directory.
* `masks_dir` - path to the mask dataset to be used for inpainting (optional).
* `aflw2000_3d` - converts the [AFLW2000-3D](http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm) dataset for the 3D facial landmarks regression task to `FacialLandmarks3DAnnotation`.
- * `data_dir` - directory, where input images and annotation files in matlab format stored.
+ * `data_dir` - directory where input images and annotation files in MATLAB format are stored.
* `style_transfer` - converts images to `StyleTransferAnnotation`.
* `images_dir` - path to the images directory.

- ### Customizing dataset meta
+ ## <a name="customizing-dataset-meta"></a>Customizing Dataset Meta

There are situations when we need to customize some default dataset parameters (e.g. replace the original dataset label map with our own).
You can overload parameters such as `label_map`, `segmentation_colors`, `background_label` using the `dataset_meta_file` argument.
The dataset meta file is a JSON file, which can contain the following parameters:
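For instance, a dataset meta file that overrides the label map might look like this sketch (the labels themselves are invented for the example):

```json
{
    "label_map": {"0": "background", "1": "cat", "2": "dog"},
    "background_label": 0
}
```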
`tools/accuracy_checker/accuracy_checker/evaluators/custom_evaluators/README.md` (+10 −11)
@@ -21,22 +21,21 @@ Each custom evaluation config should start with keyword `evaluation` and contain
Before running, please make sure that the prefix to the module is added to your Python path, or use the `python_path` parameter in the config to specify it.
Optionally, you can provide a `module_config` section which contains the config for the custom evaluator (depending on the realization, it can contain evaluator-specific parameters).
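A hypothetical `evaluation` section following this layout might look like the sketch below (the module and path names are invented for illustration):

```yaml
evaluation:
  name: custom_sequential_evaluator
  module: custom_evaluators.sequential_evaluator  # importable module with the evaluator class
  python_path: ./custom_evaluators                # prefix added to the Python path
  module_config:                                  # evaluator-specific configuration
    launchers:
      - framework: dlsdk
    datasets:
      - name: dataset_name
```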
## Examples
* **Sequential Action Recognition Evaluator** demonstrates how to run Action Recognition models with an encoder + decoder architecture.
- * [action-recognition-0001-encoder](../../../configs/action-recognition-0001-encoder.yml) - running full pipeline of action recognition model.
- * [action-recognition-0001-decoder](../../../configs/action-recognition-0001-decoder.yml) - running only decoder stage with dumped embeddings of encoder.
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/action-recognition-0001-encoder.yml">action-recognition-0001-encoder</a> - running the full pipeline of the action recognition model.
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/action-recognition-0001-decoder.yml">action-recognition-0001-decoder</a> - running only the decoder stage with dumped embeddings of the encoder.

* **MTCNN Evaluator** shows how to run the MTCNN model.
- * [mtcnn-p](../../../configs/mtcnn-p.yml) - running proposal stage of MTCNN as usual model.
- * [mtcnn-r](../../../configs/mtcnn-r.yml) - running only refine stage of MTCNN using dumped proposal stage results.
- * [mtcnn-o](../../../configs/mtcnn-o.yml) - running full MTCNN pipeline.
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/mtcnn-p.yml">mtcnn-p</a> - running the proposal stage of MTCNN as a usual model.
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/mtcnn-r.yml">mtcnn-r</a> - running only the refine stage of MTCNN using dumped proposal stage results.
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/mtcnn-o.yml">mtcnn-o</a> - running the full MTCNN pipeline.

- * **Text Spotting Evaluator** demonstrates how to evaluate text-spotting-0002 model via Accuracy Checker.
-   [Evaluator code](text_spotting_evaluator.py)
+ * **Text Spotting Evaluator** demonstrates how to evaluate the `text-spotting-0002` model via Accuracy Checker.
`tools/accuracy_checker/sample/README.md` (+8 −8)
@@ -7,8 +7,8 @@ We will try to evaluate **SampLeNet** topology as an example.

### 1. Download and extract dataset

- In this sample we will use toy dataset which we refer to as *sample dataset*, which contains 10k images
- of 10 different classes (classification problem), which is actually CIFAR10 dataset converted to png (image conversion will be done automatically in evaluation process)
+ In this sample we will use a toy dataset which we refer to as the *sample dataset*. It contains 10K images
+ of 10 different classes (a classification problem) and is actually the CIFAR10 dataset converted to PNG (image conversion is done automatically during evaluation).

You can download the original CIFAR10 dataset from the [official website](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
@@ -21,8 +21,8 @@ tar xvf cifar-10-python.tar.gz -C sample

### 2. Evaluate sample topology

- Typically you need to write configuration file, describing evaluation process of your topology.
- There is already config file for evaluating SampLeNet using OpenVINO framework, read it carefully. It runs Caffe model using Model Optimizer which requires installed Caffe. If you have not opportunity to use Caffe, please replace `caffe_model` and `caffe_weights` on
+ Typically you need to write a configuration file describing the evaluation process of your topology.
+ There is already a config file for evaluating SampLeNet using the OpenVINO framework; read it carefully. It runs the Caffe model using Model Optimizer, which requires Caffe to be installed. If you cannot use Caffe, please replace `caffe_model` and `caffe_weights` with

```yaml
model: SampleNet.xml
```
@@ -39,9 +39,9 @@ If everything worked correctly, you should be able to get `75.02%` accuracy.

Now try editing the config to run SampLeNet on another device or framework (e.g. Caffe, MXNet or OpenCV), or go directly to your own topology!

### Additional useful resources

- * [config](opencv_sample_config.yml) for running SampleNet via [OpenCV launcher](../accuracy_checker/launcher/opencv_launcher_readme.md)
- * [config](sample_blob_config.yml) for running SampleNet using compiled executable network blob.
- **Note: Not all OpenVINO plugins support compiled network blob execution**
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/sample/opencv_sample_config.yml">config</a> for running SampleNet via the [OpenCV launcher](../accuracy_checker/launcher/opencv_launcher_readme.md)
+ * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/sample/sample_blob_config.yml">config</a> for running SampleNet using a compiled executable network blob.
+
+ > **NOTE**: Not all Inference Engine plugins support compiled network blob execution.