Commit c0a3fed

Authored Jun 30, 2020
Feature/azaytsev/accuracy checker doc fixes (openvinotoolkit#1276)
* Fixed links
* Added an anchor
* Fixed link
* Minor fixes
* Fixed links
* Fixed links
* Fixed Note
1 parent c004a46 commit c0a3fed

File tree: 5 files changed (+43 -44)
tools/accuracy_checker/README.md (+15 -15)

@@ -131,7 +131,7 @@ models:
 - name: dataset_name
 ```
 Optionally you can use global configuration. It can be useful for avoiding duplication if you have several models which should be run on the same dataset.
-Example of global definitions file can be found [here](dataset_definitions.yml). Global definitions will be merged with evaluation config in the runtime by dataset name.
+Example of global definitions file can be found <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/dataset_definitions.yml">here</a>. Global definitions will be merged with evaluation config in the runtime by dataset name.
 Parameters of global configuration can be overwritten by local config (e.g. if in definitions specified resize with destination size 224 and in the local config used resize with size 227, the value in config - 227 will be used as resize parameter)
 You can use field `global_definitions` for specifying path to global definitions directly in the model config or via command line arguments (`-d`, `--definitions`).
 
@@ -141,26 +141,26 @@ Launcher is a description of how your model should be executed.
 Each launcher configuration starts with setting `framework` name. Currently *caffe*, *dlsdk*, *mxnet*, *tf*, *tf_lite*, *opencv*, *onnx_runtime* supported. Launcher description can have differences.
 Please view:
 
-- [how to configure Caffe launcher](accuracy_checker/launcher/caffe_launcher_readme.md).
-- [how to configure DLSDK launcher](accuracy_checker/launcher/dlsdk_launcher_readme.md).
-- [how to configure OpenCV launcher](accuracy_checker/launcher/opencv_launcher_readme.md).
-- [how to configure MXNet Launcher](accuracy_checker/launcher/mxnet_launcher_readme.md).
-- [how to configure TensorFlow Launcher](accuracy_checker/launcher/tf_launcher_readme.md).
-- [how to configure TensorFlow Lite Launcher](accuracy_checker/launcher/tf_lite_launcher_readme.md).
-- [how to configure ONNX Runtime Launcher](accuracy_checker/launcher/onnx_runtime_launcher_readme.md).
-- [how to configure PyTorch Launcher](accuracy_checker/launcher/pytorch_launcher_readme.md)
+- [How to configure Caffe launcher](accuracy_checker/launcher/caffe_launcher_readme.md)
+- [How to configure DLSDK launcher](accuracy_checker/launcher/dlsdk_launcher_readme.md)
+- [How to configure OpenCV launcher](accuracy_checker/launcher/opencv_launcher_readme.md)
+- [How to configure MXNet Launcher](accuracy_checker/launcher/mxnet_launcher_readme.md)
+- [How to configure TensorFlow Launcher](accuracy_checker/launcher/tf_launcher_readme.md)
+- [How to configure TensorFlow Lite Launcher](accuracy_checker/launcher/tf_lite_launcher_readme.md)
+- [How to configure ONNX Runtime Launcher](accuracy_checker/launcher/onnx_runtime_launcher_readme.md)
+- [How to configure PyTorch Launcher](accuracy_checker/launcher/pytorch_launcher_readme.md)
 
 ### Datasets
 
 Dataset entry describes data on which model should be evaluated,
 all required preprocessing and postprocessing/filtering steps,
 and metrics that will be used for evaluation.
 
-If your dataset data is a well-known competition problem (COCO, Pascal VOC, ...) and/or can be potentially reused for other models
+If your dataset data is a well-known competition problem (COCO, Pascal VOC, and others) and/or can be potentially reused for other models
 it is reasonable to declare it in some global configuration file (*definition* file). This way in your local configuration file you can provide only
 `name` and all required steps will be picked from global one. To pass path to this global configuration use `--definition` argument of CLI.
 
-If you want to evaluate models using prepared config files and well-known datasets, you need to organize folders with validation datasets in a certain way. More detailed information about dataset preparation you can find in [Dataset Preparation Guide](../../datasets.md).
+If you want to evaluate models using prepared config files and well-known datasets, you need to organize folders with validation datasets in a certain way. More detailed information about dataset preparation you can find in <a href="https://github.com/opencv/open_model_zoo/blob/develop/datasets.md">Dataset Preparation Guide</a>.
 
 Each dataset must have:
 
@@ -175,7 +175,7 @@ And optionally:
 - `segmentation_masks_source` - path to directory where gt masks for semantic segmentation task stored.
 
 Also it must contain data related to annotation.
-You can convert annotation inplace using:
+You can convert annotation in-place using:
 - `annotation_conversion`: parameters for annotation conversion
 
 
@@ -216,8 +216,8 @@ will be picked from *definitions* file.
 You can find useful following instructions:
 
 - [how to convert annotations](accuracy_checker/annotation_converters/README.md)
-- [how to use preprocessings](accuracy_checker/preprocessor/README.md).
-- [how to use postprocessings](accuracy_checker/postprocessor/README.md).
+- [how to use preprocessing](accuracy_checker/preprocessor/README.md).
+- [how to use postprocessing](accuracy_checker/postprocessor/README.md).
 - [how to use metrics](accuracy_checker/metrics/README.md).
 - [how to use readers](accuracy_checker/data_readers/README.md).
 
@@ -249,4 +249,4 @@ Typical workflow for testing new model include:
 
 Standard Accuracy Checker validation pipeline: Annotation Reading -> Data Reading -> Preprocessing -> Inference -> Postprocessing -> Metrics.
 In some cases it can be unsuitable (e.g. if you have sequence of models). You are able to customize validation pipeline using own evaluator.
-More details about custom evaluations can be found in [related section](accuracy_checker/evaluators/custom_evaluators/README.md).
+More details about custom evaluations can be found in the [related section](accuracy_checker/evaluators/custom_evaluators/README.md).
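For context on the global-definitions behaviour this README change documents (a local resize size of 227 overriding a global 224, merged by dataset name), here is a hedged sketch. The dataset name, model files, and `adapter` value are illustrative, not taken from the real `dataset_definitions.yml`:

```yaml
# dataset_definitions.yml (global) - illustrative entry
datasets:
  - name: my_classification_dataset
    preprocessing:
      - type: resize
        size: 224           # global destination size

# model config (local) - merged with the global entry by dataset name;
# its resize size (227) wins over the global 224, as the README states
models:
  - name: my_model
    launchers:
      - framework: dlsdk
        model: my_model.xml
        weights: my_model.bin
        adapter: classification
    datasets:
      - name: my_classification_dataset
        preprocessing:
          - type: resize
            size: 227
```

The global file can then be supplied through the `global_definitions` field or on the command line, e.g. `accuracy_check -c my_config.yml -d dataset_definitions.yml`.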

tools/accuracy_checker/accuracy_checker/annotation_converters/README.md (+8 -8)

@@ -9,15 +9,15 @@ Process of conversion can be implemented in two ways:
 * via configuration file
 * via command line
 
-### Describing annotation conversion in configuration file.
+## Describing Annotation Conversion in Configuration File
 
-Annotation conversion can be provided in `dataset` section your configuration file to convert annotation inplace before every evaluation.
+Annotation conversion can be provided in the `dataset` section of your configuration file to convert annotation in-place before every evaluation.
 Each conversion configuration should contain `converter` field filled selected converter name and provide converter specific parameters (more details in supported converters section). All paths can be prefixed via command line with `-s, --source` argument.
 
 You can additionally use optional parameters like:
 * `subsample_size` - Dataset subsample size. You can specify the number of ground truth objects or dataset ratio in percentage. Please, be careful to use this option, some datasets does not support subsampling. You can also specify `subsample_seed` if you want to generate subsample with specific random seed.
 * `annotation` - path to store converted annotation pickle file. You can use this parameter if you need to reuse converted annotation to avoid subsequent conversions.
-* `dataset_meta` - path to store mata information about converted annotation if it is provided.
+* `dataset_meta` - path to store meta information about converted annotation if it is provided.
 * `analyze_dataset` - flag which allow to get statistics about converted dataset. Supported annotations: `ClassificationAnnotation`, `DetectionAnnotation`, `MultiLabelRecognitionAnnotation`, `RegressionAnnotation`. Default value is False.
 
 Example of usage:
@@ -35,7 +35,7 @@ Example of usage:
 dataset_meta: sample_dataset.json
 ```
 
-### Conversing process via command line.
+## Conversion Process via Command Line
 
 The command line for annotation conversion looks like:
 
@@ -49,7 +49,7 @@ You may refer to `-h, --help` to full list of command line options. Some optional
 * `-a, --annotation_name` - annotation file name.
 * `-m, --meta_name` - meta info file name.
 
-### Supported converters
+## Supported Converters
 
 Accuracy Checker supports following list of annotation converters and specific for them parameters:
 * `cifar` - converts [CIFAR](https://www.cs.toronto.edu/~kriz/cifar.html) classification dataset to `ClassificationAnnotation`
@@ -86,7 +86,7 @@ Accuracy Checker supports following list of annotation converters and specific for them parameters:
   * `images_dir` - path to directory with images related to devkit root (default JPEGImages).
   * `mask_dir` - path to directory with ground truth segmentation masks related to devkit root (default SegmentationClass).
   * `dataset_meta_file` - path to json file with dataset meta (e.g. label_map, color_encoding). Optional, more details in [Customizing dataset meta](#customizing-dataset-meta) section.
-  **Note: since OpenVINO 2020.4 converter behaviour changed. `data_source` parameter of dataset should contains directory for images only, if you have segmentation mask in separated location, please use `segmentation_masks_source` for specifying gt masks location.**
+  **Note**: Since OpenVINO 2020.4, the converter behaviour changed. The `data_source` parameter of the dataset should contain the directory for images only; if you have segmentation masks in a separate location, please use `segmentation_masks_source` for specifying the gt masks location.
 * `mscoco_detection` - converts MS COCO dataset for object detection task to `DetectionAnnotation`.
   * `annotation_file` - path to annotation file in json format.
   * `has_background` - allows convert dataset with/without adding background_label. Accepted values are True or False. (default is False).
@@ -307,11 +307,11 @@ Accuracy Checker supports following list of annotation converters and specific for them parameters:
   * `images_dir` - path to images directory.
   * `masks_dir` - path to mask dataset to be used for inpainting (Optional).
 * `aflw2000_3d` - converts [AFLW2000-3D](http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm) dataset for 3d facial landmarks regression task to `FacialLandmarks3DAnnotation`.
-  * `data_dir` - directory, where input images and annotation files in matlab format stored.
+  * `data_dir` - directory, where input images and annotation files in MATLAB format stored.
 * `style_transfer` - converts images to `StyleTransferAnnotation`.
   * `images_dir` - path to images directory.
 
-### Customizing dataset meta
+## <a name="customizing-dataset-meta"></a>Customizing Dataset Meta
 There are situations when we need customize some default dataset parameters (e.g. replace the original dataset label map with your own).
 You are able to overload parameters such as `label_map`, `segmentation_colors`, `background_label` using `dataset_meta_file` argument.
 dataset meta file is JSON file, which can contain following parameters:
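To make the `dataset_meta_file` change above concrete: the file is plain JSON, and a minimal sketch using the three parameters this README names (the labels and colors are illustrative, not from any real dataset) could look like this:

```json
{
    "label_map": {"0": "background", "1": "cat", "2": "dog"},
    "background_label": 0,
    "segmentation_colors": [[0, 0, 0], [128, 0, 0], [0, 128, 0]]
}
```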

tools/accuracy_checker/accuracy_checker/evaluators/custom_evaluators/README.md (+10 -11)

@@ -21,22 +21,21 @@ Each custom evaluation config should start with keyword `evaluation` and contain
 Before running, please make sure that prefix to module added to your python path or use `python_path` parameter in config for it specification.
 Optionally you can provide `module_config` section which contains config for custom evaluator (Depends from realization, it can contains evaluator specific parameters).
 
-
 ## Examples
 * **Sequential Action Recognition Evaluator** demonstrates how to run Action Recognition models with encoder + decoder architecture.
-  [Evaluator code](sequential_action_recognition_evaluator.py)
+  <a href="https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/accuracy_checker/evaluators/custom_evaluators/sequential_action_recognition_evaluator.py">Evaluator code</a>.
   Configuration file examples:
-  * [action-recognition-0001-encoder](../../../configs/action-recognition-0001-encoder.yml) - running full pipeline of action recognition model.
-  * [action-recognition-0001-decoder](../../../configs/action-recognition-0001-decoder.yml) - running only decoder stage with dumped embeddings of encoder.
+  * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/action-recognition-0001-encoder.yml">action-recognition-0001-encoder</a> - Running full pipeline of action recognition model.
+  * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/action-recognition-0001-decoder.yml">action-recognition-0001-decoder</a> - Running only decoder stage with dumped embeddings of encoder.
 
 * **MTCNN Evaluator** shows how to run MTCNN model.
-  [Evaluator code](mtcnn_evaluator.py)
+  <a href="https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/accuracy_checker/evaluators/custom_evaluators/mtcnn_evaluator.py">Evaluator code</a>.
   Configuration file examples:
-  * [mtcnn-p](../../../configs/mtcnn-p.yml) - running proposal stage of MTCNN as usual model.
-  * [mtcnn-r](../../../configs/mtcnn-r.yml) - running only refine stage of MTCNN using dumped proposal stage results.
-  * [mtcnn-o](../../../configs/mtcnn-o.yml) - running full MTCNN pipeline.
+  * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/mtcnn-p.yml">mtcnn-p</a> - Running proposal stage of MTCNN as usual model.
+  * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/mtcnn-r.yml">mtcnn-r</a> - Running only refine stage of MTCNN using dumped proposal stage results.
+  * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/mtcnn-o.yml">mtcnn-o</a> - Running full MTCNN pipeline.
 
-* **Text Spotting Evaluator** demonstrates how to evaluate text-spotting-0002 model via Accuracy Checker.
-  [Evaluator code](text_spotting_evaluator.py)
+* **Text Spotting Evaluator** demonstrates how to evaluate the `text-spotting-0002` model via Accuracy Checker.
+  <a href="https://github.com/opencv/open_model_zoo/blob/develop/tools/accuracy_checker/accuracy_checker/evaluators/custom_evaluators/text_spotting_evaluator.py">Evaluator code</a>.
   Configuration file examples:
-  * [text-spotting-0002](../../../configs/text-spotting-0002.yml)
+  * <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/configs/text-spotting-0002.yml">text-spotting-0002</a>.
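For orientation, the config shape the text above describes (an evaluation entry with optional `python_path` and `module_config`) looks roughly like this. This is a hedged sketch: the module path, names, and all `module_config` contents are hypothetical, and the plural `evaluations` list form follows the linked example configs; see `action-recognition-0001-encoder.yml` for a real instance:

```yaml
evaluations:
  - name: my_custom_evaluation            # hypothetical evaluation name
    python_path: /path/to/module/prefix   # optional, if the module is not on sys.path
    module: my_package.my_evaluator.MyCustomEvaluator
    module_config:                        # optional, evaluator-specific parameters
      launchers:
        - framework: dlsdk
      datasets:
        - name: dataset_name
```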

tools/accuracy_checker/sample/README.md (+8 -8)

@@ -7,8 +7,8 @@ We will try to evaluate **SampLeNet** topology as an example.
 
 ### 1. Download and extract dataset
 
-In this sample we will use toy dataset which we refer to as *sample dataset*, which contains 10k images
-of 10 different classes (classification problem), which is actually CIFAR10 dataset converted to png (image conversion will be done automatically in evaluation process)
+In this sample we will use toy dataset which we refer to as *sample dataset*, which contains 10K images
+of 10 different classes (classification problem), which is actually CIFAR10 dataset converted to PNG (image conversion will be done automatically in evaluation process)
 
 You can download original CIFAR10 dataset from [official website](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
 
@@ -21,8 +21,8 @@ tar xvf cifar-10-python.tar.gz -C sample
 
 ### 2. Evaluate sample topology
 
-Typically you need to write configuration file, describing evaluation process of your topology.
-There is already config file for evaluating SampLeNet using OpenVINO framework, read it carefully. It runs Caffe model using Model Optimizer which requires installed Caffe. If you have not opportunity to use Caffe, please replace `caffe_model` and `caffe_weights` on
+Typically you need to write a configuration file describing evaluation process of your topology.
+There is already a config file for evaluating SampLeNet using OpenVINO framework, read it carefully. It runs Caffe model using Model Optimizer which requires installed Caffe. If you do not have the opportunity to use Caffe, please replace `caffe_model` and `caffe_weights` with
 
 ```yaml
 model: SampleNet.xml
@@ -39,9 +39,9 @@ If everything worked correctly, you should be able to get `75.02%` accuracy.
 
 Now try edit config, to run SampLeNet on other device or framework (e.g. Caffe, MXNet or OpenCV), or go directly to your topology!
 
-
 ### Additional useful resources
 
-* [config](opencv_sample_config.yml) for running SampleNet via [OpenCV launcher](../accuracy_checker/launcher/opencv_launcher_readme.md)
-* [config](sample_blob_config.yml) for running SampleNet using compiled executable network blob.
-  **Note: Not all OpenVINO plugins support compiled network blob execution**
+* <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/sample/opencv_sample_config.yml">config</a> for running SampleNet via [OpenCV launcher](../accuracy_checker/launcher/opencv_launcher_readme.md)
+* <a href="https://github.com/opencv/open_model_zoo/blob/master/tools/accuracy_checker/sample/sample_blob_config.yml">config</a> for running SampleNet using compiled executable network blob.
+
+>**NOTE**: Not all Inference Engine plugins support compiled network blob execution.
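On the `caffe_model`/`caffe_weights` replacement shown truncated in the hunk above: the `model: SampleNet.xml` line presumably pairs with a `weights` line pointing to the IR weights. A hedged sketch of the resulting launcher section, where `SampleNet.bin`, `device`, and `adapter` are assumptions rather than quotes from the sample config:

```yaml
launchers:
  - framework: dlsdk
    device: CPU                 # any supported Inference Engine device
    model: SampleNet.xml        # shown in the diff above
    weights: SampleNet.bin      # assumed companion IR weights file
    adapter: classification     # SampLeNet solves a classification problem
```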

tools/downloader/README.md (+2 -2)

@@ -307,13 +307,13 @@ To do this, use the `--dry_run` option:
 See the "Shared options" section for information on other options accepted by
 the script.
 
-Model quantizer usage
+Model Quantizer Usage
 ---------------------
 
 Before you run the model quantizer, you must prepare a directory with
 the datasets required for the quantization process. This directory will be
 referred to as `<DATASET_DIR>` below. You can find more detailed information
-about dataset preparation in the [Dataset Preparation Guide](../../datasets.md).
+about dataset preparation in the <a href="https://github.com/opencv/open_model_zoo/blob/develop/datasets.md">Dataset Preparation Guide</a>.
 
 The basic usage is to run the script like this:
 