YOLO v3 is a real-time object detection model in ONNX* format, converted from the Keras* model repository using the keras2onnx converter. The model was pre-trained on the Common Objects in Context (COCO) dataset with 80 classes.
Metric | Value |
---|---|
Type | Detection |
GFLOPs | 65.998 |
MParams | 61.930 |
Source framework | ONNX* |
Accuracy metrics were obtained on the Common Objects in Context (COCO) validation dataset for the converted model.
Metric | Value |
---|---|
mAP | 48.30% |
COCO mAP | 47.07% |
Original model inputs:

- Image, name - `input_1`, shape - `1, 3, 416, 416`, format is `B, C, H, W`, where:

  - `B` - batch size
  - `C` - channel
  - `H` - height
  - `W` - width

  Channel order is `RGB`. Scale value - 255.

- Information of input image size, name: `image_shape`, shape: `1, 2`, format: `B, C`, where:

  - `B` - batch size
  - `C` - vector of 2 values in format `H, W`, where `H` is an image height, `W` is an image width.
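For illustration, here is a minimal preprocessing and inference sketch for the original ONNX* model, assuming OpenCV, NumPy and ONNX Runtime are available. The image path, the plain (non-letterbox) resize, and the `float32` type of `image_shape` are assumptions, not values taken from this description.

```python
# Minimal sketch (not an official demo): feed the original ONNX* model.
# Assumes OpenCV, NumPy and ONNX Runtime; file names are placeholders.
import cv2
import numpy as np
import onnxruntime as ort

image = cv2.imread("image.jpg")                        # OpenCV loads BGR
h, w = image.shape[:2]

blob = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)          # channel order is RGB
blob = cv2.resize(blob, (416, 416))                    # plain resize (the reference
                                                       # pipeline may letterbox instead)
blob = blob.astype(np.float32) / 255.0                 # scale value - 255
blob = blob.transpose(2, 0, 1)[np.newaxis, ...]        # HWC -> 1, 3, 416, 416

image_shape = np.array([[h, w]], dtype=np.float32)     # original size, format H, W

session = ort.InferenceSession("yolo-v3.onnx")         # model file name is assumed
outputs = session.run(None, {"input_1": blob, "image_shape": image_shape})
# `outputs` holds the three tensors described in the output section below.
```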
Converted model inputs:

- Image, name - `input_1`, shape - `1, 3, 416, 416`, format is `B, C, H, W`, where:

  - `B` - batch size
  - `C` - channel
  - `H` - height
  - `W` - width

  Channel order is `BGR`.

- Information of input image size, name: `image_shape`, shape: `1, 2`, format: `B, C`, where:

  - `B` - batch size
  - `C` - vector of 2 values in format `H, W`, where `H` is an image height, `W` is an image width.
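For the converted model, no color conversion or scaling is needed, since it consumes `BGR` data directly. Below is a hedged sketch using the OpenVINO Runtime Python API (the 2022.x `openvino.runtime` interface is assumed); the IR file name and device are placeholders.

```python
# Minimal sketch for the converted model, assuming the OpenVINO Runtime
# Python API (openvino.runtime, 2022.x). IR path and device are placeholders.
import cv2
import numpy as np
from openvino.runtime import Core

image = cv2.imread("image.jpg")                           # BGR, as the converted model expects
h, w = image.shape[:2]

blob = cv2.resize(image, (416, 416)).astype(np.float32)  # no /255 scaling here
blob = blob.transpose(2, 0, 1)[np.newaxis, ...]           # 1, 3, 416, 416
image_shape = np.array([[h, w]], dtype=np.float32)        # original size, format H, W

core = Core()
compiled = core.compile_model("yolo-v3.xml", "CPU")       # placeholder IR name and device
results = compiled({"input_1": blob, "image_shape": image_shape})
```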
Original model outputs:

- Box coordinates, name - `yolonms_layer_1/ExpandDims_1:0`, shape - `1, 10647, 4`, format - `B, N, 4`, where:

  - `B` - batch size
  - `N` - number of candidates

- Scores of boxes per class, name - `yolonms_layer_1/ExpandDims_3:0`, shape - `1, 80, 10647`, format - `B, 80, N`, where:

  - `B` - batch size
  - `N` - number of candidates

- Selected indices from the boxes tensor, name - `yolonms_layer_1/concat_2:0`, shape - `1, 1600, 3`, format - `B, N, 3`, where:

  - `B` - batch size
  - `N` - number of detection boxes

  Each index has the format [`b_idx`, `cls_idx`, `box_idx`], where:

  - `b_idx` - batch index
  - `cls_idx` - class index
  - `box_idx` - box index

The model was trained on the Common Objects in Context (COCO) dataset with 80 object categories. The mapping to class names is provided in the `<omz_dir>/data/dataset_classes/coco_80cl.txt` file.
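Since non-maximum suppression is part of the model graph, decoding reduces to gathering the entries selected by the indices tensor. A rough sketch is shown below; the padding convention (`-1` for unused rows) and the box coordinate order are assumptions and should be checked against the demo code.

```python
# Rough decoding sketch for the three outputs described above:
#   boxes   - shape (1, 10647, 4)
#   scores  - shape (1, 80, 10647)
#   indices - shape (1, N, 3), each row is [b_idx, cls_idx, box_idx]
def decode_detections(boxes, scores, indices, score_threshold=0.5):
    detections = []
    for b_idx, cls_idx, box_idx in indices.reshape(-1, 3):
        if box_idx < 0:
            # assumption: unused rows may be padded with -1
            continue
        score = float(scores[b_idx, cls_idx, box_idx])
        if score < score_threshold:
            continue
        detections.append((int(cls_idx), score, boxes[b_idx, box_idx]))
    return detections
```

Each resulting tuple holds a class index, a confidence score and the corresponding box; the coordinate ordering of the box values is not stated in this description, so verify it before drawing results.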
Converted model outputs:

- Box coordinates, name - `yolonms_layer_1/ExpandDims_1:0`, shape - `1, 10647, 4`, format - `B, N, 4`, where:

  - `B` - batch size
  - `N` - number of candidates

- Scores of boxes per class, name - `yolonms_layer_1/ExpandDims_3:0`, shape - `1, 80, 10647`, format - `B, 80, N`, where:

  - `B` - batch size
  - `N` - number of candidates

- Selected indices from the boxes tensor, name - `yolonms_layer_1/concat_2:0`, shape - `1, 1600, 3`, format - `B, N, 3`, where:

  - `B` - batch size
  - `N` - number of detection boxes

  Each index has the format [`b_idx`, `cls_idx`, `box_idx`], where:

  - `b_idx` - batch index
  - `cls_idx` - class index
  - `box_idx` - box index

The model was trained on the Common Objects in Context (COCO) dataset with 80 object categories. The mapping to class names is provided in the `<omz_dir>/data/dataset_classes/coco_80cl.txt` file.
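To turn `cls_idx` values into human-readable labels, the class file mentioned above can be loaded; the sketch below assumes `coco_80cl.txt` contains one class name per line.

```python
# Map class indices to names using the file referenced above.
# Assumes one class name per line; resolve <omz_dir> to your Open Model Zoo path.
with open("coco_80cl.txt") as f:
    class_names = [line.strip() for line in f if line.strip()]

# Example: label = class_names[cls_idx]
```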
You can download models and, if necessary, convert them into Inference Engine format using the Model Downloader and other automation tools, as shown in the examples below.
An example of using the Model Downloader:
`omz_downloader --name <model_name>`
An example of using the Model Converter:
`omz_converter --name <model_name>`
The original model is distributed under the Apache License, Version 2.0. A copy of the license is provided in `<omz_dir>/models/public/licenses/APACHE-2.0.txt`.