
Commit edb4ed3

[DOCS] Update CODEOWNERS, improve ModelZoo.md (#2015)
### Changes

- Fix the team names in CODEOWNERS. However, openvino-admins and openvino-configuration-mgmt appear to be private groups, which results in:

  ```
  Unknown owner on line 3: make sure the team @openvinotoolkit/openvino-admins exists, is publicly visible, and has write access to the repository
  ```

- Minor improvements to ModelZoo.md
1 parent dcf5965 commit edb4ed3

File tree: CODEOWNERS, docs/ModelZoo.md

2 files changed: +36 −40 lines

CODEOWNERS (+2 −6)
@@ -1,7 +1,3 @@
-* @openvinotoolkit/nncf_pytorch-maintainers
+* @openvinotoolkit/nncf-maintainers
 
-CODEOWNERS @openvinotoolkit/openvino-admins
-
-# Control 3d party dependencies
-requirements.txt @openvino-configuration-mgmt
-**/setup.py @openvino-configuration-mgmt
+CODEOWNERS @openvinotoolkit/nncf-admins
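
For context, each CODEOWNERS rule pairs a path pattern with one or more owners, and GitHub only honors a team owner if the team exists, is visible to the repository, and has write access; otherwise it reports the "Unknown owner" warning quoted in the commit message. A minimal sketch of the resulting file (the commented-out `docs/` rule is a hypothetical illustration, not part of this commit):

```
# Default owners for every file in the repository
*           @openvinotoolkit/nncf-maintainers

# The CODEOWNERS file itself is owned by the admins team
CODEOWNERS  @openvinotoolkit/nncf-admins

# Hypothetical example of a path-scoped rule for one subtree:
# docs/    @openvinotoolkit/some-docs-team
```

For files matched by more than one pattern, the last matching rule takes precedence, which is why the `CODEOWNERS` rule can override the `*` default here.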

docs/ModelZoo.md (+34 −34)
@@ -25,7 +25,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>Accuracy&nbsp(<em>drop</em>)&nbsp%</th>
+<th>Accuracy (<em>drop</em>) %</th>
 <th>Configuration</th>
 <th>Checkpoint</th>
 </tr>
@@ -65,7 +65,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">Inception V3</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 61% (RB)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 61% (RB)</td>
 <td>ImageNet</td>
 <td>76.36 (0.97)</td>
 <td><a href="../examples/torch/classification/configs/sparsity_quantization/inception_v3_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -105,7 +105,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">MobileNet V2</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 52% (RB)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 52% (RB)</td>
 <td>ImageNet</td>
 <td>71.09 (0.78)</td>
 <td><a href="../examples/torch/classification/configs/sparsity_quantization/mobilenet_v2_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -169,7 +169,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ResNet-18</td>
-<td align="left">• Accuracy-aware compressed training<br />• Filter pruning: 60%, geometric median criterion</td>
+<td align="left">• Accuracy-aware compressed training<br>• Filter pruning: 60%, geometric median criterion</td>
 <td>ImageNet</td>
 <td>69.2 (-0.6)</td>
 <td><a href="../examples/torch/classification/configs/pruning/resnet18_imagenet_pruning_accuracy_aware.json">Config</a></td>
@@ -185,7 +185,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ResNet-34</td>
-<td align="left">• Filter pruning: 50%, geometric median criterion<br />• Knowledge distillation</td>
+<td align="left">• Filter pruning: 50%, geometric median criterion<br>• Knowledge distillation</td>
 <td>ImageNet</td>
 <td>73.11 (0.19)</td>
 <td><a href="../examples/torch/classification/configs/pruning/resnet34_imagenet_pruning_geometric_median_kd.json">Config</a></td>
@@ -225,15 +225,15 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ResNet-50</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 61% (RB)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 61% (RB)</td>
 <td>ImageNet</td>
 <td>75.42 (0.73)</td>
 <td><a href="../examples/torch/classification/configs/sparsity_quantization/resnet50_imagenet_rb_sparsity_int8.json">Config</a></td>
 <td><a href="https://storage.openvinotoolkit.org/repositories/nncf/models/develop/torch/resnet50_imagenet_rb_sparsity_int8.pth">Download</a></td>
 </tr>
 <tr>
 <td align="left">ResNet-50</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 50% (RB)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 50% (RB)</td>
 <td>ImageNet</td>
 <td>75.50 (0.65)</td>
 <td><a href="../examples/torch/classification/configs/sparsity_quantization/resnet50_imagenet_rb_sparsity50_int8.json">Config</a></td>
@@ -249,7 +249,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ResNet-50</td>
-<td align="left">• Accuracy-aware compressed training<br />• Filter pruning: 52.5%, geometric median criterion</td>
+<td align="left">• Accuracy-aware compressed training<br>• Filter pruning: 52.5%, geometric median criterion</td>
 <td>ImageNet</td>
 <td>75.23 (0.93)</td>
 <td><a href="../examples/torch/classification/configs/pruning/resnet50_imagenet_pruning_accuracy_aware.json">Config</a></td>
@@ -298,7 +298,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>mAP&nbsp(<em>drop</em>)&nbsp%</th>
+<th>mAP (<em>drop</em>) %</th>
 <th>Configuration</th>
 <th>Checkpoint</th>
 </tr>
@@ -314,7 +314,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">SSD300‑MobileNet</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 70% (Magnitude)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 70% (Magnitude)</td>
 <td>VOC12+07 train, VOC07 eval</td>
 <td>62.95 (-0.72)</td>
 <td><a href="../examples/torch/object_detection/configs/ssd300_mobilenet_voc_magnitude_int8.json">Config</a></td>
@@ -338,7 +338,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">SSD300‑VGG‑BN</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 70% (Magnitude)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 70% (Magnitude)</td>
 <td>VOC12+07 train, VOC07 eval</td>
 <td>77.66 (0.62)</td>
 <td><a href="../examples/torch/object_detection/configs/ssd300_vgg_voc_magnitude_sparsity_int8.json">Config</a></td>
@@ -370,7 +370,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">SSD512-VGG‑BN</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 70% (Magnitude)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 70% (Magnitude)</td>
 <td>VOC12+07 train, VOC07 eval</td>
 <td>79.68 (0.58)</td>
 <td><a href="../examples/torch/object_detection/configs/ssd512_vgg_voc_magnitude_sparsity_int8.json">Config</a></td>
@@ -387,7 +387,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>mIoU&nbsp(<em>drop</em>)&nbsp%</th>
+<th>mIoU (<em>drop</em>) %</th>
 <th>Configuration</th>
 <th>Checkpoint</th>
 </tr>
@@ -411,7 +411,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ICNet</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 60% (Magnitude)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 60% (Magnitude)</td>
 <td>CamVid</td>
 <td>67.16 (0.73)</td>
 <td><a href="../examples/torch/semantic_segmentation/configs/icnet_camvid_magnitude_sparsity_int8.json">Config</a></td>
@@ -435,7 +435,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">UNet</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 60% (Magnitude)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 60% (Magnitude)</td>
 <td>CamVid</td>
 <td>72.46 (-0.51)</td>
 <td><a href="../examples/torch/semantic_segmentation/configs/unet_camvid_magnitude_sparsity_int8.json">Config</a></td>
@@ -459,7 +459,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">UNet</td>
-<td align="left">• QAT: INT8<br />• Sparsity: 60% (Magnitude)</td>
+<td align="left">• QAT: INT8<br>• Sparsity: 60% (Magnitude)</td>
 <td>Mapillary</td>
 <td>55.69 (0.55)</td>
 <td><a href="../examples/torch/semantic_segmentation/configs/unet_mapillary_magnitude_sparsity_int8.json">Config</a></td>
@@ -484,7 +484,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>PyTorch Model</th>
 <th><img width="20" height="1">Compression algorithm<img width="20" height="1"></th>
 <th>Dataset</th>
-<th>Accuracy&nbsp(<em>drop</em>)&nbsp%</th>
+<th>Accuracy (<em>drop</em>) %</th>
 </tr>
 </thead>
 <tbody align="center">
@@ -507,7 +507,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <td>77.22 (0.46)</td>
 </tr>
 <tr>
-<td align="left">BERT-large<br />(Whole Word Masking)</td>
+<td align="left">BERT-large<br>(Whole Word Masking)</td>
 <td align="left">• QAT: INT8</td>
 <td>SQuAD v1.1</td>
 <td>F1: 92.68 (0.53)</td>
@@ -549,7 +549,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>Accuracy&nbsp(<em>drop</em>)&nbsp%</th>
+<th>Accuracy (<em>drop</em>) %</th>
 <th>Configuration</th>
 <th>Checkpoint</th>
 </tr>
@@ -573,7 +573,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">Inception V3</td>
-<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations), Sparsity: 61% (RB)</td>
+<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Sparsity: 61% (RB)</td>
 <td>ImageNet</td>
 <td>77.52 (0.39)</td>
 <td><a href="../examples/tensorflow/classification/configs/sparsity_quantization/inception_v3_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -605,7 +605,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">MobileNet V2</td>
-<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations), Sparsity: 52% (RB)</td>
+<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Sparsity: 52% (RB)</td>
 <td>ImageNet</td>
 <td>70.94 (0.91)</td>
 <td><a href="../examples/tensorflow/classification/configs/sparsity_quantization/mobilenet_v2_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -645,7 +645,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">MobileNet V3 (Large)</td>
-<td align="left">• QAT: INT8 (per-channel symmetric for weights, per-tensor asymmetric half-range for activations)<br />• Sparsity: 42% (RB)</td>
+<td align="left">• QAT: INT8 (per-channel symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Sparsity: 42% (RB)</td>
 <td>ImageNet</td>
 <td>75.24 (0.56)</td>
 <td><a href="../examples/tensorflow/classification/configs/sparsity_quantization/mobilenet_v3_large_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -669,7 +669,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">MobileNet V3 (Small)</td>
-<td align="left">• QAT: INT8 (per-channel symmetric for weights, per-tensor asymmetric half-range for activations)<br />• Sparsity: 42% (Magnitude)</td>
+<td align="left">• QAT: INT8 (per-channel symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Sparsity: 42% (Magnitude)</td>
 <td>ImageNet</td>
 <td>67.44 (0.94)</td>
 <td><a href="../examples/tensorflow/classification/configs/sparsity_quantization/mobilenet_v3_small_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -693,7 +693,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ResNet-50</td>
-<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br />• Sparsity: 65% (RB)</td>
+<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Sparsity: 65% (RB)</td>
 <td>ImageNet</td>
 <td>74.36 (0.69)</td>
 <td><a href="../examples/tensorflow/classification/configs/sparsity_quantization/resnet50_imagenet_rb_sparsity_int8.json">Config</a></td>
@@ -717,15 +717,15 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">ResNet-50</td>
-<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br />• Filter pruning: 40%, geometric median criterion</td>
+<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Filter pruning: 40%, geometric median criterion</td>
 <td>ImageNet</td>
 <td>75.09 (-0.04)</td>
 <td><a href="../examples/tensorflow/classification/configs/pruning_quantization/resnet50_imagenet_pruning_geometric_median_int8.json">Config</a></td>
 <td><a href="https://storage.openvinotoolkit.org/repositories/nncf/models/develop/tensorflow/resnet50_imagenet_pruning_geometric_median_int8.tar.gz">Download</a></td>
 </tr>
 <tr>
 <td align="left">ResNet50</td>
-<td align="left">• Accuracy-aware compressed training<br />• Sparsity: 65% (Magnitude)</td>
+<td align="left">• Accuracy-aware compressed training<br>• Sparsity: 65% (Magnitude)</td>
 <td>ImageNet</td>
 <td>74.37 (0.67)</td>
 <td><a href="../examples/tensorflow/classification/configs/sparsity/resnet50_imagenet_magnitude_sparsity_accuracy_aware.json">Config</a></td>
@@ -742,7 +742,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>mAP&nbsp(<em>drop</em>)&nbsp%</th>
+<th>mAP (<em>drop</em>) %</th>
 <th>Configuration</th>
 <th>Checkpoint</th>
 </tr>
@@ -782,7 +782,7 @@ The applied quantization compression algorithms are divided into two broad categ
 </tr>
 <tr>
 <td align="left">RetinaNet</td>
-<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br />• Filter pruning: 40%</td>
+<td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)<br>• Filter pruning: 40%</td>
 <td>COCO 2017</td>
 <td>32.67 (0.76)</td>
 <td><a href="../examples/tensorflow/object_detection/configs/pruning_quantization/retinanet_coco_pruning_geometric_median_int8.json">Config</a></td>
@@ -823,7 +823,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>mAP&nbsp(<em>drop</em>)&nbsp%</th>
+<th>mAP (<em>drop</em>) %</th>
 <th>Configuration</th>
 <th>Checkpoint</th>
 </tr>
@@ -833,23 +833,23 @@ The applied quantization compression algorithms are divided into two broad categ
 <td align="left">Mask‑R‑CNN</td>
 <td align="left">-</td>
 <td>COCO 2017</td>
-<td>bbox: 37.33 segm: 33.56</td>
+<td>bbox: 37.33<br>segm: 33.56</td>
 <td><a href="../examples/tensorflow/segmentation/configs/mask_rcnn_coco.json">Config</a></td>
 <td><a href="https://storage.openvinotoolkit.org/repositories/nncf/models/develop/tensorflow/mask_rcnn_coco.tar.gz">Download</a></td>
 </tr>
 <tr>
 <td align="left">Mask‑R‑CNN</td>
 <td align="left">• QAT: INT8 (per-tensor symmetric for weights, per-tensor asymmetric half-range for activations)</td>
 <td>COCO 2017</td>
-<td>bbox: 37.19 (0.14) segm: 33.54 (0.02)</td>
+<td>bbox: 37.19 (0.14)<br>segm: 33.54 (0.02)</td>
 <td><a href="../examples/tensorflow/segmentation/configs/quantization/mask_rcnn_coco_int8.json">Config</a></td>
 <td><a href="https://storage.openvinotoolkit.org/repositories/nncf/models/develop/tensorflow/mask_rcnn_coco_int8.tar.gz">Download</a></td>
 </tr>
 <tr>
 <td align="left">Mask‑R‑CNN</td>
 <td align="left">• Sparsity: 50% (Magnitude)</td>
 <td>COCO 2017</td>
-<td>bbox: 36.94 (0.39) segm: 33.23 (0.33)</td>
+<td>bbox: 36.94 (0.39)<br>segm: 33.23 (0.33)</td>
 <td><a href="../examples/tensorflow/segmentation/configs/sparsity/mask_rcnn_coco_magnitude_sparsity.json">Config</a></td>
 <td><a href="https://storage.openvinotoolkit.org/repositories/nncf/models/develop/tensorflow/mask_rcnn_coco_magnitude_sparsity.tar.gz">Download</a></td>
 </tr>
@@ -866,7 +866,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>ONNX Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>Accuracy&nbsp(<em>drop</em>)&nbsp%</th>
+<th>Accuracy (<em>drop</em>) %</th>
 </tr>
 </thead>
 <tbody align="center">
@@ -923,7 +923,7 @@ The applied quantization compression algorithms are divided into two broad categ
 <th>ONNX Model</th>
 <th>Compression algorithm</th>
 <th>Dataset</th>
-<th>mAP&nbsp(<em>drop</em>)&nbsp%</th>
+<th>mAP (<em>drop</em>) %</th>
 </tr>
 </thead>
 <tbody align="center">
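
Most of the ModelZoo.md edits above apply two recurring HTML cleanups: `&nbsp` is missing its terminating semicolon (the valid non-breaking-space reference is `&nbsp;`), so it is replaced with a plain space, and the XHTML-style `<br />` is normalized to the HTML5 void element `<br>`, which takes no closing slash. A before/after sketch of the pattern, using lines from the diff above:

```
<!-- before: unterminated entity references and XHTML-style line breaks -->
<th>Accuracy&nbsp(<em>drop</em>)&nbsp%</th>
<td align="left">• QAT: INT8<br />• Sparsity: 61% (RB)</td>

<!-- after: plain spaces and the HTML5 void element -->
<th>Accuracy (<em>drop</em>) %</th>
<td align="left">• QAT: INT8<br>• Sparsity: 61% (RB)</td>
```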
