Commit a1645e0 (1 parent: d513a3b)

Remove deprecated models (openvinotoolkit#3001)

* Remove deprecated models
* remove AC configs links and reference in datasets
* remove models usage in demos and tests
* remove missed files
* remove caffe2 to onnx script
* update ci requirements
* update known frameworks, restore some req

File tree: 176 files changed, +18 -7077 lines changed


CONTRIBUTING.md (+1 -2)

```diff
@@ -4,7 +4,6 @@ We appreciate your intention to contribute model to the OpenVINO™ Open Mod
 
 Frameworks supported by the Open Model Zoo:
 * Caffe\*
-* Caffe2\* (via conversion to ONNX\*)
 * TensorFlow\*
 * PyTorch\* (via conversion to ONNX\*)
 * MXNet\*
@@ -113,7 +112,7 @@ For replacement operation:
 - `replacement` — Replacement string
 - `count` (*optional*) — Exact number of replacements (if number of `pattern` occurrences less then this number, downloading will be aborted)
 
-**`conversion_to_onnx_args`** (*only for Caffe2\*, PyTorch\* models*)
+**`conversion_to_onnx_args`** (*only for PyTorch\* models*)
 
 List of ONNX\* conversion parameters, see `model_optimizer_args` for details.
```
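For readers following the `conversion_to_onnx_args` change: the remaining PyTorch path drives a standard PyTorch-to-ONNX export. Below is a minimal sketch of that step in plain PyTorch; the model choice, file name, tensor shape, and tensor names are illustrative assumptions, not OMZ's exact conversion script.

```python
# Minimal sketch of a PyTorch -> ONNX export (illustrative, not OMZ's script).
import torch
import torchvision

# Example architecture; weights omitted for brevity. OMZ downloads real
# checkpoints and passes the export options via `conversion_to_onnx_args`.
model = torchvision.models.resnet50(pretrained=False)
model.eval()

dummy_input = torch.zeros(1, 3, 224, 224)  # NCHW shape typical for image models
torch.onnx.export(
    model,                  # model being exported
    dummy_input,            # example input used to trace the graph
    'resnet-50.onnx',       # output file (hypothetical name)
    input_names=['data'],   # names assigned to the ONNX graph inputs
    output_names=['prob'],  # names assigned to the ONNX graph outputs
)
```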
ci/requirements-ac-test.txt (-2)

```diff
@@ -20,8 +20,6 @@ decorator==4.4.2
     # via networkx
 defusedxml==0.7.1
     # via -r tools/accuracy_checker/requirements-core.in
-editdistance==0.5.3
-    # via -r tools/accuracy_checker/requirements.in
 fast-ctc-decode==0.3.0
     # via -r tools/accuracy_checker/requirements.in
 filelock==3.0.12
```

ci/requirements-ac.txt (-2)

```diff
@@ -16,8 +16,6 @@ decorator==4.4.2
     # via networkx
 defusedxml==0.7.1
     # via -r tools/accuracy_checker/requirements-core.in
-editdistance==0.5.3
-    # via -r tools/accuracy_checker/requirements.in
 fast-ctc-decode==0.3.0
     # via -r tools/accuracy_checker/requirements.in
 filelock==3.0.12
```

ci/requirements-conversion.txt (-5)

```diff
@@ -26,8 +26,6 @@ defusedxml==0.7.1
     #   -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_tf2.txt
 flatbuffers==1.12
     # via tensorflow
-future==0.18.2
-    # via -r tools/model_tools/requirements-caffe2.in
 gast==0.3.3
     # via tensorflow
 google-auth==1.35.0
@@ -84,7 +82,6 @@ oauthlib==3.1.1
 onnx==1.10.1
     # via
     #   -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_onnx.txt
-    #   -r tools/model_tools/requirements-caffe2.in
     #   -r tools/model_tools/requirements-pytorch.in
 opt-einsum==3.3.0
     # via tensorflow
@@ -126,7 +123,6 @@ six==1.15.0
     #   google-auth
     #   google-pasta
     #   grpcio
-    #   h5py
     #   keras-preprocessing
     #   onnx
     #   protobuf
@@ -147,7 +143,6 @@ termcolor==1.1.0
     # via tensorflow
 torch==1.8.1
     # via
-    #   -r tools/model_tools/requirements-caffe2.in
     #   -r tools/model_tools/requirements-pytorch.in
     #   torchvision
 torchvision==0.9.1
```

ci/requirements-quantization.txt (-2)

```diff
@@ -20,8 +20,6 @@ defusedxml==0.7.1
     # via
     #   -r ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/requirements_kaldi.txt
     #   -r tools/accuracy_checker/requirements-core.in
-editdistance==0.5.3
-    # via -r tools/accuracy_checker/requirements.in
 fast-ctc-decode==0.3.0
     # via -r tools/accuracy_checker/requirements.in
 filelock==3.0.12
```

ci/update-requirements.py (+1 -1)

```diff
@@ -87,7 +87,7 @@ def pc(target, *sources):
 pc('ci/requirements-check-basics.txt',
     'ci/requirements-check-basics.in', 'ci/requirements-documentation.in')
 pc('ci/requirements-conversion.txt',
-    *(f'tools/model_tools/requirements-{suffix}.in' for suffix in ['caffe2', 'pytorch', 'tensorflow']),
+    *(f'tools/model_tools/requirements-{suffix}.in' for suffix in ['pytorch', 'tensorflow']),
     *(openvino_dir / f'deployment_tools/model_optimizer/requirements_{suffix}.txt'
         for suffix in ['caffe', 'mxnet', 'onnx', 'tf2']))
 pc('ci/requirements-demos.txt',
```
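Here `pc` pairs a compiled (pinned) requirements file with its `.in` sources, so dropping `'caffe2'` from the suffix list simply stops feeding `tools/model_tools/requirements-caffe2.in` into the compilation of `ci/requirements-conversion.txt`. As a rough sketch of what such a helper might look like, assuming it wraps pip-compile (the flags and error handling are illustrative, not the script's exact code):

```python
# Hypothetical sketch of pc(): compile a pinned requirements file from its
# source .in/.txt files using pip-compile. Flags here are assumptions.
import subprocess

def pc(target, *sources):
    subprocess.run(
        ['pip-compile', '--quiet', '--output-file', str(target),
         *map(str, sources)],
        check=True)  # abort the requirements update if pip-compile fails
```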

data/datasets.md (+3 -3)

```diff
@@ -40,7 +40,7 @@ To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the follo
 
 ### Datasets in dataset_definitions.yml
 * `imagenet_1000_classes` used for evaluation models trained on ILSVRC 2012 dataset with 1000 classes. (model examples: [`alexnet`](../models/public/alexnet/README.md), [`vgg16`](../models/public/vgg16/README.md))
-* `imagenet_1000_classes_2015` used for evaluation models trained on ILSVRC 2015 dataset with 1000 classes. (model examples: [`se-resnet-152`](../models/public/se-resnet-152/README.md), [`se-resnext-50`](../models/public/se-resnext-50/README.md))
+* `imagenet_1000_classes_2015` used for evaluation models trained on ILSVRC 2015 dataset with 1000 classes. (model examples: [`se-resnet-50`](../models/public/se-resnet-50/README.md), [`se-resnext-50`](../models/public/se-resnext-50/README.md))
 * `imagenet_1001_classes` used for evaluation models trained on ILSVRC 2012 dataset with 1001 classes (background label + original labels). (model examples: [`googlenet-v2-tf`](../models/public/googlenet-v2-tf/README.md), [`resnet-50-tf`](../models/public/resnet-50-tf/README.md))
 
 ## [Common Objects in Context (COCO)](https://cocodataset.org/#home)
@@ -62,9 +62,9 @@ To use this dataset with OMZ tools, make sure `<DATASET_DIR>` contains the follo
 
 ### Datasets in dataset_definitions.yml
 * `ms_coco_mask_rcnn` used for evaluation models trained on COCO dataset for object detection and instance segmentation tasks. Background label + label map with 80 public available object categories are used. Annotations are saved in order of ascending image ID.
-* `ms_coco_detection_91_classes` used for evaluation models trained on COCO dataset for object detection tasks. Background label + label map with 80 public available object categories are used (original indexing to 91 categories is preserved. You can find more information about object categories labels [here](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/)). Annotations are saved in order of ascending image ID. (model examples: [`faster_rcnn_resnet50_coco`](../models/public/faster_rcnn_resnet50_coco/README.md), [`ssd_resnet50_v1_fpn_coco`](../models/public/ssd_resnet50_v1_fpn_coco/README.md))
+* `ms_coco_detection_91_classes` used for evaluation models trained on COCO dataset for object detection tasks. Background label + label map with 80 public available object categories are used (original indexing to 91 categories is preserved. You can find more information about object categories labels [here](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/)). Annotations are saved in order of ascending image ID. (model examples: [`faster_rcnn_resnet50_coco`](../models/public/faster_rcnn_resnet50_coco/README.md), [`ssd_mobilenet_v1_coco`](../models/public/ssd_mobilenet_v1_coco/README.md))
 * `ms_coco_detection_80_class_with_background` used for evaluation models trained on COCO dataset for object detection tasks. Background label + label map with 80 public available object categories are used. Annotations are saved in order of ascending image ID. (model examples: [`faster-rcnn-resnet101-coco-sparse-60-0001`](../models/intel/faster-rcnn-resnet101-coco-sparse-60-0001/README.md), [`ssd-resnet34-1200-onnx`](../models/public/ssd-resnet34-1200-onnx/README.md))
-* `ms_coco_detection_80_class_without_background` used for evaluation models trained on COCO dataset for object detection tasks. Label map with 80 public available object categories is used. Annotations are saved in order of ascending image ID. (model examples: [`ctdet_coco_dlav0_384`](../models/public/ctdet_coco_dlav0_384/README.md), [`yolo-v3-tf`](../models/public/yolo-v3-tf/README.md))
+* `ms_coco_detection_80_class_without_background` used for evaluation models trained on COCO dataset for object detection tasks. Label map with 80 public available object categories is used. Annotations are saved in order of ascending image ID. (model examples: [`ctdet_coco_dlav0_512`](../models/public/ctdet_coco_dlav0_512/README.md), [`yolo-v3-tf`](../models/public/yolo-v3-tf/README.md))
 * `ms_coco_keypoints` used for evaluation models trained on COCO dataset for human pose estimation tasks. Each annotation stores multiple keypoints for one image. (model examples: [`human-pose-estimation-0001`](../models/intel/human-pose-estimation-0001/README.md))
 * `ms_coco_single_keypoints` used for evaluation models trained on COCO dataset for human pose estimation tasks. Each annotation stores single keypoints for image, so several annotation can be associated to one image. (model examples: [`single-human-pose-estimation-0001`](../models/public/single-human-pose-estimation-0001/README.md))
```
demos/classification_benchmark_demo/cpp/README.md (-31)

```diff
@@ -38,78 +38,50 @@ omz_converter --list models.lst
 * alexnet
 * caffenet
 * densenet-121
-* densenet-121-caffe2
 * densenet-121-tf
-* densenet-161
-* densenet-161-tf
-* densenet-169
-* densenet-169-tf
-* densenet-201
-* densenet-201-tf
 * dla-34
 * efficientnet-b0
-* efficientnet-b0_auto_aug
 * efficientnet-b0-pytorch
-* efficientnet-b5
-* efficientnet-b5-pytorch
-* efficientnet-b7_auto_aug
-* efficientnet-b7-pytorch
 * googlenet-v1
 * googlenet-v1-tf
 * googlenet-v2
 * googlenet-v3
 * googlenet-v3-pytorch
 * googlenet-v4-tf
 * hbonet-0.25
-* hbonet-0.5
 * hbonet-1.0
 * inception-resnet-v2-tf
 * mixnet-l
 * mobilenet-v1-0.25-128
-* mobilenet-v1-0.50-160
-* mobilenet-v1-0.50-224
 * mobilenet-v1-1.0-224
 * mobilenet-v1-1.0-224-tf
 * mobilenet-v2
 * mobilenet-v2-1.0-224
 * mobilenet-v2-1.4-224
 * mobilenet-v2-pytorch
 * nfnet-f0
-* octave-densenet-121-0.125
-* octave-resnet-101-0.125
-* octave-resnet-200-0.125
 * octave-resnet-26-0.25
-* octave-resnet-50-0.125
-* octave-resnext-101-0.25
-* octave-resnext-50-0.25
-* octave-se-resnet-50-0.125
 * regnetx-3.2gf
 * repvgg-a0
 * repvgg-b1
 * repvgg-b3
 * resnest-50-pytorch
 * resnet-18-pytorch
-* resnet-50-caffe2
 * resnet-50-pytorch
 * resnet-50-tf
 * resnet18-xnor-binary-onnx-0001
 * resnet50-binary-0001
 * rexnet-v1-x1.0
 * se-inception
-* se-resnet-101
-* se-resnet-152
 * se-resnet-50
-* se-resnext-101
 * se-resnext-50
 * shufflenet-v2-x0.5
 * shufflenet-v2-x1.0
 * squeezenet1.0
 * squeezenet1.1
-* squeezenet1.1-caffe2
 * swin-tiny-patch4-window7-224
 * vgg16
 * vgg19
-* vgg19-caffe2
 
 > **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.
@@ -136,10 +108,7 @@ Please note that you should use `<omz_dir>/data/dataset_classes/imagenet_2015.tx
 
 * googlenet-v2
 * se-inception
-* se-resnet-101
-* se-resnet-152
 * se-resnet-50
-* se-resnext-101
 * se-resnext-50
 
 and `<omz_dir>/data/dataset_classes/imagenet_2012.txt` labels file with all other models supported by the demo.
```

demos/classification_benchmark_demo/cpp/models.lst (-28)

```diff
@@ -2,75 +2,47 @@
 alexnet
 caffenet
 densenet-121
-densenet-121-caffe2
 densenet-121-tf
-densenet-161
-densenet-161-tf
-densenet-169
-densenet-169-tf
-densenet-201
-densenet-201-tf
 dla-34
 efficientnet-b0
-efficientnet-b0_auto_aug
 efficientnet-b0-pytorch
-efficientnet-b5
-efficientnet-b5-pytorch
-efficientnet-b7_auto_aug
-efficientnet-b7-pytorch
 googlenet-v1
 googlenet-v1-tf
 googlenet-v2
 googlenet-v3
 googlenet-v3-pytorch
 googlenet-v4-tf
 hbonet-0.25
-hbonet-0.5
 hbonet-1.0
 inception-resnet-v2-tf
 mixnet-l
 mobilenet-v1-0.25-128
-mobilenet-v1-0.50-160
-mobilenet-v1-0.50-224
 mobilenet-v1-1.0-224
 mobilenet-v1-1.0-224-tf
 mobilenet-v2
 mobilenet-v2-1.0-224
 mobilenet-v2-1.4-224
 mobilenet-v2-pytorch
 nfnet-f0
-octave-densenet-121-0.125
-octave-resnet-101-0.125
-octave-resnet-200-0.125
 octave-resnet-26-0.25
-octave-resnet-50-0.125
-octave-resnext-101-0.25
-octave-resnext-50-0.25
-octave-se-resnet-50-0.125
 regnetx-3.2gf
 repvgg-a0
 repvgg-b1
 repvgg-b3
 resnest-50-pytorch
 resnet-18-pytorch
-resnet-50-caffe2
 resnet-50-pytorch
 resnet-50-tf
 resnet18-xnor-binary-onnx-0001
 resnet50-binary-0001
 rexnet-v1-x1.0
 se-inception
-se-resnet-101
-se-resnet-152
 se-resnet-50
-se-resnext-101
 se-resnext-50
 shufflenet-v2-x0.5
 shufflenet-v2-x1.0
 squeezenet1.0
 squeezenet1.1
-squeezenet1.1-caffe2
 swin-tiny-patch4-window7-224
 vgg16
 vgg19
-vgg19-caffe2
```

demos/classification_demo/python/README.md (-31)

```diff
@@ -34,22 +34,10 @@ omz_converter --list models.lst
 * alexnet
 * caffenet
 * densenet-121
-* densenet-121-caffe2
 * densenet-121-tf
-* densenet-161
-* densenet-161-tf
-* densenet-169
-* densenet-169-tf
-* densenet-201
-* densenet-201-tf
 * dla-34
 * efficientnet-b0
-* efficientnet-b0_auto_aug
 * efficientnet-b0-pytorch
-* efficientnet-b5
-* efficientnet-b5-pytorch
-* efficientnet-b7_auto_aug
-* efficientnet-b7-pytorch
 * efficientnet-v2-b0
 * efficientnet-v2-s
 * googlenet-v1
@@ -59,55 +47,39 @@ omz_converter --list models.lst
 * googlenet-v3-pytorch
 * googlenet-v4-tf
 * hbonet-0.25
-* hbonet-0.5
 * hbonet-1.0
 * inception-resnet-v2-tf
 * mixnet-l
 * mobilenet-v1-0.25-128
-* mobilenet-v1-0.50-160
-* mobilenet-v1-0.50-224
 * mobilenet-v1-1.0-224
 * mobilenet-v1-1.0-224-tf
 * mobilenet-v2
 * mobilenet-v2-1.0-224
 * mobilenet-v2-1.4-224
 * mobilenet-v2-pytorch
 * nfnet-f0
-* octave-densenet-121-0.125
-* octave-resnet-101-0.125
-* octave-resnet-200-0.125
 * octave-resnet-26-0.25
-* octave-resnet-50-0.125
-* octave-resnext-101-0.25
-* octave-resnext-50-0.25
-* octave-se-resnet-50-0.125
 * regnetx-3.2gf
 * repvgg-a0
 * repvgg-b1
 * repvgg-b3
 * resnest-50-pytorch
 * resnet-18-pytorch
-* resnet-50-caffe2
 * resnet-50-pytorch
 * resnet-50-tf
 * resnet18-xnor-binary-onnx-0001
 * resnet50-binary-0001
 * rexnet-v1-x1.0
 * se-inception
-* se-resnet-101
-* se-resnet-152
 * se-resnet-50
-* se-resnext-101
 * se-resnext-50
 * shufflenet-v2-x0.5
 * shufflenet-v2-x1.0
 * squeezenet1.0
 * squeezenet1.1
-* squeezenet1.1-caffe2
 * swin-tiny-patch4-window7-224
 * vgg16
 * vgg19
-* vgg19-caffe2
 
 > **NOTE**: Refer to the tables [Intel's Pre-Trained Models Device Support](../../../models/intel/device_support.md) and [Public Pre-Trained Models Device Support](../../../models/public/device_support.md) for the details on models inference support at different devices.
@@ -119,10 +91,7 @@ Please note that you should use `<omz_dir>/data/dataset_classes/imagenet_2015.tx
 
 * googlenet-v2
 * se-inception
-* se-resnet-101
-* se-resnet-152
 * se-resnet-50
-* se-resnext-101
 * se-resnext-50
 
 and `<omz_dir>/data/dataset_classes/imagenet_2012.txt` labels file with all other models supported by the demo.
```
