
Commit bb0860c

remove mxnet

Signed-off-by: Mengni Wang <mengni.wang@intel.com>
1 parent df1258d


59 files changed: +94, -3313 lines

.azure-pipelines/scripts/codeScan/pydocstyle/scan_path.txt (-1)

@@ -1,4 +1,3 @@
-/neural-compressor/neural_compressor/adaptor/mxnet_utils
 /neural-compressor/neural_compressor/adaptor/ox_utils
 /neural-compressor/neural_compressor/adaptor/tensorflow.py
 /neural-compressor/neural_compressor/adaptor/tf_utils

.azure-pipelines/scripts/fwk_version.sh (-1)

@@ -7,4 +7,3 @@ export torchvision_version='0.20.1'
 export ipex_version='2.5.0+cpu'
 export onnx_version='1.17.0'
 export onnxruntime_version='1.20.0'
-export mxnet_version='1.9.1'

.azure-pipelines/scripts/ut/env_setup.sh (-9)

@@ -16,7 +16,6 @@ echo "torchvision version is $torchvision_version"
 echo "ipex version is $ipex_version"
 echo "onnx version is $onnx_version"
 echo "onnxruntime version is $onnxruntime_version"
-echo "mxnet version is $mxnet_version"
 
 test_case=$1
 echo -e "##[group]test case is ${test_case}"
@@ -66,14 +65,6 @@ if [[ "${onnxruntime_version}" != "" ]]; then
     pip install optimum
 fi
 
-if [ "${mxnet_version}" != '' ]; then
-    pip install numpy==1.23.5
-    echo "re-install pycocotools resolve the issue with numpy..."
-    pip uninstall pycocotools -y
-    pip install --no-cache-dir pycocotools
-    pip install mxnet==${mxnet_version}
-fi
-
 # install special test env requirements
 # common deps
 pip install cmake

docs/source/adaptor.md (+1 -2)

@@ -17,7 +17,7 @@ Adaptor
 
 Intel® Neural Compressor builds the low-precision inference
 solution on popular deep learning frameworks such as TensorFlow, PyTorch,
-MXNet, and ONNX Runtime. The adaptor layer is the bridge between the
+and ONNX Runtime. The adaptor layer is the bridge between the
 tuning strategy and vanilla framework quantization APIs.
 
 ## Adaptor Support Matrix
@@ -27,7 +27,6 @@ tuning strategy and vanilla framework quantization APIs.
 |TensorFlow |&#10004; |
 |PyTorch |&#10004; |
 |ONNX |&#10004; |
-|MXNet |&#10004; |
 
 
 ## Working Flow

docs/source/add_new_adaptor.md (+1 -1)

@@ -11,7 +11,7 @@ How to Add An Adaptor
 - [Add quantize API according to tune cfg](#add-quantize-api-according-to-tune-cfg)
 
 ## Introduction
-Intel® Neural Compressor builds the low-precision inference solution on popular deep learning frameworks such as TensorFlow, PyTorch, MXNet, Keras and ONNX Runtime. The adaptor layer is the bridge between the tuning strategy and vanilla framework quantization APIs, each framework has own adaptor. The users can add new adaptor to set strategy capabilities.
+Intel® Neural Compressor builds the low-precision inference solution on popular deep learning frameworks such as TensorFlow, PyTorch, Keras and ONNX Runtime. The adaptor layer is the bridge between the tuning strategy and vanilla framework quantization APIs, each framework has own adaptor. The users can add new adaptor to set strategy capabilities.
 
 The document outlines the process of adding support for a new adaptor, in Intel® Neural Compressor with minimal changes. It provides instructions and code examples for implementation of a new adaptor. By following the steps outlined in the document, users can extend Intel® Neural Compressor's functionality to accommodate new adaptor and incorporate it into quantization workflows.
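The introduction quoted above describes each framework adaptor as the bridge between the tuning strategy and that framework's quantization APIs. As a rough illustration of the pattern the doc goes on to detail, a new adaptor is typically a registered class exposing capability-query, quantize, and evaluate hooks. The sketch below is hypothetical: the decorator and method names mirror the existing adaptors under neural_compressor/adaptor/, but the class itself is invented for illustration, not part of this commit.

```python
# Hypothetical skeleton of a new framework adaptor; the authoritative
# interface lives in neural_compressor/adaptor/adaptor.py.
from neural_compressor.adaptor.adaptor import Adaptor, adaptor_registry


@adaptor_registry
class MyFrameworkAdaptor(Adaptor):
    """Bridges the tuning strategy and MyFramework's quantization APIs."""

    def __init__(self, framework_specific_info):
        super().__init__(framework_specific_info)

    def query_fw_capability(self, model):
        # Report which ops/dtypes the framework can quantize,
        # so the strategy knows the tuning space.
        raise NotImplementedError

    def quantize(self, tune_cfg, model, dataloader, q_func=None):
        # Apply the strategy's chosen configuration to the model
        # through the framework's own quantization APIs.
        raise NotImplementedError

    def evaluate(self, model, dataloader, postprocess=None, metrics=None):
        # Run evaluation so the strategy can compare tuning trials.
        raise NotImplementedError
```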

docs/source/calibration.md (-5)

@@ -42,11 +42,6 @@ Currently, Intel® Neural Compressor supports three popular calibration algorith
     <td>minmax</td>
     <td>minmax, kl</td>
 </tr>
-<tr>
-    <td>MXNet</td>
-    <td>minmax</td>
-    <td>minmax, kl</td>
-</tr>
 <tr>
     <td>OnnxRuntime</td>
     <td>minmax</td>
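For context on the minmax/kl values in the table above: minmax calibration simply records the observed range of a tensor over the calibration data, while kl searches for a clipping threshold that minimizes KL divergence. A minimal NumPy sketch of the minmax part, illustrative only and not the Neural Compressor implementation:

```python
import numpy as np

def minmax_calibrate(batches):
    """Track the running min/max of an activation over calibration
    batches; the observed range then determines the quantization scale."""
    rmin, rmax = np.inf, -np.inf
    for batch in batches:
        rmin = min(rmin, float(batch.min()))
        rmax = max(rmax, float(batch.max()))
    return rmin, rmax

# Example: eight random "activation" batches stand in for real data.
rmin, rmax = minmax_calibrate(np.random.randn(10, 64) for _ in range(8))
```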

docs/source/dataloader.md (+1 -2)

@@ -33,7 +33,6 @@ Of cause, users can also use frameworks own dataloader in Neural Compressor.
 |---------------|:----------:|
 | TensorFlow | &#10004; |
 | Keras | &#10004; |
-| MXNet | &#10004; |
 | PyTorch | &#10004; |
 | ONNX Runtime | &#10004; |
 
@@ -45,7 +44,7 @@ Acceptable parameters for `DataLoader` API including:
 
 | Parameter | Description |
 |---------------|:----------:|
-|framework (str)| different frameworks, such as `tensorflow`, `tensorflow_itex`, `keras`, `mxnet`, `pytorch` and `onnxruntime`.|
+|framework (str)| different frameworks, such as `tensorflow`, `tensorflow_itex`, `keras`, `pytorch` and `onnxruntime`.|
 |dataset (object)| A dataset object from which to get data. Dataset must implement `__iter__` or `__getitem__` method.|
 |batch_size (int, optional)| How many samples per batch to load. Defaults to 1.|
 |collate_fn (Callable, optional)| Callable function that processes the batch you want to return from your dataloader. Defaults to None.|
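The parameter table above maps directly onto a `DataLoader` call. A minimal usage sketch reflecting the post-commit framework list; the in-memory dataset is a stand-in, and exact batch shapes depend on the default collate function:

```python
from neural_compressor.data import DataLoader

# Any object implementing __iter__ or __getitem__ can serve as the
# dataset; a toy list of (input, label) pairs stands in here.
dataset = [([0.0] * 8, 0) for _ in range(16)]

# "mxnet" is no longer an accepted framework value after this commit.
dataloader = DataLoader(framework="onnxruntime", dataset=dataset, batch_size=4)

for inputs, labels in dataloader:
    ...  # feed batches into calibration or evaluation
```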

docs/source/examples_readme.md (+1 -1)

@@ -4,7 +4,7 @@
 2. [Release Data](#release-data)
 
 ## Example List
-A wide variety of examples are provided to demonstrate the usage of Intel® Neural Compressor in different frameworks: TensorFlow, PyTorch, MXNet, and ONNX Runtime.
+A wide variety of examples are provided to demonstrate the usage of Intel® Neural Compressor in different frameworks: TensorFlow, PyTorch, and ONNX Runtime.
 View the [examples in Neural Compressor GitHub Repo](https://github.com/intel/neural-compressor/tree/master/examples).
 
 ## Release Data

docs/source/framework_yaml.md (+1 -2)

@@ -16,7 +16,7 @@ running user cases and setting up framework capabilities, respectively.
 Here, we introduce the framework YAML file, which describes the behavior of
 a specific framework. There is a corresponding framework YAML file for each framework supported by
 Intel® Neural Compressor - TensorFlow
-, Intel® Extension for TensorFlow*, PyTorch, Intel® Extension for PyTorch*, ONNX Runtime, and MXNet.
+, Intel® Extension for TensorFlow*, PyTorch, Intel® Extension for PyTorch* and ONNX Runtime.
 
 >**Note**: Before diving to the details, we recommend that the end users do NOT make modifications
 unless they have clear requirements that can only be met by modifying the attributes.
@@ -28,7 +28,6 @@ unless they have clear requirements that can only be met by modifying the attrib
 | TensorFlow | &#10004; |
 | PyTorch | &#10004; |
 | ONNX | &#10004; |
-| MXNet | &#10004; |
 
 
 ## Get started with Framework YAML Files

docs/source/infrastructure.md (-1)

@@ -188,4 +188,3 @@ Intel® Neural Compressor has unified interfaces which dispatch tasks to differe
 |TensorFlow |&#10004; |
 |PyTorch |&#10004; |
 |ONNX |plan to support in the future |
-|MXNet |&#10004; |

docs/source/metric.md (+1 -15)

@@ -7,9 +7,7 @@ Metrics
 
 2.2. [PyTorch](#pytorch)
 
-2.3. [MxNet](#mxnet)
-
-2.4. [ONNXRT](#onnxrt)
+2.3. [ONNXRT](#onnxrt)
 
 3. [Get Started with Metric](#get-started-with-metric)
 
@@ -56,18 +54,6 @@ Neural Compressor supports some built-in metrics that are popularly used in indu
 | MSE(compare_label) | **compare_label** (bool, default=True): Whether to compare label. False if there are no labels and will use FP32 preds as labels. | preds, labels | Computes Mean Squared Error (MSE) loss. |
 | F1() | None | preds, labels | Computes the F1 score of a binary classification problem. |
 
-### MXNet
-
-| Metric | Parameters | Inputs | Comments |
-| :------ | :------ | :------ | :------ |
-| topk(k) | **k** (int, default=1): Number of top elements to look at for computing accuracy | preds, labels | Computes top k predictions accuracy. |
-| Accuracy() | None | preds, labels | Computes accuracy classification score. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.Accuracy) for details. |
-| Loss() | None | preds, labels | A dummy metric for directly printing loss, it calculates the average of predictions. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/_modules/mxnet/metric.html#Loss) for details. |
-| MAE() | None | preds, labels | Computes Mean Absolute Error (MAE) loss. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.MAE) for details. |
-| RMSE(compare_label) | **compare_label** (bool, default=True): Whether to compare label. False if there are no labels and will use FP32 preds as labels. | preds, labels | Computes Root Mean Squared Error (RMSE) loss. |
-| MSE() | None | preds, labels | Computes Mean Squared Error (MSE) loss. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.MSE) for details. |
-| F1() | None | preds, labels | Computes the F1 score of a binary classification problem. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.F1) for details. |
-
 
 ### ONNXRT

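The `topk(k)` metric in the tables above, still available for the remaining frameworks, computes top-k accuracy. A plain NumPy sketch of that computation, for reference rather than the Neural Compressor implementation:

```python
import numpy as np

def topk_accuracy(preds, labels, k=1):
    """Fraction of samples whose true label appears among the k
    highest-scoring predictions."""
    topk = np.argsort(preds, axis=1)[:, -k:]  # indices of the k best scores
    hits = [label in row for label, row in zip(labels, topk)]
    return float(np.mean(hits))

preds = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2]])
labels = np.array([1, 2])
print(topk_accuracy(preds, labels, k=2))  # one hit out of two -> 0.5
```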
docs/source/migration.md (+5 -5)

@@ -69,7 +69,7 @@ from neural_compressor.config import PostTrainingQuantConfig, TuningCriterion, A
 
 PostTrainingQuantConfig(
     ## model: this parameter does not need to specially be defined;
-    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
     inputs="image_tensor", # input: same as in the conf.yaml;
     outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
     device="cpu", # device: same as in the conf.yaml;
@@ -178,7 +178,7 @@ from neural_compressor.config import QuantizationAwareTrainingConfig
 
 QuantizationAwareTrainingConfig(
     ## model: this parameter does not need to specially be defined;
-    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
     inputs="image_tensor", # input: same as in the conf.yaml;
     outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
     device="cpu", # device: same as in the conf.yaml;
@@ -570,7 +570,7 @@ from neural_compressor.config import MixedPrecisionConfig, TuningCriterion, Accu
 
 MixedPrecisionConfig(
     ## model: this parameter does not need to specially be defined;
-    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
     inputs="image_tensor", # input: same as in the conf.yaml;
     outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
     device="cpu", # device: same as in the conf.yaml;
@@ -667,7 +667,7 @@ version: 1.0
 
 model: # mandatory. used to specify model specific information.
   name: ssd_mobilenet_v1 # mandatory. the model name.
-  framework: tensorflow # mandatory. supported values are tensorflow, pytorch, pytorch_fx, pytorch_ipex, onnxrt_integer, onnxrt_qlinear or mxnet; allow new framework backend extension.
+  framework: tensorflow # mandatory. supported values are tensorflow, pytorch, pytorch_fx, pytorch_ipex, onnxrt_integer or onnxrt_qlinear; allow new framework backend extension.
   inputs: image_tensor # optional. inputs and outputs fields are only required in tensorflow.
   outputs: num_detections,detection_boxes,detection_scores,detection_classes
 
@@ -711,7 +711,7 @@ from neural_compressor.config import BenchmarkConfig
 
 BenchmarkConfig(
     ## model: this parameter does not need to specially be defined;
-    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+    backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
     inputs="image_tensor", # input: same as in the conf.yaml;
     outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
     device="cpu", # device: same as in the conf.yaml;
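All five hunks above make the same edit to the `backend` comment: after this commit, "default" covers tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear, and "ipex" covers pytorch_ipex. A minimal post-commit config sketch; the inputs/outputs values are the illustrative ones from the doc itself:

```python
from neural_compressor.config import PostTrainingQuantConfig

# "mxnet" no longer appears among the framework/backend options.
conf = PostTrainingQuantConfig(
    backend="default",  # tensorflow, pytorch, pytorch_fx, onnxrt_*; use "ipex" for pytorch_ipex
    device="cpu",
    inputs="image_tensor",
    outputs="num_detections,detection_boxes,detection_scores,detection_classes",
)
```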

docs/source/mixed_precision.md (-9)

@@ -98,15 +98,6 @@ The recently launched 3rd Gen Intel® Xeon® Scalable processor (codenamed Coope
     <td align="left">&#10004;</td>
     <td align="left">:x:</td>
 </tr>
-<tr>
-    <td align="left">MXNet</td>
-    <td align="left">OneDNN</td>
-    <td align="left">OneDNN</td>
-    <td align="left">"default"</td>
-    <td align="left">cpu</td>
-    <td align="left">&#10004;</td>
-    <td align="left">:x:</td>
-</tr>
 </tbody>
 </table>
 

docs/source/model.md (-9)

@@ -92,15 +92,6 @@ The Neural Compressor Model feature is used to encapsulate the behavior of model
     <td>onnx.onnx_ml_pb2.ModelProto</td>
     <td>frozen onnx</td>
 </tr>
-<tr>
-    <td rowspan=2>MXNet</td>
-    <td>mxnet.gluon.HybridBlock</td>
-    <td>save_path.json</td>
-</tr>
-<tr>
-    <td>mxnet.symbol.Symbol</td>
-    <td>save_path-symbol.json and save_path-0000.params</td>
-</tr>
 </tbody>
 </table>
 

docs/source/quantization.md (+1 -14)

@@ -60,7 +60,6 @@ Sometimes the reduce_range feature, that's using 7 bit width (1 sign bit + 6 dat
 | TensorFlow | [oneDNN](https://github.com/oneapi-src/oneDNN) | Activation (int8/uint8), Weight (int8) | - |
 | PyTorch | [FBGEMM](https://github.com/pytorch/FBGEMM) | Activation (uint8), Weight (int8) | Activation (uint8) |
 | PyTorch(IPEX) | [oneDNN](https://github.com/oneapi-src/oneDNN) | Activation (int8/uint8), Weight (int8) | - |
-| MXNet | [oneDNN](https://github.com/oneapi-src/oneDNN) | Activation (int8/uint8), Weight (int8) | - |
 | ONNX Runtime | [MLAS](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/mlas) | Weight (int8) | Activation (uint8) |
 
 #### Quantization Scheme in TensorFlow
@@ -80,11 +79,6 @@ Sometimes the reduce_range feature, that's using 7 bit width (1 sign bit + 6 dat
     + int8: scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
     + uint8: scale = max(rmin, rmax) / (max(uint8) - min(uint8))
 
-#### Quantization Scheme in MXNet
-+ Symmetric Quantization
-    + int8: scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
-    + uint8: scale = max(rmin, rmax) / (max(uint8) - min(uint8))
-
 #### Quantization Scheme in ONNX Runtime
 + Symmetric Quantization
     + int8: scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
@@ -441,7 +435,7 @@ conf = PostTrainingQuantConfig(recipes=recipes)
 ```
 
 ### Specify Quantization Backend and Device
-Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Runtime and MXNet. The neural compressor will automatically determine which framework to use based on the model type, but for backend and device, users need to set it themselves in configure object.
+Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow and ONNX Runtime. The neural compressor will automatically determine which framework to use based on the model type, but for backend and device, users need to set it themselves in configure object.
 
 <table class="center">
 <thead>
@@ -511,13 +505,6 @@ Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Ru
     <td align="left">"itex"</td>
     <td align="left">cpu | gpu</td>
 </tr>
-<tr>
-    <td align="left">MXNet</td>
-    <td align="left">OneDNN</td>
-    <td align="left">OneDNN</td>
-    <td align="left">"default"</td>
-    <td align="left">cpu</td>
-</tr>
 </tbody>
 </table>
 <br>
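The int8 and uint8 scale formulas quoted in the scheme sections above are identical across the remaining backends. A small worked sketch of those two formulas, in plain Python and illustrative only:

```python
import numpy as np

def symmetric_scale_int8(rmin, rmax):
    # scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
    return 2 * max(abs(rmin), abs(rmax)) / (127 - (-128) - 1)  # denominator 254

def minmax_scale_uint8(rmin, rmax):
    # scale = max(rmin, rmax) / (max(uint8) - min(uint8))
    return max(rmin, rmax) / (255 - 0)

# Worked example: an activation observed in [-2.0, 6.0] during calibration.
s = symmetric_scale_int8(-2.0, 6.0)          # 12 / 254 ~= 0.04724
q = int(np.clip(round(6.0 / s), -127, 127))  # 6.0 quantizes to 127
```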
