remove mxnet #2146

Open · wants to merge 5 commits into base: master
1 change: 0 additions & 1 deletion .azure-pipelines/scripts/codeScan/pydocstyle/scan_path.txt
@@ -1,4 +1,3 @@
-/neural-compressor/neural_compressor/adaptor/mxnet_utils
/neural-compressor/neural_compressor/adaptor/ox_utils
/neural-compressor/neural_compressor/adaptor/tensorflow.py
/neural-compressor/neural_compressor/adaptor/tf_utils
1 change: 0 additions & 1 deletion .azure-pipelines/scripts/fwk_version.sh
@@ -7,4 +7,3 @@ export torchvision_version='0.20.1'
export ipex_version='2.5.0+cpu'
export onnx_version='1.17.0'
export onnxruntime_version='1.20.0'
-export mxnet_version='1.9.1'
9 changes: 0 additions & 9 deletions .azure-pipelines/scripts/ut/env_setup.sh
@@ -16,7 +16,6 @@ echo "torchvision version is $torchvision_version"
echo "ipex version is $ipex_version"
echo "onnx version is $onnx_version"
echo "onnxruntime version is $onnxruntime_version"
echo "mxnet version is $mxnet_version"

test_case=$1
echo -e "##[group]test case is ${test_case}"
@@ -66,14 +65,6 @@ if [[ "${onnxruntime_version}" != "" ]]; then
pip install optimum==1.24.0
fi

if [ "${mxnet_version}" != '' ]; then
pip install numpy==1.23.5
echo "re-install pycocotools resolve the issue with numpy..."
pip uninstall pycocotools -y
pip install --no-cache-dir pycocotools
pip install mxnet==${mxnet_version}
fi

# install special test env requirements
# common deps
pip install cmake
3 changes: 1 addition & 2 deletions docs/source/adaptor.md
@@ -17,7 +17,7 @@ Adaptor

Intel® Neural Compressor builds the low-precision inference
solution on popular deep learning frameworks such as TensorFlow, PyTorch,
-MXNet, and ONNX Runtime. The adaptor layer is the bridge between the
+and ONNX Runtime. The adaptor layer is the bridge between the
tuning strategy and vanilla framework quantization APIs.

## Adaptor Support Matrix
@@ -27,7 +27,6 @@ tuning strategy and vanilla framework quantization APIs.
|TensorFlow |✔ |
|PyTorch |✔ |
|ONNX |✔ |
-|MXNet |✔ |


## Working Flow
2 changes: 1 addition & 1 deletion docs/source/add_new_adaptor.md
@@ -11,7 +11,7 @@ How to Add An Adaptor
- [Add quantize API according to tune cfg](#add-quantize-api-according-to-tune-cfg)

## Introduction
-Intel® Neural Compressor builds the low-precision inference solution on popular deep learning frameworks such as TensorFlow, PyTorch, MXNet, Keras and ONNX Runtime. The adaptor layer is the bridge between the tuning strategy and vanilla framework quantization APIs, each framework has own adaptor. The users can add new adaptor to set strategy capabilities.
+Intel® Neural Compressor builds the low-precision inference solution on popular deep learning frameworks such as TensorFlow, PyTorch, Keras and ONNX Runtime. The adaptor layer is the bridge between the tuning strategy and vanilla framework quantization APIs, each framework has own adaptor. The users can add new adaptor to set strategy capabilities.

The document outlines the process of adding support for a new adaptor, in Intel® Neural Compressor with minimal changes. It provides instructions and code examples for implementation of a new adaptor. By following the steps outlined in the document, users can extend Intel® Neural Compressor's functionality to accommodate new adaptor and incorporate it into quantization workflows.

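To make the extension point concrete, here is a minimal sketch of a new adaptor. The import path and method names mirror the `neural_compressor.adaptor` module; treat the exact signatures as assumptions rather than documented API.

```python
# Sketch only: import path and signatures are assumptions, not documented API.
from neural_compressor.adaptor.adaptor import Adaptor, adaptor_registry

@adaptor_registry
class MyFrameworkAdaptor(Adaptor):
    """Bridges the tuning strategy to a hypothetical framework's quantization APIs."""

    def query_fw_capability(self, model):
        # Report which ops/dtypes the framework can quantize, i.e. the
        # tuning space the strategy may explore.
        return {"opwise": {}, "optypewise": {}}

    def quantize(self, tune_cfg, model, dataloader, q_func=None):
        # Apply the strategy-chosen tune_cfg through the framework's own
        # quantization APIs and return the quantized model.
        raise NotImplementedError("framework-specific lowering goes here")
```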
5 changes: 0 additions & 5 deletions docs/source/calibration.md
@@ -42,11 +42,6 @@ Currently, Intel® Neural Compressor supports three popular calibration algorith
<td>minmax</td>
<td>minmax, kl</td>
</tr>
-<tr>
-<td>MXNet</td>
-<td>minmax</td>
-<td>minmax, kl</td>
-</tr>
<tr>
<td>OnnxRuntime</td>
<td>minmax</td>
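For intuition, the minmax algorithm the table retains amounts to tracking observed activation bounds over calibration batches. A minimal NumPy sketch (illustrative only, not INC's implementation):

```python
import numpy as np

def minmax_calibrate(batches):
    """Track the running min/max of activations over calibration batches;
    the resulting (rmin, rmax) feed the scale formulas in quantization.md."""
    rmin, rmax = float("inf"), float("-inf")
    for batch in batches:
        rmin = min(rmin, float(np.min(batch)))
        rmax = max(rmax, float(np.max(batch)))
    return rmin, rmax

# e.g.: minmax_calibrate(np.random.randn(8, 128) for _ in range(10))
```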
3 changes: 1 addition & 2 deletions docs/source/dataloader.md
@@ -33,7 +33,6 @@ Of cause, users can also use frameworks own dataloader in Neural Compressor.
|---------------|:----------:|
| TensorFlow | &#10004; |
| Keras | &#10004; |
-| MXNet | &#10004; |
| PyTorch | &#10004; |
| ONNX Runtime | &#10004; |

@@ -45,7 +44,7 @@ Acceptable parameters for `DataLoader` API including:

| Parameter | Description |
|---------------|:----------:|
-|framework (str)| different frameworks, such as `tensorflow`, `tensorflow_itex`, `keras`, `mxnet`, `pytorch` and `onnxruntime`.|
+|framework (str)| different frameworks, such as `tensorflow`, `tensorflow_itex`, `keras`, `pytorch` and `onnxruntime`.|
|dataset (object)| A dataset object from which to get data. Dataset must implement `__iter__` or `__getitem__` method.|
|batch_size (int, optional)| How many samples per batch to load. Defaults to 1.|
|collate_fn (Callable, optional)| Callable function that processes the batch you want to return from your dataloader. Defaults to None.|
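A usage sketch for the `DataLoader` parameters above, assuming the INC 2.x import path; `RandomDataset` is a made-up stand-in, not a library fixture:

```python
import numpy as np
from neural_compressor.data import DataLoader  # import path assumed from INC 2.x

class RandomDataset:
    """Stand-in dataset: anything implementing __getitem__/__len__ (or __iter__) works."""
    def __len__(self):
        return 16
    def __getitem__(self, idx):
        return np.random.rand(3, 224, 224).astype(np.float32), 0  # (sample, label)

dataloader = DataLoader(
    framework="onnxruntime",   # any framework string from the table above
    dataset=RandomDataset(),
    batch_size=4,              # defaults to 1
)
for inputs, labels in dataloader:
    pass  # feed batches to calibration or evaluation
```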
2 changes: 1 addition & 1 deletion docs/source/examples_readme.md
@@ -4,7 +4,7 @@
2. [Release Data](#release-data)

## Example List
-A wide variety of examples are provided to demonstrate the usage of Intel® Neural Compressor in different frameworks: TensorFlow, PyTorch, MXNet, and ONNX Runtime.
+A wide variety of examples are provided to demonstrate the usage of Intel® Neural Compressor in different frameworks: TensorFlow, PyTorch, and ONNX Runtime.
View the [examples in Neural Compressor GitHub Repo](https://github.com/intel/neural-compressor/tree/master/examples).

## Release Data
3 changes: 1 addition & 2 deletions docs/source/framework_yaml.md
@@ -16,7 +16,7 @@ running user cases and setting up framework capabilities, respectively.
Here, we introduce the framework YAML file, which describes the behavior of
a specific framework. There is a corresponding framework YAML file for each framework supported by
Intel® Neural Compressor - TensorFlow
-, Intel® Extension for TensorFlow*, PyTorch, Intel® Extension for PyTorch*, ONNX Runtime, and MXNet.
+, Intel® Extension for TensorFlow*, PyTorch, Intel® Extension for PyTorch* and ONNX Runtime.

>**Note**: Before diving to the details, we recommend that the end users do NOT make modifications
unless they have clear requirements that can only be met by modifying the attributes.
@@ -28,7 +28,6 @@ unless they have clear requirements that can only be met by modifying the attrib
| TensorFlow | &#10004; |
| PyTorch | &#10004; |
| ONNX | &#10004; |
-| MXNet | &#10004; |


## Get started with Framework YAML Files
1 change: 0 additions & 1 deletion docs/source/infrastructure.md
@@ -188,4 +188,3 @@ Intel® Neural Compressor has unified interfaces which dispatch tasks to differe
|TensorFlow |&#10004; |
|PyTorch |&#10004; |
|ONNX |plan to support in the future |
-|MXNet |&#10004; |
16 changes: 1 addition & 15 deletions docs/source/metric.md
@@ -7,9 +7,7 @@ Metrics

2.2. [PyTorch](#pytorch)

-2.3. [MxNet](#mxnet)
-
-2.4. [ONNXRT](#onnxrt)
+2.3. [ONNXRT](#onnxrt)

3. [Get Started with Metric](#get-started-with-metric)

@@ -56,18 +54,6 @@ Neural Compressor supports some built-in metrics that are popularly used in indu
| MSE(compare_label) | **compare_label** (bool, default=True): Whether to compare label. False if there are no labels and will use FP32 preds as labels. | preds, labels | Computes Mean Squared Error (MSE) loss. |
| F1() | None | preds, labels | Computes the F1 score of a binary classification problem. |

-### MXNet
-
-| Metric | Parameters | Inputs | Comments |
-| :------ | :------ | :------ | :------ |
-| topk(k) | **k** (int, default=1): Number of top elements to look at for computing accuracy | preds, labels | Computes top k predictions accuracy. |
-| Accuracy() | None | preds, labels | Computes accuracy classification score. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.Accuracy) for details. |
-| Loss() | None | preds, labels | A dummy metric for directly printing loss, it calculates the average of predictions. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/_modules/mxnet/metric.html#Loss) for details. |
-| MAE() | None | preds, labels | Computes Mean Absolute Error (MAE) loss. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.MAE) for details. |
-| RMSE(compare_label) | **compare_label** (bool, default=True): Whether to compare label. False if there are no labels and will use FP32 preds as labels. | preds, labels | Computes Root Mean Squared Error (RMSE) loss. |
-| MSE() | None | preds, labels | Computes Mean Squared Error (MSE) loss. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.MSE) for details. |
-| F1() | None | preds, labels | Computes the F1 score of a binary classification problem. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.F1) for details. |
-

### ONNXRT

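For reference, the topk(k) metric in the tables above reduces to a short computation. A standalone NumPy sketch, not INC's implementation:

```python
import numpy as np

def topk_accuracy(preds: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """preds: (N, num_classes) scores; labels: (N,) integer class ids."""
    topk = np.argsort(preds, axis=1)[:, -k:]        # indices of the k highest scores
    hits = (topk == labels[:, None]).any(axis=1)    # is the label among the top-k?
    return float(hits.mean())

# e.g. topk_accuracy(np.array([[0.1, 0.9], [0.8, 0.2]]), np.array([1, 1]), k=1) -> 0.5
```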
10 changes: 5 additions & 5 deletions docs/source/migration.md
@@ -69,7 +69,7 @@ from neural_compressor.config import PostTrainingQuantConfig, TuningCriterion, A

PostTrainingQuantConfig(
## model: this parameter does not need to specially be defined;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
inputs="image_tensor", # input: same as in the conf.yaml;
outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
device="cpu", # device: same as in the conf.yaml;
@@ -178,7 +178,7 @@ from neural_compressor.config import QuantizationAwareTrainingConfig

QuantizationAwareTrainingConfig(
## model: this parameter does not need to specially be defined;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
inputs="image_tensor", # input: same as in the conf.yaml;
outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
device="cpu", # device: same as in the conf.yaml;
@@ -570,7 +570,7 @@ from neural_compressor.config import MixedPrecisionConfig, TuningCriterion, Accu

MixedPrecisionConfig(
## model: this parameter does not need to specially be defined;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
inputs="image_tensor", # input: same as in the conf.yaml;
outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
device="cpu", # device: same as in the conf.yaml;
@@ -667,7 +667,7 @@ version: 1.0

model: # mandatory. used to specify model specific information.
name: ssd_mobilenet_v1 # mandatory. the model name.
-framework: tensorflow # mandatory. supported values are tensorflow, pytorch, pytorch_fx, pytorch_ipex, onnxrt_integer, onnxrt_qlinear or mxnet; allow new framework backend extension.
+framework: tensorflow # mandatory. supported values are tensorflow, pytorch, pytorch_fx, pytorch_ipex, onnxrt_integer or onnxrt_qlinear; allow new framework backend extension.
inputs: image_tensor # optional. inputs and outputs fields are only required in tensorflow.
outputs: num_detections,detection_boxes,detection_scores,detection_classes

@@ -711,7 +711,7 @@ from neural_compressor.config import BenchmarkConfig

BenchmarkConfig(
## model: this parameter does not need to specially be defined;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
inputs="image_tensor", # input: same as in the conf.yaml;
outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
device="cpu", # device: same as in the conf.yaml;
9 changes: 0 additions & 9 deletions docs/source/mixed_precision.md
@@ -98,15 +98,6 @@ The recently launched 3rd Gen Intel® Xeon® Scalable processor (codenamed Coope
<td align="left">&#10004;</td>
<td align="left">:x:</td>
</tr>
-<tr>
-<td align="left">MXNet</td>
-<td align="left">OneDNN</td>
-<td align="left">OneDNN</td>
-<td align="left">"default"</td>
-<td align="left">cpu</td>
-<td align="left">&#10004;</td>
-<td align="left">:x:</td>
-</tr>
</tbody>
</table>

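The remaining rows of the matrix above parameterize `MixedPrecisionConfig`. A usage sketch assuming the INC 2.x API; `model` stands in for any supported framework model:

```python
from neural_compressor import mix_precision          # entry point assumed from INC 2.x
from neural_compressor.config import MixedPrecisionConfig

def convert_to_bf16(model):
    """`model` is any framework model from a supported row of the matrix."""
    conf = MixedPrecisionConfig(backend="default", device="cpu")
    return mix_precision.fit(model, conf=conf)
```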
9 changes: 0 additions & 9 deletions docs/source/model.md
@@ -92,15 +92,6 @@ The Neural Compressor Model feature is used to encapsulate the behavior of model
<td>onnx.onnx_ml_pb2.ModelProto</td>
<td>frozen onnx</td>
</tr>
-<tr>
-<td rowspan=2>MXNet</td>
-<td>mxnet.gluon.HybridBlock</td>
-<td>save_path.json</td>
-</tr>
-<tr>
-<td>mxnet.symbol.Symbol</td>
-<td>save_path-symbol.json and save_path-0000.params</td>
-</tr>
</tbody>
</table>

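The remaining model types are wrapped uniformly. A sketch assuming the 2.x `Model` wrapper; the import path and file path are placeholders:

```python
from neural_compressor.model import Model  # wrapper class; import path assumed

# Neural Compressor detects the framework from the object (or file) passed in;
# the path below is a placeholder, not a shipped fixture.
inc_model = Model("path/to/frozen_model.pb")
```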
15 changes: 1 addition & 14 deletions docs/source/quantization.md
@@ -60,7 +60,6 @@ Sometimes the reduce_range feature, that's using 7 bit width (1 sign bit + 6 dat
| TensorFlow | [oneDNN](https://github.com/oneapi-src/oneDNN) | Activation (int8/uint8), Weight (int8) | - |
| PyTorch | [FBGEMM](https://github.com/pytorch/FBGEMM) | Activation (uint8), Weight (int8) | Activation (uint8) |
| PyTorch(IPEX) | [oneDNN](https://github.com/oneapi-src/oneDNN) | Activation (int8/uint8), Weight (int8) | - |
-| MXNet | [oneDNN](https://github.com/oneapi-src/oneDNN) | Activation (int8/uint8), Weight (int8) | - |
| ONNX Runtime | [MLAS](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/mlas) | Weight (int8) | Activation (uint8) |

#### Quantization Scheme in TensorFlow
@@ -80,11 +79,6 @@ Sometimes the reduce_range feature, that's using 7 bit width (1 sign bit + 6 dat
+ int8: scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
+ uint8: scale = max(rmin, rmax) / (max(uint8) - min(uint8))

-#### Quantization Scheme in MXNet
-+ Symmetric Quantization
-+ int8: scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
-+ uint8: scale = max(rmin, rmax) / (max(uint8) - min(uint8))
-
#### Quantization Scheme in ONNX Runtime
+ Symmetric Quantization
+ int8: scale = 2 * max(abs(rmin), abs(rmax)) / (max(int8) - min(int8) - 1)
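The scheme formulas above, restated as plain Python for cross-checking (function names are illustrative):

```python
def symmetric_int8_scale(rmin: float, rmax: float) -> float:
    # int8: scale = 2 * max(|rmin|, |rmax|) / (max(int8) - min(int8) - 1)
    # denominator: 127 - (-128) - 1 = 254
    return 2 * max(abs(rmin), abs(rmax)) / 254

def asymmetric_uint8_scale(rmin: float, rmax: float) -> float:
    # uint8: scale = max(rmin, rmax) / (max(uint8) - min(uint8)) = ... / 255
    return max(rmin, rmax) / 255

# An activation observed in [-2.0, 6.0]:
# symmetric_int8_scale(-2.0, 6.0)   -> ~0.04724
# asymmetric_uint8_scale(-2.0, 6.0) -> ~0.02353
```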
@@ -441,7 +435,7 @@ conf = PostTrainingQuantConfig(recipes=recipes)
```

### Specify Quantization Backend and Device
-Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Runtime and MXNet. The neural compressor will automatically determine which framework to use based on the model type, but for backend and device, users need to set it themselves in configure object.
+Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow and ONNX Runtime. The neural compressor will automatically determine which framework to use based on the model type, but for backend and device, users need to set it themselves in configure object.

<table class="center">
<thead>
@@ -511,13 +505,6 @@ Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Ru
<td align="left">"itex"</td>
<td align="left">cpu | gpu</td>
</tr>
-<tr>
-<td align="left">MXNet</td>
-<td align="left">OneDNN</td>
-<td align="left">OneDNN</td>
-<td align="left">"default"</td>
-<td align="left">cpu</td>
-</tr>
</tbody>
</table>
<br>
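A configuration sketch for the backend/device columns above, assuming the INC 2.x config API:

```python
from neural_compressor.config import PostTrainingQuantConfig

# "default" covers stock TensorFlow / PyTorch / ONNX Runtime; "ipex" / "itex"
# select the Intel extension backends, with device="gpu" where the table allows.
conf = PostTrainingQuantConfig(backend="default", device="cpu")
```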