**docs/source/add_new_adaptor.md** (+1 −1)

```diff
@@ -11,7 +11,7 @@ How to Add An Adaptor
 [Add quantize API according to tune cfg](#add-quantize-api-according-to-tune-cfg)

 ## Introduction
-Intel® Neural Compressor builds the low-precision inference solution on popular deep learning frameworks such as TensorFlow, PyTorch, MXNet, Keras and ONNX Runtime. The adaptor layer is the bridge between the tuning strategy and vanilla framework quantization APIs, each framework has own adaptor. The users can add new adaptor to set strategy capabilities.
+Intel® Neural Compressor builds the low-precision inference solution on popular deep learning frameworks such as TensorFlow, PyTorch, Keras and ONNX Runtime. The adaptor layer is the bridge between the tuning strategy and vanilla framework quantization APIs, each framework has own adaptor. The users can add new adaptor to set strategy capabilities.

 The document outlines the process of adding support for a new adaptor in Intel® Neural Compressor with minimal changes. It provides instructions and code examples for implementation of a new adaptor. By following the steps outlined in the document, users can extend Intel® Neural Compressor's functionality to accommodate new adaptor and incorporate it into quantization workflows.
```
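To make the adaptor concept in the diff above concrete, here is a minimal plain-Python sketch of an adaptor registry that bridges a tuning strategy to per-framework quantization code. All names here (`register_adaptor`, `BaseAdaptor`, `ONNXRTAdaptor`) are illustrative assumptions for this sketch, not Intel® Neural Compressor's actual API.

```python
# Illustrative sketch only: a minimal adaptor registry pattern, NOT the
# actual Intel Neural Compressor implementation. All names are hypothetical.

ADAPTOR_REGISTRY = {}


def register_adaptor(framework):
    """Decorator mapping a framework name to its adaptor class."""
    def wrap(cls):
        ADAPTOR_REGISTRY[framework.lower()] = cls
        return cls
    return wrap


class BaseAdaptor:
    def query_fw_capability(self, model):
        """Report which ops/dtypes the framework backend can quantize."""
        raise NotImplementedError

    def quantize(self, tune_cfg, model, dataloader):
        """Apply one tuning configuration using the framework's own APIs."""
        raise NotImplementedError


@register_adaptor("onnxrt")
class ONNXRTAdaptor(BaseAdaptor):
    def query_fw_capability(self, model):
        # Toy capability table: which dtypes each op type supports.
        return {"optypewise": {"Conv": ["int8", "fp32"]}}

    def quantize(self, tune_cfg, model, dataloader):
        # Placeholder: a real adaptor would call the framework's quantizer.
        return model


# The strategy layer would look the adaptor up by framework name:
adaptor = ADAPTOR_REGISTRY["onnxrt"]()
print(sorted(ADAPTOR_REGISTRY))  # ['onnxrt']
```

Registering a new framework then only requires adding one decorated class, which is the extension point the document describes.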
**docs/source/examples_readme.md** (+1 −1)

```diff
@@ -4,7 +4,7 @@
 2. [Release Data](#release-data)

 ## Example List
-A wide variety of examples are provided to demonstrate the usage of Intel® Neural Compressor in different frameworks: TensorFlow, PyTorch, MXNet, and ONNX Runtime.
+A wide variety of examples are provided to demonstrate the usage of Intel® Neural Compressor in different frameworks: TensorFlow, PyTorch, and ONNX Runtime.

 View the [examples in Neural Compressor GitHub Repo](https://github.com/intel/neural-compressor/tree/master/examples).
```
**docs/source/metric.md** (+1 −15)

```diff
@@ -7,9 +7,7 @@ Metrics
 2.2. [PyTorch](#pytorch)

-2.3. [MxNet](#mxnet)
-
-2.4. [ONNXRT](#onnxrt)
+2.3. [ONNXRT](#onnxrt)

 3. [Get Started with Metric](#get-started-with-metric)
@@ -56,18 +54,6 @@ Neural Compressor supports some built-in metrics that are popularly used in industry
 | MSE(compare_label) | **compare_label** (bool, default=True): Whether to compare label. False if there are no labels and will use FP32 preds as labels. | preds, labels | Computes Mean Squared Error (MSE) loss. |
 | F1() | None | preds, labels | Computes the F1 score of a binary classification problem. |

-### MXNet
-
-| Metric | Parameters | Inputs | Comments |
-| :------ | :------ | :------ | :------ |
-| topk(k) | **k** (int, default=1): Number of top elements to look at for computing accuracy | preds, labels | Computes top k predictions accuracy. |
-| Loss() | None | preds, labels | A dummy metric for directly printing loss, it calculates the average of predictions. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/_modules/mxnet/metric.html#Loss) for details. |
-| MAE() | None | preds, labels | Computes Mean Absolute Error (MAE) loss. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.MAE) for details. |
-| RMSE(compare_label) | **compare_label** (bool, default=True): Whether to compare label. False if there are no labels and will use FP32 preds as labels. | preds, labels | Computes Root Mean Squared Error (RMSE) loss. |
-| MSE() | None | preds, labels | Computes Mean Squared Error (MSE) loss. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.MSE) for details. |
-| F1() | None | preds, labels | Computes the F1 score of a binary classification problem. <br> Please refer to [MXNet docs](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/metric/index.html#mxnet.metric.F1) for details. |
```
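The `topk(k)` metric that appears in the tables above can be illustrated with a small plain-Python sketch: a sample counts as correct when its label is among the `k` highest-scoring predictions. This is illustrative only, not Neural Compressor's built-in implementation (which is configured by metric name).

```python
# Plain-Python illustration of top-k accuracy semantics; not INC's code.

def topk_accuracy(preds, labels, k=1):
    """preds: list of per-class score lists; labels: list of int class ids."""
    correct = 0
    for scores, label in zip(preds, labels):
        # Indices of the k highest-scoring classes for this sample.
        top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        correct += label in top
    return correct / len(labels)


preds = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]
print(topk_accuracy(preds, labels, k=1))  # 0.5: only the first sample's top-1 hits
print(topk_accuracy(preds, labels, k=3))  # 1.0: with k=3 every label is covered
```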
**docs/source/migration.md** (+5 −5)

```diff
@@ -69,7 +69,7 @@ from neural_compressor.config import PostTrainingQuantConfig, TuningCriterion, AccuracyCriterion
 PostTrainingQuantConfig(
 ## model: this parameter does not need to specially be defined;
-backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
 inputs="image_tensor", # input: same as in the conf.yaml;
 outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
 device="cpu", # device: same as in the conf.yaml;
@@ -178,7 +178,7 @@ from neural_compressor.config import QuantizationAwareTrainingConfig
 QuantizationAwareTrainingConfig(
 ## model: this parameter does not need to specially be defined;
-backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
 inputs="image_tensor", # input: same as in the conf.yaml;
 outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
 device="cpu", # device: same as in the conf.yaml;
@@ -570,7 +570,7 @@ from neural_compressor.config import MixedPrecisionConfig, TuningCriterion, AccuracyCriterion
 MixedPrecisionConfig(
 ## model: this parameter does not need to specially be defined;
-backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
 inputs="image_tensor", # input: same as in the conf.yaml;
 outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
 device="cpu", # device: same as in the conf.yaml;
@@ -667,7 +667,7 @@ version: 1.0
 model: # mandatory. used to specify model specific information.
 name: ssd_mobilenet_v1 # mandatory. the model name.
-framework: tensorflow # mandatory. supported values are tensorflow, pytorch, pytorch_fx, pytorch_ipex, onnxrt_integer, onnxrt_qlinear or mxnet; allow new framework backend extension.
+framework: tensorflow # mandatory. supported values are tensorflow, pytorch, pytorch_fx, pytorch_ipex, onnxrt_integer or onnxrt_qlinear; allow new framework backend extension.
 inputs: image_tensor # optional. inputs and outputs fields are only required in tensorflow.
@@ -711,7 +711,7 @@ from neural_compressor.config import BenchmarkConfig
 BenchmarkConfig(
 ## model: this parameter does not need to specially be defined;
-backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex, mxnet is currently unsupported;
+backend="default", # framework: set as "default" when framework was tensorflow, pytorch, pytorch_fx, onnxrt_integer and onnxrt_qlinear. Set as "ipex" when framework was pytorch_ipex;
 inputs="image_tensor", # input: same as in the conf.yaml;
 outputs="num_detections,detection_boxes,detection_scores,detection_classes", # output: same as in the conf.yaml;
 device="cpu", # device: same as in the conf.yaml;
```
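The repeated `backend` comments in the migration hunks above all encode the same rule for mapping the old conf.yaml `framework` value to the new `backend` argument. A throwaway sketch of that rule (the helper name `backend_for` is hypothetical, not part of Neural Compressor's API):

```python
# Hypothetical helper illustrating the framework -> backend mapping stated
# in the migration comments above; not Intel Neural Compressor code.

DEFAULT_BACKENDS = {
    "tensorflow", "pytorch", "pytorch_fx", "onnxrt_integer", "onnxrt_qlinear",
}


def backend_for(framework: str) -> str:
    if framework in DEFAULT_BACKENDS:
        return "default"
    if framework == "pytorch_ipex":
        return "ipex"
    # mxnet and anything else is no longer supported by the new configs.
    raise ValueError(f"unsupported framework: {framework}")


print(backend_for("pytorch_fx"))    # default
print(backend_for("pytorch_ipex"))  # ipex
```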
```diff
-Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Runtime and MXNet. The neural compressor will automatically determine which framework to use based on the model type, but for backend and device, users need to set it themselves in configure object.
+Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow and ONNX Runtime. The neural compressor will automatically determine which framework to use based on the model type, but for backend and device, users need to set it themselves in configure object.
```
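The sentence above says the framework is determined automatically from the model type. A hypothetical illustration of such dispatch follows; the file-extension heuristics here are assumptions made for this sketch, not Neural Compressor's actual detection logic.

```python
# Hypothetical model-type -> framework detection, loosely illustrating the
# auto-detection described above; heuristics are assumptions, not INC's code.

def detect_framework(model_path: str) -> str:
    if model_path.endswith(".onnx"):
        return "onnxruntime"
    if model_path.endswith((".pt", ".pth")):
        return "pytorch"
    if model_path.endswith((".pb", ".h5")) or model_path.endswith("saved_model"):
        return "tensorflow"
    raise ValueError(f"cannot infer framework for {model_path}")


print(detect_framework("model.onnx"))  # onnxruntime
```

Backend and device, by contrast, cannot be inferred this way, which is why the config object requires the user to set them explicitly.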