### Changes
* Main README.md, Usage.md, and the post-training quantization docs are
updated with info about TorchFX support
### Reason for changes
* To reflect the new experimental TorchFX support in the docs
### Related tickets
#2766
README.md (+43 −6)
@@ -19,7 +19,7 @@
Neural Network Compression Framework (NNCF) provides a suite of post-training and training-time algorithms for optimizing inference of neural networks in [OpenVINO™](https://docs.openvino.ai) with a minimal accuracy drop.
- NNCF is designed to work with models from [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), [ONNX](https://onnx.ai/) and [OpenVINO™](https://docs.openvino.ai).
+ NNCF is designed to work with models from [PyTorch](https://pytorch.org/), [TorchFX](https://pytorch.org/docs/stable/fx.html), [TensorFlow](https://www.tensorflow.org/), [ONNX](https://onnx.ai/) and [OpenVINO™](https://docs.openvino.ai).
NNCF provides [samples](#demos-tutorials-and-samples) that demonstrate the usage of compression algorithms for different use cases and models. See compression results achievable with the NNCF-powered samples on the [NNCF Model Zoo page](./docs/ModelZoo.md).
| [Activation Sparsity](./nncf/experimental/torch/sparsify_activations/ActivationSparsity.md) | Not supported | Experimental | Not supported | Not supported | Not supported |
The return format of the data transformation function is directly the input tensors consumed by the model. \
_If you are not sure that your implementation of the data transformation function is correct, you can validate it by using the
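As a minimal sketch of the return-format contract described above (a plain-Python stand-in for illustration; the `transform_fn` name and the `(images, labels)` item layout are assumptions, not the exact NNCF example code), the transformation takes one item from the calibration data source and returns exactly what the model consumes:

```python
# Minimal sketch of a data transformation function (hypothetical names).
# A calibration data source yields (inputs, labels) pairs; the model only
# needs the inputs, so the transform strips the labels.

def transform_fn(data_item):
    images, _labels = data_item  # assumed (inputs, labels) layout
    return images  # the returned value is fed to the model as-is


# Stand-in data item; in a real pipeline this would be a batch of tensors
# coming from a DataLoader.
batch = (["img_0", "img_1"], [0, 1])
model_input = transform_fn(batch)
print(model_input)  # -> ['img_0', 'img_1']
```

Such a function is typically wrapped into `nncf.Dataset(data_source, transform_fn)` and passed to `nncf.quantize`; that wrapping step is covered in the NNCF documentation and not shown here.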
@@ -89,7 +89,7 @@ for data_item in val_loader:
</details>
NNCF provides examples of Post-Training Quantization where you can find the implementation of the data transformation
- function: [PyTorch](/examples/post_training_quantization/torch/mobilenet_v2/README.md), [TensorFlow](/examples/post_training_quantization/tensorflow/mobilenet_v2/README.md), [ONNX](/examples/post_training_quantization/onnx/mobilenet_v2/README.md), and [OpenVINO](/examples/post_training_quantization/openvino/mobilenet_v2/README.md)
+ function: [PyTorch](/examples/post_training_quantization/torch/mobilenet_v2/README.md), [TorchFX](/examples/post_training_quantization/torch_fx/resnet18/README.md), [TensorFlow](/examples/post_training_quantization/tensorflow/mobilenet_v2/README.md), [ONNX](/examples/post_training_quantization/onnx/mobilenet_v2/README.md), and [OpenVINO](/examples/post_training_quantization/openvino/mobilenet_v2/README.md)
If the Post-Training Quantization algorithm cannot reach the quality requirements, you can fine-tune the quantized PyTorch model. An example of a Quantization-Aware Training pipeline for a PyTorch model can be found [here](/examples/quantization_aware_training/torch/resnet18/README.md).