
Commit e0bdd15

AlexanderDokuchaev authored and shumaari committed
Remove dead link to examples in optimum-intel (openvinotoolkit#3298)
### Changes

Remove paragraphs with link to `https://github.com/huggingface/optimum-intel/tree/main/examples/openvino`

### Reason for changes

huggingface/optimum-intel#1167
1 parent c6dd0c2 commit e0bdd15


3 files changed: +0 −10 lines


README.md (−2 lines)

```diff
@@ -59,8 +59,6 @@ learning frameworks.
 - GPU-accelerated layers for faster compressed model fine-tuning.
 - Distributed training support.
 - Git patch for prominent third-party repository ([huggingface-transformers](https://github.com/huggingface/transformers)) demonstrating the process of integrating NNCF into custom training pipelines.
-- Seamless combination of pruning, sparsity, and quantization algorithms. Please refer to [optimum-intel](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino) for examples of
-  joint (movement) pruning, quantization, and distillation (JPQD), end-to-end from NNCF optimization to compressed OpenVINO IR.
 - Exporting PyTorch compressed models to ONNX\* checkpoints and TensorFlow compressed models to SavedModel or Frozen Graph format, ready to use with [OpenVINO™ toolkit](https://docs.openvino.ai).
 - Support for [Accuracy-Aware model training](./docs/usage/training_time_compression/other_algorithms/Usage.md#accuracy-aware-model-training) pipelines via the [Adaptive Compression Level Training](./docs/accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md) and [Early Exit Training](./docs/accuracy_aware_model_training/EarlyExitTraining.md).
```

docs/PyPiPublishing.md (−4 lines)

```diff
@@ -67,10 +67,6 @@ For more information about NNCF, see:
 - Git patch for prominent third-party repository
   ([huggingface-transformers](https://github.com/huggingface/transformers))
   demonstrating the process of integrating NNCF into custom training pipelines.
-- Seamless combination of pruning, sparsity, and quantization algorithms. Refer
-  to [optimum-intel](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino)
-  for examples of joint (movement) pruning, quantization, and distillation
-  (JPQD), end-to-end from NNCF optimization to compressed OpenVINO IR.
 - Exporting PyTorch compressed models to ONNX\* checkpoints and TensorFlow
   compressed models to SavedModel or Frozen Graph format, ready to use with
   [OpenVINO™ toolkit](https://docs.openvino.ai).
```

nncf/experimental/torch/sparsity/movement/MovementSparsity.md (−4 lines)

```diff
@@ -43,10 +43,6 @@ This diagram is the sparsity level of BERT-base model over the optimization life
 
 Optimized models are compatible with OpenVINO toolchain. Use `compression_controller.export_model("movement_sparsified_model.onnx")` to export model in onnx format. Sparsified parameters in the onnx are in value of zero. Structured sparse structures can be discarded during ONNX translation to OpenVINO IR using [Model Conversion](https://docs.openvino.ai/2025/openvino-workflow/model-preparation/convert-model-to-ir.html) with utilizing [pruning transformation](https://docs.openvino.ai/2025/openvino-workflow/model-optimization-guide/compressing-models-during-training/filter-pruning.html). Corresponding IR is compressed and deployable with [OpenVINO Runtime](https://docs.openvino.ai/2025/openvino-workflow/running-inference.html). To quantify inference performance improvement, both ONNX and IR can be profiled using [Benchmark Tool](https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html).
 
-## Getting Started
-
-Please refer [optimum-intel](https://github.com/huggingface/optimum-intel/tree/main/examples/openvino) for example pipelines on image classification, question answering, etc. The repository also provides examples of joint pruning, quantization and distillation, end-to-end from NNCF optimization to compressed OpenVINO IR.
-
 ## Known Limitation
 
 1. Movement sparsification only supports `torch.nn.Linear` layers.
```
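The MovementSparsity.md context retained above notes that sparsified parameters are exported as explicit zeros in the ONNX, and that *structured* zero blocks can be physically discarded during conversion to OpenVINO IR. A minimal stdlib-only sketch of that distinction follows; the helper names (`sparsity_level`, `prune_zero_rows`) are hypothetical illustrations, not NNCF or OpenVINO API:

```python
# Sketch: why structured sparsity can shrink the exported model, while
# unstructured sparsity only zeroes values in place. A plain list-of-lists
# stands in for a torch.nn.Linear weight matrix (hypothetical example data).

def sparsity_level(weight):
    """Fraction of zero-valued parameters (hypothetical helper)."""
    flat = [v for row in weight for v in row]
    return sum(1 for v in flat if v == 0.0) / len(flat)

def prune_zero_rows(weight):
    """Drop output rows that are entirely zero -- loosely analogous to the
    pruning transformation applied during ONNX-to-IR conversion."""
    return [row for row in weight if any(v != 0.0 for v in row)]

weight = [
    [0.5, -0.2, 0.0, 0.1],
    [0.0, 0.0, 0.0, 0.0],   # structurally sparse: the whole output row is zero
    [0.3, 0.0, 0.0, 0.7],
]

print(sparsity_level(weight))        # 7 of the 12 parameters are zero
pruned = prune_zero_rows(weight)
print(len(pruned))                   # the all-zero row can be discarded
```

Unstructured zeros (the scattered `0.0` entries in the surviving rows) still occupy storage and compute; only the fully zero row can be removed without changing the layer's remaining outputs, which is why the IR-level pruning transformation targets structured sparsity.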
