### Changes
- Add action to check broken links.
- Fix broken links.
### Reason for changes
There were broken links and no test to check for them.
### Related tickets
131357
---------
Co-authored-by: Lyalyushkin Nikolay <nikolay.lyalyushkin@intel.com>
### docs/Usage.md (+4 −4)
@@ -303,7 +303,7 @@ In the example above, the NNCF-compressed models that contain instances of `MyMo
### Accuracy-Aware model training

- NNCF has the capability to apply the model compression algorithms while satisfying the user-defined accuracy constraints. This is done by executing an internal custom accuracy-aware training loop, which also helps to automate away some of the manual hyperparameter search related to model training such as setting the total number of epochs, the target compression rate for the model, etc. There are two supported training loops. The first one is called [Early Exit Training](./accuracy_aware_model_training/EarlyExitTraining.md), which aims to finish fine-tuning when the accuracy drop criterion is reached. The second one is more sophisticated. It is targeted for the automated discovery of the compression rate for the model given that it satisfies the user-specified maximal tolerable accuracy drop due to compression. Its name is [Adaptive Compression Level Training](./accuracy_aware_model_training/AdaptiveCompressionTraining.md). Both training loops could be run with either PyTorch or TensorFlow backend with the same user interface (except for the TF case where the Keras API is used for training).
+ NNCF has the capability to apply the model compression algorithms while satisfying the user-defined accuracy constraints. This is done by executing an internal custom accuracy-aware training loop, which also helps to automate away some of the manual hyperparameter search related to model training such as setting the total number of epochs, the target compression rate for the model, etc. There are two supported training loops. The first one is called [Early Exit Training](./accuracy_aware_model_training/EarlyExitTraining.md), which aims to finish fine-tuning when the accuracy drop criterion is reached. The second one is more sophisticated. It is targeted for the automated discovery of the compression rate for the model given that it satisfies the user-specified maximal tolerable accuracy drop due to compression. Its name is [Adaptive Compression Level Training](./accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md). Both training loops could be run with either PyTorch or TensorFlow backend with the same user interface (except for the TF case where the Keras API is used for training).
The following function is required to create the accuracy-aware training loop. One has to pass the `NNCFConfig` object and the compression controller (that is returned upon compressed model creation, see above).

In order to properly instantiate the accuracy-aware training loop, the user has to specify the 'accuracy_aware_training' section.
This section fully depends on what Accuracy-Aware Training loop is being used.
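For illustration, a minimal sketch of what such a section can look like for the Adaptive Compression Level Training mode, written as a config dict (the parameter names here are assumptions based on the NNCF accuracy-aware training documentation and should be checked against the pages linked below):

```python
# Illustrative sketch only; the `accuracy_aware_training` parameter names
# below are assumptions taken from the NNCF accuracy-aware training docs.
from nncf import NNCFConfig

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
    "accuracy_aware_training": {
        # For the Early Exit loop the mode would be "early_exit" instead.
        "mode": "adaptive_compression_level",
        "params": {
            # Maximal accuracy drop (in percent) tolerated after compression.
            "maximal_relative_accuracy_degradation": 1.0,
            "initial_training_phase_epochs": 5,
            "patience_epochs": 3,
        },
    },
})
```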
- For more details about config of Adaptive Compression Level Training refer to [Adaptive Compression Level Training documentation](./accuracy_aware_model_training/AdaptiveCompressionTraining.md) and Early Exit Training refer to [Early Exit Training documentation](./accuracy_aware_model_training/EarlyExitTraining.md).
+ For more details about config of Adaptive Compression Level Training refer to [Adaptive Compression Level Training documentation](./accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md) and Early Exit Training refer to [Early Exit Training documentation](./accuracy_aware_model_training/EarlyExitTraining.md).
The training loop is launched by calling its `run` method. Before the start of the training loop, the user is expected to define several functions related to the training of the model and pass them as arguments to the `run` method of the training loop instance:
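As an indicative sketch of that wiring (the callback names follow the NNCF accuracy-aware training documentation; the exact signatures are indicative, and the helpers `train_one_epoch`, `evaluate`, `build_optimizer`, and `save_checkpoint` are hypothetical placeholders for the user's own training code):

```python
# Indicative sketch of the user-defined callbacks passed to `run`.
# The helper functions called inside are hypothetical placeholders.
from nncf.common.accuracy_aware_training import create_accuracy_aware_training_loop

def train_epoch_fn(compression_ctrl, model, epoch, optimizer, lr_scheduler):
    train_one_epoch(model, optimizer, lr_scheduler)  # hypothetical helper

def validate_fn(model, epoch):
    return evaluate(model)  # hypothetical helper returning the accuracy metric

def configure_optimizers_fn():
    return build_optimizer()  # hypothetical helper returning (optimizer, lr_scheduler)

def dump_checkpoint_fn(model, compression_ctrl, accuracy_aware_runner, save_dir):
    save_checkpoint(model, save_dir)  # hypothetical helper

training_loop = create_accuracy_aware_training_loop(nncf_config, compression_ctrl)
model = training_loop.run(
    model,
    train_epoch_fn=train_epoch_fn,
    validate_fn=validate_fn,
    configure_optimizers_fn=configure_optimizers_fn,
    dump_checkpoint_fn=dump_checkpoint_fn)
```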
@@ -378,6 +378,6 @@ model = training_loop.run(
    dump_checkpoint_fn=dump_checkpoint_fn)

- The above call executes the accuracy-aware training loop and returns the compressed model. For more details on how to use the accuracy-aware training loop functionality of NNCF, please refer to its [documentation](./accuracy_aware_model_training/AdaptiveCompressionTraining.md).
+ The above call executes the accuracy-aware training loop and returns the compressed model. For more details on how to use the accuracy-aware training loop functionality of NNCF, please refer to its [documentation](./accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md).
- See a PyTorch [example](../../examples/torch/classification/main.py) for **Quantization** + **Filter Pruning** Adaptive Compression scenario on CIFAR10 and ResNet18 [config](../../examples/torch/classification/configs/pruning/resnet18_cifar10_accuracy_aware.json).
+ See a PyTorch [example](/examples/torch/classification/main.py) for **Quantization** + **Filter Pruning** Adaptive Compression scenario on CIFAR10 and ResNet18 [config](/examples/torch/classification/configs/pruning/resnet18_cifar10_accuracy_aware.json).
### examples/tensorflow/object_detection/README.md (+1 −1)
@@ -6,7 +6,7 @@ The sample receives a configuration file where the training schedule, hyper-para
## Features

- - RetinaNet from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/vision/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D). YOLOv4 from the [keras-YOLOv3-model-set](https://github.com/david8862/keras-YOLOv3-model-set) repository.
+ - RetinaNet from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/legacy/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D). YOLOv4 from the [keras-YOLOv3-model-set](https://github.com/david8862/keras-YOLOv3-model-set) repository.

- Support [TensorFlow Datasets (TFDS)](https://www.tensorflow.org/datasets) and TFRecords for COCO2017 dataset.
- Configuration file examples for sparsity, quantization, filter pruning and quantization with sparsity.
- Export to Frozen Graph or TensorFlow SavedModel that is supported by the OpenVINO™ toolkit.
### examples/tensorflow/segmentation/README.md (+1 −1)
@@ -6,7 +6,7 @@ The sample receives a configuration file where the training schedule, hyper-para
## Features

- - Mask R-CNN from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/vision/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D).
+ - Mask R-CNN from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/legacy/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D).

- Support TFRecords for COCO2017 dataset.
- Configuration file examples for sparsity, quantization, and quantization with sparsity.
- Export to Frozen Graph or TensorFlow SavedModel that is supported by the OpenVINO™ toolkit.
### nncf/experimental/torch/sparsity/movement/MovementSparsity.md (+2 −2)
@@ -78,5 +78,5 @@ Following arguments have been defaulted to work well out of the box. However, yo
## References

- 1. Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. [Movement Pruning: Adaptive Sparsity by Fine-Tuning]((https://arxiv.org/pdf/2005.07683.pdf)). In Advances in Neural Information Processing Systems, 33, pp. 20378-20389.
- 2. François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M. Rush. 2021. [Block Pruning For Faster Transformers]((https://arxiv.org/pdf/2109.04838.pdf)). In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10619–10629.
+ 1. Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. [Movement Pruning: Adaptive Sparsity by Fine-Tuning](https://arxiv.org/pdf/2005.07683.pdf). In Advances in Neural Information Processing Systems, 33, pp. 20378-20389.
+ 2. François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M. Rush. 2021. [Block Pruning For Faster Transformers](https://arxiv.org/pdf/2109.04838.pdf). In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10619–10629.
### tests/onnx/README.md (+2 −2)
@@ -22,8 +22,8 @@ We provide two types of tests.
You should give three arguments to run this test.

- 1. `--model-dir`: The directory path which includes ONNX Model ZOO models (.onnx files). See [#prepare-models](benchmarking/README.md#prepare-models) for details.
- 2. `--data-dir`: The directory path which includes datasets (ImageNet2012, COCO, Cityscapes, and VOC) [#prepare-models](benchmarking/README.md#prepare-models).
+ 1. `--model-dir`: The directory path which includes ONNX Model ZOO models (.onnx files). See [#prepare-models](benchmarking/README.md#benchmark-for-onnx-models-vision) for details.
+ 2. `--data-dir`: The directory path which includes datasets (ImageNet2012, COCO, Cityscapes, and VOC) [#prepare-dataset](benchmarking/README.md#1-prepare-dataset).

3. `--output-dir`: The directory path where the test results will be saved.
4. (Optional) `--model-names`: String containing model names to test. Model name is the prefix of the name of AccuracyChecker config before the '.' symbol. Please provide the model names using '' as a separator.
5. (Optional) `--ckpt-dir`: Directory path to save quantized models.
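For reference, a hypothetical invocation; the pytest entry point below is an assumption (only the flags themselves are documented above), so the test path may need adjusting to the actual e2e suite location:

```bash
# Hypothetical command; adjust the test path to the actual e2e suite.
pytest tests/onnx -s \
    --model-dir=/data/onnx_model_zoo \
    --data-dir=/data/datasets \
    --output-dir=/tmp/nncf_onnx_results
```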