Commit 481ce9d

Fix broken links in md files (#2543)
### Changes

- Add an action to check for broken links.
- Fix broken links.

### Reason for changes

There were broken links in the markdown files and no test to catch them.

### Related tickets

131357

Co-authored-by: Lyalyushkin Nikolay <nikolay.lyalyushkin@intel.com>
1 parent 760d1d5 commit 481ce9d

9 files changed: +43 -38 lines

.github/workflows/pre-commit-linters.yml (+5)

@@ -19,3 +19,8 @@ jobs:
         run: make install-pre-commit
       - name: Run pre-commit linter suite
         run: make pre-commit
+  md-dead-link-check:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v4
+      - uses: AlexanderDokuchaev/md-dead-link-check@0.4

CONTRIBUTING.md (+1 -1)

@@ -24,7 +24,7 @@ Please run the pre-commit testing scope locally before submitting your PR and en
 New feature pull requests should include all the necessary testing code.
 Testing is done using the `pytest` framework.
 The test files should be located inside the [tests](./tests) directory and start with `test_` so that the `pytest` is able to discover them.
-Any additional data that is required for tests (configuration files, mock datasets, etc.) must be stored within the [tests/data](./tests/data) folder.
+Any additional data that is required for tests (configuration files, mock datasets, etc.) must be stored within the `tests/<framework>/data` folder.
 The test files themselves may be grouped in arbitrary directories according to their testing purpose and common sense.
 
 Any additional tests in the [tests](./tests) directory will be automatically added into the pre-commit CI scope.
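
To make the updated convention concrete, a minimal sketch of a test module is shown below. The file name, the data file name, and the assertion are hypothetical (they do not exist in the repository); they only illustrate the `test_` prefix and the `tests/<framework>/data` location for test data.

```python
# tests/torch/test_example.py -- hypothetical module name; pytest discovers it
# because the file name starts with "test_".
from pathlib import Path

# Per the convention above, extra data for Torch-backend tests lives in tests/torch/data.
DATA_DIR = Path(__file__).parent / "data"


def test_mock_config_path_follows_convention():
    # "mock_config.json" is an illustrative name, not a real fixture in the repository.
    config_path = DATA_DIR / "mock_config.json"
    assert config_path.parent.name == "data"
```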

README.md (+1 -1)

@@ -41,7 +41,7 @@ learning frameworks.
 |Compression algorithm|PyTorch|TensorFlow|
 | :--- | :---: | :---: |
 |[Quantization Aware Training](./docs/compression_algorithms/Quantization.md) | Supported | Supported |
-|[Mixed-Precision Quantization](./docs/compression_algorithms/Quantization.md#mixed_precision_quantization) | Supported | Not supported |
+|[Mixed-Precision Quantization](./docs/compression_algorithms/Quantization.md#mixed-precision-quantization) | Supported | Not supported |
 |[Binarization](./docs/compression_algorithms/Binarization.md) | Supported | Not supported |
 |[Sparsity](./docs/compression_algorithms/Sparsity.md) | Supported | Supported |
 |[Filter pruning](./docs/compression_algorithms/Pruning.md) | Supported | Supported |

docs/Usage.md (+4 -4)

@@ -303,7 +303,7 @@ In the example above, the NNCF-compressed models that contain instances of `MyMo
 
 ### Accuracy-Aware model training
 
-NNCF has the capability to apply the model compression algorithms while satisfying the user-defined accuracy constraints. This is done by executing an internal custom accuracy-aware training loop, which also helps to automate away some of the manual hyperparameter search related to model training such as setting the total number of epochs, the target compression rate for the model, etc. There are two supported training loops. The first one is called [Early Exit Training](./accuracy_aware_model_training/EarlyExitTraining.md), which aims to finish fine-tuning when the accuracy drop criterion is reached. The second one is more sophisticated. It is targeted for the automated discovery of the compression rate for the model given that it satisfies the user-specified maximal tolerable accuracy drop due to compression. Its name is [Adaptive Compression Level Training](./accuracy_aware_model_training/AdaptiveCompressionTraining.md). Both training loops could be run with either PyTorch or TensorFlow backend with the same user interface(except for the TF case where the Keras API is used for training).
+NNCF has the capability to apply the model compression algorithms while satisfying the user-defined accuracy constraints. This is done by executing an internal custom accuracy-aware training loop, which also helps to automate away some of the manual hyperparameter search related to model training such as setting the total number of epochs, the target compression rate for the model, etc. There are two supported training loops. The first one is called [Early Exit Training](./accuracy_aware_model_training/EarlyExitTraining.md), which aims to finish fine-tuning when the accuracy drop criterion is reached. The second one is more sophisticated. It is targeted for the automated discovery of the compression rate for the model given that it satisfies the user-specified maximal tolerable accuracy drop due to compression. Its name is [Adaptive Compression Level Training](./accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md). Both training loops could be run with either PyTorch or TensorFlow backend with the same user interface(except for the TF case where the Keras API is used for training).
 
 The following function is required to create the accuracy-aware training loop. One has to pass the `NNCFConfig` object and the compression controller (that is returned upon compressed model creation, see above).
 
@@ -314,7 +314,7 @@ training_loop = create_accuracy_aware_training_loop(nncf_config, compression_ctr
 
 In order to properly instantiate the accuracy-aware training loop, the user has to specify the 'accuracy_aware_training' section.
 This section fully depends on what Accuracy-Aware Training loop is being used.
-For more details about config of Adaptive Compression Level Training refer to [Adaptive Compression Level Training documentation](./accuracy_aware_model_training/AdaptiveCompressionTraining.md) and Early Exit Training refer to [Early Exit Training documentation](./accuracy_aware_model_training/EarlyExitTraining.md).
+For more details about config of Adaptive Compression Level Training refer to [Adaptive Compression Level Training documentation](./accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md) and Early Exit Training refer to [Early Exit Training documentation](./accuracy_aware_model_training/EarlyExitTraining.md).
 
 The training loop is launched by calling its `run` method. Before the start of the training loop, the user is expected to define several functions related to the training of the model and pass them as arguments to the `run` method of the training loop instance:
 
@@ -378,6 +378,6 @@ model = training_loop.run(
     dump_checkpoint_fn=dump_checkpoint_fn)
 ```
 
-The above call executes the accuracy-aware training loop and return the compressed model. For more details on how to use the accuracy-aware training loop functionality of NNCF, please refer to its [documentation](./accuracy_aware_model_training/AdaptiveCompressionTraining.md).
+The above call executes the accuracy-aware training loop and return the compressed model. For more details on how to use the accuracy-aware training loop functionality of NNCF, please refer to its [documentation](./accuracy_aware_model_training/AdaptiveCompressionLevelTraining.md).
 
-See a PyTorch [example](../../examples/torch/classification/main.py) for **Quantization** + **Filter Pruning** Adaptive Compression scenario on CIFAR10 and ResNet18 [config](../../examples/torch/classification/configs/pruning/resnet18_cifar10_accuracy_aware.json).
+See a PyTorch [example](/examples/torch/classification/main.py) for **Quantization** + **Filter Pruning** Adaptive Compression scenario on CIFAR10 and ResNet18 [config](/examples/torch/classification/configs/pruning/resnet18_cifar10_accuracy_aware.json).
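
For context, the quoted paragraphs describe the accuracy-aware training API; a minimal sketch of how the pieces fit together follows. Only the `create_accuracy_aware_training_loop(nncf_config, compression_ctrl)` call and the `run(...)` keyword arguments mirror the quoted documentation. The callback bodies and their exact signatures are assumptions made for illustration, and `model`, `nncf_config`, `compression_ctrl`, and the loop factory itself are assumed to come from the compressed-model creation steps described earlier in Usage.md.

```python
# Hedged sketch only: callback bodies/signatures are placeholders, and the objects
# `model`, `nncf_config`, `compression_ctrl`, and `create_accuracy_aware_training_loop`
# are assumed to be available from the earlier steps of Usage.md.
import torch


def train_epoch_fn(compression_ctrl, model, epoch, optimizer, lr_scheduler):
    """Placeholder: run one epoch of ordinary fine-tuning on the compressed model."""


def validate_fn(model, epoch):
    """Placeholder: return the accuracy metric that the training loop should track."""
    return 0.0


def configure_optimizers_fn():
    """Placeholder: build and return (optimizer, lr_scheduler) for fine-tuning."""
    return torch.optim.SGD(model.parameters(), lr=1e-3), None


def dump_checkpoint_fn(model, compression_ctrl, accuracy_aware_runner, save_dir):
    """Placeholder: save an intermediate checkpoint if needed."""


training_loop = create_accuracy_aware_training_loop(nncf_config, compression_ctrl)
model = training_loop.run(
    model,
    train_epoch_fn=train_epoch_fn,
    validate_fn=validate_fn,
    configure_optimizers_fn=configure_optimizers_fn,
    dump_checkpoint_fn=dump_checkpoint_fn,
)
```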

examples/tensorflow/object_detection/README.md (+1 -1)

@@ -6,7 +6,7 @@ The sample receives a configuration file where the training schedule, hyper-para
 
 ## Features
 
-- RetinaNet from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/vision/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D). YOLOv4 from the [keras-YOLOv3-model-set](https://github.com/david8862/keras-YOLOv3-model-set) repository.
+- RetinaNet from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/legacy/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D). YOLOv4 from the [keras-YOLOv3-model-set](https://github.com/david8862/keras-YOLOv3-model-set) repository.
 - Support [TensorFlow Datasets (TFDS)](https://www.tensorflow.org/datasets) and TFRecords for COCO2017 dataset.
 - Configuration file examples for sparsity, quantization, filter pruning and quantization with sparsity.
 - Export to Frozen Graph or TensorFlow SavedModel that is supported by the OpenVINO™ toolkit.

examples/tensorflow/segmentation/README.md (+1 -1)

@@ -6,7 +6,7 @@ The sample receives a configuration file where the training schedule, hyper-para
 
 ## Features
 
-- Mask R-CNN from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/vision/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D).
+- Mask R-CNN from the official [TF repository](https://github.com/tensorflow/models/tree/master/official/legacy/detection) with minor modifications (custom implementation of upsampling is replaced with equivalent tf.keras.layers.UpSampling2D).
 - Support TFRecords for COCO2017 dataset.
 - Configuration file examples for sparsity, quantization, and quantization with sparsity.
 - Export to Frozen Graph or TensorFlow SavedModel that is supported by the OpenVINO™ toolkit.

nncf/experimental/torch/sparsity/movement/MovementSparsity.md (+2 -2)

@@ -78,5 +78,5 @@ Following arguments have been defaulted to work well out of the box. However, yo
 
 ## References
 
-1. Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. [Movement Pruning: Adaptive Sparsity by Fine-Tuning]((https://arxiv.org/pdf/2005.07683.pdf)). In Advances in Neural Information Processing Systems, 33, pp. 20378-20389.
-2. François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M. Rush. 2021. [Block Pruning For Faster Transformers]((https://arxiv.org/pdf/2109.04838.pdf)). In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10619–10629.
+1. Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. [Movement Pruning: Adaptive Sparsity by Fine-Tuning](https://arxiv.org/pdf/2005.07683.pdf). In Advances in Neural Information Processing Systems, 33, pp. 20378-20389.
+2. François Lagunas, Ella Charlaix, Victor Sanh, and Alexander M. Rush. 2021. [Block Pruning For Faster Transformers](https://arxiv.org/pdf/2109.04838.pdf). In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10619–10629.

tests/onnx/README.md (+2 -2)

@@ -22,8 +22,8 @@ We provide two types of tests.
 
 You should give three arguments to run this test.
 
-1. `--model-dir`: The directory path which includes ONNX Model ZOO models (.onnx files). See [#prepare-models](benchmarking/README.md#prepare-models) for details.
-2. `--data-dir`: The directory path which includes datasets (ImageNet2012, COCO, Cityscapes, and VOC) [#prepare-models](benchmarking/README.md#prepare-models).
+1. `--model-dir`: The directory path which includes ONNX Model ZOO models (.onnx files). See [#prepare-models](benchmarking/README.md#benchmark-for-onnx-models-vision) for details.
+2. `--data-dir`: The directory path which includes datasets (ImageNet2012, COCO, Cityscapes, and VOC) [#prepare-dataset](benchmarking/README.md#1-prepare-dataset).
 3. `--output-dir`: The directory path where the test results will be saved.
 4. (Optional) `--model-names`: String containing model names to test. Model name is the prefix of the name of AccuracyChecker config before the '.' symbol. Please, provide the model names using ' ' as a separator.
 5. (Optional) `--ckpt-dir`: Directory path to save quantized models.

tests/onnx/benchmarking/README.md (+26 -26)

@@ -22,35 +22,35 @@ The benchmarking supports the following models:
 
 - Classification
 
-1. [bvlcalexnet-12](https://github.com/onnx/models/blob/main/vision/classification/alexnet/model/bvlcalexnet-12.onnx)
-2. [caffenet-12](https://github.com/onnx/models/blob/main/vision/classification/caffenet/model/caffenet-12.onnx)
-3. [densenet-12](https://github.com/onnx/models/blob/main/vision/classification/densenet-121/model/densenet-12.onnx)
-4. [efficientnet-lite4-11](https://github.com/onnx/models/blob/main/vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx)
-5. [googlenet-12](https://github.com/onnx/models/blob/main/vision/classification/inception_and_googlenet/googlenet/model/googlenet-12.onnx)
-6. [inception-v1-12](https://github.com/onnx/models/blob/main/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-12.onnx)
-7. [mobilenetv2-12](https://github.com/onnx/models/blob/main/vision/classification/mobilenet/model/mobilenetv2-12.onnx)
-8. [resnet50-v1-12](https://github.com/onnx/models/blob/main/vision/classification/resnet/model/resnet50-v1-12.onnx)
-9. [resnet50-v2-7](https://github.com/onnx/models/blob/main/vision/classification/resnet/model/resnet50-v2-7.onnx)
-10. [shufflenet-9](https://github.com/onnx/models/blob/main/vision/classification/shufflenet/model/shufflenet-9.onnx)
-11. [shufflenet-v2-12](https://github.com/onnx/models/blob/main/vision/classification/shufflenet/model/shufflenet-v2-12.onnx)
-12. [squeezenet1.0-12](https://github.com/onnx/models/blob/main/vision/classification/squeezenet/model/squeezenet1.0-12.onnx)
-13. [vgg16-12](https://github.com/onnx/models/blob/main/vision/classification/vgg/model/vgg16-12.onnx)
-14. [zfnet512-12](https://github.com/onnx/models/blob/main/vision/classification/zfnet-512/model/zfnet512-12.onnx)
+1. [bvlcalexnet-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/alexnet/model/bvlcalexnet-12.onnx)
+2. [caffenet-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/caffenet/model/caffenet-12.onnx)
+3. [densenet-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/densenet-121/model/densenet-12.onnx)
+4. [efficientnet-lite4-11](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx)
+5. [googlenet-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/inception_and_googlenet/googlenet/model/googlenet-12.onnx)
+6. [inception-v1-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-12.onnx)
+7. [mobilenetv2-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/mobilenet/model/mobilenetv2-12.onnx)
+8. [resnet50-v1-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/resnet/model/resnet50-v1-12.onnx)
+9. [resnet50-v2-7](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/resnet/model/resnet50-v2-7.onnx)
+10. [shufflenet-9](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/shufflenet/model/shufflenet-9.onnx)
+11. [shufflenet-v2-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/shufflenet/model/shufflenet-v2-12.onnx)
+12. [squeezenet1.0-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/squeezenet/model/squeezenet1.0-12.onnx)
+13. [vgg16-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/vgg/model/vgg16-12.onnx)
+14. [zfnet512-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/classification/zfnet-512/model/zfnet512-12.onnx)
 
 - Object detection and segmentation models
 
-1. [FasterRCNN-12](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-12.onnx)
-2. [MaskRCNN-12](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/mask-rcnn/model/MaskRCNN-12.onnx)
-3. [ResNet101-DUC-7](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx)
-4. [fcn-resnet50-12](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/fcn/model/fcn-resnet50-12.onnx)
-5. [retinanet-9](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/retinanet/model/retinanet-9.onnx)
-6. [ssd-12](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/ssd/model/ssd-12.onnx)
-7. [ssd_mobilenet_v1_12](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_12.onnx)
-8. [tiny-yolov3-11](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/tiny-yolov3/model/tiny-yolov3-11.onnx)
-9. [tinyyolov2-8](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-8.onnx)
-10. [yolov2-coco-9](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx)
-11. [yolov3-12](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/yolov3/model/yolov3-12.onnx)
-12. [yolov4](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/yolov4/model/yolov4.onnx)
+1. [FasterRCNN-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-12.onnx)
+2. [MaskRCNN-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/mask-rcnn/model/MaskRCNN-12.onnx)
+3. [ResNet101-DUC-7](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx)
+4. [fcn-resnet50-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/fcn/model/fcn-resnet50-12.onnx)
+5. [retinanet-9](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/retinanet/model/retinanet-9.onnx)
+6. [ssd-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/ssd/model/ssd-12.onnx)
+7. [ssd_mobilenet_v1_12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_12.onnx)
+8. [tiny-yolov3-11](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/tiny-yolov3/model/tiny-yolov3-11.onnx)
+9. [tinyyolov2-8](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-8.onnx)
+10. [yolov2-coco-9](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx)
+11. [yolov3-12](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/yolov3/model/yolov3-12.onnx)
+12. [yolov4](https://github.com/onnx/models/blob/5faef4c33eba0395177850e1e31c4a6a9e634c82/vision/object_detection_segmentation/yolov4/model/yolov4.onnx)
 
 You can find the Accuracy Checker configs that are used for particular models
 in [classification](./classification/onnx_models_configs)
