Commit aac4d77
Add markdownlint to pre-commit (openvinotoolkit#1996)
### Changes

Add [markdownlint](https://github.com/DavidAnson/markdownlint) check to pre-commit

Fixed:

- Incorrect links
- Ordered lists
- Blank lines before and after headers and code blocks
- Types for code blocks
- Trailing spaces
- Spellcheck
- Removed module-timm_custom_modules from api doc
1 parent 76727f2 commit aac4d77

51 files changed, +1109 −821 lines


.markdownlint.yaml

+9
@@ -0,0 +1,9 @@
+# Default state for all rules
+default: true
+
+MD013: false # Line length
+MD033: false # Inline HTML
+MD034: false # Bare URL used
+MD036: false # Emphasis used instead of a heading
+MD037: false # Spaces inside emphasis markers
+MD041: false # First line

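The config keeps markdownlint's full default rule set enabled and only switches off the six rules listed above. As a rough sketch of how the same rule set can be exercised locally outside of pre-commit — assuming markdownlint-cli (the npm wrapper used by the hook added below) is available; the version pin simply mirrors the hook revision and is only illustrative:

```bash
# Install the CLI wrapper around the markdownlint library (assumed to be done via npm).
npm install --global markdownlint-cli@0.33.0

# Lint every Markdown file in the repository against the repository's rule set;
# a non-zero exit code means at least one enabled rule was violated.
markdownlint --config .markdownlint.yaml "**/*.md"
```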
.pre-commit-config.yaml

+6
@@ -14,3 +14,9 @@ repos:
     hooks:
       - id: isort
         name: isort (python)
+
+  - repo: https://github.com/igorshubovych/markdownlint-cli
+    rev: v0.33.0
+    hooks:
+      - id: markdownlint
+        args: [--config=.markdownlint.yaml]

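With this hook registered, markdownlint runs against the Markdown files staged in each commit. A minimal sketch of driving the hook by hand using pre-commit's standard CLI (nothing here is specific to this repository beyond the `markdownlint` hook id defined above):

```bash
# Install pre-commit itself and register the Git hook so markdownlint
# (alongside the existing isort hook) runs on every `git commit`.
pip install pre-commit
pre-commit install

# Run only the markdownlint hook over the whole repository once,
# e.g. to reproduce the kind of fixes applied by this commit.
pre-commit run markdownlint --all-files
```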
CONTRIBUTING.md

+11-29
@@ -1,24 +1,18 @@
 # Contributing to NNCF
 
 Contributions are accepted in the form of:
+
 * Submitting issues against the current code to report bugs or request features
 * Extending NNCF functionality with important features (e.g. to address community requests, improve usability, implement a recently published compression algorithm, etc.)
 * Adding example scripts to showcase NNCF usage in real training pipelines and provide the means to reproduce the reported compression results
 * Providing recipes (specific NNCF configurations and training hyperparameters) to obtain state-of-the-art compression using NNCF for existing models
 * Adding well-defined patches that integrate NNCF into third-party repositories
-* Reducing performance overhead of NNCF compression by writing specialized CUDA kernels for compression operations or improving existing ones. 
+* Reducing performance overhead of NNCF compression by writing specialized CUDA kernels for compression operations or improving existing ones.
 
 The latter forms are accepted as pull requests from your own forks of the NNCF repository.
 
 Any contributions must not violate the repository's [LICENSE](./LICENSE) requirements.
 
-## Installation
-### (Experimental) ONNXRuntime-OpenVINO
-Install the package and its dependencies by running the following in the repository root directory:
-```bash
-make install-onnx-dev
-```
-
 ## Testing
 
 After your pull request is submitted, the maintainer will launch a scope of CI tests against it.
@@ -28,42 +22,30 @@ The pre-commit scope may be run locally by executing the `pytest` command (witho
 Please run the pre-commit testing scope locally before submitting your PR and ensure that it passes to conserve your own time and that of the reviewing maintainer.
 
 New feature pull requests should include all the necessary testing code.
-Testing is done using the `pytest` framework. 
+Testing is done using the `pytest` framework.
 The test files should be located inside the [tests](./tests) directory and start with `test_` so that the `pytest` is able to discover them.
 Any additional data that is required for tests (configuration files, mock datasets, etc.) must be stored within the [tests/data](./tests/data) folder.
 The test files themselves may be grouped in arbitrary directories according to their testing purpose and common sense.
 
-Any additional tests in the [tests](./tests) directory will be automatically added into the pre-commit CI scope. 
+Any additional tests in the [tests](./tests) directory will be automatically added into the pre-commit CI scope.
 If your testing code is more extensive than unit tests (in terms of test execution time), or would be more suited to be executed on a nightly/weekly basis instead of for each future commit, please inform the maintainers in your PR discussion thread so that our internal testing pipelines could be adjusted accordingly.
 
-### Preset command for testing
-You can launch appropriate tests against the framework by running the following command:
-
-- (Experimental) ONNXRuntime-OpenVINO
-```bash
-test-onnx
-```
-
 ## Code style
+
 Changes to NNCF Python code should conform to [Python Style Guide](./docs/styleguide/PyGuide.md)
 
-Pylint is used throughout the project to ensure code cleanliness and quality. 
+Pylint is used throughout the project to ensure code cleanliness and quality.
 A Pylint run is also done as part of the pre-commit scope - the pre-commit `pytest` scope will not be run if your code fails the Pylint checks.
 The Pylint rules and exceptions for this repository are described in the standard [.pylintrc](./.pylintrc) format - make sure your local linter uses these.
 
-### Preset command for linting
-You can launch appropriate linting against the framework by running the following command:
-
-- (Experimental) ONNXRuntime-OpenVINO
-```bash
-pylint-onnx
-```
-
 ## Binary files
-Please refrain from adding huge binary files into the repository. If binary files have to be added, mark these to use Git LFS via the [.gitattributes](./.gitattributes) file.
+
+Please refrain from adding huge binary files into the repository. If binary files have to be added, mark these to use Git LFS via the [.gitattributes](./.gitattributes) file.
 
 ## Model identifiers
+
 When adding model configs and checkpoints to be showcased in NNCF's sample script, follow the format for naming these files:
+
 1. The base name must be the same for the NNCF config file, AC config file, checkpoint file (PT/ONNX/OV) or checkpoint folder (TF), and other associated artifacts.
 2. This name should be composed with the following format: `{model_name}_{dataset_name}` for FP32 models, `{topology_name}_{dataset_name}_{compression_algorithms_applied}`. The format may be extended if there are multiple models with the same topology, dataset and compression algos applied, which only differ in something else such as exact value of achieved sparsity. Align the naming of the new checkpoints with the existing ones.
-3. Additional human-readable information on the model such as expected metrics and compression algorithm specifics (e.g. level of pruning/sparsity, per-tensor/per-channel quantizer configuration etc.) should be stored in a registry file (`tests/torch/sota_checkpoints_eval.json` for PT, `tests/tensorflow/sota_checkpoints_eval.json` for TF)
+3. Additional human-readable information on the model such as expected metrics and compression algorithm specifics (e.g. level of pruning/sparsity, per-tensor/per-channel quantizer configuration etc.) should be stored in a registry file (`tests/torch/sota_checkpoints_eval.json` for PT, `tests/tensorflow/sota_checkpoints_eval.json` for TF)

README.md

+31-14
@@ -3,11 +3,11 @@
 # Neural Network Compression Framework (NNCF)
 
 [Key Features](#key-features)
-[Installation](#Installation-guide)
+[Installation](#installation-guide)
 [Documentation](#documentation)
 [Usage](#usage)
-[Tutorials and Samples](#Model-compression-tutorials-and-samples)
-[Third-party integration](#Third-party-repository-integration)
+[Tutorials and Samples](#model-compression-tutorials-and-samples)
+[Third-party integration](#third-party-repository-integration)
 [Model Zoo](./docs/ModelZoo.md)
 
 [![GitHub Release](https://img.shields.io/github/v/release/openvinotoolkit/nncf?color=green)](https://github.com/openvinotoolkit/nncf/releases)
@@ -21,13 +21,14 @@ Neural Network Compression Framework (NNCF) provides a suite of post-training an
 
 NNCF is designed to work with models from [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), [ONNX](https://onnx.ai/) and [OpenVINO™](https://docs.openvino.ai/latest/home.html).
 
-NNCF provides [samples](#Model-compression-tutorials-and-samples) that demonstrate the usage of compression algorithms for different use cases and models. See compression results achievable with the NNCF-powered samples at [Model Zoo page](./docs/ModelZoo.md).
+NNCF provides [samples](#model-compression-tutorials-and-samples) that demonstrate the usage of compression algorithms for different use cases and models. See compression results achievable with the NNCF-powered samples at [Model Zoo page](./docs/ModelZoo.md).
 
 The framework is organized as a Python\* package that can be built and used in a standalone mode. The framework
 architecture is unified to make it easy to add different compression algorithms for both PyTorch and TensorFlow deep
 learning frameworks.
 
 ## Key Features
+
 ### Post-Training Compression Algorithms
 
 | Compression algorithm |OpenVINO|PyTorch| TensorFlow | ONNX |
@@ -184,7 +185,6 @@ quantized_model = nncf.quantize(onnx_model, calibration_dataset)
 
 </details>
 
-
 [//]: # (NNCF provides full [samples]&#40;#post-training-quantization-samples&#41;, which demonstrate Post-Training Quantization usage for PyTorch, TensorFlow, ONNX, OpenVINO.)
 
 ### Training-Time Compression
@@ -272,7 +272,8 @@ For a quicker start with NNCF-powered compression, try sample notebooks and scri
 ### Model Compression Tutorials
 
 A collection of ready-to-run Jupyter* notebooks are available to demonstrate how to use NNCF compression algorithms to optimize models for inference with the OpenVINO Toolkit:
-- [Accelerate Inference of NLP models with Post-Training Qunatization API of NNCF](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/105-language-quantize-bert)
+
+- [Accelerate Inference of NLP models with Post-Training Quantization API of NNCF](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/105-language-quantize-bert)
 - [Convert and Optimize YOLOv8 with OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/230-yolov8-optimization)
 - [Convert and Optimize YOLOv7 with OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/226-yolov7-optimization)
 - [NNCF Post-Training Optimization of Segment Anything Model](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/237-segment-anything)
@@ -287,7 +288,9 @@ A collection of ready-to-run Jupyter* notebooks are available to demonstrate how
 - [Accelerate Inference of Sparse Transformer Models with OpenVINO and 4th Gen Intel Xeon Scalable Processors](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/116-sparsity-optimization)
 
 ### Post-Training Quantization Samples
+
 Compact scripts demonstrating quantization and corresponding inference speed boost:
+
 - [Post-Training Quantization of MobileNet v2 OpenVINO Model](examples/post_training_quantization/openvino/mobilenet_v2/README.md)
 - [Post-Training Quantization of YOLOv8 OpenVINO Model](examples/post_training_quantization/openvino/yolov8/README.md)
 - [Post-Training Quantization of Anomaly Classification OpenVINO model with control of accuracy metric](examples/post_training_quantization/openvino/quantize_with_accuracy_control/README.md)
@@ -298,7 +301,9 @@ Compact scripts demonstrating quantization and corresponding inference speed boo
 - [Post-Training Quantization of MobileNet v2 TensorFlow Model](examples/post_training_quantization/tensorflow/mobilenet_v2/README.md)
 
 ### Training-Time Compression Samples
+
 These examples provide full pipelines including compression, training and inference for classification, object detection and segmentation tasks.
+
 - PyTorch samples:
   - [Image Classification sample](examples/torch/classification/README.md)
   - [Object Detection sample](examples/torch/object_detection/README.md)
@@ -309,6 +314,7 @@ These examples provide full pipelines including compression, training and infere
   - [Instance Segmentation sample](examples/tensorflow/segmentation/README.md)
 
 ## Third-party repository integration
+
 NNCF may be straightforwardly integrated into training/evaluation pipelines of third-party repositories.
 
 ### Used by
@@ -322,30 +328,39 @@ NNCF may be straightforwardly integrated into training/evaluation pipelines of t
 NNCF is used as a compression backend within the renowned `transformers` repository in HuggingFace Optimum Intel.
 
 ### Git patches for third-party repository
+
 See [third_party_integration](./third_party_integration) for examples of code modifications (Git patches and base commit IDs are provided) that are necessary to integrate NNCF into the following repositories:
-- [huggingface-transformers](third_party_integration/huggingface_transformers/README.md)
+
+- [huggingface-transformers](third_party_integration/huggingface_transformers/README.md)
 
 ## Installation Guide
+
 For detailed installation instructions please refer to the [Installation](./docs/Installation.md) page.
 
 NNCF can be installed as a regular PyPI package via pip:
-```
+
+```bash
 pip install nncf
 ```
+
 If you want to install both NNCF and the supported PyTorch version in one line, you can do this by simply running:
-```
+
+```bash
 pip install nncf[torch]
 ```
+
 Other viable options besides `[torch]` are `[tf]`, `[onnx]` and `[openvino]`.
 
 NNCF is also available via [conda](https://anaconda.org/conda-forge/nncf):
-```
+
+```bash
 conda install -c conda-forge nncf
 ```
 
-You may also use one of the Dockerfiles in the [docker](./docker) directory to build an image with an environment already set up and ready for running NNCF [sample scripts](#Model-compression-tutorials-and-samples).
+You may also use one of the Dockerfiles in the [docker](./docker) directory to build an image with an environment already set up and ready for running NNCF [sample scripts](#model-compression-tutorials-and-samples).
 
 ### System requirements
+
 - Ubuntu\* 18.04 or later (64-bit)
 - Python\* 3.7 or later
 - Supported frameworks:
@@ -362,7 +377,7 @@ List of models and compression results for them can be found at our [Model Zoo p
 
 ## Citing
 
-```
+```bibtex
 @article{kozlov2020neural,
   title = {Neural network compression framework for fast model inference},
   author = {Kozlov, Alexander and Lazarevich, Ivan and Shamporov, Vasily and Lyalyushkin, Nikolay and Gorbachev, Yury},
@@ -372,13 +387,15 @@ List of models and compression results for them can be found at our [Model Zoo p
 ```
 
 ## Contributing Guide
+
 Refer to the [CONTRIBUTING.md](./CONTRIBUTING.md) file for guidelines on contributions to the NNCF repository.
 
 ## Useful links
+
 - [Documentation](./docs)
 - Example scripts (model objects available through links in respective README.md files):
-  - [PyTorch](./examples/torch)
-  - [TensorFlow](./examples/tensorflow)
+  - [PyTorch](./examples/torch)
+  - [TensorFlow](./examples/tensorflow)
 - [FAQ](./docs/FAQ.md)
 - [Notebooks](https://github.com/openvinotoolkit/openvino_notebooks#-model-training)
 - [HuggingFace Optimum Intel](https://huggingface.co/docs/optimum/intel/optimization_ov)
