This example loads an image classification model exported from PyTorch and confirms its accuracy and speed on the ILSVRC2012 ImageNet validation dataset. You need to download this dataset yourself.
pip install neural-compressor
pip install -r requirements.txt
Note: this example has been validated only against specific ONNX Runtime versions.
Please refer to the official PyTorch ONNX export guide for detailed model export instructions. The following is a simple example:
python prepare_model.py --output_model='vgg16.onnx'
Download the ILSVRC2012 ImageNet validation dataset.
Download the label file:
wget http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
tar -xvzf caffe_ilsvrc12.tar.gz val.txt
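The extracted val.txt maps each validation image to its class index, one "<filename> <label>" pair per line. A minimal stdlib parser for that format (assumed here from the Caffe label archive's convention) looks like:

```python
# Parse Caffe-style val.txt: each line is "<image filename> <class index>".
def parse_val_labels(lines):
    labels = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        name, idx = line.rsplit(maxsplit=1)
        labels[name] = int(idx)
    return labels

# Example lines in the expected format (labels are illustrative).
sample = [
    "ILSVRC2012_val_00000001.JPEG 65",
    "ILSVRC2012_val_00000002.JPEG 970",
]
print(parse_val_labels(sample))
```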
Quantize the model with QLinearOps (--input_model takes the *.onnx model path):
bash run_quant.sh --input_model=path/to/model \
--dataset_location=/path/to/imagenet \
--label_path=/path/to/val.txt \
--output_model=path/to/save
Quantize the model with QDQ mode (--input_model takes the *.onnx model path):
bash run_quant.sh --input_model=path/to/model \
--dataset_location=/path/to/imagenet \
--label_path=/path/to/val.txt \
--output_model=path/to/save \
--quant_format=QDQ
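The two formats differ in how quantization shows up in the exported graph: QLinearOps fuses quantization into dedicated operators such as QLinearConv and QLinearMatMul, while QDQ keeps the float operators and brackets them with explicit QuantizeLinear/DequantizeLinear pairs. A rough way to tell which format a saved model uses is to inspect its node op types; the helper below is a hypothetical sketch that works on a plain list of op-type strings (with the onnx package, they would come from [n.op_type for n in onnx.load(path).graph.node]):

```python
# Heuristic sketch: classify an ONNX model's quantization format from node op types.
def quant_format(op_types):
    if any(t.startswith("QLinear") for t in op_types):
        return "QLinearOps"   # fused quantized operators present
    if "QuantizeLinear" in op_types and "DequantizeLinear" in op_types:
        return "QDQ"          # explicit quantize/dequantize pairs around float ops
    return "float (not quantized)"

print(quant_format(["QuantizeLinear", "QLinearConv", "QLinearMatMul"]))  # QLinearOps
print(quant_format(["QuantizeLinear", "DequantizeLinear", "Conv"]))      # QDQ
```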
Benchmark the model (--input_model takes the *.onnx model path):
bash run_benchmark.sh --input_model=path/to/model \
--dataset_location=/path/to/imagenet \
--label_path=/path/to/val.txt \
--batch_size=batch_size \
--mode=performance # or accuracy
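In accuracy mode, the benchmark reports top-1 accuracy over the validation set, which reduces to comparing each sample's argmax prediction with its label. A minimal pure-Python sketch of that metric (hypothetical helper, not the script's actual implementation):

```python
# Top-1 accuracy: fraction of samples whose highest-scoring class matches the label.
def top1_accuracy(logits, labels):
    correct = sum(
        1 for scores, label in zip(logits, labels)
        if max(range(len(scores)), key=scores.__getitem__) == label
    )
    return correct / len(labels)

# Two samples with 3 class scores each; both argmax predictions match the labels.
scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
print(top1_accuracy(scores, [1, 0]))  # 1.0
```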