This notebook demonstrates how to do inference with Automatic Device Selection (AUTO). More information about Automatic Device Selection can be found in the OpenVINO documentation.
A basic introduction to Automatic Device Selection with OpenVINO.
This notebook demonstrates how to:

- compile a model for the AUTO device with `compile_model`,
- compare first-inference latency (model compilation time + first inference time) between the GPU device and the AUTO device,
- show how the different performance hints (THROUGHPUT and LATENCY) steer performance toward the desired metric (see the sketch after this list).
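As a rough illustration only, the sketch below compiles a hypothetical IR model once with AUTO and once directly with GPU, times compilation plus the first inference for each, and shows how a performance hint is passed to `compile_model`. The model path `model.xml`, the availability of a GPU device, and a static float32 input are assumptions made for the sake of the example, not the notebook's exact code.

```python
# A minimal sketch, not the notebook's exact code. Assumptions: an IR model at
# "model.xml", a system exposing a GPU device, and a static float32 input shape.
import time

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# AUTO can start inference on CPU while the GPU version of the model is still
# compiling, which typically shortens compile time + first-inference latency.
start = time.perf_counter()
compiled_auto = core.compile_model(model, device_name="AUTO")
dummy_input = np.zeros(tuple(compiled_auto.input(0).shape), dtype=np.float32)
compiled_auto.create_infer_request().infer({0: dummy_input})
print(f"AUTO: compile + 1st inference took {time.perf_counter() - start:.2f} s")

# Compiling directly for GPU blocks until GPU compilation has finished.
start = time.perf_counter()
compiled_gpu = core.compile_model(model, device_name="GPU")
compiled_gpu.create_infer_request().infer({0: dummy_input})
print(f"GPU : compile + 1st inference took {time.perf_counter() - start:.2f} s")

# Performance hints steer AUTO toward the desired metric.
compiled_latency = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
compiled_throughput = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
```

With the LATENCY hint, AUTO configures the selected device for the shortest single-request latency; with THROUGHPUT, it favors higher aggregate throughput, typically by running more infer requests in parallel.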
If you have not done so already, please follow the Installation Guide to install all required dependencies.
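For a quick start, the core package can usually be installed with pip; treat this as a convenience only, since the Installation Guide lists the full set of required dependencies.

```python
# Assumption: the core OpenVINO package is enough for this sketch; follow the
# Installation Guide for the complete, pinned requirements.
%pip install -q openvino
```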