This demo shows how to run a MobileNetV3 Large PaddlePaddle model using OpenVINO Runtime. Instead of exporting the PaddlePaddle model to ONNX and converting it to Intermediate Representation (IR) format using Model Optimizer, we can now read the Paddle model directly, with no conversion step. The demo also covers new features in OpenVINO 2022.1, sketched in the example after this list, including:
- Preprocessing API
- Directly Loading a PaddlePaddle Model
- Auto-Device Plugin
- AsyncInferQueue Python API
- Performance Hints
  - LATENCY Mode
  - THROUGHPUT Mode
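
As a quick orientation before the step-by-step walkthrough, the minimal sketch below strings these features together: it reads a `.pdmodel` file directly, folds normalization into the graph with the Preprocessing API, compiles for the AUTO device with a THROUGHPUT performance hint, and feeds requests through an `AsyncInferQueue`. The model filename, mean/scale values, and input shape are illustrative placeholders, not the values used in this demo.

```python
import numpy as np
import openvino.runtime as ov
from openvino.preprocess import PrePostProcessor

core = ov.Core()

# Read the PaddlePaddle model directly -- no ONNX export or Model Optimizer run.
# "mobilenet_v3_large.pdmodel" is a placeholder path.
model = core.read_model("mobilenet_v3_large.pdmodel")

# Preprocessing API: fold mean/scale normalization into the model itself.
# The values here are illustrative only.
ppp = PrePostProcessor(model)
ppp.input().preprocess().mean([127.5, 127.5, 127.5]).scale([127.5, 127.5, 127.5])
model = ppp.build()

# Auto-Device Plugin ("AUTO") picks the hardware; the THROUGHPUT performance
# hint configures it for maximum parallelism (use "LATENCY" to optimize for
# the fastest single request instead).
compiled_model = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})

# AsyncInferQueue runs several inference requests concurrently and collects
# results through a callback.
results = {}

def callback(request, userdata):
    results[userdata] = request.get_output_tensor(0).data.copy()

infer_queue = ov.AsyncInferQueue(compiled_model)
infer_queue.set_callback(callback)
for i in range(4):
    image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
    infer_queue.start_async({0: image}, userdata=i)
infer_queue.wait_all()
```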
If you have not done so already, please follow the Installation Guide to install all required dependencies.