Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying AI inference. It can be used to develop applications and solutions based on deep learning tasks such as emulation of human vision, automatic speech recognition, natural language processing, and recommendation systems. It offers high performance and a rich set of deployment options, from edge to cloud.
openvino provides CMake targets:

    find_package(OpenVINO REQUIRED)
    target_link_libraries(main PRIVATE openvino::runtime)
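For context, a minimal C++ sketch of what linking against openvino::runtime enables; the model path "model.xml" and the "CPU" device are placeholder assumptions, and input tensor setup is omitted:

    #include <openvino/openvino.hpp>

    int main() {
        // The core object discovers and manages the installed plugins.
        ov::Core core;

        // Read a model; "model.xml" is a placeholder OpenVINO IR file.
        std::shared_ptr<ov::Model> model = core.read_model("model.xml");

        // Compile for a device; "CPU" assumes the cpu feature is installed.
        ov::CompiledModel compiled = core.compile_model(model, "CPU");

        // Run one synchronous inference (input tensors omitted for brevity).
        ov::InferRequest request = compiled.create_infer_request();
        request.infer();
        return 0;
    }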
Features (device selection is sketched after this list):

auto - Enables Auto plugin for inference
auto-batch - Enables Auto Batch plugin for inference, useful for throughput mode
cpu - Enables CPU plugin for inference
gpu - Enables GPU plugin for inference
hetero - Enables Hetero plugin for inference
ir - Enables IR frontend for reading models in OpenVINO IR format
npu - Enables NPU plugin for inference
onnx - Enables ONNX frontend for reading models in ONNX format
paddle - Enables PaddlePaddle frontend for reading models in PaddlePaddle format
pytorch - Enables PyTorch frontend to convert models from PyTorch format
tensorflow - Enables TensorFlow frontend for reading models in TensorFlow format
tensorflow-lite - Enables TensorFlow Lite frontend for reading models in TensorFlow Lite format
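As a hedged illustration of how the plugin features above map to runtime device names (the device strings are standard OpenVINO identifiers; "model.onnx" is a placeholder path, and the corresponding features are assumed to be installed):

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;

        // The onnx feature lets read_model parse ONNX files directly.
        auto model = core.read_model("model.onnx");

        // auto feature: the AUTO device selects a suitable plugin itself.
        auto on_auto = core.compile_model(model, "AUTO");

        // hetero feature: split the graph across devices, GPU first, then CPU.
        auto on_hetero = core.compile_model(model, "HETERO:GPU,CPU");

        // auto-batch feature: automatic batching on the GPU for throughput.
        auto on_batch = core.compile_model(model, "BATCH:GPU");
        return 0;
    }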
Version: v2024.4.0#4
Supports: !uwp & !x86 & !(android & arm32)
License: Apache-2.0