# YOLOv8 OpenVINO Inference in C++ 🦾
Welcome to the YOLOv8 OpenVINO Inference example in C++! This guide will help you get started with leveraging the powerful YOLOv8 models using the OpenVINO and OpenCV APIs in your C++ projects. Whether you're looking to boost performance or add flexibility to your applications, this example has you covered.
## 🌟 Features
- 🚀 **Model Format Support**: Compatible with `ONNX` and `OpenVINO IR` formats.
- ⚡ **Precision Options**: Run models in `FP32`, `FP16`, and `INT8` precisions.
- 🔄 **Dynamic Shape Loading**: Easily handle models with dynamic input shapes.
## 📋 Dependencies
To ensure smooth execution, please make sure you have the following dependencies installed:
| Dependency | Version |
| ---------- | -------- |
| OpenVINO | >=2023.3 |
| OpenCV | >=4.5.0 |
| C++ | >=14 |
| CMake | >=3.12.0 |
## ⚙️ Build Instructions
Follow these steps to build the project:
1. Clone the repository:
```bash
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics/YOLOv8-OpenVINO-CPP-Inference
```
2. Create a build directory and compile the project:
```bash
mkdir build
cd build
cmake ..
make
```
## 🛠️ Usage
Once built, you can run inference on an image using the following command:
```bash
./detect <model_path.{onnx, xml}> <image_path.jpg>
```
## 🔄 Exporting YOLOv8 Models
To use your YOLOv8 model with OpenVINO, you need to export it first. Use the command below to export the model:
```bash
yolo export model=yolov8s.pt imgsz=640 format=openvino
```
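To produce the FP16 and INT8 variants mentioned under Precision Options, the Ultralytics exporter also accepts `half` and `int8` arguments. Assuming a standard Ultralytics installation:

```bash
# FP16 (half-precision) OpenVINO export
yolo export model=yolov8s.pt imgsz=640 format=openvino half=True

# INT8 quantized OpenVINO export (runs calibration during export)
yolo export model=yolov8s.pt imgsz=640 format=openvino int8=True
```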
## 📸 Screenshots
### Running Using OpenVINO Model

### Running Using ONNX Model

## ❤️ Contributions
We hope this example helps you integrate YOLOv8 with OpenVINO and OpenCV into your C++ projects effortlessly. Happy coding! 🚀