Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-10-21 15:49:31 +08:00
[Model] Support YOLOv8 (#1137)
* add GPL license
* add GPL-3.0 license
* add GPL-3.0 license
* add GPL-3.0 license
* support yolov8
* add pybind for yolov8
* add yolov8 readme

Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
examples/vision/detection/scaledyolov4/README.md (6) Normal file → Executable file

@@ -11,7 +11,7 @@ English | [简体中文](README_CN.md)

Visit the official [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) GitHub repository, follow the guidelines to download the `scaledyolov4.pt` model, and employ `models/export.py` to get the file in `onnx` format. If you have any problems with the exported `onnx` model, refer to [ScaledYOLOv4#401](https://github.com/WongKinYiu/ScaledYOLOv4/issues/401) for a solution.

```bash
# Download the ScaledYOLOv4 model file
```

@@ -38,8 +38,6 @@ For developers' testing, models exported by ScaledYOLOv4 are provided below. Dev

| [ScaledYOLOv4-P6+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_.onnx) | 487MB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4), GPL-3.0 License |
| [ScaledYOLOv4-P7](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7.onnx) | 1.1GB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)

@@ -48,4 +46,4 @@ For developers' testing, models exported by ScaledYOLOv4 are provided below. Dev

## Release Note

- Document and code are based on [ScaledYOLOv4 CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415)
examples/vision/detection/yolor/README.md (1) Normal file → Executable file

@@ -36,7 +36,6 @@ For developers' testing, models exported by YOLOR are provided below. Developers

| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-640-640.onnx) | 580MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor), GPL-3.0 License |
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-640-640.onnx) | 580MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
examples/vision/detection/yolov5/README.md (3) Normal file → Executable file

@@ -6,7 +6,6 @@ English | [简体中文](README_CN.md)

- (1) The *.onnx provided by the [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v7.0) can be deployed directly;
- (2) A YOLOv5 v7.0 model trained on personal data should employ `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5) to export the ONNX files for deployment.

## Download Pre-trained ONNX Model

For developers' testing, models exported by YOLOv5 are provided below. Developers can download them directly. (The accuracy in the following table is derived from the official source repository)

@@ -27,4 +26,4 @@ For developers' testing, models exported by YOLOv5 are provided below. Developer

## Release Note

- Document and code are based on [YOLOv5 v7.0](https://github.com/ultralytics/yolov5/tree/v7.0)
examples/vision/detection/yolov5lite/README.md (3) Normal file → Executable file

@@ -60,7 +60,6 @@ For developers' testing, models exported by YOLOv5Lite are provided below. Devel

| [YOLOv5Lite-c](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-c-sim-512.onnx) | 18MB | 50.9% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |
| [YOLOv5Lite-g](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx) | 21MB | 57.6% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)

@@ -69,4 +68,4 @@ For developers' testing, models exported by YOLOv5Lite are provided below. Devel

## Release Note

- Document and code are based on [YOLOv5-Lite v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)
examples/vision/detection/yolov6/README.md (6) Normal file → Executable file

@@ -8,8 +8,6 @@ English | [简体中文](README_CN.md)

- (1) The *.onnx provided by the [Official Repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
- (2) Personal models trained by developers should be exported as ONNX models; refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.

## Download Pre-trained ONNX Model

For developers' testing, models exported by YOLOv6 are provided below. Developers can download them directly. (The accuracy in the following table is derived from the official source repository)

@@ -20,8 +18,6 @@ For developers' testing, models exported by YOLOv6 are provided below. Developer

| [YOLOv6t](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6t.onnx) | 58MB | 41.3% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |
| [YOLOv6n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6n.onnx) | 17MB | 35.0% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)

@@ -30,4 +26,4 @@ For developers' testing, models exported by YOLOv6 are provided below. Developer

## Release Note

- Document and code are based on [YOLOv6 0.1.0 version](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)
examples/vision/detection/yolov7/README.md (2) Normal file → Executable file

@@ -20,8 +20,6 @@ python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt

python models/export.py --grid --dynamic --end2end --weights PATH/TO/yolov7.pt
```

## Download the pre-trained ONNX model

To facilitate testing for developers, we provide below the models exported by YOLOv7, which developers can download and use directly. (The accuracy of the models in the table is sourced from the official library)

| Model | Size | Accuracy | Note |
examples/vision/detection/yolov7end2end_ort/README.md (4) Normal file → Executable file

@@ -31,13 +31,11 @@ For developers' testing, models exported by YOLOv7End2EndORT are provided below.

| [yolov7-d6-end2end-ort-nms](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-d6-end2end-ort-nms.onnx) | 511MB | 56.6% | This model file is sourced from [YOLOv7](https://github.com/WongKinYiu/yolov7), GPL-3.0 License |
| [yolov7-e6e-end2end-ort-nms](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-e6e-end2end-ort-nms.onnx) | 579MB | 56.8% | This model file is sourced from [YOLOv7](https://github.com/WongKinYiu/yolov7), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- Document and code are based on [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1)
examples/vision/detection/yolov7end2end_trt/README.md (5) Normal file → Executable file

@@ -6,8 +6,6 @@ The YOLOv7End2EndTRT deployment is based on [YOLOv7](https://github.com/WongKinY

- (1) The *.pt files provided by the [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) require [exporting the ONNX model](#Export-the-ONNX-Model) to complete the deployment. Deployment of *.trt and *.pose models is not supported.
- (2) A YOLOv7 model trained on personal data should [export the ONNX model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B); refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.

## Export the ONNX Model

@@ -37,7 +35,6 @@ For developers' testing, models exported by YOLOv7End2EndTRT are provided below.

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- Document and code are based on [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1)
examples/vision/detection/yolov8/README.md (29) Executable file

@@ -0,0 +1,29 @@

English | [简体中文](README_CN.md)

# YOLOv8 Ready-to-deploy Model

- The deployment of the YOLOv8 model is based on [YOLOv8](https://github.com/ultralytics/ultralytics) and the [Pre-trained Model Based on COCO](https://github.com/ultralytics/ultralytics)
  - (1) The *.onnx provided by the [Official Repository](https://github.com/ultralytics/ultralytics) can be deployed directly;
  - (2) A YOLOv8 model trained on personal data should be exported to an ONNX file via [YOLOv8](https://github.com/ultralytics/ultralytics) for deployment; a hedged export sketch follows this list.
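In recent `ultralytics` releases the ONNX export is exposed through the package's Python API. A minimal sketch, assuming `pip install ultralytics` and a local `yolov8s.pt` checkpoint (hypothetical path):

```python
# Hedged sketch: export a YOLOv8 checkpoint to ONNX via the ultralytics Python API.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")    # load the PyTorch checkpoint (assumed local file)
model.export(format="onnx")   # writes yolov8s.onnx alongside the checkpoint
```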
## Download Pre-trained ONNX Model

For developers' testing, models exported by YOLOv8 are provided below. Developers can download them directly. (The accuracy in the following table is derived from the official source repository)

| Model | Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- |:---- |
| [YOLOv8n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8n.onnx) | 12.1MB | 37.3% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx) | 42.6MB | 44.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8m.onnx) | 98.8MB | 50.2% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8l.onnx) | 166.7MB | 52.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8x.onnx) | 260.3MB | 53.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
- [Serving Deployment](serving)

## Release Note

- Document and code are based on [YOLOv8](https://github.com/ultralytics/ultralytics)
examples/vision/detection/yolov8/README_CN.md (29) Normal file

@@ -0,0 +1,29 @@

[English](README.md) | 简体中文

# YOLOv8 Ready-to-deploy Model

- The YOLOv8 deployment implementation comes from [YOLOv8](https://github.com/ultralytics/ultralytics) and the [pre-trained model based on COCO](https://github.com/ultralytics/ultralytics)
  - (1) The *.onnx provided by the [official repository](https://github.com/ultralytics/ultralytics) can be deployed directly;
  - (2) A YOLOv8 model trained on the developer's own data can be deployed after exporting an ONNX file via [YOLOv8](https://github.com/ultralytics/ultralytics).

## Download Pre-trained ONNX Model

For developers' testing, the exported YOLOv8 model series are provided below for direct download. (The accuracy in the following table is derived from the official source repository)

| Model | Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- |:---- |
| [YOLOv8n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8n.onnx) | 12.1MB | 37.3% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx) | 42.6MB | 44.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8m.onnx) | 98.8MB | 50.2% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8l.onnx) | 166.7MB | 52.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |
| [YOLOv8x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8x.onnx) | 260.3MB | 53.9% | This model file is sourced from [YOLOv8](https://github.com/ultralytics/ultralytics), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
- [Serving Deployment](serving)

## Release Note

- Document and code are based on [YOLOv8](https://github.com/ultralytics/ultralytics)
examples/vision/detection/yolov8/cpp/CMakeLists.txt (14) Executable file

@@ -0,0 +1,14 @@

PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)

# Specify the fastdeploy library path after downloading and decompression
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy dependent header files
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Add FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/detection/yolov8/cpp/README_CN.md (90) Normal file

@@ -0,0 +1,90 @@

[English](README.md) | 简体中文
# YOLOv8 C++ Deployment Example

This directory provides `infer.cc` to quickly complete the deployment of YOLOv8 on CPU/GPU, and on GPU with TensorRT acceleration.

Confirm the following two steps before deployment:

- 1. The hardware and software environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment; refer to [FastDeploy Prebuilt Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test. Supporting this model requires FastDeploy version 1.0.3 or above (x.x.x>=1.0.3).

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; users can choose a suitable version from the `FastDeploy Prebuilt Libraries` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# 1. Download the officially converted YOLOv8 ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov8s.onnx 000000014439.jpg 2
```

The visualized result after running is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

The above commands only apply to Linux or macOS. For SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If deploying with a Huawei Ascend NPU, refer to the following document to initialize the deployment environment first:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv8 C++ Interface

### YOLOv8 Class

```c++
fastdeploy::vision::detection::YOLOv8(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

YOLOv8 model loading and initialization, where model_file is the exported ONNX model file.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameter file; pass an empty string when the model format is ONNX
> * **runtime_option**(RuntimeOption): Backend inference configuration; defaults to the default configuration
> * **model_format**(ModelFormat): Model format, ONNX by default

#### Predict Function

> ```c++
> YOLOv8::Predict(const cv::Mat& im, DetectionResult* result)
> ```
>
> Model prediction interface: takes an input image and returns the detection result directly. (The signature matches the call in `infer.cc`, which passes the image by reference.)
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: Detection result, including detection boxes and the confidence of each box; refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult

### Class Member Variables
#### Preprocessing Parameters
Users can modify the following preprocessing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(vector<int>): Target size of the resize during preprocessing; two integers for [width, height], default [640, 640]
> > * **padding_value**(vector<float>): Padding value used when the image is resized with padding; three floats for the three channels, default [114, 114, 114]
> > * **is_no_pad**(bool): Whether the image is resized without padding; `is_no_pad=true` means no padding is used, default `is_no_pad=false`
> > * **is_mini_pad**(bool): Sets the width and height after resize to the values closest to the `size` member while keeping the padded pixels divisible by the `stride` member; default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member; default `stride=32`

- [Model Introduction](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/detection/yolov8/cpp/infer.cc (105) Executable file

@@ -0,0 +1,105 @@

// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"

// Run inference on CPU with the default runtime option.
void CpuInfer(const std::string& model_file, const std::string& image_file) {
  auto model = fastdeploy::vision::detection::YOLOv8(model_file);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

// Run inference on GPU via the CUDA runtime option.
void GpuInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  auto model = fastdeploy::vision::detection::YOLOv8(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

// Run inference on GPU with the TensorRT backend; the input shape is pinned
// to the 640x640 exported ONNX model.
void TrtInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  option.SetTrtInputShape("images", {1, 3, 640, 640});
  auto model = fastdeploy::vision::detection::YOLOv8(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
                 "e.g. ./infer_demo ./yolov8s.onnx ./test.jpeg 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

  if (std::atoi(argv[3]) == 0) {
    CpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 2) {
    TrtInfer(argv[1], argv[2]);
  }
  return 0;
}
examples/vision/detection/yolov8/python/README_CN.md (78) Normal file

@@ -0,0 +1,78 @@

[English](README.md) | 简体中文
# YOLOv8 Python Deployment Example

Confirm the following two steps before deployment:

- 1. The hardware and software environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package; refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides `infer.py` to quickly complete the deployment of YOLOv8 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete it:

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov8/python/

# Download the yolov8 model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
python infer.py --model yolov8.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov8.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov8.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result after running is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

## YOLOv8 Python Interface

```python
fastdeploy.vision.detection.YOLOv8(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```

YOLOv8 model loading and initialization, where model_file is the exported ONNX model file.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameter file; no setting is needed when the model format is ONNX
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format, ONNX by default

### predict Function

> ```python
> YOLOv8.predict(image_data)
> ```
>
> Model prediction interface: takes an input image and returns the detection result directly.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure; refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
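A minimal usage sketch for reading the returned fields. The field names `boxes`, `scores`, and `label_ids` follow the vision results docs; `yolov8.onnx` and the test image downloaded above are assumed to be present:

```python
# Hedged sketch: run prediction and iterate over DetectionResult fields.
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv8("yolov8.onnx")
im = cv2.imread("000000014439.jpg")
result = model.predict(im)

# Each box is [xmin, ymin, xmax, ymax]; scores and label_ids align by index.
for box, score, label_id in zip(result.boxes, result.scores, result.label_ids):
    print(label_id, score, box)
```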
### Class Member Properties
#### Preprocessing Parameters
Users can modify the following preprocessing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(list[int]): Target size of the resize during preprocessing; two integers for [width, height], default [640, 640]
> > * **padding_value**(list[float]): Padding value used when the image is resized with padding; three floats for the three channels, default [114, 114, 114]
> > * **is_no_pad**(bool): Whether the image is resized without padding; `is_no_pad=True` means no padding is used, default `is_no_pad=False`
> > * **is_mini_pad**(bool): Sets the width and height after resize to the values closest to the `size` member while keeping the padded pixels divisible by the `stride` member; default `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member; default `stride=32`
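For instance, to letterbox to a larger input resolution before inference, these properties can be overridden after constructing the model. A minimal sketch, assuming the properties are exposed directly on the model object as listed above:

```python
import fastdeploy as fd

model = fd.vision.detection.YOLOv8("yolov8.onnx")
# Assumption: the preprocessing properties live directly on the model object.
model.size = [1280, 1280]                    # resize target [width, height]
model.padding_value = [114.0, 114.0, 114.0]  # per-channel padding value
model.is_mini_pad = True                     # pad only up to the nearest stride multiple
```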
## Other Documents

- [YOLOv8 Model Introduction](..)
- [YOLOv8 C++ Deployment](../cpp)
- [Description of Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/detection/yolov8/python/infer.py (58) Executable file

@@ -0,0 +1,58 @@

import fastdeploy as fd
import cv2


def parse_arguments():
    import argparse
    import ast
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default=None, help="Path of yolov8 model.")
    parser.add_argument(
        "--image", default=None, help="Path of test image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu', 'gpu' or 'ascend'.")
    parser.add_argument(
        "--use_trt",
        type=ast.literal_eval,
        default=False,
        help="Whether to use tensorrt.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()

    if args.device.lower() == "gpu":
        option.use_gpu()

    if args.device.lower() == "ascend":
        option.use_ascend()

    if args.use_trt:
        option.use_trt_backend()
        option.set_trt_input_shape("images", [1, 3, 640, 640])
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model = fd.vision.detection.YOLOv8(args.model, runtime_option=runtime_option)

# Predict the image
if args.image is None:
    image = fd.utils.get_detection_test_image()
else:
    image = args.image
im = cv2.imread(image)
result = model.predict(im)

# Visualization
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
print("Visualized result saved in ./visualized_result.jpg")