Mirror of https://github.com/PaddlePaddle/FastDeploy.git — synced 2025-10-17 14:11:14 +08:00
Add docs for external models (#95)
* Add YOLOv7 support: model code, pybind wrapper, C++/Python examples, C++/Python READMEs and api.md updates; publish YOLOv7 release links; move some helper functions to private class members; change variables to const and fix documents; add copyright headers and update .gitignore
* Add YOLOR support and add is_dynamic for the YOLO series (#22); add docs and test photos for YOLOv7 and YOLOv5
* First commits for RetinaFace, UltraFace, YOLOv5Face, MODNet, ArcFace, partial_fc, YOLOX, YOLOv6, and NanoDet
* Merge from develop (#9, #11, #13, #14), which includes:
  - Fix compile problem in different python versions (#26) and some usage problems on Linux
  - Add PaddleDetection/PPYOLOE model support (#22) with demo code and documents
  - Add convert processor to vision (#27); add normalize with alpha and beta; add checking for the cmake include dir
  - Fix missing trt_backend option bug when initializing from TRT; remove un-needed data layout and add a pre-check for dtype; change RGB2BGR to BGR2RGB in the ppcls model
  - Add model_zoo YOLOv6 C++/Python demos and YOLOX C++/pybind demos; add version notes for YOLOv5/YOLOv6/YOLOX; fix some bugs in YOLOX; fix CMakeLists.txt typos and avoid conflicts in examples/CMakeLists.txt
  - Fix bug while the inference result is empty with YOLOv5 (#29); add multi-label function for YOLOv5; fix the wrongly named option.trt_max_shape variable; update runtime_option.md (resnet dynamic shape setting renamed from images to x)

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
@@ -1,23 +0,0 @@
# Vision Model Deployment

This directory provides deployment examples for a variety of vision models, covering the following task types:

| Task type | Description | Prediction result structure |
|:--------- |:------------ |:---------------------------- |
| Detection | Object detection: takes an image, locates the objects in it, and returns the coordinates, category, and confidence of each detection box | [DetectionResult](../../../../docs/api/vision_results/detection_result.md) |
| Segmentation | Semantic segmentation: takes an image and returns the class and confidence of every pixel | [SegmentationResult](../../../../docs/api/vision_results/segmentation_result.md) |
| Classification | Image classification: takes an image and returns its classification result and confidence | [ClassifyResult](../../../../docs/api/vision_results/classification_result.md) |

## FastDeploy API Design

Vision models share a largely uniform task paradigm. When designing the API (both C++ and Python), FastDeploy splits vision-model deployment into four steps:

- Model loading
- Image preprocessing
- Model inference
- Postprocessing of inference results

For PaddlePaddle's vision suites as well as popular external models, FastDeploy provides end-to-end deployment; users only need to prepare a model and follow two steps to deploy it (see the sketch below):

- Load the model
- Call the `predict` interface
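For example, with the Python interface documented in the examples below, the whole flow collapses to a handful of lines. A minimal sketch, reusing the YOLOv5 model file, test image, and `fastdeploy` calls that appear verbatim in the examples that follow:

```
import cv2
import fastdeploy as fd

# Step 1: load the model (preprocessing, inference, and postprocessing
# are all wrapped inside the model object)
model = fd.vision.detection.YOLOv5("yolov5s.onnx")

# Step 2: call the predict interface on an HWC, BGR image
im = cv2.imread("000000014439.jpg")
result = model.predict(im)

# Visualize and save the detection result
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```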
examples/vision/detection/nanodet_plus/README.md (new file, 22 lines)
@@ -0,0 +1,22 @@
# Preparing NanoDetPlus Models for Deployment

## Model Version Notes

- [NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)
  - (1) The *.onnx files from [the release page](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can be deployed directly.

## Download Pretrained ONNX Models

For developers' convenience, the exported NanoDetPlus models below are available for direct download.

| Model | Size | Accuracy |
|:----- |:---- |:-------- |
| [NanoDetPlus_320](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx) | 4.6MB | 27.0% |
| [NanoDetPlus_320_sim](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320-sim.onnx) | 4.6MB | 27.0% |

## Detailed Deployment Docs

- [Python deployment](python)
- [C++ deployment](cpp)
examples/vision/detection/nanodet_plus/cpp/CMakeLists.txt (new file, 14 lines)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.12)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/detection/nanodet_plus/cpp/README.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# NanoDetPlus C++ Deployment Example

This directory provides `infer.cc`, which quickly demonstrates deploying NanoDetPlus on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Download the prebuilt deployment libraries and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test:

```
mkdir build
cd build
wget https://xxx.tgz
tar xvf fastdeploy-linux-x64-0.2.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
make -j

# Download the officially converted NanoDetPlus model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# CPU inference
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 0
# GPU inference
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 2
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## NanoDetPlus C++ Interface

### NanoDetPlus Class

```
fastdeploy::vision::detection::NanoDetPlus(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const Frontend& model_format = Frontend::ONNX)
```

Loads and initializes a NanoDetPlus model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, pass an empty string
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

#### Predict Function

> ```
> NanoDetPlus::Predict(cv::Mat* im, DetectionResult* result,
>                      float conf_threshold = 0.25,
>                      float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for details on DetectionResult
> > * **conf_threshold**: Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS

### Class Member Variables

> > * **size**(vector<int>): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(vector<float>): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=true` means no padding is used; defaults to `is_no_pad=false`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

- [Model description](../../)
- [Python deployment](../python)
- [Vision model prediction results](../../../../../docs/api/vision_results/)
examples/vision/detection/nanodet_plus/python/README.md (new file, 79 lines)
@@ -0,0 +1,79 @@
# NanoDetPlus Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Install the FastDeploy Python wheel; see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)

This directory provides `infer.py`, which quickly demonstrates deploying NanoDetPlus on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:

```
# Download the NanoDetPlus model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/nanodet_plus/python/

# CPU inference
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## NanoDetPlus Python Interface

```
fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```

Loads and initializes a NanoDetPlus model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, this parameter does not need to be set
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

### predict Function

> ```
> NanoDetPlus.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data; note it must be in HWC, BGR format
> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Returns**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description

### Class Member Attributes

> > * **size**(list[int]): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(list[float]): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=True` means no padding is used; defaults to `is_no_pad=False`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`
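These attributes can be tuned on the model object before calling `predict`. A minimal sketch; that the attributes are writable in this way is an assumption based on the list above, and the 320×320 size is chosen to match the nanodet-plus-m_320 model:

```
import cv2
import fastdeploy as fd

model = fd.vision.detection.NanoDetPlus("nanodet-plus-m_320.onnx")

# Tune the preprocessing members documented above (assumed writable as listed)
model.size = [320, 320]                      # resize target, [width, height]
model.padding_value = [114.0, 114.0, 114.0]  # per-channel padding value

result = model.predict(cv2.imread("000000014439.jpg"))
```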
## Other Documents

- [NanoDetPlus model description](..)
- [NanoDetPlus C++ deployment](../cpp)
- [Model prediction result description](../../../../../docs/api/vision_results/)
examples/vision/detection/yolov5/README.md (new file, 28 lines)
@@ -0,0 +1,28 @@
# Preparing YOLOv5 Models for Deployment

## Model Version Notes

- [YOLOv5 v6.0](https://github.com/ultralytics/yolov5/releases/tag/v6.0)
  - (1) The *.onnx files from [the release page](https://github.com/ultralytics/yolov5/releases/tag/v6.0) can be deployed directly;
  - (2) YOLOv5 v6.0 models trained on your own data can be deployed after exporting an ONNX file with `export.py` from [YOLOv5](https://github.com/ultralytics/yolov5).

## Download Pretrained ONNX Models

For developers' convenience, the exported YOLOv5 models below are available for direct download.

| Model | Size | Accuracy |
|:----- |:---- |:-------- |
| [YOLOv5n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n.onnx) | 1.9MB | 28.4% |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx) | 7.2MB | 37.2% |
| [YOLOv5m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5m.onnx) | 21.2MB | 45.2% |
| [YOLOv5l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5l.onnx) | 46.5MB | 48.8% |
| [YOLOv5x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5x.onnx) | 86.7MB | 50.7% |

## Detailed Deployment Docs

- [Python deployment](python)
- [C++ deployment](cpp)
examples/vision/detection/yolov5/cpp/CMakeLists.txt (new file, 14 lines)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.12)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/detection/yolov5/cpp/README.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# YOLOv5 C++ Deployment Example

This directory provides `infer.cc`, which quickly demonstrates deploying YOLOv5 on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Download the prebuilt deployment libraries and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test:

```
mkdir build
cd build
wget https://xxx.tgz
tar xvf fastdeploy-linux-x64-0.2.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
make -j

# Download the officially converted yolov5 model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov5s.onnx 000000014439.jpg 2
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## YOLOv5 C++ Interface

### YOLOv5 Class

```
fastdeploy::vision::detection::YOLOv5(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const Frontend& model_format = Frontend::ONNX)
```

Loads and initializes a YOLOv5 model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, pass an empty string
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

#### Predict Function

> ```
> YOLOv5::Predict(cv::Mat* im, DetectionResult* result,
>                 float conf_threshold = 0.25,
>                 float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for details on DetectionResult
> > * **conf_threshold**: Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS

### Class Member Variables

> > * **size**(vector<int>): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(vector<float>): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=true` means no padding is used; defaults to `is_no_pad=false`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

- [Model description](../../)
- [Python deployment](../python)
- [Vision model prediction results](../../../../../docs/api/vision_results/)
examples/vision/detection/yolov5/cpp/infer.cc (new file, 105 lines)
@@ -0,0 +1,105 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"

void CpuInfer(const std::string& model_file, const std::string& image_file) {
  auto model = fastdeploy::vision::detection::YOLOv5(model_file);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  auto im_bak = im.clone();

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void GpuInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  auto model = fastdeploy::vision::detection::YOLOv5(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  auto im_bak = im.clone();

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void TrtInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  option.SetTrtInputShape("images", {1, 3, 640, 640});
  auto model = fastdeploy::vision::detection::YOLOv5(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  auto im_bak = im.clone();

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
                 "e.g ./infer_model ./yolov5.onnx ./test.jpeg 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

  if (std::atoi(argv[3]) == 0) {
    CpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 2) {
    TrtInfer(argv[1], argv[2]);
  }
  return 0;
}
examples/vision/detection/yolov5/python/README.md (new file, 79 lines)
@@ -0,0 +1,79 @@
# YOLOv5 Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Install the FastDeploy Python wheel; see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)

This directory provides `infer.py`, which quickly demonstrates deploying YOLOv5 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:

```
# Download the yolov5 model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/python/

# CPU inference
python infer.py --model yolov5s.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## YOLOv5 Python Interface

```
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```

Loads and initializes a YOLOv5 model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, this parameter does not need to be set
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

### predict Function

> ```
> YOLOv5.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data; note it must be in HWC, BGR format
> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Returns**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description

### Class Member Attributes

> > * **size**(list[int]): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(list[float]): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=True` means no padding is used; defaults to `is_no_pad=False`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

## Other Documents

- [YOLOv5 model description](..)
- [YOLOv5 C++ deployment](../cpp)
- [Model prediction result description](../../../../../docs/api/vision_results/)
examples/vision/detection/yolov5/python/infer.py (new file, 51 lines)
@@ -0,0 +1,51 @@
import fastdeploy as fd
import cv2


def parse_arguments():
    import argparse
    import ast
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of yolov5 onnx model.")
    parser.add_argument(
        "--image", required=True, help="Path of test image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu' or 'gpu'.")
    parser.add_argument(
        "--use_trt",
        type=ast.literal_eval,
        default=False,
        help="Whether to use tensorrt.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()

    if args.device.lower() == "gpu":
        option.use_gpu()

    if args.use_trt:
        option.use_trt_backend()
        option.set_trt_input_shape("images", [1, 3, 640, 640])
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model = fd.vision.detection.YOLOv5(args.model, runtime_option=runtime_option)

# Predict the detection result for an image
im = cv2.imread(args.image)
result = model.predict(im)

# Visualize the prediction result
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
print("Visualized result saved in ./visualized_result.jpg")
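Beyond saving a visualization, the returned `DetectionResult` can be inspected directly. A short sketch continuing from `infer.py` above; the field names `boxes`, `scores`, and `label_ids` are assumptions taken from the vision-results documentation linked in the READMEs, not shown on this page:

```
# Hypothetical follow-up to infer.py: iterate over the DetectionResult fields.
# The attribute names (boxes, scores, label_ids) are assumed from the
# vision_results docs, not confirmed by this page.
for box, score, label_id in zip(result.boxes, result.scores, result.label_ids):
    x1, y1, x2, y2 = box
    print(f"label={label_id} score={score:.3f} "
          f"box=({x1:.1f}, {y1:.1f}, {x2:.1f}, {y2:.1f})")
```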
examples/vision/detection/yolov6/README.md (new file, 23 lines)
@@ -0,0 +1,23 @@
# Preparing YOLOv6 Models for Deployment

## Model Version Notes

- [YOLOv6 v0.1.0](https://github.com/meituan/YOLOv6/releases/download/0.1.0)
  - (1) The *.onnx files from [the release page](https://github.com/meituan/YOLOv6/releases/download/0.1.0) can be deployed directly;

## Download Pretrained ONNX Models

For developers' convenience, the exported YOLOv6 models below are available for direct download.

| Model | Size | Accuracy |
|:----- |:---- |:-------- |
| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx) | 66MB | 43.1% |
| [YOLOv6s_640](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s-640x640.onnx) | 66MB | 43.1% |

## Detailed Deployment Docs

- [Python deployment](python)
- [C++ deployment](cpp)
examples/vision/detection/yolov6/cpp/CMakeLists.txt (new file, 14 lines)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.12)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/detection/yolov6/cpp/README.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# YOLOv6 C++ Deployment Example

This directory provides `infer.cc`, which quickly demonstrates deploying YOLOv6 on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Download the prebuilt deployment libraries and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test:

```
mkdir build
cd build
wget https://xxx.tgz
tar xvf fastdeploy-linux-x64-0.2.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
make -j

# Download the officially converted YOLOv6 model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# CPU inference
./infer_demo yolov6s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov6s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov6s.onnx 000000014439.jpg 2
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## YOLOv6 C++ Interface

### YOLOv6 Class

```
fastdeploy::vision::detection::YOLOv6(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const Frontend& model_format = Frontend::ONNX)
```

Loads and initializes a YOLOv6 model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, pass an empty string
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

#### Predict Function

> ```
> YOLOv6::Predict(cv::Mat* im, DetectionResult* result,
>                 float conf_threshold = 0.25,
>                 float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for details on DetectionResult
> > * **conf_threshold**: Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS

### Class Member Variables

> > * **size**(vector<int>): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(vector<float>): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=true` means no padding is used; defaults to `is_no_pad=false`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

- [Model description](../../)
- [Python deployment](../python)
- [Vision model prediction results](../../../../../docs/api/vision_results/)
examples/vision/detection/yolov6/python/README.md (new file, 79 lines)
@@ -0,0 +1,79 @@
# YOLOv6 Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Install the FastDeploy Python wheel; see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)

This directory provides `infer.py`, which quickly demonstrates deploying YOLOv6 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:

```
# Download the YOLOv6 model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov6/python/

# CPU inference
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## YOLOv6 Python Interface

```
fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```

Loads and initializes a YOLOv6 model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, this parameter does not need to be set
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration (see the sketch below)
> * **model_format**(Frontend): Model format; ONNX by default
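As a concrete example of the `runtime_option` parameter, the sketch below builds a GPU + TensorRT configuration. The `RuntimeOption` calls are the same ones used in the yolov5 `infer.py` shown earlier; only the YOLOv6 model name differs:

```
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()            # run inference on GPU instead of CPU
option.use_trt_backend()    # switch the backend to TensorRT
# Fix the shape of the input tensor named "images" for TensorRT
option.set_trt_input_shape("images", [1, 3, 640, 640])

model = fd.vision.detection.YOLOv6("yolov6s.onnx", runtime_option=option)
```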
### predict Function

> ```
> YOLOv6.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data; note it must be in HWC, BGR format
> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Returns**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description

### Class Member Attributes

> > * **size**(list[int]): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(list[float]): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=True` means no padding is used; defaults to `is_no_pad=False`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

## Other Documents

- [YOLOv6 model description](..)
- [YOLOv6 C++ deployment](../cpp)
- [Model prediction result description](../../../../../docs/api/vision_results/)
examples/vision/detection/yolov7/README.md
@@ -3,13 +3,14 @@
 ## Model Version Notes
 
 - [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1)
-  - (1) The .pt models from [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed directly after the [Export ONNX Model](#导出ONNX模型) step; the .onnx, .trt, and .pose models are not yet supported;
-  - (2) YOLOv7 0.1 models trained on your own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.
+  - (1) The *.pt models from [the release page](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
+  - (2) The *.onnx, *.trt, and *.pose models from [the release page](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) are not supported;
+  - (3) YOLOv7 0.1 models trained on your own data can be deployed after the [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) step.
 
 ## Export ONNX Model
 
 ```
-# Download the yolov7 model file, or prepare your own trained YOLOv7 model file
+# Download the yolov7 model file
 wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
 
 # Export the onnx file (Tips: matches the YOLOv7 release v0.1 code)
@@ -18,18 +19,24 @@ python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
 # If your code version supports exporting ONNX files with NMS, use the command below (please do not use "--end2end" for now; deployment of ONNX models with NMS will be supported later)
 python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
 
-# Move the onnx file to the examples directory
-cp PATH/TO/yolov7.onnx PATH/TO/FastDeploy/examples/vision/detextion/yolov7/
+# Move the onnx file to the demo directory
+cp PATH/TO/yolov7.onnx PATH/TO/model_zoo/vision/yolov7/
 ```
 
-## Download Pretrained Models
+## Download Pretrained ONNX Models
 
 For developers' convenience, the exported YOLOv7 models below are available for direct download.
 
 | Model | Size | Accuracy |
 |:----- |:---- |:-------- |
 | [YOLOv7](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx) | 141MB | 51.4% |
-| [YOLOv7-x] | 10MB | 51.4% |
+| [YOLOv7x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7x.onnx) | 273MB | 53.1% |
+| [YOLOv7-w6](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-w6.onnx) | 269MB | 54.9% |
+| [YOLOv7-e6](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-e6.onnx) | 372MB | 56.0% |
+| [YOLOv7-d6](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-d6.onnx) | 511MB | 56.6% |
+| [YOLOv7-e6e](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-e6e.onnx) | 579MB | 56.8% |
 
 ## Detailed Deployment Docs
examples/vision/detection/yolov7/cpp/README.md
@@ -5,7 +5,7 @@
 Before deployment, confirm the following two steps:
 
 - 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
-- 2. Download the prebuilt deployment libraries and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuilt_libraries.md)
+- 2. Download the prebuilt deployment libraries and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)
 
 Taking CPU inference on Linux as an example, run the following commands in this directory to build and test:
 
@@ -19,17 +19,21 @@ make -j
 
 # Download the officially converted yolov7 model file and a test image
 wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
-wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000087038.jpg
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
 
 
 # CPU inference
-./infer_demo yolov7.onnx 000000087038.jpg 0
+./infer_demo yolov7.onnx 000000014439.jpg 0
 # GPU inference
-./infer_demo yolov7.onnx 000000087038.jpg 1
+./infer_demo yolov7.onnx 000000014439.jpg 1
 # TensorRT inference on GPU
-./infer_demo yolov7.onnx 000000087038.jpg 2
+./infer_demo yolov7.onnx 000000014439.jpg 2
 ```
 
+The visualized result is shown below:
+
+<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">
+
 ## YOLOv7 C++ Interface
 
 ### YOLOv7 Class
@@ -58,11 +62,11 @@ Loads and initializes a YOLOv7 model, where model_file is an exported ONNX model.
 >                 float conf_threshold = 0.25,
 >                 float nms_iou_threshold = 0.5)
 > ```
 >
 > Model prediction interface: takes an image as input and outputs the detection result directly.
 >
 > **Parameters**
 >
 > > * **im**: Input image; note it must be in HWC, BGR format
 > > * **result**: Detection result, including the detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for details on DetectionResult
 > > * **conf_threshold**: Confidence threshold for filtering detection boxes
@@ -70,7 +74,11 @@ Loads and initializes a YOLOv7 model, where model_file is an exported ONNX model.
 
 ### Class Member Variables
 
 > > * **size**(vector<int>): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
+> > * **padding_value**(vector<float>): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
+> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=true` means no padding is used; defaults to `is_no_pad=false`
+> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`
 
 - [Model description](../../)
 - [Python deployment](../python)
examples/vision/detection/yolov7/python/README.md
@@ -18,15 +18,17 @@ git clone https://github.com/PaddlePaddle/FastDeploy.git
 cd FastDeploy/examples/vision/detection/yolov7/python/
 
 # CPU inference
-python infer.py --model yolov7.onnx --image 000000087038.jpg --device cpu
+python infer.py --model yolov7.onnx --image 000000014439.jpg --device cpu
 # GPU inference
-python infer.py --model yolov7.onnx --image 000000087038.jpg --device gpu
-# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes a while; please be patient)
-python infer.py --model yolov7.onnx --image 000000087038.jpg --device gpu --use_trt True
+python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu
+# TensorRT inference on GPU
+python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_trt True
 ```
 
+The visualized result is shown below:
+<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">
 
 ## YOLOv7 Python Interface
 
 ```
@@ -47,22 +49,28 @@ Loads and initializes a YOLOv7 model, where model_file is an exported ONNX model
 > ```
 > YOLOv7.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
 > ```
 >
 > Model prediction interface: takes an image as input and outputs the detection result directly.
 >
 > **Parameters**
 >
 > > * **image_data**(np.ndarray): Input data; note it must be in HWC, BGR format
 > > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
 > > * **nms_iou_threshold**(float): IoU threshold used during NMS
 
 > **Returns**
 >
 > > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
 
 ### Class Member Attributes
 
-> > * **size**(list | tuple): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
+> > * **size**(list[int]): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
+> > * **padding_value**(list[float]): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
+> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=True` means no padding is used; defaults to `is_no_pad=False`
+> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`
 
 
 ## Other Documents
examples/vision/detection/yolox/README.md (new file, 23 lines)
@@ -0,0 +1,23 @@
# Preparing YOLOX Models for Deployment

## Model Version Notes

- [YOLOX v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0)
  - (1) The *.onnx files from [the release page](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0) can be deployed directly;

## Download Pretrained ONNX Models

For developers' convenience, the exported YOLOX models below are available for direct download.

| Model | Size | Accuracy |
|:----- |:---- |:-------- |
| [YOLOX-s](https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx) | 35MB | 40.5% |

## Detailed Deployment Docs

- [Python deployment](python)
- [C++ deployment](cpp)
examples/vision/detection/yolox/cpp/CMakeLists.txt (new file, 14 lines)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.12)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/detection/yolox/cpp/README.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# YOLOX C++ Deployment Example

This directory provides `infer.cc`, which quickly demonstrates deploying YOLOX on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Download the prebuilt deployment libraries and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test:

```
mkdir build
cd build
wget https://xxx.tgz
tar xvf fastdeploy-linux-x64-0.2.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
make -j

# Download the officially converted YOLOX model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# CPU inference
./infer_demo yolox_s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolox_s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolox_s.onnx 000000014439.jpg 2
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## YOLOX C++ Interface

### YOLOX Class

```
fastdeploy::vision::detection::YOLOX(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const Frontend& model_format = Frontend::ONNX)
```

Loads and initializes a YOLOX model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, pass an empty string
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

#### Predict Function

> ```
> YOLOX::Predict(cv::Mat* im, DetectionResult* result,
>                float conf_threshold = 0.25,
>                float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for details on DetectionResult
> > * **conf_threshold**: Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS

### Class Member Variables

> > * **size**(vector<int>): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(vector<float>): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=true` means no padding is used; defaults to `is_no_pad=false`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

- [Model description](../../)
- [Python deployment](../python)
- [Vision model prediction results](../../../../../docs/api/vision_results/)
examples/vision/detection/yolox/python/README.md (new file, 79 lines)
@@ -0,0 +1,79 @@
# YOLOX Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Install the FastDeploy Python wheel; see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)

This directory provides `infer.py`, which quickly demonstrates deploying YOLOX on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:

```
# Download the YOLOX model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolox/python/

# CPU inference
python infer.py --model yolox_s.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result is shown below:

<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">

## YOLOX Python Interface

```
fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
```

Loads and initializes a YOLOX model, where model_file is an exported ONNX model.

**Parameters**

> * **model_file**(str): Path of the model file
> * **params_file**(str): Path of the parameters file; when the model format is ONNX, this parameter does not need to be set
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, which uses the default configuration
> * **model_format**(Frontend): Model format; ONNX by default

### predict Function

> ```
> YOLOX.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface: takes an image as input and outputs the detection result directly (a usage sketch follows below).
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data; note it must be in HWC, BGR format
> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Returns**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
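Both thresholds can be tightened or relaxed per call instead of relying on the defaults. A minimal sketch using the `predict` signature documented above:

```
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOX("yolox_s.onnx")
im = cv2.imread("000000014439.jpg")

# Keep only boxes above 0.3 confidence and use a stricter NMS IoU threshold
result = model.predict(im, conf_threshold=0.3, nms_iou_threshold=0.45)
```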
### Class Member Attributes

> > * **size**(list[int]): Modifies the resize target used during preprocessing; contains two integers representing [width, height]; defaults to [640, 640]
> > * **padding_value**(list[float]): Modifies the padding value used when resizing images; contains three floats representing the value of each channel; defaults to [114, 114, 114]
> > * **is_no_pad**(bool): Controls whether the image is resized without padding; `is_no_pad=True` means no padding is used; defaults to `is_no_pad=False`
> > * **is_mini_pad**(bool): Makes the resized width and height the closest values to `size` such that the number of padded pixels is divisible by the `stride` member; defaults to `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member; defaults to `stride=32`

## Other Documents

- [YOLOX model description](..)
- [YOLOX C++ deployment](../cpp)
- [Model prediction result description](../../../../../docs/api/vision_results/)