From d8f312b0a00effdf7cba5538b68e1396de255664 Mon Sep 17 00:00:00 2001
From: ziqi-jin <67993288+ziqi-jin@users.noreply.github.com>
Date: Thu, 11 Aug 2022 10:03:24 +0800
Subject: [PATCH] Add docs for external models (#95)

* first commit for yolov7
* pybind for yolov7
* CPP README.md
* CPP README.md
* modified yolov7.cc
* README.md
* python file modify
* delete license in fastdeploy/
* repush the conflict part
* README.md modified
* README.md modified
* file path modified
* file path modified
* file path modified
* file path modified
* file path modified
* README modified
* README modified
* move some helpers to private
* add examples for yolov7
* api.md modified
* api.md modified
* api.md modified
* YOLOv7
* yolov7 release link
* yolov7 release link
* yolov7 release link
* copyright
* change some helpers to private
* change variables to const and fix documents.
* gitignore
* Transfer some funtions to private member of class
* Transfer some funtions to private member of class
* Merge from develop (#9)
* Fix compile problem in different python version (#26)
* fix some usage problem in linux
* Fix compile problem

Co-authored-by: root

* Add PaddleDetetion/PPYOLOE model support (#22)
* add ppdet/ppyoloe
* Add demo code and documents
* add convert processor to vision (#27)
* update .gitignore
* Added checking for cmake include dir
* fixed missing trt_backend option bug when init from trt
* remove un-need data layout and add pre-check for dtype
* changed RGB2BRG to BGR2RGB in ppcls model
* add model_zoo yolov6 c++/python demo
* fixed CMakeLists.txt typos
* update yolov6 cpp/README.md
* add yolox c++/pybind and model_zoo demo
* move some helpers to private
* fixed CMakeLists.txt typos
* add normalize with alpha and beta
* add version notes for yolov5/yolov6/yolox
* add copyright to yolov5.cc
* revert normalize
* fixed some bugs in yolox
* fixed examples/CMakeLists.txt to avoid conflicts
* add convert processor to vision
* format examples/CMakeLists summary
* Fix bug while the inference result is empty with YOLOv5 (#29)
* Add multi-label function for yolov5
* Update README.md Update doc
* Update fastdeploy_runtime.cc fix variable option.trt_max_shape wrong name
* Update runtime_option.md Update resnet model dynamic shape setting name from images to x
* Fix bug when inference result boxes are empty
* Delete detection.py

Co-authored-by: Jason
Co-authored-by: root
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor
* for merge
* Develop (#11)
* Yolor (#16)
* Develop (#11) (#12)
* Develop (#13)
* documents
* documents
* documents
* documents
* documents
* documents
* documents
* documents
* documents
* documents
* documents
* documents
* Develop (#14)
* add is_dynamic for YOLO series (#22)
* first commit test photo
* yolov7 doc
* yolov7 doc
* yolov7 doc
* yolov7 doc
* add yolov5 docs
* modify yolov5 doc
* first commit for retinaface
* first commit for retinaface
* firt commit for ultraface
* firt commit for ultraface
* firt commit for yolov5face
* firt commit for modnet and arcface
* firt commit for modnet and arcface
* first commit for partial_fc
* first commit for partial_fc
* first commit for yolox
* first commit for yolov6
* first commit for nano_det

Co-authored-by: Jason
Co-authored-by: root
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
---
 examples/vision/{detection => }/README.md     |   0
 .../vision/detection/nanodet_plus/README.md   |  22 ++++
 .../detection/nanodet_plus/cpp/CMakeLists.txt |  14 +++
 .../detection/nanodet_plus/cpp/README.md      |  85 ++++++++++++++
 .../detection/nanodet_plus/python/README.md   |  79 +++++++++++++
 examples/vision/detection/yolov5/README.md    |  28 +++++
 .../detection/yolov5/cpp/CMakeLists.txt       |  14 +++
 .../vision/detection/yolov5/cpp/README.md     |  85 ++++++++++++++
 examples/vision/detection/yolov5/cpp/infer.cc | 105 ++++++++++++++++++
 .../vision/detection/yolov5/python/README.md  |  79 +++++++++++++
 .../vision/detection/yolov5/python/infer.py   |  51 +++++++++
 examples/vision/detection/yolov6/README.md    |  23 ++++
 .../detection/yolov6/cpp/CMakeLists.txt       |  14 +++
 .../vision/detection/yolov6/cpp/README.md     |  85 ++++++++++++++
 .../vision/detection/yolov6/python/README.md  |  79 +++++++++++++
 examples/vision/detection/yolov7/README.md    |  21 ++--
 .../vision/detection/yolov7/cpp/README.md     |  26 +++--
 .../vision/detection/yolov7/python/README.md  |  26 +++--
 examples/vision/detection/yolox/README.md     |  23 ++++
 .../vision/detection/yolox/cpp/CMakeLists.txt |  14 +++
 examples/vision/detection/yolox/cpp/README.md |  85 ++++++++++++++
 .../vision/detection/yolox/python/README.md   |  79 +++++++++++++
 examples/vision/facedet/retinaface/README.md  |  54 +++++++++
 .../facedet/retinaface/cpp/CMakeLists.txt     |  14 +++
 .../vision/facedet/retinaface/cpp/README.md   |  85 ++++++++++++++
 .../facedet/retinaface/python/README.md       |  79 +++++++++++++
 examples/vision/facedet/ultraface/README.md   |  23 ++++
 .../facedet/ultraface/cpp/CMakeLists.txt      |  14 +++
 .../vision/facedet/ultraface/cpp/README.md    |  85 ++++++++++++++
 .../vision/facedet/ultraface/python/README.md |  79 +++++++++++++
 examples/vision/facedet/yolov5face/README.md  |  42 +++++++
 .../facedet/yolov5face/cpp/CMakeLists.txt     |  14 +++
 .../vision/facedet/yolov5face/cpp/README.md   |  85 ++++++++++++++
 .../facedet/yolov5face/python/README.md       |  79 +++++++++++++
 examples/vision/faceid/arcface/README.md      |  40 +++++++
 .../vision/faceid/arcface/cpp/CMakeLists.txt  |  14 +++
 examples/vision/faceid/arcface/cpp/README.md  |  85 ++++++++++++++
 .../vision/faceid/arcface/python/README.md    |  79 +++++++++++++
 examples/vision/faceid/partial_fc/README.md   |  37 ++++++
 .../faceid/partial_fc/cpp/CMakeLists.txt      |  14 +++
 .../vision/faceid/partial_fc/cpp/README.md    |  85 ++++++++++++++
 .../vision/faceid/partial_fc/python/README.md |  79 +++++++++++++
 examples/vision/matting/modnet/README.md      |  42 +++++++
 .../vision/matting/modnet/cpp/CMakeLists.txt  |  14 +++
 examples/vision/matting/modnet/cpp/README.md  |  85 ++++++++++++++
 .../vision/matting/modnet/python/README.md    |  79 +++++++++++++
 46 files changed, 2318 insertions(+), 25 deletions(-)
 rename examples/vision/{detection => }/README.md (100%)
 create mode 100644 examples/vision/detection/nanodet_plus/README.md
 create mode 100644 examples/vision/detection/nanodet_plus/cpp/CMakeLists.txt
 create mode 100644 examples/vision/detection/nanodet_plus/cpp/README.md
 create mode 100644 examples/vision/detection/nanodet_plus/python/README.md
 create mode 100644 examples/vision/detection/yolov5/README.md
 create mode 100644 examples/vision/detection/yolov5/cpp/CMakeLists.txt
 create mode 100644 examples/vision/detection/yolov5/cpp/README.md
 create mode 100644 examples/vision/detection/yolov5/cpp/infer.cc
 create mode 100644 examples/vision/detection/yolov5/python/README.md
 create mode 100644 examples/vision/detection/yolov5/python/infer.py
 create mode 100644 examples/vision/detection/yolov6/README.md
 create mode 100644 examples/vision/detection/yolov6/cpp/CMakeLists.txt
 create mode 100644 examples/vision/detection/yolov6/cpp/README.md
 create mode 100644 examples/vision/detection/yolov6/python/README.md
 create mode 100644 examples/vision/detection/yolox/README.md
 create mode 100644 examples/vision/detection/yolox/cpp/CMakeLists.txt
 create mode 100644 examples/vision/detection/yolox/cpp/README.md
 create mode 100644 examples/vision/detection/yolox/python/README.md
 create mode 100644 examples/vision/facedet/retinaface/README.md
 create mode 100644 examples/vision/facedet/retinaface/cpp/CMakeLists.txt
 create mode 100644 examples/vision/facedet/retinaface/cpp/README.md
 create mode 100644 examples/vision/facedet/retinaface/python/README.md
 create mode 100644 examples/vision/facedet/ultraface/README.md
 create mode 100644 examples/vision/facedet/ultraface/cpp/CMakeLists.txt
 create mode 100644 examples/vision/facedet/ultraface/cpp/README.md
 create mode 100644 examples/vision/facedet/ultraface/python/README.md
 create mode 100644 examples/vision/facedet/yolov5face/README.md
 create mode 100644 examples/vision/facedet/yolov5face/cpp/CMakeLists.txt
 create mode 100644 examples/vision/facedet/yolov5face/cpp/README.md
 create mode 100644 examples/vision/facedet/yolov5face/python/README.md
 create mode 100644 examples/vision/faceid/arcface/README.md
 create mode 100644 examples/vision/faceid/arcface/cpp/CMakeLists.txt
 create mode 100644 examples/vision/faceid/arcface/cpp/README.md
 create mode 100644 examples/vision/faceid/arcface/python/README.md
 create mode 100644 examples/vision/faceid/partial_fc/README.md
 create mode 100644 examples/vision/faceid/partial_fc/cpp/CMakeLists.txt
 create mode 100644 examples/vision/faceid/partial_fc/cpp/README.md
 create mode 100644 examples/vision/faceid/partial_fc/python/README.md
 create mode 100644 examples/vision/matting/modnet/README.md
 create mode 100644 examples/vision/matting/modnet/cpp/CMakeLists.txt
 create mode 100644 examples/vision/matting/modnet/cpp/README.md
 create mode 100644 examples/vision/matting/modnet/python/README.md

diff --git a/examples/vision/detection/README.md b/examples/vision/README.md
similarity index 100%
rename from examples/vision/detection/README.md
rename to examples/vision/README.md
diff --git a/examples/vision/detection/nanodet_plus/README.md b/examples/vision/detection/nanodet_plus/README.md
new file mode 100644
index 000000000..b3fd57463
--- /dev/null
+++ b/examples/vision/detection/nanodet_plus/README.md
@@ -0,0 +1,22 @@
+# NanoDetPlus Ready-to-Deploy Model
+
+## Model Version Notes
+
+- [NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)
+  - (1) The *.onnx files at [the link](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can be deployed directly
+
+
+## Download Pretrained ONNX Models
+
+For developers' convenience, the exported NanoDetPlus models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [NanoDetPlus_320](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx ) | 4.6MB | 27.0% |
+| [NanoDetPlus_320_sim](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320-sim.onnx) | 4.6MB | 27.0% |
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/detection/nanodet_plus/cpp/CMakeLists.txt b/examples/vision/detection/nanodet_plus/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/detection/nanodet_plus/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path to the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add the FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/detection/nanodet_plus/cpp/README.md b/examples/vision/detection/nanodet_plus/cpp/README.md
new file mode 100644
index 000000000..2dbee5e31
--- /dev/null
+++ b/examples/vision/detection/nanodet_plus/cpp/README.md
@@ -0,0 +1,85 @@
+# NanoDetPlus C++ Deployment Example
+
+This directory provides `infer.cc` to quickly finish deploying NanoDetPlus on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the prebuilt deployment library and samples code for your development environment, see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted NanoDetPlus model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# CPU inference
+./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 0
+# GPU inference
+./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 1
+# TensorRT inference on GPU
+./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 2
+```
+
+The visualized result is shown below:
+
+
+
+## NanoDetPlus C++ Interface
+
+### NanoDetPlus Class
+
+```
+fastdeploy::vision::detection::NanoDetPlus(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes a NanoDetPlus model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, pass an empty string
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> NanoDetPlus::Predict(cv::Mat* im, DetectionResult* result,
+>                      float conf_threshold = 0.25,
+>                      float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **im**: input image; note it must be in HWC, BGR format
+> > * **result**: detection result, including detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult
+> > * **conf_threshold**: confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(vector<float>): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=true` means no padding is used, default `is_no_pad=false`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=false`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/nanodet_plus/python/README.md b/examples/vision/detection/nanodet_plus/python/README.md
new file mode 100644
index 000000000..7a60a31c8
--- /dev/null
+++ b/examples/vision/detection/nanodet_plus/python/README.md
@@ -0,0 +1,79 @@
+# NanoDetPlus Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. The FastDeploy Python whl package is installed, see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides `infer.py` to quickly finish deploying NanoDetPlus on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the NanoDetPlus model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/detection/nanodet_plus/python/
+
+# CPU inference
+python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device cpu
+# GPU inference
+python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu
+# TensorRT inference on GPU
+python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu --use_trt True
+```
+
+The visualized result is shown below:
+
+
+
+## NanoDetPlus Python Interface
+
+```
+fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes a NanoDetPlus model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, this parameter does not need to be set
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+### predict Function
+
+> ```
+> NanoDetPlus.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): input data; note it must be in HWC, BGR format
+> > * **conf_threshold**(float): confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Returns**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
+
+### Class Member Properties
+
+> > * **size**(list[int]): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(list[float]): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=True` means no padding is used, default `is_no_pad=False`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=False`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
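+
+For a quick end-to-end reference, the following minimal sketch (illustrative only; it assumes the model and test image downloaded by the commands above) strings the interface together:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Load the ONNX model (params_file is not needed for ONNX)
+model = fd.vision.detection.NanoDetPlus("nanodet-plus-m_320.onnx")
+
+# Read the image in HWC, BGR format and run prediction
+im = cv2.imread("000000014439.jpg")
+result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)
+print(result)
+```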
+
+
+## Other Documents
+
+- [NanoDetPlus Model Introduction](..)
+- [NanoDetPlus C++ Deployment](../cpp)
+- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/yolov5/README.md b/examples/vision/detection/yolov5/README.md
new file mode 100644
index 000000000..30e638944
--- /dev/null
+++ b/examples/vision/detection/yolov5/README.md
@@ -0,0 +1,28 @@
+# YOLOv5 Ready-to-Deploy Model
+
+## Model Version Notes
+
+- [YOLOv5 v6.0](https://github.com/ultralytics/yolov5/releases/tag/v6.0)
+  - (1) The *.onnx files at [the link](https://github.com/ultralytics/yolov5/releases/tag/v6.0) can be deployed directly;
+  - (2) For YOLOv5 v6.0 models trained on your own data, export an ONNX file with `export.py` from [YOLOv5](https://github.com/ultralytics/yolov5), then complete the deployment.
+
+
+## Download Pretrained ONNX Models
+
+For developers' convenience, the exported YOLOv5 models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [YOLOv5n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n.onnx) | 1.9MB | 28.4% |
+| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx) | 7.2MB | 37.2% |
+| [YOLOv5m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5m.onnx) | 21.2MB | 45.2% |
+| [YOLOv5l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5l.onnx) | 46.5MB | 48.8% |
+| [YOLOv5x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5x.onnx) | 86.7MB | 50.7% |
+
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/detection/yolov5/cpp/CMakeLists.txt b/examples/vision/detection/yolov5/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/detection/yolov5/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path to the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add the FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/detection/yolov5/cpp/README.md b/examples/vision/detection/yolov5/cpp/README.md
new file mode 100644
index 000000000..feb44d13d
--- /dev/null
+++ b/examples/vision/detection/yolov5/cpp/README.md
@@ -0,0 +1,85 @@
+# YOLOv5 C++ Deployment Example
+
+This directory provides `infer.cc` to quickly finish deploying YOLOv5 on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the prebuilt deployment library and samples code for your development environment, see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted yolov5 model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# CPU inference
+./infer_demo yolov5s.onnx 000000014439.jpg 0
+# GPU inference
+./infer_demo yolov5s.onnx 000000014439.jpg 1
+# TensorRT inference on GPU
+./infer_demo yolov5s.onnx 000000014439.jpg 2
+```
+
+The visualized result is shown below:
+
+
+
+## YOLOv5 C++ Interface
+
+### YOLOv5 Class
+
+```
+fastdeploy::vision::detection::YOLOv5(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes a YOLOv5 model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, pass an empty string
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> YOLOv5::Predict(cv::Mat* im, DetectionResult* result,
+>                 float conf_threshold = 0.25,
+>                 float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **im**: input image; note it must be in HWC, BGR format
+> > * **result**: detection result, including detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult
+> > * **conf_threshold**: confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(vector<float>): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=true` means no padding is used, default `is_no_pad=false`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=false`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/yolov5/cpp/infer.cc b/examples/vision/detection/yolov5/cpp/infer.cc
new file mode 100644
index 000000000..ef3e47ea1
--- /dev/null
+++ b/examples/vision/detection/yolov5/cpp/infer.cc
@@ -0,0 +1,105 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#include "fastdeploy/vision.h"
+
+void CpuInfer(const std::string& model_file, const std::string& image_file) {
+  auto model = fastdeploy::vision::detection::YOLOv5(model_file);
+  if (!model.Initialized()) {
+    std::cerr << "Failed to initialize." << std::endl;
+    return;
+  }
+
+  auto im = cv::imread(image_file);
+  auto im_bak = im.clone();
+
+  fastdeploy::vision::DetectionResult res;
+  if (!model.Predict(&im, &res)) {
+    std::cerr << "Failed to predict." << std::endl;
+    return;
+  }
+
+  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
+  cv::imwrite("vis_result.jpg", vis_im);
+  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
+}
+
+void GpuInfer(const std::string& model_file, const std::string& image_file) {
+  auto option = fastdeploy::RuntimeOption();
+  option.UseGpu();
+  auto model = fastdeploy::vision::detection::YOLOv5(model_file, "", option);
+  if (!model.Initialized()) {
+    std::cerr << "Failed to initialize." << std::endl;
+    return;
+  }
+
+  auto im = cv::imread(image_file);
+  auto im_bak = im.clone();
+
+  fastdeploy::vision::DetectionResult res;
+  if (!model.Predict(&im, &res)) {
+    std::cerr << "Failed to predict." << std::endl;
+    return;
+  }
+
+  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
+  cv::imwrite("vis_result.jpg", vis_im);
+  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
+}
+
+void TrtInfer(const std::string& model_file, const std::string& image_file) {
+  auto option = fastdeploy::RuntimeOption();
+  option.UseGpu();
+  option.UseTrtBackend();
+  option.SetTrtInputShape("images", {1, 3, 640, 640});
+  auto model = fastdeploy::vision::detection::YOLOv5(model_file, "", option);
+  if (!model.Initialized()) {
+    std::cerr << "Failed to initialize." << std::endl;
+    return;
+  }
+
+  auto im = cv::imread(image_file);
+  auto im_bak = im.clone();
+
+  fastdeploy::vision::DetectionResult res;
+  if (!model.Predict(&im, &res)) {
+    std::cerr << "Failed to predict." << std::endl;
+    return;
+  }
+
+  auto vis_im = fastdeploy::vision::Visualize::VisDetection(im_bak, res);
+  cv::imwrite("vis_result.jpg", vis_im);
+  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
+}
+
+int main(int argc, char* argv[]) {
+  if (argc < 4) {
+    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
+                 "e.g. ./infer_model ./yolov5.onnx ./test.jpeg 0"
+              << std::endl;
+    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
+                 "with gpu; 2: run with gpu and use tensorrt backend."
+              << std::endl;
+    return -1;
+  }
+
+  if (std::atoi(argv[3]) == 0) {
+    CpuInfer(argv[1], argv[2]);
+  } else if (std::atoi(argv[3]) == 1) {
+    GpuInfer(argv[1], argv[2]);
+  } else if (std::atoi(argv[3]) == 2) {
+    TrtInfer(argv[1], argv[2]);
+  }
+  return 0;
+}
diff --git a/examples/vision/detection/yolov5/python/README.md b/examples/vision/detection/yolov5/python/README.md
new file mode 100644
index 000000000..57cdba44c
--- /dev/null
+++ b/examples/vision/detection/yolov5/python/README.md
@@ -0,0 +1,79 @@
+# YOLOv5 Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. The FastDeploy Python whl package is installed, see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides `infer.py` to quickly finish deploying YOLOv5 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the yolov5 model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/detection/yolov5/python/
+
+# CPU inference
+python infer.py --model yolov5s.onnx --image 000000014439.jpg --device cpu
+# GPU inference
+python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu
+# TensorRT inference on GPU
+python infer.py --model yolov5s.onnx --image 000000014439.jpg --device gpu --use_trt True
+```
+
+The visualized result is shown below:
+
+
+
+## YOLOv5 Python Interface
+
+```
+fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes a YOLOv5 model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, this parameter does not need to be set
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+### predict Function
+
+> ```
+> YOLOv5.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): input data; note it must be in HWC, BGR format
+> > * **conf_threshold**(float): confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Returns**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
+
+### Class Member Properties
+
+> > * **size**(list[int]): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(list[float]): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=True` means no padding is used, default `is_no_pad=False`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=False`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
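+
+These preprocessing members can be adjusted on the loaded model before calling predict. A minimal sketch (illustrative only, not part of the shipped demo):
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.detection.YOLOv5("yolov5s.onnx")
+
+# Letterbox to 416x416 instead of the default 640x640, keeping the
+# padded pixels divisible by the stride of 32
+model.size = [416, 416]
+model.is_mini_pad = True
+
+result = model.predict(cv2.imread("000000014439.jpg"))
+```
+
+Note that changing `size` only takes effect if the exported ONNX model has dynamic input shapes.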
+
+
+## Other Documents
+
+- [YOLOv5 Model Introduction](..)
+- [YOLOv5 C++ Deployment](../cpp)
+- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/yolov5/python/infer.py b/examples/vision/detection/yolov5/python/infer.py
new file mode 100644
index 000000000..3f7a91f99
--- /dev/null
+++ b/examples/vision/detection/yolov5/python/infer.py
@@ -0,0 +1,51 @@
+import fastdeploy as fd
+import cv2
+
+
+def parse_arguments():
+    import argparse
+    import ast
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "--model", required=True, help="Path of yolov5 onnx model.")
+    parser.add_argument(
+        "--image", required=True, help="Path of test image file.")
+    parser.add_argument(
+        "--device",
+        type=str,
+        default='cpu',
+        help="Type of inference device, support 'cpu' or 'gpu'.")
+    parser.add_argument(
+        "--use_trt",
+        type=ast.literal_eval,
+        default=False,
+        help="Whether to use tensorrt.")
+    return parser.parse_args()
+
+
+def build_option(args):
+    option = fd.RuntimeOption()
+
+    if args.device.lower() == "gpu":
+        option.use_gpu()
+
+    if args.use_trt:
+        option.use_trt_backend()
+        option.set_trt_input_shape("images", [1, 3, 640, 640])
+    return option
+
+
+args = parse_arguments()
+
+# Configure the runtime and load the model
+runtime_option = build_option(args)
+model = fd.vision.detection.YOLOv5(args.model, runtime_option=runtime_option)
+
+# Predict detection results for the image
+im = cv2.imread(args.image)
+result = model.predict(im)
+
+# Visualize the prediction results
+vis_im = fd.vision.vis_detection(im, result)
+cv2.imwrite("visualized_result.jpg", vis_im)
+print("Visualized result saved in ./visualized_result.jpg")
diff --git a/examples/vision/detection/yolov6/README.md b/examples/vision/detection/yolov6/README.md
new file mode 100644
index 000000000..878e530bd
--- /dev/null
+++ b/examples/vision/detection/yolov6/README.md
@@ -0,0 +1,23 @@
+# YOLOv6 Ready-to-Deploy Model
+
+## Model Version Notes
+
+- [YOLOv6 v0.1.0](https://github.com/meituan/YOLOv6/releases/download/0.1.0)
+  - (1) The *.onnx files at [the link](https://github.com/meituan/YOLOv6/releases/download/0.1.0) can be deployed directly;
+
+
+## Download Pretrained ONNX Models
+
+For developers' convenience, the exported YOLOv6 models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx) | 66MB | 43.1% |
+| [YOLOv6s_640](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s-640x640.onnx) | 66MB | 43.1% |
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/detection/yolov6/cpp/CMakeLists.txt b/examples/vision/detection/yolov6/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/detection/yolov6/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path to the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add the FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/detection/yolov6/cpp/README.md b/examples/vision/detection/yolov6/cpp/README.md
new file mode 100644
index 000000000..5a73f8b55
--- /dev/null
+++ b/examples/vision/detection/yolov6/cpp/README.md
@@ -0,0 +1,85 @@
+# YOLOv6 C++ Deployment Example
+
+This directory provides `infer.cc` to quickly finish deploying YOLOv6 on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the prebuilt deployment library and samples code for your development environment, see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted YOLOv6 model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# CPU inference
+./infer_demo yolov6s.onnx 000000014439.jpg 0
+# GPU inference
+./infer_demo yolov6s.onnx 000000014439.jpg 1
+# TensorRT inference on GPU
+./infer_demo yolov6s.onnx 000000014439.jpg 2
+```
+
+The visualized result is shown below:
+
+
+
+## YOLOv6 C++ Interface
+
+### YOLOv6 Class
+
+```
+fastdeploy::vision::detection::YOLOv6(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes a YOLOv6 model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, pass an empty string
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> YOLOv6::Predict(cv::Mat* im, DetectionResult* result,
+>                 float conf_threshold = 0.25,
+>                 float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **im**: input image; note it must be in HWC, BGR format
+> > * **result**: detection result, including detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult
+> > * **conf_threshold**: confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(vector<float>): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=true` means no padding is used, default `is_no_pad=false`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=false`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/yolov6/python/README.md b/examples/vision/detection/yolov6/python/README.md
new file mode 100644
index 000000000..35c35b208
--- /dev/null
+++ b/examples/vision/detection/yolov6/python/README.md
@@ -0,0 +1,79 @@
+# YOLOv6 Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. The FastDeploy Python whl package is installed, see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides `infer.py` to quickly finish deploying YOLOv6 on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the YOLOv6 model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/detection/yolov6/python/
+
+# CPU inference
+python infer.py --model yolov6s.onnx --image 000000014439.jpg --device cpu
+# GPU inference
+python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu
+# TensorRT inference on GPU
+python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use_trt True
+```
+
+The visualized result is shown below:
+
+
+
+## YOLOv6 Python Interface
+
+```
+fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes a YOLOv6 model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, this parameter does not need to be set
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+### predict Function
+
+> ```
+> YOLOv6.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): input data; note it must be in HWC, BGR format
+> > * **conf_threshold**(float): confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Returns**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
+
+### Class Member Properties
+
+> > * **size**(list[int]): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(list[float]): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=True` means no padding is used, default `is_no_pad=False`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=False`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
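+
+For reference, a minimal sketch of wiring up the TensorRT backend through RuntimeOption, mirroring what `infer.py` does with `--use_trt True` (illustrative only; the input tensor name "images" is an assumption here, verify it for your exported model):
+
+```python
+import cv2
+import fastdeploy as fd
+
+option = fd.RuntimeOption()
+option.use_gpu()
+option.use_trt_backend()
+# Fix the input shape so the TensorRT engine can be built for it
+option.set_trt_input_shape("images", [1, 3, 640, 640])
+
+model = fd.vision.detection.YOLOv6("yolov6s.onnx", runtime_option=option)
+result = model.predict(cv2.imread("000000014439.jpg"))
+```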
+
+
+## Other Documents
+
+- [YOLOv6 Model Introduction](..)
+- [YOLOv6 C++ Deployment](../cpp)
+- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/yolov7/README.md b/examples/vision/detection/yolov7/README.md
index 5f4848075..857bdda31 100644
--- a/examples/vision/detection/yolov7/README.md
+++ b/examples/vision/detection/yolov7/README.md
@@ -3,13 +3,14 @@
 ## Model Version Notes
 
 - [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1)
-  - (1) The .pt models at [YOLOv7 0.1](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed directly after the [Export ONNX Model](#导出ONNX模型) step; the .onnx, .trt and .pose models are not yet supported for deployment;
-  - (2) For YOLOv7 0.1 models trained on your own data, follow [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) to complete the deployment.
+  - (1) The *.pt files at [the link](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [Export ONNX Model](#导出ONNX模型) step;
+  - (2) The *.onnx, *.trt and *.pose models at [the link](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) are not supported for deployment;
+  - (3) For YOLOv7 0.1 models trained on your own data, follow [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) to complete the deployment.
 
 ## Export ONNX Model
 
 ```
-# Download the yolov7 model file, or prepare your own trained YOLOv7 model file
+# Download the yolov7 model file
 wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
 
 # Export the onnx file (Tips: corresponds to the YOLOv7 release v0.1 code)
@@ -18,18 +19,24 @@
 python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
 
 # If your code version supports exporting ONNX files with NMS, use the following command (please do not use "--end2end" for now; deployment of ONNX models with NMS will be supported later)
 python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
 
-# Move the onnx file to the examples directory
-cp PATH/TO/yolov7.onnx PATH/TO/FastDeploy/examples/vision/detextion/yolov7/
+# Move the onnx file to the demo directory
+cp PATH/TO/yolov7.onnx PATH/TO/model_zoo/vision/yolov7/
 ```
 
-## Download Pretrained Models
+## Download Pretrained ONNX Models
 
 For developers' convenience, the exported YOLOv7 models are provided below and can be downloaded and used directly.
 
 | Model | Size | Accuracy |
 |:---------------------------------------------------------------- |:----- |:----- |
 | [YOLOv7](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx) | 141MB | 51.4% |
-| [YOLOv7-x] | 10MB | 51.4% |
+| [YOLOv7x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7x.onnx) | 273MB | 53.1% |
+| [YOLOv7-w6](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-w6.onnx) | 269MB | 54.9% |
+| [YOLOv7-e6](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-e6.onnx) | 372MB | 56.0% |
+| [YOLOv7-d6](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-d6.onnx) | 511MB | 56.6% |
+| [YOLOv7-e6e](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7-e6e.onnx) | 579MB | 56.8% |
+
+
 
 ## Detailed Deployment Documents
diff --git a/examples/vision/detection/yolov7/cpp/README.md b/examples/vision/detection/yolov7/cpp/README.md
index 2dab72beb..c67689570 100644
--- a/examples/vision/detection/yolov7/cpp/README.md
+++ b/examples/vision/detection/yolov7/cpp/README.md
@@ -5,7 +5,7 @@
 Before deployment, confirm the following two steps:
 
 - 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
-- 2. Download the prebuilt deployment library and samples code for your development environment, see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuilt_libraries.md)
+- 2. Download the prebuilt deployment library and samples code for your development environment, see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)
 
 Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test:
 
@@ -19,17 +19,21 @@
 make -j
 
 # Download the officially converted yolov7 model file and a test image
 wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
-wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000087038.jpg
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
 
 
 # CPU inference
-./infer_demo yolov7.onnx 000000087038.jpg 0
+./infer_demo yolov7.onnx 000000014439.jpg 0
 # GPU inference
-./infer_demo yolov7.onnx 000000087038.jpg 1
+./infer_demo yolov7.onnx 000000014439.jpg 1
 # TensorRT inference on GPU
-./infer_demo yolov7.onnx 000000087038.jpg 2
+./infer_demo yolov7.onnx 000000014439.jpg 2
 ```
 
+The visualized result is shown below:
+
+
+
 ## YOLOv7 C++ Interface
 
 ### YOLOv7 Class
@@ -58,11 +62,11 @@
 >                 float conf_threshold = 0.25,
 >                 float nms_iou_threshold = 0.5)
 > ```
-> 
+>
 > Model prediction interface: takes an input image and directly outputs detection results.
-> 
+>
 > **Parameters**
-> 
+>
 > > * **im**: input image; note it must be in HWC, BGR format
 > > * **result**: detection result, including detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult
 > > * **conf_threshold**: confidence threshold for filtering detection boxes
@@ -70,7 +74,11 @@
 
 ### Class Member Variables
 
-> > * **size**(vector): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **size**(vector<int>): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(vector<float>): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=true` means no padding is used, default `is_no_pad=false`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=false`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
 
 - [Model Introduction](../../)
 - [Python Deployment](../python)
diff --git a/examples/vision/detection/yolov7/python/README.md b/examples/vision/detection/yolov7/python/README.md
index c45d8a416..b3a4f12a1 100644
--- a/examples/vision/detection/yolov7/python/README.md
+++ b/examples/vision/detection/yolov7/python/README.md
@@ -18,15 +18,17 @@
 git clone https://github.com/PaddlePaddle/FastDeploy.git
 cd examples/vison/detection/yolov7/python/
 
 # CPU inference
-python infer.py --model yolov7.onnx --image 000000087038.jpg --device cpu
+python infer.py --model yolov7.onnx --image 000000014439.jpg --device cpu
 # GPU inference
-python infer.py --model yolov7.onnx --image 000000087038.jpg --device gpu
-# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes a while; please be patient)
-python infer.py --model yolov7.onnx --image 000000087038.jpg --device gpu --use_trt True
+python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu
+# TensorRT inference on GPU
+python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_trt True
 ```
 
 The visualized result is shown below:
 
+
+
 ## YOLOv7 Python Interface
 
 ```
@@ -47,22 +49,28 @@
 > ```
 > YOLOv7.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
 > ```
-> 
+>
 > Model prediction interface: takes an input image and directly outputs detection results.
-> 
+>
 > **Parameters**
-> 
+>
 > > * **image_data**(np.ndarray): input data; note it must be in HWC, BGR format
 > > * **conf_threshold**(float): confidence threshold for filtering detection boxes
 > > * **nms_iou_threshold**(float): IoU threshold used during NMS
 
 > **Returns**
-> 
+>
 > > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
 
 ### Class Member Properties
 
-> > * **size**(list | tuple): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **size**(list[int]): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(list[float]): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=True` means no padding is used, default `is_no_pad=False`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=False`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
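+
+A short sketch of how the returned result can be consumed (illustrative only; the field names `boxes`, `scores` and `label_ids` are taken from the DetectionResult description in the vision results document):
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.detection.YOLOv7("yolov7.onnx")
+result = model.predict(cv2.imread("000000014439.jpg"))
+
+# Each detection pairs one box with one score and one label id
+for box, score, label_id in zip(result.boxes, result.scores, result.label_ids):
+    x_min, y_min, x_max, y_max = box
+    print(f"label={label_id} score={score:.2f} "
+          f"box=({x_min:.1f}, {y_min:.1f}, {x_max:.1f}, {y_max:.1f})")
+```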
+
+
 ## Other Documents
diff --git a/examples/vision/detection/yolox/README.md b/examples/vision/detection/yolox/README.md
new file mode 100644
index 000000000..72dc51be1
--- /dev/null
+++ b/examples/vision/detection/yolox/README.md
@@ -0,0 +1,23 @@
+# YOLOX Ready-to-Deploy Model
+
+## Model Version Notes
+
+- [YOLOX v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0)
+  - (1) The *.onnx files at [the link](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0) can be deployed directly;
+
+
+## Download Pretrained ONNX Models
+
+For developers' convenience, the exported YOLOX models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [YOLOX-s](https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx) | 35MB | 40.5% |
+
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/detection/yolox/cpp/CMakeLists.txt b/examples/vision/detection/yolox/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/detection/yolox/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path to the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add the FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/detection/yolox/cpp/README.md b/examples/vision/detection/yolox/cpp/README.md
new file mode 100644
index 000000000..abe761126
--- /dev/null
+++ b/examples/vision/detection/yolox/cpp/README.md
@@ -0,0 +1,85 @@
+# YOLOX C++ Deployment Example
+
+This directory provides `infer.cc` to quickly finish deploying YOLOX on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the prebuilt deployment library and samples code for your development environment, see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted YOLOX model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# CPU inference
+./infer_demo yolox_s.onnx 000000014439.jpg 0
+# GPU inference
+./infer_demo yolox_s.onnx 000000014439.jpg 1
+# TensorRT inference on GPU
+./infer_demo yolox_s.onnx 000000014439.jpg 2
+```
+
+The visualized result is shown below:
+
+
+
+## YOLOX C++ Interface
+
+### YOLOX Class
+
+```
+fastdeploy::vision::detection::YOLOX(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes a YOLOX model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, pass an empty string
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> YOLOX::Predict(cv::Mat* im, DetectionResult* result,
+>                float conf_threshold = 0.25,
+>                float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **im**: input image; note it must be in HWC, BGR format
+> > * **result**: detection result, including detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of DetectionResult
+> > * **conf_threshold**: confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(vector<float>): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=true` means no padding is used, default `is_no_pad=false`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=false`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/yolox/python/README.md b/examples/vision/detection/yolox/python/README.md
new file mode 100644
index 000000000..7a73132a2
--- /dev/null
+++ b/examples/vision/detection/yolox/python/README.md
@@ -0,0 +1,79 @@
+# YOLOX Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The hardware and software environment meets the requirements, see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. The FastDeploy Python whl package is installed, see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides `infer.py` to quickly finish deploying YOLOX on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the YOLOX model file and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/detection/yolox/python/
+
+# CPU inference
+python infer.py --model yolox_s.onnx --image 000000014439.jpg --device cpu
+# GPU inference
+python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu
+# TensorRT inference on GPU
+python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu --use_trt True
+```
+
+The visualized result is shown below:
+
+
+
+## YOLOX Python Interface
+
+```
+fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes a YOLOX model, where model_file is an exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): path to the model file
+> * **params_file**(str): path to the parameters file; when the model format is ONNX, this parameter does not need to be set
+> * **runtime_option**(RuntimeOption): backend inference configuration; None uses the default configuration
+> * **model_format**(Frontend): model format, ONNX by default
+
+### predict Function
+
+> ```
+> YOLOX.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and directly outputs detection results.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): input data; note it must be in HWC, BGR format
+> > * **conf_threshold**(float): confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Returns**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
+
+### Class Member Properties
+
+> > * **size**(list[int]): target size used by resize during preprocessing; contains two integers representing [width, height], default [640, 640]
+> > * **padding_value**(list[float]): value used to pad the image during resize; contains three floats representing the values of the three channels, default [114, 114, 114]
+> > * **is_no_pad**(bool): whether the image is resized without padding; `is_no_pad=True` means no padding is used, default `is_no_pad=False`
+> > * **is_mini_pad**(bool): whether to set the resized width and height to the values closest to the `size` member that keep the padded pixels divisible by the `stride` member, default `is_mini_pad=False`
+> > * **stride**(int): used together with the `is_mini_pad` member, default `stride=32`
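+
+Both thresholds can be tuned at call time without reloading the model; a minimal sketch (illustrative only, using the files downloaded above):
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.detection.YOLOX("yolox_s.onnx")
+im = cv2.imread("000000014439.jpg")
+
+# A higher conf_threshold keeps fewer, more confident boxes;
+# a lower nms_iou_threshold suppresses overlapping boxes more aggressively.
+strict_result = model.predict(im, conf_threshold=0.5, nms_iou_threshold=0.4)
+default_result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)
+```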
FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/quick_start/install.md) + +本目录下提供`infer.py`快速完成YOLOX在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成 + +``` +#下载YOLOX模型文件和测试图片 +wget https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s.onnx +wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg + + +#下载部署示例代码 +git clone https://github.com/PaddlePaddle/FastDeploy.git +cd examples/vison/detection/yolox/python/ + +# CPU推理 +python infer.py --model yolox_s.onnx --image 000000014439.jpg --device cpu +# GPU推理 +python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu +# GPU上使用TensorRT推理 +python infer.py --model yolox_s.onnx --image 000000014439.jpg --device gpu --use_trt True +``` + +运行完成可视化结果如下图所示 + + + +## YOLOX Python接口 + +``` +fastdeploy.vision.detection.YOLOX(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX) +``` + +YOLOX模型加载和初始化,其中model_file为导出的ONNX模型格式 + +**参数** + +> * **model_file**(str): 模型文件路径 +> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定 +> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置 +> * **model_format**(Frontend): 模型格式,默认为ONNX + +### predict函数 + +> ``` +> YOLOX.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5) +> ``` +> +> 模型预测结口,输入图像直接输出检测结果。 +> +> **参数** +> +> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式 +> > * **conf_threshold**(float): 检测框置信度过滤阈值 +> > * **nms_iou_threshold**(float): NMS处理过程中iou阈值 + +> **返回** +> +> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/) + +### 类成员属性 + +> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640] +> > * **padding_value**(list[float]): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114] +> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=True` 表示不使用填充的方式,默认值为`is_no_pad=False` +> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=False` +> > * **stride**(int): 配合`stris_mini_padide`成员变量使用, 默认值为`stride=32` + + + +## 其它文档 + +- [YOLOX 模型介绍](..) +- [YOLOX C++部署](../cpp) +- [模型预测结果说明](../../../../../docs/api/vision_results/) diff --git a/examples/vision/facedet/retinaface/README.md b/examples/vision/facedet/retinaface/README.md new file mode 100644 index 000000000..b545b98d2 --- /dev/null +++ b/examples/vision/facedet/retinaface/README.md @@ -0,0 +1,54 @@ +# RetinaFace准备部署模型 + +## 模型版本说明 + +- [RetinaFace CommitID:b984b4b](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b) + - (1)[链接中](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; + - (2)开发者基于自己数据训练的RetinaFace CommitID:b984b4b模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 + +## 导出ONNX模型 + +自动下载的模型文件是我们事先转换好的,如果您需要从RetinaFace官方repo导出ONNX,请参考以下步骤。 + +* 下载官方仓库并 +```bash +git clone https://github.com/biubug6/Pytorch_Retinaface.git +``` +* 下载预训练权重并放在weights文件夹 +```text +./weights/ + mobilenet0.25_Final.pth + mobilenetV1X0.25_pretrain.tar + Resnet50_Final.pth +``` +* 运行convert_to_onnx.py导出ONNX模型文件 +```bash +PYTHONPATH=. python convert_to_onnx.py --trained_model ./weights/mobilenet0.25_Final.pth --network mobile0.25 --long_side 640 --cpu +PYTHONPATH=. python convert_to_onnx.py --trained_model ./weights/Resnet50_Final.pth --network resnet50 --long_side 640 --cpu +``` +注意:需要先对convert_to_onnx.py脚本中的--long_side参数增加类型约束,type=int. 
+* 使用onnxsim对模型进行简化 +```bash +onnxsim FaceDetector.onnx Pytorch_RetinaFace_mobile0.25-640-640.onnx # mobilenet +onnxsim FaceDetector.onnx Pytorch_RetinaFace_resnet50-640-640.onnx # resnet50 +``` + +## 下载预训练ONNX模型 + +为了方便开发者的测试,下面提供了RetinaFace导出的各系列模型,开发者可直接下载使用。 + +| 模型 | 大小 | 精度 | +|:---------------------------------------------------------------- |:----- |:----- | +| [RetinaFace_mobile0.25-640](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_mobile0.25-640-640.onnx) | 1.7MB | - | +| [RetinaFace_mobile0.25-720](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_mobile0.25-720-1080.onnx) | 1.7MB | -| +| [RetinaFace_resnet50-640](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_resnet50-720-1080.onnx) | 105MB | - | +| [RetinaFace_resnet50-720](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_resnet50-640-640.onnx) | 105MB | - | + + + + + +## 详细部署文档 + +- [Python部署](python) +- [C++部署](cpp) diff --git a/examples/vision/facedet/retinaface/cpp/CMakeLists.txt b/examples/vision/facedet/retinaface/cpp/CMakeLists.txt new file mode 100644 index 000000000..fea1a2888 --- /dev/null +++ b/examples/vision/facedet/retinaface/cpp/CMakeLists.txt @@ -0,0 +1,14 @@ +PROJECT(infer_demo C CXX) +CMAKE_MINIMUM_REQUIRED (VERSION 3.12) + +# 指定下载解压后的fastdeploy库路径 +option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.") + +include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake) + +# 添加FastDeploy依赖头文件 +include_directories(${FASTDEPLOY_INCS}) + +add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc) +# 添加FastDeploy库依赖 +target_link_libraries(infer_demo ${FASTDEPLOY_LIBS}) diff --git a/examples/vision/facedet/retinaface/cpp/README.md b/examples/vision/facedet/retinaface/cpp/README.md new file mode 100644 index 000000000..dc3665707 --- /dev/null +++ b/examples/vision/facedet/retinaface/cpp/README.md @@ -0,0 +1,85 @@ +# RetinaFace C++部署示例 + +本目录下提供`infer.cc`快速完成RetinaFace在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。 + +在部署前,需确认以下两个步骤 + +- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/quick_start/requirements.md) +- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/compile/prebuild_libraries.md) + +以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试 + +``` +mkdir build +cd build +wget https://xxx.tgz +tar xvf fastdeploy-linux-x64-0.2.0.tgz +cmake .. 
+
+## Download Pre-trained ONNX Models
+
+For developers' convenience, the RetinaFace models exported as above are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [RetinaFace_mobile0.25-640](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_mobile0.25-640-640.onnx) | 1.7MB | - |
+| [RetinaFace_mobile0.25-720](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_mobile0.25-720-1080.onnx) | 1.7MB | - |
+| [RetinaFace_resnet50-640](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_resnet50-640-640.onnx) | 105MB | - |
+| [RetinaFace_resnet50-720](https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_resnet50-720-1080.onnx) | 105MB | - |
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/facedet/retinaface/cpp/CMakeLists.txt b/examples/vision/facedet/retinaface/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/facedet/retinaface/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path of the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/facedet/retinaface/cpp/README.md b/examples/vision/facedet/retinaface/cpp/README.md
new file mode 100644
index 000000000..dc3665707
--- /dev/null
+++ b/examples/vision/facedet/retinaface/cpp/README.md
@@ -0,0 +1,85 @@
+# RetinaFace C++ Deployment Example
+
+This directory provides an example in `infer.cc` that shows how to quickly deploy RetinaFace on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the pre-built deployment library and sample code matching your development environment. See [FastDeploy Pre-built Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted RetinaFace model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_mobile0.25-640-640.onnx
+wget todo
+
+
+# CPU inference
+./infer_demo Pytorch_RetinaFace_mobile0.25-640-640.onnx todo 0
+# GPU inference
+./infer_demo Pytorch_RetinaFace_mobile0.25-640-640.onnx todo 1
+# TensorRT inference on GPU
+./infer_demo Pytorch_RetinaFace_mobile0.25-640-640.onnx todo 2
+```
+
+The visualized result is shown in the image below.
+
+
+
+## RetinaFace C++ Interface
+
+### RetinaFace Class
+
+```
+fastdeploy::vision::facedet::RetinaFace(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes the RetinaFace model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. By default the default configuration is used
+> * **model_format**(Frontend): Model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> RetinaFace::Predict(cv::Mat* im, DetectionResult* result,
+>                     float conf_threshold = 0.25,
+>                     float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **im**: Input image; must be in HWC, BGR format
+> > * **result**: Detection result, including the detection boxes and the confidence of each box. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
+> > * **conf_threshold**: Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(vector<float>): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=true` disables padding. Default `is_no_pad=false`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
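+
+To illustrate the `is_mini_pad` / `stride` rule above, the following Python sketch reproduces the padding arithmetic as described (an illustration only, not the library's actual implementation):
+
+```python
+import math
+
+def mini_pad_shape(h, w, size=(640, 640), stride=32):
+    # Scale to fit inside [width, height] while keeping the aspect ratio.
+    r = min(size[1] / h, size[0] / w)
+    new_h, new_w = round(h * r), round(w * r)
+    # Pad each dimension only up to the next multiple of stride.
+    pad_h = math.ceil(new_h / stride) * stride - new_h
+    pad_w = math.ceil(new_w / stride) * stride - new_w
+    return new_h + pad_h, new_w + pad_w
+
+# A 720x1280 input is padded to (384, 640) instead of all the way to (640, 640).
+print(mini_pad_shape(720, 1280))
+```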
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/facedet/retinaface/python/README.md b/examples/vision/facedet/retinaface/python/README.md
new file mode 100644
index 000000000..b8c325135
--- /dev/null
+++ b/examples/vision/facedet/retinaface/python/README.md
@@ -0,0 +1,79 @@
+# RetinaFace Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Install the FastDeploy Python wheel package. See [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides an example in `infer.py` that shows how to quickly deploy RetinaFace on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the RetinaFace model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/Pytorch_RetinaFace_mobile0.25-640-640.onnx
+wget todo
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/facedet/retinaface/python/
+
+# CPU inference
+python infer.py --model Pytorch_RetinaFace_mobile0.25-640-640.onnx --image todo --device cpu
+# GPU inference
+python infer.py --model Pytorch_RetinaFace_mobile0.25-640-640.onnx --image todo --device gpu
+# TensorRT inference on GPU
+python infer.py --model Pytorch_RetinaFace_mobile0.25-640-640.onnx --image todo --device gpu --use_trt True
+```
+
+The visualized result is shown in the image below.
+
+
+
+## RetinaFace Python Interface
+
+```
+fastdeploy.vision.facedet.RetinaFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes the RetinaFace model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Not required when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default None applies the default configuration
+> * **model_format**(Frontend): Model format, ONNX by default
+
+### predict Function
+
+> ```
+> RetinaFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): Input data; must be in HWC, BGR format
+> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Return**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` struct. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
+
+### Class Member Properties
+
+> > * **size**(list[int]): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(list[float]): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=True` disables padding. Default `is_no_pad=False`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
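+
+Putting the interface above together, a minimal end-to-end sketch (it assumes the model file downloaded by the script above is present; the image name is a placeholder):
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Load the exported ONNX model with the default runtime configuration.
+model = fd.vision.facedet.RetinaFace("Pytorch_RetinaFace_mobile0.25-640-640.onnx")
+
+im = cv2.imread("test_face.jpg")  # HWC, BGR, as predict requires
+result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)
+print(result)  # DetectionResult: boxes plus per-box confidences
+```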
+
+
+## Other Documents
+
+- [RetinaFace Model Introduction](..)
+- [RetinaFace C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/facedet/ultraface/README.md b/examples/vision/facedet/ultraface/README.md
new file mode 100644
index 000000000..f1dcca0b9
--- /dev/null
+++ b/examples/vision/facedet/ultraface/README.md
@@ -0,0 +1,23 @@
+# UltraFace Ready-to-Deploy Model
+
+## Model Version
+
+- [UltraFace CommitID:dffdddd](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd)
+  - (1) The *.onnx files from the [link](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd) can be downloaded directly, or you can download the models from the links below and deploy them
+
+
+## Download Pre-trained ONNX Models
+
+For developers' convenience, the exported UltraFace models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [RFB-320](https://bj.bcebos.com/paddlehub/fastdeploy/version-RFB-320.onnx) | 1.3MB | - |
+| [RFB-320-sim](https://bj.bcebos.com/paddlehub/fastdeploy/version-RFB-320-sim.onnx) | 1.2MB | - |
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/facedet/ultraface/cpp/CMakeLists.txt b/examples/vision/facedet/ultraface/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/facedet/ultraface/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path of the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/facedet/ultraface/cpp/README.md b/examples/vision/facedet/ultraface/cpp/README.md
new file mode 100644
index 000000000..1eae69c0f
--- /dev/null
+++ b/examples/vision/facedet/ultraface/cpp/README.md
@@ -0,0 +1,85 @@
+# UltraFace C++ Deployment Example
+
+This directory provides an example in `infer.cc` that shows how to quickly deploy UltraFace on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the pre-built deployment library and sample code matching your development environment. See [FastDeploy Pre-built Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted UltraFace model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/version-RFB-320.onnx
+wget todo
+
+
+# CPU inference
+./infer_demo version-RFB-320.onnx todo 0
+# GPU inference
+./infer_demo version-RFB-320.onnx todo 1
+# TensorRT inference on GPU
+./infer_demo version-RFB-320.onnx todo 2
+```
+
+The visualized result is shown in the image below.
+
+
+
+## UltraFace C++ Interface
+
+### UltraFace Class
+
+```
+fastdeploy::vision::facedet::UltraFace(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes the UltraFace model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. By default the default configuration is used
+> * **model_format**(Frontend): Model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> UltraFace::Predict(cv::Mat* im, DetectionResult* result,
+>                    float conf_threshold = 0.25,
+>                    float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **im**: Input image; must be in HWC, BGR format
+> > * **result**: Detection result, including the detection boxes and the confidence of each box. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
+> > * **conf_threshold**: Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(vector<float>): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=true` disables padding. Default `is_no_pad=false`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/facedet/ultraface/python/README.md b/examples/vision/facedet/ultraface/python/README.md
new file mode 100644
index 000000000..88026ecff
--- /dev/null
+++ b/examples/vision/facedet/ultraface/python/README.md
@@ -0,0 +1,79 @@
+# UltraFace Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Install the FastDeploy Python wheel package. See [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides an example in `infer.py` that shows how to quickly deploy UltraFace on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the UltraFace model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/version-RFB-320.onnx
+wget todo
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/facedet/ultraface/python/
+
+# CPU inference
+python infer.py --model version-RFB-320.onnx --image todo --device cpu
+# GPU inference
+python infer.py --model version-RFB-320.onnx --image todo --device gpu
+# TensorRT inference on GPU
+python infer.py --model version-RFB-320.onnx --image todo --device gpu --use_trt True
+```
+
+The visualized result is shown in the image below.
+
+
+
+## UltraFace Python Interface
+
+```
+fastdeploy.vision.facedet.UltraFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes the UltraFace model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Not required when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default None applies the default configuration
+> * **model_format**(Frontend): Model format, ONNX by default
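+
+The `runtime_option` parameter is what the `--device` and `--use_trt` flags of `infer.py` configure under the hood. A hedged sketch of doing the same by hand (the method names reflect our understanding of the FastDeploy Python API and are not taken from this document):
+
+```python
+import fastdeploy as fd
+
+option = fd.RuntimeOption()
+option.use_gpu(0)         # run on GPU 0; skip this line to stay on CPU
+option.use_trt_backend()  # switch the GPU backend to TensorRT
+
+model = fd.vision.facedet.UltraFace("version-RFB-320.onnx",
+                                    runtime_option=option)
+```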
+
+### predict Function
+
+> ```
+> UltraFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): Input data; must be in HWC, BGR format
+> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Return**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` struct. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
+
+### Class Member Properties
+
+> > * **size**(list[int]): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(list[float]): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=True` disables padding. Default `is_no_pad=False`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+
+
+## Other Documents
+
+- [UltraFace Model Introduction](..)
+- [UltraFace C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/facedet/yolov5face/README.md b/examples/vision/facedet/yolov5face/README.md
new file mode 100644
index 000000000..34828b193
--- /dev/null
+++ b/examples/vision/facedet/yolov5face/README.md
@@ -0,0 +1,42 @@
+# YOLOv5Face Ready-to-Deploy Model
+
+## Model Version
+
+- [YOLOv5Face CommitID:4fd1ead](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead)
+  - (1) The *.pt files from the [link](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead) can be deployed after the [Export the ONNX Model](#export-the-onnx-model) steps;
+  - (2) YOLOv5Face CommitID:4fd1ead models trained on your own data can likewise be deployed after [Export the ONNX Model](#export-the-onnx-model).
+
+## Export the ONNX Model
+
+Visit the official [YOLOv5Face](https://github.com/deepcam-cn/yolov5-face) GitHub repository, follow the instructions to download and install it, download the `yolov5s-face.pt` model, and use `export.py` to obtain the `onnx` file.
+
+* Download the yolov5face model file
+  ```
+  Link: https://pan.baidu.com/s/1fyzLxZYx7Ja1_PCIWRhxbw  Code: eq0q
+  https://drive.google.com/file/d/1zxaHeLDyID9YU4-hqK7KNepXIwbTkRIO/view?usp=sharing
+  ```
+
+* Export the onnx file
+  ```bash
+  PYTHONPATH=. python export.py --weights weights/yolov5s-face.pt --img_size 640 640 --batch_size 1
+  ```
+* Simplify the onnx model (optional)
+  ```bash
+  onnxsim yolov5s-face.onnx yolov5s-face.onnx
+  ```
+
+## Download Pre-trained ONNX Models
+
+For developers' convenience, the exported YOLOv5Face models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [YOLOv5s-Face](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s-face.onnx) | 30MB | - |
+| [YOLOv5s-Face-bak](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5face-s-640x640.bak.onnx) | 30MB | - |
+| [YOLOv5l-Face](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5face-l-640x640.onnx) | 181MB | - |
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/facedet/yolov5face/cpp/CMakeLists.txt b/examples/vision/facedet/yolov5face/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/facedet/yolov5face/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path of the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/facedet/yolov5face/cpp/README.md b/examples/vision/facedet/yolov5face/cpp/README.md
new file mode 100644
index 000000000..ec0b48ad0
--- /dev/null
+++ b/examples/vision/facedet/yolov5face/cpp/README.md
@@ -0,0 +1,85 @@
+# YOLOv5Face C++ Deployment Example
+
+This directory provides an example in `infer.cc` that shows how to quickly deploy YOLOv5Face on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the pre-built deployment library and sample code matching your development environment. See [FastDeploy Pre-built Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted YOLOv5Face model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s-face.onnx
+wget todo
+
+
+# CPU inference
+./infer_demo yolov5s-face.onnx todo 0
+# GPU inference
+./infer_demo yolov5s-face.onnx todo 1
+# TensorRT inference on GPU
+./infer_demo yolov5s-face.onnx todo 2
+```
+
+The visualized result is shown in the image below.
+
+
+
+## YOLOv5Face C++ Interface
+
+### YOLOv5Face Class
+
+```
+fastdeploy::vision::facedet::YOLOv5Face(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes the YOLOv5Face model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. By default the default configuration is used
+> * **model_format**(Frontend): Model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> YOLOv5Face::Predict(cv::Mat* im, DetectionResult* result,
+>                     float conf_threshold = 0.25,
+>                     float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **im**: Input image; must be in HWC, BGR format
+> > * **result**: Detection result, including the detection boxes and the confidence of each box. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
+> > * **conf_threshold**: Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(vector<float>): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=true` disables padding. Default `is_no_pad=false`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/facedet/yolov5face/python/README.md b/examples/vision/facedet/yolov5face/python/README.md
new file mode 100644
index 000000000..2fc847f00
--- /dev/null
+++ b/examples/vision/facedet/yolov5face/python/README.md
@@ -0,0 +1,79 @@
+# YOLOv5Face Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Install the FastDeploy Python wheel package. See [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides an example in `infer.py` that shows how to quickly deploy YOLOv5Face on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the YOLOv5Face model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s-face.onnx
+wget todo
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/facedet/yolov5face/python/
+
+# CPU inference
+python infer.py --model yolov5s-face.onnx --image todo --device cpu
+# GPU inference
+python infer.py --model yolov5s-face.onnx --image todo --device gpu
+# TensorRT inference on GPU
+python infer.py --model yolov5s-face.onnx --image todo --device gpu --use_trt True
+```
+
+The visualized result is shown in the image below.
+
+
+
+## YOLOv5Face Python Interface
+
+```
+fastdeploy.vision.facedet.YOLOv5Face(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes the YOLOv5Face model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Not required when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default None applies the default configuration
+> * **model_format**(Frontend): Model format, ONNX by default
+
+### predict Function
+
+> ```
+> YOLOv5Face.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): Input data; must be in HWC, BGR format
+> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Return**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` struct. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
+
+### Class Member Properties
+
+> > * **size**(list[int]): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(list[float]): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=True` disables padding. Default `is_no_pad=False`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
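+
+The member properties above can be adjusted on a loaded model before calling predict. A minimal sketch (the image name is a placeholder, and the threshold values are illustrative):
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.facedet.YOLOv5Face("yolov5s-face.onnx")
+model.size = [320, 320]  # documented member property: preprocessing resize target
+
+im = cv2.imread("faces.jpg")  # HWC, BGR
+result = model.predict(im, conf_threshold=0.3, nms_iou_threshold=0.45)
+print(result)
+```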
+
+
+## Other Documents
+
+- [YOLOv5Face Model Introduction](..)
+- [YOLOv5Face C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/faceid/arcface/README.md b/examples/vision/faceid/arcface/README.md
new file mode 100644
index 000000000..cb9305402
--- /dev/null
+++ b/examples/vision/faceid/arcface/README.md
@@ -0,0 +1,40 @@
+# ArcFace Ready-to-Deploy Model
+
+## Model Version
+
+- [ArcFace CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5)
+  - (1) The *.pt files from the [link](https://github.com/deepinsight/insightface/commit/babb9a5) can be deployed after the [Export the ONNX Model](#export-the-onnx-model) steps;
+  - (2) ArcFace CommitID:babb9a5 models trained on your own data can likewise be deployed after [Export the ONNX Model](#export-the-onnx-model).
+
+## Export the ONNX Model
+
+Visit the official [ArcFace](https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch) GitHub repository, follow the instructions to download and install it, download the pt model file, and use `torch2onnx.py` to obtain the `onnx` file.
+
+* Download the ArcFace model file
+  ```
+  Link: https://pan.baidu.com/share/init?surl=CL-l4zWqsI1oDuEEYVhj-g  Code: e8pw
+  ```
+
+* Export the onnx file
+  ```bash
+  PYTHONPATH=. python ./torch2onnx.py ms1mv3_arcface_r100_fp16/backbone.pth --output ms1mv3_arcface_r100.onnx --network r100 --simplify 1
+  ```
+
+## Download Pre-trained ONNX Models
+
+
+
+todo
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/faceid/arcface/cpp/CMakeLists.txt b/examples/vision/faceid/arcface/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/faceid/arcface/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path of the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/faceid/arcface/cpp/README.md b/examples/vision/faceid/arcface/cpp/README.md
new file mode 100644
index 000000000..505d144bb
--- /dev/null
+++ b/examples/vision/faceid/arcface/cpp/README.md
@@ -0,0 +1,85 @@
+# ArcFace C++ Deployment Example
+
+This directory provides an example in `infer.cc` that shows how to quickly deploy ArcFace on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the pre-built deployment library and sample code matching your development environment. See [FastDeploy Pre-built Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted ArcFace model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r34.onnx
+wget todo
+
+
+# CPU inference
+./infer_demo ms1mv3_arcface_r34.onnx todo 0
+# GPU inference
+./infer_demo ms1mv3_arcface_r34.onnx todo 1
+# TensorRT inference on GPU
+./infer_demo ms1mv3_arcface_r34.onnx todo 2
+```
+
+The visualized result is shown in the image below.
+
+
+
+## ArcFace C++ Interface
+
+### ArcFace Class
+
+```
+fastdeploy::vision::faceid::ArcFace(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes the ArcFace model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. By default the default configuration is used
+> * **model_format**(Frontend): Model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> ArcFace::Predict(cv::Mat* im, DetectionResult* result,
+>                  float conf_threshold = 0.25,
+>                  float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **im**: Input image; must be in HWC, BGR format
+> > * **result**: Detection result, including the detection boxes and the confidence of each box. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
+> > * **conf_threshold**: Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(vector<float>): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=true` disables padding. Default `is_no_pad=false`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/faceid/arcface/python/README.md b/examples/vision/faceid/arcface/python/README.md
new file mode 100644
index 000000000..034b93049
--- /dev/null
+++ b/examples/vision/faceid/arcface/python/README.md
@@ -0,0 +1,79 @@
+# ArcFace Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Install the FastDeploy Python wheel package. See [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides an example in `infer.py` that shows how to quickly deploy ArcFace on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the ArcFace model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r34.onnx
+wget todo
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/faceid/arcface/python/
+
+# CPU inference
+python infer.py --model ms1mv3_arcface_r34.onnx --image todo --device cpu
+# GPU inference
+python infer.py --model ms1mv3_arcface_r34.onnx --image todo --device gpu
+# TensorRT inference on GPU
+python infer.py --model ms1mv3_arcface_r34.onnx --image todo --device gpu --use_trt True
+```
+
+The visualized result is shown in the image below.
+
+
+
+## ArcFace Python Interface
+
+```
+fastdeploy.vision.faceid.ArcFace(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes the ArcFace model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Not required when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default None applies the default configuration
+> * **model_format**(Frontend): Model format, ONNX by default
+
+### predict Function
+
+> ```
+> ArcFace.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): Input data; must be in HWC, BGR format
+> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Return**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` struct. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
+
+### Class Member Properties
+
+> > * **size**(list[int]): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(list[float]): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=True` disables padding. Default `is_no_pad=False`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
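+
+Following the interface documented above, a minimal usage sketch (the image name is a placeholder; the thresholds keep their documented defaults):
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.faceid.ArcFace("ms1mv3_arcface_r34.onnx")
+
+im = cv2.imread("face.jpg")  # HWC, BGR
+result = model.predict(im)
+print(result)
+```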
+
+
+## Other Documents
+
+- [ArcFace Model Introduction](..)
+- [ArcFace C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/faceid/partial_fc/README.md b/examples/vision/faceid/partial_fc/README.md
new file mode 100644
index 000000000..ca03ba2e7
--- /dev/null
+++ b/examples/vision/faceid/partial_fc/README.md
@@ -0,0 +1,37 @@
+# PartialFC Ready-to-Deploy Model
+
+
+
+## Download Pre-trained ONNX Models
+
+For developers' convenience, the exported PartialFC models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [partial_fc_glint360k_r50](https://bj.bcebos.com/paddlehub/fastdeploy/partial_fc_glint360k_r50.onnx) | 167MB | - |
+| [partial_fc_glint360k_r100](https://bj.bcebos.com/paddlehub/fastdeploy/partial_fc_glint360k_r100.onnx) | 249MB | - |
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/faceid/partial_fc/cpp/CMakeLists.txt b/examples/vision/faceid/partial_fc/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/faceid/partial_fc/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path of the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/faceid/partial_fc/cpp/README.md b/examples/vision/faceid/partial_fc/cpp/README.md
new file mode 100644
index 000000000..20a2f0eb6
--- /dev/null
+++ b/examples/vision/faceid/partial_fc/cpp/README.md
@@ -0,0 +1,85 @@
+# PartialFC C++ Deployment Example
+
+This directory provides an example in `infer.cc` that shows how to quickly deploy PartialFC on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the pre-built deployment library and sample code matching your development environment. See [FastDeploy Pre-built Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted PartialFC model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/partial_fc_glint360k_r50.onnx
+wget todo
+
+
+# CPU inference
+./infer_demo partial_fc_glint360k_r50.onnx todo 0
+# GPU inference
+./infer_demo partial_fc_glint360k_r50.onnx todo 1
+# TensorRT inference on GPU
+./infer_demo partial_fc_glint360k_r50.onnx todo 2
+```
+
+The visualized result is shown in the image below.
+
+
+
+## PartialFC C++ Interface
+
+### PartialFC Class
+
+```
+fastdeploy::vision::faceid::PartialFC(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes the PartialFC model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. By default the default configuration is used
+> * **model_format**(Frontend): Model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> PartialFC::Predict(cv::Mat* im, DetectionResult* result,
+>                    float conf_threshold = 0.25,
+>                    float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **im**: Input image; must be in HWC, BGR format
+> > * **result**: Detection result, including the detection boxes and the confidence of each box. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
+> > * **conf_threshold**: Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(vector<float>): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=true` disables padding. Default `is_no_pad=false`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/faceid/partial_fc/python/README.md b/examples/vision/faceid/partial_fc/python/README.md
new file mode 100644
index 000000000..6189e99c4
--- /dev/null
+++ b/examples/vision/faceid/partial_fc/python/README.md
@@ -0,0 +1,79 @@
+# PartialFC Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Install the FastDeploy Python wheel package. See [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides an example in `infer.py` that shows how to quickly deploy PartialFC on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the PartialFC model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/partial_fc_glint360k_r50.onnx
+wget todo
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/faceid/partial_fc/python/
+
+# CPU inference
+python infer.py --model partial_fc_glint360k_r50.onnx --image todo --device cpu
+# GPU inference
+python infer.py --model partial_fc_glint360k_r50.onnx --image todo --device gpu
+# TensorRT inference on GPU
+python infer.py --model partial_fc_glint360k_r50.onnx --image todo --device gpu --use_trt True
+```
+
+The visualized result is shown in the image below.
+
+
+
+## PartialFC Python Interface
+
+```
+fastdeploy.vision.faceid.PartialFC(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes the PartialFC model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Not required when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default None applies the default configuration
+> * **model_format**(Frontend): Model format, ONNX by default
+
+### predict Function
+
+> ```
+> PartialFC.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): Input data; must be in HWC, BGR format
+> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Return**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` struct. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
+
+### Class Member Properties
+
+> > * **size**(list[int]): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(list[float]): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=True` disables padding. Default `is_no_pad=False`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+
+
+## Other Documents
+
+- [PartialFC Model Introduction](..)
+- [PartialFC C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/matting/modnet/README.md b/examples/vision/matting/modnet/README.md
new file mode 100644
index 000000000..fc3f7c008
--- /dev/null
+++ b/examples/vision/matting/modnet/README.md
@@ -0,0 +1,42 @@
+# MODNet Ready-to-Deploy Model
+
+## Model Version
+
+- [MODNet CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4)
+  - (1) The *.pt files from the [link](https://github.com/ZHKKKe/MODNet/commit/28165a4) can be deployed after the [Export the ONNX Model](#export-the-onnx-model) steps;
+  - (2) MODNet CommitID:28165a4 models trained on your own data can likewise be deployed after [Export the ONNX Model](#export-the-onnx-model).
+
+## Export the ONNX Model
+
+Visit the official [MODNet](https://github.com/ZHKKKe/MODNet) GitHub repository, follow the instructions to download and install it, download the model file, and use `onnx/export_onnx.py` to obtain the `onnx` file.
+
+* Export the onnx file
+  ```bash
+  python -m onnx.export_onnx \
+      --ckpt-path=pretrained/modnet_photographic_portrait_matting.ckpt \
+      --output-path=pretrained/modnet_photographic_portrait_matting.onnx
+  ```
+
+## Download Pre-trained ONNX Models
+
+For developers' convenience, the exported MODNet models are provided below and can be downloaded and used directly.
+
+| Model | Size | Accuracy |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [modnet_photographic](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic__portrait_matting.onnx) | 25MB | - |
+| [modnet_webcam](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting.onnx) | 25MB | - |
+| [modnet_photographic_256](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting-256x256.onnx) | 25MB | - |
+| [modnet_webcam_256](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting-256x256.onnx) | 25MB | - |
+| [modnet_photographic_512](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting-512x512.onnx) | 25MB | - |
+| [modnet_webcam_512](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting-512x512.onnx) | 25MB | - |
+| [modnet_photographic_1024](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting-1024x1024.onnx) | 25MB | - |
+| [modnet_webcam_1024](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting-1024x1024.onnx) | 25MB | - |
+
+
+
+## Detailed Deployment Documents
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/matting/modnet/cpp/CMakeLists.txt b/examples/vision/matting/modnet/cpp/CMakeLists.txt
new file mode 100644
index 000000000..fea1a2888
--- /dev/null
+++ b/examples/vision/matting/modnet/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
+PROJECT(infer_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# Path of the downloaded and extracted FastDeploy SDK
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# Add FastDeploy header dependencies
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link the FastDeploy libraries
+target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
diff --git a/examples/vision/matting/modnet/cpp/README.md b/examples/vision/matting/modnet/cpp/README.md
new file mode 100644
index 000000000..82226ae4c
--- /dev/null
+++ b/examples/vision/matting/modnet/cpp/README.md
@@ -0,0 +1,85 @@
+# MODNet C++ Deployment Example
+
+This directory provides an example in `infer.cc` that shows how to quickly deploy MODNet on CPU/GPU, and on GPU with TensorRT acceleration.
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Download the pre-built deployment library and sample code matching your development environment. See [FastDeploy Pre-built Libraries](../../../../../docs/compile/prebuild_libraries.md)
+
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test:
+
+```
+mkdir build
+cd build
+wget https://xxx.tgz
+tar xvf fastdeploy-linux-x64-0.2.0.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
+make -j
+
+# Download the officially converted MODNet model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic__portrait_matting.onnx
+wget todo
+
+
+# CPU inference
+./infer_demo modnet_photographic__portrait_matting.onnx todo 0
+# GPU inference
+./infer_demo modnet_photographic__portrait_matting.onnx todo 1
+# TensorRT inference on GPU
+./infer_demo modnet_photographic__portrait_matting.onnx todo 2
+```
+
+The visualized result is shown in the image below.
+
+
+
+## MODNet C++ Interface
+
+### MODNet Class
+
+```
+fastdeploy::vision::matting::MODNet(
+        const string& model_file,
+        const string& params_file = "",
+        const RuntimeOption& runtime_option = RuntimeOption(),
+        const Frontend& model_format = Frontend::ONNX)
+```
+
+Loads and initializes the MODNet model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. By default the default configuration is used
+> * **model_format**(Frontend): Model format, ONNX by default
+
+#### Predict Function
+
+> ```
+> MODNet::Predict(cv::Mat* im, DetectionResult* result,
+>                 float conf_threshold = 0.25,
+>                 float nms_iou_threshold = 0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **im**: Input image; must be in HWC, BGR format
+> > * **result**: Detection result, including the detection boxes and the confidence of each box. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
+> > * **conf_threshold**: Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**: IoU threshold used during NMS
+
+### Class Member Variables
+
+> > * **size**(vector<int>): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(vector<float>): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=true` disables padding. Default `is_no_pad=false`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=false`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
+
+- [Model Introduction](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/matting/modnet/python/README.md b/examples/vision/matting/modnet/python/README.md
new file mode 100644
index 000000000..d7b1149f8
--- /dev/null
+++ b/examples/vision/matting/modnet/python/README.md
@@ -0,0 +1,79 @@
+# MODNet Python Deployment Example
+
+Before deployment, confirm the following two steps:
+
+- 1. The software and hardware environment meets the requirements. See [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
+- 2. Install the FastDeploy Python wheel package. See [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)
+
+This directory provides an example in `infer.py` that shows how to quickly deploy MODNet on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment:
+
+```
+# Download the MODNet model file and test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic__portrait_matting.onnx
+wget todo
+
+
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/matting/modnet/python/
+
+# CPU inference
+python infer.py --model modnet_photographic__portrait_matting.onnx --image todo --device cpu
+# GPU inference
+python infer.py --model modnet_photographic__portrait_matting.onnx --image todo --device gpu
+# TensorRT inference on GPU
+python infer.py --model modnet_photographic__portrait_matting.onnx --image todo --device gpu --use_trt True
+```
+
+The visualized result is shown in the image below.
+
+
+
+## MODNet Python Interface
+
+```
+fastdeploy.vision.matting.MODNet(model_file, params_file=None, runtime_option=None, model_format=Frontend.ONNX)
+```
+
+Loads and initializes the MODNet model, where model_file is the exported ONNX model.
+
+**Parameters**
+
+> * **model_file**(str): Path to the model file
+> * **params_file**(str): Path to the parameters file. Not required when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default None applies the default configuration
+> * **model_format**(Frontend): Model format, ONNX by default
+
+### predict Function
+
+> ```
+> MODNet.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
+> ```
+>
+> Model prediction interface: takes an input image and returns the detection result directly.
+>
+> **Parameters**
+>
+> > * **image_data**(np.ndarray): Input data; must be in HWC, BGR format
+> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
+> > * **nms_iou_threshold**(float): IoU threshold used during NMS
+
+> **Return**
+>
+> > Returns a `fastdeploy.vision.DetectionResult` struct. See [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
+
+### Class Member Properties
+
+> > * **size**(list[int]): Target size used by resize during preprocessing; two integers representing [width, height]. Default [640, 640]
+> > * **padding_value**(list[float]): Padding value applied when resizing; three floats, one per channel. Default [114, 114, 114]
+> > * **is_no_pad**(bool): Whether to resize the image without padding; `is_no_pad=True` disables padding. Default `is_no_pad=False`
+> > * **is_mini_pad**(bool): Resize so that the output width and height are the values closest to `size` for which the padded pixels are divisible by the `stride` member. Default `is_mini_pad=False`
+> > * **stride**(int): Used together with the `is_mini_pad` member. Default `stride=32`
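+
+A minimal usage sketch following the interface above (the portrait image name is a placeholder):
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.matting.MODNet(
+    "modnet_photographic__portrait_matting.onnx")
+
+im = cv2.imread("portrait.jpg")  # HWC, BGR
+result = model.predict(im)
+print(result)
+```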
+
+
+## Other Documents
+
+- [MODNet Model Introduction](..)
+- [MODNet C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)