# YOLOv5Seg C++ Deployment Example

This directory provides `infer.cc`, an example that quickly deploys YOLOv5Seg on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps: the software and hardware environment meets the requirements, and the FastDeploy precompiled library matching your environment has been downloaded.

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test the demo. To support this model, FastDeploy version 1.0.3 or above is required (x.x.x>=1.0.3).

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose a suitable version
# from the `FastDeploy precompiled libraries` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# 1. Download the officially converted YOLOv5Seg ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s-seg.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov5s-seg.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s-seg.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov5s-seg.onnx 000000014439.jpg 2
```

After running, the visualized result is shown in the figure below.

The above commands only apply to Linux or macOS. For how to use the SDK on Windows, please refer to:

## YOLOv5Seg C++ Interface

### YOLOv5Seg Class

```c++
fastdeploy::vision::detection::YOLOv5Seg(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

Loads and initializes the YOLOv5Seg model, where model_file is the exported model in ONNX format. A minimal construction sketch follows the parameter list below.

**Parameters**

* **model_file**(str): path to the model file
* **params_file**(str): path to the parameter file. When the model is in ONNX format, pass an empty string here
* **runtime_option**(RuntimeOption): backend inference configuration. If not set, the default configuration is used
* **model_format**(ModelFormat): model format. Defaults to ONNX
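
The following is a minimal construction sketch, not the shipped demo: the model path and backend choices are assumptions, and `infer.cc` in this directory remains the authoritative reference.

```c++
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // Backend configuration; comment these two lines out to stay on the default CPU backend.
  fastdeploy::RuntimeOption option;
  option.UseGpu();         // run on GPU
  option.UseTrtBackend();  // optional: accelerate with TensorRT on GPU

  // Model path is an assumption; point it at the yolov5s-seg.onnx downloaded above.
  auto model = fastdeploy::vision::detection::YOLOv5Seg(
      "yolov5s-seg.onnx", "", option, fastdeploy::ModelFormat::ONNX);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize YOLOv5Seg." << std::endl;
    return -1;
  }
  std::cout << "YOLOv5Seg initialized." << std::endl;
  return 0;
}
```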

### Predict Function

```c++
YOLOv5Seg::Predict(const cv::Mat& img, DetectionResult* result)
```

**Parameters**

* **img**: the input image; note that it must be in HWC, BGR format
* **result**: the detection result, including detection boxes and the confidence of each box. See the vision model prediction result documentation for a description of DetectionResult
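
To illustrate the full call sequence (read image, predict, visualize), here is a minimal sketch assuming the model and test image downloaded above; `fastdeploy::vision::VisDetection` is used for visualization here, but check the bundled `infer.cc` for the exact demo code.

```c++
#include <iostream>

#include <opencv2/opencv.hpp>

#include "fastdeploy/vision.h"

int main() {
  // Paths are assumptions; reuse the files downloaded in the build step above.
  auto model = fastdeploy::vision::detection::YOLOv5Seg("yolov5s-seg.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize YOLOv5Seg." << std::endl;
    return -1;
  }

  cv::Mat im = cv::imread("000000014439.jpg");  // HWC, BGR as loaded by OpenCV

  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;  // print boxes, scores, and labels

  // Draw the detection result onto the image and save the visualization.
  cv::Mat vis = fastdeploy::vision::VisDetection(im, res);
  cv::imwrite("vis_result.jpg", vis);
  return 0;
}
```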