Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2025-10-07 09:31:35 +08:00)
[Doc] Add docs for ppocr ppseg examples (#1429)
* add docs for examples
* add english doc
* fix
* fix docs
162 examples/vision/detection/yolov5/c/README.md (Executable file)
@@ -0,0 +1,162 @@
English | [简体中文](README_CN.md)
# YOLOv5 C Deployment Example

This directory provides `infer.c`, which calls the C API to quickly deploy YOLOv5 on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
# 1. Download the YOLOv5 model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
```

The above commands work for Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## YOLOv5 C Interface

### RuntimeOption

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.
> * **gpu_id**(int): GPU device id

### Model

```c
FD_C_YOLOv5Wrapper* FD_C_CreateYOLOv5Wrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* runtime_option,
    const FD_C_ModelFormat model_format)
```

> Create a YOLOv5 model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path; when the model format is ONNX, this can be an empty string
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to manipulate the YOLOv5 object.

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): Image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): Pointer to the cv::Mat object which holds the image.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): Save path
> * **img**(FD_C_Mat): Pointer to the cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): Indicates whether the write succeeded

#### Prediction

```c
FD_C_Bool FD_C_YOLOv5WrapperPredict(
    __fd_take FD_C_YOLOv5Wrapper* fd_c_yolov5_wrapper, FD_C_Mat img,
    FD_C_DetectionResult* fd_c_detection_result)
```
>
> Predict an image, and generate the detection result.
>
> **Params**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to manipulate the YOLOv5 object.
> * **img**(FD_C_Mat): Pointer to the cv::Mat object, which can be obtained through the FD_C_Imread interface
> * **fd_c_detection_result**(FD_C_DetectionResult*): Detection result, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult

#### Result

```c
FD_C_Mat FD_C_VisDetection(FD_C_Mat im, FD_C_DetectionResult* fd_detection_result,
                           float score_threshold, int line_size, float font_size);
```
>
> Visualize detection results and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): Pointer to the input image
> * **fd_detection_result**(FD_C_DetectionResult*): Pointer to the C DetectionResult structure
> * **score_threshold**(float): Score threshold
> * **line_size**(int): Line width of the detection boxes
> * **font_size**(float): Font size
>
> **Return**
> * **vis_im**(FD_C_Mat): Pointer to the visualization image.
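
Putting the interfaces above together, a minimal end-to-end sketch might look as follows. Note that the header path `fastdeploy_capi/vision.h`, the enum value `FD_C_ModelFormat_ONNX`, and the result allocation/cleanup helpers (`FD_C_CreateDetectionResult`, `FD_C_Destroy*`) are assumptions based on typical FastDeploy C API usage and are not documented on this page; `infer.c` in this directory is the authoritative version.

```c
// Minimal usage sketch, not the shipped infer.c. Assumed (not documented above):
// the header path, FD_C_ModelFormat_ONNX, FD_C_CreateDetectionResult and the
// FD_C_Destroy* cleanup helpers.
#include <stdio.h>
#include <stdlib.h>
#include "fastdeploy_capi/vision.h"  // assumed header for the FastDeploy C API

int main(int argc, char* argv[]) {
  if (argc < 4) {
    printf("Usage: %s model_file image_file device(0=CPU,1=GPU)\n", argv[0]);
    return -1;
  }

  // 1. Configure the backend: CPU or GPU, selected by the third argument.
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  if (atoi(argv[3]) == 1) {
    FD_C_RuntimeOptionWrapperUseGpu(option, 0);  // gpu_id 0
  } else {
    FD_C_RuntimeOptionWrapperUseCpu(option);
  }

  // 2. Create the model. For an ONNX model, params_file and config_file are empty.
  FD_C_YOLOv5Wrapper* model = FD_C_CreateYOLOv5Wrapper(
      argv[1], "", "", option, FD_C_ModelFormat_ONNX);

  // 3. Read the image and run prediction.
  FD_C_Mat im = FD_C_Imread(argv[2]);
  FD_C_DetectionResult* result = FD_C_CreateDetectionResult();  // assumed helper
  if (!FD_C_YOLOv5WrapperPredict(model, im, result)) {
    printf("Failed to predict.\n");
    return -1;
  }

  // 4. Visualize the detections and save the image.
  FD_C_Mat vis_im = FD_C_VisDetection(im, result, 0.5f, 1, 0.5f);
  FD_C_Imwrite("vis_result.jpg", vis_im);
  printf("Visualized result saved to vis_result.jpg\n");

  // 5. Release resources (assumed cleanup helpers).
  FD_C_DestroyDetectionResult(result);
  FD_C_DestroyYOLOv5Wrapper(model);
  FD_C_DestroyRuntimeOptionWrapper(option);
  FD_C_DestroyMat(im);
  FD_C_DestroyMat(vis_im);
  return 0;
}
```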

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
165 examples/vision/detection/yolov5/c/README_CN.md (Normal file)
@@ -0,0 +1,165 @@
[English](README.md) | 简体中文
# YOLOv5 C Deployment Example

This directory provides `infer.c`, a sample that calls the C API to quickly deploy the YOLOv5 model on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
# 1. Download the officially converted yolov5 ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
```

The above commands only work for Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment before deploying:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv5 C API

### Configuration

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption configuration object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
> * **gpu_id**(int): GPU device id

### Model

```c
FD_C_YOLOv5Wrapper* FD_C_CreateYOLOv5Wrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* runtime_option,
    const FD_C_ModelFormat model_format)
```

> Create a YOLOv5 model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path; when the model format is ONNX, pass an empty string
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, i.e. the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to the YOLOv5 model object

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): Image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): Pointer to the image data (cv::Mat)

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): Path to save the image
> * **img**(FD_C_Mat): Pointer to the image data
>
> **Return**
>
> * **result**(FD_C_Bool): Indicates whether the operation succeeded

#### Predict function

```c
FD_C_Bool FD_C_YOLOv5WrapperPredict(
    __fd_take FD_C_YOLOv5Wrapper* fd_c_yolov5_wrapper, FD_C_Mat img,
    FD_C_DetectionResult* fd_c_detection_result)
```
>
> Model prediction interface: takes an input image and generates the detection result directly.
>
> **Params**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to the YOLOv5 model
> * **img**(FD_C_Mat): Pointer to the input image (cv::Mat), which can be obtained with FD_C_Imread
> * **fd_c_detection_result**(FD_C_DetectionResult*): Pointer to the detection result, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult

#### Prediction result

```c
FD_C_Mat FD_C_VisDetection(FD_C_Mat im, FD_C_DetectionResult* fd_detection_result,
                           float score_threshold, int line_size, float font_size);
```
>
> Visualize the detection result and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): Pointer to the input image
> * **fd_detection_result**(FD_C_DetectionResult*): Pointer to the FD_C_DetectionResult structure
> * **score_threshold**(float): Score threshold for detection
> * **line_size**(int): Line width of the detection boxes
> * **font_size**(float): Font size of the box labels
>
> **Return**
> * **vis_im**(FD_C_Mat): Pointer to the visualization image
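
As in README.md, here is a minimal end-to-end sketch chaining the interfaces above (CPU only, hard-coded paths). The header path, `FD_C_ModelFormat_ONNX`, `FD_C_CreateDetectionResult` and the `FD_C_Destroy*` helpers are assumptions that are not documented on this page; `infer.c` is the authoritative reference.

```c
// Minimal CPU-only sketch; see infer.c for the full demo.
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // assumed FastDeploy C API header

int main() {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(option);

  // ONNX model: params_file and config_file are empty strings.
  FD_C_YOLOv5Wrapper* model = FD_C_CreateYOLOv5Wrapper(
      "yolov5s.onnx", "", "", option, FD_C_ModelFormat_ONNX);

  FD_C_Mat im = FD_C_Imread("000000014439.jpg");
  FD_C_DetectionResult* result = FD_C_CreateDetectionResult();  // assumed helper
  if (!FD_C_YOLOv5WrapperPredict(model, im, result)) {
    printf("Prediction failed.\n");
    return -1;
  }

  FD_C_Mat vis_im = FD_C_VisDetection(im, result, 0.5f, 1, 0.5f);
  FD_C_Imwrite("vis_result.jpg", vis_im);

  // Cleanup (assumed helpers).
  FD_C_DestroyDetectionResult(result);
  FD_C_DestroyYOLOv5Wrapper(model);
  FD_C_DestroyRuntimeOptionWrapper(option);
  FD_C_DestroyMat(im);
  FD_C_DestroyMat(vis_im);
  return 0;
}
```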

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
98 examples/vision/detection/yolov5/csharp/README.md (Executable file)
@@ -0,0 +1,98 @@
English | [简体中文](README_CN.md)
# YOLOv5 C# Deployment Example

This directory provides `infer.cs`, which calls the C# API to quickly deploy YOLOv5 on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Please follow the instructions below to compile and test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package management tool nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

Add the nuget program to the system variable **PATH**.

## 2. Download the model and test image
> https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
> https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

## 3. Compile the example code

Open the `x64 Native Tools Command Prompt for VS 2019` command tool on Windows, cd to the yolov5 demo path and execute the following commands:

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more information about how to use the FastDeploy SDK to compile a project with Visual Studio 2019, please refer to:
- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../docs/en/faq/use_sdk_on_windows.md)

## 4. Execute the compiled program

fastdeploy.dll and related dynamic libraries are required by the program. FastDeploy provides a script to copy all required DLLs to your program path:

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp\build\Release
```

Then you can run the program and test the model with the image:

```shell
cd Release
infer_demo yolov5s.onnx 000000014439.jpg 0 # CPU
infer_demo yolov5s.onnx 000000014439.jpg 1 # GPU
```

## YOLOv5 C# Interface

### Model Class

```c#
fastdeploy.vision.detection.YOLOv5(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.ONNX)
```

> YOLOv5 initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path; when the model format is ONNX, this can be an empty string
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, i.e. the default configuration is used
>> * **model_format**(ModelFormat): Model format

#### Predict Function

```c#
fastdeploy.DetectionResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Takes an input image and returns the detection result directly.
>
> **Params**
>
>> * **im**(Mat): Input image in HWC, BGR format
>
> **Return**
>
>> * **result**(DetectionResult): Detection result, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult
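
A minimal usage sketch combining the constructor and `Predict` is shown below. The parameterless `RuntimeOption` constructor, its `UseCpu()`/`UseGpu()` methods, and the result's `ToString()` output are assumptions not documented in this README; `infer.cs` in this directory is the authoritative reference.

```c#
// Minimal usage sketch, not the shipped infer.cs. UseCpu()/UseGpu() and
// ToString() on the result are assumed, undocumented-here APIs.
using System;
using OpenCvSharp;

class Demo {
  static void Main(string[] args) {
    // Backend configuration: CPU here; UseGpu(0) would select GPU 0 (assumed API).
    var option = new fastdeploy.RuntimeOption();
    option.UseCpu();

    // ONNX model, so params_file is an empty string.
    var model = new fastdeploy.vision.detection.YOLOv5(
        "yolov5s.onnx", "", option, fastdeploy.ModelFormat.ONNX);

    // Read the test image with OpenCvSharp (HWC, BGR) and run prediction.
    Mat im = Cv2.ImRead("000000014439.jpg");
    var result = model.Predict(im);

    // Print the detection boxes and scores (assumes DetectionResult overrides ToString).
    Console.WriteLine(result.ToString());
  }
}
```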

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
98 examples/vision/detection/yolov5/csharp/README_CN.md (Normal file)
@@ -0,0 +1,98 @@
[English](README.md) | 简体中文
# YOLOv5 C# Deployment Example

This directory provides `infer.cs`, a sample that calls the C# API to quickly deploy the YOLOv5 model on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Follow the steps below on Windows to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package management tool nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

After downloading, add the program to the **PATH** environment variable.

## 2. Download the model file and test image
> https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
> https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

## 3. Compile the example code

The example code compiled in this document can be found in the extracted library. Compilation depends on an installation of VS 2019. **Open the x64 Native Tools Command Prompt for VS 2019 command tool on Windows** and start the compilation with the following commands:

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more details on building with Visual Studio 2019, such as creating an sln project or a CMake project, refer to the following documents:
- [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
- [Various ways to use the FastDeploy C++ library on Windows](../../../../../docs/cn/faq/use_sdk_on_windows_build.md)

## 4. Run the executable

Note that on Windows the libraries FastDeploy depends on must be copied to the directory of the executable, or the environment variables must be configured. FastDeploy provides a tool to quickly copy all dependent DLLs to the directory of the executable (the generated executable may sit one directory deeper under Release; here it is assumed to be directly under Release):

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp\build\Release
```

After the DLLs have been copied, prepare the model and image, then run the executable with the following commands:

```shell
cd Release
infer_demo yolov5s.onnx 000000014439.jpg 0 # CPU
infer_demo yolov5s.onnx 000000014439.jpg 1 # GPU
```

## YOLOv5 C# Interface

### Model

```c#
fastdeploy.vision.detection.YOLOv5(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.ONNX)
```

> YOLOv5 model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path; when the model format is ONNX, pass an empty string
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, i.e. the default configuration is used
>> * **model_format**(ModelFormat): Model format. ONNX format by default

#### Predict function

```c#
fastdeploy.DetectionResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface: takes an input image and outputs the detection result directly.
>
> **Params**
>
>> * **im**(Mat): Input image, which must be in HWC, BGR format
>
> **Return**
>
>> * **result**(DetectionResult): Detection result, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult
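
The same minimal usage sketch as in README.md applies here; the `RuntimeOption` methods `UseCpu()`/`UseGpu()` and the result's `ToString()` used below are assumptions not documented on this page, and `infer.cs` remains the authoritative reference.

```c#
// Minimal sketch; see infer.cs for the full demo. UseCpu()/UseGpu() and
// ToString() on the result are assumed, undocumented-here APIs.
using System;
using OpenCvSharp;

class Demo {
  static void Main() {
    var option = new fastdeploy.RuntimeOption();
    option.UseCpu();  // or option.UseGpu(0) for GPU inference (assumed API)

    var model = new fastdeploy.vision.detection.YOLOv5(
        "yolov5s.onnx", "", option, fastdeploy.ModelFormat.ONNX);

    Mat im = Cv2.ImRead("000000014439.jpg");  // HWC, BGR
    var result = model.Predict(im);           // DetectionResult
    Console.WriteLine(result.ToString());     // assumes ToString() is overridden
  }
}
```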

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)