[Doc] Add docs for ppocr ppseg examples (#1429)

* add docs for examples

* add english doc

* fix

* fix docs
Author: chenjian
Date: 2023-02-28 20:13:01 +08:00
Committed by: GitHub
Parent: 010f12db3d
Commit: c8bcada1a2
22 changed files with 2983 additions and 5 deletions


@@ -0,0 +1,162 @@
English | [简体中文](README_CN.md)
# YOLOv5 C Deployment Example
This directory provides `infer.c`, an example that uses the C API to quickly deploy YOLOv5 on CPU/GPU.
Two steps need to be confirmed before deployment:
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
```bash
# 1. Download the officially converted YOLOv5 ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
```
The above commands only work on Linux and macOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## YOLOv5 C Interface
### RuntimeOption
```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```
> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.
```c
void FD_C_RuntimeOptionWrapperUseCpu(
FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```
> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.
```c
void FD_C_RuntimeOptionWrapperUseGpu(
FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
int gpu_id)
```
> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.
> * **gpu_id**(int): GPU ID
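As a quick orientation, here is a minimal sketch of wiring these two calls together before creating the model. The header path `fastdeploy_capi/vision.h` is an assumption and may differ in your SDK layout.

```c
#include "fastdeploy_capi/vision.h"  // assumed header path; adjust to your SDK

// Create a RuntimeOption and select the inference device:
// use_gpu != 0 selects GPU inference on the given device id, otherwise CPU.
FD_C_RuntimeOptionWrapper* CreateOption(int use_gpu, int gpu_id) {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  if (use_gpu) {
    FD_C_RuntimeOptionWrapperUseGpu(option, gpu_id);
  } else {
    FD_C_RuntimeOptionWrapperUseCpu(option);
  }
  return option;
}
```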
### Model
```c
FD_C_YOLOv5Wrapper* FD_C_CreateYOLOv5Wrapper(
const char* model_file, const char* params_file, const char* config_file,
FD_C_RuntimeOptionWrapper* runtime_option,
const FD_C_ModelFormat model_format)
```
> Create a YOLOv5 model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. When the model format is ONNX, this can be an empty string
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object that holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format.
>
> **Return**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to manipulate YOLOv5 object.
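For reference, a hedged sketch of creating the model from the ONNX file downloaded above. The enum constant `FD_C_ModelFormat_ONNX` and passing an empty `config_file` for ONNX models are assumptions; verify both against your installed SDK headers.

```c
#include "fastdeploy_capi/vision.h"  // assumed header path

// Build a YOLOv5 wrapper from an ONNX model. params_file is left empty for
// ONNX (per the parameter description above); an empty config_file and the
// FD_C_ModelFormat_ONNX constant name are assumptions to verify.
FD_C_YOLOv5Wrapper* CreateYOLOv5(FD_C_RuntimeOptionWrapper* option) {
  return FD_C_CreateYOLOv5Wrapper("yolov5s.onnx", "", "", option,
                                  FD_C_ModelFormat_ONNX);
}
```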
#### Read and write image
```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```
> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to cv::Mat object which holds the image.
```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```
> Write image to a file.
>
> **Params**
>
> * **savepath**(const char*): save path
> * **img**(FD_C_Mat): pointer to cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): bool to indicate success or failure
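A small sketch of the read/write round trip these two functions provide (header path assumed):

```c
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // assumed header path

// Load an image from disk and write a copy back, checking the write result.
int CopyImage(const char* src, const char* dst) {
  FD_C_Mat img = FD_C_Imread(src);
  if (!FD_C_Imwrite(dst, img)) {
    fprintf(stderr, "failed to write %s\n", dst);
    return -1;
  }
  return 0;
}
```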
#### Prediction
```c
FD_C_Bool FD_C_YOLOv5WrapperPredict(
__fd_take FD_C_YOLOv5Wrapper* fd_c_yolov5_wrapper, FD_C_Mat img,
FD_C_DetectionResult* fd_c_detection_result)
```
>
> Predict an image and generate the detection result.
>
> **Params**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to manipulate YOLOv5 object.
> * **img**(FD_C_Mat): Pointer to the cv::Mat object, which can be obtained by the FD_C_Imread interface
> * **fd_c_detection_result**(FD_C_DetectionResult*): Detection result, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for a description of DetectionResult
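A hedged usage sketch for a single prediction follows. The detection result is assumed to be allocated by the caller (for example via an allocator such as `FD_C_CreateDetectionResult`, if your SDK provides one).

```c
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // assumed header path

// Run detection on one image; returns 0 on success.
// The caller is assumed to have allocated `result` with the SDK's helper.
int RunPredict(FD_C_YOLOv5Wrapper* model, const char* img_path,
               FD_C_DetectionResult* result) {
  FD_C_Mat img = FD_C_Imread(img_path);
  if (!FD_C_YOLOv5WrapperPredict(model, img, result)) {
    fprintf(stderr, "prediction failed for %s\n", img_path);
    return -1;
  }
  return 0;
}
```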
#### Result
```c
FD_C_Mat FD_C_VisDetection(FD_C_Mat im, FD_C_DetectionResult* fd_detection_result,
float score_threshold, int line_size, float font_size);
```
>
> Visualize the detection results and return the visualized image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to input image
> * **fd_detection_result**(FD_C_DetectionResult*): pointer to C DetectionResult structure
> * **score_threshold**(float): score threshold
> * **line_size**(int): line size
> * **font_size**(float): font size
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to visualization image.
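A short sketch chaining visualization and saving; the threshold, line, and font values below are illustrative choices, not SDK defaults, and the header path is assumed.

```c
#include "fastdeploy_capi/vision.h"  // assumed header path

// Draw the detection result on the image and save it to disk.
FD_C_Bool SaveVisualization(FD_C_Mat img, FD_C_DetectionResult* result,
                            const char* out_path) {
  FD_C_Mat vis = FD_C_VisDetection(img, result,
                                   /*score_threshold=*/0.25f,
                                   /*line_size=*/2,
                                   /*font_size=*/0.5f);
  return FD_C_Imwrite(out_path, vis);
}
```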
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)


@@ -0,0 +1,165 @@
[English](README.md) | 简体中文
# YOLOv5 C Deployment Example
This directory provides `infer.c`, an example that calls the C API to quickly deploy the YOLOv5 model on CPU/GPU.
Two steps need to be confirmed before deployment:
- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
```bash
# 1. Download the officially converted YOLOv5 ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
```
The above commands only work on Linux and macOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
If you deploy on a Huawei Ascend NPU, refer to the following document to initialize the deployment environment before deployment:
- [How to deploy on Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
## YOLOv5 C API
### Configuration
```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```
> Create a RuntimeOption configuration object and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
```c
void FD_C_RuntimeOptionWrapperUseCpu(
FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```
> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
```c
void FD_C_RuntimeOptionWrapperUseGpu(
FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
int gpu_id)
```
> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
> * **gpu_id**(int): GPU ID
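As in the `./infer_demo ... 0|1` invocation above, the last command-line argument usually selects the device. A minimal sketch of that pattern (header path assumed):

```c
#include <stdlib.h>
#include "fastdeploy_capi/vision.h"  // assumed header path

// Map the demo's device flag ("0" = CPU, "1" = GPU) onto a RuntimeOption.
FD_C_RuntimeOptionWrapper* OptionFromFlag(const char* device_flag) {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  if (atoi(device_flag) == 1) {
    FD_C_RuntimeOptionWrapperUseGpu(option, 0);  // default to GPU 0
  } else {
    FD_C_RuntimeOptionWrapperUseCpu(option);
  }
  return option;
}
```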
### Model
```c
FD_C_YOLOv5Wrapper* FD_C_CreateYOLOv5Wrapper(
const char* model_file, const char* params_file, const char* config_file,
FD_C_RuntimeOptionWrapper* runtime_option,
const FD_C_ModelFormat model_format)
```
> Create a YOLOv5 model object and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. When the model format is ONNX, this can be an empty string
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object that holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to the YOLOv5 model object
#### Read and write image
```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```
> Read an image and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): Image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): Pointer to the cv::Mat object that holds the image data.
```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```
> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): Path to save the image
> * **img**(FD_C_Mat): Pointer to the image data (cv::Mat object)
>
> **Return**
>
> * **result**(FD_C_Bool): Indicates whether the operation succeeded
#### Predict
```c
FD_C_Bool FD_C_YOLOv5WrapperPredict(
__fd_take FD_C_YOLOv5Wrapper* fd_c_yolov5_wrapper, FD_C_Mat img,
FD_C_DetectionResult* fd_c_detection_result)
```
>
> Model prediction interface: takes an input image and generates the detection result.
>
> **Params**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to the YOLOv5 model
> * **img**(FD_C_Mat): Pointer to the input image (cv::Mat object), which can be obtained by calling FD_C_Imread
> * **fd_c_detection_result**(FD_C_DetectionResult*): Pointer to the detection result, which includes the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for a description of DetectionResult
#### Prediction result
```c
FD_C_Mat FD_C_VisDetection(FD_C_Mat im, FD_C_DetectionResult* fd_detection_result,
float score_threshold, int line_size, float font_size);
```
>
> Visualize the detection results and return the visualized image.
>
> **Params**
> * **im**(FD_C_Mat): Pointer to the input image
> * **fd_detection_result**(FD_C_DetectionResult*): Pointer to the FD_C_DetectionResult structure
> * **score_threshold**(float): Detection score threshold
> * **line_size**(int): Line width of the detection boxes
> * **font_size**(float): Font size of the box labels
>
> **Return**
> * **vis_im**(FD_C_Mat): Pointer to the visualized image
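Putting the interfaces above together, a hedged end-to-end sketch is shown below. The header path, the `FD_C_ModelFormat_ONNX` constant, the empty `config_file` for ONNX, and the `FD_C_CreateDetectionResult` allocator are assumptions to verify against your installed SDK, which should also provide matching destroy functions for cleanup.

```c
#include <stdio.h>
#include <stdlib.h>
#include "fastdeploy_capi/vision.h"  // assumed header path

// Usage: ./infer_demo yolov5s.onnx 000000014439.jpg 0|1
int main(int argc, char** argv) {
  if (argc < 4) {
    fprintf(stderr, "usage: %s <model.onnx> <image> <0=cpu|1=gpu>\n", argv[0]);
    return 1;
  }

  // 1. Configure the runtime (CPU, or GPU 0).
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  if (atoi(argv[3]) == 1) {
    FD_C_RuntimeOptionWrapperUseGpu(option, 0);
  } else {
    FD_C_RuntimeOptionWrapperUseCpu(option);
  }

  // 2. Create the model (ONNX format; enum name and empty config are assumptions).
  FD_C_YOLOv5Wrapper* model = FD_C_CreateYOLOv5Wrapper(
      argv[1], "", "", option, FD_C_ModelFormat_ONNX);

  // 3. Read the image and run prediction.
  FD_C_Mat img = FD_C_Imread(argv[2]);
  FD_C_DetectionResult* result = FD_C_CreateDetectionResult();  // assumed allocator
  if (!FD_C_YOLOv5WrapperPredict(model, img, result)) {
    fprintf(stderr, "prediction failed\n");
    return 1;
  }

  // 4. Visualize and save (illustrative threshold/line/font values).
  FD_C_Mat vis = FD_C_VisDetection(img, result, 0.25f, 2, 0.5f);
  FD_C_Imwrite("vis_result.jpg", vis);
  printf("Visualized result saved to vis_result.jpg\n");
  return 0;
}
```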
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)