[Doc] Add docs for ppocr ppseg examples (#1429)
* add docs for examples
* add english doc
* fix
* fix docs
@@ -41,3 +41,78 @@ fastdeploy.vision.OCRResult
- **rec_scores**: Member variable which indicates the confidence level of the recognized text, where the element number is the same as `boxes.size()`.
- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box, where the element number is the same as `boxes.size()`.
- **cls_labels**: Member variable which indicates the directional category of the text box, where the element number is the same as `boxes.size()`.

## C# Definition

`fastdeploy.vision.OCRResult`

```C#
public class OCRResult {
  public List<int[]> boxes;
  public List<string> text;
  public List<float> rec_scores;
  public List<float> cls_scores;
  public List<int> cls_labels;
  public ResultType type;
}
```

- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. `boxes.Count` indicates the number of detected boxes. Each box is represented by 8 int values, giving the 4 corner points of the box in the order lower left, lower right, upper right, upper left.
- **text**: Member variable which indicates the content of the recognized text in multiple text boxes, where the element number is the same as `boxes.Count`.
- **rec_scores**: Member variable which indicates the confidence level of the recognized text, where the element number is the same as `boxes.Count`.
- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box, where the element number is the same as `boxes.Count`.
- **cls_labels**: Member variable which indicates the directional category of the text box, where the element number is the same as `boxes.Count`.

## C Definition

```c
struct FD_C_OCRResult {
  FD_C_TwoDimArrayInt32 boxes;
  FD_C_OneDimArrayCstr text;
  FD_C_OneDimArrayFloat rec_scores;
  FD_C_OneDimArrayFloat cls_scores;
  FD_C_OneDimArrayInt32 cls_labels;
  FD_C_ResultType type;
};
```

- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image.

```c
typedef struct FD_C_TwoDimArrayInt32 {
  size_t size;
  FD_C_OneDimArrayInt32* data;
} FD_C_TwoDimArrayInt32;
```

```c
typedef struct FD_C_OneDimArrayInt32 {
  size_t size;
  int32_t* data;
} FD_C_OneDimArrayInt32;
```

- **text**: Member variable which indicates the content of the recognized text in multiple text boxes.

```c
typedef struct FD_C_Cstr {
  size_t size;
  char* data;
} FD_C_Cstr;

typedef struct FD_C_OneDimArrayCstr {
  size_t size;
  FD_C_Cstr* data;
} FD_C_OneDimArrayCstr;
```

- **rec_scores**: Member variable which indicates the confidence level of the recognized text.

```c
typedef struct FD_C_OneDimArrayFloat {
  size_t size;
  float* data;
} FD_C_OneDimArrayFloat;
```

- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box.
- **cls_labels**: Member variable which indicates the directional category of the text box.
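
For orientation, here is a minimal sketch of how these nested C arrays could be walked once a prediction has filled an `FD_C_OCRResult`. Only the struct fields shown above come from the API; the helper name, the already-populated result `res`, and the assumption that each `FD_C_Cstr` buffer is NUL-terminated are illustrative.

```c
#include <stddef.h>
#include <stdio.h>
/* The FastDeploy C API header include is omitted; its exact name is an assumption. */

/* Hypothetical helper: print each recognized string with its confidence,
   assuming `res` was filled by a predict call (not shown) and that every
   FD_C_Cstr buffer is NUL-terminated; otherwise use text.data[i].size. */
static void print_ocr_result(const FD_C_OCRResult* res) {
  for (size_t i = 0; i < res->text.size; ++i) {
    printf("box %zu: '%s' (rec_score %.3f)\n",
           i, res->text.data[i].data, res->rec_scores.data[i]);
  }
}
```
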
@@ -41,3 +41,78 @@ fastdeploy.vision.OCRResult
- **rec_scores**: Member variable which indicates the confidence level of the recognized text, where the element number is the same as `boxes.size()`.
- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box, where the element number is the same as `boxes.size()`.
- **cls_labels**: Member variable which indicates the directional category of the text box, where the element number is the same as `boxes.size()`.

## C# Definition

`fastdeploy.vision.OCRResult`

```C#
public class OCRResult {
  public List<int[]> boxes;
  public List<string> text;
  public List<float> rec_scores;
  public List<float> cls_scores;
  public List<int> cls_labels;
  public ResultType type;
}
```

- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. `boxes.Count` indicates the number of detected boxes. Each box is represented by 8 int values, giving the 4 corner points of the box in the order lower left, lower right, upper right, upper left.
- **text**: Member variable which indicates the content of the recognized text in multiple text boxes, where the element number is the same as `boxes.Count`.
- **rec_scores**: Member variable which indicates the confidence level of the recognized text, where the element number is the same as `boxes.Count`.
- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box, where the element number is the same as `boxes.Count`.
- **cls_labels**: Member variable which indicates the directional category of the text box, where the element number is the same as `boxes.Count`.

## C Definition

```c
struct FD_C_OCRResult {
  FD_C_TwoDimArrayInt32 boxes;
  FD_C_OneDimArrayCstr text;
  FD_C_OneDimArrayFloat rec_scores;
  FD_C_OneDimArrayFloat cls_scores;
  FD_C_OneDimArrayInt32 cls_labels;
  FD_C_ResultType type;
};
```

- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image.

```c
typedef struct FD_C_TwoDimArrayInt32 {
  size_t size;
  FD_C_OneDimArrayInt32* data;
} FD_C_TwoDimArrayInt32;
```

```c
typedef struct FD_C_OneDimArrayInt32 {
  size_t size;
  int32_t* data;
} FD_C_OneDimArrayInt32;
```

- **text**: Member variable which indicates the content of the recognized text in multiple text boxes.

```c
typedef struct FD_C_Cstr {
  size_t size;
  char* data;
} FD_C_Cstr;

typedef struct FD_C_OneDimArrayCstr {
  size_t size;
  FD_C_Cstr* data;
} FD_C_OneDimArrayCstr;
```

- **rec_scores**: Member variable which indicates the confidence level of the recognized text.

```c
typedef struct FD_C_OneDimArrayFloat {
  size_t size;
  float* data;
} FD_C_OneDimArrayFloat;
```

- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box.
- **cls_labels**: Member variable which indicates the directional category of the text box.
@@ -31,3 +31,63 @@ struct SegmentationResult {
- **label_map**(list of int): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which indicates the predicted segmentation category probability values corresponding one-to-one to label_map. It is non-empty only when `--output_op none` is specified when exporting the PaddleSeg model; otherwise it is empty.
- **shape**(list of int): Member variable which indicates the shape of the output image as H\*W.

## C# Definition

`fastdeploy.vision.SegmentationResult`

```C#
public class SegmentationResult {
  public List<byte> label_map;
  public List<float> score_map;
  public List<long> shape;
  public bool contain_score_map;
  public ResultType type;
}
```

- **label_map**(list of byte): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which indicates the predicted segmentation category probability values corresponding one-to-one to label_map. It is non-empty only when `--output_op none` is specified when exporting the PaddleSeg model; otherwise it is empty.
- **shape**(list of long): Member variable which indicates the shape of the output image as H\*W.

## C Definition

```c
struct FD_C_SegmentationResult {
  FD_C_OneDimArrayUint8 label_map;
  FD_C_OneDimArrayFloat score_map;
  FD_C_OneDimArrayInt64 shape;
  FD_C_Bool contain_score_map;
  FD_C_ResultType type;
};
```

- **label_map**(FD_C_OneDimArrayUint8): Member variable which indicates the segmentation category of each pixel in a single image.

```c
struct FD_C_OneDimArrayUint8 {
  size_t size;
  uint8_t* data;
};
```

- **score_map**(FD_C_OneDimArrayFloat): Member variable which indicates the predicted segmentation category probability values corresponding one-to-one to label_map. It is non-empty only when `--output_op none` is specified when exporting the PaddleSeg model; otherwise it is empty.

```c
struct FD_C_OneDimArrayFloat {
  size_t size;
  float* data;
};
```

- **shape**(FD_C_OneDimArrayInt64): Member variable which indicates the shape of the output image as H\*W.

```c
struct FD_C_OneDimArrayInt64 {
  size_t size;
  int64_t* data;
};
```
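
To make the relationship between `label_map` and `shape` concrete, the sketch below looks up the class of a single pixel. The helper name and the row-major H\*W layout assumption are illustrative; only the struct layout comes from the definitions above.

```c
#include <stddef.h>
#include <stdint.h>
/* The FastDeploy C API header include is omitted; its exact name is an assumption. */

/* Hypothetical helper: class id of pixel (row, col), assuming label_map stores
   H*W entries in row-major order with H = shape.data[0] and W = shape.data[1]. */
static uint8_t pixel_class(const FD_C_SegmentationResult* res,
                           size_t row, size_t col) {
  size_t width = (size_t)res->shape.data[1];
  return res->label_map.data[row * width + col];
}
```
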
@@ -33,3 +33,61 @@ struct SegmentationResult {
- **label_map**(list of int): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which indicates the predicted segmentation category probability values corresponding one-to-one to label_map. It is non-empty only when `--output_op none` is specified when exporting the PaddleSeg model; otherwise it is empty.
- **shape**(list of int): Member variable which indicates the shape of the output image as H\*W.

## C# Definition

`fastdeploy.vision.SegmentationResult`

```C#
public class SegmentationResult {
  public List<byte> label_map;
  public List<float> score_map;
  public List<long> shape;
  public bool contain_score_map;
  public ResultType type;
}
```

- **label_map**(list of byte): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which indicates the predicted segmentation category probability values corresponding one-to-one to label_map. It is non-empty only when `--output_op none` is specified when exporting the PaddleSeg model; otherwise it is empty.
- **shape**(list of long): Member variable which indicates the shape of the output image as H\*W.

## C Definition

```c
struct FD_C_SegmentationResult {
  FD_C_OneDimArrayUint8 label_map;
  FD_C_OneDimArrayFloat score_map;
  FD_C_OneDimArrayInt64 shape;
  FD_C_Bool contain_score_map;
  FD_C_ResultType type;
};
```

- **label_map**(FD_C_OneDimArrayUint8): Member variable which indicates the segmentation category of each pixel in a single image.

```c
struct FD_C_OneDimArrayUint8 {
  size_t size;
  uint8_t* data;
};
```

- **score_map**(FD_C_OneDimArrayFloat): Member variable which indicates the predicted segmentation category probability values corresponding one-to-one to label_map. It is non-empty only when `--output_op none` is specified when exporting the PaddleSeg model; otherwise it is empty.

```c
struct FD_C_OneDimArrayFloat {
  size_t size;
  float* data;
};
```

- **shape**(FD_C_OneDimArrayInt64): Member variable which indicates the shape of the output image as H\*W.

```c
struct FD_C_OneDimArrayInt64 {
  size_t size;
  int64_t* data;
};
```
@@ -49,9 +49,9 @@ Then you can run your program and test the model with image
```shell
cd Release
# CPU inference
-./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
+infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
# GPU inference
-./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
+infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
```

## PaddleClas C# Interface
@@ -8,7 +8,7 @@
- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

-Taking ResNet50_vd inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
+Execute the following commands in this directory to complete the compilation test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package manager nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe
@@ -50,9 +50,9 @@ fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\exampl
```shell
cd Release
# CPU inference
-./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
+infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
# GPU inference
-./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
+infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
```

## PaddleClas C# Interface
examples/vision/detection/yolov5/c/README.md
@@ -0,0 +1,162 @@
English | [简体中文](README_CN.md)
# YOLOv5 C Deployment Example

This directory provides `infer.c` to finish the deployment of YOLOv5 on CPU/GPU.

Before deployment, two steps require confirmation.

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
# 1. Download the YOLOv5 model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
```

The above commands work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## YOLOv5 C Interface

### RuntimeOption

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.
> * **gpu_id**(int): GPU id

### Model

```c
FD_C_YOLOv5Wrapper* FD_C_CreateYOLOv5Wrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* runtime_option,
    const FD_C_ModelFormat model_format)
```

> Create a YOLOv5 model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. When the model format is ONNX, this can be an empty string.
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object that holds the backend inference configuration.
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to manipulate the YOLOv5 object.

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to the cv::Mat object which holds the image.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): save path
> * **img**(FD_C_Mat): pointer to the cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): bool to indicate success or failure

#### Prediction

```c
FD_C_Bool FD_C_YOLOv5WrapperPredict(
    __fd_take FD_C_YOLOv5Wrapper* fd_c_yolov5_wrapper, FD_C_Mat img,
    FD_C_DetectionResult* fd_c_detection_result)
```
>
> Predict an image, and generate the detection result.
>
> **Params**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to manipulate the YOLOv5 object.
> * **img**(FD_C_Mat): pointer to the cv::Mat object, which can be obtained by the FD_C_Imread interface
> * **fd_c_detection_result**(FD_C_DetectionResult*): Detection result, including the detection box and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for the description of DetectionResult

#### Result

```c
FD_C_Mat FD_C_VisDetection(FD_C_Mat im, FD_C_DetectionResult* fd_detection_result,
                           float score_threshold, int line_size, float font_size);
```
>
> Visualize detection results and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to the input image
> * **fd_detection_result**(FD_C_DetectionResult*): pointer to the C DetectionResult structure
> * **score_threshold**(float): score threshold
> * **line_size**(int): line size
> * **font_size**(float): font size
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to the visualization image.
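
Putting the pieces above together, the following is a minimal sketch of one possible end-to-end call sequence. It is not taken from `infer.c`; only the function names and signatures documented above come from the C API, while the header include, the `FD_C_ModelFormat_ONNX` constant, and the stack allocation of `FD_C_DetectionResult` are assumptions that may differ in the actual SDK.

```c
#include <stdio.h>
/* The FastDeploy C API header include is omitted here; its exact name is an assumption. */

int main() {
  /* Configure the runtime: CPU here, or FD_C_RuntimeOptionWrapperUseGpu(option, 0) for GPU. */
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(option);

  /* Create the model; the ONNX format constant name is an assumption. */
  FD_C_YOLOv5Wrapper* model = FD_C_CreateYOLOv5Wrapper(
      "yolov5s.onnx", "", "", option, FD_C_ModelFormat_ONNX);

  /* Read the test image and run prediction. */
  FD_C_Mat img = FD_C_Imread("000000014439.jpg");
  FD_C_DetectionResult result;  /* how the result is allocated is an assumption */
  if (!FD_C_YOLOv5WrapperPredict(model, img, &result)) {
    fprintf(stderr, "prediction failed\n");
    return -1;
  }

  /* Visualize boxes above a 0.5 score threshold and save the image. */
  FD_C_Mat vis = FD_C_VisDetection(img, &result, 0.5f, 2, 0.5f);
  FD_C_Imwrite("vis_result.jpg", vis);
  return 0;
}
```
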
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
examples/vision/detection/yolov5/c/README_CN.md
@@ -0,0 +1,165 @@
[English](README.md) | 简体中文
# YOLOv5 C Deployment Example

This directory provides `infer.c`, an example that uses the C API to quickly deploy the YOLOv5 model on CPU/GPU.

Before deployment, confirm the following two steps.

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
# 1. Download the officially converted yolov5 ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
```

The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment first:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv5 C API

### Configuration

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption configuration object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
> * **gpu_id**(int): GPU card id

### Model

```c
FD_C_YOLOv5Wrapper* FD_C_CreateYOLOv5Wrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* runtime_option,
    const FD_C_ModelFormat model_format)
```

> Create a YOLOv5 model, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. When the model format is ONNX, pass an empty string.
> * **runtime_option**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to the YOLOv5 model object

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to the cv::Mat holding the image data.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write the image to a file.
>
> **Params**
>
> * **savepath**(const char*): path to save the image
> * **img**(FD_C_Mat): pointer to the image data
>
> **Return**
>
> * **result**(FD_C_Bool): indicates whether the operation succeeded

#### Predict function

```c
FD_C_Bool FD_C_YOLOv5WrapperPredict(
    __fd_take FD_C_YOLOv5Wrapper* fd_c_yolov5_wrapper, FD_C_Mat img,
    FD_C_DetectionResult* fd_c_detection_result)
```
>
> Model prediction interface. Input an image and generate the detection result.
>
> **Params**
> * **fd_c_yolov5_wrapper**(FD_C_YOLOv5Wrapper*): Pointer to the YOLOv5 model
> * **img**(FD_C_Mat): pointer to the input image (a cv::Mat object), which can be obtained with FD_C_Imread
> * **fd_c_detection_result**(FD_C_DetectionResult*): pointer to the detection result, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of DetectionResult

#### Prediction result

```c
FD_C_Mat FD_C_VisDetection(FD_C_Mat im, FD_C_DetectionResult* fd_detection_result,
                           float score_threshold, int line_size, float font_size);
```
>
> Visualize the detection result and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to the input image
> * **fd_detection_result**(FD_C_DetectionResult*): pointer to the FD_C_DetectionResult structure
> * **score_threshold**(float): detection score threshold
> * **line_size**(int): line width of the detection boxes
> * **font_size**(float): font size of the box labels
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to the visualization image

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/detection/yolov5/csharp/README.md
@@ -0,0 +1,98 @@
English | [简体中文](README_CN.md)
# YOLOv5 C# Deployment Example

This directory provides `infer.cs` to finish the deployment of YOLOv5 on CPU/GPU.

Before deployment, two steps require confirmation.

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Please follow the instructions below to compile and test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package management tool nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

Add the nuget program to the system variable **PATH**.

## 2. Download the model and image for the test
> https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
> https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

## 3. Compile the example code

Open the `x64 Native Tools Command Prompt for VS 2019` command tool on Windows, cd to the demo path of yolov5 and execute the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more information about how to use the FastDeploy SDK to compile a project with Visual Studio 2019, please refer to
- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../docs/en/faq/use_sdk_on_windows.md)

## 4. Execute the compiled program

fastdeploy.dll and the related dynamic libraries are required by the program. FastDeploy provides a script to copy all the required dlls to your program path.

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp\build\Release
```

Then you can run your program and test the model with an image

```shell
cd Release
infer_demo yolov5s.onnx 000000014439.jpg 0 # CPU
infer_demo yolov5s.onnx 000000014439.jpg 1 # GPU
```

## YOLOv5 C# Interface

### Model Class

```c#
fastdeploy.vision.detection.YOLOv5(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.ONNX)
```

> YOLOv5 initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path. When the model format is ONNX, this can be an empty string.
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which uses the default configuration.
>> * **model_format**(ModelFormat): Model format. ONNX format by default.

#### Predict Function

```c#
fastdeploy.DetectionResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the result directly.
>
> **Params**
>
>> * **im**(Mat): Input image in HWC, BGR format
>
> **Return**
>
>> * **result**(DetectionResult): Detection result, including the detection box and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for the description of DetectionResult

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
examples/vision/detection/yolov5/csharp/README_CN.md
@@ -0,0 +1,98 @@
[English](README.md) | 简体中文
# YOLOv5 C# Deployment Example

This directory provides `infer.cs`, an example that uses the C# API to quickly deploy the YOLOv5 model on CPU/GPU.

Before deployment, confirm the following two steps.

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Follow the steps below on Windows to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package manager nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

After downloading, add the program to the environment variable **PATH**.

## 2. Download the model file and test image
> https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
> https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

## 3. Compile the example code

The example code compiled in this document can be found in the extracted library. Compilation depends on an installation of VS 2019. **On Windows, open the x64 Native Tools Command Prompt for VS 2019 command tool** and start the compilation with the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more details on building with a Visual Studio 2019 sln project or a CMake project, refer to the following documents
- [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
- [Various ways to use the FastDeploy C++ library on Windows](../../../../../docs/cn/faq/use_sdk_on_windows_build.md)

## 4. Run the executable program

Note that when running on Windows, the libraries FastDeploy depends on must be copied to the directory of the executable, or the environment variables must be configured. FastDeploy provides a tool to quickly copy all dependent dll files to the directory of the executable with the following commands (the generated executable may be one more directory level below Release; here it is assumed to be directly in Release).

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\detection\yolov5\csharp\build\Release
```

After copying the dlls to the current path, prepare the model and image, and run the executable with the following commands

```shell
cd Release
infer_demo yolov5s.onnx 000000014439.jpg 0 # CPU
infer_demo yolov5s.onnx 000000014439.jpg 1 # GPU
```

## YOLOv5 C# Interface

### Model

```c#
fastdeploy.vision.detection.YOLOv5(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.ONNX)
```

> YOLOv5 model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path. When the model format is ONNX, pass an empty string.
>> * **runtime_option**(RuntimeOption): Backend inference configuration, null by default, i.e. the default configuration is used
>> * **model_format**(ModelFormat): Model format, ONNX by default

#### Predict function

```c#
fastdeploy.DetectionResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the detection result directly.
>
> **Params**
>
>> * **im**(Mat): Input image. Note that it must be in HWC, BGR format
>
> **Return**
>
>> * **result**(DetectionResult): Detection result, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of DetectionResult

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv2/c/README.md
@@ -0,0 +1,251 @@
English | [简体中文](README_CN.md)
# PPOCRv2 C Deployment Example

This directory provides `infer.c` to finish the deployment of PPOCRv2 on CPU/GPU.

Before deployment, two steps require confirmation.

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
tar -xvf ch_PP-OCRv2_det_infer.tar

wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar

wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
tar -xvf ch_PP-OCRv2_rec_infer.tar

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# CPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

The above commands work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">

## PPOCRv2 C Interface

### RuntimeOption

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate the RuntimeOption object.
> * **gpu_id**(int): GPU id

### Model

```c
FD_C_DBDetectorWrapper* FD_C_CreateDBDetectorWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```

> Create a DBDetector model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object that holds the backend inference configuration.
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_dbdetector_wrapper**(FD_C_DBDetectorWrapper*): Pointer to manipulate the DBDetector object.

```c
FD_C_ClassifierWrapper* FD_C_CreateClassifierWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Classifier model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object that holds the backend inference configuration.
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_classifier_wrapper**(FD_C_ClassifierWrapper*): Pointer to manipulate the Classifier object.

```c
FD_C_RecognizerWrapper* FD_C_CreateRecognizerWrapper(
    const char* model_file, const char* params_file, const char* label_path,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Recognizer model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **label_path**(const char*): Label file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object that holds the backend inference configuration.
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_recognizer_wrapper**(FD_C_RecognizerWrapper*): Pointer to manipulate the Recognizer object.

```c
FD_C_PPOCRv2Wrapper* FD_C_CreatePPOCRv2Wrapper(
    FD_C_DBDetectorWrapper* det_model,
    FD_C_ClassifierWrapper* cls_model,
    FD_C_RecognizerWrapper* rec_model
)
```
> Create a PPOCRv2 model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **det_model**(FD_C_DBDetectorWrapper*): DBDetector model
> * **cls_model**(FD_C_ClassifierWrapper*): Classifier model
> * **rec_model**(FD_C_RecognizerWrapper*): Recognizer model
>
> **Return**
>
> * **fd_c_ppocrv2_wrapper**(FD_C_PPOCRv2Wrapper*): Pointer to manipulate the PPOCRv2 object.

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to the cv::Mat object which holds the image.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): save path
> * **img**(FD_C_Mat): pointer to the cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): bool to indicate success or failure

#### Prediction

```c
FD_C_Bool FD_C_PPOCRv2WrapperPredict(
    FD_C_PPOCRv2Wrapper* fd_c_ppocrv2_wrapper,
    FD_C_Mat img,
    FD_C_OCRResult* result)
```
>
> Predict an image, and generate the OCR result.
>
> **Params**
> * **fd_c_ppocrv2_wrapper**(FD_C_PPOCRv2Wrapper*): Pointer to manipulate the PPOCRv2 object.
> * **img**(FD_C_Mat): pointer to the cv::Mat object, which can be obtained by the FD_C_Imread interface
> * **result**(FD_C_OCRResult*): OCR prediction results, including the position of the detection box from the detection model, the direction classification from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of OCRResult

#### Result

```c
FD_C_Mat FD_C_VisOcr(FD_C_Mat im, FD_C_OCRResult* ocr_result)
```
>
> Visualize OCR results and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to the input image
> * **ocr_result**(FD_C_OCRResult*): pointer to the C FD_C_OCRResult structure
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to the visualization image.
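
For orientation, the sketch below wires the four constructors and the predict call together in the order described above. It is not taken from `infer.c`; the header include, the Paddle model-format constant, the `inference.pdmodel`/`inference.pdiparams` file names inside the downloaded model directories, and the result allocation are assumptions based on typical PaddleOCR exports and may differ in the actual SDK.

```c
#include <stdio.h>
/* The FastDeploy C API header include is omitted here; its exact name is an assumption. */

int main() {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(option);  /* or FD_C_RuntimeOptionWrapperUseGpu(option, 0) */

  /* The Paddle format constant name is assumed to be FD_C_ModelFormat_PADDLE. */
  FD_C_DBDetectorWrapper* det = FD_C_CreateDBDetectorWrapper(
      "ch_PP-OCRv2_det_infer/inference.pdmodel",
      "ch_PP-OCRv2_det_infer/inference.pdiparams", option, FD_C_ModelFormat_PADDLE);
  FD_C_ClassifierWrapper* cls = FD_C_CreateClassifierWrapper(
      "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
      "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams", option, FD_C_ModelFormat_PADDLE);
  FD_C_RecognizerWrapper* rec = FD_C_CreateRecognizerWrapper(
      "ch_PP-OCRv2_rec_infer/inference.pdmodel",
      "ch_PP-OCRv2_rec_infer/inference.pdiparams",
      "ppocr_keys_v1.txt", option, FD_C_ModelFormat_PADDLE);

  /* Chain the three models into the PP-OCRv2 pipeline and run prediction. */
  FD_C_PPOCRv2Wrapper* pipeline = FD_C_CreatePPOCRv2Wrapper(det, cls, rec);
  FD_C_Mat img = FD_C_Imread("12.jpg");
  FD_C_OCRResult result;  /* how the result is allocated is an assumption */
  if (!FD_C_PPOCRv2WrapperPredict(pipeline, img, &result)) {
    fprintf(stderr, "prediction failed\n");
    return -1;
  }

  /* Visualize and save the OCR result. */
  FD_C_Imwrite("vis_result.jpg", FD_C_VisOcr(img, &result));
  return 0;
}
```
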
## Other Documents

- [PPOCR Model Description](../../)
- [PPOCRv2 Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv2/c/README_CN.md
@@ -0,0 +1,251 @@
[English](README.md) | 简体中文
# PPOCRv2 C Deployment Example

This directory provides `infer.c`, an example that uses the C API to quickly deploy the PPOCRv2 model on CPU/GPU.

Before deployment, confirm the following two steps.

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
tar -xvf ch_PP-OCRv2_det_infer.tar

wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar

wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
tar -xvf ch_PP-OCRv2_rec_infer.tar

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# CPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment first:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

The visualized result after running is shown below

<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">

## PPOCRv2 C API

### Configuration

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption configuration object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```

> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
> * **gpu_id**(int): GPU card id

### Model

```c
FD_C_DBDetectorWrapper* FD_C_CreateDBDetectorWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```

> Create a DBDetector model, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_dbdetector_wrapper**(FD_C_DBDetectorWrapper*): Pointer to the DBDetector model object

```c
FD_C_ClassifierWrapper* FD_C_CreateClassifierWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Classifier model, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_classifier_wrapper**(FD_C_ClassifierWrapper*): Pointer to the Classifier model object

```c
FD_C_RecognizerWrapper* FD_C_CreateRecognizerWrapper(
    const char* model_file, const char* params_file, const char* label_path,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Recognizer model, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **label_path**(const char*): Label file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_recognizer_wrapper**(FD_C_RecognizerWrapper*): Pointer to the Recognizer model object

```c
FD_C_PPOCRv2Wrapper* FD_C_CreatePPOCRv2Wrapper(
    FD_C_DBDetectorWrapper* det_model,
    FD_C_ClassifierWrapper* cls_model,
    FD_C_RecognizerWrapper* rec_model
)
```
> Create a PPOCRv2 model, and return a pointer to manipulate it.
>
> **Params**
>
> * **det_model**(FD_C_DBDetectorWrapper*): DBDetector model
> * **cls_model**(FD_C_ClassifierWrapper*): Classifier model
> * **rec_model**(FD_C_RecognizerWrapper*): Recognizer model
>
> **Return**
>
> * **fd_c_ppocrv2_wrapper**(FD_C_PPOCRv2Wrapper*): Pointer to the PPOCRv2 model object

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to the cv::Mat holding the image data.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write the image to a file.
>
> **Params**
>
> * **savepath**(const char*): path to save the image
> * **img**(FD_C_Mat): pointer to the image data
>
> **Return**
>
> * **result**(FD_C_Bool): indicates whether the operation succeeded

#### Predict function

```c
FD_C_Bool FD_C_PPOCRv2WrapperPredict(
    FD_C_PPOCRv2Wrapper* fd_c_ppocrv2_wrapper,
    FD_C_Mat img,
    FD_C_OCRResult* result)
```
>
> Model prediction interface. Input an image and generate the OCR result.
>
> **Params**
> * **fd_c_ppocrv2_wrapper**(FD_C_PPOCRv2Wrapper*): Pointer to the PPOCRv2 model
> * **img**(FD_C_Mat): pointer to the input image (a cv::Mat object), which can be obtained with FD_C_Imread
> * **result**(FD_C_OCRResult*): OCR prediction result, including the position of the detection box from the detection model, the direction classification from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of OCRResult

#### Prediction result

```c
FD_C_Mat FD_C_VisOcr(FD_C_Mat im, FD_C_OCRResult* ocr_result)
```
>
> Visualize the result and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to the input image
> * **ocr_result**(FD_C_OCRResult*): pointer to the FD_C_OCRResult structure
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to the visualization image

## Other Documents

- [PPOCR Model Description](../../)
- [PPOCRv2 Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv2/csharp/README.md
@@ -0,0 +1,153 @@
English | [简体中文](README_CN.md)
# PPOCRv2 C# Deployment Example

This directory provides `infer.cs` to finish the deployment of PPOCRv2 on CPU/GPU.

Before deployment, two steps require confirmation.

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Please follow the instructions below to compile and test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package management tool nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

Add the nuget program to the system variable **PATH**.

## 2. Download the model and image for the test
> https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar (Decompress it)
> https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
> https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

## 3. Compile the example code

Open the `x64 Native Tools Command Prompt for VS 2019` command tool on Windows, cd to the demo path of PP-OCRv2 and execute the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv2\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more information about how to use the FastDeploy SDK to compile a project with Visual Studio 2019, please refer to
- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../docs/en/faq/use_sdk_on_windows.md)

## 4. Execute the compiled program

fastdeploy.dll and the related dynamic libraries are required by the program. FastDeploy provides a script to copy all the required dlls to your program path.

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv2\csharp\build\Release
```

Then you can run your program and test the model with an image

```shell
cd Release
# CPU inference
infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

## PPOCRv2 C# Interface

### Model Class

```c#
fastdeploy.vision.ocr.DBDetector(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> DBDetector initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which uses the default configuration.
>> * **model_format**(ModelFormat): Model format. PADDLE format by default.

```c#
fastdeploy.vision.ocr.Classifier(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Classifier initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which uses the default configuration.
>> * **model_format**(ModelFormat): Model format. PADDLE format by default.

```c#
fastdeploy.vision.ocr.Recognizer(
    string model_file,
    string params_file,
    string label_path,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Recognizer initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which uses the default configuration.
>> * **model_format**(ModelFormat): Model format. PADDLE format by default.

```c#
fastdeploy.pipeline.PPOCRv2Model(
    DBDetector dbdetector,
    Classifier classifier,
    Recognizer recognizer)
```

> PPOCRv2Model initialization

> **Params**

>> * **dbdetector**(DBDetector): DBDetector model
>> * **classifier**(Classifier): Classifier model
>> * **recognizer**(Recognizer): Recognizer model

#### Predict Function

```c#
fastdeploy.OCRResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the result directly.
>
> **Params**
>
>> * **im**(Mat): Input image in HWC, BGR format
>
> **Return**
>
>> * **result**: OCR prediction results, including the position of the detection box from the detection model, the direction classification from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of OCRResult

## Other Documents

- [PPOCR Model Description](../../)
- [PPOCRv2 Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv2/csharp/README_CN.md
@@ -0,0 +1,153 @@
[English](README.md) | 简体中文
# PPOCRv2 C# Deployment Example

This directory provides `infer.cs`, an example that uses the C# API to quickly deploy the PPOCRv2 model on CPU/GPU.

Before deployment, confirm the following two steps.

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Execute the following commands in this directory to complete the compilation test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package manager nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

After downloading, add the program to the environment variable **PATH**.

## 2. Download the model files and test image
> https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar # (decompress after downloading)
> https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
> https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

## 3. Compile the example code

The example code compiled in this document can be found in the extracted library. Compilation depends on an installation of VS 2019. **On Windows, open the x64 Native Tools Command Prompt for VS 2019 command tool** and start the compilation with the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv2\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more details on building with a Visual Studio 2019 sln project or a CMake project, refer to the following documents
- [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
- [Various ways to use the FastDeploy C++ library on Windows](../../../../../docs/cn/faq/use_sdk_on_windows_build.md)

## 4. Run the executable program

Note that when running on Windows, the libraries FastDeploy depends on must be copied to the directory of the executable, or the environment variables must be configured. FastDeploy provides a tool to quickly copy all dependent dll files to the directory of the executable with the following commands (the generated executable may be one more directory level below Release; here it is assumed to be directly in Release).

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv2\csharp\build\Release
```

After copying the dlls to the current path, prepare the model and image, and run the executable with the following commands

```shell
cd Release
# CPU inference
infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

## PPOCRv2 C# Interface

### Model

```c#
fastdeploy.vision.ocr.DBDetector(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> DBDetector model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration, null by default, i.e. the default configuration is used
>> * **model_format**(ModelFormat): Model format, PADDLE by default

```c#
fastdeploy.vision.ocr.Classifier(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Classifier model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration, null by default, i.e. the default configuration is used
>> * **model_format**(ModelFormat): Model format, PADDLE by default
||||
|
||||
```c#
|
||||
fastdeploy.vision.ocr.Recognizer(
|
||||
string model_file,
|
||||
string params_file,
|
||||
string label_path,
|
||||
fastdeploy.RuntimeOption runtime_option = null,
|
||||
fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
> Recognizer模型加载和初始化。
|
||||
|
||||
> **参数**
|
||||
|
||||
>> * **model_file**(str): 模型文件路径
|
||||
>> * **params_file**(str): 参数文件路径
|
||||
>> * **label_path**(str): 标签文件路径
|
||||
>> * **runtime_option**(RuntimeOption): 后端推理配置,默认为null,即采用默认配置
|
||||
>> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式
|
||||
|
||||
```c#
|
||||
fastdeploy.pipeline.PPOCRv2Model(
|
||||
DBDetector dbdetector,
|
||||
Classifier classifier,
|
||||
Recognizer recognizer)
|
||||
```
|
||||
|
||||
> PPOCRv2Model模型加载和初始化。
|
||||
|
||||
> **参数**
|
||||
|
||||
>> * **det_model**(FD_C_DBDetectorWrapper*): DBDetector模型
|
||||
>> * **cls_model**(FD_C_ClassifierWrapper*): Classifier模型
|
||||
>> * **rec_model**(FD_C_RecognizerWrapper*): Recognizer模型文件
|
||||
|
||||
#### Predict函数
|
||||
|
||||
```c#
|
||||
fastdeploy.OCRResult Predict(OpenCvSharp.Mat im)
|
||||
```
|
||||
|
||||
> 模型预测接口,输入图像直接输出结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
>> * **im**(Mat): 输入图像,注意需为HWC,BGR格式
|
||||
>>
|
||||
> **返回值**
|
||||
>
|
||||
>> * **result**: OCR预测结果,包括由检测模型输出的检测框位置,分类模型输出的方向分类,以及识别模型输出的识别结果, OCRResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
examples/vision/ocr/PP-OCRv3/c/README.md (new executable file, 251 lines)
@@ -0,0 +1,251 @@
English | [简体中文](README_CN.md)
# PPOCRv3 C Deployment Example

This directory provides `infer.c` to finish the deployment of PPOCRv3 on CPU/GPU.

Before deployment, two steps require confirmation

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar

wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar -xvf ch_PP-OCRv3_rec_infer.tar

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# CPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

The above commands work for Linux or macOS. For the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">

## PPOCRv3 C Interface

### RuntimeOption

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```
> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.
> * **gpu_id**(int): GPU id
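As a minimal illustration of the two calls above (not part of the shipped demo), the runtime can be switched between CPU and GPU from a command-line flag the way `infer.c` does it; the umbrella header name and the helper function name are assumptions for this sketch.

```c
#include "fastdeploy_capi/vision.h"  // assumed C-API umbrella header

// Build a RuntimeOption from a device flag: 0 -> CPU, 1 -> GPU 0.
FD_C_RuntimeOptionWrapper* CreateOption(int device_flag) {
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  if (device_flag == 1) {
    FD_C_RuntimeOptionWrapperUseGpu(option, 0);  // run on GPU 0
  } else {
    FD_C_RuntimeOptionWrapperUseCpu(option);     // default to CPU
  }
  return option;
}
```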

### Model

```c
FD_C_DBDetectorWrapper* FD_C_CreateDBDetectorWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```

> Create a DBDetector model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption that holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_dbdetector_wrapper**(FD_C_DBDetectorWrapper*): Pointer to manipulate DBDetector object.

```c
FD_C_ClassifierWrapper* FD_C_CreateClassifierWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Classifier model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption that holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_classifier_wrapper**(FD_C_ClassifierWrapper*): Pointer to manipulate Classifier object.

```c
FD_C_RecognizerWrapper* FD_C_CreateRecognizerWrapper(
    const char* model_file, const char* params_file, const char* label_path,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Recognizer model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **label_path**(const char*): Label file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption that holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_recognizer_wrapper**(FD_C_RecognizerWrapper*): Pointer to manipulate Recognizer object.

```c
FD_C_PPOCRv3Wrapper* FD_C_CreatePPOCRv3Wrapper(
    FD_C_DBDetectorWrapper* det_model,
    FD_C_ClassifierWrapper* cls_model,
    FD_C_RecognizerWrapper* rec_model
)
```
> Create a PPOCRv3 model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **det_model**(FD_C_DBDetectorWrapper*): DBDetector model
> * **cls_model**(FD_C_ClassifierWrapper*): Classifier model
> * **rec_model**(FD_C_RecognizerWrapper*): Recognizer model
>
> **Return**
>
> * **fd_c_ppocrv3_wrapper**(FD_C_PPOCRv3Wrapper*): Pointer to manipulate PPOCRv3 object.

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to cv::Mat object which holds the image.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): save path
> * **img**(FD_C_Mat): pointer to cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): bool to indicate success or failure

#### Prediction

```c
FD_C_Bool FD_C_PPOCRv3WrapperPredict(
    FD_C_PPOCRv3Wrapper* fd_c_ppocrv3_wrapper,
    FD_C_Mat img,
    FD_C_OCRResult* result)
```
>
> Predict an image, and generate the result.
>
> **Params**
> * **fd_c_ppocrv3_wrapper**(FD_C_PPOCRv3Wrapper*): Pointer to manipulate PPOCRv3 object.
> * **img**(FD_C_Mat): pointer to cv::Mat object, which can be obtained by the FD_C_Imread interface
> * **result**(FD_C_OCRResult*): OCR prediction results, including the position of the detection box from the detection model, the classification of the direction from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for OCRResult

#### Result

```c
FD_C_Mat FD_C_VisOcr(FD_C_Mat im, FD_C_OCRResult* ocr_result)
```
>
> Visualize OCR results and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to the input image
> * **ocr_result**(FD_C_OCRResult*): pointer to the C FD_C_OCRResult structure
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to the visualization image.
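Putting the pieces together, a condensed sketch of the end-to-end flow in `infer.c` looks roughly like the following. The umbrella header, the `FD_C_ModelFormat_PADDLE` enum value, the `inference.pdmodel`/`inference.pdiparams` file names, and stack-allocating the result struct are assumptions made for this sketch; the shipped demo may use small helper constructors instead.

```c
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // assumed C-API umbrella header

int main() {
  // 1. Runtime configuration (CPU here; see FD_C_RuntimeOptionWrapperUseGpu for GPU).
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(option);

  // 2. Build the three models; FD_C_ModelFormat_PADDLE is the assumed Paddle enum value.
  FD_C_DBDetectorWrapper* det = FD_C_CreateDBDetectorWrapper(
      "ch_PP-OCRv3_det_infer/inference.pdmodel",
      "ch_PP-OCRv3_det_infer/inference.pdiparams", option, FD_C_ModelFormat_PADDLE);
  FD_C_ClassifierWrapper* cls = FD_C_CreateClassifierWrapper(
      "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
      "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams", option, FD_C_ModelFormat_PADDLE);
  FD_C_RecognizerWrapper* rec = FD_C_CreateRecognizerWrapper(
      "ch_PP-OCRv3_rec_infer/inference.pdmodel",
      "ch_PP-OCRv3_rec_infer/inference.pdiparams",
      "ppocr_keys_v1.txt", option, FD_C_ModelFormat_PADDLE);

  // 3. Assemble the pipeline and run prediction on one image.
  FD_C_PPOCRv3Wrapper* ocr = FD_C_CreatePPOCRv3Wrapper(det, cls, rec);
  FD_C_Mat im = FD_C_Imread("12.jpg");
  FD_C_OCRResult result = {0};  // the shipped demo may use a dedicated result constructor
  if (!FD_C_PPOCRv3WrapperPredict(ocr, im, &result)) {
    printf("Prediction failed.\n");
    return -1;
  }

  // 4. Visualize the boxes and text, then save the image.
  FD_C_Mat vis = FD_C_VisOcr(im, &result);
  FD_C_Imwrite("vis_result.jpg", vis);
  printf("Detected %zu text boxes, visualization saved to vis_result.jpg\n",
         result.boxes.size);
  return 0;
}
```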
## Other Documents

- [PPOCR Model Description](../../)
- [PPOCRv3 Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv3/c/README_CN.md (new file, 247 lines)
@@ -0,0 +1,247 @@
[English](README.md) | Simplified Chinese
# PPOCRv3 C Deployment Example

This directory provides `infer.c`, which calls the C API to quickly deploy the PPOCRv3 model on CPU/GPU.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the model, image and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar

wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar -xvf ch_PP-OCRv3_rec_infer.tar

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg

wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# CPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

The above commands only apply to Linux or macOS. For the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment first:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">

## PPOCRv3 C API

### Configuration

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption configuration object and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```
> Enable GPU inference
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
> * **gpu_id**(int): GPU card number

### Model

```c
FD_C_DBDetectorWrapper* FD_C_CreateDBDetectorWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```

> Create a DBDetector model and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_dbdetector_wrapper**(FD_C_DBDetectorWrapper*): Pointer to the DBDetector model object

```c
FD_C_ClassifierWrapper* FD_C_CreateClassifierWrapper(
    const char* model_file, const char* params_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Classifier model and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_classifier_wrapper**(FD_C_ClassifierWrapper*): Pointer to the Classifier model object

```c
FD_C_RecognizerWrapper* FD_C_CreateRecognizerWrapper(
    const char* model_file, const char* params_file, const char* label_path,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a Recognizer model and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **label_path**(const char*): Label file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
> * **fd_c_recognizer_wrapper**(FD_C_RecognizerWrapper*): Pointer to the Recognizer model object

```c
FD_C_PPOCRv3Wrapper* FD_C_CreatePPOCRv3Wrapper(
    FD_C_DBDetectorWrapper* det_model,
    FD_C_ClassifierWrapper* cls_model,
    FD_C_RecognizerWrapper* rec_model
)
```
> Create a PPOCRv3 model and return a pointer to manipulate it.
>
> **Params**
>
> * **det_model**(FD_C_DBDetectorWrapper*): DBDetector model
> * **cls_model**(FD_C_ClassifierWrapper*): Classifier model
> * **rec_model**(FD_C_RecognizerWrapper*): Recognizer model
>
> **Return**
>
> * **fd_c_ppocrv3_wrapper**(FD_C_PPOCRv3Wrapper*): Pointer to the PPOCRv3 model object

#### Read and write images

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): Image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): Pointer to the cv::Mat object holding the image data.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): Path to save the image
> * **img**(FD_C_Mat): Pointer to the image data
>
> **Return**
>
> * **result**(FD_C_Bool): Indicates whether the operation succeeded

#### Predict Function

```c
FD_C_Bool FD_C_PPOCRv3WrapperPredict(
    FD_C_PPOCRv3Wrapper* fd_c_ppocrv3_wrapper,
    FD_C_Mat img,
    FD_C_OCRResult* result)
```
>
> Model prediction interface: input an image and generate the result directly.
>
> **Params**
> * **fd_c_ppocrv3_wrapper**(FD_C_PPOCRv3Wrapper*): Pointer to the PPOCRv3 model
> * **img**(FD_C_Mat): Pointer to the input image as a cv::Mat object, which can be obtained by calling FD_C_Imread
> * **result**(FD_C_OCRResult*): OCR prediction results, including the position of the detection box from the detection model, the direction classification from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for OCRResult

#### Prediction Result

```c
FD_C_Mat FD_C_VisOcr(FD_C_Mat im, FD_C_OCRResult* ocr_result)
```
>
> Visualize the result and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): Pointer to the input image
> * **ocr_result**(FD_C_OCRResult*): Pointer to the FD_C_OCRResult structure
>
> **Return**
> * **vis_im**(FD_C_Mat): Pointer to the visualization image

## Other Documents

- [PPOCR Model Description](../../)
- [PPOCRv3 Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv3/csharp/README.md (new executable file, 153 lines)
@@ -0,0 +1,153 @@
English | [简体中文](README_CN.md)
# PPOCRv3 C# Deployment Example

This directory provides `infer.cs` to finish the deployment of PPOCRv3 on CPU/GPU.

Before deployment, two steps require confirmation

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Please follow the instructions below to compile and test in Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download C# package management tool nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

Add the nuget program into the system variable **PATH**

## 2. Download model and image for test
> https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar # (Decompress it)
> https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
> https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

## 3. Compile example code

Open the `x64 Native Tools Command Prompt for VS 2019` command tool on Windows, cd to the demo path of PP-OCRv3 and execute the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv3\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more information about how to use the FastDeploy SDK to compile a project with Visual Studio 2019, please refer to
- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../docs/en/faq/use_sdk_on_windows.md)

## 4. Execute compiled program

fastdeploy.dll and related dynamic libraries are required by the program. FastDeploy provides a script to copy all required dlls to your program path.

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv3\csharp\build\Release
```

Then you can run your program and test the model with an image

```shell
cd Release
# CPU inference
infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

## PPOCRv3 C# Interface

### Model Class

```c#
fastdeploy.vision.ocr.DBDetector(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> DBDetector initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

```c#
fastdeploy.vision.ocr.Classifier(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Classifier initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

```c#
fastdeploy.vision.ocr.Recognizer(
    string model_file,
    string params_file,
    string label_path,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Recognizer initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **label_path**(str): Label file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

```c#
fastdeploy.pipeline.PPOCRv3Model(
    DBDetector dbdetector,
    Classifier classifier,
    Recognizer recognizer)
```

> PPOCRv3Model initialization

> **Params**

>> * **dbdetector**(DBDetector): DBDetector model
>> * **classifier**(Classifier): Classifier model
>> * **recognizer**(Recognizer): Recognizer model

#### Predict Function

```c#
fastdeploy.OCRResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the prediction result directly.
>
> **Params**
>
>> * **im**(Mat): Input image in HWC layout and BGR format
>>
> **Return**
>
>> * **result**: OCR prediction results, including the position of the detection box from the detection model, the classification of the direction from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for OCRResult
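For orientation, a compressed sketch of how `infer.cs` wires these classes together is shown below. The file names inside each model directory and the `RuntimeOption.UseCpu()`/`UseGpu()` helpers are assumptions to adapt to your setup.

```c#
// Assumes the models and dictionary downloaded in step 2 sit next to the executable.
var option = new fastdeploy.RuntimeOption();
option.UseCpu();  // switch to the GPU option when a CUDA device is available

var det = new fastdeploy.vision.ocr.DBDetector(
    "ch_PP-OCRv3_det_infer/inference.pdmodel",
    "ch_PP-OCRv3_det_infer/inference.pdiparams", option);
var cls = new fastdeploy.vision.ocr.Classifier(
    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams", option);
var rec = new fastdeploy.vision.ocr.Recognizer(
    "ch_PP-OCRv3_rec_infer/inference.pdmodel",
    "ch_PP-OCRv3_rec_infer/inference.pdiparams", "ppocr_keys_v1.txt", option);

// Assemble the PP-OCRv3 pipeline and run it on one image.
var ppocr = new fastdeploy.pipeline.PPOCRv3Model(det, cls, rec);
var result = ppocr.Predict(OpenCvSharp.Cv2.ImRead("12.jpg"));
System.Console.WriteLine($"Recognized {result.text.Count} text boxes");
```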
## Other Documents

- [PPOCR Model Description](../../)
- [PPOCRv3 Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
examples/vision/ocr/PP-OCRv3/csharp/README_CN.md (new file, 153 lines)
@@ -0,0 +1,153 @@
[English](README.md) | Simplified Chinese
# PPOCRv3 C# Deployment Example

This directory provides `infer.cs`, which calls the C# API to quickly deploy the PPOCRv3 model on CPU/GPU.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Execute the following commands in this directory to compile and test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package manager nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

After downloading, add the program to the **PATH** environment variable

## 2. Download the model files and test image
> https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar # (decompress after downloading)
> https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
> https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
> https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

## 3. Compile the example code

The example code compiled in this document can be found in the extracted library. Compilation requires Visual Studio 2019. **Open the x64 Native Tools Command Prompt for VS 2019 on Windows** and start compiling with the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv3\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more details on building with a Visual Studio 2019 sln project or a CMake project, refer to
- [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
- [Various ways to use the FastDeploy C++ library on Windows](../../../../../docs/cn/faq/use_sdk_on_windows_build.md)

## 4. Run the executable

Note that when running on Windows, the libraries FastDeploy depends on must be copied to the directory of the executable, or added to the environment variables. FastDeploy provides a tool to quickly copy all dependent dll files to the directory of the executable. Use the following commands (the executable may be generated one directory level below Release; here it is assumed to be directly under Release)

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\ocr\PP-OCRv3\csharp\build\Release
```

After copying the dlls to that path, prepare the models and image, then run the executable with the following commands

```shell
cd Release
# CPU inference
infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# GPU inference
infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```

## PPOCRv3 C# Interface

### Model

```c#
fastdeploy.vision.ocr.DBDetector(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> DBDetector model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

```c#
fastdeploy.vision.ocr.Classifier(
    string model_file,
    string params_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Classifier model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

```c#
fastdeploy.vision.ocr.Recognizer(
    string model_file,
    string params_file,
    string label_path,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> Recognizer model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **label_path**(str): Label file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

```c#
fastdeploy.pipeline.PPOCRv3Model(
    DBDetector dbdetector,
    Classifier classifier,
    Recognizer recognizer)
```

> PPOCRv3Model loading and initialization.

> **Params**

>> * **dbdetector**(DBDetector): DBDetector model
>> * **classifier**(Classifier): Classifier model
>> * **recognizer**(Recognizer): Recognizer model

#### Predict Function

```c#
fastdeploy.OCRResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the prediction result directly.
>
> **Params**
>
>> * **im**(Mat): Input image, which must be in HWC layout and BGR format
>>
> **Return**
>
>> * **result**: OCR prediction results, including the position of the detection box from the detection model, the direction classification from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for OCRResult

## Other Documents

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/segmentation/paddleseg/cpu-gpu/c/README.md (new executable file, 184 lines)
@@ -0,0 +1,184 @@
English | [简体中文](README_CN.md)
# PaddleSeg C Deployment Example

This directory provides `infer.c` to finish the deployment of PaddleSeg on CPU/GPU.

Before deployment, two steps require confirmation

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the model and image files
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# CPU inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1
```

The above commands work for Linux or macOS. For the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

The visualized result after running is as follows

<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSeg C Interface

### RuntimeOption

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption object, and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```
> Enable GPU inference.
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to manipulate RuntimeOption object.
> * **gpu_id**(int): GPU id

### Model

```c
FD_C_PaddleSegWrapper* FD_C_CreatePaddleSegWrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```

> Create a PaddleSeg model object, and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **config_file**(const char*): Config file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption that holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): Pointer to manipulate PaddleSeg object.

#### Read and write image

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image, and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): image path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): pointer to cv::Mat object which holds the image.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): save path
> * **img**(FD_C_Mat): pointer to cv::Mat object
>
> **Return**
>
> * **result**(FD_C_Bool): bool to indicate success or failure

#### Prediction

```c
FD_C_Bool FD_C_PaddleSegWrapperPredict(
    FD_C_PaddleSegWrapper* fd_c_ppseg_wrapper,
    FD_C_Mat img,
    FD_C_SegmentationResult* result)
```
>
> Predict an image, and generate the result.
>
> **Params**
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): Pointer to manipulate PaddleSeg object.
> * **img**(FD_C_Mat): pointer to cv::Mat object, which can be obtained by the FD_C_Imread interface
> * **result**(FD_C_SegmentationResult*): Segmentation prediction results. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult

#### Result

```c
FD_C_Mat FD_C_VisSegmentation(FD_C_Mat im,
                              FD_C_SegmentationResult* result,
                              float weight)
```
>
> Visualize segmentation results and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): pointer to the input image
> * **result**(FD_C_SegmentationResult*): pointer to the C FD_C_SegmentationResult structure
> * **weight**(float): transparency weight used when blending the visualized result over the input image
>
> **Return**
> * **vis_im**(FD_C_Mat): pointer to the visualization image.
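A condensed sketch of the end-to-end flow in `infer.c` is shown below. The umbrella header, the `FD_C_ModelFormat_PADDLE` enum value, the `model.pdmodel`/`model.pdiparams`/`deploy.yaml` file names, and stack-allocating the result struct are assumptions made for this sketch; the shipped demo may use small helper constructors instead.

```c
#include <stdio.h>
#include "fastdeploy_capi/vision.h"  // assumed C-API umbrella header

int main() {
  // Runtime configuration (CPU here; see FD_C_RuntimeOptionWrapperUseGpu for GPU).
  FD_C_RuntimeOptionWrapper* option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(option);

  // Build the PaddleSeg model; FD_C_ModelFormat_PADDLE is the assumed Paddle enum value.
  FD_C_PaddleSegWrapper* model = FD_C_CreatePaddleSegWrapper(
      "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/model.pdmodel",
      "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/model.pdiparams",
      "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/deploy.yaml",
      option, FD_C_ModelFormat_PADDLE);

  // Run prediction on one image.
  FD_C_Mat im = FD_C_Imread("cityscapes_demo.png");
  FD_C_SegmentationResult result = {0};  // the shipped demo may use a dedicated constructor
  if (!FD_C_PaddleSegWrapperPredict(model, im, &result)) {
    printf("Prediction failed.\n");
    return -1;
  }

  // Blend the segmentation mask over the input with 50% transparency and save it.
  FD_C_Mat vis = FD_C_VisSegmentation(im, &result, 0.5f);
  FD_C_Imwrite("vis_result.jpg", vis);
  printf("Visualization saved to vis_result.jpg\n");
  return 0;
}
```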
## Other Documents

- [PPSegmentation Model Description](../../)
- [PaddleSeg Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/segmentation/paddleseg/cpu-gpu/c/README_CN.md (new file, 185 lines)
@@ -0,0 +1,185 @@
[English](README.md) | Simplified Chinese
# PaddleSeg C Deployment Example

This directory provides `infer.c`, which calls the C API to quickly deploy the PaddleSeg model on CPU/GPU.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, execute the following commands in this directory to complete the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the PP-LiteSeg model files and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# CPU inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1
```

The above commands only apply to Linux or macOS. For the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment first:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

The visualized result after running is as follows

<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSeg C API

### Configuration

```c
FD_C_RuntimeOptionWrapper* FD_C_CreateRuntimeOptionWrapper()
```

> Create a RuntimeOption configuration object and return a pointer to manipulate it.
>
> **Return**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseCpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper)
```

> Enable CPU inference
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object

```c
void FD_C_RuntimeOptionWrapperUseGpu(
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    int gpu_id)
```
> Enable GPU inference
>
> **Params**
>
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption object
> * **gpu_id**(int): GPU card number

### Model

```c
FD_C_PaddleSegWrapper* FD_C_CreatePaddleSegWrapper(
    const char* model_file, const char* params_file, const char* config_file,
    FD_C_RuntimeOptionWrapper* fd_c_runtime_option_wrapper,
    const FD_C_ModelFormat model_format
)
```
> Create a PaddleSeg model and return a pointer to manipulate it.
>
> **Params**
>
> * **model_file**(const char*): Model file path
> * **params_file**(const char*): Parameter file path
> * **config_file**(const char*): Config file path
> * **fd_c_runtime_option_wrapper**(FD_C_RuntimeOptionWrapper*): Pointer to the RuntimeOption, which holds the backend inference configuration
> * **model_format**(FD_C_ModelFormat): Model format
>
> **Return**
>
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): Pointer to the PaddleSeg model object

#### Read and write images

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
```

> Read an image and return a pointer to cv::Mat.
>
> **Params**
>
> * **imgpath**(const char*): Image file path
>
> **Return**
>
> * **imgmat**(FD_C_Mat): Pointer to the cv::Mat object holding the image data.

```c
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> Write an image to a file.
>
> **Params**
>
> * **savepath**(const char*): Path to save the image
> * **img**(FD_C_Mat): Pointer to the image data
>
> **Return**
>
> * **result**(FD_C_Bool): Indicates whether the operation succeeded

#### Predict Function

```c
FD_C_Bool FD_C_PaddleSegWrapperPredict(
    FD_C_PaddleSegWrapper* fd_c_ppseg_wrapper,
    FD_C_Mat img,
    FD_C_SegmentationResult* result)
```
>
> Model prediction interface: input an image and generate the segmentation result directly.
>
> **Params**
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): Pointer to the PaddleSeg model
> * **img**(FD_C_Mat): Pointer to the input image as a cv::Mat object, which can be obtained by calling FD_C_Imread
> * **result**(FD_C_SegmentationResult*): Segmentation prediction results. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult

#### Prediction Result

```c
FD_C_Mat FD_C_VisSegmentation(FD_C_Mat im,
                              FD_C_SegmentationResult* result,
                              float weight)
```
>
> Visualize the result and return the visualization image.
>
> **Params**
> * **im**(FD_C_Mat): Pointer to the input image
> * **result**(FD_C_SegmentationResult*): Pointer to the FD_C_SegmentationResult structure
> * **weight**(float): Transparency weight of the visualization
>
> **Return**
> * **vis_im**(FD_C_Mat): Pointer to the visualization image

## Other Documents

- [PPSegmentation Model Description](../../)
- [PaddleSeg Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README.md (new executable file, 104 lines)
@@ -0,0 +1,104 @@
English | [简体中文](README_CN.md)
# PaddleSeg C# Deployment Example

This directory provides `infer.cs` to finish the deployment of PaddleSeg on CPU/GPU.

Before deployment, two steps require confirmation

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Please follow the instructions below to compile and test in Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download C# package management tool nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

Add the nuget program into the system variable **PATH**

## 2. Download model and image for test
> https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz # (Decompress it)
> https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

## 3. Compile example code

Open the `x64 Native Tools Command Prompt for VS 2019` command tool on Windows, cd to the demo path of PaddleSeg and execute the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\segmentation\paddleseg\cpu-gpu\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more information about how to use the FastDeploy SDK to compile a project with Visual Studio 2019, please refer to
- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../docs/en/faq/use_sdk_on_windows.md)

## 4. Execute compiled program

fastdeploy.dll and related dynamic libraries are required by the program. FastDeploy provides a script to copy all required dlls to your program path.

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\segmentation\paddleseg\cpu-gpu\csharp\build\Release
```

Then you can run your program and test the model with an image

```shell
cd Release
# CPU inference
infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1
```

## PaddleSeg C# Interface

### Model Class

```c#
fastdeploy.vision.segmentation.PaddleSeg(
    string model_file,
    string params_file,
    string config_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> PaddleSeg initialization

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **config_file**(str): Config file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

#### Predict Function

```c#
fastdeploy.SegmentationResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the prediction result directly.
>
> **Params**
>
>> * **im**(Mat): Input image in HWC layout and BGR format
>>
> **Return**
>
>> * **result**: Segmentation prediction results. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult
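A minimal usage sketch of the class above, condensed from `infer.cs`, is shown below. The `model.pdmodel`/`model.pdiparams`/`deploy.yaml` file names and the `RuntimeOption.UseCpu()` helper are assumptions to adapt to your setup.

```c#
// Load the PP-LiteSeg model downloaded in step 2 and run one image.
var option = new fastdeploy.RuntimeOption();
option.UseCpu();  // switch to the GPU option when a CUDA device is available

var model = new fastdeploy.vision.segmentation.PaddleSeg(
    "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/model.pdmodel",
    "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/model.pdiparams",
    "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/deploy.yaml",
    option);

OpenCvSharp.Mat im = OpenCvSharp.Cv2.ImRead("cityscapes_demo.png");
fastdeploy.SegmentationResult result = model.Predict(im);
// The result can then be handed to FastDeploy's visualization helpers or post-processed directly.
```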
## Other Documents

- [PPSegmentation Model Description](../../)
- [PaddleSeg Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/segmentation/paddleseg/cpu-gpu/csharp/README_CN.md (new file, 102 lines)
@@ -0,0 +1,102 @@
[English](README.md) | Simplified Chinese
# PaddleSeg C# Deployment Example

This directory provides `infer.cs`, which calls the C# API to quickly deploy the PaddleSeg model on CPU/GPU.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Execute the following commands in this directory to compile and test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.

## 1. Download the C# package manager nuget client
> https://dist.nuget.org/win-x86-commandline/v6.4.0/nuget.exe

After downloading, add the program to the **PATH** environment variable

## 2. Download the model files and test image
> https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz # (decompress after downloading)
> https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

## 3. Compile the example code

The example code compiled in this document can be found in the extracted library. Compilation requires Visual Studio 2019. **Open the x64 Native Tools Command Prompt for VS 2019 on Windows** and start compiling with the following commands

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\segmentation\paddleseg\cpu-gpu\csharp

mkdir build && cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -DFASTDEPLOY_INSTALL_DIR=D:\Download\fastdeploy-win-x64-gpu-x.x.x -DCUDA_DIRECTORY="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

nuget restore
msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```

For more details on building with a Visual Studio 2019 sln project or a CMake project, refer to
- [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
- [Various ways to use the FastDeploy C++ library on Windows](../../../../../docs/cn/faq/use_sdk_on_windows_build.md)

## 4. Run the executable

Note that when running on Windows, the libraries FastDeploy depends on must be copied to the directory of the executable, or added to the environment variables. FastDeploy provides a tool to quickly copy all dependent dll files to the directory of the executable. Use the following commands (the executable may be generated one directory level below Release; here it is assumed to be directly under Release)

```shell
cd D:\Download\fastdeploy-win-x64-gpu-x.x.x

fastdeploy_init.bat install %cd% D:\Download\fastdeploy-win-x64-gpu-x.x.x\examples\vision\segmentation\paddleseg\cpu-gpu\csharp\build\Release
```

After copying the dlls to that path, prepare the model and image, then run the executable with the following commands

```shell
cd Release
# CPU inference
infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1
```

## PaddleSeg C# Interface

### Model

```c#
fastdeploy.vision.segmentation.PaddleSeg(
    string model_file,
    string params_file,
    string config_file,
    fastdeploy.RuntimeOption runtime_option = null,
    fastdeploy.ModelFormat model_format = ModelFormat.PADDLE)
```

> PaddleSeg model loading and initialization.

> **Params**

>> * **model_file**(str): Model file path
>> * **params_file**(str): Parameter file path
>> * **config_file**(str): Config file path
>> * **runtime_option**(RuntimeOption): Backend inference configuration. null by default, which means the default configuration is used
>> * **model_format**(ModelFormat): Model format. PADDLE format by default

#### Predict Function

```c#
fastdeploy.SegmentationResult Predict(OpenCvSharp.Mat im)
```

> Model prediction interface. Input an image and get the prediction result directly.
>
> **Params**
>
>> * **im**(Mat): Input image, which must be in HWC layout and BGR format
>>
> **Return**
>
>> * **result**: Segmentation prediction results. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult

## Other Documents

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)