[Doc]Add English version of documents in docs/cn and api/vision_results (#931)

* First commit

* Add a missed translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one writing

* Standardize one writing

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Change the links in the README in vision_results/ to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md
This commit is contained in:
charl-u
2022-12-22 18:15:01 +08:00
committed by GitHub
parent ac255b8ab8
commit 02eab973ce
80 changed files with 1430 additions and 53 deletions

View File

@@ -1,3 +1,4 @@
[English](README_EN.md)| 简体中文
# 视觉模型预测结果说明
FastDeploy根据视觉模型的任务类型定义了不同的结构体(`fastdeploy/vision/common/result.h`)来表达模型预测结果,具体如下表所示

View File

@@ -0,0 +1,18 @@
[简体中文](README_CN.md)| English
# Prediction Results of the Vision Model
FastDeploy defines different structures (`fastdeploy/vision/common/result.h`) to express the model prediction results according to the task type of the vision model, as shown in the table below.
| Structure | Document | Description | Corresponding Model |
|:------------------------|:----------------------------------------------|:------------------|:------------------------|
| ClassifyResult | [C++/Python document](./classification_result_EN.md) | Image classification result | ResNet50, MobileNetV3, etc. |
| SegmentationResult | [C++/Python document](./segmentation_result_EN.md) | Image segmentation result | PP-HumanSeg, PP-LiteSeg, etc. |
| DetectionResult | [C++/Python document](./detection_result_EN.md) | Target detection result | PP-YOLOE, YOLOv7, etc. |
| FaceDetectionResult | [C++/Python document](./face_detection_result_EN.md) | Result of face detection | SCRFD, RetinaFace, etc. |
| FaceAlignmentResult | [C++/Python document](./face_alignment_result_EN.md) | Face alignment result (face keypoint detection) | PFLD model, etc. |
| KeyPointDetectionResult | [C++/Python document](./keypointdetection_result_EN.md) | Result of keypoint detection | PP-Tinypose model, etc. |
| FaceRecognitionResult | [C++/Python document](./face_recognition_result_EN.md) | Result of face recognition | ArcFace, CosFace, etc. |
| MattingResult | [C++/Python document](./matting_result_EN.md) | Image/video matting result | MODNet, RVM, etc. |
| OCRResult | [C++/Python document](./ocr_result_EN.md) | Text box detection, classification and text recognition result | OCR, etc. |
| MOTResult | [C++/Python document](./mot_result_EN.md) | Multi-target tracking result | pptracking, etc. |
| HeadPoseResult | [C++/Python document](./headpose_result_EN.md) | Head pose estimation result | FSANet, etc. |

View File

@@ -1,3 +1,4 @@
中文 | [English](classification_result_EN.md)
# ClassifyResult 图像分类结果
ClassifyResult代码定义在`fastdeploy/vision/common/result.h`中,用于表明图像的分类结果和置信度。

View File

@@ -0,0 +1,29 @@
English | [中文](classification_result.md)
# Image Classification Result
The ClassifyResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the classification result and confidence level of the image.
## C++ Definition
`fastdeploy::vision::ClassifyResult`
```c++
struct ClassifyResult {
std::vector<int32_t> label_ids;
std::vector<float> scores;
void Clear();
std::string Str();
};
```
- **label_ids**: Member variable which indicates the classification results of a single image. Its length is determined by the topk passed in when using the classification model, e.g. it can contain the top 5 classification results.
- **scores**: Member variable which indicates the confidence level of a single image on the corresponding classification results. Its length is determined by the topk passed in when using the classification model, e.g. it can contain the top 5 classification confidence levels.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.ClassifyResult`
- **label_ids**(list of int): Member variable which indicates the classification results of a single image. Its length is determined by the topk passed in when using the classification model, e.g. it can contain the top 5 classification results.
- **scores**(list of float): Member variable which indicates the confidence level of a single image on the corresponding classification results. Its length is determined by the topk passed in when using the classification model, e.g. it can contain the top 5 classification confidence levels.
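As an illustration (not part of the official API reference), here is a minimal sketch of reading these fields in Python, assuming a classification model such as PaddleClas; the model and image paths are hypothetical:
```python
import cv2
import fastdeploy as fd

# Hypothetical paths; any FastDeploy classification model returns a ClassifyResult.
model = fd.vision.classification.PaddleClasModel(
    "inference.pdmodel", "inference.pdiparams", "inference_cls.yaml")
result = model.predict(cv2.imread("test.jpg"))

# label_ids and scores are aligned index by index.
for label_id, score in zip(result.label_ids, result.scores):
    print(f"class {label_id}: confidence {score:.4f}")
```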

View File

@@ -1,3 +1,4 @@
中文 [English](detection_result_EN.md)
# DetectionResult 目标检测结果
DetectionResult代码定义在`fastdeploy/vision/common/result.h`中,用于表明图像检测出来的目标框、目标类别和目标置信度。

View File

@@ -0,0 +1,66 @@
English | [中文](detection_result.md)
# Target Detection Result
The DetectionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the target boxes, target classes and target confidence levels detected in the image.
## C++ Definition
```c++
fastdeploy::vision::DetectionResult
```
```c++
struct DetectionResult {
std::vector<std::array<float, 4>> boxes;
std::vector<float> scores;
std::vector<int32_t> label_ids;
std::vector<Mask> masks;
bool contain_masks = false;
void Clear();
std::string Str();
};
```
- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. `boxes.size()` indicates the number of boxes, each box is represented by 4 float values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top left and bottom right corner.
- **scores**: Member variable which indicates the confidence level of all targets detected in a single image, where the number of elements is the same as `boxes.size()`.
- **label_ids**: Member variable which indicates all target categories detected in a single image, where the number of elements is the same as `boxes.size()`.
- **masks**: Member variable which indicates all detected instance masks of a single image, where the number of elements and the shape size are the same as `boxes`.
- **contain_masks**: Member variable which indicates whether the detected result contains instance masks, which is generally true for the instance segmentation model.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
```c++
fastdeploy::vision::Mask
```
```c++
struct Mask {
std::vector<int32_t> data;
std::vector<int64_t> shape; // (H,W) ...
void Clear();
std::string Str();
};
```
- **data**: Member variable which indicates a detected mask.
- **shape**: Member variable which indicates the shape of the mask, e.g. (h,w).
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
```python
fastdeploy.vision.DetectionResult
```
- **boxes**(list of list(float)): Member variable which indicates the coordinates of all detected target boxes in a single image. It is a list, and each element in it is also a list of length 4, representing a box with 4 float values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top left and bottom right corner.
- **scores**(list of float): Member variable which indicates the confidence level of all targets detected in a single image.
- **label_ids**(list of int): Member variable which indicates all target categories detected in a single image.
- **masks**: Member variable which indicates all detected instance masks of a single image, where the number of elements and the shape size are the same as `boxes`.
- **contain_masks**: Member variable which indicates whether the detected result contains instance masks, which is generally true for the instance segmentation model.
```python
fastdeploy.vision.Mask
```
- **data**: Member variable which indicates a detected mask.
- **shape**: Member variable which indicates the shape of the mask, e.g. (h,w).
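A minimal sketch of consuming these fields, assuming `result` is a DetectionResult already obtained from a detector's `predict` call (the variable names are illustrative):
```python
# boxes, scores and label_ids are aligned index by index.
for box, score, label_id in zip(result.boxes, result.scores, result.label_ids):
    xmin, ymin, xmax, ymax = box
    print(f"label {label_id}, score {score:.3f}, "
          f"box ({xmin:.1f}, {ymin:.1f})-({xmax:.1f}, {ymax:.1f})")

# For instance segmentation models, each mask is aligned with its box.
if result.contain_masks:
    mask = result.masks[0]  # mask.data is a flat vector; mask.shape is (h, w)
```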

View File

@@ -1,3 +1,4 @@
中文 [English](face_alignment_result_EN.md)
# FaceAlignmentResult 人脸对齐(人脸关键点检测)结果
FaceAlignmentResult 代码定义在`fastdeploy/vision/common/result.h`中,用于表明人脸landmarks。

View File

@@ -0,0 +1,26 @@
English | [中文](face_alignment_result.md)
# Face Alignment Result
The FaceAlignmentResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate face landmarks.
## C++ Definition
`fastdeploy::vision::FaceAlignmentResult`
```c++
struct FaceAlignmentResult {
std::vector<std::array<float, 2>> landmarks;
void Clear();
std::string Str();
};
```
- **landmarks**: Member variable which indicates all the key points detected in a single face image.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.FaceAlignmentResult`
- **landmarks**(list of list(float)): Member variable which indicates all the key points detected in a single face image.
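For example, the landmarks can be viewed as an (n_points, 2) array. A sketch, assuming `result` is a FaceAlignmentResult and numpy is available:
```python
import numpy as np

# Each row of pts is one (x, y) keypoint of the face.
pts = np.asarray(result.landmarks)
print(pts.shape)  # e.g. (98, 2) for a 98-point PFLD model
```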

View File

@@ -1,3 +1,4 @@
中文 | [English](face_detection_result_EN.md)
# FaceDetectionResult 人脸检测结果
FaceDetectionResult 代码定义在`fastdeploy/vision/common/result.h`中,用于表明人脸检测出来的目标框、人脸landmarks、目标置信度和每张人脸的landmark数量。

View File

@@ -0,0 +1,35 @@
English | [中文](face_detection_result.md)
# Face Detection Result
The FaceDetectionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the detected target boxes, face landmarks, target confidence levels and the number of landmarks per face.
## C++ Definition
`fastdeploy::vision::FaceDetectionResult`
```c++
struct FaceDetectionResult {
std::vector<std::array<float, 4>> boxes;
std::vector<std::array<float, 2>> landmarks;
std::vector<float> scores;
int landmarks_per_face;
void Clear();
std::string Str();
};
```
- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. `boxes.size()` indicates the number of boxes, each box is represented by 4 float values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top left and bottom right corner.
- **scores**: Member variable which indicates the confidence level of all targets detected in a single image, where the number of elements is the same as `boxes.size()`.
- **landmarks**: Member variable which indicates the keypoints of all faces detected in a single image, where the number of elements is the same as `boxes.size()`.
- **landmarks_per_face**: Member variable which indicates the number of keypoints in each face box.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.FaceDetectionResult`
- **boxes**(list of list(float)): Member variable which indicates the coordinates of all detected target boxes in a single image. It is a list, and each element in it is also a list of length 4, representing a box with 4 float values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top left and bottom right corner.
- **scores**(list of float): Member variable which indicates the confidence level of all targets detected in a single image.
- **landmarks**(list of list(float)): Member variable which indicates the keypoints of all faces detected in a single image.
- **landmarks_per_face**(int): Member variable which indicates the number of keypoints in each face box.
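Since `landmarks` is stored flat across all faces, `landmarks_per_face` is what lets you slice it per face. A sketch, assuming `result` is a FaceDetectionResult from a `predict` call:
```python
k = result.landmarks_per_face
for i, (box, score) in enumerate(zip(result.boxes, result.scores)):
    # The keypoints of face i occupy a contiguous block of k entries.
    face_landmarks = result.landmarks[i * k:(i + 1) * k]
    print(f"face {i}: score {score:.3f}, {len(face_landmarks)} keypoints")
```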

View File

@@ -1,3 +1,4 @@
中文 | [English](face_recognition_result_EN.md)
# FaceRecognitionResult 人脸识别结果
FaceRecognitionResult 代码定义在`fastdeploy/vision/common/result.h`中,用于表明人脸识别模型对图像特征的embedding。

View File

@@ -0,0 +1,26 @@
English | [中文](face_recognition_result.md)
# Face Recognition Result
The FaceRecognitionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the image feature embedding produced by the face recognition model.
## C++ Definition
`fastdeploy::vision::FaceRecognitionResult`
```c++
struct FaceRecognitionResult {
std::vector<float> embedding;
void Clear();
std::string Str();
};
```
- **embedding**: Member variable which indicates the final extracted feature embedding of the face recognition model, and can be used to calculate the facial feature similarity.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.FaceRecognitionResult`
- **embedding**(list of float): Member variable which indicates the final extracted feature embedding of the face recognition model, and can be used to calculate the facial feature similarity.
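Since the embedding is meant for similarity computation, a common pattern is cosine similarity between two results. A sketch, assuming `result_a` and `result_b` are FaceRecognitionResult objects obtained from two face images:
```python
import numpy as np

def cosine_similarity(emb_a, emb_b):
    # Higher values mean the two faces are more likely the same identity.
    a, b = np.asarray(emb_a), np.asarray(emb_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(cosine_similarity(result_a.embedding, result_b.embedding))
```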

View File

@@ -1,3 +1,4 @@
中文 | [English](headpose_result_EN.md)
# HeadPoseResult 头部姿态结果
HeadPoseResult 代码定义在`fastdeploy/vision/common/result.h`中,用于表明头部姿态结果。

View File

@@ -0,0 +1,26 @@
English | [中文](headpose_result.md)
# Head Pose Result
The HeadPoseResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the head pose result.
## C++ Definition
`fastdeploy::vision::HeadPoseResult`
```c++
struct HeadPoseResult {
std::vector<float> euler_angles;
void Clear();
std::string Str();
};
```
- **euler_angles**: Member variable which indicates the Euler angles predicted for a single face image, stored in the order (yaw, pitch, roll), with yaw representing the horizontal turn angle, pitch representing the vertical angle, and roll representing the roll angle, all with a value range of [-90,+90].
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.HeadPoseResult`
- **euler_angles**(list of float): Member variable which indicates the Euler angles predicted for a single face image, stored in the order (yaw, pitch, roll), with yaw representing the horizontal turn angle, pitch representing the vertical angle, and roll representing the roll angle, all with a value range of [-90,+90].
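A sketch of unpacking the angles, assuming `result` is a HeadPoseResult:
```python
# euler_angles is stored in (yaw, pitch, roll) order, each in [-90, +90] degrees.
yaw, pitch, roll = result.euler_angles
print(f"yaw={yaw:.1f}, pitch={pitch:.1f}, roll={roll:.1f}")
```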

View File

@@ -1,3 +1,4 @@
中文 | [English](keypointdetection_result_EN.md)
# KeyPointDetectionResult 关键点检测结果
KeyPointDetectionResult 代码定义在`fastdeploy/vision/common/result.h`中,用于表明图像中目标行为的各个关键点坐标和置信度。
@@ -16,10 +17,12 @@ struct KeyPointDetectionResult {
};
```
- **keypoints**: 成员变量,表示识别到的目标行为的关键点坐标。
  `keypoints.size()= N * J`
  - `N`:图片中的目标数量
  - `J`:num_joints(一个目标的关键点数量)
- **scores**: 成员变量,表示识别到的目标行为的关键点坐标的置信度。
  `scores.size()= N * J`
  - `N`:图片中的目标数量
  - `J`:num_joints(一个目标的关键点数量)
- **num_joints**: 成员变量,一个目标的关键点数量
@@ -32,10 +35,10 @@ struct KeyPointDetectionResult {
- **keypoints**(list of list(float)): 成员变量,表示识别到的目标行为的关键点坐标。
  `keypoints.size()= N * J`
  - `N`:图片中的目标数量
  - `J`:num_joints(关键点数量)
- **scores**(list of float): 成员变量,表示识别到的目标行为的关键点坐标的置信度。
  `scores.size()= N * J`
  - `N`:图片中的目标数量
  - `J`:num_joints(一个目标的关键点数量)
- **num_joints**(int): 成员变量,一个目标的关键点数量

View File

@@ -0,0 +1,44 @@
English | [中文](keypointdetection_result.md)
# Keypoint Detection Result
The KeyPointDetectionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the coordinates and confidence level of each keypoint of the target's behavior in the image.
## C++ Definition
`fastdeploy::vision::KeyPointDetectionResult`
```c++
struct KeyPointDetectionResult {
std::vector<std::array<float, 2>> keypoints;
std::vector<float> scores;
int num_joints = -1;
void Clear();
std::string Str();
};
```
- **keypoints**: Member variable which indicates the coordinates of the identified target behavior keypoint.
`keypoints.size() = N * J`:
- `N`: the number of targets in the image
- `J`: num_joints (the number of keypoints of a target)
- **scores**: Member variable which indicates the confidence level of the keypoint coordinates of the identified target behavior.
`scores.size() = N * J`:
- `N`: the number of targets in the image
- `J`: num_joints (the number of keypoints of a target)
- **num_joints**: Member variable which indicates the number of keypoints of a target.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.KeyPointDetectionResult`
- **keypoints**(list of list(float)): Member variable which indicates the coordinates of the identified target behavior keypoint.
`keypoints.size() = N * J`:
- `N`: the number of targets in the image
- `J`: num_joints (the number of keypoints of a target)
- **scores**(list of float): Member variable which indicates the confidence level of the keypoint coordinates of the identified target behavior.
`scores.size() = N * J`:
- `N`: the number of targets in the image
- `J`: num_joints (the number of keypoints of a target)
- **num_joints**(int): Member variable which indicates the number of keypoints of a target.
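Because keypoints are stored flat as N * J rows, reshaping with `num_joints` recovers the per-target layout. A sketch, assuming `result` is a KeyPointDetectionResult and numpy is available:
```python
import numpy as np

# Recover per-target arrays: kpts[n, j] is the (x, y) of joint j of target n.
kpts = np.asarray(result.keypoints).reshape(-1, result.num_joints, 2)
conf = np.asarray(result.scores).reshape(-1, result.num_joints)
print(f"{kpts.shape[0]} targets, {result.num_joints} keypoints each")
```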

View File

@@ -1,3 +1,4 @@
中文 | [English](matting_result_EN.md)
# MattingResult 抠图结果
MattingResult 代码定义在`fastdeploy/vision/common/result.h`中,用于表明模型预测的alpha透明度的值、预测的前景等。

View File

@@ -0,0 +1,37 @@
English | [中文](matting_result.md)
# Matting Result
The MattingResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the predicted alpha transparency value, the predicted foreground, etc.
## C++ Definition
`fastdeploy::vision::MattingResult`
```c++
struct MattingResult {
std::vector<float> alpha;
std::vector<float> foreground;
std::vector<int64_t> shape;
bool contain_foreground = false;
void Clear();
std::string Str();
};
```
- **alpha**: A one-dimensional vector indicating the predicted alpha transparency values. The value range is [0., 1.] and the length is h*w, where h and w are the height and width of the input image respectively.
- **foreground**: A one-dimensional vector indicating the predicted foreground. The value range is [0., 255.] and the length is h*w*c, where h and w are the height and width of the input image and c is generally 3. This vector is valid only when the model itself predicts the foreground.
- **contain_foreground**: Used to indicate whether the result contains foreground.
- **shape**: Used to indicate the shape of the output. When contain_foreground is false, the shape only contains (h,w), while when contain_foreground is true, the shape contains (h,w,c), in which c is generally 3.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.MattingResult`
- **alpha**(list of float): A one-dimensional vector indicating the predicted alpha transparency values. The value range is [0., 1.] and the length is h*w, where h and w are the height and width of the input image respectively.
- **foreground**(list of float): A one-dimensional vector indicating the predicted foreground. The value range is [0., 255.] and the length is h*w*c, where h and w are the height and width of the input image and c is generally 3. This vector is valid only when the model itself predicts the foreground.
- **contain_foreground**(bool): Used to indicate whether the result contains foreground.
- **shape**(list of int): Used to indicate the shape of the output. When contain_foreground is false, the shape only contains (h,w), while when contain_foreground is true, the shape contains (h,w,c), in which c is generally 3.
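A sketch of compositing with these fields, assuming `result` is a MattingResult whose model predicts the foreground and numpy is available; the white background is an arbitrary choice:
```python
import numpy as np

h, w = result.shape[0], result.shape[1]
alpha = np.asarray(result.alpha).reshape(h, w, 1)        # values in [0., 1.]
if result.contain_foreground:
    fg = np.asarray(result.foreground).reshape(h, w, 3)  # values in [0., 255.]
    bg = np.full((h, w, 3), 255.0)                       # plain white background
    composite = (alpha * fg + (1.0 - alpha) * bg).astype("uint8")
```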

View File

@@ -1,3 +1,4 @@
中文 | [English](mot_result_EN.md)
# MOTResult 多目标跟踪结果
MOTResult代码定义在`fastdeploy/vision/common/result.h`中,用于表明多目标跟踪中检测出来的目标框、目标跟踪id、目标类别和目标置信度。

View File

@@ -0,0 +1,41 @@
English | [中文](mot_result.md)
# Multi-target Tracking Result
The MOTResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the detected target boxes, target tracking ids, target classes and target confidence levels in a multi-target tracking task.
## C++ Definition
```c++
fastdeploy::vision::MOTResult
```
```c++
struct MOTResult{
// left top right bottom
std::vector<std::array<int, 4>> boxes;
std::vector<int> ids;
std::vector<float> scores;
std::vector<int> class_ids;
void Clear();
std::string Str();
};
```
- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single frame. `boxes.size()` indicates the number of boxes, each box is represented by 4 int values in order of xmin, ymin, xmax, ymax, i.e. the coordinates of the top left and bottom right corner.
- **ids**: Member variable which indicates the ids of all targets in a single frame, where the number of elements is the same as `boxes.size()`.
- **scores**: Member variable which indicates the confidence level of all targets detected in a single frame, where the number of elements is the same as `boxes.size()`.
- **class_ids**: Member variable which indicates all target classes detected in a single frame, where the number of elements is the same as `boxes.size()`.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
```python
fastdeploy.vision.MOTResult
```
- **boxes**(list of list(float)): Member variable which indicates the coordinates of all detected target boxes in a single frame. It is a list, and each element in it is also a list of length 4, representing a box with 4 float values representing xmin, ymin, xmax, ymax, i.e. the coordinates of the top left and bottom right corner.
- **ids**(list of int): Member variable which indicates the ids of all targets in a single frame, where the number of elements is the same as `boxes`.
- **scores**(list of float): Member variable which indicates the confidence level of all targets detected in a single frame.
- **class_ids**(list of int): Member variable which indicates all target classes detected in a single frame.
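A sketch of reading one frame's result, assuming `result` is a MOTResult; the ids are what associate the same target across frames:
```python
for box, track_id, score, class_id in zip(result.boxes, result.ids,
                                          result.scores, result.class_ids):
    print(f"track {track_id} (class {class_id}, score {score:.3f}): box {box}")
```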

View File

@@ -1,3 +1,4 @@
中文 | [English](ocr_result_EN.md)
# OCRResult OCR预测结果
OCRResult代码定义在`fastdeploy/vision/common/result.h`中,用于表明图像检测和识别出来的文本框、文本框方向分类,以及文本框内的文本内容。

View File

@@ -0,0 +1,43 @@
English | [中文](ocr_result.md)
# OCR Prediction Result
The OCRResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the text box detected in the image, text box orientation classification, and the text content.
## C++ Definition
```c++
fastdeploy::vision::OCRResult
```
```c++
struct OCRResult {
std::vector<std::array<int, 8>> boxes;
std::vector<std::string> text;
std::vector<float> rec_scores;
std::vector<float> cls_scores;
std::vector<int32_t> cls_labels;
ResultType type = ResultType::OCR;
void Clear();
std::string Str();
};
```
- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. `boxes.size()` indicates the number of detected boxes. Each box is represented by 8 int values to indicate the 4 coordinates of the box, in the order of lower left, lower right, upper right, upper left.
- **text**: Member variable which indicates the content of the recognized text in multiple text boxes, where the number of elements is the same as `boxes.size()`.
- **rec_scores**: Member variable which indicates the confidence level of the recognized text, where the number of elements is the same as `boxes.size()`.
- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box, where the number of elements is the same as `boxes.size()`.
- **cls_labels**: Member variable which indicates the orientation category of the text box, where the number of elements is the same as `boxes.size()`.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
```python
fastdeploy.vision.OCRResult
```
- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. The number of elements in `boxes` indicates the number of detected boxes. Each box is represented by 8 int values to indicate the 4 coordinates of the box, in the order of lower left, lower right, upper right, upper left.
- **text**: Member variable which indicates the content of the recognized text in multiple text boxes, where the number of elements is the same as `boxes`.
- **rec_scores**: Member variable which indicates the confidence level of the recognized text, where the number of elements is the same as `boxes`.
- **cls_scores**: Member variable which indicates the confidence level of the classification result of the text box, where the number of elements is the same as `boxes`.
- **cls_labels**: Member variable which indicates the orientation category of the text box, where the number of elements is the same as `boxes`.
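A sketch of pairing boxes with recognized text, assuming `result` is an OCRResult obtained from a PP-OCR pipeline:
```python
# Each text box is aligned with its recognized text and scores.
for box, text, rec_score in zip(result.boxes, result.text, result.rec_scores):
    # box holds 8 ints: 4 (x, y) corner points in the order documented above.
    corners = [(box[i], box[i + 1]) for i in range(0, 8, 2)]
    print(f"'{text}' (score {rec_score:.3f}) at {corners}")
```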

View File

@@ -1,3 +1,4 @@
中文 | [English](segmentation_result_EN.md)
# SegmentationResult 图像分割结果
SegmentationResult代码定义在`fastdeploy/vision/common/result.h`中,用于表明图像中每个像素预测出来的分割类别和分割类别的概率值。

View File

@@ -0,0 +1,33 @@
English | [中文](segmentation_result.md)
# Segmentation Result
The SegmentationResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the segmentation category and the corresponding probability predicted for each pixel in the image.
## C++ Definition
`fastdeploy::vision::SegmentationResult`
```c++
struct SegmentationResult {
std::vector<uint8_t> label_map;
std::vector<float> score_map;
std::vector<int64_t> shape;
bool contain_score_map = false;
void Clear();
std::string Str();
};
```
- **label_map**: Member variable which indicates the segmentation category of each pixel in a single image. `label_map.size()` indicates the number of pixels in the image.
- **score_map**: Member variable which indicates the predicted segmentation category probability corresponding to `label_map`. It holds the probability value when the model is exported with `--output_op argmax`, or the softmax-normalized probability when the model is exported with `--output_op softmax`, or with `--output_op none` while setting the [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` during model initialization.
- **shape**: Member variable which indicates the shape of the output image as H\*W.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as string (for Debug).
## Python Definition
`fastdeploy.vision.SegmentationResult`
- **label_map**(list of int): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which indicates the predicted segmentation category probability corresponding to `label_map`. It holds the probability value when the model is exported with `--output_op argmax`, or the softmax-normalized probability when the model is exported with `--output_op softmax`, or with `--output_op none` while setting the [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` during model initialization.
- **shape**(list of int): Member variable which indicates the shape of the output image as H\*W.
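A sketch of rebuilding the per-pixel label image from the flat vectors, assuming `result` is a SegmentationResult and numpy is available:
```python
import numpy as np

h, w = result.shape
labels = np.asarray(result.label_map, dtype=np.uint8).reshape(h, w)
print("classes present:", np.unique(labels))
if result.contain_score_map:
    probs = np.asarray(result.score_map).reshape(h, w)  # aligned with labels
```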

View File

@@ -1,3 +1,5 @@
[English](../../en/build_and_install/a311d.md) | 简体中文
# 晶晨 A311D 部署环境编译安装
FastDeploy 基于 Paddle Lite 后端支持在晶晨 NPU 上进行部署推理。

View File

@@ -1,3 +1,5 @@
[English](../../en/build_and_install/android.md) | 简体中文
# Android部署库编译
FastDeploy当前在Android仅支持Paddle Lite后端推理,支持armeabi-v7a和arm64-v8a两种cpu架构,在armv8.2架构的arm设备支持fp16精度推理。相关编译选项说明如下:

View File

@@ -1,3 +1,4 @@
[English](../../en/build_and_install/cpu.md) | 简体中文
# CPU部署库编译

View File

@@ -1,3 +1,5 @@
[English](../../en/build_and_install/download_prebuilt_libraries.md) | 简体中文
# 预编译库安装
FastDeploy提供各平台预编译库,供开发者直接下载安装使用。当然FastDeploy编译也非常容易,开发者也可根据自身需求编译FastDeploy。

View File

@@ -1,3 +1,4 @@
[English](../../en/build_and_install/gpu.md) | 简体中文
# GPU部署库编译

View File

@@ -1,3 +1,4 @@
[English](../../en/build_and_install/ipu.md) | 简体中文
# IPU部署库编译

View File

@@ -1,3 +1,4 @@
[English](../../en/build_and_install/jetson.md) | 简体中文
# Jetson部署库编译

View File

@@ -1,3 +1,4 @@
[English](../../en/build_and_install/rknpu2.md) | 简体中文
# RK2代NPU部署库编译
## 写在前面

View File

@@ -1,3 +1,5 @@
[English](../../en/build_and_install/rv1126.md) | 简体中文
# 瑞芯微 RV1126 部署环境编译安装
FastDeploy基于 Paddle Lite 后端支持在瑞芯微(Rockchip)Soc 上进行部署推理。

View File

@@ -1,3 +1,5 @@
[English](../../en/build_and_install/third_libraries.md) | 简体中文
# 第三方库依赖
FastDeploy当前根据编译选项,会依赖如下第三方依赖

View File

@@ -1,3 +1,5 @@
[English](../../en/build_and_install/xpu.md) | 简体中文
# 昆仑芯 XPU 部署环境编译安装
FastDeploy 基于 Paddle Lite 后端支持在昆仑芯 XPU 上进行部署推理。

View File

@@ -1,3 +1,5 @@
[English](../../en/faq/build_on_win_with_gui.md) | 中文
# CMakeGUI + VS 2019 IDE编译FastDeploy
此方式仅支持编译FastDeploy C++ SDK

View File

@@ -1,3 +1,6 @@
[English](../../en/faq/develop_a_new_model.md) | 中文
# FastDeploy集成新模型流程
在FastDeploy里面新增一个模型,包括增加C++/Python的部署支持。本文以torchvision v0.12.0中的ResNet50模型为例,介绍使用FastDeploy做外部[模型集成](#modelsupport),具体包括如下3步。

View File

@@ -1,3 +1,6 @@
[English](../../en/faq/how_to_change_backend.md) | 中文
# 如何切换模型推理后端
FastDeploy中各视觉模型可支持多种后端,包括

View File

@@ -1,3 +1,5 @@
[English](../../../en/faq/rknpu2/export.md) | 中文
# 导出模型指南
## 简介

View File

@@ -1,3 +1,4 @@
[English](../../../en/faq/rknpu2/install_rknn_toolkit2.md) | 中文
# 安装rknn_toolkit2仓库
## 下载rknn_toolkit2

View File

@@ -1,3 +1,4 @@
[English](../../../en/faq/rknpu2/rknpu2.md) | 中文
# RKNPU2模型部署
## 安装环境

View File

@@ -1,3 +1,6 @@
[English](../../en/faq/tensorrt_tricks.md) | 中文
# TensorRT使用问题
## 1. 运行TensorRT过程中出现如下日志提示

View File

@@ -1,12 +1,17 @@
[English](../../en/faq/use_cpp_sdk_on_android.md) | 中文
# 在 Android 中通过 JNI 使用 FastDeploy C++ SDK
本文档将以PicoDet为例,讲解如何通过JNI,将FastDeploy中的模型封装到Android中进行调用。阅读本文档,您至少需要了解C++、Java、JNI以及Android的基础知识。如果您主要关注如何在Java层调用FastDeploy的API,则可以不阅读本文档。
## 目录
- [在 Android 中通过 JNI 使用 FastDeploy C++ SDK](#在-android-中通过-jni-使用-fastdeploy-c-sdk)
- [目录](#目录)
- [新建Java类并定义native API](#新建java类并定义native-api)
- [Android Studio 生成JNI函数定义](#android-studio-生成jni函数定义)
- [在C++层实现JNI函数](#在c层实现jni函数)
- [编写CMakeLists.txt及配置build.gradle](#编写cmakeliststxt及配置buildgradle)
- [更多FastDeploy Android 使用案例](#更多fastdeploy-android-使用案例)
## 新建Java类并定义native API
<div id="Java"></div>

View File

@@ -1,3 +1,5 @@
[English](../../../en/quick_start/models/cpp.md) | 中文
# C++部署
确认开发环境已准备FastDeploy C++部署库,参考[FastDeploy安装](../../build_and_install/)安装预编译的FastDeploy,或根据自己需求进行编译安装。

View File

@@ -1,3 +1,5 @@
[English](../../../en/quick_start/models/python.md) | 中文
# PPYOLOE Python部署
确认开发环境已安装FastDeploy,参考[FastDeploy安装](../../build_and_install/)安装预编译的FastDeploy,或根据自己需求进行编译安装。

View File

@@ -1,3 +1,5 @@
[English](../../../en/quick_start/runtime/cpp.md) | 中文
# C++推理
确认开发环境已准备FastDeploy C++部署库,参考[FastDeploy安装](../../build_and_install/)安装预编译的FastDeploy,或根据自己需求进行编译安装。

View File

@@ -1,3 +1,5 @@
[English](../../../en/quick_start/runtime/python.md) | 中文
# Python推理
确认开发环境已安装FastDeploy,参考[FastDeploy安装](../../build_and_install/)安装预编译的FastDeploy,或根据自己需求进行编译安装。

View File

@@ -1,3 +1,5 @@
English | [中文](../../cn/build_and_install/a311d.md)
# How to Build A311D Deployment Environment
FastDeploy supports AI deployment on Amlogic SoC based on the Paddle Lite backend. For more detailed information, please refer to: [Paddle Lite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html).

View File

@@ -1,3 +1,5 @@
English | [中文](../../cn/build_and_install/android.md)
# How to Build FastDeploy Android C++ SDK
FastDeploy supports Paddle Lite backend on Android. It supports both armeabi-v7a and arm64-v8a cpu architectures, and supports fp16 precision inference on the armv8.2 architecture. The relevant compilation options are described as follows:

View File

@@ -1,4 +1,4 @@
English | [中文](../../cn/build_and_install/cpu.md)
# How to Build CPU Deployment Environment

View File

@@ -1,4 +1,5 @@
English | [中文](../../cn/build_and_install/download_prebuilt_libraries.md)
# How to Install Prebuilt Library
FastDeploy provides pre-built libraries for developers to download and install directly. Meanwhile, FastDeploy also offers easy access to compile so that developers can compile FastDeploy according to their own needs.
@@ -92,7 +93,7 @@ Install the released versionLatest 1.0.1 for now, Android is 1.0.1
| Mac OSX x64 | [fastdeploy-osx-x86_64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-x86_64-1.0.1.tgz) | clang++ 10.0.0 |
| Mac OSX arm64 | [fastdeploy-osx-arm64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-arm64-1.0.1.tgz) | clang++ 13.0.0 |
| Linux aarch64 | [fastdeploy-linux-aarch64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-1.0.1.tgz) | gcc 6.3 |
| Android armv7&v8 | [fastdeploy-android-1.0.0-shared.tgz](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-1.0.0-shared.tgz) | NDK 25, clang++, support arm64-v8a and armeabi-v7a |
## Java SDK
@@ -109,6 +110,6 @@ Install the Develop versionNightly build
| Linux x64 | [fastdeploy-linux-x64-0.0.0.tgz](https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz) | g++ 8.2 |
| Windows x64 | [fastdeploy-win-x64-0.0.0.zip](https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-win-x64-0.0.0.zip) | Visual Studio 16 2019 |
| Mac OSX x64 | [fastdeploy-osx-arm64-0.0.0.tgz](https://bj.bcebos.com/fastdeploy/dev/cpp/fastdeploy-osx-arm64-0.0.0.tgz) | - |
| Mac OSX arm64 | [fastdeploy-osx-arm64-0.0.0.tgz](https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-osx-arm64-0.0.0.tgz) | built with clang++ 13.0.0 |
| Linux aarch64 | - | - |
| Android armv7&v8 | - | - |

View File

@@ -1,3 +1,4 @@
English | [中文](../../cn/build_and_install/gpu.md)
# How to Build GPU Deployment Environment

View File

@@ -1,3 +1,4 @@
English | [中文](../../cn/build_and_install/ipu.md)
# How to Build IPU Deployment Environment

View File

@@ -1,3 +1,4 @@
English | [中文](../../cn/build_and_install/jetson.md)
# How to Build FastDeploy Library on Nvidia Jetson Platform

View File

@@ -0,0 +1,106 @@
English | [中文](../../cn/build_and_install/rknpu2.md)
# How to Build RKNPU2 Deployment Environment
## Notes
FastDeploy has initial support for RKNPU2 deployment. If you find bugs while using it, please report them in an issue to give us feedback.
## Introduction
Currently, the following backend engines on the RK platform are supported:
| Backend | Platform | Model format supported | Description |
|:------------------|:---------------------|:-------|:-------------------------------------------|
| ONNX&nbsp;Runtime | RK356X <br> RK3588 | ONNX | The compile switch is controlled by setting `ENABLE_ORT_BACKEND` to ON or OFF (default OFF) |
| RKNPU2 | RK356X <br> RK3588 | RKNN | The compile switch is controlled by setting `ENABLE_RKNPU2_BACKEND` to ON or OFF (default OFF) |
## How to Build and Install C++ SDK
RKNPU2 only supports compilation on Linux; the following steps are done on Linux.
### Update the driver and install the compiling environment
Before running the program, we need to install the latest RKNPU driver, which is currently updated to 1.4.0. To simplify the installation, here is a quick install script.
**Method 1: Install via script**
```bash
# Download and unzip rknpu2_device_install_1.4.0
wget https://bj.bcebos.com/fastdeploy/third_libs/rknpu2_device_install_1.4.0.zip
unzip rknpu2_device_install_1.4.0.zip
cd rknpu2_device_install_1.4.0
# For RK3588
sudo rknn_install_rk3588.sh
# For RK356X
sudo rknn_install_rk356X.sh
```
**Method 2: Install via gitee**
```bash
# Install necessary packages
sudo apt update -y
sudo apt install -y python3
sudo apt install -y python3-dev
sudo apt install -y python3-pip
sudo apt install -y gcc
sudo apt install -y python3-opencv
sudo apt install -y python3-numpy
sudo apt install -y cmake
# download rknpu2
# For RK3588
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git
sudo cp ./rknpu2/runtime/RK3588/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK3588/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/
# For RK356X
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git
sudo cp ./rknpu2/runtime/RK356X/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK356X/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/
```
### Compile C++ SDK
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
# Only a few key configurations are introduced here, see README.md for details.
# -DENABLE_ORT_BACKEND: Whether to enable the ONNX Runtime backend, default OFF
# -DENABLE_RKNPU2_BACKEND: Whether to enable the RKNPU2 backend, default OFF
# -DRKNN2_TARGET_SOC: The target board for the SDK. Enter RK356X or RK3588 (case sensitive)
cmake .. -DENABLE_ORT_BACKEND=ON \
-DENABLE_RKNPU2_BACKEND=ON \
-DENABLE_VISION=ON \
-DRKNN2_TARGET_SOC=RK3588 \
-DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.3
make -j8
make install
```
### Compile Python SDK
Python packages depend on `wheel`, please run `pip install wheel` before compiling.
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
cd python
export ENABLE_ORT_BACKEND=ON
export ENABLE_RKNPU2_BACKEND=ON
export ENABLE_VISION=ON
export RKNN2_TARGET_SOC=RK3588
python3 setup.py build
python3 setup.py bdist_wheel
cd dist
pip3 install fastdeploy_python-0.0.0-cp39-cp39-linux_aarch64.whl
```
## Model Deployment
Please refer to [RKNPU2 Model Deployment](../faq/rknpu2/rknpu2.md).

View File

@@ -1,3 +1,5 @@
English | [中文](../../cn/build_and_install/rv1126.md)
# How to Build RV1126 Deployment Environment
FastDeploy supports AI deployment on Rockchip SoC based on the Paddle Lite backend. For more detailed information, please refer to: [Paddle Lite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html).

View File

@@ -0,0 +1,16 @@
English | [中文](../../cn/build_and_install/third_libraries.md)
# Third Library Dependency
FastDeploy will depend on the following third libraries according to compile options.
- OpenCV: OpenCV 3.4.16 library will be downloaded and pre-compiled automatically while ENABLE_VISION=ON.
- ONNX Runimte: ONNX Runtime library will be downloaded automatically while ENABLE_ORT_BACKEND=ON.
- OpenVINO: OpenVINO library will be downloaded automatically while ENABLE_OPENVINO_BACKEND=ON.
You can decide your own third libraries that exist in the environment by setting the following switches.
- OPENCV_DIRECTORY: Specify the OpenCV path in your environment, e.g. `-DOPENCV_DIRECTORY=/usr/lib/aarch64-linux-gnu/cmake/opencv4/`
- ORT_DIRECTORY: Specify the ONNX Runtime path in your environment, e.g.`-DORT_DIRECTORY=/download/onnxruntime-linux-x64-1.0.0`
- OPENVINO_DIRECTORY: Specify the OpenVINO path in your environment, e.g.`-DOPENVINO_DIRECTORY=//download/openvino`

View File

@@ -1,3 +1,5 @@
English | [中文](../../cn/build_and_install/xpu.md)
# How to Build KunlunXin XPU Deployment Environment
FastDeploy supports AI deployment on KunlunXin XPU based on the Paddle Lite backend. For more detailed information, please refer to: [Paddle Lite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xpu.html#xpu).

View File

@@ -1,13 +1,18 @@
English | [中文](../../cn/faq/build_on_win_with_gui.md)
# Use CMakeGUI + VS 2019 IDE to Compile FastDeploy
Note: This method only supports FastDeploy C++ SDK
## Contents
- [Use CMakeGUI + VS 2019 IDE to Compile FastDeploy](#use-cmakegui--vs-2019-ide-to-compile-fastdeploy)
- [Contents](#contents)
- [How to Use CMake GUI for Basic Compilation](#how-to-use-cmake-gui-for-basic-compilation)
- [How to Set for CPU version C++ SDK Compilation](#how-to-set-for-cpu-version-c-sdk-compilation)
- [How to Set for GPU version C++ SDK Compilation](#how-to-set-for-gpu-version-c-sdk-compilation)
- [How to Use Visual Studio 2019 IDE for Compliation](#how-to-use-visual-studio-2019-ide-for-compliation)
- [Compile all examples(Optional)](#compile-all-examplesoptional)
- [Note](#note)
### How to Use CMake GUI for Basic Compilation
<div id="CMakeGuiAndVS2019Basic"></div>

View File

@@ -1,3 +1,4 @@
English | [中文](../../cn/faq/develop_a_new_model.md)
# How to Integrate New Model on FastDeploy
How to add a new model on FastDeploy, including C++/Python deployment? Here, we take the ResNet50 model in torchvision v0.12.0 as an example, introducing external [Model Integration](#modelsupport) on FastDeploy. The whole process only needs 3 steps.

View File

@@ -1,3 +1,4 @@
English | [中文](../../cn/faq/how_to_change_backend.md)
# How to Change Model Inference Backend
FastDeploy supports various backends, including

View File

@@ -0,0 +1,50 @@
English | [中文](../../../cn/faq/rknpu2/export.md)
# Export Model
## Introduction
FastDeploy provides a simple integration of the ONNX-to-RKNN conversion process. In this guide, we first write a yaml configuration file, then export the model with `tools/export.py`.
Before you start the conversion, please check that the environment is installed successfully by referring to [RKNN-Toolkit2 Installation](./install_rknn_toolkit2.md).
## Configuration Parameter in export.py
| Parameter | Whether it can be NULL | Parameter Role |
|-----------------|------------|--------------------|
| verbose | Y (default: TRUE) | Whether to output detailed information when converting |
| config_path | N | Path to configuration file |
## Config File Introduction
### Module of config yaml file
```yaml
model_path: ./portrait_pp_humansegv2_lite_256x144_pretrained.onnx
output_folder: ./
target_platform: RK3588
normalize:
mean: [[0.5,0.5,0.5]]
std: [[0.5,0.5,0.5]]
outputs: None
```
### Config parameters
* model_path: Path where the model is stored.
* output_folder: Folder where the exported model is saved.
* target_platform: The device the model runs on; only RK3588 or RK3568 can be chosen.
* normalize: Configure the normalize operation on the NPU with the two parameters std and mean.
  * std: If you do the normalize operation externally, please configure this to [1/255, 1/255, 1/255].
  * mean: If you do the normalize operation externally, please configure this to [0, 0, 0].
* outputs: Output node list; if you use the default output nodes, please configure this to None.
## How to convert model
Run the following command in the root directory:
```bash
python tools/export.py --config_path=./config.yaml
```
## Things to note in Model Export
* Please don't export models with softmax or argmax, calculate them externally instead.

View File

@@ -0,0 +1,49 @@
English | [中文](../../../cn/faq/rknpu2/install_rknn_toolkit2.md)
# RKNN-Toolkit2 Installation
## Download
Here are two ways to download RKNN-Toolkit2:
* Download from the GitHub repository
A stable version of RKNN-Toolkit2 is available on GitHub.
```bash
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
```
* Download from Baidu Netdisk
In some cases, if the stable version has bugs and does not meet the requirements for model deployment, you can also use the beta version by downloading it from Baidu Netdisk. The installation procedure is the same as for the stable version.
```text
linkhttps://eyun.baidu.com/s/3eTDMk6Y passwordrknn
```
## Installation
There will be dependency issues during the installation. Since some specific packages are required, it is recommended that you create a new conda environment first.
You can find conda installation instructions online; we skip that here and introduce how to install RKNN-Toolkit2.
### Download and install the required packages
```bash
sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 \
libsm6 libgl1-mesa-glx libprotobuf-dev gcc g++
```
### Environment for installing RKNN-Toolkit2
```bash
# Create a new environment
conda create -n rknn2 python=3.6
conda activate rknn2
# RKNN-Toolkit2 has a specific dependency on numpy
pip install numpy==1.16.6
# Install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
cd ~/download/rknn-toolkit2-master/packages
pip install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
```
## Other Documents
- [How to convert ONNX to RKNN](./export.md)

View File

@@ -0,0 +1,74 @@
English | [中文](../../../cn/faq/rknpu2/rknpu2.md)
# RKNPU2 Model Deployment
## Installation Environment
RKNPU2 model export is only supported on x86 Linux platform, please refer to [RKNPU2 Model Export Environment Configuration](./install_rknn_toolkit2.md).
## Convert ONNX to RKNN
Since the ONNX model cannot directly calculate by calling the NPU, it is necessary to convert the ONNX model to RKNN model. For detailed information, please refer to [RKNPU2 Conversion Document](./export.md).
## Models supported for RKNPU2
The following tests measure end-to-end speed, and the test environment is as follows:
* Device Model: RK3588
* The ARM CPU numbers are tested with the ONNX model
* The NPU numbers are tested with a single-core NPU
| Mission Scenario | Model | Model Version(tested version) | ARM CPU/RKNN speed(ms) |
|------------------|-------------------|-------------------------------|--------------------|
| Detection | Picodet | Picodet-s | 162/112 |
| Detection | RKYOLOV5 | YOLOV5-S-Relu(int8) | -/57 |
| Detection | RKYOLOX | - | -/- |
| Detection | RKYOLOV7 | - | -/- |
| Segmentation | Unet | Unet-cityscapes | -/- |
| Segmentation | PP-HumanSegV2Lite | portrait | 133/43 |
| Segmentation | PP-HumanSegV2Lite | human | 133/43 |
| Face Detection | SCRFD | SCRFD-2.5G-kps-640 | 108/42 |
## How to use RKNPU2 Backend to Infer Models
We provide an example of the SCRFD model here to show how to use the RKNPU2 backend for model inference. The modifications mentioned in the comments below are relative to the ONNX CPU version.
```c++
int infer_scrfd_npu() {
char model_path[] = "./model/scrfd_2.5g_bnkps_shape640x640.rknn";
char image_file[] = "./image/test_lite_face_detector_3.jpg";
auto option = fastdeploy::RuntimeOption();
// Modification1: option.UseRKNPU2 function should be called
option.UseRKNPU2();
// Modification2: The parameter 'fastdeploy::ModelFormat::RKNN' should be transferred when loading the model
auto *model = new fastdeploy::vision::facedet::SCRFD(model_path,"",option,fastdeploy::ModelFormat::RKNN);
if (!model->Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return 0;
}
// Modification3 (optional): RKNPU2 can run normalize on the NPU and takes NHWC input.
// DisableNormalizeAndPermute skips the normalize step and the HWC-to-CHW conversion during preprocessing.
// If you use a model from the already supported model list, call this method before Predict.
model->DisableNormalizeAndPermute();
auto im = cv::imread(image_file);
auto im_bak = im.clone();
fastdeploy::vision::FaceDetectionResult res;
clock_t start = clock();
if (!model->Predict(&im, &res, 0.8, 0.8)) {
std::cerr << "Failed to predict." << std::endl;
return 0;
}
clock_t end = clock();
double dur = (double) (end - start);
printf("infer_scrfd_npu use time:%f\n", (dur / CLOCKS_PER_SEC));
auto vis_im = fastdeploy::vision::Visualize::VisFaceDetection(im_bak, res);
cv::imwrite("scrfd_rknn_vis_result.jpg", vis_im);
std::cout << "Visualized result saved in ./scrfd_rknn_vis_result.jpg" << std::endl;
return 0;
}
```
## Other related Documents
- [How to Build RKNPU2 Deployment Environment](../../build_and_install/rknpu2.md)
- [RKNN-Toolkit2 Installation Document](./install_rknn_toolkit2.md)
- [How to convert ONNX to RKNN](./export.md)

View File

@@ -1,3 +1,250 @@
English | [中文](../../cn/faq/use_cpp_sdk_on_android.md)
# FastDeploy to deploy on Android Platform
This document will take PicoDet as an example and explain how to encapsulate FastDeploy models for Android through JNI. You need to know at least the basics of C++, Java, JNI and Android. If you mainly focus on how to call the FastDeploy API in the Java layer, you can skip this document.
## Content
- [FastDeploy to deploy on Android Platform](#fastdeploy-to-deploy-on-android-platform)
- [Content](#content)
- [Create a new Java class and Define the native API](#create-a-new-java-class-and-define-the-native-api)
- [Generate JNI function definition with Android Studio](#generate-jni-function-definition-with-android-studio)
- [Implement JNI function in the C++ layer](#implement-jni-function-in-the-c-layer)
- [Write CMakeLists.txt and configure build.gradle](#write-cmakeliststxt-and-configure-buildgradle)
- [More examples of FastDeploy Android](#more-examples-of-fastdeploy-android)
## Create a new Java class and Define the native API
<div id="Java"></div>
```java
public class PicoDet {
protected long mNativeModelContext = 0; // Context from native.
protected boolean mInitialized = false;
// ...
// Bind predictor from native context.
private static native long bindNative(String modelFile,
String paramsFile,
String configFile,
int cpuNumThread,
boolean enableLiteFp16,
int litePowerMode,
String liteOptimizedModelDir,
boolean enableRecordTimeOfRuntime,
String labelFile);
// Call prediction from native context.
private static native long predictNative(long nativeModelContext,
Bitmap ARGB8888Bitmap,
boolean saved,
String savedImagePath,
float scoreThreshold,
boolean rendering);
// Release buffers allocated in native context.
private static native boolean releaseNative(long nativeModelContext);
// Initializes at the beginning.
static {
FastDeployInitializer.init();
}
}
```
These interfaces, marked as native, are to be implemented by JNI and should be available to the PicoDet class in the Java layer. For the complete PicoDet Java code, please refer to [PicoDet.java](../../../java/android/fastdeploy/src/main/java/com/baidu/paddle/fastdeploy/vision/detection/PicoDet.java). The functions are described below:
- `bindNative`: Initialize the model resource in the C++ layer. It returns a pointer (of type long) to the model if it is successfully initialized, otherwise it returns 0.
- `predictNative`: Run the prediction code in the C++ layer with the initialized model pointer. If executed successfully, it returns a pointer to the result, otherwise it returns 0. Please note that the result pointer needs to be released after the current prediction; please refer to the definition of the `predict` function in [PicoDet.java](../../../java/android/fastdeploy/src/main/java/com/baidu/paddle/fastdeploy/vision/detection/PicoDet.java) for details.
- `releaseNative`: Release model resources in the C++ layer according to the input model pointer.
## Generate JNI function definition with Android Studio
<div id="JNI"></div>
Hover over a native function defined in Java, and Android Studio will prompt you to create the corresponding JNI function definition. Here, we create the definitions in a pre-created C++ file, `picodet_jni.cc`.
- Create a JNI function definition with Android Studio:
![](https://user-images.githubusercontent.com/31974251/197341065-cdf8f626-4bb1-4a57-8d7a-80b382fe994e.png)
- Create the definition in picodet_jni.cc:
![](https://user-images.githubusercontent.com/31974251/197341190-b887dec5-fa75-43c9-9ab3-7ead50c0eb45.png)
- The JNI function definition created:
![](https://user-images.githubusercontent.com/31974251/197341274-e9671bac-9e77-4043-a870-9d5db914586b.png)
You can create the JNI function definitions for the other native functions by following the same process.
## Implement JNI function in the C++ layer
<div id="CPP"></div>
Here is an example of the PicoDet JNI layer implementation. For the complete C++ code, please refer to [android/app/src/main/cpp](../../../examples/vision/detection/paddledetection/android/app/src/main/cpp/).
```C++
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <jni.h> // NOLINT
#include "fastdeploy_jni/convert_jni.h" // NOLINT
#include "fastdeploy_jni/assets_loader_jni.h" // NOLINT
#include "fastdeploy_jni/runtime_option_jni.h" // NOLINT
#include "fastdeploy_jni/vision/results_jni.h" // NOLINT
#include "fastdeploy_jni/vision/detection/detection_utils_jni.h" // NOLINT
namespace fni = fastdeploy::jni;
namespace vision = fastdeploy::vision;
namespace detection = fastdeploy::vision::detection;
#ifdef __cplusplus
extern "C" {
#endif
JNIEXPORT jlong JNICALL
Java_com_baidu_paddle_fastdeploy_vision_detection_PicoDet_bindNative(
JNIEnv *env, jobject thiz, jstring model_file, jstring params_file,
jstring config_file, jobject runtime_option, jstring label_file) {
auto c_model_file = fni::ConvertTo<std::string>(env, model_file);
auto c_params_file = fni::ConvertTo<std::string>(env, params_file);
auto c_config_file = fni::ConvertTo<std::string>(env, config_file);
auto c_label_file = fni::ConvertTo<std::string>(env, label_file);
auto c_runtime_option = fni::NewCxxRuntimeOption(env, runtime_option);
auto c_model_ptr = new detection::PicoDet(
c_model_file, c_params_file, c_config_file, c_runtime_option);
INITIALIZED_OR_RETURN(c_model_ptr)
#ifdef ENABLE_RUNTIME_PERF
c_model_ptr->EnableRecordTimeOfRuntime();
#endif
if (!c_label_file.empty()) {
fni::AssetsLoader::LoadDetectionLabels(c_label_file);
}
vision::EnableFlyCV();
return reinterpret_cast<jlong>(c_model_ptr);
}
JNIEXPORT jobject JNICALL
Java_com_baidu_paddle_fastdeploy_vision_detection_PicoDet_predictNative(
JNIEnv *env, jobject thiz, jlong cxx_context, jobject argb8888_bitmap,
jboolean save_image, jstring save_path, jboolean rendering,
jfloat score_threshold) {
if (cxx_context == 0) {
return NULL;
}
cv::Mat c_bgr;
if (!fni::ARGB888Bitmap2BGR(env, argb8888_bitmap, &c_bgr)) {
return NULL;
}
auto c_model_ptr = reinterpret_cast<detection::PicoDet *>(cxx_context);
vision::DetectionResult c_result;
auto t = fni::GetCurrentTime();
c_model_ptr->Predict(&c_bgr, &c_result);
PERF_TIME_OF_RUNTIME(c_model_ptr, t)
if (rendering) {
fni::RenderingDetection(env, c_bgr, c_result, argb8888_bitmap, save_image,
score_threshold, save_path);
}
return fni::NewJavaResultFromCxx(env, reinterpret_cast<void *>(&c_result),
vision::ResultType::DETECTION);
}
JNIEXPORT jboolean JNICALL
Java_com_baidu_paddle_fastdeploy_vision_detection_PicoDet_releaseNative(
JNIEnv *env, jobject thiz, jlong cxx_context) {
if (cxx_context == 0) {
return JNI_FALSE;
}
auto c_model_ptr = reinterpret_cast<detection::PicoDet *>(cxx_context);
PERF_TIME_OF_RUNTIME(c_model_ptr, -1)
delete c_model_ptr;
LOGD("[End] Release PicoDet in native !");
return JNI_TRUE;
}
#ifdef __cplusplus
}
#endif
```
## Write CMakeLists.txt and configure build.gradle
<div id="CMakeAndGradle"></div>
The implemented JNI code needs to be compiled into a shared library (.so) that Java can call. To achieve this, you need to add JNI project support in build.gradle and write the corresponding CMakeLists.txt.
- Configure NDK, CMake and Android ABI in build.gradle
```java
android {
defaultConfig {
// Other configurations are omitted ...
externalNativeBuild {
cmake {
arguments '-DANDROID_PLATFORM=android-21', '-DANDROID_STL=c++_shared', "-DANDROID_TOOLCHAIN=clang"
abiFilters 'armeabi-v7a', 'arm64-v8a'
cppFlags "-std=c++11"
}
}
}
// Other configurations are omitted ...
externalNativeBuild {
cmake {
path file('src/main/cpp/CMakeLists.txt')
version '3.10.2'
}
}
sourceSets {
main {
jniLibs.srcDirs = ['libs']
}
}
ndkVersion '20.1.5948944'
}
```
- An example of CMakeLists.txt
```cmake
cmake_minimum_required(VERSION 3.10.2)
project("fastdeploy_jni")
# Where xxx indicates the version number of C++ SDK
set(FastDeploy_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../libs/fastdeploy-android-xxx-shared")
find_package(FastDeploy REQUIRED)
include_directories(${CMAKE_CURRENT_SOURCE_DIR})
include_directories(${FastDeploy_INCLUDE_DIRS})
add_library(
fastdeploy_jni
SHARED
utils_jni.cc
bitmap_jni.cc
vision/results_jni.cc
vision/visualize_jni.cc
vision/detection/picodet_jni.cc
vision/classification/paddleclas_model_jni.cc)
find_library(log-lib log)
target_link_libraries(
# Specifies the target library.
fastdeploy_jni
jnigraphics
${FASTDEPLOY_LIBS}
GLESv2
EGL
${log-lib}
)
```
For the complete project, please refer to [CMakelists.txt](../../../java/android/fastdeploy/src/main/cpp/CMakeLists.txt) and [build.gradle](../../../java/android/fastdeploy/build.gradle).
## More examples of FastDeploy Android
<div id="Examples"></div>
For more examples of using FastDeploy Android, you can refer to:
- [Image classification on Android](../../../examples/vision/classification/paddleclas/android/README.md)
- [Object detection on Android](../../../examples/vision/detection/paddledetection/android/README.md)

View File

@@ -3,33 +3,38 @@ English | [中文](../../cn/faq/use_sdk_on_windows.md)
# Using the FastDeploy C++ SDK on Windows Platform
## Contents
- [Using the FastDeploy C++ SDK on Windows Platform](#using-the-fastdeploy-c-sdk-on-windows-platform)
  - [Contents](#contents)
  - [1. Environment Dependent](#1-environment-dependent)
  - [2. Download FastDeploy Windows 10 C++ SDK](#2-download-fastdeploy-windows-10-c-sdk)
    - [2.1 Download the Pre-built Library or Build the Latest SDK from Source](#21-download-the-pre-built-library-or-build-the-latest-sdk-from-source)
    - [2.2 Prepare Model Files and Test Images](#22-prepare-model-files-and-test-images)
  - [3. Various ways to use C++ SDK on Windows Platform](#3-various-ways-to-use-c-sdk-on-windows-platform)
    - [3.1 SDK usage method 1: Using the C++ SDK from the Command Line](#31-sdk-usage-method-1using-the-c-sdk-from-the-command-line)
      - [3.1.1 Build PPYOLOE on Windows Platform](#311-build-ppyoloe-on-windows-platform)
      - [3.1.2 Run Demo](#312-run-demo)
    - [3.2 SDK usage method 2: Visual Studio 2019 creates sln project using C++ SDK](#32-sdk-usage-method-2-visual-studio-2019-creates-sln-project-using-c-sdk)
      - [3.2.1 Step 1: Visual Studio 2019 creates sln project](#321-step-1visual-studio-2019-creates-sln-project-project)
      - [3.2.2 Step 2: Copy the code of infer\_ppyoloe.cc from examples to the project](#322-step-2copy-the-code-of-infer_ppyoloecc-from-examples-to-the-project)
      - [3.2.3 Step 3: Set the project configuration to "Release x64" configuration](#323-step-3set-the-project-configuration-to-release-x64-configuration)
      - [3.2.4 Step 4: Configure Include Header File Path](#324-step-4configure-include-header-file-path)
      - [3.2.5 Step 5: Configure Lib Path and Add Library Files](#325-step-5configure-lib-path-and-add-library-files)
      - [3.2.6 Step 6: Build the Project and Run to Get the Result](#326-step-6build-the-project-and-run-to-get-the-result)
    - [3.3 Visual Studio 2019 Create CMake project using C++ SDK](#33-visual-studio-2019-create-cmake-project-using-c-sdk)
      - [3.3.1 Step 1: Visual Studio 2019 Creates a CMake Project](#331-step-1-visual-studio-2019-creates-a-cmake-project)
      - [3.3.2 Step 2: Configure FastDeploy C++ SDK in CMakeLists](#332-step-2configure-fastdeploy-c-sdk-in-cmakelists)
      - [3.3.3 Step 3: Generate project cache and Modify CMakeSetting.json Configuration](#333-step-3generate-project-cache-and-modify-cmakesettingjson-configuration)
      - [3.3.4 Step 4: Generate executable file, Run to Get the Result](#334-step-4generate-executable-file-run-to-get-the-result)
  - [4. Multiple methods to Configure the Required Dependencies for the Exe Runtime](#4-multiple-methods-to-configure-the-required-dependencies-for-the-exe-runtime)
    - [4.1 Use method 1: Use Fastdeploy\_init.bat for Configuration (Recommended)](#41--use-method-1use-fastdeploy_initbat-for-configuration-recommended)
      - [4.1.1 fastdeploy\_init.bat User's Manual](#411-fastdeploy_initbat-users-manual)
      - [4.1.2 fastdeploy\_init.bat View all dll, lib and include paths in the SDK](#412-fastdeploy_initbat-view-all-dll-lib-and-include-paths-in-the-sdk)
      - [4.1.3 fastdeploy\_init.bat Installs all the dlls in the SDK to the specified directory](#413-fastdeploy_initbat-installs-all-the-dlls-in-the-sdk-to-the-specified-directory)
      - [4.1.4 fastdeploy\_init.bat Configures SDK Environment Variables](#414-fastdeploy_initbat-configures-sdk-environment-variables)
    - [4.2 Use method 2: Modify CMakeLists.txt, One Line of Command Configuration (Recommended)](#42--use-method-2modify-cmakeliststxt-one-line-of-command-configuration-recommended)
    - [4.3 Use method 3: Command Line Setting Environment Variables](#43--use-method-3command-line-setting-environment-variables)
    - [4.4 Use method 4: Manually Copy the Dependency Library to the Exe Directory](#44-use-method-4manually-copy-the-dependency-library-to-the-exe-directory)
## 1. Environment Dependent
@@ -52,7 +57,7 @@ Please refer to source code compilation: [build_and_install](../build_and_instal
### 2.2 Prepare Model Files and Test Images
Model files and test images can be downloaded from the links below and unzipped
```text
https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz # (please unzip it after downloading)
https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```

View File

@@ -1,3 +1,4 @@
English | [中文](../../../cn/quick_start/models/cpp.md)
# C++ Deployment
Please make sure the development environment has FastDeploy C++ SDK installed. Refer to [FastDeploy installation](../../build_and_install/) to install the pre-built FastDeploy, or build and install according to your own needs.

View File

@@ -1,3 +1,5 @@
English | [中文](../../../cn/quick_start/models/python.md)
# Python Deployment
Make sure that FastDeploy is installed in the development environment. Refer to [FastDeploy Installation](../../build_and_install/) to install the pre-built FastDeploy, or build and install according to your own needs.

View File

@@ -1,3 +1,120 @@
English | [中文](../../../cn/quick_start/runtime/cpp.md)
# C++ Inference
Please make sure the FastDeploy C++ deployment library is already installed in your environment. You can refer to [FastDeploy Installation](../../build_and_install/) to install the pre-compiled FastDeploy, or customize your installation.
This document shows an inference sample on the CPU using the PaddleClas classification model MobileNetV2 as an example.
## 1. Obtaining the Model
```bash
wget https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz
tar xvf mobilenetv2.tgz
```
## 2. Backend Configuration
The following C++ code is saved as `infer_paddle_onnxruntime.cc`.
``` c++
#include "fastdeploy/runtime.h"
namespace fd = fastdeploy;
int main(int argc, char* argv[]) {
std::string model_file = "mobilenetv2/inference.pdmodel";
std::string params_file = "mobilenetv2/inference.pdiparams";
// setup option
fd::RuntimeOption runtime_option;
runtime_option.SetModelPath(model_file, params_file, fd::ModelFormat::PADDLE);
runtime_option.UseOrtBackend();
runtime_option.SetCpuThreadNum(12);
// init runtime
std::unique_ptr<fd::Runtime> runtime =
std::unique_ptr<fd::Runtime>(new fd::Runtime());
if (!runtime->Init(runtime_option)) {
std::cerr << "--- Init FastDeploy Runtime Failed! "
<< "\n--- Model: " << model_file << std::endl;
return -1;
} else {
std::cout << "--- Init FastDeploy Runtime Done! "
<< "\n--- Model: " << model_file << std::endl;
}
// init input tensor shape
fd::TensorInfo info = runtime->GetInputInfo(0);
info.shape = {1, 3, 224, 224};
std::vector<fd::FDTensor> input_tensors(1);
std::vector<fd::FDTensor> output_tensors(1);
std::vector<float> inputs_data;
inputs_data.resize(1 * 3 * 224 * 224);
for (size_t i = 0; i < inputs_data.size(); ++i) {
inputs_data[i] = std::rand() % 1000 / 1000.0f;
}
input_tensors[0].SetExternalData({1, 3, 224, 224}, fd::FDDataType::FP32, inputs_data.data());
//get input name
input_tensors[0].name = info.name;
runtime->Infer(input_tensors, &output_tensors);
output_tensors[0].PrintInfo();
return 0;
}
```
When loading is complete, you can get the following output information indicating the initialized backend and the hardware devices.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
## 3. Prepare for CMakeLists.txt
FastDeploy contains several dependencies, so compiling directly with `g++` or another compiler is relatively complicated. We therefore recommend using CMake to configure and build. A sample configuration is as follows.
```cmake
PROJECT(runtime_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
# Specify path to the fastdeploy library after downloading and unpacking
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
# Add FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})
add_executable(runtime_demo ${PROJECT_SOURCE_DIR}/infer_paddle_onnxruntime.cc)
# Add FastDeploy dependency libraries
target_link_libraries(runtime_demo ${FASTDEPLOY_LIBS})
```
## 4. Compile executable program
Open a terminal, go to the directory where `infer_paddle_onnxruntime.cc` and `CMakeLists.txt` are located, and then run:
```bash
cd examples/runtime/cpp
mkdir build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=$fastdeploy_cpp_sdk
make -j
```
`fastdeploy_cpp_sdk` is the path to the FastDeploy C++ deployment library.
After compiling, you can get your results by running:
```bash
./runtime_demo
```
If `error while loading shared libraries: libxxx.so: cannot open shared object file: No such file...` is reported, it means that the FastDeploy library cannot be found. You can add the FastDeploy library path to the environment variables with the following command, then re-run the program.
```bash
source /Path/to/fastdeploy_cpp_sdk/fastdeploy_init.sh
```
This sample code runs on all platforms (Windows/Linux/Mac), but the compilation process above is only supported on Linux/Mac; on Windows, use msbuild to compile instead. Please refer to [FastDeploy C++ SDK on Windows](../../faq/use_sdk_on_windows.md).
## Other Documents
- [Runtime demos on different backends](../../../../examples/runtime/README.md)
- [Switching hardware and backend for model inference](../../faq/how_to_change_backend.md)

View File

@@ -1,3 +1,54 @@
English | [中文](../../../cn/quick_start/runtime/python.md)
# Python Inference
Please make sure FastDeploy is already installed in your environment. You can refer to [FastDeploy Installation](../../build_and_install/) to install the pre-compiled FastDeploy, or customize your installation.
This document shows an inference sample on the CPU using the PaddleClas classification model MobileNetV2 as an example.
## 1. Obtaining the Model
``` python
import fastdeploy as fd
model_url = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
fd.download_and_decompress(model_url, path=".")
```
## 2. Backend Configuration
- For more examples, you can refer to [examples/runtime](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/runtime).
``` python
import numpy as np

option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
"mobilenetv2/inference.pdiparams")
# **** CPU Configuration ****
option.use_cpu()
option.use_ort_backend()
option.set_cpu_thread_num(12)
# Initialise runtime
runtime = fd.Runtime(option)
# Get model input name
input_name = runtime.get_input_info(0).name
# Constructing random data for inference
results = runtime.infer({
input_name: np.random.rand(1, 3, 224, 224).astype("float32")
})
print(results[0].shape)
```
When loading is complete, you can get the following output information indicating the initialized backend and the hardware devices.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
## Other Documents
- [Runtime demos on different backends](../../../../examples/runtime/README.md)
- [Switching hardware and backend for model inference](../../faq/how_to_change_backend.md)

View File

@@ -0,0 +1 @@
English | [中文](../zh_CN/client.md)

View File

@@ -1,3 +1,4 @@
English | [中文](../zh_CN/compile.md)
# FastDeploy Serving Deployment Image Compilation
How to create a FastDeploy image

148
serving/docs/EN/demo-en.md Normal file
View File

@@ -0,0 +1,148 @@
English | [中文](../zh_CN/demo.md)
# Service-oriented Deployment Demo
We take the YOLOv5 model as a simple example and introduce how to execute a service-oriented deployment. For the detailed code, please refer to [Service-oriented Deployment of YOLOv5](../../../examples/vision/detection/yolov5/serving). It is recommended that you read the following documents before this article.
- [Service-oriented Model Repository Description](model_repository-en.md) (how to prepare the model repository)
- [Service-oriented Deployment Configuration Description](model_configuration-en.md) (the configuration option for runtime)
## Fundamental Introduction
Similar to common deep learning models, the process of YOLOv5 consists of three stages: pre-processing, model prediction and post-processing.
Pre-processing, model prediction, and post-processing are each treated as a **model service** in FastDeployServer. The **config.pbtxt** configuration file of each model service describes its input data format, output data format, the type of the model service (i.e. **backend** or **platform** in config.pbtxt), and some other options.
In the pre-processing and post-processing stages, we generally run a piece of Python code, so let us simply call these **Python model services**; the corresponding config.pbtxt configures `backend: "python"`.
In the model prediction stage, the prediction engine loads the user-supplied deep learning model files and runs the prediction; we call this the **Runtime model service**, and the corresponding config.pbtxt configures `backend: "fastdeploy"`.
Depending on the type of model provided, execution options such as CPU, GPU, TRT or ONNX can be set in **optimization**. Please refer to [Service-oriented Deployment Configuration Description](model_configuration-en.md) for the configuration methods.
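As a minimal, hypothetical sketch of a Runtime model service config.pbtxt (the tensor names, shapes and accelerator below are illustrative assumptions, not taken from the YOLOv5 example):
```
backend: "fastdeploy"
max_batch_size: 1
input [
  {
    name: "images"        # illustrative input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "output"        # illustrative output tensor name
    data_type: TYPE_FP32
    dims: [ -1, 6 ]
  }
]
optimization {
  execution_accelerators {
    gpu_execution_accelerator: [
      { name: "paddle" }  # e.g. run the Paddle engine on GPU
    ]
  }
}
```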
In addition, an **Ensemble model service** is required to combine the three **model service** stages of pre-processing, model prediction, and post-processing into one whole, and to describe the relationships between them: for example, the correspondence between the output of pre-processing and the input of model prediction, the calling order of the model services, their serial-parallel relationships, etc. Its config.pbtxt configures `platform: "ensemble"`, as in the sketch below.
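A rough, hypothetical sketch of how an ensemble config.pbtxt chains the services via `input_map`/`output_map` (all model and tensor names here are illustrative assumptions):
```
platform: "ensemble"
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"    # illustrative Python model service
      model_version: -1
      input_map { key: "INPUT_0" value: "ensemble_input" }
      output_map { key: "preprocess_output" value: "runtime_input" }
    },
    {
      model_name: "runtime"       # illustrative Runtime model service
      model_version: -1
      input_map { key: "images" value: "runtime_input" }
      output_map { key: "output" value: "ensemble_output" }
    }
    # The post-processing step is omitted for brevity.
  ]
}
```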
In this YOLOv5 example, the **Ensemble model service** combines the three **model service** stages of pre-processing, model prediction, and post-processing into a whole; the overall structure is shown in the figure below.
<p align="center">
<br>
<img src='https://user-images.githubusercontent.com/35565423/204268774-7b2f6b4a-50b1-4962-ade9-cd10cf3897ab.png'>
<br>
</p>
For [a combination of multiple deep learning models like OCR](../../../examples/vision/ocr/PP-OCRv3/serving), or [a deep learning model with streaming input and output](../../../examples/audio/pp-tts/serving), the **Ensemble model service** configuration is more complex.
## Introduction to Python Model Service
Let us take [Pre-processing in YOLOv5](../../../examples/vision/detection/yolov5/serving/models/preprocess/1/model.py) as an example to briefly introduce the points to note when programming a Python model service.
The overall framework of the Python model service code `model.py` is shown below. The core is the class `TritonPythonModel`, which contains the three member functions `initialize`, `execute`, and `finalize`. The names of the class, the member functions, and their input parameters must not be changed. On this basis, you can write your own code.
```python
import json
import numpy as np
import time
import fastdeploy as fd
# triton_python_backend_utils is available in every Triton Python model. You
# need to use this module to create inference requests and responses. It also
# contains some utility functions for extracting information from model_config
# and converting Triton input/output types to numpy types.
import triton_python_backend_utils as pb_utils
class TritonPythonModel:
"""Your Python model must use the same class name. Every Python model
that is created must have "TritonPythonModel" as the class name.
"""
def initialize(self, args):
"""`initialize` is called only once when the model is being loaded.
Implementing `initialize` function is optional. This function allows
the model to initialize any state associated with this model.
Parameters
----------
args : dict
Both keys and values are strings. The dictionary keys and values are:
* model_config: A JSON string containing the model configuration
* model_instance_kind: A string containing model instance kind
* model_instance_device_id: A string containing model instance device ID
* model_repository: Model repository path
* model_version: Model version
* model_name: Model name
"""
#The initialize function is only called when loading the model.
def execute(self, requests):
"""`execute` must be implemented in every Python model. `execute`
function receives a list of pb_utils.InferenceRequest as the only
argument. This function is called when an inference is requested
for this model. Depending on the batching configuration (e.g. Dynamic
Batching) used, `requests` may contain multiple requests. Every
Python model, must create one pb_utils.InferenceResponse for every
pb_utils.InferenceRequest in `requests`. If there is an error, you can
set the error argument when creating a pb_utils.InferenceResponse.
Parameters
----------
requests : list
A list of pb_utils.InferenceRequest
Returns
-------
list
A list of pb_utils.InferenceResponse. The length of this list must
be the same as `requests`
"""
#Pre-processing code that calls the execute function for each prediction.
#FastDeploy provides pre and post processing python functions for some models, so you don't need to program them.
#Please use fd.vision.detection.YOLOv5.preprocess(data) for calling.
#You can write your own processing logic
def finalize(self):
"""`finalize` is called only once when the model is being unloaded.
Implementing `finalize` function is optional. This function allows
the model to perform any necessary clean ups before exit.
"""
#Destructor code, it is only called when the model is being unloaded
```
Initialization operations are generally placed in the `initialize` function, which is executed only once, when the Python model service is loaded.
Resource release operations are generally placed in the `finalize` function, which is executed only once, when the Python model service is unloaded.
The pre- and post-processing logic is generally implemented in the `execute` function, which is executed each time the server receives a client request.
The input parameter `requests` of `execute` is a list of InferenceRequest objects. When [Dynamic Batching](#dynamic-batching) is not enabled, the length of `requests` is 1, i.e. there is only one InferenceRequest.
The return value of `execute` must be a list of InferenceResponse objects, usually with the same length as `requests`; that is, N InferenceRequests must yield N InferenceResponses.
You can write your own code in the `execute` function for data pre-processing or post-processing. For convenience, FastDeploy provides pre- and post-processing Python functions for some models. For example, you can write:
```python
import fastdeploy as fd
fd.vision.detection.YOLOv5.preprocess(data)
```
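To make the N-requests-to-N-responses contract concrete, here is a minimal, hypothetical `execute` body; the tensor names `INPUT_0` and `OUTPUT_0` are illustrative assumptions and must match the names declared in your config.pbtxt:
```python
def execute(self, requests):
    responses = []
    for request in requests:
        # Read one input tensor by the name declared in config.pbtxt.
        in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT_0")
        data = in_tensor.as_numpy()
        # Your pre- or post-processing logic goes here (pass-through shown).
        out_tensor = pb_utils.Tensor("OUTPUT_0", data)
        # Exactly one InferenceResponse per InferenceRequest.
        responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
    return responses
```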
## Dynamic Batching
The principle of dynamic batching is shown in the figure. When user request concurrency is high while GPU utilization is low, throughput can be improved by merging different user requests into one large batch for model prediction.
<p align="center">
<br>
<img src='https://user-images.githubusercontent.com/35565423/204285444-1f9aaf24-05c2-4aae-bbd5-47dc3582dc01.png'>
<br>
</p>
Enabling dynamic batching is as simple as adding the line `dynamic_batching {}` to the end of config.pbtxt, as shown in the sketch below. Please note that the batch size after merging will not exceed `max_batch_size`.
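A minimal sketch (the value 16 is an illustrative assumption):
```
max_batch_size: 16
dynamic_batching {}
```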
**Note**: The field `ensemble_scheduling` and the field `dynamic_batching` should not coexist. That is, dynamic batching is not available for **Ensemble Model Service**, since **Ensemble Model Service** itself is just a combination of multiple model services.
## Multi-Model Instance
The principle of multiple model instances is shown in the figure below. When pre- and post-processing (which usually do not support batching) become the performance bottleneck of the whole service, latency can be improved by adding more **Python model service** instances for pre- and post-processing.
Of course, you can also start multiple **Runtime model service** instances to improve GPU utilization.
<p align="center">
<br>
<img src='https://user-images.githubusercontent.com/35565423/204268809-6ea95a9f-e014-468a-8597-98b67ebc7381.png'>
<br>
</p>
Setting multiple model instances is simple; just add the following to config.pbtxt:
```
instance_group [
{
count: 3
kind: KIND_CPU
}
]
```

View File

@@ -0,0 +1 @@
English | [中文](../zh_CN/model_configuration.md)

View File

@@ -1,3 +1,4 @@
English | [中文](../zh_CN/model_repository.md)
# Model Repository
When FastDeploy starts the serving, it deploys one or more models specified in the model repository. While the serving is running, the models in the service can be modified following [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_management.md), and the serving can be provided from one or more model repositories specified at serving initiation.

View File

@@ -1,3 +1,4 @@
中文 [English](../EN/client-en.md)
# Client Access Instructions
Taking access to a yolov5 model deployed with fastdeployserver as an example, this document describes how a client requests the server for inference services. For how to deploy the yolov5 model with fastdeployserver, please refer to [yolov5 service-oriented deployment](../../../examples/vision/detection/yolov5/serving)

View File

@@ -1,3 +1,4 @@
中文 [English](../EN/compile-en.md)
# Service-oriented Deployment Image Compilation
This document describes how to build a FastDeploy image

View File

@@ -1,3 +1,4 @@
中文 [English](../EN/demo-en.md)
# Service-oriented Deployment Demo
We take the simple yolov5 model as an example to describe how to perform a service-oriented deployment. For the detailed code and steps, see [yolov5 service-oriented deployment](../../../examples/vision/detection/yolov5/serving). Before reading this document, it is recommended that you read the following documents first:
- [Service-oriented Model Repository Description](model_repository.md) (how to prepare the model repository)

View File

@@ -1,3 +1,4 @@
中文 [English](../EN/model_configuration-en.md)
# Model Configuration
Each model in the model repository must include a model configuration that provides the required and optional information about the model. This configuration is generally written in the *config.pbtxt* file, in [ModelConfig protobuf](https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto) format.

View File

@@ -1,3 +1,4 @@
中文 [English](../EN/model_repository-en.md)
# Model Repository
When FastDeploy starts the serving, it deploys one or more models specified in the model repository. While the serving is running, the models in the service can be modified as described in [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_management.md).