Mirror of https://github.com/PaddlePaddle/FastDeploy.git

Update docs
@@ -1,35 +1,27 @@
-Simplified Chinese | [English](face_detection_result.md)
+[English](face_alignment_result.md) | Simplified Chinese

-# FaceDetectionResult (Face Detection Result)
+# FaceAlignmentResult (Face Alignment / Facial Landmark Detection Result)

-The FaceDetectionResult code is defined in `fastdeploy/vision/common/result.h` and describes the detected face boxes, face landmarks, confidence levels and the number of landmarks per face.
+The FaceAlignmentResult code is defined in `fastdeploy/vision/common/result.h` and describes face landmarks.

## C++ Definition

-`fastdeploy::vision::FaceDetectionResult`
+`fastdeploy::vision::FaceAlignmentResult`

```c++
-struct FaceDetectionResult {
-  std::vector<std::array<float, 4>> boxes;
+struct FaceAlignmentResult {
  std::vector<std::array<float, 2>> landmarks;
-  std::vector<float> scores;
-  int landmarks_per_face;
  void Clear();
  std::string Str();
};
```

-- **boxes**: Member variable. The coordinates of all detected face boxes in a single image; `boxes.size()` is the number of boxes, and each box is given by 4 float values in the order xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners.
-- **scores**: Member variable. The confidence levels of all detected faces in a single image; the number of elements matches `boxes.size()`.
-- **landmarks**: Member variable. The keypoints of all faces detected in a single image; the number of elements matches `boxes.size()`.
-- **landmarks_per_face**: Member variable. The number of keypoints in each face box.
+- **landmarks**: Member variable. All the keypoints detected in a single face image.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).

## Python Definition

-`fastdeploy.vision.FaceDetectionResult`
+`fastdeploy.vision.FaceAlignmentResult`

-- **boxes**(list of list(float)): Member variable. The coordinates of all detected face boxes in a single image; each element is a length-4 list representing one box, as xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners.
-- **scores**(list of float): Member variable. The confidence levels of all detected faces in a single image.
-- **landmarks**(list of list(float)): Member variable. The keypoints of all faces detected in a single image.
-- **landmarks_per_face**(int): Member variable. The number of keypoints in each face box.
+- **landmarks**(list of list(float)): Member variable. All the keypoints detected in a single face image.
306 docs/api_docs/cpp/vision_results_cn.md Normal file
@@ -0,0 +1,306 @@
# Description of Vision Model Prediction Results

## ClassifyResult (Image Classification Result)

The ClassifyResult code is defined in `fastdeploy/vision/common/result.h` and describes the classification result and confidence levels of an image.

### C++ Definition

`fastdeploy::vision::ClassifyResult`

```c++
struct ClassifyResult {
  std::vector<int32_t> label_ids;
  std::vector<float> scores;
  void Clear();
  std::string Str();
};
```

- **label_ids**: Member variable. The classification results for a single image; the number of entries is determined by the topk value passed when using the classification model, e.g. the top 5 classification results can be returned.
- **scores**: Member variable. The confidence levels for the corresponding classification results of a single image; the number of entries is likewise determined by topk.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
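A minimal sketch of consuming these fields, assuming `res` was filled by a classification model's predict call:

```c++
#include <iostream>

#include "fastdeploy/vision.h"

// Print every (label, score) pair stored in a ClassifyResult.
void PrintClassifyResult(const fastdeploy::vision::ClassifyResult& res) {
  for (size_t i = 0; i < res.label_ids.size(); ++i) {
    std::cout << "label: " << res.label_ids[i]
              << ", score: " << res.scores[i] << std::endl;
  }
}
```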
## SegmentationResult (Image Segmentation Result)

The SegmentationResult code is defined in `fastdeploy/vision/common/result.h` and describes the segmentation category predicted for each pixel of the image, together with the probability of that category.

### C++ Definition

`fastdeploy::vision::SegmentationResult`

```c++
struct SegmentationResult {
  std::vector<uint8_t> label_map;
  std::vector<float> score_map;
  std::vector<int64_t> shape;
  bool contain_score_map = false;
  void Clear();
  void Free();
  std::string Str();
};
```

- **label_map**: Member variable. The segmentation category of each pixel of a single image; `label_map.size()` is the number of pixels in the image.
- **score_map**: Member variable. The predicted probability corresponding one-to-one with `label_map`: the segmentation category probability (when the model is exported with `--output_op argmax`), or the softmax-normalized probability (when the model is exported with `--output_op softmax`, or exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set at initialization).
- **shape**: Member variable. The shape of the output image, as H\*W.
- **Clear()**: Member function that clears the results stored in the structure.
- **Free()**: Member function that clears the results stored in the structure and releases the memory.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
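A sketch of indexing a single pixel, assuming `label_map` is stored row-major so that `label_map.size() == H * W`:

```c++
#include "fastdeploy/vision.h"

// Return the predicted class of pixel (y, x), assuming row-major storage.
int PixelClass(const fastdeploy::vision::SegmentationResult& res,
               int y, int x) {
  const int64_t width = res.shape[1];  // shape is {height, width}
  return static_cast<int>(res.label_map[y * width + x]);
}
```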
## DetectionResult (Object Detection Result)

The DetectionResult code is defined in `fastdeploy/vision/common/result.h` and describes the detected target boxes, target classes and confidence levels.

### C++ Definition

```c++
fastdeploy::vision::DetectionResult
```

```c++
struct DetectionResult {
  std::vector<std::array<float, 4>> boxes;
  std::vector<float> scores;
  std::vector<int32_t> label_ids;
  std::vector<Mask> masks;
  bool contain_masks = false;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable. The coordinates of all detected target boxes in a single image; `boxes.size()` is the number of boxes, and each box is given by 4 float values in the order xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners.
- **scores**: Member variable. The confidence levels of all detected targets in a single image; the number of elements matches `boxes.size()`.
- **label_ids**: Member variable. The classes of all detected targets in a single image; the number of elements matches `boxes.size()`.
- **masks**: Member variable. All detected instance masks in a single image; the number of elements and their shapes match `boxes`.
- **contain_masks**: Member variable. Whether the detection result contains instance masks; generally true for the results of instance segmentation models.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).

```c++
fastdeploy::vision::Mask
```

```c++
struct Mask {
  std::vector<int32_t> data;
  std::vector<int64_t> shape;  // (H, W) ...

  void Clear();
  std::string Str();
};
```

- **data**: Member variable. One detected mask.
- **shape**: Member variable. The shape of the mask, e.g. (h, w).
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
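A short sketch of walking a `DetectionResult` (the 0.5 threshold is only illustrative):

```c++
#include <iostream>

#include "fastdeploy/vision.h"

// Print every detection above a confidence threshold.
void DumpDetections(const fastdeploy::vision::DetectionResult& res,
                    float score_threshold = 0.5f) {
  for (size_t i = 0; i < res.boxes.size(); ++i) {
    if (res.scores[i] < score_threshold) continue;
    const auto& b = res.boxes[i];  // xmin, ymin, xmax, ymax
    std::cout << "label " << res.label_ids[i] << ", score " << res.scores[i]
              << ", box [" << b[0] << ", " << b[1] << ", " << b[2] << ", "
              << b[3] << "]";
    if (res.contain_masks) {
      // Each Mask carries its own (H, W) shape.
      std::cout << ", mask " << res.masks[i].shape[0] << "x"
                << res.masks[i].shape[1];
    }
    std::cout << std::endl;
  }
}
```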
## FaceAlignmentResult (Face Alignment / Facial Landmark Detection Result)

The FaceAlignmentResult code is defined in `fastdeploy/vision/common/result.h` and describes face landmarks.

### C++ Definition

`fastdeploy::vision::FaceAlignmentResult`

```c++
struct FaceAlignmentResult {
  std::vector<std::array<float, 2>> landmarks;
  void Clear();
  std::string Str();
};
```

- **landmarks**: Member variable. All the keypoints detected in a single face image.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).

## KeyPointDetectionResult (Keypoint Detection Result)

The KeyPointDetectionResult code is defined in `fastdeploy/vision/common/result.h` and describes the coordinates and confidence levels of the keypoints of each target detected in the image.

### C++ Definition

`fastdeploy::vision::KeyPointDetectionResult`

```c++
struct KeyPointDetectionResult {
  std::vector<std::array<float, 2>> keypoints;
  std::vector<float> scores;
  int num_joints = -1;
  void Clear();
  std::string Str();
};
```

- **keypoints**: Member variable. The keypoint coordinates of the detected targets.
  `keypoints.size() = N * J`
  - `N`: the number of targets in the image
  - `J`: num_joints (the number of keypoints of one target)
- **scores**: Member variable. The confidence levels of the keypoint coordinates of the detected targets.
  `scores.size() = N * J`
  - `N`: the number of targets in the image
  - `J`: num_joints (the number of keypoints of one target)
- **num_joints**: Member variable. The number of keypoints of one target.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
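Since `keypoints.size() = N * J`, a sketch of fetching keypoint `j` of target `n` (assuming the flattened array is target-major, i.e. all `J` keypoints of target 0 come first):

```c++
#include <array>

#include "fastdeploy/vision.h"

// Fetch keypoint j of target n from the flattened keypoint array.
std::array<float, 2> KeyPointAt(
    const fastdeploy::vision::KeyPointDetectionResult& res, int n, int j) {
  return res.keypoints[n * res.num_joints + j];
}
```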
## FaceRecognitionResult (Face Recognition Result)

The FaceRecognitionResult code is defined in `fastdeploy/vision/common/result.h` and describes the image feature embedding produced by a face recognition model.

### C++ Definition

`fastdeploy::vision::FaceRecognitionResult`

```c++
struct FaceRecognitionResult {
  std::vector<float> embedding;
  void Clear();
  std::string Str();
};
```

- **embedding**: Member variable. The final feature embedding extracted by the face recognition model, which can be used to compute the feature similarity between faces.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
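The document does not prescribe a similarity measure; cosine similarity is one common choice, sketched here over two embeddings of equal length:

```c++
#include <cmath>
#include <vector>

// Cosine similarity between two face embeddings of equal length.
float CosineSimilarity(const std::vector<float>& a,
                       const std::vector<float>& b) {
  float dot = 0.f, na = 0.f, nb = 0.f;
  for (size_t i = 0; i < a.size(); ++i) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);
}
```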
## MattingResult (Matting Result)

The MattingResult code is defined in `fastdeploy/vision/common/result.h` and describes the predicted alpha transparency values, the predicted foreground, etc.

### C++ Definition

`fastdeploy::vision::MattingResult`

```c++
struct MattingResult {
  std::vector<float> alpha;
  std::vector<float> foreground;
  std::vector<int64_t> shape;
  bool contain_foreground = false;
  void Clear();
  std::string Str();
};
```

- **alpha**: A one-dimensional vector of predicted alpha transparency values in the range [0., 1.], of length h x w, where h and w are the height and width of the input image.
- **foreground**: A one-dimensional vector of predicted foreground values in the range [0., 255.], of length h x w x c, where h and w are the height and width of the input image and c is generally 3. The foreground is not always present; this attribute is valid only if the model itself predicts a foreground.
- **contain_foreground**: Whether the predicted result contains a foreground.
- **shape**: The shape of the output: (h, w) when contain_foreground is false, and (h, w, c), with c generally 3, when contain_foreground is true.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
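FastDeploy's `fastdeploy::vision::SwapBackground` (used in the C++ example later in this commit) performs background replacement from a MattingResult; purely as an illustration of the data layout, a hand-rolled blend assuming 3-channel 8-bit images of the same size might look like:

```c++
#include <opencv2/opencv.hpp>

#include "fastdeploy/vision.h"

// Alpha-blend `image` over `background` using the predicted matte.
// Assumes 3-channel 8-bit images whose size matches res.shape == {h, w}.
cv::Mat BlendWithBackground(const cv::Mat& image, const cv::Mat& background,
                            const fastdeploy::vision::MattingResult& res) {
  const int h = static_cast<int>(res.shape[0]);
  const int w = static_cast<int>(res.shape[1]);
  cv::Mat out = background.clone();
  for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
      const float a = res.alpha[y * w + x];
      const cv::Vec3b& fg = image.at<cv::Vec3b>(y, x);
      cv::Vec3b& dst = out.at<cv::Vec3b>(y, x);
      for (int c = 0; c < 3; ++c) {
        dst[c] = static_cast<uchar>(a * fg[c] + (1.f - a) * dst[c]);
      }
    }
  }
  return out;
}
```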
## OCRResult (OCR Prediction Result)

The OCRResult code is defined in `fastdeploy/vision/common/result.h` and describes the text boxes detected and recognized in an image, the orientation classification of each text box, and the text content inside it.

### C++ Definition

```c++
fastdeploy::vision::OCRResult
```

```c++
struct OCRResult {
  std::vector<std::array<int, 8>> boxes;
  std::vector<std::string> text;
  std::vector<float> rec_scores;
  std::vector<float> cls_scores;
  std::vector<int32_t> cls_labels;
  ResultType type = ResultType::OCR;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable. The coordinates of all detected text boxes in a single image; `boxes.size()` is the number of boxes detected in the image, and each box is given by 8 int values describing its 4 corner points, in the order bottom-left, bottom-right, top-right, top-left.
- **text**: Member variable. The text content recognized inside the text boxes; the number of elements matches `boxes.size()`.
- **rec_scores**: Member variable. The confidence levels of the recognized text; the number of elements matches `boxes.size()`.
- **cls_scores**: Member variable. The confidence levels of the text-box classification results; the number of elements matches `boxes.size()`.
- **cls_labels**: Member variable. The orientation classes of the text boxes; the number of elements matches `boxes.size()`.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
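A small sketch of printing each recognized text line with its box corners (fields as documented above):

```c++
#include <iostream>

#include "fastdeploy/vision.h"

// Print each detected text box with its recognized text and score.
void DumpOCR(const fastdeploy::vision::OCRResult& res) {
  for (size_t i = 0; i < res.boxes.size(); ++i) {
    std::cout << res.text[i] << " (score " << res.rec_scores[i] << "), box:";
    for (int k = 0; k < 8; k += 2) {
      std::cout << " (" << res.boxes[i][k] << ", " << res.boxes[i][k + 1]
                << ")";
    }
    std::cout << std::endl;
  }
}
```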
## FaceDetectionResult (Face Detection Result)

The FaceDetectionResult code is defined in `fastdeploy/vision/common/result.h` and describes the detected face boxes, face landmarks, confidence levels, and the number of landmarks per face.

### C++ Definition

`fastdeploy::vision::FaceDetectionResult`

```c++
struct FaceDetectionResult {
  std::vector<std::array<float, 4>> boxes;
  std::vector<std::array<float, 2>> landmarks;
  std::vector<float> scores;
  int landmarks_per_face;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable. The coordinates of all detected face boxes in a single image; `boxes.size()` is the number of boxes, and each box is given by 4 float values in the order xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners.
- **scores**: Member variable. The confidence levels of all detected faces in a single image; the number of elements matches `boxes.size()`.
- **landmarks**: Member variable. The keypoints of all faces detected in a single image; the number of elements matches `boxes.size()`.
- **landmarks_per_face**: Member variable. The number of keypoints in each face box.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
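Assuming the flattened `landmarks` array stores each face's `landmarks_per_face` points consecutively (an assumption to verify against the model in use), landmark `j` of face `n` could be fetched as:

```c++
#include <array>

#include "fastdeploy/vision.h"

// Fetch landmark j of face n, assuming face-major flattened storage.
std::array<float, 2> FaceLandmarkAt(
    const fastdeploy::vision::FaceDetectionResult& res, int n, int j) {
  return res.landmarks[n * res.landmarks_per_face + j];
}
```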
## HeadPoseResult (Head Pose Result)

The HeadPoseResult code is defined in `fastdeploy/vision/common/result.h` and describes the head pose result.

### C++ Definition

`fastdeploy::vision::HeadPoseResult`

```c++
struct HeadPoseResult {
  std::vector<float> euler_angles;
  void Clear();
  std::string Str();
};
```

- **euler_angles**: Member variable. The Euler angles predicted for a single face image, stored in the order (yaw, pitch, roll); yaw is the horizontal turn angle, pitch the vertical angle, and roll the roll angle, each in the range [-90, +90] degrees.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).

API: `fastdeploy.vision.HeadPoseResult`. This result returns:
- **euler_angles**(list of float): Member variable. The Euler angles predicted for a single face image, stored in the order (yaw, pitch, roll); yaw is the horizontal turn angle, pitch the vertical angle, and roll the roll angle, each in the range [-90, +90] degrees.
## MOTResult (Multi-Object Tracking Result)

The MOTResult code is defined in `fastdeploy/vision/common/result.h` and describes the detected target boxes, tracking ids, target classes and confidence levels in multi-object tracking.

### C++ Definition

```c++
fastdeploy::vision::MOTResult
```

```c++
struct MOTResult {
  // left top right bottom
  std::vector<std::array<int, 4>> boxes;
  std::vector<int> ids;
  std::vector<float> scores;
  std::vector<int> class_ids;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable. The coordinates of all detected target boxes in a single frame; `boxes.size()` is the number of boxes, and each box is given by 4 int values in the order xmin, ymin, xmax, ymax, i.e. the top-left and bottom-right corners.
- **ids**: Member variable. The tracking ids of all targets in a single frame; the number of elements matches `boxes.size()`.
- **scores**: Member variable. The confidence levels of all detected targets in a single frame; the number of elements matches `boxes.size()`.
- **class_ids**: Member variable. The classes of all targets in a single frame; the number of elements matches `boxes.size()`.
- **Clear()**: Member function that clears the results stored in the structure.
- **Str()**: Member function that outputs the information in the structure as a string (for debugging).
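A minimal sketch of consuming a frame's tracking output:

```c++
#include <iostream>

#include "fastdeploy/vision.h"

// Print each tracked target of the current frame with its track id.
void DumpTracks(const fastdeploy::vision::MOTResult& res) {
  for (size_t i = 0; i < res.boxes.size(); ++i) {
    std::cout << "id " << res.ids[i] << ", class " << res.class_ids[i]
              << ", score " << res.scores[i] << std::endl;
  }
}
```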
276 docs/api_docs/cpp/vision_results_en.md Normal file
@@ -0,0 +1,276 @@
# Description of Vision Results

The Chinese version of this document is available at [视觉模型预测结果说明](./vision_results_cn.md).

## Image Classification Result

The ClassifyResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the classification result and confidence levels of an image.

### C++ Definition

`fastdeploy::vision::ClassifyResult`

```c++
struct ClassifyResult {
  std::vector<int32_t> label_ids;
  std::vector<float> scores;
  void Clear();
  std::string Str();
};
```

- **label_ids**: Member variable which indicates the classification results of a single image. Its number is determined by the topk passed in when using the classification model, e.g. it can return the top 5 classification results.
- **scores**: Member variable which indicates the confidence levels of a single image on the corresponding classification results. Its number is determined by the topk passed in when using the classification model, e.g. it can return the top 5 classification confidence levels.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Segmentation Result

The SegmentationResult code is defined in `fastdeploy/vision/common/result.h`, indicating the segmentation category and the probability of that category predicted for each pixel of the image.

### C++ Definition

`fastdeploy::vision::SegmentationResult`

```c++
struct SegmentationResult {
  std::vector<uint8_t> label_map;
  std::vector<float> score_map;
  std::vector<int64_t> shape;
  bool contain_score_map = false;
  void Clear();
  std::string Str();
};
```

- **label_map**: Member variable which indicates the segmentation category of each pixel in a single image. `label_map.size()` indicates the number of pixels of the image.
- **score_map**: Member variable which indicates the predicted probability corresponding to `label_map`: the segmentation category probability (when the model is exported with `--output_op argmax`), or the softmax-normalized probability (when the model is exported with `--output_op softmax`, or exported with `--output_op none` while the [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set during model initialization).
- **shape**: Member variable which indicates the shape of the output image, as H\*W.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Target Detection Result

The DetectionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the target boxes, target classes and confidence levels detected in the image.

### C++ Definition

```c++
fastdeploy::vision::DetectionResult
```

```c++
struct DetectionResult {
  std::vector<std::array<float, 4>> boxes;
  std::vector<float> scores;
  std::vector<int32_t> label_ids;
  std::vector<Mask> masks;
  bool contain_masks = false;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable which indicates the coordinates of all detected target boxes in a single image. `boxes.size()` indicates the number of boxes; each box is represented by 4 float values in the order xmin, ymin, xmax, ymax, i.e. the coordinates of the top-left and bottom-right corners.
- **scores**: Member variable which indicates the confidence levels of all targets detected in a single image, where the number of elements is the same as `boxes.size()`.
- **label_ids**: Member variable which indicates all target categories detected in a single image, where the number of elements is the same as `boxes.size()`.
- **masks**: Member variable which indicates all detected instance masks of a single image, where the number of elements and the shape sizes are the same as `boxes`.
- **contain_masks**: Member variable which indicates whether the detected result contains instance masks, which is generally true for instance segmentation models.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

```c++
fastdeploy::vision::Mask
```

```c++
struct Mask {
  std::vector<int32_t> data;
  std::vector<int64_t> shape;  // (H, W) ...

  void Clear();
  std::string Str();
};
```

- **data**: Member variable which indicates a detected mask.
- **shape**: Member variable which indicates the shape of the mask, e.g. (h, w).
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Face Detection Result

The FaceDetectionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the detected face boxes, face landmarks, confidence levels and the number of landmarks per face.

### C++ Definition

`fastdeploy::vision::FaceDetectionResult`

```c++
struct FaceDetectionResult {
  std::vector<std::array<float, 4>> boxes;
  std::vector<std::array<float, 2>> landmarks;
  std::vector<float> scores;
  int landmarks_per_face;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable which indicates the coordinates of all detected face boxes in a single image. `boxes.size()` indicates the number of boxes; each box is represented by 4 float values in the order xmin, ymin, xmax, ymax, i.e. the coordinates of the top-left and bottom-right corners.
- **scores**: Member variable which indicates the confidence levels of all faces detected in a single image, where the number of elements is the same as `boxes.size()`.
- **landmarks**: Member variable which indicates the keypoints of all faces detected in a single image, where the number of elements is the same as `boxes.size()`.
- **landmarks_per_face**: Member variable which indicates the number of keypoints in each face box.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Keypoint Detection Result

The KeyPointDetectionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the coordinates and confidence levels of the keypoints of each target detected in the image.

### C++ Definition

`fastdeploy::vision::KeyPointDetectionResult`

```c++
struct KeyPointDetectionResult {
  std::vector<std::array<float, 2>> keypoints;
  std::vector<float> scores;
  int num_joints = -1;
  void Clear();
  std::string Str();
};
```

- **keypoints**: Member variable which indicates the coordinates of the detected target keypoints.
  `keypoints.size() = N * J`
  - `N`: the number of targets in the image
  - `J`: num_joints (the number of keypoints of a target)
- **scores**: Member variable which indicates the confidence levels of the detected target keypoint coordinates.
  `scores.size() = N * J`
  - `N`: the number of targets in the image
  - `J`: num_joints (the number of keypoints of a target)
- **num_joints**: Member variable which indicates the number of keypoints of a target.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Face Recognition Result

The FaceRecognitionResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the image feature embedding produced by a face recognition model.

### C++ Definition

`fastdeploy::vision::FaceRecognitionResult`

```c++
struct FaceRecognitionResult {
  std::vector<float> embedding;
  void Clear();
  std::string Str();
};
```

- **embedding**: Member variable which indicates the final feature embedding extracted by the face recognition model, and can be used to compute the feature similarity between faces.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Matting Result

The MattingResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the predicted alpha transparency values, the predicted foreground, etc.

### C++ Definition

`fastdeploy::vision::MattingResult`

```c++
struct MattingResult {
  std::vector<float> alpha;
  std::vector<float> foreground;
  std::vector<int64_t> shape;
  bool contain_foreground = false;
  void Clear();
  std::string Str();
};
```

- **alpha**: A one-dimensional vector holding the predicted alpha transparency values. The value range is [0., 1.] and the length is h x w, in which h and w are the height and the width of the input image respectively.
- **foreground**: A one-dimensional vector holding the predicted foreground. The value range is [0., 255.] and the length is h x w x c, in which h and w are the height and the width of the input image and c is generally 3. This vector is valid only when the model itself predicts the foreground.
- **contain_foreground**: Used to indicate whether the result contains a foreground.
- **shape**: Used to indicate the shape of the output. When contain_foreground is false, the shape only contains (h, w); when contain_foreground is true, the shape contains (h, w, c), in which c is generally 3.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## OCR Prediction Result

The OCRResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the text boxes detected in the image, the text box orientation classification, and the text content.

### C++ Definition

```c++
fastdeploy::vision::OCRResult
```

```c++
struct OCRResult {
  std::vector<std::array<int, 8>> boxes;
  std::vector<std::string> text;
  std::vector<float> rec_scores;
  std::vector<float> cls_scores;
  std::vector<int32_t> cls_labels;
  ResultType type = ResultType::OCR;
  void Clear();
  std::string Str();
};
```

- **boxes**: Member variable which indicates the coordinates of all detected text boxes in a single image. `boxes.size()` indicates the number of detected boxes. Each box is represented by 8 int values indicating the 4 corner points of the box, in the order bottom-left, bottom-right, top-right, top-left.
- **text**: Member variable which indicates the content of the recognized text in multiple text boxes, where the number of elements is the same as `boxes.size()`.
- **rec_scores**: Member variable which indicates the confidence levels of the recognized text, where the number of elements is the same as `boxes.size()`.
- **cls_scores**: Member variable which indicates the confidence levels of the classification results of the text boxes, where the number of elements is the same as `boxes.size()`.
- **cls_labels**: Member variable which indicates the orientation categories of the text boxes, where the number of elements is the same as `boxes.size()`.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Face Alignment Result

The FaceAlignmentResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate face landmarks.

### C++ Definition

`fastdeploy::vision::FaceAlignmentResult`

```c++
struct FaceAlignmentResult {
  std::vector<std::array<float, 2>> landmarks;
  void Clear();
  std::string Str();
};
```

- **landmarks**: Member variable which indicates all the keypoints detected in a single face image.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

## Head Pose Result

The HeadPoseResult code is defined in `fastdeploy/vision/common/result.h`, and is used to indicate the head pose result.

### C++ Definition

`fastdeploy::vision::HeadPoseResult`

```c++
struct HeadPoseResult {
  std::vector<float> euler_angles;
  void Clear();
  std::string Str();
};
```

- **euler_angles**: Member variable which indicates the Euler angles predicted for a single face image, stored in the order (yaw, pitch, roll), with yaw representing the horizontal turn angle, pitch the vertical angle, and roll the roll angle, each with a value range of [-90, +90] degrees.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).
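A minimal sketch of reading the angles, assuming `res` was produced by a head pose model:

```c++
#include <iostream>

#include "fastdeploy/vision.h"

// Read the predicted Euler angles, stored as (yaw, pitch, roll) in degrees.
void PrintHeadPose(const fastdeploy::vision::HeadPoseResult& res) {
  std::cout << "yaw: " << res.euler_angles[0]
            << ", pitch: " << res.euler_angles[1]
            << ", roll: " << res.euler_angles[2] << std::endl;
}
```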
@@ -4,6 +4,7 @@

## Installing the Prebuilt FastDeploy Libraries
- [Download and install the prebuilt FastDeploy libraries](download_prebuilt_libraries.md)
>> **Note**: FastDeploy currently provides prebuilt libraries for only some environments; for other environments, build FastDeploy yourself following the documents below

## Building and Installing from Source
- [NVIDIA GPU deployment environment](gpu.md)
@@ -1,5 +1,17 @@

# Build Preparation for the Huawei Ascend NPU Deployment Environment

## Table of Contents

* [Introduction and Build Options](#简介以及编译选项)
* [Huawei Ascend Environment Preparation](#一华为昇腾环境准备)
* [Setting Up the Build Environment](#二编译环境搭建)
* [Building the C++ FastDeploy Library Based on Paddle Lite](#三基于-paddle-lite-的-c-fastdeploy-库编译)
* [Building the Python FastDeploy Library Based on Paddle Lite](#四基于-paddle-lite-的-python-fastdeploy-库编译)
* [Enabling FlyCV for Ascend Deployment](#五昇腾部署时开启flycv)
* [Ascend Deployment Demos](#六昇腾部署demo参考)

## Introduction and Build Options

Based on the Paddle Lite backend, FastDeploy supports deployment and inference on Huawei Ascend NPUs.
For more details, refer to the [Paddle Lite deployment examples](https://github.com/PaddlePaddle/Paddle-Lite/blob/develop/docs/demo_guides/huawei_ascend_npu.md).
@@ -114,7 +126,7 @@ python setup.py bdist_wheel

## 5. Enabling FlyCV for Ascend Deployment
[FlyCV](https://github.com/PaddlePaddle/FlyCV) is a high-performance computer vision image-processing library with extensive optimizations for the ARM architecture, giving it better performance than other image-processing libraries.
FastDeploy has integrated FlyCV; users can enable FlyCV on supported hardware platforms to accelerate end-to-end model inference.
-In end-to-end model inference, the pre-processing and post-processing stages run on the CPU. When using an ARM CPU + Ascend hardware platform, we recommend enabling FlyCV for an end-to-end inference speedup; see the [FlyCV usage document](./boost_cv_by_flycv.md).
+In end-to-end model inference, the pre-processing and post-processing stages run on the CPU. When using an ARM CPU + Ascend hardware platform, we recommend enabling FlyCV for an end-to-end inference speedup; see the [FlyCV usage document](../faq/boost_cv_by_flycv.md).
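As an illustrative sketch only (the switch below, `fastdeploy::vision::EnableFlyCV()`, is an assumed API name; verify it against the FastDeploy version in use), FlyCV would be enabled once at startup, before any pre-processing runs:

```c++
#include "fastdeploy/vision.h"

int main() {
  // Assumed API: route image pre-/post-processing through FlyCV instead of
  // OpenCV on supported (e.g. ARM) platforms.
  fastdeploy::vision::EnableFlyCV();
  // ... create RuntimeOption and models as usual ...
  return 0;
}
```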

## 6. Ascend Deployment Demos
1 examples/vision/matting/ppmatting Symbolic link
@@ -0,0 +1 @@
/huangjianhui/doc/FastDeploy/examples/vision/segmentation/ppmatting/
@@ -1,42 +0,0 @@
English | [Simplified Chinese](README_CN.md)
# PP-Matting Model Deployment

## Model Description

- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)

## List of Supported Models

FastDeploy currently supports the deployment of the following models

- [PP-Matting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [PP-HumanMatting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [ModNet models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)


## Export Deployment Model

Before deployment, PP-Matting needs to be exported as a deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information. (Tip: you need to set the `--input_shape` parameter of the export script when exporting PP-Matting and PP-HumanMatting models)


## Download Pre-trained Models

For developers' testing, models exported from PP-Matting are provided below. Developers can download and use them directly.

The accuracy column comes from the model descriptions in PP-Matting (accuracy data are not provided there); refer to the introduction in PP-Matting for details.

| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - | |
| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - | |
| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - | |
| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - | |
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - | |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - | |


## Detailed Deployment Tutorials

- [Python Deployment](python)
- [C++ Deployment](cpp)
@@ -1,43 +0,0 @@
[English](README.md) | Simplified Chinese
# PP-Matting Model Deployment

## Model Version Description

- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)

## List of Supported Models

FastDeploy currently supports the deployment of the following models

- [PP-Matting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [PP-HumanMatting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [ModNet models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)


## Export Deployment Model

Before deployment, PP-Matting needs to be exported as a deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for the export steps. (Tip: you need to set the `--input_shape` parameter of the export script when exporting PP-Matting and PP-HumanMatting models)


## Download Pre-trained Models

For developers' testing, models exported from PP-Matting are provided below. Developers can download and use them directly.

The accuracy column comes from the model descriptions in PP-Matting (accuracy data are not provided there); refer to the introduction in PP-Matting for details.


| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - | |
| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - | |
| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - | |
| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - | |
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - | |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - | |


## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
@@ -1,14 +0,0 @@
```cmake
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)

# Path of the downloaded and extracted FastDeploy SDK.
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers.
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy libraries.
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
```
@@ -1,93 +0,0 @@
English | [Simplified Chinese](README_CN.md)
# PP-Matting C++ Deployment Example

This directory provides `infer.cc`, which quickly finishes the deployment of PP-Matting on CPU/GPU, as well as GPU deployment accelerated by TensorRT.
Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Libraries](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking PP-Matting inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Libraries` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg


# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```

The visualized result after running is as follows
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

The above commands work for Linux or macOS. For SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## PP-Matting C++ Interface

### PPMatting Class

```c++
fastdeploy::vision::matting::PPMatting(
    const string& model_file,
    const string& params_file,
    const string& config_file,
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::PADDLE)
```

PP-Matting model loading and initialization, where model_file is the exported Paddle model file.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default

#### Predict Function

> ```c++
> PPMatting::Predict(cv::Mat* im, MattingResult* result)
> ```
>
> Model prediction interface: takes an input image and produces the prediction result directly.
>
> **Parameters**
>
> > * **im**: Input image; note that it must be in HWC, BGR format
> > * **result**: The matting result. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult
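Condensing the interface above into a minimal call sequence (paths follow the download step earlier in this README; error handling trimmed to the essentials):

```c++
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // Paths follow the model/image download step in this README.
  auto model = fastdeploy::vision::matting::PPMatting(
      "PP-Matting-512/model.pdmodel", "PP-Matting-512/model.pdiparams",
      "PP-Matting-512/deploy.yaml");
  if (!model.Initialized()) return -1;  // model failed to load

  cv::Mat im = cv::imread("matting_input.jpg");
  fastdeploy::vision::MattingResult res;
  if (!model.Predict(&im, &res)) return -1;  // inference failed
  std::cout << res.Str() << std::endl;       // dump the result for inspection
  return 0;
}
```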
### Class Member Variables

#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -1,94 +0,0 @@
[English](README.md) | Simplified Chinese
# PP-Matting C++ Deployment Example

This directory provides `infer.cc`, which quickly finishes the deployment of PP-Matting on CPU/GPU, as well as GPU deployment accelerated by TensorRT.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking PP-Matting inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Libraries` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg


# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```

The visualized result after running is as follows
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

The above commands work for Linux or macOS. For SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## PP-Matting C++ Interface

### PPMatting Class

```c++
fastdeploy::vision::matting::PPMatting(
    const string& model_file,
    const string& params_file,
    const string& config_file,
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::PADDLE)
```

PP-Matting model loading and initialization, where model_file is the exported Paddle model file.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default

#### Predict Function

> ```c++
> PPMatting::Predict(cv::Mat* im, MattingResult* result)
> ```
>
> Model prediction interface: takes an input image and produces the prediction result directly.
>
> **Parameters**
>
> > * **im**: Input image; note that it must be in HWC, BGR format
> > * **result**: The matting result. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult

### Class Member Variables

#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
@@ -1,173 +0,0 @@
```c++
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <iostream>

#include "fastdeploy/vision.h"

#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif

// Run PP-Matting on CPU, then visualize the matte and swap the background.
void CpuInfer(const std::string& model_dir, const std::string& image_file,
              const std::string& background_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";
  auto option = fastdeploy::RuntimeOption();
  option.UseCpu();
  auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
                                                      config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  cv::Mat bg = cv::imread(background_file);
  fastdeploy::vision::MattingResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  auto vis_im = fastdeploy::vision::VisMatting(im, res);
  auto vis_im_with_bg = fastdeploy::vision::SwapBackground(im, bg, res);
  cv::imwrite("visualized_result_replaced_bg.jpg", vis_im_with_bg);
  cv::imwrite("visualized_result_fg.jpg", vis_im);
  std::cout << "Visualized results saved in "
               "./visualized_result_replaced_bg.jpg and "
               "./visualized_result_fg.jpg"
            << std::endl;
}

// Same pipeline on a KunlunXin XPU.
void KunlunXinInfer(const std::string& model_dir, const std::string& image_file,
                    const std::string& background_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";
  auto option = fastdeploy::RuntimeOption();
  option.UseKunlunXin();
  auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
                                                      config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  cv::Mat bg = cv::imread(background_file);
  fastdeploy::vision::MattingResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  auto vis_im = fastdeploy::vision::VisMatting(im, res);
  auto vis_im_with_bg = fastdeploy::vision::SwapBackground(im, bg, res);
  cv::imwrite("visualized_result_replaced_bg.jpg", vis_im_with_bg);
  cv::imwrite("visualized_result_fg.jpg", vis_im);
  std::cout << "Visualized results saved in "
               "./visualized_result_replaced_bg.jpg and "
               "./visualized_result_fg.jpg"
            << std::endl;
}

// Same pipeline on GPU via the Paddle Inference backend.
void GpuInfer(const std::string& model_dir, const std::string& image_file,
              const std::string& background_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UsePaddleInferBackend();
  auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
                                                      config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  cv::Mat bg = cv::imread(background_file);
  fastdeploy::vision::MattingResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  auto vis_im = fastdeploy::vision::VisMatting(im, res);
  auto vis_im_with_bg = fastdeploy::vision::SwapBackground(im, bg, res);
  cv::imwrite("visualized_result_replaced_bg.jpg", vis_im_with_bg);
  cv::imwrite("visualized_result_fg.jpg", vis_im);
  std::cout << "Visualized results saved in "
               "./visualized_result_replaced_bg.jpg and "
               "./visualized_result_fg.jpg"
            << std::endl;
}

// Same pipeline on GPU via the TensorRT backend.
void TrtInfer(const std::string& model_dir, const std::string& image_file,
              const std::string& background_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  option.SetTrtInputShape("img", {1, 3, 512, 512});
  auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
                                                      config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);
  cv::Mat bg = cv::imread(background_file);
  fastdeploy::vision::MattingResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  auto vis_im = fastdeploy::vision::VisMatting(im, res);
  auto vis_im_with_bg = fastdeploy::vision::SwapBackground(im, bg, res);
  cv::imwrite("visualized_result_replaced_bg.jpg", vis_im_with_bg);
  cv::imwrite("visualized_result_fg.jpg", vis_im);
  std::cout << "Visualized results saved in "
               "./visualized_result_replaced_bg.jpg and "
               "./visualized_result_fg.jpg"
            << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 5) {
    std::cout
        << "Usage: infer_demo path/to/model_dir path/to/image "
           "path/to/background_image run_option, "
           "e.g. ./infer_demo ./PP-Matting-512 ./test.jpg ./test_bg.jpg 0"
        << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend; 3: run "
                 "with kunlunxin."
              << std::endl;
    return -1;
  }
  if (std::atoi(argv[4]) == 0) {
    CpuInfer(argv[1], argv[2], argv[3]);
  } else if (std::atoi(argv[4]) == 1) {
    GpuInfer(argv[1], argv[2], argv[3]);
  } else if (std::atoi(argv[4]) == 2) {
    TrtInfer(argv[1], argv[2], argv[3]);
  } else if (std::atoi(argv[4]) == 3) {
    KunlunXinInfer(argv[1], argv[2], argv[3]);
  }
  return 0;
}
```
@@ -1,81 +0,0 @@
English | [Simplified Chinese](README_CN.md)
# PP-Matting Python Deployment Example

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

This directory provides `infer.py`, which quickly finishes the deployment of PP-Matting on CPU/GPU, as well as GPU deployment accelerated by TensorRT. The script is as follows
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/ppmatting/python

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
# TensorRT inference on GPU (Attention: serializing the model when running TensorRT inference for the first time takes some time. Please be patient.)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
```

The visualized result after running is as follows
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>
## PP-Matting Python Interface

```python
fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PP-Matting model loading and initialization, where model_file, params_file and config_file are the Paddle inference files exported from the trained model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default

### predict function

> ```python
> PPMatting.predict(input_image)
> ```
>
> Model prediction interface: takes an input image and produces the prediction result directly.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note that it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.MattingResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.

### Class Member Variables

#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


## Other Documents

- [PP-Matting Model Description](..)
- [PP-Matting C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -1,81 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PP-Matting Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成PP-Matting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/matting/ppmatting/python
|
||||
|
||||
# 下载PP-Matting模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
|
||||
tar -xvf PP-Matting-512.tgz
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
|
||||
# CPU推理
|
||||
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
|
||||
# 昆仑芯XPU推理
|
||||
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
<div width="840">
|
||||
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
|
||||
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
|
||||
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
|
||||
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
|
||||
</div>
|
||||
## PP-Matting Python接口
|
||||
|
||||
```python
|
||||
fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PP-Matting模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
### predict function

> ```python
> PPMatting.predict(input_image)
> ```
>
> Model prediction interface: input an image and the matting result is returned directly.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.MattingResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
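As a rough sketch of post-processing the result, assuming `MattingResult` exposes a flattened `alpha` list together with a `shape` field as the result docs describe (verify the field names against your FastDeploy version):

```python
import numpy as np

# `result` is the MattingResult returned by PPMatting.predict(...).
# Reshape the flattened alpha values into an H x W matte (assumed layout).
alpha = np.array(result.alpha).reshape(result.shape)  # values in [0, 1]
binary_mask = (alpha > 0.5).astype(np.uint8)  # threshold into a hard mask
```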
### Class Member Attributes
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results.


## Other Documents

- [PP-Matting Model Description](..)
- [PP-Matting C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
@@ -1,70 +0,0 @@
import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    import ast
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of PaddleSeg model.")
    parser.add_argument(
        "--image", type=str, required=True, help="Path of test image file.")
    parser.add_argument(
        "--bg",
        type=str,
        required=True,
        help="Path of test background image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.")
    parser.add_argument(
        "--use_trt",
        type=ast.literal_eval,
        default=False,
        help="Whether to use TensorRT.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()
    if args.device.lower() == "gpu":
        option.use_gpu()
        option.use_paddle_infer_backend()

    if args.use_trt:
        option.use_trt_backend()
        option.set_trt_input_shape("img", [1, 3, 512, 512])

    if args.device.lower() == "kunlunxin":
        option.use_kunlunxin()
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")
config_file = os.path.join(args.model, "deploy.yaml")
model = fd.vision.matting.PPMatting(
    model_file, params_file, config_file, runtime_option=runtime_option)

# Predict the matting result for the image
im = cv2.imread(args.image)
bg = cv2.imread(args.bg)
result = model.predict(im)
print(result)
# Visualize the results
vis_im = fd.vision.vis_matting(im, result)
vis_im_with_bg = fd.vision.swap_background(im, bg, result)
cv2.imwrite("visualized_result_fg.jpg", vis_im)
cv2.imwrite("visualized_result_replaced_bg.jpg", vis_im_with_bg)
print(
    "Visualized result save in ./visualized_result_replaced_bg.jpg and ./visualized_result_fg.jpg"
)
@@ -1,49 +1,32 @@
English | [简体中文](README_CN.md)
# PaddleSeg Model Deployment
# High-Performance Full-Scenario PaddleSeg Model Deployment with FastDeploy

## Model Description
## Introduction to FastDeploy

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
[FastDeploy](https://github.com/PaddlePaddle/FastDeploy) is an easy-to-use, flexible, and highly efficient all-scenario AI inference deployment tool. With FastDeploy, PaddleSeg models can be deployed quickly and simply on more than 10 kinds of hardware.

FastDeploy currently supports the deployment of the following models
## Supported Hardware

- [U-Net models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/unet/README.md)
- [PP-LiteSeg models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
- [PP-HumanSeg models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/README.md)
- [FCN models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/fcn/README.md)
- [DeepLabV3 models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/deeplabv3/README.md)

【Attention】For **PP-Matting**, **PP-HumanMatting** and **ModNet** deployment, please refer to [Matting Model Deployment](../../matting)

## Prepare PaddleSeg Deployment Model

For the export of the PaddleSeg model, refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information

**Attention**
- The exported PaddleSeg model contains three files, including `model.pdmodel`, `model.pdiparams` and `deploy.yaml`. FastDeploy reads the pre-processing information needed for inference from the yaml file.

## Download Pre-trained Model

For developers' testing, some of the exported PaddleSeg models are provided below.
- without-argmax export mode: `--input_shape` **not specified**, `--output_op none` **specified**
- with-argmax export mode: `--input_shape` **not specified**, `--output_op argmax` **specified**

Developers can download them directly.

| Supported Hardware | | | |
|:----- | :-- | :-- | :-- |
| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu) | [Phytium CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
| [Intel GPU (discrete/integrated)](cpu-gpu) | [Kunlun](kunlun) | [Ascend](ascend) | [Rockchip](rockchip) |
| [Amlogic](amlogic) | [Sophgo](sophgo) | | |

| Model | Parameter Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [Unet-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_with_argmax_infer.tgz) \| [Unet-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-B(STDC2)-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz) \| [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 79.04% | 79.52% | 79.85% |
| [PP-HumanSegV1-Lite-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
| [PP-HumanSegV2-Lite-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
| [PP-HumanSegV2-Mobile-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
| [PP-HumanSegV1-Server-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
| [Portait-PP-HumanSegV2-Lite-with-argmax(Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portait-PP-HumanSegV2-Lite-without-argmax(Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz)(GPU inference for ONNXRuntime is not supported now) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |

## More Deployment Options

## Detailed Deployment Tutorials
- [Android ARM CPU deployment](android)
- [Serving deployment](serving)
- [Web deployment](web)
- [Automated model compression tool](quantize)

- [Python Deployment](python)
- [C++ Deployment](cpp)

## FAQ

If you run into problems, check the FAQ collection or search the FastDeploy issues:

[FAQ Collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)

[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)

If none of the above resolves the problem, feel free to file a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues) with FastDeploy
@@ -1,23 +0,0 @@
# Deploying PaddleSeg Models with FastDeploy

## Introduction to FastDeploy

FastDeploy is an easy-to-use, flexible, and highly efficient all-scenario AI inference deployment tool. With FastDeploy, PaddleSeg models can be deployed quickly and simply on more than 10 kinds of hardware.

## Detailed Documentation

- [NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, Intel GPU (discrete/integrated)](cpu-gpu)
- [Kunlun](kunlun)
- [Ascend](ascend)
- [Rockchip](rockchip)
- [Amlogic](amlogic)
- [Sophgo](sophgo)
- [Android ARM CPU deployment](android)
- [Serving deployment](serving)
- [Automated model compression tool](quantize)
- [Web deployment](web)

## FAQ

If you run into problems, check the FAQ collection or search the FastDeploy issues below. If neither resolves it, feel free to file a new issue with FastDeploy.
[FAQ Collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
32
examples/vision/segmentation/paddleseg/amlogic/a311d/README.md
Executable file → Normal file
@@ -1,12 +1,30 @@
English | [简体中文](README_CN.md)
# Deployment of PP-LiteSeg Quantification Model on A311D
Now FastDeploy allows deploying PP-LiteSeg quantization model to A311D based on Paddle Lite.
[English](README.md) | 简体中文

For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md)
# Deploying PaddleSeg Models on the Amlogic A311D with FastDeploy
The Amlogic A311D is an advanced AI application processor. FastDeploy supports deploying PaddleSeg models on the A311D based on Paddle Lite

## PaddleSeg Models Supported on the Amlogic A311D
The currently supported PaddleSeg models are:
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

## Detailed Deployment Tutorials
## Pre-exported Inference Models
For developers' convenience, some quantized inference models exported from PaddleSeg are provided below and can be downloaded directly.

Only C++ deployment is supported on A311D.
| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
**Note**
- A quantized PaddleSeg model contains four files: `model.pdmodel`, `model.pdiparams`, `deploy.yaml` and `subgraph.txt`. FastDeploy reads the pre-processing information needed at inference time from the yaml file, and subgraph.txt is a configuration file stored for heterogeneous computing
- If no model in the list meets your requirements, follow the tutorial below to export an A311D-compatible model yourself

- [C++ deployment](cpp)
## Exporting a PaddleSeg Dynamic-Graph Model to an INT8 Model Supported by the A311D
Model export takes the following two steps
1. Export the dynamic-graph model trained with PaddleSeg as a static-graph inference model; see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
The Amlogic A311D only supports INT8
2. Quantize and compress the inference model into an INT8 model; for FastDeploy's quantization methods and the one-click automated compression tool, see [Model Quantization](../../../quantize/README.md)

## Detailed Deployment Documents

Currently, only C++ deployment is supported on the A311D.

- [C++ deployment](cpp)
@@ -1,22 +0,0 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on the Amlogic A311D with FastDeploy
The Amlogic A311D is an advanced AI application processor. Currently, FastDeploy supports deploying PaddleSeg models on the A311D based on Paddle Lite

## PaddleSeg Models Supported on the Amlogic A311D
Since the NPU of the Amlogic A311D only supports deploying INT8 quantized models, the supported quantized models are:
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

For developers' convenience, some inference models exported from PaddleSeg are provided below and can be downloaded directly.

For PaddleSeg model export, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)

| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
>> **Note**: For FastDeploy's quantization methods and the one-click automated compression tool, see [Model Quantization](../../../quantize/README.md)

## Detailed Deployment Documents

Currently, only C++ deployment is supported on the A311D.

- [C++ deployment](cpp)
56
examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/README.md
Executable file → Normal file
@@ -1,31 +1,28 @@
English | [简体中文](README_CN.md)
# PP-LiteSeg Quantitative Model C++ Deployment Example
[English](README.md) | 简体中文
# PP-LiteSeg Quantized Model C++ Deployment Example

`infer.cc` in this directory can help you quickly complete the inference acceleration of PP-LiteSeg quantization model deployment on A311D.
The `infer.cc` provided in this directory helps users quickly complete deployment and inference acceleration of the PP-LiteSeg quantized model on the Amlogic A311D.

## Deployment Preparations
### FastDeploy Cross-compile Environment Preparations
1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
## Deployment Preparation
### Preparing the FastDeploy Cross-Compilation Environment
Make sure the software and hardware environment meets the requirements and the cross-compilation environment is prepared; see [FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

### Model Preparations
1. You can directly use the quantized model provided by FastDeploy for deployment.
2. You can use the one-click automated compression tool provided by FastDeploy to quantize a model yourself and deploy the resulting quantized model. (Note: the quantized classification model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
3. The model requires heterogeneous computation. Please refer to [Heterogeneous Computing](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can first test the heterogeneous file we provide to verify whether the accuracy meets the requirements.
### Model Preparation
1. Users can directly deploy the [quantized models provided by FastDeploy](../README_CN.md#晶晨a311d支持的paddleseg模型).
2. If FastDeploy does not provide a quantized model that meets your requirements, see [Exporting a PaddleSeg dynamic-graph model to an INT8 model supported by the A311D](../README_CN.md#paddleseg动态图模型导出为a311d支持的int8模型) to export or train a quantized model yourself
3. If the exported or trained model shows an accuracy drop or errors out, heterogeneous computing is required so that part of the model's operators run on the A311D's ARM CPU for debugging and accuracy verification; the file required for heterogeneous computing is subgraph.txt. For details, see [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md).

For more information, please refer to [Model Quantization](../../quantize/README.md)
## Deploying the Quantized PP-LiteSeg Segmentation Model on the A311D
Follow these steps to deploy the PP-LiteSeg quantized model on the A311D:

## Deploying the Quantized PP-LiteSeg Segmentation model on A311D
Please follow these steps to complete the deployment of the PP-LiteSeg quantization model on A311D.
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)

2. Copy the compiled library to the current directory. You can run this line:
1. Copy the compiled library to the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
cp -r FastDeploy/build/fastdeploy-timvx/ path/to/paddleseg/amlogic/a311d/cpp
```

3. Download the model and example images required for deployment in the current path.
2. Download the model and example images required for deployment into the current path:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
cd path/to/paddleseg/amlogic/a311d/cpp
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
tar -xvf ppliteseg.tar.gz
@@ -34,26 +31,29 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```

4. Compile the deployment example. You can run the following lines:
3. Compile the deployment example with the following commands:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
cd path/to/paddleseg/amlogic/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After success, an install folder will be created with a running demo and libraries required for deployment.
# After a successful build, an install folder is generated containing a runnable demo and the libraries required for deployment
```

5. Deploy the PP-LiteSeg segmentation model to A311D based on adb. You can run the following lines:
4. Deploy the PP-LiteSeg segmentation model to the Amlogic A311D via adb with the following commands:
```bash
# Go to the install directory.
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp/build/install/
# The following line represents: bash run_with_adb.sh, demo needed to run, model path, image path, DEVICE ID.
# Enter the install directory
cd path/to/paddleseg/amlogic/a311d/cpp/build/install/
cp ../../run_with_adb.sh .
# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <device DEVICE_ID>
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```

The output is:
After successful deployment, the result is as follows:

<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">

Please note that the model deployed on A311D needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md).
## Quick Links
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)
@@ -1,59 +0,0 @@
[English](README.md) | 简体中文
# PP-LiteSeg Quantized Model C++ Deployment Example

The `infer.cc` provided in this directory helps users quickly complete deployment and inference acceleration of the PP-LiteSeg quantized model on the Amlogic A311D.

## Deployment Preparation
### Preparing the FastDeploy Cross-Compilation Environment
1. Make sure the software and hardware environment meets the requirements and the cross-compilation environment is prepared; see [Preparing the FastDeploy Cross-Compilation Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/a311d.md#交叉编译环境搭建)

### Model Preparation
1. Users can directly deploy the quantized models provided by FastDeploy.
2. Users can quantize a model themselves with the one-click automated model compression tool provided by FastDeploy and deploy the resulting quantized model. (Note: the quantized classification model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
3. The model requires heterogeneous computing; for the heterogeneous computing file, see [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the model, you can first test the heterogeneous file we provide to verify whether the accuracy meets the requirements.

For more quantization-related information, see [Model Quantization](../../../quantize/README.md)

## Deploying the Quantized PP-LiteSeg Segmentation Model on the A311D
Follow these steps to deploy the PP-LiteSeg quantized model on the A311D:
1. Cross-compile the FastDeploy library; see [Cross-compiling FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/a311d.md#基于-paddle-lite-的-fastdeploy-交叉编译库编译)

2. Copy the compiled library to the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp
```

3. Download the model and example images required for deployment into the current path:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
tar -xvf ppliteseg.tar.gz
cp -r ppliteseg models
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```

4. Compile the deployment example with the following commands:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After a successful build, an install folder is generated containing a runnable demo and the libraries required for deployment
```

5. Deploy the PP-LiteSeg segmentation model to the Amlogic A311D via adb with the following commands:
```bash
# Enter the install directory
cd FastDeploy/examples/vision/segmentation/paddleseg/amlogic/a311d/cpp/build/install/
# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <device DEVICE_ID>
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```

After successful deployment, the result is as follows:

<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">

Note in particular that models deployed on the A311D must be quantized; for model quantization, see [Model Quantization](../../../quantize/README.md)
@@ -161,7 +161,7 @@ For details, please refer to [SegmentationMainActivity](./app/src/main/java/com/
## Replace FastDeploy SDK and model
Steps to replace the FastDeploy prediction libraries and the model are very simple. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` indicates the version of the prediction library you are currently using. The model is located at `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
- Replace the FastDeploy Android SDK: download or compile the latest FastDeploy Android SDK, unzip it and put it in the `app/libs` directory. For details please refer to:
- [Use FastDeploy Java SDK on Android](../../../../../java/android/)
- [Use FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)

- Steps for replacing the PaddleSeg model:
- Put your PaddleSeg model in `app/src/main/assets/models`;
@@ -173,5 +173,5 @@ For details, please refer to [SegmentationMainActivity](./app/src/main/java/com/

## Other Documents
If you are interested in more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to the following:
- [Use FastDeploy Java SDK on Android](../../../../../java/android/)
- [Use FastDeploy C++ SDK on Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
- [Use FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)
- [Use FastDeploy C++ SDK on Android](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_cpp_sdk_on_android.md)
@@ -10,7 +10,7 @@
## Deployment Steps

1. The PaddleSeg segmentation demo is located in the `fastdeploy/examples/vision/segmentation/paddleseg/android` directory
1. The PaddleSeg segmentation demo is located in the `path/to/paddleseg/android` directory
2. Open the paddleseg/android project with Android Studio
3. Connect your phone to the computer, enable USB debugging and file transfer mode, and connect your device in Android Studio (the phone must allow installing software via USB)
@@ -161,7 +161,7 @@ model.init(modelFile, paramFile, configFile, option);
## Replace the FastDeploy SDK and Model
Replacing the FastDeploy prediction library and the model is very simple. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` indicates the version of the prediction library you are currently using. The model is located at `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
- Replace the FastDeploy Android SDK: download or compile the latest FastDeploy Android SDK, unzip it and put it in the `app/libs` directory; for detailed configuration see:
- [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
- [Use the FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)

- Steps for replacing the PaddleSeg model:
- Put your PaddleSeg model in the `app/src/main/assets/models` directory;
@@ -14,7 +14,7 @@ FastDeploy supports deploying PaddleSeg models on Huawei Ascend
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
- [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)

>>**Note** To deploy **PP-Matting** or **PP-HumanMatting** on Huawei Ascend, download the corresponding models from [Matting Model Deployment](../../matting/); the deployment process is the same as in this document
>>**Note** To deploy **PP-Matting** or **PP-HumanMatting** on Huawei Ascend, download the corresponding models from [Matting Model Deployment](../../ppmatting/); the deployment process is the same as in this document

## Prepare the PaddleSeg Deployment Model
For PaddleSeg model export, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
@@ -22,7 +22,7 @@ For PaddleSeg model export, see [Model Export](https://github.co
**Note**
- An exported PaddleSeg model contains three files: `model.pdmodel`, `model.pdiparams` and `deploy.yaml`; FastDeploy reads the pre-processing information needed at inference time from the yaml file

## Download Pre-trained Models
## Pre-exported Inference Models

For developers' convenience, some inference models exported from PaddleSeg are provided below
- without-argmax export mode: `--input_shape` **not specified**, `--output_op none` **specified**
100
examples/vision/segmentation/paddleseg/ascend/cpp/README.md
Executable file → Normal file
@@ -1,96 +1,38 @@
English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example
[English](README.md) | 简体中文
# PaddleSeg C++ Deployment Example

This directory provides examples that `infer.cc` fast finishes the deployment of Unet on CPU/GPU and GPU accelerated by TensorRT.
This directory provides `infer.cc` to quickly complete an example of deploying PP-LiteSeg on Huawei Ascend.

Before deployment, two steps require confirmation
## Preparing the FastDeploy Build Environment for the Huawei Ascend NPU
Before deployment, you need to compile the prediction library for the Huawei Ascend NPU yourself; see [Compiling the Huawei Ascend NPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

【Attention】For the deployment of **PP-Matting**, **PP-HumanMatting** and **ModNet**, refer to [Matting Model Deployment](../../../matting)

Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.
>>**Note** For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../ppmatting/)

```bash
# Download the deployment example code
cd path/to/paddleseg/ascend/cpp

mkdir build
cd build
# Download the FastDeploy precompiled library; users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
# Compile infer_demo with the compiled FastDeploy library
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend
make -j

# Download Unet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
# Download the PP-LiteSeg model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png


# CPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
# TensorRT inference on GPU
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
# KunlunXin XPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
# Huawei Ascend inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png
```

The visualized result after running is as follows
The visualized result after running is shown below
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

The above command works for Linux or MacOS. For SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## PaddleSeg C++ Interface

### PaddleSeg Class

```c++
fastdeploy::vision::segmentation::PaddleSegModel(
    const string& model_file,
    const string& params_file = "",
    const string& config_file,
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::PADDLE)
```

PaddleSegModel model loading and initialization, where model_file is the exported Paddle model format.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default

#### Predict Function

> ```c++
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
> ```
>
> Model prediction interface: input an image and the segmentation result is returned directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: The segmentation result, including the predicted labels and the probability associated with each label. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of SegmentationResult

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e. height greater than width

#### Post-processing Parameters
> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` parameter, set this parameter to `true` to apply softmax normalization to the probability result (score_map) associated with the predicted segmentation labels (label_map)

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
## Quick Links
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)
- [Python deployment](../python)
@@ -1,88 +0,0 @@
[English](README.md) | 简体中文
# PaddleSeg C++ Deployment Example

This directory provides `infer.cc` to quickly complete an example of deploying PP-LiteSeg on Huawei Ascend.

Before deployment, you need to compile the prediction library for the Huawei Ascend NPU yourself; see [Compiling the Huawei Ascend NPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/huawei_ascend.md)

>>**Note** For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../matting)

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/cpp

mkdir build
cd build
# Compile infer_demo with the compiled FastDeploy library
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend
make -j

# Download the PP-LiteSeg model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Huawei Ascend inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png
```

The visualized result after running is shown below
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSeg C++ Interface

### PaddleSeg Class

```c++
fastdeploy::vision::segmentation::PaddleSegModel(
    const string& model_file,
    const string& params_file = "",
    const string& config_file,
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::PADDLE)
```

PaddleSegModel model loading and initialization, where model_file is the exported Paddle model format.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default

#### Predict Function

> ```c++
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
> ```
>
> Model prediction interface: input an image and the segmentation result is returned directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: The segmentation result, including the predicted labels and the probability associated with each label; see [Introduction to the SegmentationResult structure](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)

### Class Member Attributes
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg series models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e. height greater than width

#### Post-processing Parameters
> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` parameter, set this parameter to `true` to apply softmax normalization to the probability result (score_map) associated with the predicted segmentation labels (label_map)

## Quick Links
- [PaddleSeg model description](../../)
- [Python deployment](../python)

## FAQ
- [How to convert the SegmentationResult prediction result to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
88
examples/vision/segmentation/paddleseg/ascend/python/README.md
Executable file → Normal file
@@ -1,82 +1,36 @@
English | [简体中文](README_CN.md)
# PaddleSeg Python Deployment Example
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

Before deployment, two steps require confirmation
This directory provides `infer.py` to quickly complete an example of deploying PP-LiteSeg on Huawei Ascend.

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
## Preparing the Environment to Compile the FastDeploy Wheel Package for the Huawei Ascend NPU
Before deployment, you need to compile and install the FastDeploy Python wheel package for the Huawei Ascend NPU yourself; see [Compiling the Huawei Ascend NPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

>>**Note** For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../ppmatting)

【Attention】For the deployment of **PP-Matting**, **PP-HumanMatting** and **ModNet**, refer to [Matting Model Deployment](../../../matting)

This directory provides examples in which `infer.py` quickly completes the deployment of Unet on CPU/GPU, and on GPU accelerated by TensorRT. The script is as follows
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python
# Download the deployment example code
cd path/to/paddleseg/ascend/python

# Download Unet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
# Download the PP-LiteSeg model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# CPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
# GPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
# Huawei Ascend inference
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
```

The visualized result after running is as follows
The visualized result after running is shown below
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSegModel Python Interface
## Quick Links
- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](..)
- [PaddleSeg C++ deployment](../cpp)

```python
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default

### predict function

> ```python
> PaddleSegModel.predict(input_image)
> ```
>
> Model prediction interface: input an image and the segmentation result is returned directly.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.SegmentationResult` structure; refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e. height greater than width
#### Post-processing Parameters
> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` parameter, set this parameter to `true` to apply softmax normalization to the probability result (score_map) associated with the predicted segmentation labels (label_map)

## Other Documents

- [PaddleSeg Model Description](..)
- [PaddleSeg C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
## FAQ
- [How to convert the SegmentationResult prediction result to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
@@ -1,79 +0,0 @@
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

This directory provides `infer.py` to quickly complete an example of deploying PP-LiteSeg on Huawei Ascend.

Before deployment, you need to compile the FastDeploy Python wheel package for the Huawei Ascend NPU yourself; see [Compiling the Huawei Ascend NPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/huawei_ascend.md), then build and install the Python wheel package

>>**Note** For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../matting)


```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/python

# Download the PP-LiteSeg model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Huawei Ascend inference
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
```

The visualized result after running is shown below
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSegModel Python Interface

```python
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for details

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default
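For orientation, a minimal instantiation sketch; the model directory name simply follows the archive downloaded above:

```python
import fastdeploy as fd

# Hypothetical paths taken from the exported PP-LiteSeg archive above.
model = fd.vision.segmentation.PaddleSegModel(
    "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/model.pdmodel",
    "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/model.pdiparams",
    "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/deploy.yaml")
```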
### predict function

> ```python
> PaddleSegModel.predict(input_image)
> ```
>
> Model prediction interface: input an image and the segmentation result is returned directly.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.SegmentationResult` structure; see [Introduction to the SegmentationResult structure](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md) for its description
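A minimal sketch of the numpy conversion referenced in the FAQ below, using the `label_map` and `shape` fields documented for `SegmentationResult`:

```python
import numpy as np

# `result` is the SegmentationResult returned by PaddleSegModel.predict(...).
# Reshape the flattened per-pixel labels into an H x W array.
label_map = np.array(result.label_map).reshape(result.shape)
```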
### Class Member Attributes
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg series models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e. height greater than width

#### Post-processing Parameters
> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` parameter, set this parameter to `true` to apply softmax normalization to the probability result (score_map) associated with the predicted segmentation labels (label_map)
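A minimal sketch of toggling these member attributes, assuming they are exposed directly on the model object as the docs above suggest (attribute names not verified against every FastDeploy version):

```python
# Assumed attribute access; both flags are documented as class member attributes.
model.is_vertical_screen = True   # portrait input for PP-HumanSeg models
model.apply_softmax = True        # softmax-normalize score_map when not exported with it
```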
## Quick Links

- [PaddleSeg model description](..)
- [PaddleSeg C++ deployment](../cpp)

## FAQ
- [How to convert the SegmentationResult prediction result to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
@@ -15,7 +15,7 @@ FastDeploy supports deploying PaddleSeg models on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, Intel GPU (discrete
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
- [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)

>>**Note** If you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, refer to [Matting Model Deployment](../../matting/)
>>**Note** If you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, refer to [Matting Model Deployment](../../ppmatting)

## Prepare the PaddleSeg Deployment Model
For PaddleSeg model export, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
@@ -23,7 +23,7 @@ For PaddleSeg model export, see [Model Export](https://github.co
**Note**
- An exported PaddleSeg model contains three files: `model.pdmodel`, `model.pdiparams` and `deploy.yaml`; FastDeploy reads the pre-processing information needed at inference time from the yaml file

## Download Pre-trained Models
## Pre-exported Inference Models

For developers' convenience, some exported PaddleSeg models are provided below
- without-argmax export mode: `--input_shape` **not specified**, `--output_op none` **specified**
105
examples/vision/segmentation/paddleseg/cpu-gpu/cpp/README.md
Executable file → Normal file
@@ -1,96 +1,59 @@
English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example
[English](README.md) | 简体中文
# PaddleSeg C++ Deployment Example

This directory provides examples that `infer.cc` fast finishes the deployment of Unet on CPU/GPU and GPU accelerated by TensorRT.
This directory provides `infer.cc` to quickly complete an example of deploying PP-LiteSeg on CPU/GPU, and on GPU with Paddle-TensorRT acceleration.

Before deployment, two steps require confirmation
## Deployment Environment Preparation

- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Before deployment, confirm the software and hardware environment and download the precompiled deployment library; see [FastDeploy Precompiled Library Installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)

【Attention】For the deployment of **PP-Matting**, **PP-HumanMatting** and **ModNet**, refer to [Matting Model Deployment](../../../matting)
>> **Note** If you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, refer to [Matting Model Deployment](../../../ppmatting)

Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.
Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.

```bash
# Download the deployment example code
cd path/to/paddleseg/cpu-gpu/cpp

mkdir build
cd build
# Download the FastDeploy precompiled library; users can choose an appropriate version from the `FastDeploy precompiled library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download Unet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
# Download the PP-LiteSeg model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png


# CPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
# TensorRT inference on GPU
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
# KunlunXin XPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
# CPU inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1
# Paddle-TensorRT inference on GPU
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 2
```

The visualized result after running is as follows
The visualized result after running is shown below
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

The above command works for Linux or MacOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
> **Note:**
The above commands only work on Linux or MacOS; for SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)

## PaddleSeg C++ Interface
## Quick Links
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)
- [Python deployment](../python)

### PaddleSeg Class

```c++
fastdeploy::vision::segmentation::PaddleSegModel(
    const string& model_file,
    const string& params_file = "",
    const string& config_file,
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::PADDLE)
```

PaddleSegModel model loading and initialization, where model_file is the exported Paddle model format.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default

#### Predict Function

> ```c++
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
> ```
>
> Model prediction interface: input an image and the segmentation result is returned directly.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: The segmentation result, including the predicted labels and the probability associated with each label. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of SegmentationResult

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e. height greater than width

#### Post-processing Parameters
> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` parameter, set this parameter to `true` to apply softmax normalization to the probability result (score_map) associated with the predicted segmentation labels (label_map)

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
## FAQ
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPUs (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
@@ -1,106 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成PP-LiteSeg在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)
|
||||
|
||||
以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.0以上(x.x.x>=1.0.0)
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/cpp-gpu/cpp
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载PP-LiteSeg模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 0
|
||||
# GPU推理
|
||||
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
> **注意:**
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## PaddleSeg C++接口
|
||||
|
||||
### PaddleSeg类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::segmentation::PaddleSegModel(
|
||||
const string& model_file,
|
||||
const string& params_file = "",
|
||||
const string& config_file,
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||
```
|
||||
|
||||
PaddleSegModel模型加载和初始化,其中model_file为导出的Paddle模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> PaddleSegModel::Predict(const cv::Mat &im, SegmentationResult *result)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 分割结果,包括分割预测的标签以及标签对应的概率值, SegmentationResult结构体说明参考[SegmentationResult结构体介绍](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
|
||||
|
||||
#### 后处理参数
|
||||
> > * **apply_softmax**(bool): 当导出模型时未指定`apply_softmax`参数,可通过设置此参数为`true`,对预测输出的分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
|
||||
|
||||
## 快速链接
|
||||
- [PaddleSeg模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
|
||||
## 常见问题
|
||||
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
|
||||
- [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
|
||||
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
|
||||
- [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
|
||||
- [编译GPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
|
@@ -85,6 +85,13 @@ void TrtInfer(const std::string& model_dir, const std::string& image_file) {
|
||||
auto option = fastdeploy::RuntimeOption();
|
||||
option.UseGpu();
|
||||
option.UseTrtBackend();
|
||||
// If use original Tensorrt, not Paddle-TensorRT,
|
||||
// comment the following two lines
|
||||
option.EnablePaddleToTrt();
|
||||
option.EnablePaddleTrtCollectShape();
|
||||
option.SetTrtInputShape("x", {1, 3, 256, 256}, {1, 3, 1024, 1024},
|
||||
{1, 3, 2048, 2048});
|
||||
|
||||
auto model = fastdeploy::vision::segmentation::PaddleSegModel(
|
||||
model_file, params_file, config_file, option);
|
||||
|
||||
|
examples/vision/segmentation/paddleseg/cpu-gpu/python/README.md (Executable file → Normal file)
@@ -1,82 +1,45 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleSeg Python Deployment Example
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg Python部署示例
|
||||
本目录下提供`infer.py`快速完成PP-LiteSeg在CPU/GPU,以及GPU上通过Paddle-TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
Before deployment, two steps require confirmation
|
||||
## 部署环境准备
|
||||
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
在部署前,需确认软硬件环境,同时下载预编译python wheel 包,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)
|
||||
|
||||
【Attention】For the deployment of **PP-Matting**、**PP-HumanMatting** and **ModNet**, refer to [Matting Model Deployment](../../../matting)
|
||||
【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../ppmatting)
|
||||
|
||||
This directory provides examples that `infer.py` fast finishes the deployment of Unet on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||
```bash
|
||||
# Download the deployment example code
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/python
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/cpu-gpu/python
|
||||
|
||||
# Download Unet model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
|
||||
# 下载Unet模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
# CPU inference
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
|
||||
# GPU inference
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
|
||||
# TensorRT inference on GPU(Attention: It is somewhat time-consuming for the operation of model serialization when running TensorRT inference for the first time. Please be patient.)
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
|
||||
# kunlunxin XPU inference
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
|
||||
# CPU推理
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
|
||||
# GPU上使用Paddle-TensorRT推理 (注意:Paddle-TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
|
||||
```
|
||||
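为便于理解`infer.py`的核心流程,下面补充一个最小的Python部署示意(非示例脚本原文,模型目录与文件名仅作演示,GPU/TensorRT可在RuntimeOption上按需切换):

```python
import cv2
import fastdeploy as fd

# 构建后端配置:此处以CPU为例,GPU可改用 option.use_gpu()
option = fd.RuntimeOption()
option.use_cpu()

model_dir = "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer"
model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml",
    runtime_option=option)

im = cv2.imread("cityscapes_demo.png")
result = model.predict(im)  # 返回 SegmentationResult

# 叠加可视化结果并保存
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_result.jpg", vis_im)
```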
|
||||
The visualized result after running is as follows
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
## PaddleSegModel Python Interface
|
||||
## 快速链接
|
||||
- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
|
||||
- [FastDeploy部署PaddleSeg模型概览](..)
|
||||
- [PaddleSeg C++部署](../cpp)
|
||||
|
||||
```python
|
||||
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information
|
||||
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Inference deployment configuration file
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
### predict function
|
||||
|
||||
> ```python
|
||||
> PaddleSegModel.predict(input_image)
|
||||
> ```
|
||||
>
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): Input data in HWC or BGR format
|
||||
|
||||
> **Return**
|
||||
>
|
||||
> > Return `fastdeploy.vision.SegmentationResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
|
||||
|
||||
### Class Member Variable
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **is_vertical_screen**(bool): For PP-HumanSeg models, the input image is portrait with height greater than width by setting this parameter to `true`
|
||||
#### Post-processing Parameter
|
||||
> > * **apply_softmax**(bool): The `apply_softmax` parameter is not specified when the model is exported. Set this parameter to `true` to normalize the probability result (score_map) of the predicted output segmentation label (label_map) in softmax
|
||||
|
||||
## Other Documents
|
||||
|
||||
- [PaddleSeg Model Description](..)
|
||||
- [PaddleSeg C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
## 常见问题
|
||||
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)(转换示意见本节末尾)
|
||||
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
|
||||
- [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
|
||||
- [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
|
||||
- [编译GPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
|
||||
- [编译Jetson部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
|
||||
|
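针对上面第一条常见问题,这里补充一个将`SegmentationResult`转为numpy数组的最小示意(假设`result`为上文`model.predict(im)`的返回值,字段含义以结果结构体文档为准):

```python
import numpy as np

# label_map 为按行展开的一维标签列表,shape 记录了 (height, width)
label_np = np.array(result.label_map, dtype=np.uint8).reshape(result.shape)

# 若导出模型时保留了概率输出,score_map 同样可还原为二维数组
if result.contain_score_map:
    score_np = np.array(result.score_map, dtype=np.float32).reshape(result.shape)
```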
@@ -1,88 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)
|
||||
|
||||
本目录下提供`infer.py`快速完成PP-LiteSeg在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/cpu-gpu/python
|
||||
|
||||
# 下载Unet模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
## PaddleSegModel Python接口
|
||||
|
||||
```python
|
||||
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleSeg模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> PaddleSegModel.predict(input_image)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分割结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.SegmentationResult`结构体,结构体说明参考文档[SegmentationResult结构体介绍](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
|
||||
|
||||
#### 后处理参数
|
||||
> > * **apply_softmax**(bool): 当导出模型时未指定`apply_softmax`参数,可通过设置此参数为`true`,对预测输出的分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [PaddleSeg 模型介绍](..)
|
||||
- [PaddleSeg C++部署](../cpp)
|
||||
|
||||
## 常见问题
|
||||
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
|
||||
- [Intel GPU(独立显卡/集成显卡)的使用](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
|
||||
- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
|
||||
- [编译CPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
|
||||
- [编译GPU部署库](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
|
@@ -32,6 +32,10 @@ def build_option(args):
|
||||
|
||||
if args.use_trt:
|
||||
option.use_trt_backend()
|
||||
# If use original Tensorrt, not Paddle-TensorRT,
|
||||
# comment the following two lines
|
||||
option.enable_paddle_to_trt()
|
||||
option.enable_paddle_trt_collect_shape()
|
||||
option.set_trt_input_shape("x", [1, 3, 256, 256], [1, 3, 1024, 1024],
|
||||
[1, 3, 2048, 2048])
|
||||
return option
|
||||
|
@@ -13,7 +13,7 @@
|
||||
- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
|
||||
- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)
|
||||
|
||||
>>**注意** 若需要在华为昇腾上部署**PP-Matting**、**PP-HumanMatting**请从[Matting模型部署](../../matting/)下载对应模型,部署过程与此文档一致
|
||||
>>**注意** 若需要在华为昇腾上部署**PP-Matting**、**PP-HumanMatting**请从[Matting模型部署](../../ppmatting/)下载对应模型,部署过程与此文档一致
|
||||
|
||||
## 准备PaddleSeg部署模型
|
||||
PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
|
||||
@@ -21,7 +21,7 @@ PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.co
|
||||
**注意**
|
||||
- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
|
||||
|
||||
## 下载预训练模型
|
||||
## 预导出的推理模型
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
|
||||
- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
|
examples/vision/segmentation/paddleseg/kunlun/cpp/README.md (Executable file → Normal file)
@@ -1,96 +1,39 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleSeg C++ Deployment Example
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg C++部署示例
|
||||
|
||||
This directory provides examples that `infer.cc` fast finishes the deployment of Unet on CPU/GPU and GPU accelerated by TensorRT.
|
||||
本目录下提供`infer.cc`快速完成PP-LiteSeg在昆仑芯XPU上部署的示例。
|
||||
|
||||
Before deployment, two steps require confirmation
|
||||
## 昆仑芯XPU编译FastDeploy环境准备
|
||||
在部署前,需自行编译基于昆仑芯XPU的预测库,参考文档[昆仑芯XPU部署环境编译安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
|
||||
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
【Attention】For the deployment of **PP-Matting**、**PP-HumanMatting** and **ModNet**, refer to [Matting Model Deployment](../../../matting)
|
||||
|
||||
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.
|
||||
>>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../matting)下载
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
cd path/to/paddleseg/kunlun/cpp
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
# 使用编译完成的FastDeploy库编译infer_demo
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-kunlunxin  # 目录名以自行编译的昆仑芯XPU预测库实际产出为准
|
||||
make -j
|
||||
|
||||
# Download Unet model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
|
||||
# 下载PP-LiteSeg模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
|
||||
# CPU inference
|
||||
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
|
||||
# GPU inference
|
||||
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
|
||||
# TensorRT inference on GPU
|
||||
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
|
||||
# kunlunxin XPU inference
|
||||
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
|
||||
# 昆仑芯XPU推理
|
||||
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png
|
||||
```
|
||||
|
||||
The visualized result after running is as follows
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## PaddleSeg C++ Interface
|
||||
|
||||
### PaddleSeg Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::segmentation::PaddleSegModel(
|
||||
const string& model_file,
|
||||
const string& params_file,
|
||||
const string& config_file,
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||
```
|
||||
|
||||
PaddleSegModel model loading and initialization, among which model_file is the exported Paddle model format.
|
||||
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Inference deployment configuration file
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
|
||||
> ```
|
||||
>
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: Input images in HWC or BGR format
|
||||
> > * **result**: The segmentation result, including the predicted label of the segmentation and the corresponding probability of the label. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of SegmentationResult
|
||||
|
||||
### Class Member Variable
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **is_vertical_screen**(bool): For PP-HumanSeg models, the input image is portrait, height greater than a width, by setting this parameter to`true`
|
||||
|
||||
#### Post-processing Parameter
|
||||
> > * **apply_softmax**(bool): The `apply_softmax` parameter is not specified when the model is exported. Set this parameter to `true` to normalize the probability result (score_map) of the predicted output segmentation label (label_map)
|
||||
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
## 快速链接
|
||||
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
|
||||
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
|
||||
- [FastDeploy部署PaddleSeg模型概览](../../)
|
||||
- [Python部署](../python)
|
||||
|
@@ -1,88 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成PP-LiteSeg在华为昇腾上部署的示例。
|
||||
|
||||
在部署前,需自行编译基于昆仑芯XPU的预测库,参考文档[昆仑芯XPU部署环境编译安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/kunlunxin.md)
|
||||
|
||||
>>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../matting)下载
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/cpp
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
# 使用编译完成的FastDeploy库编译infer_demo
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend
|
||||
make -j
|
||||
|
||||
# 下载PP-LiteSeg模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
# 华为昇腾推理
|
||||
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
## PaddleSeg C++接口
|
||||
|
||||
### PaddleSeg类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::segmentation::PaddleSegModel(
|
||||
const string& model_file,
|
||||
const string& params_file,
|
||||
const string& config_file,
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||
```
|
||||
|
||||
PaddleSegModel模型加载和初始化,其中model_file为导出的Paddle推理模型文件。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分割结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 分割结果,包括分割预测的标签以及标签对应的概率值, SegmentationResult结构体说明参考[SegmentationResult结构体介绍](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
|
||||
|
||||
#### 后处理参数
|
||||
> > * **apply_softmax**(bool): 当导出模型时未指定`apply_softmax`参数,可通过设置此参数为`true`,对预测输出的分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
|
||||
|
||||
## 快速链接
|
||||
- [PaddleSeg模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
|
||||
## 常见问题
|
||||
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
|
||||
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
|
||||
|
examples/vision/segmentation/paddleseg/kunlun/python/README.md (Executable file → Normal file)
@@ -1,82 +1,37 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleSeg Python Deployment Example
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg Python部署示例
|
||||
|
||||
Before deployment, two steps require confirmation
|
||||
本目录下提供`infer.py`快速完成PP-LiteSeg在昆仑芯XPU上部署的示例。
|
||||
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
## 昆仑芯XPU编译FastDeploy wheel包环境准备
|
||||
|
||||
在部署前,需自行编译基于昆仑芯XPU的FastDeploy Python wheel包并安装,参考文档[昆仑芯XPU部署环境](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
|
||||
|
||||
>>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../ppmatting)下载
|
||||
|
||||
【Attention】For the deployment of **PP-Matting**、**PP-HumanMatting** and **ModNet**, refer to [Matting Model Deployment](../../../matting)
|
||||
|
||||
This directory provides examples that `infer.py` fast finishes the deployment of Unet on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||
```bash
|
||||
# Download the deployment example code
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/python
|
||||
#下载部署示例代码
|
||||
cd path/to/paddleseg/kunlun/python
|
||||
|
||||
# Download Unet model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
|
||||
# 下载PP-LiteSeg模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
# CPU inference
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
|
||||
# GPU inference
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
|
||||
# TensorRT inference on GPU(Attention: It is somewhat time-consuming for the operation of model serialization when running TensorRT inference for the first time. Please be patient.)
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
|
||||
# kunlunxin XPU inference
|
||||
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
|
||||
# 昆仑芯XPU推理
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
|
||||
```
|
||||
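`infer.py`中启用昆仑芯XPU的关键在于RuntimeOption,下面给出一个最小示意(假设wheel包已按上文自行编译并安装,接口名如随版本变化请以Python API文档为准):

```python
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_kunlunxin()  # 使用昆仑芯XPU推理

model_dir = "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer"
model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml",
    runtime_option=option)

result = model.predict(cv2.imread("cityscapes_demo.png"))
print(result)
```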
|
||||
The visualized result after running is as follows
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
## PaddleSegModel Python Interface
|
||||
## 快速链接
|
||||
- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
|
||||
- [FastDeploy部署PaddleSeg模型概览](..)
|
||||
- [PaddleSeg C++部署](../cpp)
|
||||
|
||||
```python
|
||||
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information
|
||||
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Inference deployment configuration file
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
### predict function
|
||||
|
||||
> ```python
|
||||
> PaddleSegModel.predict(input_image)
|
||||
> ```
|
||||
>
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): Input data in HWC or BGR format
|
||||
|
||||
> **Return**
|
||||
>
|
||||
> > Return `fastdeploy.vision.SegmentationResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
|
||||
|
||||
### Class Member Variable
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **is_vertical_screen**(bool): For PP-HumanSeg models, the input image is portrait with height greater than width by setting this parameter to `true`
|
||||
#### Post-processing Parameter
|
||||
> > * **apply_softmax**(bool): The `apply_softmax` parameter is not specified when the model is exported. Set this parameter to `true` to normalize the probability result (score_map) of the predicted output segmentation label (label_map) in softmax
|
||||
|
||||
## Other Documents
|
||||
|
||||
- [PaddleSeg Model Description](..)
|
||||
- [PaddleSeg C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
## 常见问题
|
||||
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
|
||||
|
@@ -1,79 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg Python部署示例
|
||||
|
||||
本目录下提供`infer.py`快速完成PP-LiteSeg在昆仑芯XPU上部署的示例。
|
||||
|
||||
在部署前,需自行编译基于昆仑芯XPU的FastDeploy wheel 包,参考文档[昆仑芯XPU部署环境编译安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/kunlunxin.md),编译python wheel包并安装
|
||||
|
||||
>>**注意** **PP-Matting**、**PP-HumanMatting**的模型,请从[Matting模型部署](../../../matting)下载
|
||||
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/segmentation/paddleseg/kunlun/python
|
||||
|
||||
# 下载PP-LiteSeg模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
|
||||
|
||||
# 昆仑芯XPU推理
|
||||
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
|
||||
</div>
|
||||
|
||||
## PaddleSegModel Python接口
|
||||
|
||||
```python
|
||||
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleSeg模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> PaddleSegModel.predict(input_image)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分割结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.SegmentationResult`结构体,SegmentationResult结构体说明参考[SegmentationResult结构体介绍](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
|
||||
|
||||
#### 后处理参数
|
||||
> > * **apply_softmax**(bool): 当导出模型时未指定`apply_softmax`参数,可通过设置此参数为`true`,对预测输出的分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
|
||||
|
||||
## 快速链接
|
||||
|
||||
- [PaddleSeg 模型介绍](..)
|
||||
- [PaddleSeg C++部署](../cpp)
|
||||
|
||||
## 常见问题
|
||||
- [如何将模型预测结果SegmentationResult转为numpy格式](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
|
||||
- [如何切换模型推理后端引擎](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
|
||||
- [PaddleSeg python API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
|
examples/vision/segmentation/paddleseg/quantize/README.md (Executable file → Normal file)
@@ -1,37 +1,26 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleSeg Quantized Model Deployment
|
||||
FastDeploy already supports the deployment of quantitative models and provides a tool to automatically compress model with just one click.
|
||||
You can use the one-click automatical model compression tool to quantify and deploy the models, or directly download the quantified models provided by FastDeploy for deployment.
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg 量化模型部署
|
||||
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
|
||||
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
|
||||
|
||||
## FastDeploy One-Click Automation Model Compression Tool
|
||||
FastDeploy provides an one-click automatical model compression tool that can quantify a model simply by entering configuration file.
|
||||
For details, please refer to [one-click automatical compression tool](../../../../../tools/common_tools/auto_compression/).
|
||||
Note: The quantized classification model still needs the deploy.yaml file in the FP32 model folder. Self-quantized model folder does not contain this yaml file, you can copy it from the FP32 model folder to the quantized model folder.
|
||||
## FastDeploy一键模型自动化压缩工具
|
||||
FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
|
||||
详细教程请见: [一键模型自动化压缩工具](https://github.com/PaddlePaddle/FastDeploy/tree/develop/tools/common_tools/auto_compression)
|
||||
>> **注意**: 推理量化后的分割模型仍然需要FP32模型文件夹下的deploy.yaml文件,自行量化的模型文件夹内不包含此yaml文件,用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
|
||||
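复制yaml文件可用如下Python片段完成(目录名仅为示意,以实际FP32模型与量化产出目录为准):

```python
import shutil

fp32_dir = "PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer"       # FP32模型目录(含deploy.yaml)
quant_dir = "PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT"  # 自行量化的产出目录
shutil.copy(fp32_dir + "/deploy.yaml", quant_dir + "/deploy.yaml")
```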
|
||||
## Download the Quantized PaddleSeg Model
|
||||
You can also directly download the quantized models in the following table for deployment (click model name to download).
|
||||
## 量化完成的PaddleSeg模型
|
||||
用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)
|
||||
|
||||
Note:
|
||||
- Runtime latency is the inference latency of the model on various Runtimes, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the respective pre and post processing time of the models.
|
||||
- The end-to-end latency is the latency of the model in the actual inference scenario, including the pre and post processing of the model.
|
||||
- The measured latencies are averaged over 1000 inferences, in milliseconds.
|
||||
- INT8 + FP16 is to enable the FP16 inference option for Runtime while inferring the INT8 quantization model.
|
||||
- INT8 + FP16 + PM is the option to use Pinned Memory while inferring INT8 quantization model and turning on FP16, which can speed up the GPU->CPU data copy speed.
|
||||
- The maximum speedup ratio is obtained by dividing the FP32 latency by the fastest INT8 inference latency.
|
||||
- The strategy is quantitative distillation training, using a small number of unlabeled data sets to train the quantitative model, and verify the accuracy on the full validation set, INT8 accuracy does not represent the highest INT8 accuracy.
|
||||
- The CPU is Intel(R) Xeon(R) Gold 6271C with a fixed CPU thread count of 1 in all tests. The GPU is Tesla T4, TensorRT version 8.4.15.
|
||||
| 模型 | 量化方式 |
|
||||
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) |量化蒸馏训练 |
|
||||
|
||||
#### Runtime Benchmark
|
||||
| Model |Inference Backends | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8+FP16+PM Runtime Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Method |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 1138.04| 602.62 |None|None | 1.89 |77.37 | 71.62 |Quantaware Distillation Training |
|
||||
量化后模型的Benchmark比较,请参考[量化模型 Benchmark](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/quantize.md)
|
||||
|
||||
#### End to End Benchmark
|
||||
| Model |Inference Backends | Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8+FP16+PM End2End Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Method |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 4726.65| 4134.91|None|None | 1.14 |77.37 | 71.62 |Quantaware Distillation Training|
|
||||
## 支持部署量化模型的硬件
|
||||
FastDeploy 量化模型的部署流程与FP32模型基本一致,区别仅在于所加载的模型是否经过量化;若某硬件在量化模型部署时有特殊处理,会在对应文档中特别标明。量化模型部署可参考如下硬件的链接
|
||||
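也就是说,量化模型的加载方式与FP32模型完全一致,仅模型目录不同,例如(目录与文件名以实际下载内容为准):

```python
import cv2
import fastdeploy as fd

# 指向上表下载并解压后的量化模型目录即可,其余流程与FP32模型相同
quant_dir = "PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new"
model = fd.vision.segmentation.PaddleSegModel(
    quant_dir + "/model.pdmodel",
    quant_dir + "/model.pdiparams",
    quant_dir + "/deploy.yaml")
result = model.predict(cv2.imread("cityscapes_demo.png"))
```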
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
| 硬件支持列表 | | | |
|
||||
|:----- | :-- | :-- | :-- |
|
||||
| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu)| [飞腾CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
|
||||
| [Intel GPU(独立显卡/集成显卡)](cpu-gpu) | [昆仑](kunlun) | [昇腾](ascend) | [瑞芯微](rockchip) |
|
||||
| [晶晨](amlogic) | [算能](sophgo) | | |
|
||||
|
@@ -1,26 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg 量化模型部署
|
||||
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
|
||||
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
|
||||
|
||||
## FastDeploy一键模型自动化压缩工具
|
||||
FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
|
||||
详细教程请见: [一键模型自动化压缩工具](https://github.com/PaddlePaddle/FastDeploy/tree/develop/tools/common_tools/auto_compression)
|
||||
>> **注意**: 推理量化后的分割模型仍然需要FP32模型文件夹下的deploy.yaml文件,自行量化的模型文件夹内不包含此yaml文件,用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
|
||||
|
||||
## 量化完成的PaddleSeg模型
|
||||
用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)
|
||||
|
||||
| 模型 | 量化方式 |
|
||||
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) |量化蒸馏训练 |
|
||||
|
||||
量化后模型的Benchmark比较,请参考[量化模型 Benchmark](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/quantize.md)
|
||||
|
||||
## 支持部署量化模型的硬件
|
||||
FastDeploy 量化模型的部署流程与FP32模型基本一致,区别仅在于所加载的模型是否经过量化;若某硬件在量化模型部署时有特殊处理,会在对应文档中特别标明。量化模型部署可参考如下硬件的链接
|
||||
- [NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU](../cpu-gpu)
|
||||
- [昆仑](../kunlun)
|
||||
- [昇腾](../ascend)
|
||||
- [瑞芯微](../rockchip)
|
||||
- [晶晨](../amlogic)
|
||||
- [算能](../sophgo)
|
@@ -1,34 +1,63 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleSeg Model Deployment
|
||||
[English](README.md) | 简体中文
|
||||
|
||||
## Model Version
|
||||
# 基于RKNPU2使用FastDeploy部署PaddleSeg模型
|
||||
RKNPU2 提供了一个高性能接口来访问 Rockchip NPU,支持如下硬件的部署
|
||||
- RK3566/RK3568
|
||||
- RK3588/RK3588S
|
||||
- RV1103/RV1106
|
||||
|
||||
本示例基于 RK3588 来介绍如何使用 FastDeploy 部署 PaddleSeg 模型
|
||||
|
||||
## 模型版本说明
|
||||
|
||||
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
|
||||
|
||||
Currently FastDeploy using RKNPU2 to infer PPSeg supports the following model deployments:
|
||||
目前FastDeploy使用RKNPU2推理PaddleSeg支持如下模型的部署:
|
||||
- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
|
||||
- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
|
||||
- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
|
||||
- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
|
||||
- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
|
||||
|
||||
| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|
||||
|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------|:---------|:-------|:------------|:---------------|
|
||||
## 准备PaddleSeg部署模型
|
||||
PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
|
||||
|
||||
**注意**
|
||||
- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
|
||||
|
||||
## 下载预训练模型
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
|
||||
- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
|
||||
- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax`
|
||||
|
||||
开发者可直接下载使用。
|
||||
|
||||
| 模型 | 参数文件大小 | 输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|
||||
|:----------------|:-------|:---------|:-------|:------------|:---------------|
|
||||
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
|
||||
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
|
||||
| [PP-HumanSegV1-Lite(Universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
|
||||
| [PP-HumanSegV2-Lite(Universal portrait segmentation model)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
|
||||
| [PP-HumanSegV2-Mobile(Universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
|
||||
| [PP-HumanSegV1-Server(Universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
|
||||
| [Portait-PP-HumanSegV2_Lite(Portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
|
||||
| [PP-HumanSegV1-Lite(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
|
||||
| [PP-HumanSegV2-Lite(通用人像分割模型)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
|
||||
| [PP-HumanSegV2-Mobile(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
|
||||
| [PP-HumanSegV1-Server(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
|
||||
| [Portait-PP-HumanSegV2_Lite(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
|
||||
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
|
||||
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
|
||||
|
||||
## Prepare PaddleSeg Deployment Model and Conversion Model
|
||||
RKNPU needs to convert the Paddle model to RKNN model before deploying, the steps are as follows:
|
||||
* For the conversion of Paddle dynamic diagram model to ONNX model, please refer to [PaddleSeg Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
|
||||
* For the process of converting ONNX model to RKNN model, please refer to [Conversion document](../../../../../docs/en/faq/rknpu2/export.md).
|
||||
## 准备PaddleSeg部署模型以及转换模型
|
||||
RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下:
|
||||
* PaddleSeg训练模型导出为推理模型,请参考[PaddleSeg模型导出说明](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),也可以使用上表中的FastDeploy的预导出模型
|
||||
* Paddle模型转换为ONNX模型,请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
|
||||
* ONNX模型转换RKNN模型的过程,请参考[转换文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/export.md)进行转换。
|
||||
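其中ONNX模型转RKNN模型这一步通常借助rknn-toolkit2的Python API完成,下面给出一个流程示意(具体参数以上文转换文档为准):

```python
from rknn.api import RKNN  # 来自 rknn-toolkit2

rknn = RKNN()
rknn.config(target_platform="rk3588")  # 目标芯片,RK3566/RK3568等按需修改
rknn.load_onnx(model="model.onnx")     # 上一步由Paddle2ONNX导出的模型
rknn.build(do_quantization=False)      # 仅做格式转换,不做量化
rknn.export_rknn("model.rknn")
rknn.release()
```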
|
||||
## An example of Model Conversion
|
||||
上述步骤可参考以下具体示例
|
||||
|
||||
* [PPHumanSeg](./pp_humanseg_EN.md)
|
||||
## 模型转换示例
|
||||
|
||||
## Detailed Deployment Document
|
||||
- [Overall RKNN Deployment Guidance](../../../../../docs/en/faq/rknpu2/rknpu2.md)
|
||||
- [Deploy with C++](cpp)
|
||||
- [Deploy with Python](python)
|
||||
* [PP-HumanSeg](./pp_humanseg.md)
|
||||
|
||||
## 详细部署文档
|
||||
- [RKNN总体部署教程](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)
|
||||
- [C++部署](cpp)
|
||||
- [Python部署](python)
|
||||
|
@@ -1,55 +0,0 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg 模型部署
|
||||
|
||||
## 模型版本说明
|
||||
|
||||
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
|
||||
|
||||
目前FastDeploy使用RKNPU2推理PPSeg支持如下模型的部署:
|
||||
- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
|
||||
- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
|
||||
- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
|
||||
- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
|
||||
- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
|
||||
|
||||
## 准备PaddleSeg部署模型
|
||||
PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
|
||||
|
||||
**注意**
|
||||
- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
|
||||
|
||||
## 下载预训练模型
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
|
||||
- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
|
||||
- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax`
|
||||
|
||||
开发者可直接下载使用。
|
||||
|
||||
| 模型 | 参数文件大小 | 输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|
||||
|:----------------|:-------|:---------|:-------|:------------|:---------------|
|
||||
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
|
||||
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
|
||||
| [PP-HumanSegV1-Lite(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
|
||||
| [PP-HumanSegV2-Lite(通用人像分割模型)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
|
||||
| [PP-HumanSegV2-Mobile(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
|
||||
| [PP-HumanSegV1-Server(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
|
||||
| [Portait-PP-HumanSegV2_Lite(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
|
||||
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
|
||||
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
|
||||
|
||||
## 准备PaddleSeg部署模型以及转换模型
|
||||
RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下:
|
||||
* PaddleSeg训练模型导出为推理模型,请参考[PaddleSeg模型导出说明](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md),也可以使用上表中的FastDeploy的预导出模型
|
||||
* Paddle模型转换为ONNX模型,请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
|
||||
* ONNX模型转换RKNN模型的过程,请参考[转换文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/export.md)进行转换。
|
||||
上述步骤可参考以下具体示例
|
||||
|
||||
## 模型转换示例
|
||||
|
||||
* [PPHumanSeg](./pp_humanseg.md)
|
||||
|
||||
## 详细部署文档
|
||||
- [RKNN总体部署教程](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)
|
||||
- [C++部署](cpp)
|
||||
- [Python部署](python)
|
@@ -1,31 +1,31 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleSeg Deployment Examples for C++
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleSeg C++部署示例
|
||||
|
||||
This directory demonstrates the deployment of PaddleSeg series models on RKNPU2. The following deployment process takes PHumanSeg as an example.
|
||||
本目录下用于展示PaddleSeg系列模型在RKNPU2上的部署,以下的部署过程以PPHumanSeg为例子。
|
||||
|
||||
Before deployment, the following two steps need to be confirmed:
|
||||
在部署前,需确认以下两个步骤:
|
||||
|
||||
1. Hardware and software environment meets the requirements.
|
||||
2. Download the pre-compiled deployment repository or compile the FastDeploy repository from scratch according to the development environment.
|
||||
1. 软硬件环境满足要求
|
||||
2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
|
||||
|
||||
For the above steps, please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).
|
||||
以上步骤请参考[RK2代NPU部署库编译](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)实现
|
||||
|
||||
## Generate Basic Directory Files
|
||||
## 生成基本目录文件
|
||||
|
||||
The routine consists of the following parts:
|
||||
该例程由以下几个部分组成
|
||||
```text
|
||||
.
|
||||
├── CMakeLists.txt
|
||||
├── build # Compile Folder
|
||||
├── image # Folder for images
|
||||
├── build # 编译文件夹
|
||||
├── image # 存放图片的文件夹
|
||||
├── infer_cpu_npu.cc
|
||||
├── infer_cpu_npu.h
|
||||
├── main.cc
|
||||
├── model # Folder for models
|
||||
└── thirdpartys # Folder for sdk
|
||||
├── model # 存放模型文件的文件夹
|
||||
└── thirdpartys # 存放sdk的文件夹
|
||||
```
|
||||
|
||||
First, please build a directory structure
|
||||
首先需要先生成目录结构
|
||||
```bash
|
||||
mkdir build
|
||||
mkdir images
|
||||
@@ -33,23 +33,23 @@ mkdir model
|
||||
mkdir thirdpartys
|
||||
```
|
||||
|
||||
## Compile
|
||||
## 编译
|
||||
|
||||
### Compile and Copy SDK to folder thirdpartys
|
||||
### 编译并拷贝SDK到thirdpartys文件夹
|
||||
|
||||
Please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile SDK.After compiling, the fastdeploy-0.0.3 directory will be created in the build directory, please move it to the thirdpartys directory.
|
||||
请参考[RK2代NPU部署库编译](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-x-x-x目录,请移动它至thirdpartys目录下.
|
||||
|
||||
### Copy model and configuration files to folder Model
|
||||
In the process of Paddle dynamic map model -> Paddle static map model -> ONNX mdoel, ONNX file and the corresponding yaml configuration file will be generated. Please move the configuration file to the folder model.
|
||||
After converting to RKNN, the model file also needs to be copied to folder model. Run the following command to download and use (the model file is RK3588. RK3568 needs to be [reconverted to PPSeg RKNN model](../README.md)).
|
||||
### 拷贝模型文件,以及配置文件至model文件夹
|
||||
在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
|
||||
转换为RKNN后的模型文件也需要拷贝至model,输入以下命令下载使用(模型文件为RK3588,RK3568需要重新[转换PPSeg RKNN模型](../README.md))。
|
||||
|
||||
### Prepare Test Images to folder image
|
||||
### 准备测试图片至image文件夹
|
||||
```bash
|
||||
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
|
||||
unzip -qo images.zip
|
||||
```
|
||||
|
||||
### Compile example
|
||||
### 编译example
|
||||
|
||||
```bash
|
||||
cd build
|
||||
@@ -58,16 +58,19 @@ make -j8
|
||||
make install
|
||||
```
|
||||
|
||||
## Running Routines
|
||||
## 运行例程
|
||||
|
||||
```bash
|
||||
cd ./build/install
|
||||
./rknpu_test model/Portrait_PP_HumanSegV2_Lite_256x144_infer/ images/portrait_heng.jpg
|
||||
```
|
||||
|
||||
## Notes
|
||||
The input requirement for the model on RKNPU is to use NHWC format, and image normalization will be embedded into the model when converting the RKNN model, so we need to call DisableNormalizeAndPermute(C++) or disable_normalize_and_permute(Python) first when deploying with FastDeploy to disable normalization and data format conversion in the preprocessing stage.
|
||||
## 注意事项
|
||||
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时内嵌到模型中。因此在使用FastDeploy部署时,需要先调用`DisableNormalizeAndPermute`(C++)或`disable_normalize_and_permute`(Python),在预处理阶段禁用归一化以及数据格式的转换。
|
||||
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Convert PPSeg and RKNN model](../README.md)
|
||||
## 快速链接
|
||||
- [FastDeploy部署PaddleSeg模型概览](../../)
|
||||
- [Python部署](../python)
|
||||
- [转换PPSeg RKNN模型文档](../README.md)
|
||||
- [PaddleSeg C++ API文档](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
|
||||
|
||||
|
@@ -1,73 +0,0 @@
@@ -1,5 +1,5 @@
[English](pp_humanseg_EN.md) | 简体中文
# PP-HumanSeg Model Conversion Example

## Converting the model
The following takes Portrait-PP-HumanSegV2_Lite (a portrait segmentation model) as an example to show how to convert a PaddleSeg model into an RKNN model.
@@ -1,36 +1,38 @@
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

Before deployment, confirm the following step:

- 1. The hardware and software environment meets the requirements; see [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../../../../matting/)

This directory provides `infer.py` to quickly complete the deployment of PPHumanseg on RKNPU. Run the following script:

```bash
# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python

# Download the images.
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip

# Run inference.
python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
                 --config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
                 --image images/portrait_heng.jpg
```

## Notes
RKNPU requires model inputs in NHWC format, and image normalization is embedded into the model during RKNN conversion. When deploying with FastDeploy you therefore need to call DisableNormalizeAndPermute (C++) or disable_normalize_and_permute (Python) first, to disable normalization and layout conversion in the preprocessing stage.
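As a minimal sketch of what this looks like in practice (Python shown; the C++ `DisableNormalizeAndPermute` call is analogous), the snippet below loads the RKNN model and disables the two preprocessing steps before prediction. The exact object exposing `disable_normalize_and_permute` has moved between FastDeploy versions (the model itself vs. its preprocessor), so treat the call site as an assumption and check it against your installed version.

```python
import cv2
import fastdeploy as fd

# Configure the runtime for RKNPU2 and load the RKNN-format model.
option = fd.RuntimeOption()
option.use_rknpu2()
model = fd.vision.segmentation.PaddleSegModel(
    "./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn",
    "",  # RKNN models carry no separate params file
    "./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

# Normalization and the HWC->CHW permutation are already baked into the RKNN
# model, so skip them in FastDeploy preprocessing (call site may vary by version).
model.disable_normalize_and_permute()

im = cv2.imread("images/portrait_heng.jpg")
result = model.predict(im)
print(result)
```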
## Quick links

- [FastDeploy PaddleSeg Model Overview](..)
- [PaddleSeg C++ Deployment](../cpp)
- [Converting a PaddleSeg Model to an RKNN Model](../README_CN.md#准备paddleseg部署模型以及转换模型)

## FAQ
- [How to convert the SegmentationResult prediction into numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md) (see the sketch below)
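For reference, the sketch below shows one way to do that conversion, assuming `result` is the `SegmentationResult` returned by `model.predict`; the field names follow the definition in `fastdeploy/vision/common/result.h`.

```python
import numpy as np

# label_map stores one class id per pixel; shape stores the (height, width)
# of the prediction, so reshaping recovers the 2-D mask.
label_map = np.array(result.label_map, dtype=np.uint8).reshape(result.shape)

# score_map is only filled when the model was exported with a softmax/none
# output op, so guard on contain_score_map before reshaping it.
if result.contain_score_map:
    score_map = np.array(result.score_map, dtype=np.float32).reshape(result.shape)
```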
@@ -1,36 +0,0 @@
29
examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md
Executable file → Normal file
@@ -1,12 +1,27 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on Rockchip RV1126 with FastDeploy
Rockchip RV1126 is a codec chip aimed at machine-vision applications in artificial intelligence. FastDeploy currently supports deploying PaddleSeg models on the RV1126 based on Paddle-Lite.

## PaddleSeg models supported on Rockchip RV1126
The RV1126 NPU currently supports the quantized models listed below.

## Pre-exported inference models
For easier testing, some quantized inference models exported from PaddleSeg are provided below for direct download.

| Model | Parameter file size | Input shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |

**Note**
- A quantized PaddleSeg model consists of four files: `model.pdmodel`, `model.pdiparams`, `deploy.yaml`, and `subgraph.txt`. FastDeploy reads the preprocessing information required at inference time from the yaml file; subgraph.txt is a configuration file stored for heterogeneous computing.
- If none of the models above meets your requirements, follow the tutorial below to export a model adapted to the RV1126 yourself.

## Exporting a PaddleSeg dynamic graph model to an RV1126-supported INT8 model
Model export takes the following two steps:
1. Export the dynamic graph model trained with PaddleSeg as a static inference model; see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md). Note that the Rockchip RV1126 supports INT8 only.
2. Quantize and compress the inference model into an INT8 model; for FastDeploy's quantization method and the one-click auto-compression tool, see [Model Quantization](../../../quantize/README.md).

## Detailed deployment documents

Currently, only C++ deployment is supported on the RV1126.

- [C++ Deployment](cpp)
@@ -1,20 +0,0 @@
54
examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/README.md
Executable file → Normal file
@@ -1,29 +1,27 @@
[English](README.md) | 简体中文
# PP-LiteSeg Quantized Model C++ Deployment Example

`infer.cc` in this directory helps you quickly complete accelerated deployment and inference of the PP-LiteSeg quantized model on the RV1126.

## Deployment preparations
### Preparing the FastDeploy cross-compilation environment
For the hardware and software requirements and how to set up the cross-compilation environment, see [RV1126 Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装).

### Model preparations
1. You can deploy directly with a [quantized model provided by FastDeploy](../README_CN.md#瑞芯微-rv1126-支持的paddleseg模型).
2. If FastDeploy does not provide a quantized model that meets your requirements, export or train one yourself by following [Exporting a PaddleSeg dynamic graph model to an RV1126-supported INT8 model](../README_CN.md#paddleseg动态图模型导出为rv1126支持的int8模型).
3. If the exported or trained model loses accuracy or reports errors, heterogeneous computing is required so that part of the model's operators run on the RV1126's ARM CPU for debugging and accuracy verification; the file needed for heterogeneous computing is subgraph.txt. For details, see [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md).

## Deploying the quantized PP-LiteSeg segmentation model on the RV1126
Follow these steps to deploy the PP-LiteSeg quantized model on the RV1126:
1. Cross-compile the FastDeploy library; see [Cross-compiling FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/a311d.md#基于-paddle-lite-的-fastdeploy-交叉编译库编译).

2. Copy the compiled library into the current directory:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ path/to/paddleseg/rockchip/rv1126/cpp
```

3. Download the model and example image needed for deployment into the current directory:
```bash
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
tar -xvf ppliteseg.tar.gz
cp -r ppliteseg models
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```

4. Build the deployment example:
```bash
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
# After a successful build, an install folder is generated containing a runnable demo and the libraries required for deployment.
```

5. Deploy the PP-LiteSeg segmentation model to the Rockchip RV1126 via the adb tool:
```bash
# Enter the install directory.
cd path/to/paddleseg/rockchip/rv1126/cpp/build/install/
cp ../../run_with_adb.sh .
# The command below reads: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>.
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```

After successful deployment, the output looks like this:

<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">

## Quick links
- [PaddleSeg C++ API Documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [FastDeploy PaddleSeg Model Overview](../../)
@@ -1,57 +0,0 @@
@@ -1,7 +1,9 @@
[English](README.md) | 简体中文
# Serving PaddleSeg Models with FastDeploy
## Introduction to FastDeploy serving
Online inference is the last step when a company or an individual puts a model online, and it is an indispensable part of industrial practice; its core is the serving framework. FastDeploy currently offers two serving options: simple_serving and fastdeploy_serving.
- simple_serving is based on the Flask framework; it is simple and efficient, and suited to quickly validating that a model can be served online (see the client sketch below).
- fastdeploy_serving is based on the Triton Inference Server framework; it is a complete, high-performance serving framework suitable for real production use.
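To illustrate the simple_serving workflow, the sketch below posts an image to a running simple_serving instance from the client side. The URL, route, and payload/response layout are assumptions for illustration; they depend on how the server script is written, so adapt them to your deployment.

```python
import base64
import requests

# Hypothetical endpoint; adjust host, port, and route to your server script.
url = "http://127.0.0.1:8000/fd/ppseg"

with open("cityscapes_demo.png", "rb") as f:
    # Base64-encode the image so it can travel inside a JSON payload.
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(url, json={"data": {"image": image_b64}})
# The response layout depends on the server; here we assume a JSON result.
print(resp.json())
```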

## Detailed deployment documents
@@ -1,41 +1,51 @@
[English](README.md) | 简体中文
# PaddleSeg C++ Deployment Example

## Supported model list

- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

## Pre-exported inference models

For easier testing, some inference models exported from PaddleSeg are provided below for direct download.

To export a trained PaddleSeg model as an inference model, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md).

| Model | Parameter file size | Input shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |

## Steps to convert a PaddleSeg inference model into a bmodel

Before deployment on SOPHGO-TPU, the Paddle model must be converted into a bmodel. The steps are:
- Download the Paddle model: [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)
- Convert the Paddle model to an ONNX model; see [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
- Convert the ONNX model to a bmodel; see [TPU-MLIR](https://github.com/sophgo/tpu-mlir)

## bmodel conversion example

The following takes [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) as an example to show how to convert a Paddle model into a bmodel supported by SOPHGO-TPU.

### Download the PP-LiteSeg-B(STDC2)-cityscapes-without-argmax model and convert it to ONNX
```shell
# Download the Paddle2ONNX repository.
git clone https://github.com/PaddlePaddle/Paddle2ONNX

# Download the Paddle static graph model and fix its input shape.
## Enter the directory used to fix the input shape of the Paddle static graph model.
cd Paddle2ONNX/tools/paddle

wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz

# Change the input shape of the PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer model from dynamic to fixed.
python paddle_infer_shape.py --model_dir PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer \
                             --model_filename model.pdmodel \
                             --params_filename model.pdiparams \
                             --save_dir pp_liteseg_fix \
                             --input_shape_dict="{'x':[1,3,512,512]}"

# Convert the fixed-input Paddle model to an ONNX model.
paddle2onnx --model_dir pp_liteseg_fix \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file pp_liteseg.onnx \
            --enable_dev_version True
```

### Export the bmodel

Taking a BM1684x bmodel as an example, download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project; for installation details see the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
#### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest

# myname1234 is just an example; you may choose another name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest

source ./envsetup.sh
./build.sh
```

#### 2. Convert the ONNX model to a bmodel
``` shell
mkdir pp_liteseg && cd pp_liteseg

# Put the test images in this folder, together with the pp_liteseg.onnx converted in the previous step.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
# Put the onnx model file pp_liteseg.onnx here.

mkdir workspace && cd workspace

# Convert the ONNX model to an mlir model; the --output_names parameter can be looked up with NETRON.
model_transform.py \
    --model_name pp_liteseg \
    --model_def ../pp_liteseg.onnx \
    --input_shapes [[1,3,512,512]] \
    --mean 0.0,0.0,0.0 \
    --scale 0.0039216,0.0039216,0.0039216 \
    --keep_aspect_ratio \
    --pixel_format rgb \
    --output_names bilinear_interp_v2_6.tmp_0 \
    --test_input ../image/dog.jpg \
    --test_result pp_liteseg_top_outputs.npz \
    --mlir pp_liteseg.mlir

# Convert the mlir model to a BM1684x F32 bmodel.
model_deploy.py \
    --mlir pp_liteseg.mlir \
    --quantize F32 \
    --chip bm1684x \
    --test_input pp_liteseg_in_f32.npz \
    --test_reference pp_liteseg_top_outputs.npz \
    --model pp_liteseg_1684x_f32.bmodel
```
This finally yields pp_liteseg_1684x_f32.bmodel, which runs on the BM1684x. To accelerate the model further, convert the ONNX model to an INT8 bmodel; for the concrete steps see the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).

## Quick links
- [C++ Deployment](./cpp)
- [Python Deployment](./python)
@@ -1,106 +0,0 @@
@@ -1,57 +1,56 @@
[English](README.md) | 简体中文
# PaddleSeg C++ Deployment Example

This directory provides `infer.cc` for quickly completing accelerated deployment of PP-LiteSeg on a SOPHGO BM1684x board.

## Preparing the FastDeploy build environment for SOPHGO hardware
Before deployment, compile the inference library for SOPHGO hardware yourself; see [SOPHGO Hardware Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#算能硬件部署环境).

## Generating the basic directory layout

The example consists of the following parts:
```text
.
├── CMakeLists.txt
├── fastdeploy-sophgo  # Compiled FastDeploy folder
├── image              # Folder for images
├── infer.cc
└── model              # Folder for model files
```

## Build

### Building FastDeploy

See [Building the SOPHGO Deployment Library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md) to compile the SDK. After compilation, the fastdeploy-sophgo directory is generated under the build directory; copy fastdeploy-sophgo into the current directory.

### Copy the model and configuration files into the model folder
Convert the Paddle model into a SOPHGO bmodel; for the conversion steps see the [documentation](../README_CN.md#将paddleseg推理模型转换为bmodel模型步骤).

Copy the converted SOPHGO bmodel file into model.

### Prepare test images in the image folder
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp cityscapes_demo.png ./images
```

### Build the example

```bash
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-sophgo
make
```

## Run the example

```bash
./infer_demo model images/cityscapes_demo.png
```

## Quick links
- [PaddleSeg C++ API Documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [FastDeploy PaddleSeg Model Overview](../../)
- [Python Deployment](../python)
- [Model Conversion](../README_CN.md#将paddleseg推理模型转换为bmodel模型步骤)
@@ -1,57 +0,0 @@
@@ -1,27 +1,33 @@
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

## Preparing the FastDeploy Python wheel build environment for SOPHGO hardware

Before deployment, build and install the FastDeploy Python wheel for SOPHGO hardware yourself; see [SOPHGO Hardware Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#算能硬件部署环境).

This directory provides `infer.py` to quickly complete the deployment of pp_liteseg on a SOPHGO TPU. Run the following script:

```bash
# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd path/to/paddleseg/sophgo/python

# Download the test image.
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Convert the PaddleSeg model to a SOPHGO bmodel first; for the conversion
# steps see ../README_CN.md#将paddleseg推理模型转换为bmodel模型步骤

# Run inference.
python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file ./bmodel/deploy.yaml --image cityscapes_demo.png

# After the run finishes, the result is saved as sophgo_img.png.
```
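If you want to reproduce that visualization step yourself, a minimal sketch follows; it assumes `im` is the BGR input image and `result` the `SegmentationResult` produced inside `infer.py`, and that your FastDeploy build provides the `vis_segmentation` helper.

```python
import cv2
import fastdeploy as fd

# Blend the predicted mask over the input image (weight controls the overlay
# strength) and save it, mirroring what infer.py does with sophgo_img.png.
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("sophgo_img.png", vis_im)
```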

## Quick links
- [pp_liteseg C++ Deployment](../cpp)
- [Converting pp_liteseg to a SOPHGO Model](../README_CN.md#导出bmodel模型)

## FAQ
- [How to convert the SegmentationResult prediction into numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
@@ -1,27 +0,0 @@