Update docs

This commit is contained in:
felixhjh
2023-02-15 07:11:41 +00:00
parent 001c46558a
commit fb1d95c7c9
13 changed files with 62 additions and 14 deletions

View File

@@ -19,7 +19,7 @@ struct SegmentationResult {
```
- **label_map**: Member variable which indicates the segmentation category of each pixel in a single image. `label_map.size()` indicates the number of pixels in the image.
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map` (when the model is exported with `--output_op argmax`), or the probability values normalized by softmax (when exported with `--output_op softmax`, or when exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set during model initialization).
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map`. This member is non-empty only when the PaddleSeg model is exported with `--output_op none`; otherwise it is empty.
- **shape**: Member variable which indicates the shape of the output image as H\*W.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).
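The effect of the three `--output_op` export modes on `label_map` and `score_map` can be sketched in plain Python. This is a toy illustration with made-up per-pixel scores, not FastDeploy code:

```python
import math

def softmax(scores):
    """Numerically stable softmax over one pixel's raw class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Raw per-pixel class scores, as `--output_op none` would leave them
# (two pixels, three classes -- made-up numbers).
raw = [[0.5, 2.0, 1.0], [3.0, 0.1, 0.2]]

# `--output_op argmax`: only the winning class index per pixel survives.
label_map = [max(range(len(p)), key=lambda i: p[i]) for p in raw]

# `--output_op softmax` (or `apply_softmax=True` at model init):
# normalized probabilities, from which score_map keeps the winning value.
score_map = [max(softmax(p)) for p in raw]

print(label_map)  # [1, 0]
```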
@@ -29,5 +29,5 @@ struct SegmentationResult {
`fastdeploy.vision.SegmentationResult`
- **label_map**(list of int): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map` (when the model is exported with `--output_op argmax`), or the probability values normalized by softmax (when exported with `--output_op softmax`, or when exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set during model initialization).
- **score_map**(list of float): Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map`. This member is non-empty only when the PaddleSeg model is exported with `--output_op none`; otherwise it is empty.
- **shape**(list of int): Member variable which indicates the shape of the output image as H\*W.
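Since `label_map` is a flat list and `shape` gives H\*W, the two can be combined into a 2-D mask. A minimal sketch with toy values standing in for a real `SegmentationResult`:

```python
from collections import Counter

def to_grid(label_map, shape):
    """Rearrange the flat per-pixel label list into an H x W grid."""
    h, w = shape
    assert len(label_map) == h * w
    return [label_map[row * w:(row + 1) * w] for row in range(h)]

# Toy values standing in for a real result (2 x 3 image, two classes).
label_map = [0, 0, 1, 1, 1, 0]
shape = [2, 3]

grid = to_grid(label_map, shape)
counts = Counter(label_map)  # pixels predicted per class

print(grid)       # [[0, 0, 1], [1, 1, 0]]
print(counts[1])  # 3
```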

View File

@@ -20,7 +20,7 @@ struct SegmentationResult {
```
- **label_map**: Member variable which indicates the segmentation category of each pixel in a single image. `label_map.size()` indicates the number of pixels in the image.
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map` (when the model is exported with `--output_op argmax`), or the probability values normalized by softmax (when exported with `--output_op softmax`, or when exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set during model initialization).
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map`. This member is non-empty only when the PaddleSeg model is exported with `--output_op none`; otherwise it is empty.
- **shape**: Member variable which indicates the shape of the output image as H\*W.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Free()**: Member function used to clear the results stored in the structure and release the memory.
@@ -31,5 +31,5 @@ struct SegmentationResult {
`fastdeploy.vision.SegmentationResult`
- **label_map**(list of int): Member variable which indicates the segmentation category of each pixel in a single image.
- **score_map**(list of float): Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map` (when the model is exported with `--output_op argmax`), or the probability values normalized by softmax (when exported with `--output_op softmax`, or when exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/python/) `apply_softmax=true` is set during model initialization).
- **score_map**(list of float): Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map`. This member is non-empty only when the PaddleSeg model is exported with `--output_op none`; otherwise it is empty.
- **shape**(list of int): Member variable which indicates the shape of the output image as H\*W.

View File

@@ -49,7 +49,7 @@ struct SegmentationResult {
```
- **label_map**: Member variable which indicates the segmentation category of each pixel in a single image. `label_map.size()` indicates the number of pixels in the image.
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map` (when the model is exported with `--output_op argmax`), or the probability values normalized by softmax (when exported with `--output_op softmax`, or when exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set during model initialization).
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map`. This member is non-empty only when the PaddleSeg model is exported with `--output_op none`; otherwise it is empty.
- **shape**: Member variable which indicates the shape of the output image as H\*W.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Free()**: Member function used to clear the results stored in the structure and release the memory.

View File

@@ -49,7 +49,7 @@ struct SegmentationResult {
```
- **label_map**: Member variable which indicates the segmentation category of each pixel in a single image. `label_map.size()` indicates the number of pixels in the image.
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map` (when the model is exported with `--output_op argmax`), or the probability values normalized by softmax (when exported with `--output_op softmax`, or when exported with `--output_op none` while the model's [class member attribute](../../../examples/vision/segmentation/paddleseg/cpp/) `apply_softmax=True` is set during model initialization).
- **score_map**: Member variable which holds the predicted segmentation category probability values corresponding one-to-one with `label_map`. This member is non-empty only when the PaddleSeg model is exported with `--output_op none`; otherwise it is empty.
- **shape**: Member variable which indicates the shape of the output image as H\*W.
- **Clear()**: Member function used to clear the results stored in the structure.
- **Str()**: Member function used to output the information in the structure as a string (for debugging).

View File

@@ -1,8 +1,19 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on the Amlogic A311D via FastDeploy
# Deploying PaddleSeg Models on Amlogic NPUs via FastDeploy
## Amlogic Chips Supported for PaddleSeg Deployment
Deployment is supported on the following chips:
- Amlogic A311D
- Amlogic C308X
- Amlogic S905D3
This example uses the Amlogic A311D to show how to deploy PaddleSeg models with FastDeploy.
The Amlogic A311D is an advanced AI application processor. PaddleSeg supports deploying Segmentation models on the A311D via FastDeploy based on Paddle-Lite.
>> **Note**: VeriSilicon, as an IP design vendor, does not ship physical SoC products itself; it licenses its IP to chip vendors such as Amlogic and Rockchip. This document therefore applies to chip products that license VeriSilicon's NPU IP. As long as a chip does not substantially modify VeriSilicon's underlying libraries, it can use this document as a reference and tutorial for Paddle Lite inference deployment. In this document, the NPUs in Amlogic SoCs and Rockchip SoCs are collectively referred to as VeriSilicon NPUs.
## PaddleSeg Models Supported on the Amlogic A311D
- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)

View File

@@ -1,6 +1,16 @@
[English](README.md) | 简体中文
# PaddleSeg High-Performance All-Scenario Model Deployment Solution - FastDeploy
# Deploying PaddleSeg Models on KunlunXin Chips via FastDeploy
## KunlunXin Chips Supported for PaddleSeg Deployment
Deployment is supported on the following chips:
- Kunlun 818-100 (inference chip)
- Kunlun 818-300 (training chip)
Devices based on the following chips are supported:
- K100/K200 Kunlun AI accelerator cards
- R200 KunlunXin AI accelerator cards
PaddleSeg supports deploying Segmentation models on KunlunXin chips via FastDeploy.

View File

@@ -1,6 +1,18 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on the Rockchip RV1126 via FastDeploy
The Rockchip RV1126 is a codec chip aimed at machine vision in the AI field. PaddleSeg supports deploying Segmentation models on the RV1126 via FastDeploy based on Paddle-Lite.
# Deploying PaddleSeg Models on Rockchip NPUs via FastDeploy
## Rockchip Chips Supported for PaddleSeg Deployment
Deployment is supported on the following chips:
- Rockchip RV1109
- Rockchip RV1126
- Rockchip RK1808
>> **Note**: VeriSilicon, as an IP design vendor, does not ship physical SoC products itself; it licenses its IP to chip vendors such as Amlogic and Rockchip. This document therefore applies to chip products that license VeriSilicon's NPU IP. As long as a chip does not substantially modify VeriSilicon's underlying libraries, it can use this document as a reference and tutorial for Paddle Lite inference deployment. In this document, the NPUs in Amlogic SoCs and Rockchip SoCs are collectively referred to as VeriSilicon NPUs.
The Rockchip RV1126 is a codec chip aimed at machine vision in the AI field.
This example uses the RV1126 to show how to deploy PaddleSeg models with FastDeploy.
PaddleSeg supports deploying Segmentation models on the RV1126 via FastDeploy based on Paddle-Lite.
## PaddleSeg Models Supported on the Rockchip RV1126

View File

@@ -2,8 +2,8 @@
# Serving Deployment of PaddleSeg Segmentation Models with FastDeploy
## Introduction to FastDeploy Serving
Online inference, the last step of putting a model into production for an enterprise or individual, is indispensable in industry, and the serving inference framework is its most important component. FastDeploy currently provides two serving deployment options: simple_serving and fastdeploy_serving.
- simple_serving: based on the Flask framework; simple and efficient, it can quickly verify the feasibility of deploying a model online
- fastdeploy_serving: based on the Triton Inference Server framework; a complete, high-performance serving deployment framework usable in actual production.
- simple_serving: suited to scenarios that only need to invoke AI inference tasks over HTTP and have no high-concurrency requirements; based on the Flask framework, it is simple and efficient and can quickly verify the feasibility of deploying a model online
- fastdeploy_serving: suited to high-concurrency, high-throughput scenarios; based on the Triton Inference Server framework, it is a complete, high-performance serving deployment framework usable in actual production
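On the client side, a simple_serving call is just an HTTP POST with the image encoded into a JSON body. The sketch below only shows how such a body could be packaged; the field names are illustrative assumptions, not FastDeploy's actual request schema:

```python
import base64
import json

def build_request(image_bytes):
    """Package raw image bytes into a JSON body for an HTTP POST.

    The field names here are illustrative only, not FastDeploy
    simple_serving's actual schema.
    """
    return json.dumps(
        {"data": {"image": base64.b64encode(image_bytes).decode("ascii")}}
    )

body = build_request(b"fake image bytes")
# A client would then POST this body to the Flask endpoint,
# e.g. requests.post(url, data=body), and decode the JSON response.
restored = base64.b64decode(json.loads(body)["data"]["image"])
print(restored == b"fake image bytes")  # True
```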
## Model Version Notes

View File

@@ -1,5 +1,9 @@
English | [简体中文](README_CN.md)
# PaddleSegmentation Serving Deployment Demo
# PaddleSeg Serving Deployment Demo
The PaddleSeg serving deployment demo is built with FastDeploy Serving. FastDeploy Serving is a serving deployment framework built on the Triton Inference Server framework for high-concurrency, high-throughput requests; it is a complete, high-performance framework that can be used in actual production. If you don't need high concurrency and high throughput and just want to quickly test the feasibility of deploying the model online, please refer to [simple_serving](../simple_serving/)
## Environment
Before serving deployment, confirm the hardware and software environment requirements of the serving image and the image pull command; see [FastDeploy serving deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README.md)

View File

@@ -1,6 +1,9 @@
[English](README.md) | 简体中文
# PaddleSeg Serving Deployment Example
The PaddleSeg serving deployment example is built with FastDeploy Serving. FastDeploy Serving is a serving deployment framework built on the Triton Inference Server framework for high-concurrency, high-throughput requests; it is a complete, high-performance framework that can be used in actual production. If you do not need high concurrency and high throughput and just want to quickly verify the feasibility of deploying the model online, please refer to [simple_serving](../simple_serving/)
## Deployment Environment Setup
Before serving deployment, confirm the hardware and software environment requirements of the serving image and the image pull command; see [FastDeploy serving deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README_CN.md)

View File

@@ -1,7 +1,8 @@
English | [简体中文](README_CN.md)
# PaddleSegmentation Python Simple Serving Demo
# PaddleSeg Python Simple Serving Demo
PaddleSeg Python Simple Serving is a serving deployment example built by FastDeploy on the Flask framework that can quickly verify the feasibility of deploying a model online. It completes AI inference tasks via HTTP requests and is suitable for simple scenarios without concurrent inference tasks. For high-concurrency, high-throughput scenarios, please refer to [fastdeploy_serving](../fastdeploy_serving/)
## Environment

View File

@@ -2,6 +2,8 @@
# PaddleSeg Python Lightweight Serving Deployment Example
PaddleSeg Python lightweight serving deployment is a serving deployment example built by FastDeploy on the Flask framework that can quickly verify the feasibility of deploying a model online. It completes AI inference tasks via HTTP requests and is suitable for simple scenarios without concurrent inference; for high-concurrency, high-throughput scenarios, please refer to [fastdeploy_serving](../fastdeploy_serving/)
## Deployment Environment Setup
Before deployment, confirm the software and hardware environment and download the pre-built Python wheel package; see [FastDeploy Pre-built Library Installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)

View File

@@ -1,5 +1,10 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on Sophgo Hardware via FastDeploy
## Sophgo Chips Supported for PaddleSeg Deployment
Deployment is supported on the following chips:
- Sophgo 1684X
PaddleSeg supports deploying Segmentation models on Sophgo TPUs via FastDeploy.
## PaddleSeg Models Supported on Sophgo Hardware