mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-07 01:22:59 +08:00
Update ppmatting directory
This commit is contained in:
@@ -1,3 +1,3 @@
PP-Matting deployment examples, please refer to [document](../../segmentation/ppmatting/README_CN.md).
PaddleSeg Matting deployment examples, please refer to [document](../../segmentation/ppmatting/README_CN.md).
For PP-Matting deployment examples, please refer to the [document](../../segmentation/ppmatting/README_CN.md).
For PaddleSeg Matting deployment examples, please refer to the [document](../../segmentation/ppmatting/README_CN.md).
@@ -1,13 +1,17 @@
[English](README.md) | Simplified Chinese

# Deploying PaddleSeg Models on the Amlogic A311D with FastDeploy
The Amlogic A311D is an advanced AI application processor. FastDeploy supports deploying PaddleSeg models on the A311D via Paddle-Lite
# Deploying PaddleSeg Models on the Amlogic A311D via FastDeploy
The Amlogic A311D is an advanced AI application processor. PaddleSeg supports deploying its segmentation models on the A311D via FastDeploy with Paddle-Lite

## PaddleSeg Models Supported on the Amlogic A311D
The currently supported PaddleSeg models are as follows:

- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

The PaddleSeg models currently supported on the Amlogic A311D are as follows:
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

## Pre-exported Inference Models
## Pre-exported Quantized Inference Models
For developers' convenience, some quantized inference models exported from PaddleSeg are provided below; developers can download and use them directly.

| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
@@ -1,10 +1,13 @@
# Deploying PaddleSeg Models with FastDeploy
[English](README.md) | Simplified Chinese

FastDeploy supports deploying PaddleSeg models on Huawei Ascend
# Deploying PaddleSeg Models on Huawei Ascend via FastDeploy

## Model Version Notes
PaddleSeg supports deploying its segmentation models on Huawei Ascend via FastDeploy

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
## Supported PaddleSeg Models

- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

FastDeploy currently supports deploying the following models
@@ -1,10 +1,13 @@
[English](README.md) | Simplified Chinese

# High-Performance All-Scenario Deployment of PaddleSeg Models with FastDeploy

PaddleSeg supports deployment via FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware
PaddleSeg supports deploying segmentation models via FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware

## Model Version Notes

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

FastDeploy currently supports deploying the following models
@@ -1,8 +1,13 @@
# Deploying PaddleSeg Models with FastDeploy
[English](README.md) | Simplified Chinese

# High-Performance All-Scenario Deployment of PaddleSeg Models with FastDeploy

PaddleSeg supports deploying segmentation models on KunlunXin chips via FastDeploy

## Model Version Notes

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

FastDeploy currently supports deploying the following models
@@ -1,6 +1,7 @@
[English](README.md) | Simplified Chinese

# Deploying PaddleSeg Models with FastDeploy on RKNPU2
# Deploying PaddleSeg Segmentation Models on RKNPU2 via FastDeploy

RKNPU2 provides a high-performance interface for accessing the Rockchip NPU and supports deployment on the following hardware
- RK3566/RK3568
- RK3588/RK3588S
@@ -10,7 +11,8 @@ RKNPU2 provides a high-performance interface for accessing the Rockchip NPU and supports deployment on the following hardware

## Model Version Notes

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

FastDeploy with RKNPU2 inference currently supports deploying the following PaddleSeg models:
- [U-Net series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
@@ -3,7 +3,7 @@
Before deployment, confirm the following steps

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)
- 1. The software and hardware environment meets the requirements; for RKNPU2 environment setup, see [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, please refer to [Matting Model Deployment](../../../../../matting/)
@@ -1,10 +1,16 @@
[English](README.md) | Simplified Chinese
# Deploying PaddleSeg Models on the Rockchip RV1126 with FastDeploy
The Rockchip RV1126 is a codec chip aimed at machine vision applications in artificial intelligence. FastDeploy currently supports deploying PaddleSeg models on the RV1126 via Paddle-Lite
# Deploying PaddleSeg Models on the Rockchip RV1126 via FastDeploy
The Rockchip RV1126 is a codec chip aimed at machine vision applications in artificial intelligence. PaddleSeg supports deploying its segmentation models on the RV1126 via FastDeploy with Paddle-Lite

## PaddleSeg Models Supported on the Rockchip RV1126

- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

## PaddleSeg Models Supported on the Rockchip RV1126
The quantized models currently supported by the RV1126 NPU are as follows:
## Pre-exported Inference Models
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

## Pre-exported Quantized Inference Models
For developers' convenience, some quantized inference models exported from PaddleSeg are provided below; developers can download and use them directly.

| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
@@ -1,10 +1,53 @@
[English](README.md) | Simplified Chinese
# Serving Deployment of PaddleSeg Models with FastDeploy
# Serving Deployment of PaddleSeg Segmentation Models with FastDeploy
## Introduction to FastDeploy Serving Deployment
Online inference is the last step for enterprises or individuals to put models into production and an indispensable part of industrial practice, at the heart of which sits the serving inference framework. FastDeploy currently provides two serving deployment options: simple_serving and fastdeploy_serving
- simple_serving is based on the Flask framework; it is simple and efficient, and can quickly validate the feasibility of deploying a model online (a minimal sketch follows this list).
- fastdeploy_serving is based on the Triton Inference Server framework; it is a complete, high-performance serving deployment framework suitable for production use.
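
As a hedged illustration of the simple_serving idea, the sketch below wraps a FastDeploy segmentation model in a single Flask endpoint. The endpoint name, payload format, and file paths are hypothetical, not the actual simple_serving API.

```python
# Hypothetical sketch of the simple_serving idea: one Flask endpoint wrapping a
# FastDeploy model. Endpoint name, payload format, and paths are illustrative.
import cv2
import numpy as np
import fastdeploy as fd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = fd.vision.segmentation.PaddleSegModel(
    "model/model.pdmodel", "model/model.pdiparams", "model/deploy.yaml")

@app.route("/predict", methods=["POST"])
def predict():
    # Decode the raw uploaded bytes into a BGR image, as the model expects.
    buf = np.frombuffer(request.data, dtype=np.uint8)
    im = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    result = model.predict(im)
    return jsonify({"label_map": list(result.label_map)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```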

## Model Version Notes

- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

FastDeploy currently supports deploying the following models

- [U-Net series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
- [PP-HumanSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
- [FCN series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
- [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)

>> **Note**: If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, please refer to [Matting Model Deployment](../../ppmatting)

## Preparing PaddleSeg Deployment Models
For PaddleSeg model export, please refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)

**Note**
- A model exported from PaddleSeg consists of three files: `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`; FastDeploy reads the preprocessing configuration needed at inference time from the yaml file (see the sketch below)
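
A minimal hedged loading sketch (paths are placeholders): all three files are handed to FastDeploy together, and the preprocessing pipeline is parsed from `deploy.yaml` automatically.

```python
# Minimal sketch: the three exported files travel together; FastDeploy parses
# deploy.yaml itself for preprocessing. File paths below are placeholders.
import fastdeploy as fd

model = fd.vision.segmentation.PaddleSegModel(
    model_file="exported_model/model.pdmodel",
    params_file="exported_model/model.pdiparams",
    config_file="exported_model/deploy.yaml")
```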

## Pre-exported Inference Models

For developers' convenience, some models exported from PaddleSeg are provided below
- without-argmax export: do **not** specify `--input_shape`, and **do** specify `--output_op none`
- with-argmax export: do **not** specify `--input_shape`, and **do** specify `--output_op argmax`

Developers can download and use them directly; the postprocessing difference between the two variants is sketched below.
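
A hedged numpy sketch of that difference, assuming the usual NCHW output layout:

```python
# Sketch of the postprocessing difference (NCHW output layout assumed).
import numpy as np

# A without-argmax model returns per-class scores, e.g. (1, num_classes, H, W):
logits = np.random.rand(1, 19, 512, 1024).astype(np.float32)
label_map = np.argmax(logits, axis=1)  # a with-argmax model returns this directly
```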

| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [Unet-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_with_argmax_infer.tgz) \| [Unet-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-B(STDC2)-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz) \| [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 79.04% | 79.52% | 79.85% |
|[PP-HumanSegV1-Lite-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
|[PP-HumanSegV2-Lite-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
| [PP-HumanSegV2-Mobile-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
|[PP-HumanSegV1-Server-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
| [Portait-PP-HumanSegV2-Lite-with-argmax (portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portait-PP-HumanSegV2-Lite-without-argmax (portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) (GPU inference with ONNXRuntime is not supported yet) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
| [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - |

## Detailed Deployment Documents

- [fastdeploy serving](fastdeploy_serving)
@@ -1,6 +1,8 @@
English | [Simplified Chinese](README_CN.md)
# PaddleSeg Serving Deployment Demo

Before serving deployment, it is necessary to confirm the hardware and software environment requirements of the service image and the image pull command; please refer to [FastDeploy serving deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README.md)

## Launch Serving

```bash
@@ -1,9 +1,7 @@
[English](README.md) | Simplified Chinese
# PaddleSeg Serving Deployment Example

Before serving deployment, confirm

- 1. For the software and hardware environment requirements of the serving image and the image pull command, please refer to [FastDeploy Serving Deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README_CN.md)
Before serving deployment, confirm the software and hardware environment requirements of the serving image and the image pull command; please refer to [FastDeploy Serving Deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README_CN.md)

## Launch the Service
@@ -5,7 +5,7 @@ English | [Simplified Chinese](README_CN.md)

## Environment

- 1. Prepare the environment and install the FastDeploy Python whl; refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 1. Prepare the environment and install the FastDeploy Python whl; refer to [download_prebuilt_libraries](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install#install-prebuilt-fastdeploy)

Server:
```bash
@@ -2,10 +2,9 @@

# PaddleSeg Python Lightweight Serving Deployment Example

Before deployment, confirm the following two steps
## Deployment Environment Preparation

- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package; refer to [FastDeploy Python Installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
Before deployment, confirm the software and hardware environment and download the prebuilt python wheel package; refer to the document [FastDeploy Prebuilt Library Installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)

Server:
```bash
@@ -1,8 +1,13 @@
[English](README.md) | Simplified Chinese
# PaddleSeg C++ Deployment Example
# Deploying PaddleSeg Models on Sophgo Hardware via FastDeploy
PaddleSeg supports deploying its segmentation models on Sophgo TPUs via FastDeploy

## Supported Model List
## PaddleSeg Models Supported on Sophgo Hardware

- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note**: supports segmentation models from PaddleSeg versions above 2.6

The models currently supported on Sophgo TPUs are as follows:
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

## Pre-exported Inference Models
@@ -1,42 +1,22 @@
English | [Simplified Chinese](README_CN.md)
# PP-Matting Model Deployment
# High-Performance All-Scenario Deployment of PaddleSeg Models with FastDeploy

## Model Description
## About FastDeploy

- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
[FastDeploy](https://github.com/PaddlePaddle/FastDeploy) is an all-scenario, easy-to-use, flexible, and highly efficient AI inference deployment tool; with FastDeploy, PaddleSeg Matting models can be deployed quickly and easily on 10+ kinds of hardware

## List of Supported Models
## Supported Deployment Hardware

Now FastDeploy supports the deployment of the following models
| Supported Hardware | | | |
|:----- | :-- | :-- | :-- |
| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu)| [Phytium CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
| [Intel GPU (discrete/integrated)](cpu-gpu) | [KunlunXin](cpu-gpu) | [Ascend](cpu-gpu) |

- [PP-Matting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [PP-HumanMatting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [ModNet models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
## FAQ

If you run into problems, check the FAQ document or search the FastDeploy issues; the links are as follows:

## Export Deployment Model
[FAQ Collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)

Before deployment, PP-Matting needs to be exported into a deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information. (Tip: you need to set the `--input_shape` parameter of the export script when exporting PP-Matting and PP-HumanMatting models)
[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)


## Download Pre-trained Models

For developers' testing, models exported by PP-Matting are provided below. Developers can download and use them directly.

The accuracy metrics are sourced from the model descriptions in PP-Matting (accuracy data are not provided); refer to the introduction in PP-Matting for more details.

| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - |
| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - |
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |


## Detailed Deployment Tutorials

- [Python Deployment](python)
- [C++ Deployment](cpp)
If none of the above resolves your problem, feel free to open a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues) with FastDeploy
1
examples/vision/segmentation/ppmatting/ascend/README.md
Symbolic link
@@ -0,0 +1 @@
../cpu-gpu/README.md
@@ -1,93 +0,0 @@
English | [Simplified Chinese](README_CN.md)
# PP-Matting C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes deploying PP-Matting on CPU/GPU, and on GPU with TensorRT acceleration.
Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking PP-Matting inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg


# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```

The visualized result after running is as follows
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

The above commands work for Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## PP-Matting C++ Interface

### PPMatting Class

```c++
fastdeploy::vision::matting::PPMatting(
        const std::string& model_file,
        const std::string& params_file,
        const std::string& config_file,
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::PADDLE)
```

PP-Matting model loading and initialization, among which model_file is the exported Paddle model file.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
> * **model_format**(ModelFormat): Model format. Paddle format by default

#### Predict Function

> ```c++
> PPMatting::Predict(cv::Mat* im, MattingResult* result)
> ```
>
> Model prediction interface: takes an input image and outputs the matting result.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: The matting result. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -1,94 +0,0 @@
[English](README.md) | Simplified Chinese
# PP-Matting C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes deploying PP-Matting on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code for your development environment; refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking PP-Matting inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg


# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```

The visualized result after running is shown below
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## PP-Matting C++ Interface

### PPMatting Class

```c++
fastdeploy::vision::matting::PPMatting(
        const std::string& model_file,
        const std::string& params_file,
        const std::string& config_file,
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::PADDLE)
```

PP-Matting model loading and initialization, where model_file is the exported Paddle model file.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default

#### Predict Function

> ```c++
> PPMatting::Predict(cv::Mat* im, MattingResult* result)
> ```
>
> Model prediction interface: takes an input image and outputs the matting result.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: The matting result. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult

### Class Member Attributes
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
@@ -1,30 +1,32 @@
[English](README.md) | Simplified Chinese
# PP-Matting Model Deployment
# High-Performance All-Scenario Deployment of PaddleSeg Matting Models with FastDeploy

PaddleSeg supports deploying Matting models via [FastDeploy](https://github.com/PaddlePaddle/FastDeploy) on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, Intel GPU (discrete/integrated), KunlunXin, and Huawei Ascend hardware

## Model Version Notes

- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)

## Supported Model List
- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
>> **Note**: supports Matting models from PaddleSeg versions above 2.6

FastDeploy currently supports deploying the following models

- [PP-Matting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [PP-HumanMatting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [ModNet series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [PP-Matting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [PP-HumanMatting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [ModNet series models](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)


## Exporting the Deployment Model
## Preparing PaddleSeg Deployment Models
Before deployment, the Matting model first needs to be exported as a deployment model; see the document [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting) for the export steps

Before deployment, PP-Matting first needs to be exported as a deployment model; see the document [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for the export steps (Tip: exporting PP-Matting series and PP-HumanMatting series models requires setting the `--input_shape` parameter of the export script)
**Note**
- A model exported from PaddleSeg consists of three files: `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`; FastDeploy reads the preprocessing configuration needed at inference time from the yaml file


## Download Pre-trained Models
## Pre-exported Inference Models

For developers' convenience, the exported PP-Matting model series are provided below; developers can download and use them directly.

The accuracy metrics are sourced from the model descriptions in PP-Matting (accuracy data are not provided); see the notes in PP-Matting for details.

>> **Note**: the `deploy.yaml` file records the `input_shape` of the exported model as well as the preprocessing information; if it does not meet your requirements, you can re-export the model (a quick way to inspect it is sketched below)
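
A small hedged sketch of that check (assumes PyYAML is installed; the exact keys depend on the export configuration):

```python
# Inspect the exported deploy.yaml; exact keys depend on the export configuration.
import yaml

with open("PP-Matting-512/deploy.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg)  # shows the recorded input shape and preprocessing ops
```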

| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
@@ -35,8 +37,6 @@
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |


## Detailed Deployment Documents

- [Python Deployment](python)
60
examples/vision/segmentation/ppmatting/cpu-gpu/cpp/README.md
Normal file
@@ -0,0 +1,60 @@
[English](README.md) | Simplified Chinese
# PP-Matting C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes deploying PP-Matting on CPU/GPU, KunlunXin, Huawei Ascend, and on GPU with Paddle-TensorRT acceleration.

Before deployment, confirm the software and hardware environment and download the precompiled deployment library; refer to the document [FastDeploy Prebuilt Library Installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install)

>> **Note**: precompiled libraries are provided only for CPU and GPU; for Huawei Ascend and KunlunXin, build the deployment environment yourself following the document above

Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; choose the appropriate version from the `FastDeploy Prebuilt Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg


# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```
>> **Note**: the example above does not cover Huawei Ascend. After setting up the Ascend deployment environment, you only need to change one line of code: replace `option.UseKunlunXin()` in the KunlunXinInfer function of the example file with `option.UseAscend()` to run inference on Huawei Ascend

The visualized result after running is shown below
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## Quick Links
- [PaddleSeg C++ API Documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)
- [Python Deployment](../python)

## FAQ
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPU (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
@@ -121,6 +121,10 @@ void TrtInfer(const std::string& model_dir, const std::string& image_file,
auto option = fastdeploy::RuntimeOption();
option.UseGpu();
option.UseTrtBackend();
// To use vanilla TensorRT instead of Paddle-TensorRT,
// comment out the following two lines
option.EnablePaddleToTrt();
option.EnablePaddleTrtCollectShape();
option.SetTrtInputShape("img", {1, 3, 512, 512});
auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
                                                    config_file, option);
@@ -0,0 +1,52 @@
[English](README.md) | Simplified Chinese
# PP-Matting Python Deployment Example

This directory provides an example in which `infer.py` quickly finishes deploying PP-Matting on CPU/GPU, KunlunXin, Huawei Ascend, and on GPU with Paddle-TensorRT acceleration. Run the following script to complete it

## Deployment Environment Preparation

Before deployment, confirm the software and hardware environment and download the prebuilt python wheel package; refer to the document [FastDeploy Prebuilt Library Installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install)
>> **Note**: precompiled libraries are provided only for CPU and GPU; for Huawei Ascend and KunlunXin, build the deployment environment yourself following the document above


```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/ppmatting/python

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
# TensorRT inference on GPU (Note: the first TensorRT run serializes the model, which takes some time; please be patient)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
```
>> **Note**: the example above does not cover Huawei Ascend. After setting up the Ascend deployment environment, you only need to change one line of code: replace `option.use_kunlunxin()` in the example file with `option.use_ascend()` to run inference on Huawei Ascend, as the sketch below shows
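
A hedged sketch of that one-line switch (option names are taken from the note above; the rest of the example stays the same):

```python
# Sketch of the one-line device switch described above; the rest of infer.py
# is unchanged.
import fastdeploy as fd

option = fd.RuntimeOption()
# option.use_kunlunxin()  # KunlunXin XPU path used by the original example
option.use_ascend()       # the same example redirected to Huawei Ascend
```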

The visualized result after running is shown below
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

## Quick Links
- [PaddleSeg Python API Documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](..)
- [PaddleSeg C++ Deployment](../cpp)

## FAQ
- [How to convert the SegmentationResult prediction to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPU (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
@@ -38,6 +38,10 @@ def build_option(args):

    if args.use_trt:
        option.use_trt_backend()
        # To use vanilla TensorRT instead of Paddle-TensorRT,
        # comment out the following two lines
        option.enable_paddle_to_trt()
        option.enable_paddle_trt_collect_shape()
        option.set_trt_input_shape("img", [1, 3, 512, 512])

    if args.device.lower() == "kunlunxin":
1
examples/vision/segmentation/ppmatting/kunlun/README.md
Symbolic link
@@ -0,0 +1 @@
../cpu-gpu/README.md
@@ -1,81 +0,0 @@
English | [Simplified Chinese](README_CN.md)
# PP-Matting Python Deployment Example

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

This directory provides an example in which `infer.py` quickly finishes deploying PP-Matting on CPU/GPU, and on GPU with TensorRT acceleration. The script is as follows
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/ppmatting/python

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
# TensorRT inference on GPU (Note: it is somewhat time-consuming to serialize the model when running TensorRT inference for the first time; please be patient)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
```

The visualized result after running is as follows
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>
## PP-Matting Python Interface

```python
fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PP-Matting model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information; a hedged loading sketch follows
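
```python
# Minimal loading sketch using the files extracted from PP-Matting-512.tgz
# in the example above.
import fastdeploy as fd

model = fd.vision.matting.PPMatting(
    "PP-Matting-512/model.pdmodel",
    "PP-Matting-512/model.pdiparams",
    "PP-Matting-512/deploy.yaml")
```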

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
> * **model_format**(ModelFormat): Model format. Paddle format by default

### predict Function

> ```python
> PPMatting.predict(input_image)
> ```
>
> Model prediction interface: takes an input image and outputs the matting result.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.MattingResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure; a usage sketch follows.
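
Putting the pieces together, a hedged sketch of prediction plus visualization; the `fd.vision.vis_matting` call follows the FastDeploy vision API, so verify it against the docs linked above:

```python
# Sketch: run prediction on a BGR image and visualize the matte. Verify
# fd.vision.vis_matting against the FastDeploy docs linked above.
import cv2
import fastdeploy as fd

im = cv2.imread("matting_input.jpg")
result = model.predict(im)
vis = fd.vision.vis_matting(im, result)
cv2.imwrite("visualized.jpg", vis)
```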

### Class Member Variables

#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


## Other Documents

- [PP-Matting Model Description](..)
- [PP-Matting C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -1,80 +0,0 @@
[English](README.md) | Simplified Chinese
# PP-Matting Python Deployment Example

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package; refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides an example in which `infer.py` quickly finishes deploying PP-Matting on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete it

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/ppmatting/python

# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
# TensorRT inference on GPU (Note: the first TensorRT run serializes the model, which takes some time; please be patient)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
```

The visualized result after running is shown below
<div width="840">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852040-759da522-fca4-4786-9205-88c622cd4a39.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852587-48895efc-d24a-43c9-aeec-d7b0362ab2b9.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>
## PP-Matting Python Interface

```python
fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PP-Matting model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; refer to its documentation [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for details

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration; None by default, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format; Paddle format by default

### predict Function

> ```python
> PPMatting.predict(input_image)
> ```
>
> Model prediction interface: takes an input image and outputs the matting result.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.MattingResult` structure; refer to the document [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description. A hedged conversion sketch follows.
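
A hedged sketch of turning the `result` from predict above into numpy (assumes `MattingResult` exposes a flat `alpha` list plus a `shape` field; check the linked document):

```python
# Sketch: reshape the flat alpha list into an HxW matte. Field names are
# assumed from MattingResult's documented structure; check the linked document.
import numpy as np

alpha = np.asarray(result.alpha, dtype=np.float32).reshape(result.shape)
print(alpha.min(), alpha.max())  # matte values, typically in [0, 1]
```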

### Class Member Attributes
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results


## Other Documents

- [PP-Matting Model Description](..)
- [PP-Matting C++ Deployment](../cpp)
- [Model Prediction Results Description](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)