diff --git a/examples/vision/matting/ppmatting/README.md b/examples/vision/matting/ppmatting/README.md
index 2e8389bc1..2a54d53c7 100644
--- a/examples/vision/matting/ppmatting/README.md
+++ b/examples/vision/matting/ppmatting/README.md
@@ -1,3 +1,3 @@
-PP-Matting deployment examples, please refer to [document](../../segmentation/ppmatting/README_CN.md).
+For PaddleSeg Matting deployment examples, please refer to the [document](../../segmentation/ppmatting/README_CN.md).
-PP-Matting的部署示例,请参考[文档](../../segmentation/ppmatting/README_CN.md).
+PaddleSeg Matting的部署示例,请参考[文档](../../segmentation/ppmatting/README_CN.md)。
diff --git a/examples/vision/segmentation/paddleseg/amlogic/a311d/README.md b/examples/vision/segmentation/paddleseg/amlogic/a311d/README.md
index 9f856deb4..c9a04fd41 100644
--- a/examples/vision/segmentation/paddleseg/amlogic/a311d/README.md
+++ b/examples/vision/segmentation/paddleseg/amlogic/a311d/README.md
@@ -1,13 +1,17 @@
 [English](README.md) | 简体中文
-# 在晶晨A311D上使用FastDeploy部署PaddleSeg模型
-晶晨A311D是一款先进的AI应用处理器。FastDeploy支持在A311D上基于Paddle-Lite部署PaddleSeg相关模型
+# PaddleSeg在晶晨A311D上通过FastDeploy部署模型
+晶晨A311D是一款先进的AI应用处理器。PaddleSeg支持通过FastDeploy在A311D上基于Paddle-Lite部署相关Segmentation模型
 
 ## 晶晨A311D支持的PaddleSeg模型
-目前所支持的PaddleSeg模型如下:
+
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
+
+目前晶晨A311D所支持的PaddleSeg模型如下:
 
 - [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
 
-## 预导出的推理模型
+## 预导出的量化推理模型
 
 为了方便开发者的测试,下面提供了PaddleSeg导出的部分量化后的推理模型,开发者可直接下载使用。
 
 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
diff --git a/examples/vision/segmentation/paddleseg/ascend/README.md b/examples/vision/segmentation/paddleseg/ascend/README.md
index 475d8817b..05f4d8348 100644
--- a/examples/vision/segmentation/paddleseg/ascend/README.md
+++ b/examples/vision/segmentation/paddleseg/ascend/README.md
@@ -1,10 +1,13 @@
-# 使用FastDeploy部署PaddleSeg模型
+[English](README.md) | 简体中文
 
-FastDeploy支持在华为昇腾上部署PaddleSeg模型
+# PaddleSeg利用FastDeploy在华为昇腾上部署模型
 
-## 模型版本说明
+PaddleSeg支持通过FastDeploy在华为昇腾上部署Segmentation相关模型
 
-- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
+## 支持的PaddleSeg模型
+
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
 
 目前FastDeploy支持如下模型的部署
diff --git a/examples/vision/segmentation/paddleseg/cpu-gpu/README.md b/examples/vision/segmentation/paddleseg/cpu-gpu/README.md
index a5e02e6c9..b126e9ddb 100644
--- a/examples/vision/segmentation/paddleseg/cpu-gpu/README.md
+++ b/examples/vision/segmentation/paddleseg/cpu-gpu/README.md
@@ -1,10 +1,13 @@
+[English](README.md) | 简体中文
+
 # PaddleSeg模型高性能全场景部署方案-FastDeploy
 
-PaddleSeg通过FastDeploy支持在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上部署
+PaddleSeg支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上部署Segmentation模型
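+
+FastDeploy Python预编译包的安装命令示意如下(仅为示意,包名与安装源请以FastDeploy安装文档为准):
+
+```bash
+# 仅为示意:安装GPU版本预编译包;纯CPU环境可改装fastdeploy-python
+pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
+```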
 
 ## 模型版本说明
 
-- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
 
 目前FastDeploy支持如下模型的部署
diff --git a/examples/vision/segmentation/paddleseg/kunlun/README.md b/examples/vision/segmentation/paddleseg/kunlun/README.md
index 08406d082..cdb727988 100644
--- a/examples/vision/segmentation/paddleseg/kunlun/README.md
+++ b/examples/vision/segmentation/paddleseg/kunlun/README.md
@@ -1,8 +1,13 @@
-# 使用FastDeploy部署PaddleSeg模型
+[English](README.md) | 简体中文
+
+# PaddleSeg模型高性能全场景部署方案-FastDeploy
+
+PaddleSeg支持利用FastDeploy在昆仑芯片上部署Segmentation模型
 
 ## 模型版本说明
 
-- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
 
 目前FastDeploy支持如下模型的部署
diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md b/examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md
index 21a9b92ba..a536630e3 100644
--- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md
+++ b/examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md
@@ -1,6 +1,7 @@
 [English](README.md) | 简体中文
-# 基于RKNPU2使用FastDeploy部署PaddleSeg模型
+# PaddleSeg利用FastDeploy基于RKNPU2部署Segmentation模型
+
 RKNPU2 提供了一个高性能接口来访问 Rockchip NPU,支持如下硬件的部署
 - RK3566/RK3568
 - RK3588/RK3588S
@@ -10,7 +11,8 @@ RKNPU2 提供了一个高性能接口来访问 Rockchip NPU,支持如下硬件
 
 ## 模型版本说明
 
-- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
 
 目前FastDeploy使用RKNPU2推理PaddleSeg支持如下模型的部署:
 - [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
diff --git a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md b/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md
index 5b7c3df35..7524b6c60 100644
--- a/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md
+++ b/examples/vision/segmentation/paddleseg/rockchip/rknpu2/python/README.md
@@ -3,7 +3,7 @@
 
 在部署前,需确认以下步骤
 
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)
+- 1. 软硬件环境满足要求,RKNPU2环境部署等参考[FastDeploy环境要求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)
 
 【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../../../matting/)
diff --git a/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md b/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md
index 5f92e7f6f..12b9a0d05 100644
--- a/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md
+++ b/examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md
@@ -1,10 +1,16 @@
 [English](README.md) | 简体中文
-# 在瑞芯微 RV1126 上使用 FastDeploy 部署 PaddleSeg 模型
-瑞芯微 RV1126 是一款编解码芯片,专门面相人工智能的机器视觉领域。目前,FastDeploy 支持在 RV1126 上基于 Paddle-Lite 部署 PaddleSeg 相关模型
+# PaddleSeg在瑞芯微 RV1126上通过FastDeploy部署模型
+瑞芯微 RV1126 是一款编解码芯片,专门面向人工智能的机器视觉领域。PaddleSeg支持通过FastDeploy在RV1126上基于Paddle-Lite部署相关Segmentation模型
+
+## 瑞芯微 RV1126支持的PaddleSeg模型
+
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
 
-## 瑞芯微 RV1126 支持的PaddleSeg模型
 目前瑞芯微 RV1126 的 NPU 支持的量化模型如下:
-## 预导出的推理模型
+
+- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
+
+## 预导出的量化推理模型
 为了方便开发者的测试,下面提供了PaddleSeg导出的部分量化后的推理模型,开发者可直接下载使用。
 
 | 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
diff --git a/examples/vision/segmentation/paddleseg/serving/README_CN.md b/examples/vision/segmentation/paddleseg/serving/README_CN.md
index ea1599432..803465941 100644
--- a/examples/vision/segmentation/paddleseg/serving/README_CN.md
+++ b/examples/vision/segmentation/paddleseg/serving/README_CN.md
@@ -1,10 +1,53 @@
 [English](README.md) | 简体中文
 
-# 使用 FastDeploy 服务化部署 PaddleSeg 模型
+# PaddleSeg 使用 FastDeploy 服务化部署 Segmentation 模型
 
 ## FastDeploy 服务化部署介绍
 
 在线推理作为企业或个人线上部署模型的最后一环,是工业界必不可少的环节,其中最重要的就是服务化推理框架。FastDeploy 目前提供两种服务化部署方式:simple_serving和fastdeploy_serving
 - simple_serving基于Flask框架具有简单高效的特点,可以快速验证线上部署模型的可行性。
 - fastdeploy_serving基于Triton Inference Server框架,是一套完备且性能卓越的服务化部署框架,可用于实际生产。
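+
+以simple_serving为例,一个客户端请求的极简示意如下(仅为示意,其中端口、路由与请求字段均为假设值,请以simple_serving示例代码为准):
+
+```bash
+# 仅为示意:将图片base64编码后以JSON发送POST请求,test.jpg为任意测试图片
+curl -X POST http://127.0.0.1:8000/fd/ppliteseg \
+    -H "Content-Type: application/json" \
+    -d "{\"data\": {\"image\": \"$(base64 -w 0 test.jpg)\"}, \"parameters\": {}}"
+```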
+## 模型版本说明
+
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
+
+目前FastDeploy支持如下模型的部署
+
+- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
+- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
+- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
+- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
+- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
+- [SegFormer系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)
+
+>>**注意** 如部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../ppmatting)
+
+## 准备PaddleSeg部署模型
+PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
+
+**注意**
+- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+
+## 预导出的推理模型
+
+为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
+- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
+- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax`
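+
+以with-argmax方式为例,导出命令的参考示意如下(仅为示意,配置文件与权重路径均为假设值,请以上述模型导出文档为准):
+
+```bash
+# 仅为示意:在PaddleSeg仓库根目录下执行,--config与--model_path请替换为实际路径
+python tools/export.py \
+    --config configs/pp_liteseg/pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k.yml \
+    --model_path output/best_model/model.pdparams \
+    --save_dir PP_LiteSeg_B_STDC2_with_argmax_infer \
+    --output_op argmax
+```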
+
+开发者可直接下载使用。
+
+| 模型 | 参数文件大小 | 输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
+|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
+| [Unet-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_with_argmax_infer.tgz) \| [Unet-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
+| [PP-LiteSeg-B(STDC2)-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz) \| [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 79.04% | 79.52% | 79.85% |
+|[PP-HumanSegV1-Lite-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
+|[PP-HumanSegV2-Lite-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
+| [PP-HumanSegV2-Mobile-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
+|[PP-HumanSegV1-Server-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
+| [Portrait-PP-HumanSegV2-Lite-with-argmax(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portrait-PP-HumanSegV2-Lite-without-argmax(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6MB | 256x144 | 96.63% | - | - |
+| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz)(暂时不支持ONNXRuntime的GPU推理) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
+| [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
+| [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - |
+
 ## 详细部署文档
 
 - [fastdeploy serving](fastdeploy_serving)
diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md b/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md
index a451e8730..c5b6dd41f 100644
--- a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md
+++ b/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README.md
@@ -1,6 +1,8 @@
 English | [简体中文](README_CN.md)
 # PaddleSegmentation Serving Deployment Demo
 
+Before serving deployment, it is necessary to confirm the software and hardware environment requirements of the serving image and the image pull command. Please refer to [FastDeploy serving deployment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README.md)
+
 ## Launch Serving
 
 ```bash
diff --git a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md b/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md
index ac8965d75..ae346cb5b 100644
--- a/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md
+++ b/examples/vision/segmentation/paddleseg/serving/fastdeploy_serving/README_CN.md
@@ -1,9 +1,7 @@
 [English](README.md) | 简体中文
 # PaddleSeg 服务化部署示例
-在服务化部署前,需确认
-
-- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README_CN.md)
+在服务化部署前,需确认服务化镜像的软硬件环境要求和镜像拉取命令,请参考[FastDeploy服务化部署](https://github.com/PaddlePaddle/FastDeploy/blob/develop/serving/README_CN.md)
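+
+镜像拉取命令的参考示意如下(仅为示意,镜像名称与tag为假设值,请以上述服务化部署文档为准):
+
+```bash
+# 仅为示意:实际镜像版本号请以FastDeploy服务化部署文档为准
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
+```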
 
 ## 启动服务
 
diff --git a/examples/vision/segmentation/paddleseg/serving/simple_serving/README.md b/examples/vision/segmentation/paddleseg/serving/simple_serving/README.md
index da41a3a00..686164ad7 100644
--- a/examples/vision/segmentation/paddleseg/serving/simple_serving/README.md
+++ b/examples/vision/segmentation/paddleseg/serving/simple_serving/README.md
@@ -5,7 +5,7 @@ English | [简体中文](README_CN.md)
 
 ## Environment
 
-- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 1. Prepare the environment and install the FastDeploy Python whl; refer to [download_prebuilt_libraries](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/build_and_install#install-prebuilt-fastdeploy)
 
 Server:
 ```bash
diff --git a/examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md b/examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md
index d12bb9f2e..db06103ed 100644
--- a/examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md
+++ b/examples/vision/segmentation/paddleseg/serving/simple_serving/README_CN.md
@@ -2,10 +2,9 @@
 
 # PaddleSeg Python轻量服务化部署示例
 
-在部署前,需确认以下两个步骤
+## 部署环境准备
 
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/download_prebuilt_libraries.md)
+在部署前,需确认软硬件环境满足要求,并下载预编译Python wheel包,参考文档[FastDeploy预编译库安装](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)
 
 服务端:
 ```bash
diff --git a/examples/vision/segmentation/paddleseg/sophgo/README.md b/examples/vision/segmentation/paddleseg/sophgo/README.md
index 1c08a5b7a..366656a75 100644
--- a/examples/vision/segmentation/paddleseg/sophgo/README.md
+++ b/examples/vision/segmentation/paddleseg/sophgo/README.md
@@ -1,8 +1,13 @@
 [English](README.md) | 简体中文
-# PaddleSeg C++部署示例
+# PaddleSeg在算能(Sophgo)硬件上通过FastDeploy部署模型
+PaddleSeg支持通过FastDeploy在算能TPU上部署相关Segmentation模型
 
-## 支持模型列表
+## 算能硬件支持的PaddleSeg模型
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+>> **注意**:支持PaddleSeg高于2.6版本的Segmentation模型
+
+目前算能TPU支持的模型如下:
 - [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
 
 ## 预导出的推理模型
diff --git a/examples/vision/segmentation/ppmatting/README.md b/examples/vision/segmentation/ppmatting/README.md
index a2cbdc346..b3dd9cc80 100644
--- a/examples/vision/segmentation/ppmatting/README.md
+++ b/examples/vision/segmentation/ppmatting/README.md
@@ -1,42 +1,22 @@
-English | [简体中文](README_CN.md)
-# PP-Matting Model Deployment
+# PaddleSeg高性能全场景模型部署方案—FastDeploy
 
-## Model Description
+## FastDeploy介绍
 
-- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+[FastDeploy](https://github.com/PaddlePaddle/FastDeploy)是一款全场景、易用灵活、极致高效的AI推理部署工具,使用FastDeploy可以简单高效地在10+款硬件上对PaddleSeg Matting模型进行快速部署
 
-## List of Supported Models
+## 支持如下硬件的部署
 
-Now FastDeploy supports the deployment of the following models
+| 硬件支持列表 | | | |
+|:----- | :-- | :-- | :-- |
+| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu) | [飞腾CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
+| [Intel GPU(独立显卡/集成显卡)](cpu-gpu) | [昆仑](cpu-gpu) | [昇腾](cpu-gpu) | |
 
-- [PP-Matting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-- [PP-HumanMatting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-- [ModNet models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+## 常见问题
+遇到问题可查看常见问题集合文档或搜索FastDeploy issues,链接如下:
 
-## Export Deployment Model
+[常见问题集合](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
-Before deployment, PP-Matting needs to be exported into the deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information. (Tips: You need to set the `--input_shape` parameter of the export script when exporting PP-Matting and PP-HumanMatting models)
+[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
-
-## Download Pre-trained Models
-
-For developers' testing, models exported by PP-Matting are provided below. Developers can download and use them directly.
-
-The accuracy metric is sourced from the model description in PP-Matting. (Accuracy data are not provided) Refer to the introduction in PP-Matting for more details.
-
-| Model | Parameter Size | Accuracy | Note |
-|:---------------------------------------------------------------- |:----- |:----- | :------ |
-| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
-| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
-| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - |
-| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - |
-| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
-| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |
-
-
-## Detailed Deployment Tutorials
-
-- [Python Deployment](python)
-- [C++ Deployment](cpp)
+若以上方式都无法解决问题,欢迎给FastDeploy提交新的[issue](https://github.com/PaddlePaddle/FastDeploy/issues)
diff --git a/examples/vision/segmentation/ppmatting/ascend/README.md b/examples/vision/segmentation/ppmatting/ascend/README.md
new file mode 120000
index 000000000..3ed44e130
--- /dev/null
+++ b/examples/vision/segmentation/ppmatting/ascend/README.md
@@ -0,0 +1 @@
+../cpu-gpu/README.md
\ No newline at end of file
diff --git a/examples/vision/segmentation/ppmatting/cpp/README.md b/examples/vision/segmentation/ppmatting/cpp/README.md
deleted file mode 100755
index f678fabd4..000000000
--- a/examples/vision/segmentation/ppmatting/cpp/README.md
+++ /dev/null
@@ -1,93 +0,0 @@
-English | [简体中文](README_CN.md)
-# PP-Matting C++ Deployment Example
-
-This directory provides examples that `infer.cc` fast finishes the deployment of PP-Matting on CPU/GPU and GPU accelerated by TensorRT.
-Before deployment, two steps require confirmation
-
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-
-Taking the PP-Matting inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
-
-```bash
-mkdir build
-cd build
-# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
-wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
-tar xvf fastdeploy-linux-x64-x.x.x.tgz
-cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
-make -j
-
-# Download PP-Matting model files and test images
-wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
-tar -xvf PP-Matting-512.tgz
-wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
-wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-
-
-# CPU inference
-./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
-# GPU inference
-./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
-# TensorRT inference on GPU
-./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
-# kunlunxin XPU inference
-./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
-```
-
-The visualized result after running is as follows