Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-10-08 10:00:29 +08:00
Update paddleseg doc

@@ -40,7 +40,7 @@ The visualized result after running is as follows
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

-PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md) for more information
+PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information
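As a quick orientation for this signature, here is a minimal, hedged loading sketch; the directory name below is a placeholder for any model exported as described in the link above:

```python
import fastdeploy as fd

# Placeholder paths: any PaddleSeg export directory laid out this way
model_file = "PP_LiteSeg_infer/model.pdmodel"
params_file = "PP_LiteSeg_infer/model.pdiparams"
config_file = "PP_LiteSeg_infer/deploy.yaml"

# Default runtime option (CPU); swap in use_gpu()/use_kunlunxin()/use_ascend() as needed
option = fd.RuntimeOption()
model = fd.vision.segmentation.PaddleSegModel(
    model_file, params_file, config_file, runtime_option=option)
```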
**Parameter**

@@ -1,35 +1,25 @@
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

-Before deployment, confirm the following two steps
+This directory provides `infer.py` to quickly complete the deployment of PP-LiteSeg on Huawei Ascend.

-- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. The FastDeploy Python whl package is installed; refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+Before deployment, you need to build the FastDeploy wheel package for KunlunXin XPU yourself; refer to [Building and Installing the KunlunXin XPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/kunlunxin.md) to build and install the Python wheel package

-[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../../matting)
+>>**Note**: models for **PP-Matting** and **PP-HumanMatting** should be downloaded from [Matting Model Deployment](../../../matting)

-This directory provides `infer.py` to quickly complete the deployment of Unet on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following script to finish the deployment

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python
+cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/cpp

-# Download the Unet model files and a test image
-wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
-tar -xvf Unet_cityscapes_without_argmax_infer.tgz
+# Download the PP-LiteSeg model files and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

-# CPU inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
-# GPU inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
-# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
-# KunlunXin XPU inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
-# Huawei Ascend inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device ascend
+python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
```

The visualized result after running is shown below
@@ -43,7 +33,7 @@ python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

-PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model. For details, refer to its documentation [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md)
+PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model. For details, refer to its documentation [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)

**Parameters**
@@ -67,7 +57,7 @@ PaddleSeg model loading and initialization, where model_file, params_file, and config_file

> **Returns**
>
-> > Returns a `fastdeploy.vision.SegmentationResult` struct; see the documentation [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of the struct
+> > Returns a `fastdeploy.vision.SegmentationResult` struct; see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md) for a description of the struct
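As a hedged illustration of consuming this return value (model construction and demo image follow the examples above; paths remain placeholders):

```python
import cv2
import fastdeploy as fd

# Placeholder paths, as in the loading sketch above
model = fd.vision.segmentation.PaddleSegModel(
    "PP_LiteSeg_infer/model.pdmodel", "PP_LiteSeg_infer/model.pdiparams",
    "PP_LiteSeg_infer/deploy.yaml")

im = cv2.imread("cityscapes_demo.png")
result = model.predict(im)  # a fastdeploy.vision.SegmentationResult

# Overlay the per-pixel labels on the input for a quick visual check
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```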
### Class Member Properties
#### Preprocessing Parameters
@@ -78,9 +68,12 @@ PaddleSeg model loading and initialization, where model_file, params_file, and config_file
#### Postprocessing Parameters
> > * **apply_softmax**(bool): If the `apply_softmax` parameter was not specified when the model was exported, this parameter can be set to `true` to apply softmax normalization to the probability results (score_map) corresponding to the predicted segmentation labels (label_map)
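A short, hedged sketch of toggling this class member property (assumes `model` and `im` as in the sketch above):

```python
# The export did not bake in softmax, so normalize score_map at runtime
model.apply_softmax = True
result = model.predict(im)  # score_map now holds softmax-normalized probabilities
```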
-## Other Documents
+## Quick Links

- [PaddleSeg Model Introduction](..)
- [PaddleSeg C++ Deployment](../cpp)
-- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
-- [How to Switch the Model Inference Backend Engine](../../../../../docs/cn/faq/how_to_change_backend.md)

+## FAQ
+- [How to convert the SegmentationResult prediction to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
+- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
+- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
@@ -11,42 +11,13 @@ def parse_arguments():
        "--model", required=True, help="Path of PaddleSeg model.")
    parser.add_argument(
        "--image", type=str, required=True, help="Path of test image file.")
-    parser.add_argument(
-        "--device",
-        type=str,
-        default='cpu',
-        help="Type of inference device, support 'kunlunxin', 'cpu' or 'gpu'.")
-    parser.add_argument(
-        "--use_trt",
-        type=ast.literal_eval,
-        default=False,
-        help="Whether to use tensorrt.")
    return parser.parse_args()


-def build_option(args):
-    option = fd.RuntimeOption()
-
-    if args.device.lower() == "gpu":
-        option.use_gpu()
-
-    if args.device.lower() == "kunlunxin":
-        option.use_kunlunxin()
-
-    if args.device.lower() == "ascend":
-        option.use_ascend()
-
-    if args.use_trt:
-        option.use_trt_backend()
-        option.set_trt_input_shape("x", [1, 3, 256, 256], [1, 3, 1024, 1024],
-                                   [1, 3, 2048, 2048])
-    return option
-
-
args = parse_arguments()
+runtime_option = fd.RuntimeOption()
+runtime_option.use_kunlunxin()

-# Configure the runtime and load the model
-runtime_option = build_option(args)
model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")
config_file = os.path.join(args.model, "deploy.yaml")
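The rest of this script falls outside the hunk; for readability, here is a hedged reconstruction of the typical continuation, using only the API documented above (not part of the diff):

```python
# Assumes cv2, os, and fastdeploy as fd are imported at the top of infer.py
model = fd.vision.segmentation.PaddleSegModel(
    model_file, params_file, config_file, runtime_option=runtime_option)

im = cv2.imread(args.image)
result = model.predict(im)
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```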
@@ -1,36 +0,0 @@
-English | [简体中文](README_CN.md)
-
-# PaddleSegmentation Python Simple Serving Demo
-
-## Environment
-
-- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-
-Server:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
-
-# Download PP_LiteSeg model
-wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
-tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
-
-# Launch server, change the configurations in server.py to select hardware, backend, etc.
-# and use --host, --port to specify IP and port
-fastdeploy simple_serving --app server:app
-```
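For example, to bind explicitly (the flag names come from the comment above; the values are placeholder assumptions matching the client below): `fastdeploy simple_serving --app server:app --host 127.0.0.1 --port 8000`.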
-Client:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
-
-# Download test image
-wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-
-# Send request and get inference result (Please adapt the IP and port if necessary)
-python client.py
-```
@@ -1,36 +0,0 @@
-简体中文 | [English](README.md)
-
-# PaddleSegmentation Python Simple Serving Deployment Example
-
-Before deployment, confirm the following two steps
-
-- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. The FastDeploy Python whl package is installed; refer to [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-
-Server:
-```bash
-# Download the deployment example code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
-
-# Download the PP_LiteSeg model files
-wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
-tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
-
-# Launch the server; modify the configuration items in server.py to select hardware, backend, etc.
-# Use --host and --port to specify the IP and port
-fastdeploy simple_serving --app server:app
-```
-
-Client:
-```bash
-# Download the deployment example code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
-
-# Download a test image
-wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-
-# Send a request and get the inference result (modify the IP and port in the script if necessary)
-python client.py
-```
@@ -1,23 +0,0 @@
-import requests
-import json
-import cv2
-import fastdeploy as fd
-from fastdeploy.serving.utils import cv2_to_base64
-
-if __name__ == '__main__':
-    url = "http://127.0.0.1:8000/fd/ppliteseg"
-    headers = {"Content-Type": "application/json"}
-
-    im = cv2.imread("cityscapes_demo.png")
-    data = {"data": {"image": cv2_to_base64(im)}, "parameters": {}}
-
-    resp = requests.post(url=url, headers=headers, data=json.dumps(data))
-    if resp.status_code == 200:
-        r_json = json.loads(resp.json()["result"])
-        result = fd.vision.utils.json_to_segmentation(r_json)
-        vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
-        cv2.imwrite("visualized_result.jpg", vis_im)
-        print("Visualized result save in ./visualized_result.jpg")
-    else:
-        print("Error code:", resp.status_code)
-        print(resp.text)
@@ -1,38 +0,0 @@
-import fastdeploy as fd
-from fastdeploy.serving.server import SimpleServer
-import os
-import logging
-
-logging.getLogger().setLevel(logging.INFO)
-
-# Configurations
-model_dir = 'PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer'
-device = 'cpu'
-use_trt = False
-
-# Prepare model
-model_file = os.path.join(model_dir, "model.pdmodel")
-params_file = os.path.join(model_dir, "model.pdiparams")
-config_file = os.path.join(model_dir, "deploy.yaml")
-
-# Setup runtime option to select hardware, backend, etc.
-option = fd.RuntimeOption()
-if device.lower() == 'gpu':
-    option.use_gpu()
-if use_trt:
-    option.use_trt_backend()
-    option.set_trt_cache_file('pp_lite_seg.trt')
-
-# Create model instance
-model_instance = fd.vision.segmentation.PaddleSegModel(
-    model_file=model_file,
-    params_file=params_file,
-    config_file=config_file,
-    runtime_option=option)
-
-# Create server, setup REST API
-app = SimpleServer()
-app.register(
-    task_name="fd/ppliteseg",
-    model_handler=fd.serving.handler.VisionModelHandler,
-    predictor=model_instance)