[Serving] Add a simple Python serving (#962)

* init simple serving

* simple serving is working

* ppyoloe demo

* Update README_CN.md

* update readme

* complete vision result to json
Author: Wang Xinyu (committed via GitHub)
Date: 2022-12-26 21:09:08 +08:00
Parent: ec67f8ee6d
Commit: 22d91a73c6
18 changed files with 707 additions and 0 deletions


@@ -0,0 +1,43 @@
Simplified Chinese | [English](README_EN.md)
# PaddleDetection Lightweight Python Serving Deployment Example
Before deployment, confirm that the following two steps have been completed:
- 1. The hardware and software environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. The FastDeploy Python whl package is installed; see [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Server:
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/paddledetection/python/serving
# Download the PPYOLOE model files (if skipped, the code will download them from the hub automatically)
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
tar xvf ppyoloe_crn_l_300e_coco.tgz
# Install uvicorn
pip install uvicorn
# Start the service (GPU and TensorRT are optional; see `uvicorn --help` to configure the IP, port, etc.)
# CPU
MODEL_DIR=ppyoloe_crn_l_300e_coco DEVICE=cpu uvicorn server:app
# GPU
MODEL_DIR=ppyoloe_crn_l_300e_coco DEVICE=gpu uvicorn server:app
# GPU with TensorRT (note: the first TensorRT run serializes the model, which takes a while; please be patient)
MODEL_DIR=ppyoloe_crn_l_300e_coco DEVICE=gpu USE_TRT=true uvicorn server:app
```
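The commands above configure the server entirely through environment variables (`MODEL_DIR`, `DEVICE`, `USE_TRT`) and hand uvicorn the ASGI object `server:app`. The real `server.py`, which wraps a FastDeploy PPYOLOE model, is not shown in this snippet; the following stdlib-only sketch only illustrates the configuration-reading pattern and the shape of an object uvicorn can serve. The default values and the JSON fields in the response are assumptions, not the actual server's behavior.

```python
import json
import os

# Read the same environment variables the commands above set.
# The defaults here are assumptions for illustration only.
MODEL_DIR = os.environ.get("MODEL_DIR", "ppyoloe_crn_l_300e_coco")
DEVICE = os.environ.get("DEVICE", "cpu").lower()
USE_TRT = os.environ.get("USE_TRT", "false").lower() == "true"

async def app(scope, receive, send):
    """Minimal ASGI callable: answers every HTTP request with the
    parsed configuration as JSON (a stand-in for real inference)."""
    assert scope["type"] == "http"
    body = json.dumps({"model_dir": MODEL_DIR,
                       "device": DEVICE,
                       "use_trt": USE_TRT}).encode("utf-8")
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"application/json")]})
    await send({"type": "http.response.body", "body": body})
```

Saved as `server.py`, this sketch can be launched with exactly the `MODEL_DIR=... DEVICE=... uvicorn server:app` commands shown above.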
Client:
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/paddledetection/python/serving
# Download a test image
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# Send a request to the service and get the inference result (modify the IP and port in the script if necessary)
python client.py
```
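The contents of `client.py` are not shown in this snippet. As a rough, stdlib-only illustration of what a client for such a service might look like, the sketch below base64-encodes the test image into a JSON body and POSTs it to the serving endpoint. The endpoint URL, the `image` field name, and the response schema are assumptions for illustration, not the actual `client.py` protocol.

```python
import base64
import json
from urllib import request

def build_payload(image_path):
    """Base64-encode an image file into a JSON request body
    (the {"image": ...} schema is an assumption)."""
    with open(image_path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return json.dumps({"image": data}).encode("utf-8")

def infer(url, image_path):
    """POST the encoded image to the serving endpoint and return
    the parsed JSON result."""
    req = request.Request(url, data=build_payload(image_path),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With the server from the previous section running locally, this could be used as, e.g., `infer("http://127.0.0.1:8000", "000000014439.jpg")` (host and port depend on how uvicorn was configured).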