English | 简体中文

PaddleSeg Python Simple Serving Demo

PaddleSeg Python Simple Serving is a serving deployment example that FastDeploy builds on top of the Flask framework to quickly verify the feasibility of online model deployment. It performs AI inference in response to HTTP requests and is suitable for simple scenarios without concurrent inference tasks. For high-concurrency, high-throughput scenarios, please refer to fastdeploy_serving.
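
A minimal sketch of the server side may help make this concrete. It follows FastDeploy's simple-serving pattern (a SimpleServer instance with a registered vision model handler); the task name fd/ppliteseg is an assumption here, so treat the actual server.py in this directory as the reference.

# server.py sketch: load the PP_LiteSeg model once and expose it over HTTP
import os
import fastdeploy as fd
from fastdeploy.serving.server import SimpleServer

model_dir = "PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer"
option = fd.RuntimeOption()  # defaults to CPU; see section 2 for GPU/TensorRT switches

model = fd.vision.segmentation.PaddleSegModel(
    os.path.join(model_dir, "model.pdmodel"),
    os.path.join(model_dir, "model.pdiparams"),
    os.path.join(model_dir, "deploy.yaml"),
    runtime_option=option)

# Each registered task becomes an HTTP endpoint on the Flask app
app = SimpleServer()
app.register(
    task_name="fd/ppliteseg",  # assumed task name, check server.py
    model_handler=fd.serving.handler.VisionModelHandler,
    predictor=model)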

1. Environment

Before deployment, make sure the FastDeploy Python package is installed and that your environment meets its requirements; refer to the FastDeploy installation documentation.

2. Launch Serving

# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/semantic_segmentation/serving/simple_serving
# If you want to download the demo code from PaddleSeg repo, please run
# git clone https://github.com/PaddlePaddle/PaddleSeg.git
# # Note: If the current branch cannot find the following fastdeploy test code, switch to the develop branch
# # git checkout develop
# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/serving/simple_serving

# Download PP_LiteSeg model
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz

# Launch the server; edit the configurations in server.py to select hardware, backend, etc.
# (a sketch of those switches follows below), and use --host / --port to specify the IP and port
fastdeploy simple_serving --app server:app
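
As the comment above notes, hardware and backend are selected inside server.py through a fd.RuntimeOption. The calls below are standard FastDeploy RuntimeOption methods, shown as a hedged sketch since the exact layout of server.py may differ; the launch command itself accepts, for example, --host 0.0.0.0 --port 8000.

# Inside server.py: choose hardware/backend before constructing the model
import fastdeploy as fd

option = fd.RuntimeOption()                   # default: CPU inference
option.use_gpu(0)                             # or run on GPU 0
option.use_trt_backend()                      # switch the backend to TensorRT
option.set_trt_cache_file("pp_liteseg.trt")   # cache the built engine to disk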

3. Client Requests

# Download test image
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Send the request and get the inference result (adjust the IP and port if necessary)
python client.py
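
For reference, here is a sketch of what the client request looks like. The URL path (/fd/ppliteseg), the default port 8000, and the payload schema are assumptions based on FastDeploy's simple-serving convention; client.py in this directory is authoritative.

# client.py sketch: POST a base64-encoded image and print the JSON reply
import json
import cv2
import requests
from fastdeploy.serving.utils import cv2_to_base64  # base64 helper shipped with FastDeploy

url = "http://127.0.0.1:8000/fd/ppliteseg"  # assumed endpoint, see client.py
headers = {"Content-Type": "application/json"}

im = cv2.imread("cityscapes_demo.png")
payload = {"data": {"image": cv2_to_base64(im)}, "parameters": {}}

resp = requests.post(url, headers=headers, data=json.dumps(payload))
print(resp.json())  # segmentation result serialized as JSON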