Commit: Update paddleseg doc
@@ -6,7 +6,7 @@ FastDeploy is an all-scenario, easy-to-use, flexible and highly efficient AI inference deployment tool
 ## Detailed Documentation
-- [NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU](cpu-gpu)
+- [NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, Intel GPU (discrete/integrated graphics)](cpu-gpu)
 - [Kunlun](kunlun)
 - [Ascend](ascend)
 - [Rockchip](rockchip)
@@ -6,7 +6,9 @@
 Since the NPU of the Amlogic A311D only supports deploying INT8 quantized models, the supported quantized models are as follows:
 - [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
 
-To facilitate developers' testing, some models exported by PaddleSeg are provided below; developers can download and use them directly.
+To facilitate developers' testing, some inference models exported by PaddleSeg are provided below; developers can download and use them directly.
 
+PaddleSeg model export: refer to its documentation, [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
+
 | Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
 |:--- |:--- |:--- |:--- |:--- |:--- |
@@ -173,5 +173,5 @@ model.init(modelFile, paramFile, configFile, option);
 ## More Reference Documentation
 If you want to learn more about the FastDeploy Java API and how to access the FastDeploy C++ API via JNI, refer to the following:
-- [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
+- [Use the FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)
-- [Use the FastDeploy C++ SDK on Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [Use the FastDeploy C++ SDK on Android](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_cpp_sdk_on_android.md)
New file: examples/vision/segmentation/paddleseg/ascend/README_CN.md (48 lines)
@@ -0,0 +1,48 @@
# Deploying PaddleSeg Models with FastDeploy

FastDeploy supports deploying PaddleSeg models on Huawei Ascend.

## Model Version

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)

FastDeploy currently supports the deployment of the following models:

- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
- [PP-HumanSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
- [FCN series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
- [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)

>>**Note**: To deploy **PP-Matting** or **PP-HumanMatting** on Huawei Ascend, download the corresponding models from [Matting Model Deployment](../../matting/); the deployment process is the same as in this document.

## Prepare PaddleSeg Deployment Models

For PaddleSeg model export, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)

**Note**

- An exported PaddleSeg model consists of three files: `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`. FastDeploy reads the preprocessing information required at inference time from the yaml file (a quick way to inspect this file is sketched below).
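For a quick look at what FastDeploy will read, the sketch below loads `deploy.yaml` and prints the recorded preprocessing operators. This is a minimal sketch assuming PyYAML is installed; the `Deploy`/`transforms` field names follow common PaddleSeg exports and may differ between PaddleSeg versions.

```python
# Minimal sketch: inspect the preprocessing information FastDeploy reads from
# deploy.yaml. Assumes PyYAML is installed; the Deploy/transforms field names
# follow common PaddleSeg exports and may differ between PaddleSeg versions.
import yaml

with open("PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer/deploy.yaml") as f:
    cfg = yaml.safe_load(f)

# Typically an ordered list of operators such as Normalize and Resize.
for op in cfg.get("Deploy", {}).get("transforms", []):
    print(op)
```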
## Download Pre-trained Models

To facilitate developers' testing, some inference models exported by PaddleSeg are provided below.

- without-argmax export: `--input_shape` is **not specified**, and `--output_op none` is **specified**
- with-argmax export: `--input_shape` is **not specified**, and `--output_op argmax` is **specified**

Developers can download and use them directly (a sketch for telling the two variants apart follows the table).

| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:--- |:--- |:--- |:--- |:--- |:--- |
| [PP-LiteSeg-B(STDC2)-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz) \| [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 79.04% | 79.52% | 79.85% |
| [PP-HumanSegV1-Lite-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
| [PP-HumanSegV2-Lite-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
| [PP-HumanSegV2-Mobile-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
| [PP-HumanSegV1-Server-with-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax (general human segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
| [Portrait-PP-HumanSegV2-Lite-with-argmax (portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portrait-PP-HumanSegV2-Lite-without-argmax (portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6MB | 256x144 | 96.63% | - | - |
| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) (GPU inference with ONNXRuntime is not supported yet) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
| [SegFormer_B0-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-with-argmax.tgz) \| [SegFormer_B0-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/SegFormer_B0-cityscapes-without-argmax.tgz) | 15MB | 1024x1024 | 76.73% | 77.16% | - |
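The two export variants behave differently at prediction time: a with-argmax model already emits the per-pixel label map, while a without-argmax model also carries per-pixel scores. A minimal sketch for checking which variant was loaded, assuming the FastDeploy Python package is installed and a model directory from the table above was downloaded; the `contain_score_map` field follows the SegmentationResult documentation.

```python
# Minimal sketch: check whether the loaded model keeps a per-pixel score map.
# Assumes the FastDeploy Python package is installed and a model directory from
# the table above was downloaded; contain_score_map follows the
# SegmentationResult documentation.
import cv2
import fastdeploy as fd

model_dir = "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer"
model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml")

result = model.predict(cv2.imread("cityscapes_demo.png"))
# True for without-argmax exports; with-argmax exports only keep the labels.
print("has score_map:", result.contain_score_map)
```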
## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
@@ -1,5 +1,5 @@
 PROJECT(infer_demo C CXX)
-CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.10)
 
 # Specify the path to the downloaded and extracted fastdeploy library
 option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
New file: examples/vision/segmentation/paddleseg/ascend/cpp/README.md (96 lines)
@@ -0,0 +1,96 @@
English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example

This directory provides an example in which `infer.cc` quickly completes the deployment of Unet on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

[Attention] For the deployment of **PP-Matting**, **PP-HumanMatting**, and **ModNet**, refer to [Matting Model Deployment](../../../matting)

Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the Unet model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# CPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
# GPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
# TensorRT inference on GPU
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
# Kunlunxin XPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
```

The visualized result after running is as follows:
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

The above commands work on Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## PaddleSeg C++ Interface

### PaddleSeg Class

```c++
fastdeploy::vision::segmentation::PaddleSegModel(
      const string& model_file,
      const string& params_file,
      const string& config_file,
      const RuntimeOption& runtime_option = RuntimeOption(),
      const ModelFormat& model_format = ModelFormat::PADDLE)
```

PaddleSegModel model loading and initialization, where model_file is the exported Paddle model.

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default

#### Predict Function

> ```c++
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
> ```
>
> Model prediction interface: takes an input image and directly outputs the segmentation result.
>
> **Parameter**
>
> > * **im**: Input image; note that it must be in HWC, BGR format
> > * **result**: The segmentation result, including the predicted segmentation labels and the probability corresponding to each label. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of SegmentationResult

### Class Member Variable
#### Pre-processing Parameter
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg models, setting this parameter to `true` indicates that the input image is in portrait orientation (height greater than width)

#### Post-processing Parameter
> > * **apply_softmax**(bool): If `apply_softmax` was not specified when the model was exported, set this parameter to `true` to apply softmax normalization to the probability result (score_map) of the predicted segmentation label (label_map)

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
New file: examples/vision/segmentation/paddleseg/ascend/cpp/README_CN.md (88 lines)
@@ -0,0 +1,88 @@
[English](README.md) | 简体中文
# PaddleSeg C++ Deployment Example

This directory provides an example in which `infer.cc` quickly completes the deployment of PP-LiteSeg on Huawei Ascend.

Before deployment, you need to build the prediction library for the Huawei Ascend NPU yourself; refer to [Building the Huawei Ascend NPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/huawei_ascend.md)

>>**Note**: For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../matting)

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/cpp

mkdir build
cd build
# Build infer_demo with the compiled FastDeploy library
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend
make -j

# Download the PP-LiteSeg model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Huawei Ascend inference
./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png
```

The visualized result after running is shown below:
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSeg C++ Interface

### PaddleSeg Class

```c++
fastdeploy::vision::segmentation::PaddleSegModel(
      const string& model_file,
      const string& params_file,
      const string& config_file,
      const RuntimeOption& runtime_option = RuntimeOption(),
      const ModelFormat& model_format = ModelFormat::PADDLE)
```

PaddleSegModel model loading and initialization, where model_file is the exported Paddle model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default

#### Predict Function

> ```c++
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
> ```
>
> Model prediction interface: takes an input image and directly outputs the segmentation result.
>
> **Parameters**
>
> > * **im**: Input image; note that it must be in HWC, BGR format
> > * **result**: Segmentation result, including the predicted segmentation labels and the probability corresponding to each label; for a description of the struct, see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg series models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e., its height is greater than its width

#### Post-processing Parameters
> > * **apply_softmax**(bool): If `apply_softmax` was not specified when the model was exported, set this parameter to `true` to apply softmax normalization to the probability result (score_map) corresponding to the predicted segmentation label (label_map)

## Quick Links

- [PaddleSeg Model Description](../../)
- [Python Deployment](../python)

## FAQ

- [How to convert the SegmentationResult prediction into numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
@@ -13,25 +13,28 @@
 // limitations under the License.
 
 #include "fastdeploy/vision.h"
 
 #ifdef WIN32
 const char sep = '\\';
 #else
 const char sep = '/';
 #endif
 
-void InitAndInfer(const std::string& model_dir, const std::string& image_file,
-                  const fastdeploy::RuntimeOption& option) {
+void AscendInfer(const std::string& model_dir, const std::string& image_file) {
   auto model_file = model_dir + sep + "model.pdmodel";
   auto params_file = model_dir + sep + "model.pdiparams";
   auto config_file = model_dir + sep + "deploy.yaml";
+  auto option = fastdeploy::RuntimeOption();
+  option.UseAscend();
   auto model = fastdeploy::vision::segmentation::PaddleSegModel(
-      model_file, params_file, config_file,option);
+      model_file, params_file, config_file, option);
 
-  assert(model.Initialized());
+  if (!model.Initialized()) {
+    std::cerr << "Failed to initialize." << std::endl;
+    return;
+  }
 
   auto im = cv::imread(image_file);
-  auto im_bak = im.clone();
 
   fastdeploy::vision::SegmentationResult res;
   if (!model.Predict(im, &res)) {
@@ -40,37 +43,20 @@ void InitAndInfer(const std::string& model_dir, const std::string& image_file,
   }
 
   std::cout << res.Str() << std::endl;
+  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
+  cv::imwrite("vis_result.jpg", vis_im);
+  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
 }
 
 int main(int argc, char* argv[]) {
-  if (argc < 4) {
-    std::cout << "Usage: infer_demo path/to/quant_model "
-                 "path/to/image "
-                 "run_option, "
-                 "e.g ./infer_demo ./ResNet50_vd_quant ./test.jpeg 0"
-              << std::endl;
-    std::cout << "The data type of run_option is int, 0: run on cpu with ORT "
-                 "backend; 1: run "
-                 "on gpu with TensorRT backend. "
-              << std::endl;
+  if (argc < 3) {
+    std::cout
+        << "Usage: infer_demo path/to/model_dir path/to/image run_option, "
+           "e.g ./infer_model ./ppseg_model_dir ./test.jpeg"
+        << std::endl;
     return -1;
   }
 
-  fastdeploy::RuntimeOption option;
-  int flag = std::atoi(argv[3]);
-
-  if (flag == 0) {
-    option.UseCpu();
-    option.UseOrtBackend();
-  } else if (flag == 1) {
-    option.UseCpu();
-    option.UsePaddleInferBackend();
-  }
-
-  std::string model_dir = argv[1];
-  std::string test_image = argv[2];
-  InitAndInfer(model_dir, test_image, option);
+  AscendInfer(argv[1], argv[2]);
   return 0;
 }
New file: examples/vision/segmentation/paddleseg/ascend/python/README.md (82 lines)
@@ -0,0 +1,82 @@
English | [简体中文](README_CN.md)
# PaddleSeg Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

[Attention] For the deployment of **PP-Matting**, **PP-HumanMatting**, and **ModNet**, refer to [Matting Model Deployment](../../../matting)

This directory provides an example in which `infer.py` quickly completes the deployment of Unet on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script:

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python

# Download the Unet model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# CPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
# GPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
# TensorRT inference on GPU (Attention: when running TensorRT inference for the first time, model serialization takes some time; please be patient)
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
# Kunlunxin XPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
```

The visualized result after running is as follows:
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSegModel Python Interface

```python
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default

### predict function

> ```python
> PaddleSegModel.predict(input_image)
> ```
>
> Model prediction interface: takes an input image and directly outputs the segmentation result.
>
> **Parameter**
>
> > * **input_image**(np.ndarray): Input data; note that it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.SegmentationResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of the structure.
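Putting the constructor and `predict` together, here is a minimal end-to-end sketch. It assumes the FastDeploy Python package is installed and the Unet model directory from the commands above has been downloaded; the `shape` field follows the linked SegmentationResult description.

```python
# Minimal sketch of the interface documented above. Assumes the FastDeploy
# Python package is installed and the Unet model directory from the commands
# above has been downloaded; shape follows the linked SegmentationResult
# description.
import cv2
import fastdeploy as fd

model_dir = "Unet_cityscapes_without_argmax_infer"
model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml")  # runtime_option=None: default configuration

im = cv2.imread("cityscapes_demo.png")  # HWC, BGR as required
result = model.predict(im)              # fastdeploy.vision.SegmentationResult
print(result.shape)                     # [height, width] of the label map
```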
### Class Member Variable
#### Pre-processing Parameter
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg models, setting this parameter to `true` indicates that the input image is in portrait orientation (height greater than width)

#### Post-processing Parameter
> > * **apply_softmax**(bool): If `apply_softmax` was not specified when the model was exported, set this parameter to `true` to apply softmax normalization to the probability result (score_map) of the predicted segmentation label (label_map)

## Other Documents

- [PaddleSeg Model Description](..)
- [PaddleSeg C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
New file: examples/vision/segmentation/paddleseg/ascend/python/README_CN.md (79 lines)
@@ -0,0 +1,79 @@
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

This directory provides an example in which `infer.py` quickly completes the deployment of PP-LiteSeg on Huawei Ascend.

Before deployment, you need to build the FastDeploy Python wheel package for the Huawei Ascend NPU yourself and install it; refer to [Building the Huawei Ascend NPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/huawei_ascend.md)

>>**Note**: For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../matting)

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/python

# Download the PP-LiteSeg model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Huawei Ascend inference
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
```

The visualized result after running is shown below:
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
</div>

## PaddleSegModel Python Interface

```python
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```

PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format. Paddle format by default
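Since `runtime_option` is what routes inference to the Ascend backend, the sketch below shows the wiring; it mirrors the `infer.py` shipped in this directory, with the model paths taken from the download commands above.

```python
# Minimal sketch: route inference to the Ascend backend through RuntimeOption
# (mirrors infer.py in this directory).
import fastdeploy as fd

model_dir = "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer"
runtime_option = fd.RuntimeOption()
runtime_option.use_ascend()  # omit this call to stay on the default backend

model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml",
    runtime_option=runtime_option)
```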
### predict function

> ```python
> PaddleSegModel.predict(input_image)
> ```
>
> Model prediction interface: takes an input image and directly outputs the segmentation result.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note that it must be in HWC, BGR format

> **Return**
>
> > Returns a `fastdeploy.vision.SegmentationResult` struct; see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md) for details

### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results.

> > * **is_vertical_screen**(bool): For PP-HumanSeg series models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e., its height is greater than its width

#### Post-processing Parameters
> > * **apply_softmax**(bool): If `apply_softmax` was not specified when the model was exported, set this parameter to `true` to apply softmax normalization to the probability result (score_map) corresponding to the predicted segmentation label (label_map)
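Both member variables are set directly on the model object after construction. A minimal sketch, assuming the PP-HumanSegV2-Lite model from the table in the model overview has been downloaded; the test image name is a hypothetical placeholder.

```python
# Minimal sketch: tune the pre/post-processing member variables listed above.
# The image file name is a hypothetical placeholder.
import cv2
import fastdeploy as fd

model_dir = "PP_HumanSegV2_Lite_192x192_infer"
model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml")

model.is_vertical_screen = True  # portrait input: height greater than width
model.apply_softmax = True       # softmax-normalize score_map after prediction
result = model.predict(cv2.imread("portrait_demo.png"))  # hypothetical image
```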
## Quick Links

- [PaddleSeg Model Description](..)
- [PaddleSeg C++ Deployment](../cpp)

## FAQ

- [How to convert the SegmentationResult prediction into numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md) (see the sketch below)
- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
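As a companion to the first FAQ entry, converting the flat `label_map` into a numpy array only requires the `shape` recorded in the result. A minimal sketch; field names follow the SegmentationResult document linked above.

```python
# Minimal sketch for the numpy-conversion FAQ above: reshape the flat label_map
# using the shape recorded in the result. Field names follow the linked
# SegmentationResult document.
import numpy as np

def label_map_to_numpy(result):
    """Turn SegmentationResult.label_map into an (H, W) array of class ids."""
    return np.array(result.label_map).reshape(result.shape[0], result.shape[1])
```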
New file: examples/vision/segmentation/paddleseg/ascend/python/infer.py (34 lines)
@@ -0,0 +1,34 @@
import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of PaddleSeg model.")
    parser.add_argument(
        "--image", type=str, required=True, help="Path of test image file.")
    return parser.parse_args()


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = fd.RuntimeOption()
runtime_option.use_ascend()

model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")
config_file = os.path.join(args.model, "deploy.yaml")
model = fd.vision.segmentation.PaddleSegModel(
    model_file, params_file, config_file, runtime_option=runtime_option)

# Predict the segmentation result of the image
im = cv2.imread(args.image)
result = model.predict(im)
print(result)

# Visualize the result
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_img.png", vis_im)
@@ -1,5 +1,7 @@
 # Deploying PaddleSeg Models with FastDeploy
+
+FastDeploy supports deploying PaddleSeg models on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated graphics) hardware.
 
 ## Model Version
 
 - [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
@@ -13,7 +15,7 @@
 - [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
 - [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)
 
-[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../matting/)
+>>**Note**: If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../matting/)
 
 ## Prepare PaddleSeg Deployment Models
 PaddleSeg model export: refer to its documentation, [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
@@ -82,7 +82,7 @@ PaddleSegModel model loading and initialization, where model_file is the exported Paddle model
 > **Parameters**
 >
 > > * **im**: Input image; note that it must be in HWC, BGR format
-> > * **result**: Segmentation result, including the predicted segmentation labels and the probability corresponding to each label; for a description of SegmentationResult, see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
+> > * **result**: Segmentation result, including the predicted segmentation labels and the probability corresponding to each label; for a description of the SegmentationResult struct, see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
 
 ### Class Member Variables
 #### Pre-processing Parameters
@@ -40,7 +40,7 @@ The visualized result after running is as follows
 fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
 ```
 
-PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md) for more information
+PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information
 
 **Parameter**
 
@@ -39,7 +39,7 @@ python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --ima
 fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
 ```
 
-PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md)
+PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
 
 **Parameters**
 
@@ -13,7 +13,7 @@
 - [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
 - [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)
 
-[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../matting/)
+>>**Note**: To deploy **PP-Matting** or **PP-HumanMatting** on Huawei Ascend, download the corresponding models from [Matting Model Deployment](../../matting/); the deployment process is the same as in this document
 
 ## Prepare PaddleSeg Deployment Models
 PaddleSeg model export: refer to its documentation, [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
@@ -1,42 +1,30 @@
 [English](README.md) | 简体中文
 # PaddleSeg C++ Deployment Example
 
-This directory provides an example in which `infer.cc` quickly completes the deployment of Unet on CPU/GPU, as well as on GPU with TensorRT acceleration.
+This directory provides an example in which `infer.cc` quickly completes the deployment of PP-LiteSeg on Huawei Ascend.
 
-Before deployment, confirm the following two steps:
-
-- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment; see [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+Before deployment, you need to build the prediction library for the Kunlunxin XPU yourself; refer to [Building and Installing the Kunlunxin XPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/kunlunxin.md)
 
-[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../../matting)
-
-Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.
+>>**Note**: For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../matting)
 
 ```bash
+# Download the deployment example code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/cpp
+
 mkdir build
 cd build
-# Download the FastDeploy precompiled library; users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
-wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
-tar xvf fastdeploy-linux-x64-x.x.x.tgz
-cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+# Build infer_demo with the compiled FastDeploy library
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend
 make -j
 
-# Download the Unet model files and a test image
-wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
-tar -xvf Unet_cityscapes_without_argmax_infer.tgz
+# Download the PP-LiteSeg model files and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
 wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
 
-# CPU inference
-./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
-# GPU inference
-./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
-# TensorRT inference on GPU
-./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
-# Kunlunxin XPU inference
-./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
 # Huawei Ascend inference
-./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 4
+./infer_demo PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer cityscapes_demo.png
 ```
 
 The visualized result after running is shown below:
@@ -44,12 +32,6 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
 <img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg", width=512px, height=256px />
 </div>
 
-The above commands only work on Linux or macOS; for using the SDK on Windows, refer to:
-- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-
-If you deploy on Huawei Ascend NPU, refer to the following to initialize the deployment environment first:
-- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
-
 ## PaddleSeg C++ Interface
 
 ### PaddleSeg Class
@@ -84,7 +66,7 @@ PaddleSegModel model loading and initialization, where model_file is the exported Paddle model
 > **Parameters**
 >
 > > * **im**: Input image; note that it must be in HWC, BGR format
-> > * **result**: Segmentation result, including the predicted segmentation labels and the probability corresponding to each label; for a description of SegmentationResult, see [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+> > * **result**: Segmentation result, including the predicted segmentation labels and the probability corresponding to each label; for a description of the struct, see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
 
 ### Class Member Variables
 #### Pre-processing Parameters
@@ -95,7 +77,12 @@ PaddleSegModel model loading and initialization, where model_file is the exported Paddle model
 #### Post-processing Parameters
 > > * **apply_softmax**(bool): If `apply_softmax` was not specified when the model was exported, set this parameter to `true` to apply softmax normalization to the probability result (score_map) corresponding to the predicted segmentation label (label_map)
 
-- [Model Description](../../)
+## Quick Links
+- [PaddleSeg Model Description](../../)
 - [Python Deployment](../python)
-- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+
+## FAQ
+- [How to convert the SegmentationResult prediction into numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
+- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
+- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
@@ -20,34 +20,6 @@ const char sep = '\\';
 const char sep = '/';
 #endif
 
-void CpuInfer(const std::string& model_dir, const std::string& image_file) {
-  auto model_file = model_dir + sep + "model.pdmodel";
-  auto params_file = model_dir + sep + "model.pdiparams";
-  auto config_file = model_dir + sep + "deploy.yaml";
-  auto option = fastdeploy::RuntimeOption();
-  option.UseCpu();
-  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
-      model_file, params_file, config_file, option);
-
-  if (!model.Initialized()) {
-    std::cerr << "Failed to initialize." << std::endl;
-    return;
-  }
-
-  auto im = cv::imread(image_file);
-
-  fastdeploy::vision::SegmentationResult res;
-  if (!model.Predict(im, &res)) {
-    std::cerr << "Failed to predict." << std::endl;
-    return;
-  }
-
-  std::cout << res.Str() << std::endl;
-  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
-  cv::imwrite("vis_result.jpg", vis_im);
-  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
-}
-
 void KunlunXinInfer(const std::string& model_dir,
                     const std::string& image_file) {
   auto model_file = model_dir + sep + "model.pdmodel";
@@ -77,116 +49,14 @@ void KunlunXinInfer(const std::string& model_dir,
   std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
 }
 
-void GpuInfer(const std::string& model_dir, const std::string& image_file) {
-  auto model_file = model_dir + sep + "model.pdmodel";
-  auto params_file = model_dir + sep + "model.pdiparams";
-  auto config_file = model_dir + sep + "deploy.yaml";
-
-  auto option = fastdeploy::RuntimeOption();
-  option.UseGpu();
-  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
-      model_file, params_file, config_file, option);
-
-  if (!model.Initialized()) {
-    std::cerr << "Failed to initialize." << std::endl;
-    return;
-  }
-
-  auto im = cv::imread(image_file);
-
-  fastdeploy::vision::SegmentationResult res;
-  if (!model.Predict(im, &res)) {
-    std::cerr << "Failed to predict." << std::endl;
-    return;
-  }
-
-  std::cout << res.Str() << std::endl;
-  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
-  cv::imwrite("vis_result.jpg", vis_im);
-  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
-}
-
-void TrtInfer(const std::string& model_dir, const std::string& image_file) {
-  auto model_file = model_dir + sep + "model.pdmodel";
-  auto params_file = model_dir + sep + "model.pdiparams";
-  auto config_file = model_dir + sep + "deploy.yaml";
-
-  auto option = fastdeploy::RuntimeOption();
-  option.UseGpu();
-  option.UseTrtBackend();
-  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
-      model_file, params_file, config_file, option);
-
-  if (!model.Initialized()) {
-    std::cerr << "Failed to initialize." << std::endl;
-    return;
-  }
-
-  auto im = cv::imread(image_file);
-
-  fastdeploy::vision::SegmentationResult res;
-  if (!model.Predict(im, &res)) {
-    std::cerr << "Failed to predict." << std::endl;
-    return;
-  }
-
-  std::cout << res.Str() << std::endl;
-  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
-  cv::imwrite("vis_result.jpg", vis_im);
-  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
-}
-
-void AscendInfer(const std::string& model_dir, const std::string& image_file) {
-  auto model_file = model_dir + sep + "model.pdmodel";
-  auto params_file = model_dir + sep + "model.pdiparams";
-  auto config_file = model_dir + sep + "deploy.yaml";
-  auto option = fastdeploy::RuntimeOption();
-  option.UseAscend();
-  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
-      model_file, params_file, config_file, option);
-
-  if (!model.Initialized()) {
-    std::cerr << "Failed to initialize." << std::endl;
-    return;
-  }
-
-  auto im = cv::imread(image_file);
-
-  fastdeploy::vision::SegmentationResult res;
-  if (!model.Predict(im, &res)) {
-    std::cerr << "Failed to predict." << std::endl;
-    return;
-  }
-
-  std::cout << res.Str() << std::endl;
-  auto vis_im = fastdeploy::vision::VisSegmentation(im, res, 0.5);
-  cv::imwrite("vis_result.jpg", vis_im);
-  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
-}
-
 int main(int argc, char* argv[]) {
-  if (argc < 4) {
+  if (argc < 3) {
     std::cout
         << "Usage: infer_demo path/to/model_dir path/to/image run_option, "
-           "e.g ./infer_model ./ppseg_model_dir ./test.jpeg 0"
+           "e.g ./infer_model ./ppseg_model_dir ./test.jpeg"
        << std::endl;
-    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
-                 "with gpu; 2: run with gpu and use tensorrt backend; 3: run "
-                 "with kunlunxin."
-              << std::endl;
     return -1;
   }
 
-  if (std::atoi(argv[3]) == 0) {
-    CpuInfer(argv[1], argv[2]);
-  } else if (std::atoi(argv[3]) == 1) {
-    GpuInfer(argv[1], argv[2]);
-  } else if (std::atoi(argv[3]) == 2) {
-    TrtInfer(argv[1], argv[2]);
-  } else if (std::atoi(argv[3]) == 3) {
-    KunlunXinInfer(argv[1], argv[2]);
-  } else if (std::atoi(argv[3]) == 4) {
-    AscendInfer(argv[1], argv[2]);
-  }
+  KunlunXinInfer(argv[1], argv[2]);
   return 0;
 }
@@ -40,7 +40,7 @@ The visualized result after running is as follows
 fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
 ```
 
-PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md) for more information
+PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information
 
 **Parameter**
 
@@ -1,35 +1,25 @@
 [English](README.md) | 简体中文
 # PaddleSeg Python Deployment Example
 
-Before deployment, confirm the following two steps:
+This directory provides an example in which `infer.py` quickly completes the deployment of PP-LiteSeg on Huawei Ascend.
 
-- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install the FastDeploy Python whl package; see [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+Before deployment, you need to build the FastDeploy Python wheel package for the Kunlunxin XPU yourself and install it; refer to [Building and Installing the Kunlunxin XPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/kunlunxin.md)
 
-[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../../matting)
+>>**Note**: For **PP-Matting** and **PP-HumanMatting** models, download them from [Matting Model Deployment](../../../matting)
 
-This directory provides an example in which `infer.py` quickly completes the deployment of Unet on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script:
 
 ```bash
 # Download the deployment example code
 git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python
+cd FastDeploy/examples/vision/segmentation/paddleseg/ascend/cpp
 
-# Download the Unet model files and a test image
-wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
-tar -xvf Unet_cityscapes_without_argmax_infer.tgz
+# Download the PP-LiteSeg model files and a test image
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
 wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
 
-# CPU inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
-# GPU inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
-# TensorRT inference on GPU (Attention: when running TensorRT inference for the first time, model serialization takes some time; please be patient)
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
-# Kunlunxin XPU inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
 # Huawei Ascend inference
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device ascend
+python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png
 ```
 
 The visualized result after running is shown below:
@@ -43,7 +33,7 @@
 fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
 ```
 
-PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md)
+PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
 
 **Parameters**
 
@@ -67,7 +57,7 @@ PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files
 > **Return**
 >
-> > Returns a `fastdeploy.vision.SegmentationResult` struct; for a description, see [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+> > Returns a `fastdeploy.vision.SegmentationResult` struct; see [Introduction to the SegmentationResult Struct](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md) for details
 
 ### Class Member Variables
 #### Pre-processing Parameters
@@ -78,9 +68,12 @@ PaddleSeg model loading and initialization, where model_file, params_file, and config_file are the Paddle inference files
 #### Post-processing Parameters
 > > * **apply_softmax**(bool): If `apply_softmax` was not specified when the model was exported, set this parameter to `true` to apply softmax normalization to the probability result (score_map) corresponding to the predicted segmentation label (label_map)
 
-## Other Documents
+## Quick Links
 
 - [PaddleSeg Model Description](..)
 - [PaddleSeg C++ Deployment](../cpp)
-- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+
+## FAQ
+
+- [How to convert the SegmentationResult prediction into numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
+- [How to switch the model inference backend engine](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
+- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
@@ -11,42 +11,13 @@ def parse_arguments():
         "--model", required=True, help="Path of PaddleSeg model.")
     parser.add_argument(
         "--image", type=str, required=True, help="Path of test image file.")
-    parser.add_argument(
-        "--device",
-        type=str,
-        default='cpu',
-        help="Type of inference device, support 'kunlunxin', 'cpu' or 'gpu'.")
-    parser.add_argument(
-        "--use_trt",
-        type=ast.literal_eval,
-        default=False,
-        help="Wether to use tensorrt.")
     return parser.parse_args()
 
 
-def build_option(args):
-    option = fd.RuntimeOption()
-
-    if args.device.lower() == "gpu":
-        option.use_gpu()
-
-    if args.device.lower() == "kunlunxin":
-        option.use_kunlunxin()
-
-    if args.device.lower() == "ascend":
-        option.use_ascend()
-
-    if args.use_trt:
-        option.use_trt_backend()
-        option.set_trt_input_shape("x", [1, 3, 256, 256], [1, 3, 1024, 1024],
-                                   [1, 3, 2048, 2048])
-    return option
-
-
-args = parse_arguments()
+runtime_option = fd.RuntimeOption()
+runtime_option.use_kunlunxin()
 
 # Configure the runtime and load the model
-runtime_option = build_option(args)
 model_file = os.path.join(args.model, "model.pdmodel")
 params_file = os.path.join(args.model, "model.pdiparams")
 config_file = os.path.join(args.model, "deploy.yaml")
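For reference, the device-dispatch helper removed above is still a handy pattern when one script must run on several backends; the sketch below reconstructs it from the RuntimeOption calls shown in the diff.

```python
# Minimal sketch of the removed device-dispatch pattern; every RuntimeOption
# call below appears in the original script shown in the diff above.
import fastdeploy as fd

def build_option(device: str, use_trt: bool = False) -> fd.RuntimeOption:
    option = fd.RuntimeOption()
    if device == "gpu":
        option.use_gpu()
    elif device == "kunlunxin":
        option.use_kunlunxin()
    elif device == "ascend":
        option.use_ascend()
    if use_trt:
        option.use_trt_backend()
        option.set_trt_input_shape("x", [1, 3, 256, 256], [1, 3, 1024, 1024],
                                   [1, 3, 2048, 2048])
    return option
```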
Deleted file: examples/vision/segmentation/paddleseg/python/serving/README.md (36 lines)
@@ -1,36 +0,0 @@
-English | [简体中文](README_CN.md)
-
-# PaddleSegmentation Python Simple Serving Demo
-
-## Environment
-
-- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-
-Server:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
-
-# Download PP_LiteSeg model
-wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
-tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
-
-# Launch server, change the configurations in server.py to select hardware, backend, etc.
-# and use --host, --port to specify IP and port
-fastdeploy simple_serving --app server:app
-```
-
-Client:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving
-
-# Download test image
-wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-
-# Send request and get inference result (Please adapt the IP and port if necessary)
-python client.py
-```
@@ -1,36 +0,0 @@
简体中文 | [English](README.md)

# PaddleSegmentation Python Simple Serving Deployment Example

Before deployment, confirm the following two steps

- 1. The hardware and software environment meets the requirements, refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package, refer to [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Server:
```bash
# Download the demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving

# Download the PP_LiteSeg model files
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz

# Launch the server; the configurations in server.py can be modified to select hardware, backend, etc.,
# and --host, --port can be used to specify the IP and port
fastdeploy simple_serving --app server:app
```

Client:
```bash
# Download the demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python/serving

# Download the test image
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Send the request and get the inference result (modify the IP and port in the script if necessary)
python client.py
```
@@ -1,23 +0,0 @@
import requests
import json
import cv2
import fastdeploy as fd
from fastdeploy.serving.utils import cv2_to_base64

if __name__ == '__main__':
    url = "http://127.0.0.1:8000/fd/ppliteseg"
    headers = {"Content-Type": "application/json"}

    im = cv2.imread("cityscapes_demo.png")
    data = {"data": {"image": cv2_to_base64(im)}, "parameters": {}}

    resp = requests.post(url=url, headers=headers, data=json.dumps(data))
    if resp.status_code == 200:
        # The server returns a JSON string; decode it back into a
        # SegmentationResult and overlay it on the input image.
        r_json = json.loads(resp.json()["result"])
        result = fd.vision.utils.json_to_segmentation(r_json)
        vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
        cv2.imwrite("visualized_result.jpg", vis_im)
        print("Visualized result saved in ./visualized_result.jpg")
    else:
        print("Error code:", resp.status_code)
        print(resp.text)
@@ -1,38 +0,0 @@
import fastdeploy as fd
from fastdeploy.serving.server import SimpleServer
import os
import logging

logging.getLogger().setLevel(logging.INFO)

# Configurations
model_dir = 'PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer'
device = 'cpu'
use_trt = False

# Prepare model
model_file = os.path.join(model_dir, "model.pdmodel")
params_file = os.path.join(model_dir, "model.pdiparams")
config_file = os.path.join(model_dir, "deploy.yaml")

# Setup runtime option to select hardware, backend, etc.
option = fd.RuntimeOption()
if device.lower() == 'gpu':
    option.use_gpu()
if use_trt:
    option.use_trt_backend()
    option.set_trt_cache_file('pp_lite_seg.trt')

# Create model instance
model_instance = fd.vision.segmentation.PaddleSegModel(
    model_file=model_file,
    params_file=params_file,
    config_file=config_file,
    runtime_option=option)

# Create server, setup REST API
app = SimpleServer()
app.register(
    task_name="fd/ppliteseg",
    model_handler=fd.serving.handler.VisionModelHandler,
    predictor=model_instance)
@@ -1,32 +0,0 @@
English | [简体中文](README_CN.md)
# PaddleSeg Quantized Model C++ Deployment Example

`infer.cc` in this directory can help you quickly complete the inference acceleration of PaddleSeg quantized model deployment on CPU.

## Deployment Preparations
### FastDeploy Environment Preparations
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of the FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

### Quantized Model Preparations
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize a model yourself, and deploy the generated quantized model. (Note: the quantized segmentation model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

## Take the Quantized PP_LiteSeg_T_STDC1_cityscapes Model as an Example for Deployment
Run the following commands in this directory to compile and deploy the quantized model. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0).
```bash
mkdir build
cd build
# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the PP_LiteSeg_T_STDC1_cityscapes quantized model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Use Paddle Inference to run the quantized model on CPU.
./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
```
@@ -1,32 +0,0 @@
[English](README.md) | 简体中文
# PaddleSeg Quantized Model C++ Deployment Example

`infer.cc` in this directory helps you quickly complete inference acceleration for deploying PaddleSeg quantized models on CPU.

## Deployment Preparations
### FastDeploy Environment Preparations
- 1. The hardware and software environment meets the requirements, refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package, refer to [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

### Quantized Model Preparations
- 1. You can directly deploy the quantized models provided by FastDeploy.
- 2. You can quantize a model yourself with FastDeploy's [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) and deploy the resulting quantized model. (Note: the quantized segmentation model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
Run the following commands in this directory to compile and deploy the quantized model. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0).
```bash
mkdir build
cd build
# Download the pre-compiled FastDeploy library; choose the appropriate version from the `pre-compiled FastDeploy libraries` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the PP_LiteSeg_T_STDC1_cityscapes quantized model and test image provided by FastDeploy
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Run the quantized model on CPU with Paddle Inference
./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
```
@@ -1,29 +0,0 @@
English | [简体中文](README_CN.md)
# PaddleSeg Quantized Model Python Deployment Example

`infer.py` in this directory can help you quickly complete the inference acceleration of PaddleSeg quantized model deployment on CPU/GPU.

## Deployment Preparations
### FastDeploy Environment Preparations
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. For the installation of the FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

### Quantized Model Preparations
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize a model yourself, and deploy the generated quantized model. (Note: the quantized segmentation model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)


## Take the Quantized PP_LiteSeg_T_STDC1_cityscapes Model as an Example for Deployment
```bash
# Download sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/quantize/python

# Download the PP_LiteSeg_T_STDC1_cityscapes quantized model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Use Paddle Inference to run the quantized model on CPU.
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ --image cityscapes_demo.png --device cpu --backend paddle
```
@@ -1,29 +0,0 @@
[English](README.md) | 简体中文
# PaddleSeg Quantized Model Python Deployment Example

`infer.py` in this directory helps you quickly complete inference acceleration for deploying PaddleSeg quantized models on CPU/GPU.

## Deployment Preparations
### FastDeploy Environment Preparations
- 1. The hardware and software environment meets the requirements, refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package, refer to [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

### Quantized Model Preparations
- 1. You can directly deploy the quantized models provided by FastDeploy.
- 2. You can quantize a model yourself with FastDeploy's [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) and deploy the resulting quantized model. (Note: the quantized segmentation model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)


## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
```bash
# Download the sample deployment code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/quantize/python

# Download the PP_LiteSeg_T_STDC1_cityscapes quantized model and test image provided by FastDeploy
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Run the quantized model on CPU with Paddle Inference
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ --image cityscapes_demo.png --device cpu --backend paddle
```
@@ -1,76 +0,0 @@
import fastdeploy as fd
import cv2
import os


def parse_arguments():
    import argparse
    import ast
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of PaddleSeg model.")
    parser.add_argument(
        "--image", required=True, help="Path of test image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu' or 'gpu'.")
    parser.add_argument(
        "--backend",
        type=str,
        default="default",
        help="Type of inference backend, support ort/trt/paddle/openvino, default 'openvino' for cpu, 'tensorrt' for gpu"
    )
    parser.add_argument(
        "--device_id",
        type=int,
        default=0,
        help="Define which GPU card used to run model.")
    parser.add_argument(
        "--cpu_thread_num",
        type=int,
        default=9,
        help="Number of threads while inference on CPU.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()
    if args.device.lower() == "gpu":
        option.use_gpu(0)

    option.set_cpu_thread_num(args.cpu_thread_num)

    if args.backend.lower() == "trt":
        assert args.device.lower(
        ) == "gpu", "TensorRT backend requires inference on GPU device."
        option.use_trt_backend()
        option.set_trt_cache_file(os.path.join(args.model, "model.trt"))
        option.set_trt_input_shape("x", [1, 3, 256, 256], [1, 3, 1024, 1024],
                                   [1, 3, 2048, 2048])
    elif args.backend.lower() == "ort":
        option.use_ort_backend()
    elif args.backend.lower() == "paddle":
        option.use_paddle_infer_backend()
    elif args.backend.lower() == "openvino":
        assert args.device.lower(
        ) == "cpu", "OpenVINO backend requires inference on CPU device."
        option.use_openvino_backend()
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")
config_file = os.path.join(args.model, "deploy.yaml")
model = fd.vision.segmentation.PaddleSegModel(
    model_file, params_file, config_file, runtime_option=runtime_option)

# Predict the segmentation result for the image
im = cv2.imread(args.image)
result = model.predict(im)
print(result)
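A short, optional add-on to the script above: overlaying the predicted mask on the input image, reusing the same `fd.vision.vis_segmentation` helper that the serving client earlier in this document uses; the output filename is arbitrary:

```python
# Optional visualization (not in the original script): blend the predicted
# mask with the input image and save it to disk.
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
print("Visualized result saved in ./visualized_result.jpg")
```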
@@ -6,9 +6,28 @@
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)

Currently, FastDeploy supports deploying the following models when running PaddleSeg inference with RKNPU2:
- [U-Net series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/unet/README.md)
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)
- [PP-HumanSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/contrib/PP-HumanSeg/README.md)
- [FCN series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/fcn/README.md)
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)

## Preparing PaddleSeg Deployment Models
For exporting PaddleSeg models, refer to the [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) documentation

**Note**
- The exported PaddleSeg model contains three files: `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`; FastDeploy reads the preprocessing information required at inference time from the yaml file

## Downloading Pre-trained Models

For developers' convenience, some models exported by PaddleSeg are provided below
- without-argmax export: do **not** specify `--input_shape`, and **specify** `--output_op none`
- with-argmax export: do **not** specify `--input_shape`, and **specify** `--output_op argmax`

Developers can download and use them directly.

| Model | Parameter file size | Input shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
| [PP-HumanSegV1-Lite (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
@@ -21,14 +40,16 @@
## Preparing and Converting PaddleSeg Deployment Models
Before deploying a model on the RKNPU, the Paddle model needs to be converted into an RKNN model. The specific steps are as follows:
* Export the trained PaddleSeg model as an inference model, refer to [PaddleSeg Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md), or use one of FastDeploy's pre-exported models in the table above
* Convert the Paddle model into an ONNX model, refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
* Convert the ONNX model into an RKNN model, refer to the [conversion document](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/export.md)

These steps can be followed through the concrete example referenced below; a brief, hedged sketch of the ONNX-to-RKNN step also follows this paragraph.
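A minimal sketch of the ONNX-to-RKNN conversion using the rknn-toolkit2 Python API; the file names, normalization values, and `rk3568` target platform are placeholder assumptions, and the conversion document linked above remains the authoritative reference:

```python
from rknn.api import RKNN

rknn = RKNN()
# Placeholder normalization values and target platform; match them to your model.
rknn.config(
    mean_values=[[127.5, 127.5, 127.5]],
    std_values=[[127.5, 127.5, 127.5]],
    target_platform='rk3568')
rknn.load_onnx(model='model.onnx')   # ONNX file produced by the Paddle2ONNX step
rknn.build(do_quantization=False)    # INT8 quantization would need a calibration dataset
rknn.export_rknn('model.rknn')
rknn.release()
```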
## Model conversion example

* [PPHumanSeg](./pp_humanseg.md)

## Detailed Deployment Documents
- [RKNN overall deployment tutorial](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)
- [C++ deployment](cpp)
- [Python deployment](python)
@@ -8,7 +8,7 @@
1. The hardware and software environment meets the requirements
2. Download the pre-compiled deployment library or compile the FastDeploy repository from source, according to your development environment

For the steps above, refer to [RKNPU2 Deployment Library Compilation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)

## Generating the Basic Directory Layout
@@ -37,7 +37,7 @@ mkdir thirdpartys
### Compile and Copy the SDK into the thirdpartys Folder

Refer to [RKNPU2 Deployment Library Compilation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md) to build the SDK. After compilation, a fastdeploy-x-x-x directory will be generated under the build directory; move it into the thirdpartys directory.

### Copy the Model and Configuration Files into the model Folder
During the Paddle dynamic graph model -> Paddle static graph model -> ONNX model conversion, an ONNX file and the corresponding yaml configuration file are generated; place the configuration file into the model folder. (A hedged sketch of the Paddle-to-ONNX step follows.)
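A sketch of the Paddle-to-ONNX step mentioned above, assuming the paddle2onnx CLI is installed (`pip install paddle2onnx`); the model directory name is a placeholder:

```python
import subprocess

# Convert an exported PaddleSeg inference model to ONNX via the paddle2onnx CLI.
subprocess.run([
    "paddle2onnx",
    "--model_dir", "PP_HumanSegV2_Lite_infer",  # placeholder model folder
    "--model_filename", "model.pdmodel",
    "--params_filename", "model.pdiparams",
    "--save_file", "model.onnx",
], check=True)
```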
@@ -2,7 +2,7 @@
# PPHumanSeg Model Deployment

## Converting the Model
The following takes Portait-PP-HumanSegV2_Lite (a portrait segmentation model) as an example to show how to convert a PaddleSeg model into an RKNN model.

```bash
# Download the Paddle2ONNX repository
@@ -3,9 +3,9 @@
Before deployment, confirm the following steps

- 1. The hardware and software environment meets the requirements, refer to [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/rknpu2/rknpu2.md)

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting Model Deployment](../../../../../matting/)

`infer.py` in this directory provides a quick example of deploying PPHumanseg on the RKNPU. Run the following script to complete it
@@ -32,5 +32,5 @@ RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作
- [PaddleSeg model introduction](..)
- [PaddleSeg C++ deployment](../cpp)
- [Model prediction result description](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result_CN.md)
- [Document on converting PaddleSeg models to RKNN models](../README.md)
@@ -1,12 +1,20 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on Rockchip RV1126 with FastDeploy
The Rockchip RV1126 is a codec chip aimed at machine vision for artificial intelligence. Currently, FastDeploy supports deploying PaddleSeg models on the RV1126 based on Paddle Lite.

## PaddleSeg Models Supported on Rockchip RV1126
Since the NPU of the Rockchip RV1126 only supports deploying INT8 quantized models, the supported quantized models are as follows:
- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

For developers' convenience, some models exported by PaddleSeg are provided below and can be downloaded and used directly.

| Model | Parameter file size | Input shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
>> **Note**: For FastDeploy's model quantization method and the one-click automatic compression tool, refer to [Model Quantization](../../../quantize/README.md)

## Detailed Deployment Documents

Currently, only C++ deployment is supported on the Rockchip RV1126.

- [C++ deployment](cpp)
@@ -5,22 +5,22 @@
## Deployment Preparations
### FastDeploy Cross-Compilation Environment Preparations
1. The hardware and software environment meets the requirements, and the cross-compilation environment is prepared; refer to [FastDeploy Cross-Compilation Environment Preparations](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)

### Model Preparations
1. You can directly deploy the quantized models provided by FastDeploy.
2. You can quantize a model yourself with the one-click automatic compression tool provided by FastDeploy and deploy the resulting quantized model. (Note: the quantized segmentation model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
3. The model requires heterogeneous computing; for the heterogeneous computing file, refer to [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the models, you can first test the heterogeneous file we provide to verify whether the accuracy meets the requirements.

For more information about quantization, see [Model Quantization](../../quantize/README.md)

## Deploying the Quantized PP-LiteSeg Segmentation Model on the RV1126
Follow the steps below to deploy the quantized PP-LiteSeg model on the RV1126:
1. Cross-compile the FastDeploy library; for details, refer to [Cross-Compiling FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/a311d.md#基于-paddle-lite-的-fastdeploy-交叉编译库编译)

2. Copy the compiled library into the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp
```

3. Download the model and example image required for deployment into the current path:
@@ -45,7 +45,7 @@ make install
5. Deploy the PP-LiteSeg segmentation model to the Rockchip RV1126 via the adb tool with the following commands:
```bash
# Enter the install directory
cd FastDeploy/examples/vision/segmentation/paddleseg/rockchip/rv1126/cpp/build/install/
# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID of the device>
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
@@ -54,4 +54,4 @@ bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">

Note in particular that the model deployed on the RV1126 must be a quantized model; for model quantization, refer to [Model Quantization](../../../quantize/README.md)
@@ -3,7 +3,15 @@
## Supported Model List

- [PP-LiteSeg series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

For developers' convenience, some inference models exported by PaddleSeg are provided below and can be downloaded and used directly.

For exporting PaddleSeg models, refer to the [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) documentation

| Model | Parameter file size | Input shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |

## Preparing and Converting PP-LiteSeg Deployment Models
@@ -93,5 +101,6 @@ model_deploy.py \
```
This finally produces pp_liteseg_1684x_f32.bmodel, a bmodel that can run on the BM1684x. If the model needs further acceleration, the ONNX model can be converted into an INT8 bmodel; for the detailed steps, see the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).

## Quick Links
- [C++ deployment](./cpp)
- [Python deployment](./python)
@@ -8,7 +8,7 @@
1. The hardware and software environment meets the requirements
2. Compile the FastDeploy repository from source according to your development environment

For the steps above, refer to [SOPHGO Deployment Library Compilation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md)

## Generating the Basic Directory Layout
@@ -26,7 +26,7 @@
### Compile and Copy the SDK into the thirdpartys Folder

Refer to [SOPHGO Deployment Library Compilation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md) to build the SDK. After compilation, a fastdeploy-0.0.3 directory will be generated under the build directory.

### Copy the Model and Configuration Files into the model Folder
Convert the Paddle model into a SOPHGO bmodel; for the conversion steps, refer to the [document](../README.md)
@@ -3,7 +3,7 @@
Before deployment, confirm the following steps

- 1. The hardware and software environment meets the requirements, refer to [FastDeploy Environment Requirements](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md)

`infer.py` in this directory provides a quick example of deploying pp_liteseg on the SOPHGO TPU. Run the following script to complete it
@@ -8,7 +8,7 @@
## Deploying the PP-Humanseg v1 Model in the Web Front End

For deploying and using the PP-Humanseg v1 model web demo, refer to the [document](https://github.com/PaddlePaddle/FastDeploy/blob/develop/examples/application/js/README_CN.md)

## PP-Humanseg v1 JS Interface