[Doc] Add English version of documents in examples/ (#1042)

* First commit

* Add a missed translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one writing

* Standardize one writing

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Checkout the link in README in vision_results/ to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles

* modify article to document

* update some english version of documents in examples

* Add english version of documents in examples/visions

* Sync to current branch

* Add english version of documents in examples
This commit is contained in:
charl-u
2023-01-06 09:35:12 +08:00
committed by GitHub
parent bb96a6fe8f
commit 1135d33dd7
74 changed files with 2312 additions and 575 deletions


@@ -1,28 +1,29 @@
English | [简体中文](README_CN.md)
# PP-LiteSeg Quantized Model C++ Deployment Example
The `infer.cc` program in this directory demonstrates how to quickly deploy the quantized PP-LiteSeg model on the A311D for accelerated inference.
## Deployment Preparations
### FastDeploy Cross-compilation Environment Preparations
1. For the software and hardware requirements and how to set up the cross-compilation environment, please refer to [FastDeploy Cross-compilation Environment Preparations](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
### Model Preparations
1. You can directly deploy the quantized models provided by FastDeploy.
2. You can quantize a model yourself with FastDeploy's one-click automated model compression tool and deploy the resulting quantized model. (Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file, so copy it from the FP32 model folder into the quantized model folder; a sketch is shown at the end of this section.)
3. The model requires heterogeneous computing. Please refer to [Heterogeneous Computing](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the model, you can first test the heterogeneous file we provide to verify whether the accuracy meets your requirements.
For more information about quantization, please refer to [Model Quantization](../../quantize/README.md).
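A minimal sketch of the YAML copy described in step 2 above, assuming hypothetical directory names `PP_LiteSeg_FP32` and `PP_LiteSeg_INT8` for the FP32 and self-quantized model folders:
```bash
# Hypothetical folder names; substitute your actual FP32 and quantized model directories.
cp PP_LiteSeg_FP32/deploy.yaml PP_LiteSeg_INT8/
# The quantized folder now carries the preprocessing config required at inference time.
```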
## Deploying the Quantized PP-LiteSeg Segmentation Model on A311D
Please follow these steps to deploy the quantized PP-LiteSeg model on the A311D:
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
2. Copy the compiled library to the current directory. You can use the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
```
3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
mkdir models && mkdir images
@@ -33,26 +34,26 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```
4. Compile the deployment example with the following commands:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After a successful build, an install folder is generated, containing the demo binary and the libraries required for deployment.
```
5. Deploy the PP-LiteSeg segmentation model to the Amlogic A311D via the adb tool, using the following commands:
```bash
# Go to the install directory.
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp/build/install/
# The following line means: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>.
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
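The `$DEVICE_ID` argument above is the serial number that adb assigns to the connected board. One way to look it up with the standard adb CLI (the single-device assumption here is ours, not the script's):
```bash
# List connected devices; the first column of the second line is the serial to use.
adb devices
DEVICE_ID=$(adb devices | awk 'NR==2 {print $1}')  # assumes exactly one connected device
```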
After successful deployment, the result is as follows:
<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">
Please note that the model deployed on the A311D must be a quantized model. For model quantization, please refer to [Model Quantization](../../../../../../docs/en/quantize.md).


@@ -0,0 +1,59 @@
[English](README.md) | 简体中文
# PP-LiteSeg Quantized Model C++ Deployment Example
The `infer.cc` program in this directory demonstrates how to quickly deploy the quantized PP-LiteSeg model on the A311D for accelerated inference.
## Deployment Preparations
### FastDeploy Cross-compilation Environment Preparations
1. For the software and hardware requirements and how to set up the cross-compilation environment, please refer to [FastDeploy Cross-compilation Environment Preparations](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建).
### Model Preparations
1. You can directly deploy the quantized models provided by FastDeploy.
2. You can quantize a model yourself with FastDeploy's one-click automated model compression tool and deploy the resulting quantized model. (Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file, so copy it from the FP32 model folder into the quantized model folder.)
3. The model requires heterogeneous computing. Please refer to [Heterogeneous Computing](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the model, you can first test the heterogeneous file we provide to verify whether the accuracy meets your requirements.
For more information about quantization, please refer to [Model Quantization](../../quantize/README.md).
## Deploying the Quantized PP-LiteSeg Segmentation Model on A311D
Please follow these steps to deploy the quantized PP-LiteSeg model on the A311D:
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddle-lite-的-fastdeploy-交叉编译库编译).
2. Copy the compiled library to the current directory. You can use the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
```
3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
tar -xvf ppliteseg.tar.gz
cp -r ppliteseg models
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```
4. Compile the deployment example with the following commands:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After a successful build, an install folder is generated, containing the demo binary and the libraries required for deployment.
```
5. Deploy the PP-LiteSeg segmentation model to the Amlogic A311D via the adb tool, using the following commands:
```bash
# Go to the install directory.
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp/build/install/
# The following line means: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>.
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
After successful deployment, the result is as follows:
<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">
Please note that the model deployed on the A311D must be a quantized model. For model quantization, please refer to [Model Quantization](../../../../../../docs/cn/quantize.md).


@@ -1,97 +1,98 @@
English | [简体中文](README_CN.md)
# PaddleSeg Android Demo for Object Detection
This demo performs real-time portrait segmentation on Android. It is easy to use and open: for example, you can run your own trained model in the demo.
## Environment Preparations
1. Install Android Studio in your local environment. For detailed instructions, see the [Android Studio official website](https://developer.android.com/studio).
2. Prepare an Android phone and enable USB debugging mode: `Phone Settings -> Find Developer Options -> Enable Developer Options and USB Debugging`.
## Deployment Steps
1. The object detection PaddleSeg Demo is located in the `fastdeploy/examples/vision/segmentation/paddleseg/android` directory.
2. Open the paddleseg/android project in Android Studio.
3. Connect your phone to your computer, enable USB debugging and file transfer mode, and connect your device in Android Studio (the phone needs to allow software installation over USB).
<p align="center">
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
</p>
> **Notes:**
>> If you encounter an NDK configuration error while importing, building, or running the project, open `File > Project Structure > SDK Location` and change `Android SDK location` to the SDK path configured on your machine.
4. Click the Run button to automatically build the APP and install it on your phone. (This process automatically downloads the pre-compiled FastDeploy Android library and the model files; an internet connection is required.)
After success, the interface looks as follows. Figure 1: the APP installed on the phone; Figure 2: the APP at startup, which automatically recognizes the person in the image and draws a mask; Figure 3: the APP settings, where tapping the settings icon in the upper right corner lets you try different options.
| APP icon | APP effect | APP setting options |
| --- | --- | --- |
| <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203267867-7c51b695-65e6-402e-9826-5d6d5864da87.gif"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg"> |
## PaddleSegModel Java API Introduction
- Model initialization API: there are two ways to initialize the model. One is to initialize directly through the constructor; the other is to call the init function at an appropriate point in the program. The PaddleSegModel initialization parameters are described as follows:
  - modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
  - paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
  - configFile: String, the preprocessing configuration file for model inference, e.g. deploy.yml.
  - option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// Constructor w/o label file
public PaddleSegModel(); // An empty constructor; call init later to initialize.
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes a direct prediction API and an API with visualization. Direct prediction means only running inference, without saving images or rendering the result to a Bitmap. Prediction with visualization runs inference, visualizes the result, saves the visualized image to a specified path, and renders the result to a Bitmap (currently ARGB8888 format is supported), which can later be displayed in the camera view.
```java
// Direct prediction: do not save images or render the result to a Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap)
// Predict and visualize: visualize the result, save the visualized image to the specified path, and render the result to a Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Only render, without saving the image.
// Modify the result in place instead of returning it. For performance, use the following interfaces together with the CxxBuffer of SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result)
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```
- Set vertical or horizontal screen mode: for PP-HumanSeg series models, you must call this method to set the vertical screen mode to true.
```java
public void setVerticalScreenFlag(boolean flag);
```
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; true means success, false means failure.
```java
public boolean release(); // Release native resources.
public boolean initialized(); // Check whether initialization succeeded.
```
- RuntimeOption settings:
```java
public void enableLiteFp16(); // Enable FP16 precision inference.
public void disableLiteFP16(); // Disable FP16 precision inference.
public void setCpuThreadNum(int threadNum); // Set the number of CPU threads.
public void setLitePowerMode(LitePowerMode mode); // Set the power mode.
public void setLitePowerMode(String modeStr); // Set the power mode by string.
```
- SegmentationResult description:
```java
public class SegmentationResult {
public int[] mLabelMap; // The predicted label map; each pixel corresponds to a label, HxW.
public float[] mScoreMap; // The predicted score map; each pixel corresponds to a score, HxW.
public long[] mShape; // The actual shape (H, W) of the label map.
public boolean mContainScoreMap = false; // Whether a score map is included.
// You can choose to use the CxxBuffer directly instead of copying it to the Java layer through JNI.
// This can improve performance to some extent.
public void setCxxBufferFlag(boolean flag); // Enable or disable CxxBuffer mode.
public boolean releaseCxxBuffer(); // Release the CxxBuffer manually!!!
public boolean initialized(); // Check whether the result is valid.
}
```
For the C++/Python SegmentationResult description, see [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md).
- Model calling example 1: use the constructor and the default RuntimeOption:
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
@@ -100,77 +101,77 @@ import android.opengl.GLES20;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
// Initialize the model.
PaddleSegModel model = new PaddleSegModel(
"portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel",
"portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams",
"portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml");
// If the camera is in portrait mode, the PP-HumanSeg series requires this flag to be set.
model.setVerticalScreenFlag(true);
// Read a Bitmap: the following is pseudocode for reading a Bitmap.
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
// Model inference.
SegmentationResult result = new SegmentationResult();
result.setCxxBufferFlag(true);
model.predict(ARGB8888ImageBitmap, result);
// Release the CxxBuffer.
result.releaseCxxBuffer();
// Or predict directly and return a SegmentationResult.
SegmentationResult result = model.predict(ARGB8888ImageBitmap);
// Release model resources.
model.release();
```
- Model calling example 2: call init manually at an appropriate point in the program and customize the RuntimeOption:
```java
// Imports as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
// Create an empty model.
PaddleSegModel model = new PaddleSegModel();
// Model paths.
String modelFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel";
String paramFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams";
String configFile = "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml";
// Specify the RuntimeOption.
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
// If the camera is in portrait mode, the PP-HumanSeg series requires this flag to be set.
model.setVerticalScreenFlag(true);
// Initialize with the init function.
model.init(modelFile, paramFile, configFile, option);
// Bitmap reading, model prediction, and resource release are the same as above ...
```
For more detailed usage, refer to [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java).
## Replace the FastDeploy SDK and Model
Replacing the FastDeploy prediction library and the model is straightforward. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` is the version of the prediction library you are using. The model is located at `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
- To replace the FastDeploy Android SDK: download or compile the latest FastDeploy Android SDK, unzip it, and put it in the `app/libs` directory (see the sketch after this section). For detailed configuration, refer to:
  - [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
- Steps to replace the PaddleSeg model:
  - Put your PaddleSeg model in the `app/src/main/assets/models` directory;
  - Modify the default model path in `app/src/main/res/values/strings.xml`, for example:
```xml
<!-- Change this path to your model, e.g. models/human_pp_humansegv1_lite_192x192_inference_model -->
<string name="SEGMENTATION_MODEL_DIR_DEFAULT">models/human_pp_humansegv1_lite_192x192_inference_model</string>
```
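A minimal sketch of the SDK replacement step mentioned above, assuming the downloaded archive unpacks to a single `.aar` (the `xxx` version placeholder is kept from the text above, and the download path is hypothetical):
```bash
# Replace the old AAR with the newly downloaded one; `xxx` stands for the version number.
rm app/libs/fastdeploy-android-sdk-xxx.aar
cp /path/to/downloads/fastdeploy-android-sdk-xxx.aar app/libs/
# Re-sync the Gradle project in Android Studio so the new AAR is picked up.
```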
## More Reference Documents
If you are interested in more FastDeploy Java API documentation and in how to access the FastDeploy C++ API via JNI, refer to the following:
- [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
- [Use the FastDeploy C++ SDK on Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)


@@ -0,0 +1,177 @@
[English](README.md) | 简体中文
# PaddleSeg Android Demo for Object Detection
This demo performs real-time portrait segmentation on Android. It is easy to use and open: for example, you can run your own trained model in the demo.
## Environment Preparations
1. Install Android Studio in your local environment. For detailed instructions, see the [Android Studio official website](https://developer.android.com/studio).
2. Prepare an Android phone and enable USB debugging mode: `Phone Settings -> Find Developer Options -> Enable Developer Options and USB Debugging`.
## Deployment Steps
1. The object detection PaddleSeg Demo is located in the `fastdeploy/examples/vision/segmentation/paddleseg/android` directory.
2. Open the paddleseg/android project in Android Studio.
3. Connect your phone to your computer, enable USB debugging and file transfer mode, and connect your device in Android Studio (the phone needs to allow software installation over USB).
<p align="center">
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
</p>
> **Notes:**
>> If you encounter an NDK configuration error while importing, building, or running the project, open `File > Project Structure > SDK Location` and change `Android SDK location` to the SDK path configured on your machine.
4. Click the Run button to automatically build the APP and install it on your phone. (This process automatically downloads the pre-compiled FastDeploy Android library and the model files; an internet connection is required.)
After success, the interface looks as follows. Figure 1: the APP installed on the phone; Figure 2: the APP at startup, which automatically recognizes the person in the image and draws a mask; Figure 3: the APP settings, where tapping the settings icon in the upper right corner lets you try different options.
| APP icon | APP effect | APP setting options |
| --- | --- | --- |
| <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203267867-7c51b695-65e6-402e-9826-5d6d5864da87.gif"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg"> |
## PaddleSegModel Java API Introduction
- Model initialization API: there are two ways to initialize the model. One is to initialize directly through the constructor; the other is to call the init function at an appropriate point in the program. The PaddleSegModel initialization parameters are described as follows:
  - modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
  - paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
  - configFile: String, the preprocessing configuration file for model inference, e.g. deploy.yml.
  - option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// Constructor w/o label file
public PaddleSegModel(); // An empty constructor; call init later to initialize.
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes a direct prediction API and an API with visualization. Direct prediction means only running inference, without saving images or rendering the result to a Bitmap. Prediction with visualization runs inference, visualizes the result, saves the visualized image to a specified path, and renders the result to a Bitmap (currently ARGB8888 format is supported), which can later be displayed in the camera view.
```java
// Direct prediction: do not save images or render the result to a Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap)
// Predict and visualize: visualize the result, save the visualized image to the specified path, and render the result to a Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Only render, without saving the image.
// Modify the result in place instead of returning it. For performance, use the following interfaces together with the CxxBuffer of SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result)
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```
- Set vertical or horizontal screen mode: for PP-HumanSeg series models, you must call this method to set the vertical screen mode to true.
```java
public void setVerticalScreenFlag(boolean flag);
```
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; true means success, false means failure.
```java
public boolean release(); // Release native resources.
public boolean initialized(); // Check whether initialization succeeded.
```
- RuntimeOption settings:
```java
public void enableLiteFp16(); // Enable FP16 precision inference.
public void disableLiteFP16(); // Disable FP16 precision inference.
public void setCpuThreadNum(int threadNum); // Set the number of CPU threads.
public void setLitePowerMode(LitePowerMode mode); // Set the power mode.
public void setLitePowerMode(String modeStr); // Set the power mode by string.
```
- SegmentationResult description:
```java
public class SegmentationResult {
public int[] mLabelMap; // The predicted label map; each pixel corresponds to a label, HxW.
public float[] mScoreMap; // The predicted score map; each pixel corresponds to a score, HxW.
public long[] mShape; // The actual shape (H, W) of the label map.
public boolean mContainScoreMap = false; // Whether a score map is included.
// You can choose to use the CxxBuffer directly instead of copying it to the Java layer through JNI.
// This can improve performance to some extent.
public void setCxxBufferFlag(boolean flag); // Enable or disable CxxBuffer mode.
public boolean releaseCxxBuffer(); // Release the CxxBuffer manually!!!
public boolean initialized(); // Check whether the result is valid.
}
```
For the C++/Python SegmentationResult description, see [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md).
- Model calling example 1: use the constructor and the default RuntimeOption:
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.opengl.GLES20;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
// Initialize the model.
PaddleSegModel model = new PaddleSegModel(
"portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel",
"portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams",
"portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml");
// If the camera is in portrait mode, the PP-HumanSeg series requires this flag to be set.
model.setVerticalScreenFlag(true);
// Read a Bitmap: the following is pseudocode for reading a Bitmap.
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
// Model inference.
SegmentationResult result = new SegmentationResult();
result.setCxxBufferFlag(true);
model.predict(ARGB8888ImageBitmap, result);
// Release the CxxBuffer.
result.releaseCxxBuffer();
// Or predict directly and return a SegmentationResult.
SegmentationResult result = model.predict(ARGB8888ImageBitmap);
// Release model resources.
model.release();
```
- Model calling example 2: call init manually at an appropriate point in the program and customize the RuntimeOption:
```java
// Imports as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
// Create an empty model.
PaddleSegModel model = new PaddleSegModel();
// Model paths.
String modelFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel";
String paramFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams";
String configFile = "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml";
// Specify the RuntimeOption.
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
// If the camera is in portrait mode, the PP-HumanSeg series requires this flag to be set.
model.setVerticalScreenFlag(true);
// Initialize with the init function.
model.init(modelFile, paramFile, configFile, option);
// Bitmap reading, model prediction, and resource release are the same as above ...
```
For more detailed usage, refer to [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java).
## Replace the FastDeploy SDK and Model
Replacing the FastDeploy prediction library and the model is straightforward. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` is the version of the prediction library you are using. The model is located at `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
- To replace the FastDeploy Android SDK: download or compile the latest FastDeploy Android SDK, unzip it, and put it in the `app/libs` directory. For detailed configuration, refer to:
  - [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
- Steps to replace the PaddleSeg model:
  - Put your PaddleSeg model in the `app/src/main/assets/models` directory;
  - Modify the default model path in `app/src/main/res/values/strings.xml`, for example:
```xml
<!-- Change this path to your model, e.g. models/human_pp_humansegv1_lite_192x192_inference_model -->
<string name="SEGMENTATION_MODEL_DIR_DEFAULT">models/human_pp_humansegv1_lite_192x192_inference_model</string>
```
## More Reference Documents
If you are interested in more FastDeploy Java API documentation and in how to access the FastDeploy C++ API via JNI, refer to the following:
- [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
- [Use the FastDeploy C++ SDK on Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)


@@ -1,36 +1,37 @@
English | [简体中文](README_CN.md)
# PaddleSeg Quantized Model Deployment
FastDeploy supports the deployment of quantized models and provides a one-click automated model compression tool.
You can quantize a model yourself with the one-click automated model compression tool and deploy it, or directly download and deploy the quantized models provided by FastDeploy.
## FastDeploy One-Click Automated Model Compression Tool
FastDeploy provides a one-click automated model compression tool that can quantize a model given just a configuration file.
For details, please refer to the [one-click automated model compression tool](../../../../../tools/common_tools/auto_compression/).
Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file; copy it from the FP32 model folder into the quantized model folder.
## Download Quantized PaddleSeg Models
You can also directly download and deploy the quantized models in the table below (click the model name to download).
Notes on the benchmark tables:
- Runtime latency is the model's inference latency on various runtimes, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the model's pre- and post-processing time.
- End-to-end latency is the model's latency in an actual inference scenario, including pre- and post-processing.
- All measured latencies are averaged over 1000 inferences, in milliseconds.
- INT8 + FP16 means enabling the FP16 inference option for the runtime while running the INT8 quantized model.
- INT8 + FP16 + PM means additionally enabling Pinned Memory while running the INT8 quantized model with FP16, which can speed up the GPU->CPU data copy.
- The maximum speedup ratio is the FP32 latency divided by the fastest INT8 inference latency (see the example after this list).
- The strategy is quantization-aware distillation training: a small amount of unlabeled data is used to train the quantized model, and accuracy is verified on the full validation set. The reported INT8 accuracy is not the highest achievable INT8 accuracy.
- The CPU is an Intel(R) Xeon(R) Gold 6271C with the number of CPU threads fixed at 1 in all tests. The GPU is a Tesla T4 with TensorRT version 8.4.15.
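As a concrete check, the maximum speedup in the Runtime Benchmark row below follows directly from its FP32 and INT8 latencies:
```bash
# FP32 latency / fastest INT8 latency, with the values from the table below.
awk 'BEGIN { printf "%.2f\n", 1138.04 / 602.62 }'  # prints 1.89
```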
#### Runtime Benchmark
| Model | Inference Backend | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8+FP16+PM Runtime Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Quantization Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 1138.04 | 602.62 | None | None | 1.89 | 77.37 | 71.62 | Quantization-aware distillation training |
#### End-to-End Benchmark
| Model | Inference Backend | Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8+FP16+PM End2End Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Quantization Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 4726.65 | 4134.91 | None | None | 1.14 | 77.37 | 71.62 | Quantization-aware distillation training |
## Detailed Deployment Documents
- [Python Deployment](python)
- [C++ Deployment](cpp)


@@ -0,0 +1,37 @@
[English](README.md) | 简体中文
# PaddleSeg Quantized Model Deployment
FastDeploy supports the deployment of quantized models and provides a one-click automated model compression tool.
You can quantize a model yourself with the one-click automated model compression tool and deploy it, or directly download and deploy the quantized models provided by FastDeploy.
## FastDeploy One-Click Automated Model Compression Tool
FastDeploy provides a one-click automated model compression tool that can quantize a model given just a configuration file.
For details, please refer to the [one-click automated model compression tool](../../../../../tools/common_tools/auto_compression/).
Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file; copy it from the FP32 model folder into the quantized model folder.
## Download Quantized PaddleSeg Models
You can also directly download and deploy the quantized models in the table below (click the model name to download).
Notes on the benchmark tables:
- Runtime latency is the model's inference latency on various runtimes, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the model's pre- and post-processing time.
- End-to-end latency is the model's latency in an actual inference scenario, including pre- and post-processing.
- All measured latencies are averaged over 1000 inferences, in milliseconds.
- INT8 + FP16 means enabling the FP16 inference option for the runtime while running the INT8 quantized model.
- INT8 + FP16 + PM means additionally enabling Pinned Memory while running the INT8 quantized model with FP16, which can speed up the GPU->CPU data copy.
- The maximum speedup ratio is the FP32 latency divided by the fastest INT8 inference latency.
- The strategy is quantization-aware distillation training: a small amount of unlabeled data is used to train the quantized model, and accuracy is verified on the full validation set. The reported INT8 accuracy is not the highest achievable INT8 accuracy.
- The CPU is an Intel(R) Xeon(R) Gold 6271C with the number of CPU threads fixed at 1 in all tests. The GPU is a Tesla T4 with TensorRT version 8.4.15.
#### Runtime Benchmark
| Model | Inference Backend | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8+FP16+PM Runtime Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Quantization Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 1138.04 | 602.62 | None | None | 1.89 | 77.37 | 71.62 | Quantization-aware distillation training |
#### End-to-End Benchmark
| Model | Inference Backend | Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8+FP16+PM End2End Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Quantization Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 4726.65 | 4134.91 | None | None | 1.14 | 77.37 | 71.62 | Quantization-aware distillation training |
## Detailed Deployment Documents
- [Python Deployment](python)
- [C++ Deployment](cpp)


@@ -1,31 +1,32 @@
English | [简体中文](README_CN.md)
# PaddleSeg Quantized Model C++ Deployment Example
The `infer.cc` program in this directory demonstrates how to quickly deploy a quantized PaddleSeg model on CPU for accelerated inference.
## Deployment Preparations
### FastDeploy Environment Preparations
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of the FastDeploy Python wheel package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
### Quantized Model Preparations
- 1. You can directly deploy the quantized models provided by FastDeploy.
- 2. You can quantize a model yourself with the [one-click automated model compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy and deploy the resulting quantized model. (Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file; copy it from the FP32 model folder into the quantized model folder.)
## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
Run the following commands in this directory to compile and deploy the quantized model. FastDeploy 0.7.0 or higher (x.x.x >= 0.7.0) is required for this model.
```bash
mkdir build
cd build
# Download the pre-compiled FastDeploy library. You can choose an appropriate version from the `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# Download the quantized PP_LiteSeg_T_STDC1_cityscapes model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
# Use Paddle Inference to run the quantized model on CPU.
./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
```


@@ -0,0 +1,32 @@
[English](README.md) | 简体中文
# PaddleSeg Quantized Model C++ Deployment Example
The `infer.cc` program in this directory demonstrates how to quickly deploy a quantized PaddleSeg model on CPU for accelerated inference.
## Deployment Preparations
### FastDeploy Environment Preparations
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of the FastDeploy Python wheel package, please refer to [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
### Quantized Model Preparations
- 1. You can directly deploy the quantized models provided by FastDeploy.
- 2. You can quantize a model yourself with the [one-click automated model compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy and deploy the resulting quantized model. (Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file; copy it from the FP32 model folder into the quantized model folder.)
## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
Run the following commands in this directory to compile and deploy the quantized model. FastDeploy 0.7.0 or higher (x.x.x >= 0.7.0) is required for this model.
```bash
mkdir build
cd build
# Download the pre-compiled FastDeploy library. You can choose an appropriate version from the `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# Download the quantized PP_LiteSeg_T_STDC1_cityscapes model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
# Use Paddle Inference to run the quantized model on CPU.
./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
```


@@ -1,28 +1,29 @@
English | [简体中文](README_CN.md)
# PaddleSeg Quantized Model Python Deployment Example
The `infer.py` script in this directory demonstrates how to quickly deploy a quantized PaddleSeg model on CPU/GPU for accelerated inference.
## Deployment Preparations
### FastDeploy Environment Preparations
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of the FastDeploy Python wheel package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
### Quantized Model Preparations
- 1. You can directly deploy the quantized models provided by FastDeploy.
- 2. You can quantize a model yourself with the [one-click automated model compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy and deploy the resulting quantized model. (Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file; copy it from the FP32 model folder into the quantized model folder.)
## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
```bash
# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/segmentation/paddleseg/quantize/python
# Download the quantized PP_LiteSeg_T_STDC1_cityscapes model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
# Use Paddle Inference to run the quantized model on CPU.
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ --image cityscapes_demo.png --device cpu --backend paddle
```
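The title above also mentions GPU deployment. A hedged variant of the same command, assuming `infer.py` accepts `gpu` as a `--device` value (only the CPU invocation is shown in the original):
```bash
# Hypothetical GPU invocation; --device gpu is an assumption based on the CPU/GPU claim above.
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ --image cityscapes_demo.png --device gpu --backend paddle
```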


@@ -0,0 +1,29 @@
[English](README.md) | 简体中文
# PaddleSeg Quantized Model Python Deployment Example
The `infer.py` script in this directory demonstrates how to quickly deploy a quantized PaddleSeg model on CPU/GPU for accelerated inference.
## Deployment Preparations
### FastDeploy Environment Preparations
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of the FastDeploy Python wheel package, please refer to [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
### Quantized Model Preparations
- 1. You can directly deploy the quantized models provided by FastDeploy.
- 2. You can quantize a model yourself with the [one-click automated model compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy and deploy the resulting quantized model. (Note: inference with a quantized model still requires the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this YAML file; copy it from the FP32 model folder into the quantized model folder.)
## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
```bash
# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/segmentation/paddleseg/quantize/python
# Download the quantized PP_LiteSeg_T_STDC1_cityscapes model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
# Use Paddle Inference to run the quantized model on CPU.
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ --image cityscapes_demo.png --device cpu --backend paddle
```


@@ -1,33 +1,34 @@
English | [简体中文](README_CN.md)
# PaddleSeg Model Deployment
## Model Version
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
FastDeploy currently supports deploying the following PPSeg models via RKNPU2 inference:
| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------|:---------|:-------|:------------|:---------------|
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
| [PP-HumanSegV1-Lite (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
| [PP-HumanSegV2-Lite (universal portrait segmentation model)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
| [PP-HumanSegV2-Mobile (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
| [PP-HumanSegV1-Server (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
| [Portrait-PP-HumanSegV2_Lite (portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6MB | 256x144 | 96.63% | - | - |
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
## Prepare and Convert the PaddleSeg Deployment Model
Before deploying on RKNPU, the Paddle model needs to be converted to an RKNN model. The steps are as follows (a sketch of the first step follows this list):
* To convert a Paddle dynamic graph model to an ONNX model, please refer to [PaddleSeg Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
* To convert an ONNX model to an RKNN model, please refer to the [conversion document](../../../../../docs/en/faq/rknpu2/export.md).
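A minimal sketch of the first step, assuming the Paddle inference model files sit in a hypothetical `./paddle_model` directory and that the paddle2onnx package is installed:
```bash
pip install paddle2onnx
# Export the Paddle inference model to ONNX; directory and file names are the Paddle defaults.
paddle2onnx --model_dir ./paddle_model \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file model.onnx
```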
## Model Conversion Example
* [PPHumanSeg](./pp_humanseg_EN.md)
## Detailed Deployment Documents
- [Overall RKNN Deployment Guide](../../../../../docs/en/faq/rknpu2/rknpu2.md)
- [C++ Deployment](cpp)
- [Python Deployment](python)


@@ -0,0 +1,34 @@
[English](README.md) | 简体中文
# PaddleSeg Model Deployment
## Model Version
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
FastDeploy currently supports deploying the following PPSeg models via RKNPU2 inference:
| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------|:---------|:-------|:------------|:---------------|
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
| [PP-HumanSegV1-Lite (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
| [PP-HumanSegV2-Lite (universal portrait segmentation model)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
| [PP-HumanSegV2-Mobile (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
| [PP-HumanSegV1-Server (universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
| [Portrait-PP-HumanSegV2_Lite (portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6MB | 256x144 | 96.63% | - | - |
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
## Prepare and Convert the PaddleSeg Deployment Model
Before deploying on RKNPU, the Paddle model needs to be converted to an RKNN model. The steps are as follows:
* To convert a Paddle dynamic graph model to an ONNX model, please refer to [PaddleSeg Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
* To convert an ONNX model to an RKNN model, please refer to the [conversion document](../../../../../docs/cn/faq/rknpu2/export.md).
## Model Conversion Example
* [PPHumanSeg](./pp_humanseg.md)
## Detailed Deployment Documents
- [Overall RKNN Deployment Guide](../../../../../docs/cn/faq/rknpu2/rknpu2.md)
- [C++ Deployment](cpp)
- [Python Deployment](python)


@@ -1,30 +1,31 @@
English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example
This directory demonstrates the deployment of PaddleSeg series models on RKNPU2. The following deployment process takes PPHumanSeg as an example.
Before deployment, confirm the following two steps:
1. The hardware and software environment meets the requirements.
2. Download the pre-compiled deployment library, or compile the FastDeploy repository from scratch, according to your development environment.
For the above steps, please refer to [How to Build the RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).
## Generate the Basic Directory Structure
This example consists of the following parts:
```text
.
├── CMakeLists.txt
├── build # Build folder
├── image # Folder for images
├── infer_cpu_npu.cc
├── infer_cpu_npu.h
├── main.cc
├── model # Folder for model files
└── thirdpartys # Folder for the SDK
```
First, create the directory structure:
```bash
mkdir build
mkdir images
@@ -32,24 +33,23 @@ mkdir model
mkdir thirdpartys
```
## Compile
### Compile and Copy the SDK to the thirdpartys Folder
Please refer to [How to Build the RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be created in the build directory; move it to the thirdpartys directory.
### Copy the Model and Configuration Files to the model Folder
In the process of converting a Paddle dynamic graph model to a Paddle static graph model and then to an ONNX model, an ONNX file and the corresponding YAML configuration file are generated. Please place the configuration file in the model folder.
The model file converted to RKNN also needs to be copied to model. Run the following command to download it (the model file is for RK3588; for RK3568 you need to [re-convert the PPSeg RKNN model](../README.md)).
### Prepare Test Images in the image Folder
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip -qo images.zip
```
### 编译example
### Compile example
```bash
cd build
@@ -58,17 +58,16 @@ make -j8
make install
```
## 运行例程
## Run the Example
```bash
cd ./build/install
./rknpu_test model/Portrait_PP_HumanSegV2_Lite_256x144_infer/ images/portrait_heng.jpg
```
## 注意事项
RKNPU上对模型的输入要求是使用NHWC格式且图片归一化操作会在转RKNN模型时内嵌到模型中因此我们在使用FastDeploy部署时
需要先调用DisableNormalizeAndPermute(C++)或`disable_normalize_and_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
## Notes
Models running on RKNPU require NHWC input, and the image normalization step is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, we need to call `DisableNormalizeAndPermute` (C++) or `disable_normalize_and_permute` (Python) first to disable normalization and the data-format conversion in the preprocessing stage.
- [模型介绍](../../)
- [Python部署](../python)
- [转换PPSeg RKNN模型文档](../README.md)
- [Model Description](../../)
- [Python Deployment](../python)
- [Converting the PPSeg RKNN Model](../README.md)

View File

@@ -0,0 +1,73 @@
[English](README.md) | 简体中文
# PaddleSeg C++部署示例
本目录下用于展示PaddleSeg系列模型在RKNPU2上的部署以下的部署过程以PPHumanSeg为例子。
在部署前,需确认以下两个步骤:
1. 软硬件环境满足要求
2. 根据开发环境下载预编译部署库或者从头编译FastDeploy仓库
以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
## 生成基本目录文件
该例程由以下几个部分组成
```text
.
├── CMakeLists.txt
├── build # 编译文件夹
├── image # 存放图片的文件夹
├── infer_cpu_npu.cc
├── infer_cpu_npu.h
├── main.cc
├── model # 存放模型文件的文件夹
└── thirdpartys # 存放sdk的文件夹
```
首先需要先生成目录结构
```bash
mkdir build
mkdir images
mkdir model
mkdir thirdpartys
```
## 编译
### 编译并拷贝SDK到thirdpartys文件夹
请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK编译完成后将在build目录下生成fastdeploy-0.0.3目录请移动它至thirdpartys目录下.
### 拷贝模型文件以及配置文件至model文件夹
在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中将生成ONNX文件以及对应的yaml配置文件请将配置文件存放到model文件夹内。
转换为RKNN后的模型文件也需要拷贝至model输入以下命令下载使用(模型文件为RK3588RK3568需要重新[转换PPSeg RKNN模型](../README.md))。
### 准备测试图片至image文件夹
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip -qo images.zip
```
### 编译example
```bash
cd build
cmake ..
make -j8
make install
```
## 运行例程
```bash
cd ./build/install
./rknpu_test model/Portrait_PP_HumanSegV2_Lite_256x144_infer/ images/portrait_heng.jpg
```
## 注意事项
RKNPU上对模型的输入要求是使用NHWC格式且图片归一化操作会在转RKNN模型时内嵌到模型中因此我们在使用FastDeploy部署时需要先调用`DisableNormalizeAndPermute`(C++)或`disable_normalize_and_permute`(Python),在预处理阶段禁用归一化以及数据格式的转换。
- [模型介绍](../../)
- [Python部署](../python)
- [转换PPSeg RKNN模型文档](../README.md)

View File

@@ -1,3 +1,4 @@
[English](pp_humanseg_EN.md) | 简体中文
# PPHumanSeg模型部署
## 转换模型

View File

@@ -0,0 +1,81 @@
English | [简体中文](pp_humanseg.md)
# PPHumanSeg Model Deployment
## Model Conversion
The following takes Portrait-PP-HumanSegV2_Lite (a portrait segmentation model) as an example to show how to convert a PPSeg model to an RKNN model.
```bash
# Download Paddle2ONNX repository.
git clone https://github.com/PaddlePaddle/Paddle2ONNX
# Download the Paddle static graph model and fix the input shape.
## Go to the directory of the script for fixing the input shape of the Paddle static graph model.
cd Paddle2ONNX/tools/paddle
## Download and unzip the Paddle static graph model.
wget https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz
tar xvf Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz
python paddle_infer_shape.py --model_dir Portrait_PP_HumanSegV2_Lite_256x144_infer/ \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_dir Portrait_PP_HumanSegV2_Lite_256x144_infer \
--input_shape_dict="{'x':[1,3,144,256]}"
# Convert the static graph model to an ONNX model; note that the save_file here matches the archive name.
paddle2onnx --model_dir Portrait_PP_HumanSegV2_Lite_256x144_infer \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_file Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer.onnx \
--enable_dev_version True
# Convert ONNX model to RKNN model.
# Copy the ONNX model directory to the Fastdeploy root directory.
cp -r ./Portrait_PP_HumanSegV2_Lite_256x144_infer /path/to/Fastdeploy
# Convert the model; the RKNN model will be generated in the Portrait_PP_HumanSegV2_Lite_256x144_infer directory.
python tools/rknpu2/export.py \
--config_path tools/rknpu2/config/Portrait_PP_HumanSegV2_Lite_256x144_infer.yaml \
--target_platform rk3588
```
## Modify the yaml Configuration File
In the model conversion example above, we fixed the input shape of the model, so the corresponding yaml file needs to be modified as follows:
**The original yaml file**
```yaml
Deploy:
input_shape:
- -1
- 3
- -1
- -1
model: model.pdmodel
output_dtype: float32
output_op: none
params: model.pdiparams
transforms:
- target_size:
- 256
- 144
type: Resize
- type: Normalize
```
**The modified yaml file**
```yaml
Deploy:
input_shape:
- 1
- 3
- 144
- 256
model: model.pdmodel
output_dtype: float32
output_op: none
params: model.pdiparams
transforms:
- target_size:
- 256
- 144
type: Resize
- type: Normalize
```
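If you prefer to patch the configuration programmatically instead of editing it by hand, the Python sketch below rewrites the input_shape entry with PyYAML. The file path and the PyYAML dependency are assumptions; adjust them to your local setup.
```python
# Sketch: patch deploy.yaml so input_shape matches the fixed export shape.
# Assumes PyYAML is installed (pip install pyyaml).
import yaml

cfg_path = "Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Replace the dynamic shape [-1, 3, -1, -1] with the fixed NCHW shape
# used by paddle_infer_shape.py above.
cfg["Deploy"]["input_shape"] = [1, 3, 144, 256]

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False, sort_keys=False)
```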

View File

@@ -1,36 +1,36 @@
# PaddleSeg Python部署示例
English | [简体中文](README_CN.md)
# PaddleSeg Deployment Examples for Python
在部署前,需确认以下两个步骤
Before deployment, the following step needs to be confirmed:
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
- 1. The hardware and software environment meets the requirements; please refer to [Environment Requirements for FastDeploy](../../../../../../docs/en/build_and_install/rknpu2.md).
注意】如你部署的为**PP-Matting****PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../../matting/)
[Note] If you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, please refer to [Matting Model Deployment](../../../../matting/).
本目录下提供`infer.py`快速完成PPHumanseg在RKNPU上部署的示例。执行如下脚本即可完成
This directory provides `infer.py` for a quick example of PPHumanseg deployment on RKNPU. This can be done by running the following script.
```bash
# 下载部署示例代码
# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python
# 下载图片
# Download images.
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip
# 推理
# Inference.
python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
--config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
--image images/portrait_heng.jpg
```
## 注意事项
RKNPU上对模型的输入要求是使用NHWC格式且图片归一化操作会在转RKNN模型时内嵌到模型中因此我们在使用FastDeploy部署时
需要先调用DisableNormalizeAndPermute(C++)或`disable_normalize_and_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
## Notes
Models running on RKNPU require NHWC input, and the image normalization step is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, we need to call `DisableNormalizeAndPermute` (C++) or `disable_normalize_and_permute` (Python) first to disable normalization and the data-format conversion in the preprocessing stage.
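For reference, a minimal Python sketch of such a call is shown below. It assumes the FastDeploy RKNPU2 Python wheel is installed; the exact place where `disable_normalize_and_permute` lives (on the model or on its preprocessor) may vary between FastDeploy versions, so treat this as a sketch rather than the exact contents of `infer.py`.
```python
# Minimal sketch (not the exact infer.py): load the RKNN model on the
# RKNPU2 backend and skip normalize/permute in host-side preprocessing.
import cv2
import fastdeploy as fd

model_dir = "Portrait_PP_HumanSegV2_Lite_256x144_infer"

option = fd.RuntimeOption()
option.use_rknpu2()  # run inference on the RKNPU2 backend

model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn",
    "",  # the RKNN file bundles its weights, so no separate params file
    model_dir + "/deploy.yaml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

# Normalization and the layout permute are baked into the RKNN model,
# so disable them in preprocessing (API location may differ by version).
model.disable_normalize_and_permute()

im = cv2.imread("images/portrait_heng.jpg")
result = model.predict(im)
print(result)
```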
## 其它文档
## Other Documents
- [PaddleSeg 模型介绍](..)
- [PaddleSeg C++部署](../cpp)
- [模型预测结果说明](../../../../../../docs/api/vision_results/)
- [转换PPSeg RKNN模型文档](../README.md)
- [PaddleSeg Model Description](..)
- [PaddleSeg C++ Deployment](../cpp)
- [Description of Prediction Results](../../../../../../docs/api/vision_results/)
- [Converting the PPSeg RKNN Model](../README.md)

View File

@@ -0,0 +1,36 @@
[English](README.md) | 简体中文
# PaddleSeg Python部署示例
在部署前,需确认以下步骤
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../../matting/)
本目录下提供`infer.py`快速完成PPHumanseg在RKNPU上部署的示例。执行如下脚本即可完成
```bash
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python
# 下载图片
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip
# 推理
python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
--config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
--image images/portrait_heng.jpg
```
## 注意事项
RKNPU上对模型的输入要求是使用NHWC格式且图片归一化操作会在转RKNN模型时内嵌到模型中因此我们在使用FastDeploy部署时需要先调用`DisableNormalizeAndPermute`(C++)或`disable_normalize_and_permute`(Python),在预处理阶段禁用归一化以及数据格式的转换。
## 其它文档
- [PaddleSeg 模型介绍](..)
- [PaddleSeg C++部署](../cpp)
- [模型预测结果说明](../../../../../../docs/api/vision_results/)
- [转换PPSeg RKNN模型文档](../README.md)

View File

@@ -1,28 +1,29 @@
# PP-LiteSeg 量化模型 C++ 部署示例
English | [简体中文](README_CN.md)
# PP-LiteSeg Quantized Model C++ Deployment Example
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 RV1126 上的部署推理加速。
`infer.cc` in this directory can help you quickly complete the inference acceleration of the quantized PP-LiteSeg model deployed on RV1126.
## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
## Deployment Preparations
### FastDeploy Cross-compile Environment Preparations
1. For the software and hardware environment, and the cross-compile environment, please refer to [Preparations for FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/rv1126.md#Cross-compilation-environment-construction).
### 模型准备
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
2. 用户可以使用 FastDeploy 提供的一键模型自动化压缩工具,自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的 deploy.yaml 文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
3. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
### Model Preparations
1. You can directly use the quantized model provided by FastDeploy for deployment.
2. You can use the one-click automatic compression tool provided by FastDeploy to quantize the model yourself, and deploy the resulting quantized model. (Note: the quantized segmentation model still needs the deploy.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
3. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
For more information, please refer to [Model Quantization](../../quantize/README.md).
## 在 RV1126 上部署量化后的 PP-LiteSeg 分割模型
请按照以下步骤完成在 RV1126 上部署 PP-LiteSeg 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
## Deploying the Quantized PP-LiteSeg Segmentation model on RV1126
Please follow these steps to complete the deployment of the quantized PP-LiteSeg model on RV1126:
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/rv1126.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
2. 将编译后的库拷贝到当前目录,可使用如下命令:
2. Copy the compiled library to the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp
```
3. 在当前路径下载部署所需的模型和示例图片:
3. Download the model and example image required for deployment to the current path:
```bash
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
@@ -32,25 +33,25 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```
4. 编译部署示例,可使入如下命令:
4. Compile the deployment example. You can run the following lines:
```bash
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
# After successful compilation, an install folder will be generated, containing the demo executable and the libraries required for deployment.
```
5. 基于 adb 工具部署 PP-LiteSeg 分割模型到 Rockchip RV1126可使用如下命令
5. Deploy the PP-LiteSeg segmentation model to Rockchip RV1126 using the adb tool. You can run the following commands:
```bash
# 进入 install 目录
# Go to the install directory.
cd FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp/build/install/
# 如下命令表示bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
# The command format is: bash run_with_adb.sh <demo to run> <model path> <image path> <device DEVICE_ID>
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
部署成功后运行结果如下:
After successful deployment, the output is as follows:
<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">
需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
Please note that the model deployed on RV1126 must be a quantized model. For model quantization, please refer to [Model Quantization](../../../../../../docs/en/quantize.md).

View File

@@ -0,0 +1,57 @@
[English](README.md) | 简体中文
# PP-LiteSeg 量化模型 C++ 部署示例
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 RV1126 上的部署推理加速。
## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
### 模型准备
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
2. 用户可以使用 FastDeploy 提供的一键模型自动化压缩工具,自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的 deploy.yaml 文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
3. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
## 在 RV1126 上部署量化后的 PP-LiteSeg 分割模型
请按照以下步骤完成在 RV1126 上部署 PP-LiteSeg 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
2. 将编译后的库拷贝到当前目录,可使用如下命令:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp
```
3. 在当前路径下载部署所需的模型和示例图片:
```bash
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
tar -xvf ppliteseg.tar.gz
cp -r ppliteseg models
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```
4. 编译部署示例,可使入如下命令:
```bash
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
```
5. 基于 adb 工具部署 PP-LiteSeg 分割模型到 Rockchip RV1126可使用如下命令
```bash
# 进入 install 目录
cd FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp/build/install/
# 如下命令表示bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
部署成功后运行结果如下:
<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">
需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)

View File

@@ -1,33 +1,34 @@
# PaddleSeg C++部署示例
English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example
## 支持模型列表
## Supported Model List
- PP-LiteSeg部署模型实现来自[PaddleSeg PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
- PP-LiteSeg deployment models are from [PaddleSeg PP-LiteSeg series model](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md).
## 准备PP-LiteSeg部署模型以及转换模型
## PP-LiteSeg Model Deployment and Conversion Preparations
SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型具体步骤如下:
- 下载Paddle模型[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)
- Pddle模型转换为ONNX模型请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
- ONNX模型转换bmodel模型的过程请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)
Before deploying a model on SOPHGO-TPU, you need to convert the Paddle model to a bmodel. The specific steps are as follows:
- Download Paddle model: [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz).
- Convert Paddle model to ONNX model. Please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
- For the process of converting ONNX model to bmodel, please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
## 模型转换example
## Model Conversion Example
下面以[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)为例子,教大家如何转换Paddle模型到SOPHGO-TPU模型
Here we take [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) as an example to show how to convert a Paddle model to a SOPHGO-TPU model.
### 下载PP-LiteSeg-B(STDC2)-cityscapes-without-argmax模型,并转换为ONNX模型
### Download PP-LiteSeg-B(STDC2)-cityscapes-without-argmax, and convert it to ONNX
```shell
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
# 修改PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer模型的输入shape由动态输入变成固定输入
# Modify the input shape of the PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer model from dynamic to fixed input.
python paddle_infer_shape.py --model_dir PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_dir pp_liteseg_fix \
--input_shape_dict="{'x':[1,3,512,512]}"
#将固定输入的Paddle模型转换成ONNX模型
# Convert the fixed-input Paddle model to an ONNX model.
paddle2onnx --model_dir pp_liteseg_fix \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
@@ -35,32 +36,32 @@ paddle2onnx --model_dir pp_liteseg_fix \
--enable_dev_version True
```
### 导出bmodel模型
### Export bmodel
以转换BM1684x的bmodel模型为例子我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)
### 1. 安装
Take converting to a bmodel for BM1684x as an example. You need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project. For the installation process, please refer to the [TPU-MLIR Document](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
# myname1234是一个示例,也可以设置其他名字
# myname1234 is just an example; you can use another name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
### 2. ONNX模型转换为bmodel模型
### 2. Convert ONNX model to bmodel
``` shell
mkdir pp_liteseg && cd pp_liteseg
#在该文件中放入测试图片同时将上一步转换的pp_liteseg.onnx放入该文件夹中
# Put the test images in this folder, together with the pp_liteseg.onnx converted in the previous step.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
#放入onnx模型文件pp_liteseg.onnx
# Put in the onnx model file pp_liteseg.onnx.
mkdir workspace && cd workspace
#将ONNX模型转换为mlir模型其中参数--output_names可以通过NETRON查看
# Convert the ONNX model to an mlir model; the --output_names parameter can be checked with NETRON.
model_transform.py \
--model_name pp_liteseg \
--model_def ../pp_liteseg.onnx \
@@ -74,7 +75,7 @@ model_transform.py \
--test_result pp_liteseg_top_outputs.npz \
--mlir pp_liteseg.mlir
#将mlir模型转换为BM1684xF32 bmodel模型
# Convert mlir model to BM1684x F32 bmodel.
model_deploy.py \
--mlir pp_liteseg.mlir \
--quantize F32 \
@@ -83,7 +84,7 @@ model_deploy.py \
--test_reference pp_liteseg_top_outputs.npz \
--model pp_liteseg_1684x_f32.bmodel
```
最终获得可以在BM1684x上能够运行的bmodel模型pp_liteseg_1684x_f32.bmodel。如果需要进一步对模型进行加速可以将ONNX模型转换为INT8 bmodel具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)
The final bmodel, pp_liteseg_1684x_f32.bmodel, can run on BM1684x. If you want to further accelerate the model, you can convert the ONNX model to an INT8 bmodel. For details, please refer to the [TPU-MLIR Document](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
## 其他链接
- [Cpp部署](./cpp)
## Other Documents
- [C++ Deployment](./cpp)

View File

@@ -0,0 +1,90 @@
[English](README.md) | 简体中文
# PaddleSeg C++部署示例
## 支持模型列表
- PP-LiteSeg部署模型实现来自[PaddleSeg PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
## 准备PP-LiteSeg部署模型以及转换模型
SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型具体步骤如下:
- 下载Paddle模型[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)
- Paddle模型转换为ONNX模型请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
- ONNX模型转换bmodel模型的过程请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)
## 模型转换example
下面以[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)为例子,教大家如何转换Paddle模型到SOPHGO-TPU模型
### 下载PP-LiteSeg-B(STDC2)-cityscapes-without-argmax模型,并转换为ONNX模型
```shell
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
# 修改PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer模型的输入shape由动态输入变成固定输入
python paddle_infer_shape.py --model_dir PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_dir pp_liteseg_fix \
--input_shape_dict="{'x':[1,3,512,512]}"
#将固定输入的Paddle模型转换成ONNX模型
paddle2onnx --model_dir pp_liteseg_fix \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_file pp_liteseg.onnx \
--enable_dev_version True
```
### 导出bmodel模型
以转换BM1684x的bmodel模型为例子我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
### 1. 安装
``` shell
docker pull sophgo/tpuc_dev:latest
# myname1234是一个示例也可以设置其他名字
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
### 2. ONNX模型转换为bmodel模型
``` shell
mkdir pp_liteseg && cd pp_liteseg
#在该文件中放入测试图片同时将上一步转换的pp_liteseg.onnx放入该文件夹中
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
#放入onnx模型文件pp_liteseg.onnx
mkdir workspace && cd workspace
#将ONNX模型转换为mlir模型其中参数--output_names可以通过NETRON查看
model_transform.py \
--model_name pp_liteseg \
--model_def ../pp_liteseg.onnx \
--input_shapes [[1,3,512,512]] \
--mean 0.0,0.0,0.0 \
--scale 0.0039216,0.0039216,0.0039216 \
--keep_aspect_ratio \
--pixel_format rgb \
--output_names bilinear_interp_v2_6.tmp_0 \
--test_input ../image/dog.jpg \
--test_result pp_liteseg_top_outputs.npz \
--mlir pp_liteseg.mlir
#将mlir模型转换为BM1684x的F32 bmodel模型
model_deploy.py \
--mlir pp_liteseg.mlir \
--quantize F32 \
--chip bm1684x \
--test_input pp_liteseg_in_f32.npz \
--test_reference pp_liteseg_top_outputs.npz \
--model pp_liteseg_1684x_f32.bmodel
```
最终获得可以在BM1684x上能够运行的bmodel模型pp_liteseg_1684x_f32.bmodel。如果需要进一步对模型进行加速可以将ONNX模型转换为INT8 bmodel具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
## 其他链接
- [Cpp部署](./cpp)

View File

@@ -1,43 +1,44 @@
# PaddleSeg C++部署示例
English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example
本目录下提供`infer.cc`快速完成pp_liteseg模型在SOPHGO BM1684x板子上加速部署的示例。
`infer.cc` in this directory provides a quick example of accelerated deployment of the pp_liteseg model on SOPHGO BM1684x.
在部署前,需确认以下两个步骤:
Before deployment, the following two steps need to be confirmed:
1. 软硬件环境满足要求
2. 根据开发环境从头编译FastDeploy仓库
1. Hardware and software environment meets the requirements.
2. Compile the FastDeploy repository from scratch according to the development environment.
以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
For the above steps, please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md).
## 生成基本目录文件
## Generate Basic Directory Files
该例程由以下几个部分组成
The routine consists of the following parts:
```text
.
├── CMakeLists.txt
├── build # 编译文件夹
├── image # 存放图片的文件夹
├── build # Compile Folder
├── image # Folder for images
├── infer.cc
└── model # 存放模型文件的文件夹
└── model # Folder for models
```
## 编译
## Compile
### 编译并拷贝SDK到thirdpartys文件夹
### Compile and Copy the SDK to the thirdpartys Folder
请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK编译完成后将在build目录下生成fastdeploy-0.0.3目录.
Please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md) to compile the SDK. After compilation, the fastdeploy-0.0.3 directory will be generated in the build directory.
### 拷贝模型文件以及配置文件至model文件夹
将Paddle模型转换为SOPHGO bmodel模型转换步骤参考[文档](../README.md)
将转换后的SOPHGO bmodel模型文件拷贝至model
### Copy the Model and Configuration Files to the model Folder
Convert the Paddle model to a SOPHGO bmodel; for the conversion steps, please refer to this [document](../README.md).
Copy the converted SOPHGO bmodel file into the model folder.
### 准备测试图片至image文件夹
### Prepare Test Images in the image Folder
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp cityscapes_demo.png ./images
```
### 编译example
### Compile example
```bash
cd build
@@ -45,12 +46,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
## 运行例程
## Run the Example
```bash
./infer_demo model images/cityscapes_demo.png
```
- [模型介绍](../../)
- [模型转换](../)
- [Model Description](../../)
- [Model Conversion](../)

View File

@@ -0,0 +1,57 @@
[English](README.md) | 简体中文
# PaddleSeg C++部署示例
本目录下提供`infer.cc`快速完成pp_liteseg模型在SOPHGO BM1684x板子上加速部署的示例。
在部署前,需确认以下两个步骤:
1. 软硬件环境满足要求
2. 根据开发环境从头编译FastDeploy仓库
以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
## 生成基本目录文件
该例程由以下几个部分组成
```text
.
├── CMakeLists.txt
├── build # 编译文件夹
├── image # 存放图片的文件夹
├── infer.cc
└── model # 存放模型文件的文件夹
```
## 编译
### 编译并拷贝SDK到thirdpartys文件夹
请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK编译完成后将在build目录下生成fastdeploy-0.0.3目录.
### 拷贝模型文件以及配置文件至model文件夹
将Paddle模型转换为SOPHGO bmodel模型转换步骤参考[文档](../README.md)
将转换后的SOPHGO bmodel模型文件拷贝至model中
### 准备测试图片至image文件夹
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp cityscapes_demo.png ./images
```
### 编译example
```bash
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
## 运行例程
```bash
./infer_demo model images/cityscapes_demo.png
```
- [模型介绍](../../)
- [模型转换](../)

View File

@@ -1,26 +1,27 @@
# PaddleSeg Python部署示例
English | [简体中文](README_CN.md)
# PaddleSeg Python Deployment Example
在部署前,需确认以下两个步骤
Before deployment, the following step needs to be confirmed:
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirement](../../../../../../docs/en/build_and_install/sophgo.md).
本目录下提供`infer.py`快速完成 pp_liteseg SOPHGO TPU上部署的示例。执行如下脚本即可完成
`infer.py` in this directory provides a quick example of deployment of the pp_liteseg model on SOPHGO TPU. Please run the following script:
```bash
# 下载部署示例代码
# Download the sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/sophgo/python
# 下载图片
# Download images.
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
# 推理
# Inference.
python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file ./bmodel/deploy.yaml --image cityscapes_demo.png
# 运行完成后返回结果如下所示
运行结果保存在sophgo_img.png
# The following result is returned after running:
The result is saved in sophgo_img.png.
```
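For reference, the core of such an inference script looks roughly like the Python sketch below. It assumes the FastDeploy wheel was built with SOPHGO support; the option and enum names (use_sophgo, ModelFormat.SOPHGO) reflect the FastDeploy Python API and may differ slightly across versions.
```python
# Rough sketch of the bmodel inference flow behind the command above;
# paths follow the example and are assumptions about your layout.
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_sophgo()  # run on the SOPHGO TPU backend

model = fd.vision.segmentation.PaddleSegModel(
    "./bmodel/pp_liteseg_1684x_f32.bmodel",
    "",  # the bmodel bundles its weights, so no separate params file
    "./bmodel/deploy.yaml",
    runtime_option=option,
    model_format=fd.ModelFormat.SOPHGO)

im = cv2.imread("cityscapes_demo.png")
result = model.predict(im)

# Overlay the segmentation mask on the input and save it.
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("sophgo_img.png", vis_im)
```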
## 其它文档
- [pp_liteseg C++部署](../cpp)
- [转换 pp_liteseg SOPHGO模型文档](../README.md)
## Other Documents
- [pp_liteseg C++ Deployment](../cpp)
- [Converting pp_liteseg SOPHGO model](../README.md)

View File

@@ -0,0 +1,27 @@
[English](README.md) | 简体中文
# PaddleSeg Python部署示例
在部署前,需确认以下步骤
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
本目录下提供`infer.py`快速完成 pp_liteseg 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
```bash
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/sophgo/python
# 下载图片
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
# 推理
python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file ./bmodel/deploy.yaml --image cityscapes_demo.png
# 运行完成后返回结果如下所示
运行结果保存在sophgo_img.png中
```
## 其它文档
- [pp_liteseg C++部署](../cpp)
- [转换 pp_liteseg SOPHGO模型文档](../README.md)

View File

@@ -1,43 +1,44 @@
# PP-Humanseg v1模型前端部署
English | [简体中文](README_CN.md)
# PP-Humanseg v1 Model Frontend Deployment
## 模型版本说明
## Model Version
- [PP-HumanSeg Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/)
## 前端部署PP-Humanseg v1模型
## Deploy PP-Humanseg v1 Model in the Frontend
PP-Humanseg v1模型web demo部署及使用参考[文档](../../../../application/js/web_demo/README.md)
For deploying and using the PP-Humanseg v1 model web demo, please refer to this [document](../../../../application/js/web_demo/README.md).
## PP-Humanseg v1 js接口
## PP-Humanseg v1 JS Interface
```
import * as humanSeg from "@paddle-js-models/humanseg";
# 模型加载与初始化
// Load and initialize the model
await humanSeg.load(Config);
# 人像分割
// Portrait segmentation
const res = humanSeg.getGrayValue(input)
# 提取人像与背景的二值图
// Extract the binary mask of portrait and background
humanSeg.drawMask(res)
# 用于替换背景的可视化函数
// Visualization function for background replacement
humanSeg.drawHumanSeg(res)
# 背景虚化
// Blur the background
humanSeg.blurBackground(res)
```
**load()函数参数**
> * **Config**(dict): PP-Humanseg模型配置参数默认为{modelpath : 'https://paddlejs.bj.bcebos.com/models/fuse/humanseg/humanseg_398x224_fuse_activation/model.json', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5], enableLightModel: false}modelPath为默认的PP-Humanseg js模型meanstd分别为预处理的均值和标准差enableLightModel为是否使用更轻量的模型。
**Parameters in function load()**
> * **Config**(dict): Configuration parameters for the PP-Humanseg model. The default is {modelpath : 'https://paddlejs.bj.bcebos.com/models/fuse/humanseg/humanseg_398x224_fuse_activation/model.json', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5], enableLightModel: false}. modelPath is the default PP-Humanseg js model, mean and std are the mean and standard deviation used in preprocessing, and enableLightModel indicates whether to use a lighter model.
**getGrayValue()函数参数**
> * **input**(HTMLImageElement | HTMLVideoElement | HTMLCanvasElement): 输入图像参数。
**Parameters in function getGrayValue()**
> * **input**(HTMLImageElement | HTMLVideoElement | HTMLCanvasElement): Input image parameter.
**drawMask()函数参数**
> * **seg_values**(number[]): 输入参数一般是getGrayValue函数计算的结果作为输入
**Parameters in function drawMask()**
> * **seg_values**(number[]): Input parameter; generally the result returned by getGrayValue is used as input.
**blurBackground()函数参数**
> * **seg_values**(number[]): 输入参数一般是getGrayValue函数计算的结果作为输入
**Parameters in function blurBackground()**
> * **seg_values**(number[]): Input parameter; generally the result returned by getGrayValue is used as input.
**drawHumanSeg()函数参数**
> * **seg_values**(number[]): 输入参数一般是getGrayValue函数计算的结果作为输入
**Parameters in function drawHumanSeg()**
> * **seg_values**(number[]): Input parameter; generally the result returned by getGrayValue is used as input.

View File

@@ -0,0 +1,44 @@
[English](README.md) | 简体中文
# PP-Humanseg v1模型前端部署
## 模型版本说明
- [PP-HumanSeg Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/)
## 前端部署PP-Humanseg v1模型
PP-Humanseg v1模型web demo部署及使用参考[文档](../../../../application/js/web_demo/README.md)
## PP-Humanseg v1 js接口
```
import * as humanSeg from "@paddle-js-models/humanseg";
# 模型加载与初始化
await humanSeg.load(Config);
# 人像分割
const res = humanSeg.getGrayValue(input)
# 提取人像与背景的二值图
humanSeg.drawMask(res)
# 用于替换背景的可视化函数
humanSeg.drawHumanSeg(res)
# 背景虚化
humanSeg.blurBackground(res)
```
**load()函数参数**
> * **Config**(dict): PP-Humanseg模型配置参数默认为{modelpath : 'https://paddlejs.bj.bcebos.com/models/fuse/humanseg/humanseg_398x224_fuse_activation/model.json', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5], enableLightModel: false}modelPath为默认的PP-Humanseg js模型meanstd分别为预处理的均值和标准差enableLightModel为是否使用更轻量的模型。
**getGrayValue()函数参数**
> * **input**(HTMLImageElement | HTMLVideoElement | HTMLCanvasElement): 输入图像参数。
**drawMask()函数参数**
> * **seg_values**(number[]): 输入参数一般是getGrayValue函数计算的结果作为输入
**blurBackground()函数参数**
> * **seg_values**(number[]): 输入参数一般是getGrayValue函数计算的结果作为输入
**drawHumanSeg()函数参数**
> * **seg_values**(number[]): 输入参数一般是getGrayValue函数计算的结果作为输入