[Docs] Pick seg fastdeploy docs from PaddleSeg (#1482)
* [Docs] Pick seg fastdeploy docs from PaddleSeg
* [Docs] update seg docs
* [Docs] Add c&csharp examples for seg
* [Docs] Add c&csharp examples for seg
* [Doc] Update paddleseg README.md
* Update README.md
@@ -1,32 +1,139 @@
# High-Performance All-Scenario Deployment of PaddleSeg Models with FastDeploy

## Contents

- [1. Introduction to FastDeploy](#FastDeploy介绍)
- [2. Semantic Segmentation Model Deployment](#语义分割模型部署)
- [3. Matting Model Deployment](#Matting模型部署)
- [4. FAQ](#常见问题)

## 1. Introduction to FastDeploy

<div id="FastDeploy介绍"></div>

**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)** is an **all-scenario**, **flexible and easy-to-use**, **highly efficient** AI inference deployment tool that supports deployment across **cloud, edge, and device**. With FastDeploy, PaddleSeg models can be deployed quickly and easily on more than ten kinds of hardware, including X86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, KunlunXin, Ascend, Rockchip, Amlogic, and Sophgo, with multiple inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, RKNPU2, and SOPHGO.

| Supported Hardware | | | |
|:----- | :-- | :-- | :-- |
| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu)| [Phytium CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
| [Intel GPU (discrete/integrated)](cpu-gpu) | [KunlunXin](kunlun) | [Ascend](ascend) | [Rockchip](rockchip) |
| [Amlogic](amlogic) | [Sophgo](sophgo) | | |

<div align="center">

<img src="https://user-images.githubusercontent.com/31974251/224941235-d5ea4ed0-7626-4c62-8bbd-8e4fad1e72ad.png" >

</div>

## 2. Semantic Segmentation Model Deployment

<div id="语义分割模型部署"></div>

### 2.1 Hardware Support Matrix

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|X86 CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](semantic_segmentation/cpu-gpu)|✅|✅|
|KunlunXin|✅|[Link](semantic_segmentation/kunlun)|✅|✅|
|Ascend|✅|[Link](semantic_segmentation/ascend)|✅|✅|
|Rockchip|✅|[Link](semantic_segmentation/rockchip)|✅|✅|
|Amlogic|✅|[Link](semantic_segmentation/amlogic)|--|✅|
|Sophgo|✅|[Link](semantic_segmentation/sophgo)|✅|✅|

### 2.2 Detailed Usage Documents

- X86 CPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](semantic_segmentation/cpu-gpu)
  - [Python deployment example](semantic_segmentation/cpu-gpu/python/)
  - [C++ deployment example](semantic_segmentation/cpu-gpu/cpp/)
- KunlunXin XPU
  - [Model preparation](semantic_segmentation/kunlun)
  - [Python deployment example](semantic_segmentation/kunlun/python/)
  - [C++ deployment example](semantic_segmentation/kunlun/cpp/)
- Huawei Ascend
  - [Model preparation](semantic_segmentation/ascend)
  - [Python deployment example](semantic_segmentation/ascend/python/)
  - [C++ deployment example](semantic_segmentation/ascend/cpp/)
- Rockchip
  - [Model preparation](semantic_segmentation/rockchip/)
  - [Python deployment example](semantic_segmentation/rockchip/rknpu2/)
  - [C++ deployment example](semantic_segmentation/rockchip/rknpu2/)
- Amlogic
  - [Model preparation](semantic_segmentation/amlogic/a311d/)
  - [C++ deployment example](semantic_segmentation/amlogic/a311d/cpp/)
- Sophgo
  - [Model preparation](semantic_segmentation/sophgo/)
  - [Python deployment example](semantic_segmentation/sophgo/python/)
  - [C++ deployment example](semantic_segmentation/sophgo/cpp/)

### 2.3 More Deployment Options

- [Android ARM CPU deployment](semantic_segmentation/android)
- [Serving deployment](semantic_segmentation/serving)
- [Web deployment](semantic_segmentation/web)
- [Automatic model compression tool](semantic_segmentation/quantize)

## 3. Matting Model Deployment

<div id="Matting模型部署"></div>

### 3.1 Hardware Support Matrix

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|X86 CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](matting/cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](matting/cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](matting/cpu-gpu)|✅|✅|
|KunlunXin|✅|[Link](matting/kunlun)|✅|✅|
|Ascend|✅|[Link](matting/ascend)|✅|✅|

### 3.2 Detailed Usage Documents

- X86 CPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](matting/cpu-gpu)
  - [Python deployment example](matting/cpu-gpu/python/)
  - [C++ deployment example](matting/cpu-gpu/cpp/)
- KunlunXin XPU
  - [Model preparation](matting/kunlun)
  - [Python deployment example](matting/kunlun/README.md)
  - [C++ deployment example](matting/kunlun/README.md)
- Huawei Ascend
  - [Model preparation](matting/ascend)
  - [Python deployment example](matting/ascend/README.md)
  - [C++ deployment example](matting/ascend/README.md)

## 4. FAQ

<div id="常见问题"></div>

If you run into problems, check the FAQ collection, search the existing FastDeploy issues, *or file a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues)*:

[FAQ collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)

[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
@@ -1,177 +0,0 @@
English | [简体中文](README_CN.md)
# PaddleSeg Android Demo for Image Segmentation

For real-time portrait segmentation on Android, this demo offers good ease of use and openness: you can run your own trained model in the demo.

## Environment Preparation

1. Install Android Studio locally; for details, see the [Android Studio official website](https://developer.android.com/studio).
2. Get an Android phone and turn on USB debugging mode. To turn it on: `Phone Settings -> Find Developer Options -> Turn on Developer Options and USB Debug Mode`.

## Deployment Steps

1. The image segmentation PaddleSeg demo is located in the `fastdeploy/examples/vision/segmentation/paddleseg/android` directory.
2. Open the paddleseg/android project with Android Studio.
3. Connect your phone to your computer, turn on USB debugging and file transfer mode, and connect your mobile device in Android Studio (your phone needs to allow software installation from USB).

<p align="center">
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
</p>

> **Notes:**
>> If you encounter an NDK configuration error while importing, compiling, or running the program, open `File > Project Structure > SDK Location` and change `Android SDK location` to your locally configured SDK path.

4. Click the Run button to automatically compile the APP and install it on your phone. (The process automatically downloads the pre-compiled FastDeploy Android library and model files; an internet connection is required.)
The success interface is as follows. Figure 1: the APP installed on the phone; Figure 2: the opening interface, which automatically recognizes the person in the picture and draws the mask; Figure 3: the APP setting options, opened via the setting icon in the upper right corner.

| APP icon | APP effect | APP setting options |
| --- | --- | --- |
| <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203267867-7c51b695-65e6-402e-9826-5d6d5864da87.gif"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg"> |

## PaddleSegModel Java API Introduction

- Model initialization API: initialization can be done directly through the constructor, or by calling the init function at an appropriate point in the program. The PaddleSegModel initialization parameters are:
  - modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
  - paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
  - configFile: String, preprocessing configuration file for model inference, e.g. deploy.yml.
  - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option is used.

```java
// Constructors w/o label file
public PaddleSegModel(); // An empty constructor, which can be initialised by calling init later.
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```

- Model prediction API: the prediction API includes a direct prediction API and an API with visualization. Direct prediction means no image is saved and no result is rendered to a Bitmap; only the inference result is returned. Prediction with visualization predicts the result, visualizes it, saves the visualized image to the specified path, and renders the result to a Bitmap (currently ARGB8888 format is supported), which can then be displayed in the camera view.

```java
// Direct prediction: do not save images or render results to Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap);
// Prediction with visualization: predict the result, visualize it, save the visualized image to the specified path, and render the result to Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Only render the image, without saving.
// Modify the result in place instead of returning it. For performance, you can use the following interface together with CxxBuffer in SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```

- Set vertical or horizontal mode: for the PP-HumanSeg series models, call this method to set the vertical mode to true.

```java
public void setVerticalScreenFlag(boolean flag);
```

- Model resource release API: calling release() releases the model resources; it returns true on success and false on failure. Calling initialized() checks whether the model was initialized successfully; true means success, false means failure.

```java
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization was successful.
```

- Runtime option settings

```java
public void enableLiteFp16(); // Enable fp16 precision inference
public void disableLiteFP16(); // Disable fp16 precision inference
public void setCpuThreadNum(int threadNum); // Set the number of threads.
public void setLitePowerMode(LitePowerMode mode); // Set the power mode.
public void setLitePowerMode(String modeStr); // Set the power mode by string.
```

- Segmentation result

```java
public class SegmentationResult {
  public int[] mLabelMap; // The predicted label map; each pixel position corresponds to a label (HxW).
  public float[] mScoreMap; // The predicted score map; each pixel position corresponds to a score (HxW).
  public long[] mShape; // The real shape (H, W) of the label map.
  public boolean mContainScoreMap = false; // Whether a score map is included.
  // You can choose to use CxxBuffer directly instead of copying the result to the Java layer through JNI.
  // This can improve performance to some extent.
  public void setCxxBufferFlag(boolean flag); // Set whether to use CxxBuffer mode.
  public boolean releaseCxxBuffer(); // Release the CxxBuffer manually!!!
  public boolean initialized(); // Check if the result is valid.
}
```

For the corresponding C++/Python SegmentationResult description, see [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md).

- Model calling example 1: using the constructor and the default RuntimeOption:

```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.opengl.GLES20;

import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;

// Initialise the model.
PaddleSegModel model = new PaddleSegModel(
  "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel",
  "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams",
  "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml");

// If the camera is in portrait mode, the PP-HumanSeg series needs this flag set.
model.setVerticalScreenFlag(true);

// Read a Bitmap: the following is pseudo code for reading the Bitmap.
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);

// Model inference.
SegmentationResult result = new SegmentationResult();
result.setCxxBufferFlag(true);

model.predict(ARGB8888ImageBitmap, result);

// Release the CxxBuffer.
result.releaseCxxBuffer();

// Or return a SegmentationResult directly.
SegmentationResult result = model.predict(ARGB8888ImageBitmap);

// Release model resources.
model.release();
```

- Model calling example 2: call the init function manually at an appropriate point in the program and customize the RuntimeOption.

```java
// imports etc.
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
// Create an empty model.
PaddleSegModel model = new PaddleSegModel();
// Model paths.
String modelFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel";
String paramFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams";
String configFile = "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml";
// Specify the RuntimeOption.
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
// If the camera is in portrait mode, the PP-HumanSeg series needs this flag set.
model.setVerticalScreenFlag(true);
// Initialise with the init function.
model.init(modelFile, paramFile, configFile, option);
// Read Bitmap, run prediction, release resources, etc.
```
For details, please refer to [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java).

## Replace the FastDeploy SDK and Model

Replacing the FastDeploy prediction library and the model is straightforward. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` indicates the version you are currently using. The model is located at `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
- Replace the FastDeploy Android SDK: download or compile the latest FastDeploy Android SDK, unzip it, and put it in the `app/libs` directory. For details, see:
  - [Use FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)

- Steps to replace the PaddleSeg model:
  - Put your PaddleSeg model in `app/src/main/assets/models`;
  - Modify the model path in `app/src/main/res/values/strings.xml`, for example:
```xml
<!-- Modify this path for your model, e.g. models/human_pp_humansegv1_lite_192x192_inference_model -->
<string name="SEGMENTATION_MODEL_DIR_DEFAULT">models/human_pp_humansegv1_lite_192x192_inference_model</string>
```

## Other Documents

If you are interested in more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to the following:
- [Use FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)
- [Use FastDeploy C++ SDK on Android](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_cpp_sdk_on_android.md)
@@ -1,45 +0,0 @@
[English](README.md) | 简体中文
# PaddleSeg Python Deployment Example

This directory provides `infer.py`, an example that quickly deploys PP-LiteSeg on CPU/GPU, and on GPU with Paddle-TensorRT acceleration. Run the script below to complete the deployment.

## Deployment Environment

Before deploying, confirm your software and hardware environment and download the precompiled Python wheel package; see [FastDeploy precompiled library installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装).

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, refer to [Matting model deployment](../../../ppmatting).

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/cpu-gpu/python

# Download the PP-LiteSeg model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar -xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# CPU inference
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
# GPU inference
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
# Paddle-TensorRT inference on GPU (note: the first Paddle-TensorRT run serializes the model, which takes some time; please be patient)
python infer.py --model PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
```
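
For reference, the sketch below shows roughly what `infer.py` does with FastDeploy's Python API. It is a minimal sketch, assuming the `fastdeploy` wheel installed above; the exact Paddle-TensorRT wiring inside `infer.py` may differ slightly across FastDeploy versions.

```python
import cv2
import fastdeploy as fd

# Mirror the --device / --use_trt flags from the commands above.
option = fd.RuntimeOption()
option.use_gpu()          # --device gpu (omit for --device cpu)
option.use_trt_backend()  # --use_trt True; TensorRT path, version-dependent details aside

model_dir = "PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer"
model = fd.vision.segmentation.PaddleSegModel(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/deploy.yaml",
    runtime_option=option)

im = cv2.imread("cityscapes_demo.png")
result = model.predict(im)  # SegmentationResult with a per-pixel label map
vis = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_result.jpg", vis)
```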

The visualized result after running is shown below:
<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg" width="512px" height="256px" />
</div>

## Quick Links
- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](..)
- [PaddleSeg C++ deployment](../cpp)

## FAQ
- [How to convert the SegmentationResult prediction to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
- [How to switch the inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPU (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Build the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Build the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)
examples/vision/segmentation/paddleseg/matting/README.md
@@ -0,0 +1,54 @@
# High-Performance All-Scenario Deployment of PaddleSeg Matting Models with FastDeploy

## 1. Introduction to FastDeploy
**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)** is an **all-scenario**, **flexible and easy-to-use**, **highly efficient** AI inference deployment tool that supports deployment across **cloud, edge, and device**. With FastDeploy, PaddleSeg Matting models can be deployed quickly and easily on more than ten kinds of hardware, including X86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, KunlunXin, Ascend, Rockchip, Amlogic, and Sophgo, with multiple inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, RKNPU2, and SOPHGO.

## 2. Hardware Support Matrix

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|X86 CPU|✅|[Link](cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](cpu-gpu)|✅|✅|
|KunlunXin|✅|[Link](kunlun)|✅|✅|
|Ascend|✅|[Link](ascend)|✅|✅|

## 3. Detailed Usage Documents
- X86 CPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- KunlunXin XPU
  - [Model preparation](kunlun)
  - [Python deployment example](kunlun/README.md)
  - [C++ deployment example](kunlun/README.md)
- Huawei Ascend
  - [Model preparation](ascend)
  - [Python deployment example](ascend/README.md)
  - [C++ deployment example](ascend/README.md)

## 4. FAQ

If you run into problems, check the FAQ collection, search the existing FastDeploy issues, *or file a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues)*:

[FAQ collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
@@ -0,0 +1,31 @@
# High-Performance All-Scenario Deployment of PaddleSeg Matting Models with FastDeploy

## 1. Overview
PaddleSeg supports quickly deploying Matting models with FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware.

## 2. Pre-exported Model List
For convenient testing, the exported PP-Matting series models below are provided for direct download. The accuracy column comes from the model descriptions in PP-Matting (no accuracy data is provided there); see the PP-Matting documentation for details. **Note:** the `deploy.yaml` file records the exported model's `input_shape` and preprocessing configuration; if these do not meet your requirements, re-export the model.

| Model | Parameter Size | Accuracy | Remarks |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - | |
| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - | |
| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - | |
| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - | |
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - | |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - | |

## 3. Exporting Your Own PaddleSeg Deployment Model
### 3.1 Model Versions

Matting models from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop) releases above 2.6 are supported. Models tested in FastDeploy so far:
- [PP-Matting series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [PP-HumanMatting series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [ModNet series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)

### 3.2 Model Export
For exporting PaddleSeg models, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting). **Note:** an exported PaddleSeg model consists of three files, `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`; FastDeploy reads the preprocessing configuration needed at inference time from the yaml file.
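
As a sanity check before deployment, you can peek at the exported configuration. The snippet below is a minimal sketch, assuming the usual layout PaddleSeg emits (a top-level `Deploy` section listing the model files and the preprocessing transforms); the exact field names may vary across PaddleSeg versions.

```python
import yaml

# Inspect the exported deploy.yaml of a downloaded model, e.g. PP-Matting-512.
with open("PP-Matting-512/deploy.yaml") as f:
    cfg = yaml.safe_load(f)

deploy = cfg.get("Deploy", cfg)  # assumed top-level section
print("model: ", deploy.get("model"))
print("params:", deploy.get("params"))
for transform in deploy.get("transforms", []):
    # e.g. {'type': 'Resize', 'target_size': [512, 512]} or {'type': 'Normalize'}
    print(transform)
```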

## 4. Detailed Deployment Examples
- [Python deployment](../cpu-gpu/python)
- [C++ deployment](../cpu-gpu/cpp)
@@ -1,32 +1,10 @@
# High-Performance All-Scenario Deployment of PaddleSeg Matting Models with FastDeploy

## 1. Overview
PaddleSeg supports quickly deploying Matting models with FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware.

## 2. Pre-exported Model List
For convenient testing, the exported PP-Matting series models below are provided for direct download. The accuracy column comes from the model descriptions in PP-Matting (no accuracy data is provided there); see the PP-Matting documentation for details. **Note:** the `deploy.yaml` file records the exported model's `input_shape` and preprocessing configuration; if these do not meet your requirements, re-export the model.

| Model | Parameter Size | Accuracy | Remarks |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
@@ -37,7 +15,17 @@ PaddleSeg通过[FastDeploy](https://github.com/PaddlePaddle/FastDeploy)支持在
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - | |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - | |

## 3. Exporting Your Own PaddleSeg Deployment Model
### 3.1 Model Versions

Matting models from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop) releases above 2.6 are supported. Models tested in FastDeploy so far:
- [PP-Matting series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [PP-HumanMatting series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [ModNet series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)

### 3.2 Model Export
For exporting PaddleSeg models, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting). **Note:** an exported PaddleSeg model consists of three files, `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`; FastDeploy reads the preprocessing configuration needed at inference time from the yaml file.

## 4. Detailed Deployment Examples
- [Python deployment](python)
- [C++ deployment](cpp)
@@ -1,14 +1,11 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy header search paths
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
@@ -1,20 +1,36 @@
[English](README.md) | 简体中文
# PP-Matting CPU-GPU C++ Deployment Example

This directory provides `infer.cc`, an example that quickly deploys PP-Matting on CPU/GPU, KunlunXin, Huawei Ascend, and on GPU with Paddle-TensorRT acceleration.

## 1. Overview
PaddleSeg supports quickly deploying Matting models with FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware.

## 2. Deployment Environment
Before deploying, confirm your software and hardware environment and download the precompiled deployment library; see [FastDeploy precompiled library installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install). **Note:** precompiled libraries are provided only for CPU and GPU; for Huawei Ascend and KunlunXin, build the deployment environment yourself following the document above.

## 3. Model Preparation
Before deploying, prepare the inference model you want to run; you can use a [pre-exported inference model](../README.md) or [export a PaddleSeg deployment model yourself](../README.md).

## 4. Running the Deployment Example
Taking inference on Linux as an example, run the commands below in this directory to build and test. FastDeploy version 1.0.0 or above (x.x.x >= 1.0.0) is required for this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; pick a suitable version from the `FastDeploy precompiled library` page mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz

# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/matting/cpu-gpu/cpp
# # To get the example code from PaddleSeg instead, run
# git clone https://github.com/PaddlePaddle/PaddleSeg.git
# # Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# # git checkout develop
# cd PaddleSeg/deploy/fastdeploy/matting/cpu-gpu/cpp

# Build the deployment example
mkdir build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

@@ -24,7 +40,6 @@ tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg

# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
# GPU inference
@@ -34,7 +49,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```

**Note:** the examples above do not provide a Huawei Ascend command. After building the Ascend deployment environment, you only need to change one line of code: replace `option.UseKunlunXin()` in the `KunlunXinInfer` function of the example file with `option.UseAscend()` to run inference on Huawei Ascend.

The visualized result after running is shown below:
<div width="840">
@@ -45,14 +60,14 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
</div>

The commands above only work on Linux or macOS. For using the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)

## 5. More Guides
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)
- [Python deployment](../python)

## 6. FAQ
- [How to switch the inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPU (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Build the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
@@ -1,24 +1,35 @@
[English](README.md) | 简体中文
# PP-Matting CPU-GPU Python Deployment Example

This directory provides `infer.py`, an example that quickly deploys PP-Matting on CPU/GPU, KunlunXin, Huawei Ascend, and on GPU with Paddle-TensorRT acceleration. Run the script below to complete the deployment.

## 1. Overview
PaddleSeg supports quickly deploying Matting models with FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware.

## 2. Deployment Environment
Before deploying, confirm your software and hardware environment and download the precompiled Python wheel package; see [FastDeploy precompiled library installation](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install). **Note:** precompiled packages are provided only for CPU and GPU; for Huawei Ascend and KunlunXin, build the deployment environment yourself following the document above.

## 3. Model Preparation
Before deploying, prepare the inference model you want to run; you can use a [pre-exported inference model](../README.md) or [export a PaddleSeg deployment model yourself](../README.md).

## 4. Running the Deployment Example

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/matting/cpu-gpu/python
# # To get the example code from PaddleSeg instead, run
# git clone https://github.com/PaddlePaddle/PaddleSeg.git
# # Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# # git checkout develop
# cd PaddleSeg/deploy/fastdeploy/matting/cpu-gpu/python

# Download the PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg

# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
# GPU inference
@@ -28,7 +39,7 @@ python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bg
# KunlunXin XPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
```

**Note:** the examples above do not provide a Huawei Ascend command. After building the Ascend deployment environment, you only need to change one line of code: replace `option.use_kunlunxin()` in the example file with `option.use_ascend()` to run inference on Huawei Ascend.
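
The device switch lives in the example's `build_option` helper. Below is a minimal sketch of that branch, assuming FastDeploy's Python `RuntimeOption` selectors (`use_cpu`, `use_gpu`, `use_kunlunxin`, `use_ascend`); it shows where the one-line change described above goes.

```python
import fastdeploy as fd

def build_option(device: str) -> fd.RuntimeOption:
    # Map the --device flag to a FastDeploy runtime option.
    option = fd.RuntimeOption()
    if device == "gpu":
        option.use_gpu()
    elif device == "kunlunxin":
        option.use_kunlunxin()
    elif device == "ascend":
        option.use_ascend()  # the one-line swap for Huawei Ascend
    else:
        option.use_cpu()
    return option
```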

The visualized result after running is shown below:
<div width="840">
@@ -38,12 +49,12 @@ python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bg
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>

## 5. More Guides
- [PaddleSeg Python API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/semantic_segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](..)
- [PaddleSeg C++ deployment](../cpp)

## 6. FAQ
- [How to convert the SegmentationResult prediction to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
- [How to switch the inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
- [Using Intel GPU (discrete/integrated)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
@@ -51,7 +51,7 @@ def build_option(args):

args = parse_arguments()

# Set up the runtime and load the model
runtime_option = build_option(args)
model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")
@@ -59,12 +59,13 @@ config_file = os.path.join(args.model, "deploy.yaml")
model = fd.vision.matting.PPMatting(
    model_file, params_file, config_file, runtime_option=runtime_option)

# Predict the matting result for one image
im = cv2.imread(args.image)
bg = cv2.imread(args.bg)
result = model.predict(im)
print(result)

# Visualize the result
vis_im = fd.vision.vis_matting(im, result)
vis_im_with_bg = fd.vision.swap_background(im, bg, result)
cv2.imwrite("visualized_result_fg.png", vis_im)
@@ -0,0 +1,31 @@
# High-Performance All-Scenario Deployment of PaddleSeg Matting Models with FastDeploy

## 1. Overview
PaddleSeg supports quickly deploying Matting models with FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware.

## 2. Pre-exported Model List
For convenient testing, the exported PP-Matting series models below are provided for direct download. The accuracy column comes from the model descriptions in PP-Matting (no accuracy data is provided there); see the PP-Matting documentation for details. **Note:** the `deploy.yaml` file records the exported model's `input_shape` and preprocessing configuration; if these do not meet your requirements, re-export the model.

| Model | Parameter Size | Accuracy | Remarks |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - | |
| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - | |
| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - | |
| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - | |
| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - | |
| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - | |

## 3. Exporting Your Own PaddleSeg Deployment Model
### 3.1 Model Versions

Matting models from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg/tree/develop) releases above 2.6 are supported. Models tested in FastDeploy so far:
- [PP-Matting series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [PP-HumanMatting series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)
- [ModNet series](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting)

### 3.2 Model Export
For exporting PaddleSeg models, see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/develop/Matting). **Note:** an exported PaddleSeg model consists of three files, `model.pdmodel`, `model.pdiparams`, and `deploy.yaml`; FastDeploy reads the preprocessing configuration needed at inference time from the yaml file.

## 4. Detailed Deployment Examples
- [Python deployment](../cpu-gpu/python)
- [C++ deployment](../cpu-gpu/cpp)
@@ -1,45 +0,0 @@
[English](README.md) | 简体中文
# Deploying PaddleSeg Models on Rockchip NPUs with FastDeploy

## Rockchip Chips Supported by PaddleSeg
The following chips are supported:
- Rockchip RV1109
- Rockchip RV1126
- Rockchip RK1808

>> **Note:** VeriSilicon is an IP design vendor; it does not ship physical SoC products itself but licenses its IP to chip vendors such as Amlogic and Rockchip. This document therefore applies to chip products licensed with VeriSilicon's NPU IP. As long as a chip product has not substantially modified VeriSilicon's underlying libraries, it can use this document as a reference and tutorial for Paddle Lite inference deployment. In this document, the NPUs in Amlogic SoCs and Rockchip SoCs are collectively called VeriSilicon NPUs.

The Rockchip RV1126 is a codec chip aimed at machine vision in the AI field.

This example uses the RV1126 to show how to deploy PaddleSeg models with FastDeploy.

PaddleSeg supports deploying segmentation models on the RV1126 through FastDeploy, with Paddle Lite as the backend.

## PaddleSeg Models Supported on the Rockchip RV1126

- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
>> **Note:** segmentation models from PaddleSeg releases above 2.6 are supported.

Quantized models currently supported by the RV1126 NPU:
- [PP-LiteSeg series](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

## Pre-exported Quantized Inference Models
For convenient testing, some quantized inference models exported from PaddleSeg are provided below for direct download.

| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |

**Note**
- A quantized PaddleSeg model consists of four files: `model.pdmodel`, `model.pdiparams`, `deploy.yaml`, and `subgraph.txt`. FastDeploy reads the preprocessing configuration needed at inference time from the yaml file; `subgraph.txt` is a configuration file stored for heterogeneous computing.
- If no model in the list meets your requirements, you can export a model adapted to the RV1126 yourself following the tutorial below.

## Exporting PaddleSeg Dynamic-Graph Models as INT8 Models Supported by the RV1126
Model export takes two steps:
1. Export the dynamic-graph model trained with PaddleSeg as a static inference model; see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md). Note that the Rockchip RV1126 only supports INT8.
2. Quantize and compress the inference model to INT8; see [Model Quantization](../../../quantize/README.md) for FastDeploy's quantization methods and one-click auto-compression tool.

## Detailed Deployment Documents

Currently, only C++ deployment is supported on the Rockchip RV1126.

- [C++ deployment](cpp)
@@ -0,0 +1,75 @@
# High-Performance All-Scenario Deployment of PaddleSeg Semantic Segmentation Models with FastDeploy

## 1. Introduction to FastDeploy
**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)** is an **all-scenario**, **flexible and easy-to-use**, **highly efficient** AI inference deployment tool that supports deployment across **cloud, edge, and device**. With FastDeploy, PaddleSeg semantic segmentation models can be deployed quickly and easily on more than ten kinds of hardware, including X86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, KunlunXin, Ascend, Rockchip, Amlogic, and Sophgo, with multiple inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, RKNPU2, and SOPHGO.
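
To give a feel for the workflow before diving into the per-hardware guides, here is a minimal sketch of CPU/GPU inference with FastDeploy's Python API; it assumes the `fastdeploy` wheel is installed and a PaddleSeg model has been exported to the usual three files (`model.pdmodel`, `model.pdiparams`, `deploy.yaml`).

```python
import cv2
import fastdeploy as fd

# Pick a device; the per-hardware guides below cover the other backends.
option = fd.RuntimeOption()
option.use_cpu()  # or option.use_gpu() on NVIDIA hardware

model = fd.vision.segmentation.PaddleSegModel(
    "model.pdmodel", "model.pdiparams", "deploy.yaml",
    runtime_option=option)

im = cv2.imread("cityscapes_demo.png")
result = model.predict(im)  # SegmentationResult with a per-pixel label map
vis = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis.png", vis)
```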

## 2. Hardware Support Matrix

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|X86 CPU|✅|[Link](cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](cpu-gpu)|✅|✅|
|KunlunXin|✅|[Link](kunlun)|✅|✅|
|Ascend|✅|[Link](ascend)|✅|✅|
|Rockchip|✅|[Link](rockchip)|✅|✅|
|Amlogic|✅|[Link](amlogic)|--|✅|
|Sophgo|✅|[Link](sophgo)|✅|✅|

## 3. Detailed Usage Documents
- X86 CPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](cpu-gpu)
  - [Python deployment example](cpu-gpu/python/)
  - [C++ deployment example](cpu-gpu/cpp/)
- KunlunXin XPU
  - [Model preparation](kunlun)
  - [Python deployment example](kunlun/python/)
  - [C++ deployment example](kunlun/cpp/)
- Huawei Ascend
  - [Model preparation](ascend)
  - [Python deployment example](ascend/python/)
  - [C++ deployment example](ascend/cpp/)
- Rockchip
  - [Model preparation](rockchip/)
  - [Python deployment example](rockchip/rknpu2/)
  - [C++ deployment example](rockchip/rknpu2/)
- Amlogic
  - [Model preparation](amlogic/a311d/)
  - [C++ deployment example](amlogic/a311d/cpp/)
- Sophgo
  - [Model preparation](sophgo/)
  - [Python deployment example](sophgo/python/)
  - [C++ deployment example](sophgo/cpp/)

## 4. More Deployment Options

- [Android ARM CPU deployment](android)
- [Serving deployment](serving)
- [Web deployment](web)
- [Automatic model compression tool](quantize)

## 5. FAQ

If you run into problems, check the FAQ collection, search the existing FastDeploy issues, *or file a new [issue](https://github.com/PaddlePaddle/FastDeploy/issues)*:

[FAQ collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
@@ -1,30 +1,17 @@
[English](README.md) | 简体中文

# Deploying PaddleSeg Semantic Segmentation Models on Amlogic NPUs with FastDeploy

## 1. Overview

The Amlogic A311D is an advanced AI application processor. PaddleSeg supports deploying segmentation models on the A311D through FastDeploy, with Paddle Lite as the backend. **Note:** VeriSilicon is an IP design vendor; it does not ship physical SoC products itself but licenses its IP to chip vendors such as Amlogic and Rockchip. This document therefore applies to chip products licensed with VeriSilicon's NPU IP. As long as a chip product has not substantially modified VeriSilicon's underlying libraries, it can use this document as a reference and tutorial for Paddle Lite inference deployment. In this document, the NPUs in Amlogic SoCs and Rockchip SoCs are collectively called VeriSilicon NPUs. The following chips are currently supported:
- Amlogic A311D
- Amlogic C308X
- Amlogic S905D3

This example uses the Amlogic A311D to show how to deploy PaddleSeg models with FastDeploy.

## 2. Pre-exported Model List
| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [PP-LiteSeg-T(STDC1)-cityscapes-without-argmax](https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz)| 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
@@ -32,13 +19,19 @@
**Note**
- A quantized PaddleSeg model consists of four files: `model.pdmodel`, `model.pdiparams`, `deploy.yaml`, and `subgraph.txt`. FastDeploy reads the preprocessing configuration needed at inference time from the yaml file; `subgraph.txt` is a configuration file stored for heterogeneous computing.
- If no model in the list meets your requirements, you can export a model adapted to the A311D yourself following the tutorial below.

## 3. Exporting PaddleSeg Models Supported by the Amlogic A311D

### 3.1 Model Versions
- Segmentation models from [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) releases above 2.6 are supported. Models FastDeploy has verified to deploy successfully on the Amlogic A311D:
  - [PP-LiteSeg series](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/pp_liteseg/README.md)

### 3.2 Exporting PaddleSeg Dynamic-Graph Models as INT8 Models Supported by the A311D
Model export takes two steps:
1. Export the dynamic-graph model trained with PaddleSeg as a static inference model; see [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md). Note that the Amlogic A311D only supports INT8.
2. Quantize and compress the inference model to INT8; see [Model Quantization](../../../quantize/README.md) for FastDeploy's quantization methods and one-click auto-compression tool.

## 4. Detailed Deployment Example

Currently, only C++ deployment is supported on the A311D.
@@ -1,17 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy header search paths
include_directories(${FastDeploy_INCLUDE_DIRS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})

set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/build/install)
@@ -1,28 +1,35 @@
[English](README.md) | 简体中文
# PaddleSeg TIMVX A311D C++ Deployment Example

This directory provides `infer.cc` to help you quickly deploy and accelerate PP-LiteSeg quantized models on the Amlogic A311D.

## 1. Deployment Environment
### 1.1 FastDeploy Cross-Compilation Environment
For the software and hardware requirements and cross-compilation environment setup, see the [FastDeploy Amlogic A311D build guide](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装).

## 2. Model Preparation
1. You can directly deploy a [quantized model provided by FastDeploy](../README.md).
2. If FastDeploy does not provide a quantized model that meets your requirements, you can export or train a quantized model yourself following [Exporting PaddleSeg dynamic-graph models as INT8 models supported by the A311D](../README.md).
3. If the exported or trained model loses accuracy or reports errors, use heterogeneous computing so that part of the model's operators run on the A311D's ARM CPU for debugging and accuracy verification; the file needed for heterogeneous computing is subgraph.txt. For details, see [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md).

## 3. Deploying the Quantized PP-LiteSeg Segmentation Model on the A311D
Follow these steps to deploy the PP-LiteSeg quantized model on the A311D:

1. Copy the compiled library to the current directory with the following commands:
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/semantic_segmentation/amlogic/a311d/cpp
# # To get the example code from PaddleSeg instead, run
# git clone https://github.com/PaddlePaddle/PaddleSeg.git
# # Note: if the fastdeploy test code below is missing on the current branch, switch to the develop branch
# # git checkout develop
# cp -r FastDeploy/build/fastdeploy-timvx/ PaddleSeg/deploy/fastdeploy/semantic_segmentation/amlogic/a311d/cpp
```

2. Download the model and example image needed for deployment into the current directory:
```bash
cd FastDeploy/examples/vision/segmentation/semantic_segmentation/amlogic/a311d/cpp
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
tar -xvf ppliteseg.tar.gz
@@ -33,7 +40,7 @@ cp -r cityscapes_demo.png images

3. Build the deployment example with the following commands:
```bash
cd FastDeploy/examples/vision/segmentation/semantic_segmentation/amlogic/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
@@ -54,6 +61,6 @@ bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID

<img width="640" src="https://user-images.githubusercontent.com/30516196/205544166-9b2719ff-ed82-4908-b90a-095de47392e1.png">

## 4. More Guides
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)