mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-21 15:49:31 +08:00
[Backend] Add RKNPU2 backend support (#456)
* 10-29: Add cmake config and the rknpu2 backend; add RKNPU to the Runtime fd_type; add PPSeg RKNPU2 inference code and a C++ example; add README docs; rename some comments and variables as requested; fix a bug where some .cc code still used the old function names after renaming; make str(Device::NPU) print NPU instead of UNKNOWN; fix comment formatting in the runtime files; add ENABLE_RKNPU2_BACKEND to the Building Summary output; add rknpu2 support to pybind plus a Python build option; add PPSeg Python code; add and update various docs
* 10-30: Attempt to fix errors when building with CUDA; move the CpuName and CoreMask levels; adjust the PPSeg RKNN inference level; move images to network download; update docs; rename functions in the PPSeg RKNPU2 example and merge it into a single .cc file; fix a logic error in disable_normalize_and_permute; remove unused parameters from RKNPU2 initialization; attempt to reset the Python code; stop including the rknn_api header from rknpu2_config.h to prevent import errors
* 10-31: Update pybind to support the latest rknpu2 backend; re-enable PPSeg Python inference; move the cpuname and coremask levels; attempt to fix an rknpu2 import error; add RKNPU2 model export code and its docs; fix many doc errors; RKNN2_TARGET_SOC no longer needs to be reset after building the fastdeploy repo; fix the error message shown by FastDeploy.cmake when RKNN2_TARGET_SOC is set incorrectly; fix Chinese comments in rknpu2_backend.cc; remove unused comments and debug code; rename Device::NPU to Device::RKNPU as requested, with hardware sharing valid_hardware_backends
* 11-01: Update variable naming; update some docs and function naming

Co-authored-by: Jason <jiangjiajun@baidu.com>
This commit is contained in:
52
examples/vision/segmentation/paddleseg/rknpu2/README.md
Normal file
@@ -0,0 +1,52 @@
# PaddleSeg Model Deployment

## Model Version

- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)

FastDeploy currently supports deploying the following models:

- [U-Net models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/unet/README.md)
- [PP-LiteSeg models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
- [PP-HumanSeg models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/README.md)
- [FCN models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/fcn/README.md)
- [DeepLabV3 models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/deeplabv3/README.md)

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, please refer to [Matting model deployment](../../matting).

## Preparing and Converting PaddleSeg Models

Before deploying on RKNPU, the model must be converted to an RKNN model. The process can be summarized as follows:

* Paddle dynamic-graph model -> ONNX model -> RKNN model.
* For converting a Paddle dynamic-graph model to an ONNX model, see the [PaddleSeg model export guide](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
* For converting an ONNX model to an RKNN model, follow the [conversion guide](../../../../../docs/cn/faq/rknpu2/export.md).

Taking PP-HumanSeg as an example, once the ONNX model is obtained, converting it for RK3588 involves the following steps:

* Write a config.yaml file:

```yaml
model_path: ./portrait_pp_humansegv2_lite_256x144_pretrained.onnx
output_folder: ./
target_platform: RK3588
normalize:
  mean: [0.5,0.5,0.5]
  std: [0.5,0.5,0.5]
outputs: None
```

* Run the conversion script:

```bash
python /path/to/fastdeploy/tools/export.py --config_path=/path/to/fastdeploy/tools/rknpu2/config/ppset_config.yaml
```
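The config file above can also be generated programmatically. The sketch below is an illustration only: the field names and values mirror the yaml shown in this README, and the paths are placeholders you would replace with your own.

```python
# Sketch: generate the RKNN conversion config shown above.
# Field names mirror the yaml in this README; paths are placeholders.
CONFIG_TEMPLATE = """model_path: {model_path}
output_folder: {output_folder}
target_platform: {target_platform}
normalize:
  mean: [0.5,0.5,0.5]
  std: [0.5,0.5,0.5]
outputs: None
"""


def write_config(path, model_path, output_folder="./",
                 target_platform="RK3588"):
    """Write a conversion config file for the export tool."""
    with open(path, "w") as f:
        f.write(CONFIG_TEMPLATE.format(
            model_path=model_path,
            output_folder=output_folder,
            target_platform=target_platform))


write_config("config.yaml",
             "./portrait_pp_humansegv2_lite_256x144_pretrained.onnx")
```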
## Download Pretrained Models

For convenience, some models exported from PaddleSeg are provided below (exported with `--input_shape` specified, `--output_op none`, and `--without_argmax`); developers can download and use them directly.

| Task | Model | Model version (tested) | Size | ONNX/RKNN supported | ONNX/RKNN speed (ms) |
|------|-------|------------------------|------|---------------------|----------------------|
| Segmentation | PP-LiteSeg | [PP_LiteSeg_T_STDC1_cityscapes](https://bj.bcebos.com/fastdeploy/models/rknn2/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_3588.tgz) | - | True/True | 6634/5598 |
| Segmentation | PP-HumanSegV2Lite | [portrait](https://bj.bcebos.com/fastdeploy/models/rknn2/portrait_pp_humansegv2_lite_256x144_inference_model_without_softmax_3588.tgz) | - | True/True | 456/266 |
| Segmentation | PP-HumanSegV2Lite | [human](https://bj.bcebos.com/fastdeploy/models/rknn2/human_pp_humansegv2_lite_192x192_pretrained_3588.tgz) | - | True/True | 496/256 |

## Detailed Deployment Docs

- [RKNN deployment overview](../../../../../docs/cn/faq/rknpu2.md)
- [C++ deployment](cpp)
- [Python deployment](python)
@@ -0,0 +1,36 @@
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)
project(rknpu_test)

set(CMAKE_CXX_STANDARD 14)

# Path to the downloaded and extracted FastDeploy SDK
set(FASTDEPLOY_INSTALL_DIR "thirdpartys/fastdeploy-0.0.3")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeployConfig.cmake)
include_directories(${FastDeploy_INCLUDE_DIRS})
add_executable(rknpu_test infer.cc)
target_link_libraries(rknpu_test
    ${FastDeploy_LIBS}
)

set(CMAKE_INSTALL_PREFIX ${CMAKE_SOURCE_DIR}/build/install)

install(TARGETS rknpu_test DESTINATION ./)

install(DIRECTORY model DESTINATION ./)
install(DIRECTORY images DESTINATION ./)

# Bundle the FastDeploy and third-party runtime libraries into the install tree
file(GLOB FASTDEPLOY_LIBS ${FASTDEPLOY_INSTALL_DIR}/lib/*)
message("${FASTDEPLOY_LIBS}")
install(PROGRAMS ${FASTDEPLOY_LIBS} DESTINATION lib)

file(GLOB ONNXRUNTIME_LIBS ${FASTDEPLOY_INSTALL_DIR}/third_libs/install/onnxruntime/lib/*)
install(PROGRAMS ${ONNXRUNTIME_LIBS} DESTINATION lib)

install(DIRECTORY ${FASTDEPLOY_INSTALL_DIR}/third_libs/install/opencv/lib DESTINATION ./)

file(GLOB PADDLETOONNX_LIBS ${FASTDEPLOY_INSTALL_DIR}/third_libs/install/paddle2onnx/lib/*)
install(PROGRAMS ${PADDLETOONNX_LIBS} DESTINATION lib)

file(GLOB RKNPU2_LIBS ${FASTDEPLOY_INSTALL_DIR}/third_libs/install/rknpu2_runtime/RK3588/lib/*)
install(PROGRAMS ${RKNPU2_LIBS} DESTINATION lib)
84
examples/vision/segmentation/paddleseg/rknpu2/cpp/README.md
Normal file
@@ -0,0 +1,84 @@
# PaddleSeg C++ Deployment Example

This directory shows how to deploy PaddleSeg models on RKNPU2, using PP-HumanSeg as the example.

Before deploying, confirm the following two steps:

1. The hardware and software environment meets the requirements.
2. Depending on your development environment, download the prebuilt deployment library or build the FastDeploy repository from source.

For both steps, refer to [Building the RKNPU2 deployment library](../../../../../../docs/cn/build_and_install/rknpu2.md).

## Generating the Basic Directory Layout

The example consists of the following parts:

```text
.
├── CMakeLists.txt
├── build        # build directory
├── images       # directory for test images
├── infer.cc
├── model        # directory for model files
└── thirdpartys  # directory for the SDK
```

First, create the directory structure:

```bash
mkdir build
mkdir images
mkdir model
mkdir thirdpartys
```
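Equivalently, the four directories above can be created in a single command:

```shell
mkdir -p build images model thirdpartys
```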
## Building

### Build the SDK and copy it into thirdpartys

Refer to [Building the RKNPU2 deployment library](../../../../../../docs/cn/build_and_install/rknpu2.md) to build the SDK. After the build completes, a fastdeploy-0.0.3 directory is generated under build; move it into the thirdpartys directory.

### Copy the model and config files into model

The Paddle dynamic-graph model -> Paddle static-graph model -> ONNX model conversion produces an ONNX file and a corresponding yaml config file; put the config file into the model directory.

The converted RKNN model file must also be copied into model. A pre-converted file is provided; download it with the commands below (this model targets RK3588; for RK3568 you need to [convert the PPSeg RKNN model](../README.md) again).

```bash
cd model
wget https://bj.bcebos.com/fastdeploy/models/rknn2/human_pp_humansegv2_lite_192x192_pretrained_3588.tgz
tar xvf human_pp_humansegv2_lite_192x192_pretrained_3588.tgz
cp -r ./human_pp_humansegv2_lite_192x192_pretrained_3588 ./model
```

### Prepare test images in the images directory

```bash
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip -qo images.zip
```

### Build the example

```bash
cd build
cmake ..
make -j8
make install
```

## Running the Example

```bash
cd ./build/install
./rknpu_test
```

## Result

After running, a human_pp_humansegv2_lite_npu_result.jpg file is generated in the install directory, as shown below:



## Notes

RKNPU expects model input in NHWC format, and the image normalization step is embedded into the model during RKNN conversion. Therefore, when deploying with FastDeploy, call `DisableNormalizeAndPermute` (C++) or `disable_normalize_and_permute` (Python) first to disable normalization and the data-layout conversion in the preprocessing stage.

- [Model introduction](../../)
- [Python deployment](../python)
- [Converting PPSeg RKNN models](../README.md)
84
examples/vision/segmentation/paddleseg/rknpu2/cpp/infer.cc
Normal file
@@ -0,0 +1,84 @@
#include <ctime>  // clock(), CLOCKS_PER_SEC
#include <iostream>
#include <string>
#include "fastdeploy/vision.h"

void InferHumanPPHumansegv2Lite(const std::string& device = "cpu");

int main() {
  InferHumanPPHumansegv2Lite("npu");
  return 0;
}

fastdeploy::RuntimeOption GetOption(const std::string& device) {
  auto option = fastdeploy::RuntimeOption();
  if (device == "npu") {
    option.UseRKNPU2();
  } else {
    option.UseCpu();
  }
  return option;
}

fastdeploy::ModelFormat GetFormat(const std::string& device) {
  auto format = fastdeploy::ModelFormat::ONNX;
  if (device == "npu") {
    format = fastdeploy::ModelFormat::RKNN;
  } else {
    format = fastdeploy::ModelFormat::ONNX;
  }
  return format;
}

std::string GetModelPath(std::string& model_path, const std::string& device) {
  if (device == "npu") {
    model_path += "rknn";
  } else {
    model_path += "onnx";
  }
  return model_path;
}

void InferHumanPPHumansegv2Lite(const std::string& device) {
  std::string model_file =
      "./model/human_pp_humansegv2_lite_192x192_pretrained_3588/"
      "human_pp_humansegv2_lite_192x192_pretrained_3588.";
  std::string params_file;
  std::string config_file =
      "./model/human_pp_humansegv2_lite_192x192_pretrained_3588/deploy.yaml";

  fastdeploy::RuntimeOption option = GetOption(device);
  fastdeploy::ModelFormat format = GetFormat(device);
  model_file = GetModelPath(model_file, device);
  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option, format);

  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }
  auto image_file = "./images/portrait_heng.jpg";
  auto im = cv::imread(image_file);

  // Normalization and HWC->CHW permutation are embedded in the RKNN model,
  // so disable them in the preprocessing stage when running on the NPU.
  if (device == "npu") {
    model.DisableNormalizeAndPermute();
  }

  fastdeploy::vision::SegmentationResult res;
  // Note: clock() measures CPU time, not wall-clock time.
  clock_t start = clock();
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  clock_t end = clock();
  auto dur = (double)(end - start);
  printf("infer_human_pp_humansegv2_lite_npu use time:%f\n",
         (dur / CLOCKS_PER_SEC));

  std::cout << res.Str() << std::endl;
  auto vis_im = fastdeploy::vision::VisSegmentation(im, res);
  cv::imwrite("human_pp_humansegv2_lite_npu_result.jpg", vis_im);
  std::cout
      << "Visualized result saved in ./human_pp_humansegv2_lite_npu_result.jpg"
      << std::endl;
}
@@ -0,0 +1,44 @@
# PaddleSeg Python Deployment Example

Before deploying, confirm the following:

1. The hardware and software environment meets the requirements; see the [FastDeploy environment requirements](../../../../../../docs/cn/build_and_install/rknpu2.md).

[Note] If you are deploying **PP-Matting**, **PP-HumanMatting**, or **ModNet**, please refer to [Matting model deployment](../../../matting).

This directory provides `infer.py`, a quick example of deploying PP-HumanSeg on RKNPU. Run the following script to complete the deployment:

```bash
# Download the example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python

# Download the model
wget https://bj.bcebos.com/fastdeploy/models/rknn2/human_pp_humansegv2_lite_192x192_pretrained_3588.tgz
tar xvf human_pp_humansegv2_lite_192x192_pretrained_3588.tgz

# Download a test image
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip

# Run inference
python3 infer.py --model_file ./human_pp_humansegv2_lite_192x192_pretrained_3588/human_pp_humansegv2_lite_192x192_pretrained_3588.rknn \
                 --config_file ./human_pp_humansegv2_lite_192x192_pretrained_3588/deploy.yaml \
                 --image images/portrait_heng.jpg
```

The visualized result is shown below:

<div align="center">
<img src="https://user-images.githubusercontent.com/16222477/191712880-91ae128d-247a-43e0-b1e3-cafae78431e0.jpg" width="512" height="256" />
</div>

## Notes

RKNPU expects model input in NHWC format, and the image normalization step is embedded into the model during RKNN conversion. Therefore, when deploying with FastDeploy, call `DisableNormalizeAndPermute` (C++) or `disable_normalize_and_permute` (Python) first to disable normalization and the data-layout conversion in the preprocessing stage.
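To make concrete what is being disabled, here is a NumPy sketch of the two host-side preprocessing steps. This is an illustration only, not FastDeploy's actual implementation; the mean/std of 0.5 are taken from the conversion config in the model README, and the 192x192 shape matches the example model.

```python
import numpy as np

# A dummy HWC (height, width, channels) image, as cv2.imread would return.
img = np.random.randint(0, 256, size=(192, 192, 3), dtype=np.uint8)

# What default preprocessing would do for ONNX/CPU inference:
# normalize with mean/std of 0.5, then permute HWC -> CHW.
normalized = (img.astype(np.float32) / 255.0 - 0.5) / 0.5
chw = np.transpose(normalized, (2, 0, 1))
assert chw.shape == (3, 192, 192)

# For the RKNN model, both steps are baked into the model itself, which
# expects NHWC input, so on the host we only add a batch dimension:
nhwc = img[np.newaxis, ...]
assert nhwc.shape == (1, 192, 192, 3)
```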
## Other Documents

- [PaddleSeg model introduction](..)
- [PaddleSeg C++ deployment](../cpp)
- [Prediction result description](../../../../../../docs/api/vision_results/)
- [Converting PPSeg RKNN models](../README.md)
@@ -0,0 +1,44 @@
import fastdeploy as fd
import cv2


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_file", required=True, help="Path of PaddleSeg model.")
    parser.add_argument(
        "--config_file", required=True, help="Path of PaddleSeg config.")
    parser.add_argument(
        "--image", type=str, required=True, help="Path of test image file.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()
    option.use_rknpu2()
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model_file = args.model_file
params_file = ""
config_file = args.config_file
model = fd.vision.segmentation.PaddleSegModel(
    model_file,
    params_file,
    config_file,
    runtime_option=runtime_option,
    model_format=fd.ModelFormat.RKNN)

# Normalization and layout conversion are embedded in the RKNN model,
# so disable them in the preprocessing stage.
model.disable_normalize_and_permute()

# Predict the segmentation result
im = cv2.imread(args.image)
result = model.predict(im.copy())
print(result)

# Visualize the result
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_img.png", vis_im)