[Backend] Add RKNPU2 backend support (#456)

* 10-29/14:05
* Add the CMake build support
* Add the rknpu2 backend

* 10-29/14:43
* Add RKNPU handling to Runtime fd_type

* 10-29/15:02
* Add PPSeg RKNPU2 inference code

* 10-29/15:46
* Add the PPSeg RKNPU2 C++ example

* 10-29/15:51
* Add the README documentation

* 10-29/15:51
* Revise some comments and variable names as requested

* 10-29/15:51
* Fix a bug where some code in the .cc files still used the old function names after the rename

* 10-29/22:32
* str(Device::NPU) now prints NPU instead of UNKNOWN
* Fix the comment format in the runtime files
* Add ENABLE_RKNPU2_BACKEND to the Building Summary output
* Add rknpu2 support to pybind (see the sketch after this block)
* Add the Python build option
* Add the PPSeg Python code
* Add and update various docs
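
For the pybind item above, here is a minimal sketch of what such a binding could look like, assuming pybind11 and the `rknpu2::CpuName`/`rknpu2::CoreMask` enums mentioned later in this log; the function name, header path, and enum values are illustrative assumptions, not this PR's exact code:

```cpp
// Hypothetical pybind11 binding sketch; names are assumptions, not this PR's code.
#include <pybind11/pybind11.h>
#include "fastdeploy/runtime.h"  // assumed umbrella header for the enums

void BindRKNPU2(pybind11::module& m) {
  // Expose the SoC / core-mask enums so Python callers can select a target.
  pybind11::enum_<fastdeploy::rknpu2::CpuName>(m, "CpuName")
      .value("RK356X", fastdeploy::rknpu2::CpuName::RK356X)
      .value("RK3588", fastdeploy::rknpu2::CpuName::RK3588);
  pybind11::enum_<fastdeploy::rknpu2::CoreMask>(m, "CoreMask")
      .value("RKNN_NPU_CORE_AUTO",
             fastdeploy::rknpu2::CoreMask::RKNN_NPU_CORE_AUTO);
}
```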

* 10-30/14:11
* Attempt to fix errors when building with CUDA

* 10-30/19:27
* Change the level at which CpuName and CoreMask are defined
* Adjust the PPSeg RKNN inference hierarchy
* Move the images online so they are downloaded from the network

* 10-30/19:39
* Update the docs
* Update the function naming convention in the PPSeg rknpu2 example
* Merge the PPSeg rknpu2 example into a single .cc file
* Fix a logic error in the disable_normalize_and_permute handling
* Remove unused parameters from rknpu2 initialization

* 10-30/19:39
* Attempt to reset the Python code

* 10-30/10:16
* rknpu2_config.h no longer includes the rknn_api header, to prevent import errors

* 10-31/14:31
* Update pybind to support the latest rknpu2 backend
* Re-enable PPSeg Python inference
* Change the level at which CpuName and CoreMask are defined

* 10-31/15:35
* Attempt to fix rknpu2 import errors

* 10-31/19:00
* Add the RKNPU2 model export code and its documentation
* Fix a large number of documentation errors

* 10-31/19:00
* RKNN2_TARGET_SOC no longer needs to be set again after the fastdeploy repo has been built

* 10-31/19:26
* Fix some incorrect documentation

* 10-31/19:26
* Restore parts that were deleted by mistake
* Fix various documentation errors
* Fix FastDeploy.cmake printing a misleading message when RKNN2_TARGET_SOC is set incorrectly
* Fix the Chinese comments left in rknpu2_backend.cc

* 10-31/20:45
* Delete unused comments

* 10-31/20:45
* Rename Device::NPU to Device::RKNPU as requested; devices now share valid_hardware_backends
* Delete unused comments and debug code

* 11-01/09:45
* Update the variable naming convention

* 11-01/10:16
* Revise some docs and the function naming convention

Co-authored-by: Jason <jiangjiajun@baidu.com>
Commit 4ffcfbe726 (parent bb00e0757e) by Zheng_Bicheng, 2022-11-01 11:14:05 +08:00
37 changed files with 1567 additions and 74 deletions

@@ -28,6 +28,7 @@ PaddleSegModel::PaddleSegModel(const std::string& model_file,
   config_file_ = config_file;
   valid_cpu_backends = {Backend::OPENVINO, Backend::PDINFER, Backend::ORT, Backend::LITE};
   valid_gpu_backends = {Backend::PDINFER, Backend::ORT, Backend::TRT};
+  valid_rknpu_backends = {Backend::RKNPU2};
   runtime_option = custom_option;
   runtime_option.model_format = model_format;
   runtime_option.model_file = model_file;
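
The new valid_rknpu_backends list is what lets the generic backend-selection logic route a PaddleSegModel to RKNPU2 when the runtime option requests that device. A minimal sketch of the caller side, assuming a `UseRKNPU2()` option and the `CpuName`/`CoreMask` enums referenced in this PR's log; the exact signatures are assumptions, not verified against this commit:

```cpp
// Hypothetical backend-selection sketch; names follow this PR's log but the
// exact signatures are assumptions.
#include "fastdeploy/vision.h"

fastdeploy::RuntimeOption MakeRKNPU2Option() {
  fastdeploy::RuntimeOption option;
  // Pick the target SoC and which NPU cores to schedule onto.
  option.UseRKNPU2(fastdeploy::rknpu2::CpuName::RK3588,
                   fastdeploy::rknpu2::CoreMask::RKNN_NPU_CORE_AUTO);
  // A PaddleSegModel built with this option passes the backend check because
  // Backend::RKNPU2 is now listed in valid_rknpu_backends.
  return option;
}
```
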
@@ -67,16 +68,17 @@ bool PaddleSegModel::BuildPreprocessPipelineFromConfig() {
     FDASSERT(op.IsMap(),
              "Require the transform information in yaml be Map type.");
     if (op["type"].as<std::string>() == "Normalize") {
-      std::vector<float> mean = {0.5, 0.5, 0.5};
-      std::vector<float> std = {0.5, 0.5, 0.5};
-      if (op["mean"]) {
-        mean = op["mean"].as<std::vector<float>>();
+      if(!(this->disable_normalize_and_permute)){
+        std::vector<float> mean = {0.5, 0.5, 0.5};
+        std::vector<float> std = {0.5, 0.5, 0.5};
+        if (op["mean"]) {
+          mean = op["mean"].as<std::vector<float>>();
+        }
+        if (op["std"]) {
+          std = op["std"].as<std::vector<float>>();
+        }
+        processors_.push_back(std::make_shared<Normalize>(mean, std));
       }
-      if (op["std"]) {
-        std = op["std"].as<std::vector<float>>();
-      }
-      processors_.push_back(std::make_shared<Normalize>(mean, std));
     } else if (op["type"].as<std::string>() == "Resize") {
       yml_contain_resize_op = true;
       const auto& target_size = op["target_size"];
@@ -101,7 +103,7 @@ bool PaddleSegModel::BuildPreprocessPipelineFromConfig() {
   if (input_height == -1 || input_width == -1) {
     FDWARNING << "The exported PaddleSeg model is with dynamic shape input, "
               << "which is not supported by ONNX Runtime and Tensorrt. "
-              << "Only OpenVINO and Paddle Inference are available now. "
+              << "Only OpenVINO and Paddle Inference are available now. "
               << "For using ONNX Runtime or Tensorrt, "
               << "Please refer to https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export.md"
               << " to export model with fixed input shape."
@@ -130,7 +132,9 @@ bool PaddleSegModel::BuildPreprocessPipelineFromConfig() {
                 << "." << std::endl;
     }
   }
-  processors_.push_back(std::make_shared<HWC2CHW>());
+  if(!(this->disable_normalize_and_permute)){
+    processors_.push_back(std::make_shared<HWC2CHW>());
+  }
   return true;
 }
@@ -357,6 +361,14 @@ bool PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result) {
   return true;
 }
+
+void PaddleSegModel::DisableNormalizeAndPermute(){
+  this->disable_normalize_and_permute = true;
+  // Rebuild the pipeline; if the configuration file was already loaded during preprocessing, the flag would otherwise never take effect.
+  if (!BuildPreprocessPipelineFromConfig()) {
+    FDERROR << "Failed to build preprocess pipeline from configuration file." << std::endl;
+  }
+}
 } // namespace segmentation
 } // namespace vision
 } // namespace fastdeploy
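
To show how the pieces fit together, here is a caller-side sketch that selects the RKNPU2 backend, disables the host-side Normalize/HWC2CHW steps via the new API, and runs prediction. `UseRKNPU2()` and `ModelFormat::RKNN` are assumptions based on this PR's log, not verified signatures; the file names are placeholders:

```cpp
// Hypothetical caller-side sketch; UseRKNPU2 and ModelFormat::RKNN are
// assumed from this PR's log, not verified against this commit.
#include <iostream>
#include <opencv2/opencv.hpp>
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseRKNPU2();  // assumed helper that requests Backend::RKNPU2

  fastdeploy::vision::segmentation::PaddleSegModel model(
      "model.rknn", "", "deploy.yaml", option,
      fastdeploy::ModelFormat::RKNN);  // assumed format enum value
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize model." << std::endl;
    return -1;
  }

  // Skip Normalize/HWC2CHW on the host and rebuild the pipeline, matching
  // the guarded branches added in the hunks above.
  model.DisableNormalizeAndPermute();

  cv::Mat im = cv::imread("input.jpg");
  fastdeploy::vision::SegmentationResult result;
  if (!model.Predict(&im, &result)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;
  return 0;
}
```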