[Docs] Pick seg fastdeploy docs from PaddleSeg (#1482)

* [Docs] Pick seg fastdeploy docs from PaddleSeg

* [Docs] update seg docs

* [Docs] Add c&csharp examples for seg

* [Docs] Add c&csharp examples for seg

* [Doc] Update paddleseg README.md

* Update README.md
This commit is contained in:
DefTruth
2023-03-17 11:22:46 +08:00
committed by GitHub
parent 3b1343c726
commit 5b143219ce
177 changed files with 1019 additions and 815 deletions

@@ -0,0 +1,17 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)
# Path to the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
set(ENABLE_LITE_BACKEND OFF)
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
# Add the FastDeploy header directories
include_directories(${FASTDEPLOY_INCS})
add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})

@@ -0,0 +1,71 @@
[English](README.md) | 简体中文
# PaddleSeg SOPHGO C++ Deployment Example
This directory provides `infer.cc`, an example of quickly deploying PP-LiteSeg with hardware acceleration on a SOPHGO BM1684x board.
## 1. Prepare the Deployment Environment
Before deploying, you need to build the inference library for SOPHGO hardware yourself; refer to [SOPHGO hardware deployment environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#算能硬件部署环境).
## 2. Prepare the Deployment Model
Before deploying, prepare the inference model you want to run. You can use a [pre-exported inference model](../README.md) or [export a PaddleSeg deployment model yourself](../README.md).
## 3. Directory Layout
This example consists of the following parts:
```text
.
├── CMakeLists.txt
├── fastdeploy-sophgo  # precompiled FastDeploy SDK
├── images             # folder for test images
├── infer.cc
└── model              # folder for model files
```
## 4. Run the Deployment Example
### 4.1 Compile FastDeploy
Refer to [compiling the SOPHGO deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md) to build the SDK. After compilation finishes, a fastdeploy-sophgo directory is generated under the build directory; copy fastdeploy-sophgo into the current directory.
### 4.2 Download the Deployment Example Code
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/semantic_segmentation/sophgo/cpp

# If you prefer to fetch the example code from PaddleSeg instead, run:
# git clone https://github.com/PaddlePaddle/PaddleSeg.git
# # Note: if the fastdeploy test code below cannot be found on the current branch, switch to the develop branch
# git checkout develop
# cd PaddleSeg/deploy/fastdeploy/semantic_segmentation/sophgo/cpp
```
### 4.3 Copy the Model and Config Files into the model Folder
To convert a Paddle model into a SOPHGO bmodel, follow the conversion steps in this [guide](../README.md).
Then copy the converted SOPHGO bmodel file into the model folder.
### 4.4 Put a Test Image into the images Folder
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp cityscapes_demo.png ./images
```
### 4.5 Compile the Example
```bash
mkdir -p build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-sophgo
make
```
### 4.6 Run the Example
```bash
cd ..
./build/infer_demo model images/cityscapes_demo.png
```
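Note that `infer.cc` resizes every input image to a fixed 512x512 before prediction, because a bmodel's input shape is baked in at compile time (the demo does this with `cv::resize`). As a self-contained illustration of the idea, here is a minimal nearest-neighbor resize over a raw single-channel, row-major buffer; `ResizeNearest` and its layout are our own sketch, not part of the FastDeploy or OpenCV API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Nearest-neighbor resize of a single-channel, row-major image buffer
// (hypothetical helper; the demo itself relies on cv::resize instead).
std::vector<uint8_t> ResizeNearest(const std::vector<uint8_t>& src,
                                   int src_h, int src_w,
                                   int dst_h, int dst_w) {
  std::vector<uint8_t> dst(static_cast<size_t>(dst_h) * dst_w);
  for (int y = 0; y < dst_h; ++y) {
    int sy = y * src_h / dst_h;  // map destination row to nearest source row
    for (int x = 0; x < dst_w; ++x) {
      int sx = x * src_w / dst_w;  // map destination column to source column
      dst[static_cast<size_t>(y) * dst_w + x] =
          src[static_cast<size_t>(sy) * src_w + sx];
    }
  }
  return dst;
}
```

Whatever resize is used, it must match the spatial size the bmodel was compiled for, or `Predict` will fail on the device.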
## 5. More Guides
- [PaddleSeg C++ API reference](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of deploying PaddleSeg models with FastDeploy](../../)
- [Python deployment](../python)
- [Model conversion](../README.md)

@@ -0,0 +1,69 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <iostream>
#include <string>

#include "fastdeploy/vision.h"

void SophgoInfer(const std::string& model_dir, const std::string& image_file) {
  std::string model_file = model_dir + "/pp_liteseg_1684x_f32.bmodel";
  std::string params_file;
  std::string config_file = model_dir + "/deploy.yaml";

  auto option = fastdeploy::RuntimeOption();
  option.UseSophgo();
  auto model_format = fastdeploy::ModelFormat::SOPHGO;

  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option, model_format);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  // model.GetPreprocessor().DisableNormalizeAndPermute();

  fastdeploy::TimeCounter tc;
  tc.Start();
  auto im_org = cv::imread(image_file);
  // The input shape of a bmodel is fixed at compile time, so resize the image
  // to the expected 512x512 input before prediction.
  int new_width = 512;
  int new_height = 512;
  cv::Mat im;
  cv::resize(im_org, im, cv::Size(new_width, new_height), 0, 0,
             cv::INTER_LINEAR);
  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  auto vis_im = fastdeploy::vision::VisSegmentation(im, res);
  tc.End();
  tc.PrintInfo("PPSeg in Sophgo");
  cv::imwrite("infer_sophgo.jpg", vis_im);
  std::cout << "Visualized result saved in ./infer_sophgo.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 3) {
    std::cout << "Usage: infer_demo path/to/model_dir path/to/image, "
                 "e.g. ./infer_demo ./model ./images/cityscapes_demo.png"
              << std::endl;
    return -1;
  }
  SophgoInfer(argv[1], argv[2]);
  return 0;
}
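
Beyond visualization, the `SegmentationResult` filled in by `Predict` can be consumed directly; it exposes a flat, row-major `label_map` of per-pixel class ids together with its `shape`. The sketch below computes per-class pixel counts from such a result. It uses a stand-in `SegResult` struct so the example compiles without the FastDeploy SDK; the struct mirrors only the assumed `label_map`/`shape` layout and is not the real `fastdeploy::vision::SegmentationResult` type:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Minimal stand-in for fastdeploy::vision::SegmentationResult (assumption:
// a flat row-major label_map plus its {height, width} shape).
struct SegResult {
  std::vector<uint8_t> label_map;
  std::vector<int64_t> shape;  // {height, width}
};

// Count how many pixels were assigned to each class id.
std::map<uint8_t, int64_t> CountClassPixels(const SegResult& res) {
  std::map<uint8_t, int64_t> counts;
  for (uint8_t label : res.label_map) {
    ++counts[label];
  }
  return counts;
}
```

Dividing each count by `shape[0] * shape[1]` then gives the fraction of the image covered by each class, which is often more useful than the rendered overlay when post-processing results on-device.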