[Model] add pptracking model (#357)

* add override mark

* delete some

* recovery

* recovery

* add tracking

* add tracking py_bind and example

* add pptracking

* add pptracking

* add iomanip header file

* add opencv_video lib

* add python libs package

Signed-off-by: ChaoII <849453582@qq.com>

* complete comments

Signed-off-by: ChaoII <849453582@qq.com>

* add jdeTracker_ member variable

Signed-off-by: ChaoII <849453582@qq.com>

* add 'FASTDEPLOY_DECL' macro

Signed-off-by: ChaoII <849453582@qq.com>

* remove kwargs params

Signed-off-by: ChaoII <849453582@qq.com>

* [Doc] update pptracking docs

* delete 'ENABLE_PADDLE_FRONTEND' switch

* add pptracking unit test

* update pptracking unit test

Signed-off-by: ChaoII <849453582@qq.com>

* modify test video file path and remove trt test

* update unit test model url

* remove 'FASTDEPLOY_DECL' macro

Signed-off-by: ChaoII <849453582@qq.com>

* fix build python packages about pptracking on win32

Signed-off-by: ChaoII <849453582@qq.com>

Signed-off-by: ChaoII <849453582@qq.com>
Co-authored-by: Jason <jiangjiajun@baidu.com>
ChaoII authored 2022-10-26 14:27:55 +08:00, committed by GitHub
parent da7247aa41, commit ba501fd963
38 changed files with 2959 additions and 16 deletions


@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)
# Specify the path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})
add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})


@@ -0,0 +1,79 @@
# PP-Tracking C++ Deployment Example
This directory provides `infer.cc`, an example that quickly deploys PP-Tracking on CPU/GPU, and on GPU with TensorRT acceleration.
Before deployment, confirm the following two steps:
- 1. The hardware and software environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the prebuilt deployment library and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking PP-Tracking inference on Linux as an example, run the following commands in this directory to build and test. If you only need CPU deployment, download the CPU inference library from [FastDeploy C++ Prebuilt Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
```bash
# Download the SDK, the compiled model, and the examples code (the SDK already contains the examples code)
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-0.3.0.tgz
tar xvf fastdeploy-linux-x64-gpu-0.3.0.tgz
cd fastdeploy-linux-x64-gpu-0.3.0/examples/vision/tracking/pptracking/cpp/
mkdir build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../../fastdeploy-linux-x64-gpu-0.3.0
make -j
# Download the PP-Tracking model files and the test video
wget https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
tar -xvf fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/person.mp4
# CPU inference
./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 0
# GPU inference
./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 1
# TensorRT inference on GPU
./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 2
```
The commands above apply to Linux and macOS. For using the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
## PP-Tracking C++ Interface
### PPTracking Class
```c++
fastdeploy::vision::tracking::PPTracking(
    const string& model_file,
    const string& params_file,
    const string& config_file,
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::PADDLE)
```
Loads and initializes a PP-Tracking model, where model_file is the exported Paddle model.
**Parameters**
> * **model_file**(str): path to the model file
> * **params_file**(str): path to the parameters file
> * **config_file**(str): path to the inference deployment configuration file
> * **runtime_option**(RuntimeOption): backend inference configuration; if not set, the default configuration is used
> * **model_format**(ModelFormat): model format, Paddle format by default
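
For orientation, a minimal construction sketch (the directory and file names follow the example in `infer.cc` below; the `RuntimeOption` switches mirror its CPU/GPU/TensorRT variants):

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseCpu();  // or option.UseGpu(); add option.UseTrtBackend() for TensorRT
  auto model = fastdeploy::vision::tracking::PPTracking(
      "fairmot_hrnetv2_w18_dlafpn_30e_576x320/model.pdmodel",
      "fairmot_hrnetv2_w18_dlafpn_30e_576x320/model.pdiparams",
      "fairmot_hrnetv2_w18_dlafpn_30e_576x320/infer_cfg.yml", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }
  std::cout << "PP-Tracking model ready." << std::endl;
  return 0;
}
```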
#### Predict Function
> ```c++
> PPTracking::Predict(cv::Mat* im, MOTResult* result)
> ```
>
> Model prediction interface: takes an input image and directly returns the detection result.
>
> **Parameters**
>
> > * **im**: input image; note it must be in HWC, BGR format
> > * **result**: detection result, including detection boxes, tracking ids, per-box confidences, and object class ids; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the MOTResult description, and the sketch below for reading its fields
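
Continuing the construction sketch above, consuming `MOTResult` might look like this (the member names `boxes`, `ids`, `scores`, `class_ids` follow the vision-results docs linked above; treat them as an assumption and verify there):

```c++
fastdeploy::vision::MOTResult result;
cv::Mat frame = cv::imread("test.jpg");  // any HWC, BGR image or video frame
if (model.Predict(&frame, &result)) {
  // Assumed layout: boxes[i] = {x1, y1, x2, y2}, aligned with ids/scores/class_ids.
  for (size_t i = 0; i < result.boxes.size(); ++i) {
    std::cout << "id=" << result.ids[i] << " score=" << result.scores[i]
              << " box=(" << result.boxes[i][0] << "," << result.boxes[i][1]
              << "," << result.boxes[i][2] << "," << result.boxes[i][3] << ")"
              << std::endl;
  }
}
```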
- [Model Introduction](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)


@@ -0,0 +1,158 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "fastdeploy/vision.h"
#ifdef _WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif
void CpuInfer(const std::string& model_dir, const std::string& video_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "infer_cfg.yml";
  auto model = fastdeploy::vision::tracking::PPTracking(
      model_file, params_file, config_file);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }
  fastdeploy::vision::MOTResult result;
  cv::Mat frame;
  int frame_id = 0;
  cv::VideoCapture capture(video_file);
  // FPS forwarded to the visualization; left at 0 here, but it could be
  // derived from the per-frame prediction time (see the sketch after this file).
  float fps = 0.0f;
  while (capture.read(frame)) {
    if (frame.empty()) {
      break;
    }
    if (!model.Predict(&frame, &result)) {
      std::cerr << "Failed to predict." << std::endl;
      return;
    }
    // Uncomment to print the raw tracking result:
    // std::cout << result.Str() << std::endl;
    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, fps, frame_id);
    cv::imshow("mot", out_img);
    cv::waitKey(30);
    frame_id++;
  }
  capture.release();
  cv::destroyAllWindows();
}
void GpuInfer(const std::string& model_dir, const std::string& video_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "infer_cfg.yml";
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  auto model = fastdeploy::vision::tracking::PPTracking(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }
  fastdeploy::vision::MOTResult result;
  cv::Mat frame;
  int frame_id = 0;
  cv::VideoCapture capture(video_file);
  // FPS forwarded to the visualization; left at 0 here.
  float fps = 0.0f;
  while (capture.read(frame)) {
    if (frame.empty()) {
      break;
    }
    if (!model.Predict(&frame, &result)) {
      std::cerr << "Failed to predict." << std::endl;
      return;
    }
    // Uncomment to print the raw tracking result:
    // std::cout << result.Str() << std::endl;
    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, fps, frame_id);
    cv::imshow("mot", out_img);
    cv::waitKey(30);
    frame_id++;
  }
  capture.release();
  cv::destroyAllWindows();
}
void TrtInfer(const std::string& model_dir, const std::string& video_file) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "infer_cfg.yml";
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  auto model = fastdeploy::vision::tracking::PPTracking(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }
  fastdeploy::vision::MOTResult result;
  cv::Mat frame;
  int frame_id = 0;
  cv::VideoCapture capture(video_file);
  // FPS forwarded to the visualization; left at 0 here.
  float fps = 0.0f;
  while (capture.read(frame)) {
    if (frame.empty()) {
      break;
    }
    if (!model.Predict(&frame, &result)) {
      std::cerr << "Failed to predict." << std::endl;
      return;
    }
    // Uncomment to print the raw tracking result:
    // std::cout << result.Str() << std::endl;
    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, fps, frame_id);
    cv::imshow("mot", out_img);
    cv::waitKey(30);
    frame_id++;
  }
  capture.release();
  cv::destroyAllWindows();
}
int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout
        << "Usage: infer_demo path/to/model_dir path/to/video run_option, "
           "e.g. ./infer_demo ./pptracking_model_dir ./person.mp4 0"
        << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }
  if (std::atoi(argv[3]) == 0) {
    CpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 2) {
    TrtInfer(argv[1], argv[2]);
  }
  return 0;
}
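
The demo leaves `fps` fixed at 0 when calling `VisMOT`. A minimal sketch of deriving it from the measured prediction time, using `std::chrono` (a hypothetical helper added for illustration, not part of this commit):

```c++
#include <chrono>

// Returns the smoothed frames-per-second implied by one Predict() call.
// Hypothetical helper for illustration; not part of the committed example.
float MeasureFps(fastdeploy::vision::tracking::PPTracking& model,
                 cv::Mat* frame, fastdeploy::vision::MOTResult* result,
                 float prev_fps) {
  auto start = std::chrono::steady_clock::now();
  bool ok = model.Predict(frame, result);
  auto end = std::chrono::steady_clock::now();
  if (!ok) return prev_fps;
  float ms = std::chrono::duration<float, std::milli>(end - start).count();
  float fps = ms > 0.0f ? 1000.0f / ms : prev_fps;
  // Exponential smoothing so the displayed value does not jitter per frame.
  return prev_fps > 0.0f ? 0.9f * prev_fps + 0.1f * fps : fps;
}
```

Inside the read loop, `fps = MeasureFps(model, &frame, &result, fps);` would then replace the separate `Predict` call before visualization.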