[Model] add tracking trail on vis_mot (#461)

* add override mark

* delete some

* recovery

* recovery

* add tracking

* add tracking py_bind and example

* add pptracking

* add pptracking

* add iomanip header file

* add opencv_video lib

* add python libs package

Signed-off-by: ChaoII <849453582@qq.com>

* complete comments

Signed-off-by: ChaoII <849453582@qq.com>

* add jdeTracker_ member variable

Signed-off-by: ChaoII <849453582@qq.com>

* add 'FASTDEPLOY_DECL' macro

Signed-off-by: ChaoII <849453582@qq.com>

* remove kwargs params

Signed-off-by: ChaoII <849453582@qq.com>

* [Doc] update pptracking docs

* delete 'ENABLE_PADDLE_FRONTEND' switch

* add pptracking unit test

* update pptracking unit test

Signed-off-by: ChaoII <849453582@qq.com>

* modify test video file path and remove trt test

* update unit test model url

* remove 'FASTDEPLOY_DECL' macro

Signed-off-by: ChaoII <849453582@qq.com>

* fix build python packages about pptracking on win32

Signed-off-by: ChaoII <849453582@qq.com>

* update comment

Signed-off-by: ChaoII <849453582@qq.com>

* add pptracking model explain

Signed-off-by: ChaoII <849453582@qq.com>

* add tracking trail on vis_mot

* add tracking trail

* modify code per review suggestions

* remove unused import

* fix import bug

Signed-off-by: ChaoII <849453582@qq.com>
Co-authored-by: Jason <jiangjiajun@baidu.com>
ChaoII
2022-11-03 09:57:07 +08:00
committed by GitHub
parent 328212f270
commit 22d60fdadf
16 changed files with 208 additions and 116 deletions

View File

@@ -37,4 +37,3 @@ fastdeploy.vision.MOTResult
- **ids**(list of list(float)): member variable; the ids of all objects in a single frame, with the same element count as `boxes`
- **scores**(list of float): member variable; the confidence of every object detected in a single frame
- **class_ids**(list of int): member variable; the class of every object detected in a single frame
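The field layout documented above can be mirrored by a small pure-Python sketch. This is an illustrative stand-in only (the real `MOTResult` is the C++ struct in `result.h`); the class name `MOTResultSketch` is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical pure-Python mirror of the MOTResult fields described above.
@dataclass
class MOTResultSketch:
    boxes: List[List[float]] = field(default_factory=list)   # one [x1, y1, x2, y2] per object
    ids: List[int] = field(default_factory=list)             # tracking id per object, same length as boxes
    scores: List[float] = field(default_factory=list)        # detection confidence per object
    class_ids: List[int] = field(default_factory=list)       # class label per object

r = MOTResultSketch()
r.boxes.append([10.0, 20.0, 50.0, 80.0])
r.ids.append(1)
r.scores.append(0.9)
r.class_ids.append(0)
```

All four lists stay index-aligned: element `i` of each list describes the same tracked object.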

View File

@@ -3,7 +3,7 @@
 This directory provides deployment for all kinds of vision models, covering the following task types:
 | Task Type | Description | Result Struct |
 |:----------------|:------------------------------------------------|:--------------------------------------------------------------------------------------|
 | Detection | object detection: input an image, detect object locations, return box coordinates, classes and confidence | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
 | Segmentation | semantic segmentation: input an image, return the class and confidence of every pixel | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
 | Classification | image classification: input an image, return its class and confidence | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
@@ -12,6 +12,8 @@
 | FaceRecognition | face recognition: input an image, return a face-feature embedding usable for similarity computation | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
 | Matting | matting: input an image, return the alpha value of every foreground pixel | [MattingResult](../../docs/api/vision_results/matting_result.md) |
 | OCR | text-box detection, classification and recognition: input an image, return text-box coordinates, orientation classes and text content | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
+| MOT | multi-object tracking: input an image, detect object locations, return box coordinates, object ids and class confidence | [MOTResult](../../docs/api/vision_results/mot_result.md) |
 ## FastDeploy API Design
 Vision models share a fairly uniform task paradigm. When designing the APIs (both C++ and Python), FastDeploy splits vision-model deployment into four steps:

View File

@@ -33,11 +33,13 @@ void CpuInfer(const std::string& model_dir, const std::string& video_file) {
   }
   fastdeploy::vision::MOTResult result;
+  fastdeploy::vision::tracking::TrailRecorder recorder;
+  // During each prediction, data is inserted into the recorder. As the number of
+  // predictions increases, memory keeps growing; cancel the insertion via 'UnbindRecorder'.
+  // int count = 0;  // unbind condition
+  model.BindRecorder(&recorder);
   cv::Mat frame;
-  int frame_id = 0;
   cv::VideoCapture capture(video_file);
-  // calculate fps from the time of each prediction
-  float fps = 0.0f;
   while (capture.read(frame)) {
     if (frame.empty()) {
       break;
@@ -46,12 +48,14 @@ void CpuInfer(const std::string& model_dir, const std::string& video_file) {
       std::cerr << "Failed to predict." << std::endl;
       return;
     }
+    // For example, this cancels the trail data binding after ten frames:
+    // if (count++ == 10) model.UnbindRecorder();
     // std::cout << result.Str() << std::endl;
-    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, fps, frame_id);
+    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, 0.0, &recorder);
     cv::imshow("mot", out_img);
     cv::waitKey(30);
-    frame_id++;
   }
+  model.UnbindRecorder();
   capture.release();
   cv::destroyAllWindows();
 }
@@ -72,11 +76,13 @@ void GpuInfer(const std::string& model_dir, const std::string& video_file) {
   }
   fastdeploy::vision::MOTResult result;
+  fastdeploy::vision::tracking::TrailRecorder trail_recorder;
+  // During each prediction, data is inserted into the recorder. As the number of
+  // predictions increases, memory keeps growing; cancel the insertion via 'UnbindRecorder'.
+  // int count = 0;  // unbind condition
+  model.BindRecorder(&trail_recorder);
   cv::Mat frame;
-  int frame_id = 0;
   cv::VideoCapture capture(video_file);
-  // calculate fps from the time of each prediction
-  float fps = 0.0f;
   while (capture.read(frame)) {
     if (frame.empty()) {
       break;
@@ -85,12 +91,14 @@ void GpuInfer(const std::string& model_dir, const std::string& video_file) {
       std::cerr << "Failed to predict." << std::endl;
       return;
     }
+    // For example, this cancels the trail data binding after ten frames:
+    // if (count++ == 10) model.UnbindRecorder();
     // std::cout << result.Str() << std::endl;
-    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, fps, frame_id);
+    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, 0.0, &trail_recorder);
     cv::imshow("mot", out_img);
     cv::waitKey(30);
-    frame_id++;
   }
+  model.UnbindRecorder();
   capture.release();
   cv::destroyAllWindows();
 }
@@ -112,11 +120,13 @@ void TrtInfer(const std::string& model_dir, const std::string& video_file) {
   }
   fastdeploy::vision::MOTResult result;
+  fastdeploy::vision::tracking::TrailRecorder recorder;
+  // During each prediction, data is inserted into the recorder. As the number of
+  // predictions increases, memory keeps growing; cancel the insertion via 'UnbindRecorder'.
+  // int count = 0;  // unbind condition
+  model.BindRecorder(&recorder);
   cv::Mat frame;
-  int frame_id = 0;
   cv::VideoCapture capture(video_file);
-  // calculate fps from the time of each prediction
-  float fps = 0.0f;
   while (capture.read(frame)) {
     if (frame.empty()) {
       break;
@@ -125,12 +135,14 @@ void TrtInfer(const std::string& model_dir, const std::string& video_file) {
       std::cerr << "Failed to predict." << std::endl;
      return;
     }
+    // For example, this cancels the trail data binding after ten frames:
+    // if (count++ == 10) model.UnbindRecorder();
     // std::cout << result.Str() << std::endl;
-    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, fps, frame_id);
+    cv::Mat out_img = fastdeploy::vision::VisMOT(frame, result, 0.0, &recorder);
     cv::imshow("mot", out_img);
     cv::waitKey(30);
-    frame_id++;
   }
+  model.UnbindRecorder();
   capture.release();
   cv::destroyAllWindows();
 }

View File

@@ -14,7 +14,6 @@
 import fastdeploy as fd
 import cv2
-import time
 import os
@@ -60,20 +59,26 @@ config_file = os.path.join(args.model, "infer_cfg.yml")
 model = fd.vision.tracking.PPTracking(
     model_file, params_file, config_file, runtime_option=runtime_option)
+# Initialize the trail recorder
+recorder = fd.vision.tracking.TrailRecorder()
+# Bind the recorder. Note: every prediction inserts data into trail_recorder, so
+# memory grows as predictions accumulate; unbind via the unbind_recorder() method.
+model.bind_recorder(recorder)
 # Predict the tracking result for each frame
 cap = cv2.VideoCapture(args.video)
-frame_id = 0
+# count = 0
 while True:
-    start_time = time.time()
-    frame_id = frame_id + 1
     _, frame = cap.read()
     if frame is None:
         break
     result = model.predict(frame)
-    end_time = time.time()
-    fps = 1.0 / (end_time - start_time)
-    img = fd.vision.vis_mot(frame, result, fps, frame_id)
+    # count += 1
+    # if count == 10:
+    #     model.unbind_recorder()
+    img = fd.vision.vis_mot(frame, result, 0.0, recorder)
     cv2.imshow("video", img)
-    cv2.waitKey(30)
+    if cv2.waitKey(30) == ord("q"):
+        break
+model.unbind_recorder()
 cap.release()
 cv2.destroyAllWindows()
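The comments above warn that the bound recorder grows without bound as predictions accumulate. One client-side mitigation (not part of this PR, purely a sketch with a hypothetical `cap_trails` helper) is to trim each trail to its most recent points before visualizing:

```python
def cap_trails(records, max_points=50):
    """Trim every trail to its most recent max_points entries (hypothetical helper)."""
    return {obj_id: pts[-max_points:] for obj_id, pts in records.items()}

# One object with a 100-point trail, as the recorder might accumulate it.
records = {1: [(i, i) for i in range(100)]}
records = cap_trails(records, max_points=50)
# trail for id 1 now holds only the latest 50 points
```

The alternative shipped in the example itself is coarser: unbind the recorder entirely once enough frames have been seen.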

View File

@@ -14,6 +14,7 @@
 #pragma once
 #include "fastdeploy/fastdeploy_model.h"
 #include "opencv2/core/core.hpp"
+#include <set>
 namespace fastdeploy {
 /** \brief All C++ FastDeploy Vision Models APIs are defined inside this namespace
@@ -171,6 +172,7 @@ struct FASTDEPLOY_DECL MOTResult : public BaseResult {
   /** \brief The classify label id for all the tracking object
    */
   std::vector<int> class_ids;
+
   ResultType type = ResultType::MOT;
   /// Clear MOT result
   void Clear();

View File

@@ -161,9 +161,7 @@ bool PPTracking::Initialize() {
     return false;
   }
   // create JDETracker instance
-  std::unique_ptr<JDETracker> jdeTracker(new JDETracker);
-  jdeTracker_ = std::move(jdeTracker);
+  jdeTracker_ = std::unique_ptr<JDETracker>(new JDETracker);
   return true;
 }
@@ -245,7 +243,6 @@ bool PPTracking::Postprocess(std::vector<FDTensor>& infer_result, MOTResult *res
   cv::Mat dets(bbox_shape[0], 6, CV_32FC1, bbox_data);
   cv::Mat emb(bbox_shape[0], emb_shape[1], CV_32FC1, emb_data);
   result->Clear();
-
   std::vector<Track> tracks;
   std::vector<int> valid;
@@ -264,7 +261,6 @@ bool PPTracking::Postprocess(std::vector<FDTensor>& infer_result, MOTResult *res
     result->boxes.push_back(box);
     result->ids.push_back(1);
     result->scores.push_back(*dets.ptr<float>(0, 4));
-
   } else {
     std::vector<Track>::iterator titer;
     for (titer = tracks.begin(); titer != tracks.end(); ++titer) {
@@ -285,9 +281,36 @@ bool PPTracking::Postprocess(std::vector<FDTensor>& infer_result, MOTResult *res
       }
     }
   }
+  if (!is_record_trail_) return true;
+  int nums = result->boxes.size();
+  for (int i = 0; i < nums; i++) {
+    float center_x = (result->boxes[i][0] + result->boxes[i][2]) / 2;
+    float center_y = (result->boxes[i][1] + result->boxes[i][3]) / 2;
+    int id = result->ids[i];
+    recorder_->Add(id, {int(center_x), int(center_y)});
+  }
   return true;
 }
+
+void PPTracking::BindRecorder(TrailRecorder* recorder) {
+  recorder_ = recorder;
+  is_record_trail_ = true;
+}
+
+void PPTracking::UnbindRecorder() {
+  is_record_trail_ = false;
+  std::map<int, std::vector<std::array<int, 2>>>::iterator iter;
+  for (iter = recorder_->records.begin(); iter != recorder_->records.end(); iter++) {
+    iter->second.clear();
+    iter->second.shrink_to_fit();
+  }
+  recorder_->records.clear();
+  std::map<int, std::vector<std::array<int, 2>>>().swap(recorder_->records);
+  recorder_ = nullptr;
+}
 }  // namespace tracking
 }  // namespace vision
 }  // namespace fastdeploy
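The Postprocess addition above records the center of each box under the object's id. That bookkeeping can be sketched in Python (hypothetical `record_trails` helper; a plain dict stands in for the C++ `std::map`):

```python
def record_trails(boxes, ids, records):
    """Append each box's center point to the per-id trail,
    as in the code added to PPTracking::Postprocess."""
    for (x1, y1, x2, y2), obj_id in zip(boxes, ids):
        center = (int((x1 + x2) / 2), int((y1 + y2) / 2))
        records.setdefault(obj_id, []).append(center)
    return records

records = {}
record_trails([[0, 0, 10, 10], [20, 20, 40, 60]], [7, 8], records)
record_trails([[2, 2, 12, 12]], [7], records)
# id 7 accumulates two centers: (5, 5) then (7, 7); id 8 has one: (30, 40)
```

Calling it once per frame reproduces the growth behavior the comments warn about: trails only ever grow until the recorder is unbound and cleared.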

View File

@@ -14,6 +14,7 @@
 #pragma once
+#include <map>
 #include "fastdeploy/vision/common/processors/transform.h"
 #include "fastdeploy/fastdeploy_model.h"
 #include "fastdeploy/vision/common/result.h"
@@ -22,6 +23,21 @@
 namespace fastdeploy {
 namespace vision {
 namespace tracking {
+struct TrailRecorder {
+  std::map<int, std::vector<std::array<int, 2>>> records;
+  void Add(int id, const std::array<int, 2>& record);
+};
+
+inline void TrailRecorder::Add(int id, const std::array<int, 2>& record) {
+  auto iter = records.find(id);
+  if (iter != records.end()) {
+    auto trail = records[id];
+    trail.push_back(record);
+    records[id] = trail;
+  } else {
+    records[id] = {record};
+  }
+}
 class FASTDEPLOY_DECL PPTracking : public FastDeployModel {
  public:
@@ -49,6 +65,14 @@ class FASTDEPLOY_DECL PPTracking: public FastDeployModel {
    * \return true if the prediction succeeded, otherwise false
    */
   virtual bool Predict(cv::Mat* img, MOTResult* result);
+  /** \brief Bind a tracking-trail recorder
+   *
+   * \param[in] recorder The recorder that will store the trail of each tracked object
+   */
+  void BindRecorder(TrailRecorder* recorder);
+  /** \brief Cancel the binding and clear the trail information
+   */
+  void UnbindRecorder();
 private:
   bool BuildPreprocessPipelineFromConfig();
@@ -65,8 +89,11 @@ class FASTDEPLOY_DECL PPTracking: public FastDeployModel {
   float conf_thresh_;
   float tracked_thresh_;
   float min_box_area_;
+  bool is_record_trail_ = false;
   std::unique_ptr<JDETracker> jdeTracker_;
+  TrailRecorder* recorder_ = nullptr;
 };
 }  // namespace tracking
 }  // namespace vision
 }  // namespace fastdeploy

View File

@@ -15,6 +15,11 @@
 namespace fastdeploy {
 void BindPPTracking(pybind11::module &m) {
+  pybind11::class_<vision::tracking::TrailRecorder>(m, "TrailRecorder")
+      .def(pybind11::init<>())
+      .def_readwrite("records", &vision::tracking::TrailRecorder::records)
+      .def("add", &vision::tracking::TrailRecorder::Add);
   pybind11::class_<vision::tracking::PPTracking, FastDeployModel>(
       m, "PPTracking")
       .def(pybind11::init<std::string, std::string, std::string, RuntimeOption,
@@ -26,6 +31,8 @@ void BindPPTracking(pybind11::module &m) {
         vision::MOTResult *res = new vision::MOTResult();
         self.Predict(&mat, res);
         return res;
-      });
+      })
+      .def("bind_recorder", &vision::tracking::PPTracking::BindRecorder)
+      .def("unbind_recorder", &vision::tracking::PPTracking::UnbindRecorder);
 }
 }  // namespace fastdeploy

View File

@@ -118,7 +118,7 @@ void Trajectory::update(Trajectory *traj,
   if (update_embedding_) update_embedding(traj->current_embedding);
 }
-void Trajectory::activate(int &cnt,int timestamp_) {
+void Trajectory::activate(int &cnt, int timestamp_) {
   id = next_id(cnt);
   TKalmanFilter::init(cv::Mat(xyah));
   length = 0;
@@ -130,7 +130,7 @@
   starttime = timestamp_;
 }
-void Trajectory::reactivate(Trajectory *traj,int &cnt, int timestamp_, bool newid) {
+void Trajectory::reactivate(Trajectory *traj, int &cnt, int timestamp_, bool newid) {
   TKalmanFilter::correct(cv::Mat(traj->xyah));
   update_embedding(traj->current_embedding);
   length = 0;

View File

@@ -74,8 +74,8 @@ class FASTDEPLOY_DECL Trajectory : public TKalmanFilter {
   virtual void update(Trajectory *traj,
                       int timestamp,
                       bool update_embedding = true);
-  virtual void activate(int& cnt, int timestamp);
-  virtual void reactivate(Trajectory *traj, int & cnt,int timestamp, bool newid = false);
+  virtual void activate(int &cnt, int timestamp);
+  virtual void reactivate(Trajectory *traj, int &cnt, int timestamp, bool newid = false);
   virtual void mark_lost(void);
   virtual void mark_removed(void);

View File

@@ -25,40 +25,31 @@ cv::Scalar GetMOTBoxColor(int idx) {
   return color;
 }
-cv::Mat VisMOT(const cv::Mat &img, const MOTResult &results, float fps, int frame_id) {
+cv::Mat VisMOT(const cv::Mat &img, const MOTResult &results,
+               float score_threshold, tracking::TrailRecorder *recorder) {
   cv::Mat vis_img = img.clone();
   int im_h = img.rows;
   int im_w = img.cols;
   float text_scale = std::max(1, static_cast<int>(im_w / 1600.));
   float text_thickness = 2.;
   float line_thickness = std::max(1, static_cast<int>(im_w / 500.));
-  std::ostringstream oss;
-  oss << std::setiosflags(std::ios::fixed) << std::setprecision(4);
-  oss << "frame: " << frame_id << " ";
-  oss << "fps: " << fps << " ";
-  oss << "num: " << results.boxes.size();
-  std::string text = oss.str();
-  cv::Point origin;
-  origin.x = 0;
-  origin.y = static_cast<int>(15 * text_scale);
-  cv::putText(vis_img,
-              text,
-              origin,
-              cv::FONT_HERSHEY_PLAIN,
-              text_scale,
-              cv::Scalar(0, 0, 255),
-              text_thickness);
   for (int i = 0; i < results.boxes.size(); ++i) {
+    if (results.scores[i] < score_threshold) {
+      continue;
+    }
     const int obj_id = results.ids[i];
     const float score = results.scores[i];
     cv::Scalar color = GetMOTBoxColor(obj_id);
+    if (recorder != nullptr) {
+      int id = results.ids[i];
+      auto iter = recorder->records.find(id);
+      if (iter != recorder->records.end()) {
+        for (int j = 0; j < iter->second.size(); j++) {
+          cv::Point center(iter->second[j][0], iter->second[j][1]);
+          cv::circle(vis_img, center, text_thickness, color);
+        }
+      }
+    }
     cv::Point pt1 = cv::Point(results.boxes[i][0], results.boxes[i][1]);
     cv::Point pt2 = cv::Point(results.boxes[i][2], results.boxes[i][3]);
     cv::Point id_pt =
@@ -66,7 +57,6 @@ cv::Mat VisMOT(const cv::Mat &img, const MOTResult &results, float fps, int fram
     cv::Point score_pt =
         cv::Point(results.boxes[i][0], results.boxes[i][1] - 10);
     cv::rectangle(vis_img, pt1, pt2, color, line_thickness);
-
     std::ostringstream idoss;
     idoss << std::setiosflags(std::ios::fixed) << std::setprecision(4);
     idoss << obj_id;
@@ -77,7 +67,7 @@ cv::Mat VisMOT(const cv::Mat &img, const MOTResult &results, float fps, int fram
                 id_pt,
                 cv::FONT_HERSHEY_PLAIN,
                 text_scale,
-                cv::Scalar(0, 255, 255),
+                color,
                 text_thickness);
     std::ostringstream soss;
@@ -90,7 +80,7 @@ cv::Mat VisMOT(const cv::Mat &img, const MOTResult &results, float fps, int fram
                 score_pt,
                 cv::FONT_HERSHEY_PLAIN,
                 text_scale,
-                cv::Scalar(0, 255, 255),
+                color,
                 text_thickness);
   }
   return vis_img;
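VisMOT now skips any detection whose confidence falls below `score_threshold` before drawing. That filtering step alone can be sketched in Python (illustrative `filter_by_score` helper, independent of OpenCV):

```python
def filter_by_score(boxes, ids, scores, score_threshold=0.0):
    """Keep only detections whose confidence reaches the threshold,
    mirroring the 'continue' guard added to VisMOT."""
    return [(b, i, s) for b, i, s in zip(boxes, ids, scores) if s >= score_threshold]

dets = filter_by_score([[0, 0, 1, 1], [2, 2, 3, 3]], [1, 2], [0.9, 0.3],
                       score_threshold=0.5)
# only the first detection (id 1, score 0.9) survives the 0.5 threshold
```

With the default threshold of 0.0 every detection is drawn, which preserves the previous behavior of the visualizer.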

View File

@@ -17,6 +17,8 @@
 #include "fastdeploy/vision/common/result.h"
 #include "opencv2/imgproc/imgproc.hpp"
+#include "fastdeploy/vision/tracking/pptracking/model.h"
 namespace fastdeploy {
 namespace vision {
@@ -81,8 +83,9 @@ FASTDEPLOY_DECL cv::Mat VisMatting(const cv::Mat& im,
                                    bool remove_small_connected_area = false);
 FASTDEPLOY_DECL cv::Mat VisOcr(const cv::Mat& im, const OCRResult& ocr_result);
-FASTDEPLOY_DECL cv::Mat VisMOT(const cv::Mat& img, const MOTResult& results, float fps=0.0, int frame_id=0);
+FASTDEPLOY_DECL cv::Mat VisMOT(const cv::Mat& img, const MOTResult& results,
+                               float score_threshold = 0.0f,
+                               tracking::TrailRecorder* recorder = nullptr);
 FASTDEPLOY_DECL cv::Mat SwapBackground(
     const cv::Mat& im, const cv::Mat& background, const MattingResult& result,
     bool remove_small_connected_area = false);

View File

@@ -86,9 +86,9 @@ void BindVisualize(pybind11::module& m) {
         return TensorToPyArray(out);
       })
       .def("vis_mot",
-           [](pybind11::array& im_data, vision::MOTResult& result, float fps, int frame_id) {
+           [](pybind11::array& im_data, vision::MOTResult& result, float score_threshold,
+              vision::tracking::TrailRecorder record) {
             auto im = PyArrayToCvMat(im_data);
-            auto vis_im = vision::VisMOT(im, result, fps, frame_id);
+            auto vis_im = vision::VisMOT(im, result, score_threshold, &record);
             FDTensor out;
             vision::Mat(vis_im).ShareWithTensor(&out);
             return TensorToPyArray(out);
@@ -185,9 +185,10 @@ void BindVisualize(pybind11::module& m) {
         return TensorToPyArray(out);
       })
       .def_static("vis_mot",
-          [](pybind11::array& im_data, vision::MOTResult& result, float fps, int frame_id) {
+          [](pybind11::array& im_data, vision::MOTResult& result, float score_threshold,
+             vision::tracking::TrailRecorder* record) {
            auto im = PyArrayToCvMat(im_data);
-           auto vis_im = vision::VisMOT(im, result, fps, frame_id);
+           auto vis_im = vision::VisMOT(im, result, score_threshold, record);
            FDTensor out;
            vision::Mat(vis_im).ShareWithTensor(&out);
            return TensorToPyArray(out);

View File

@@ -12,5 +12,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 from __future__ import absolute_import
+from ... import c_lib_wrap as C
 from .pptracking import PPTracking
+
+try:
+    TrailRecorder = C.vision.tracking.TrailRecorder
+except:
+    pass
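The try/except above exposes `TrailRecorder` only when the compiled `c_lib_wrap` module actually provides the tracking binding. A self-contained sketch of that guarded-export pattern (the `_FakeNamespace` stand-in is hypothetical, simulating a build without the binding):

```python
class _FakeNamespace:
    """Stand-in for the compiled c_lib_wrap module in a build
    that lacks the tracking bindings."""
    pass

C = _FakeNamespace()
try:
    TrailRecorder = C.vision.tracking.TrailRecorder
except AttributeError:
    TrailRecorder = None  # binding missing; the name falls back gracefully
```

Catching `AttributeError` specifically, as sketched here, is tighter than the bare `except:` in the diff, which would also swallow unrelated failures such as `KeyboardInterrupt`.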

View File

@@ -48,3 +48,18 @@ class PPTracking(FastDeployModel):
         """
         assert input_image is not None, "The input image data is None."
         return self._model.predict(input_image)
+
+    def bind_recorder(self, val):
+        """Bind a tracking-trail recorder.
+
+        :param val: (TrailRecorder) trail recorder holding each object's id and center-point sequence
+        :return: None
+        """
+        self._model.bind_recorder(val)
+
+    def unbind_recorder(self):
+        """Cancel the binding of the tracking trail.
+
+        :return: None
+        """
+        self._model.unbind_recorder()

View File

@@ -15,6 +15,7 @@
 from __future__ import absolute_import
 import logging
 from ... import c_lib_wrap as C
+import cv2
 def vis_detection(im_data,
@@ -106,5 +107,5 @@ def vis_ppocr(im_data, det_result):
     return C.vision.vis_ppocr(im_data, det_result)
-def vis_mot(im_data, mot_result, fps, frame_id):
-    return C.vision.vis_mot(im_data, mot_result, fps, frame_id)
+def vis_mot(im_data, mot_result, score_threshold=0.0, records=None):
+    return C.vision.vis_mot(im_data, mot_result, score_threshold, records)