mirror of https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-31 20:02:53 +08:00

Commit e811e9e239
* first commit for yolov7
* pybind for yolov7
* C++ and Python README.md updates
* modified yolov7.cc
* python file modify
* delete license in fastdeploy/
* repush the conflict part
* file path modifications
* move some helpers to private
* add examples for yolov7
* api.md modified
* yolov7 release link
* copyright
* change some helpers to private; change variables to const and fix documents
* gitignore
* transfer some functions to private members of class
* first commit for yolor
* merge from develop (#9, #11, #12, #13, #14), which brought in:
  * fix compile problem in different python versions (#26)
  * add PaddleDetection/PPYOLOE model support (#22)
  * add convert processor to vision (#27)
  * update .gitignore; add checking for cmake include dir
  * fix missing trt_backend option bug when init from trt
  * remove un-needed data layout and add pre-check for dtype
  * change RGB2BGR to BGR2RGB in ppcls model
  * add model_zoo yolov6 and yolox c++/python demos; update yolov6 cpp/README.md
  * fix CMakeLists.txt typos; fix examples/CMakeLists.txt to avoid conflicts
  * add normalize with alpha and beta (later reverted)
  * add version notes for yolov5/yolov6/yolox; add copyright to yolov5.cc
  * fix some bugs in yolox
  * fix bug while the inference result is empty with YOLOv5 (#29)
  * add multi-label function for yolov5
  * fix variable option.trt_max_shape wrong name in fastdeploy_runtime.cc
  * update runtime_option.md: rename resnet dynamic shape setting from images to x
  * delete detection.py
* add is_dynamic for YOLO series (#22)
* modify ppmatting backend and docs; fix the PPMatting size problem
* fix LimitShort's log; modify the way LimitShort is handled
* add python comments for external models; modify resnet and external-model C++ comments
* add result class comments; fix comment compile errors; comments for vis
* python API

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
		
			
				
	
	
		
189 lines · 10 KiB · C++ · Executable File
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifdef ENABLE_VISION_VISUALIZE
#pragma once

#include "fastdeploy/vision/common/result.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "fastdeploy/vision/tracking/pptracking/model.h"

namespace fastdeploy {
/** \brief All C++ FastDeploy Vision Models APIs are defined inside this namespace
 */
namespace vision {

class FASTDEPLOY_DECL Visualize {
 public:
  static int num_classes_;
  static std::vector<int> color_map_;
  static const std::vector<int>& GetColorMap(int num_classes = 1000);
  static cv::Mat VisDetection(const cv::Mat& im, const DetectionResult& result,
                              float score_threshold = 0.0, int line_size = 1,
                              float font_size = 0.5f);
  static cv::Mat VisFaceDetection(const cv::Mat& im,
                                  const FaceDetectionResult& result,
                                  int line_size = 1, float font_size = 0.5f);
  static cv::Mat VisSegmentation(const cv::Mat& im,
                                 const SegmentationResult& result);
  static cv::Mat VisMattingAlpha(const cv::Mat& im, const MattingResult& result,
                                 bool remove_small_connected_area = false);
  static cv::Mat RemoveSmallConnectedArea(const cv::Mat& alpha_pred,
                                          float threshold);
  static cv::Mat SwapBackgroundMatting(
      const cv::Mat& im, const cv::Mat& background, const MattingResult& result,
      bool remove_small_connected_area = false);
  static cv::Mat SwapBackgroundSegmentation(const cv::Mat& im,
                                            const cv::Mat& background,
                                            int background_label,
                                            const SegmentationResult& result);
  static cv::Mat VisOcr(const cv::Mat& srcimg, const OCRResult& ocr_result);
};

std::vector<int> GenerateColorMap(int num_classes = 1000);
cv::Mat RemoveSmallConnectedArea(const cv::Mat& alpha_pred, float threshold);

/** \brief Show the visualized results for detection models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] score_threshold threshold for result scores; a bounding box will not be shown if its score is less than score_threshold
 * \param[in] line_size line size for bounding boxes
 * \param[in] font_size font size for text
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisDetection(const cv::Mat& im,
                                     const DetectionResult& result,
                                     float score_threshold = 0.0,
                                     int line_size = 1, float font_size = 0.5f);

/** \brief Show the visualized results with custom labels for detection models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] labels custom labels; each bounding box will be shown with its corresponding class label
 * \param[in] score_threshold threshold for result scores; a bounding box will not be shown if its score is less than score_threshold
 * \param[in] line_size line size for bounding boxes
 * \param[in] font_size font size for text
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisDetection(const cv::Mat& im,
                                     const DetectionResult& result,
                                     const std::vector<std::string>& labels,
                                     float score_threshold = 0.0,
                                     int line_size = 1, float font_size = 0.5f);

/** \brief Show the visualized results for classification models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] top_k the number of returned results, e.g., if top_k == 2, the result will include the 2 most likely class labels for the input image
 * \param[in] score_threshold threshold for top_k scores; a class will not be shown if its score is less than score_threshold
 * \param[in] font_size font size
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisClassification(
    const cv::Mat& im, const ClassifyResult& result, int top_k = 5,
    float score_threshold = 0.0f, float font_size = 0.5f);

/** \brief Show the visualized results with custom labels for classification models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] labels custom labels; the visualized result will show the corresponding custom labels
 * \param[in] top_k the number of returned results, e.g., if top_k == 2, the result will include the 2 most likely class labels for the input image
 * \param[in] score_threshold threshold for top_k scores; a class will not be shown if its score is less than score_threshold
 * \param[in] font_size font size
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisClassification(
    const cv::Mat& im, const ClassifyResult& result,
    const std::vector<std::string>& labels, int top_k = 5,
    float score_threshold = 0.0f, float font_size = 0.5f);

/** \brief Show the visualized results for face detection models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] line_size line size for bounding boxes
 * \param[in] font_size font size for text
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisFaceDetection(const cv::Mat& im,
                                         const FaceDetectionResult& result,
                                         int line_size = 1,
                                         float font_size = 0.5f);

/** \brief Show the visualized results for face alignment models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] line_size line size for the circle points
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisFaceAlignment(const cv::Mat& im,
                                         const FaceAlignmentResult& result,
                                         int line_size = 1);

/** \brief Show the visualized results for segmentation models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] weight transparency weight for blending the visualization over the input image
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisSegmentation(const cv::Mat& im,
                                        const SegmentationResult& result,
                                        float weight = 0.5);

/** \brief Show the visualized results for matting models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] result the result produced by the model
 * \param[in] remove_small_connected_area if remove_small_connected_area == true, the visualized result will not include the small connected areas
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisMatting(const cv::Mat& im,
                                   const MattingResult& result,
                                   bool remove_small_connected_area = false);

/** \brief Show the visualized results for OCR models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] ocr_result the result produced by the model
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisOcr(const cv::Mat& im, const OCRResult& ocr_result);

FASTDEPLOY_DECL cv::Mat VisMOT(const cv::Mat& img, const MOTResult& results,
                               float score_threshold = 0.0f,
                               tracking::TrailRecorder* recorder = nullptr);
FASTDEPLOY_DECL cv::Mat SwapBackground(
    const cv::Mat& im, const cv::Mat& background, const MattingResult& result,
    bool remove_small_connected_area = false);
FASTDEPLOY_DECL cv::Mat SwapBackground(const cv::Mat& im,
                                       const cv::Mat& background,
                                       const SegmentationResult& result,
                                       int background_label);

/** \brief Show the visualized results for key point detection models
 *
 * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
 * \param[in] results the result produced by the model
 * \param[in] conf_threshold threshold for result scores; a result will not be shown if its score is less than conf_threshold
 * \return cv::Mat type stores the visualized results
 */
FASTDEPLOY_DECL cv::Mat VisKeypointDetection(const cv::Mat& im,
                                             const KeyPointDetectionResult& results,
                                             float conf_threshold = 0.5f);
FASTDEPLOY_DECL cv::Mat VisHeadPose(const cv::Mat& im,
                                    const HeadPoseResult& result,
                                    int size = 50,
                                    int line_size = 1);

}  // namespace vision
}  // namespace fastdeploy
#endif  // ENABLE_VISION_VISUALIZE