mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-06 00:57:33 +08:00
[Doc] API docs for Visualize module (#770)
* first commit for yolov7
* pybind for yolov7
* CPP README.md
* modified yolov7.cc
* README.md
* python file modify
* delete license in fastdeploy/
* repush the conflict part
* README.md modified
* file path modified
* README modified
* move some helpers to private
* add examples for yolov7
* api.md modified
* YOLOv7
* yolov7 release link
* copyright
* change some helpers to private
* change variables to const and fix documents
* gitignore
* Transfer some functions to private members of the class
* Merge from develop (#9):
  * Fix compile problem in different python version (#26)
  * fix some usage problems in linux
  * Add PaddleDetection/PPYOLOE model support (#22): add ppdet/ppyoloe, demo code and documents
  * add convert processor to vision (#27)
  * update .gitignore
  * add checking for cmake include dir
  * fix missing trt_backend option bug when initializing from trt
  * remove un-needed data layout and add pre-check for dtype
  * change RGB2BGR to BGR2RGB in ppcls model
  * add model_zoo yolov6 c++/python demo
  * fix CMakeLists.txt typos
  * update yolov6 cpp/README.md
  * add yolox c++/pybind and model_zoo demo
  * move some helpers to private
  * add normalize with alpha and beta
  * add version notes for yolov5/yolov6/yolox
  * add copyright to yolov5.cc
  * revert normalize
  * fix some bugs in yolox
  * fix examples/CMakeLists.txt to avoid conflicts
  * format examples/CMakeLists summary
  * Fix bug while the inference result is empty with YOLOv5 (#29)
  * Add multi-label function for yolov5
  * Update README.md doc
  * Update fastdeploy_runtime.cc: fix wrong variable name option.trt_max_shape
  * Update runtime_option.md: update resnet model dynamic shape setting name from images to x
  * Fix bug when inference result boxes are empty
  * Delete detection.py
* first commit for yolor
* for merge
* Develop (#11): merge of the develop changes listed above
* Yolor (#16): merge of Develop (#11) (#12), the develop changes listed above
* Develop (#13): the develop changes listed above
* documents
* Develop (#14): the develop changes listed above
* add is_dynamic for YOLO series (#22)
* modify ppmatting backend and docs
* fix the PPMatting size problem
* fix LimitShort's log
* retrigger ci
* modify PPMatting docs
* modify the way of dealing with LimitShort
* add python comments for external models
* modify resnet c++ comments
* modify C++ comments for external models
* modify python comments and add result class comments
* fix comment compile errors
* modify result.h comments
* comments for vis
* python API

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
@@ -2100,7 +2100,7 @@ INCLUDE_FILE_PATTERNS =
 # recursively expanded use the := operator instead of the = operator.
 # This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
 
-PREDEFINED = protected=private
+PREDEFINED = protected=private ENABLE_VISION_VISUALIZE=1
 
 # If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
 # tag can be used to specify a list of macro names that should be expanded. The
docs/api_docs/python/visualize.md (new file)
@@ -0,0 +1,43 @@
+# Visualize (可视化)
+
+## fastdeploy.vision.vis_detection
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_detection
+```
+
+## fastdeploy.vision.vis_segmentation
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_segmentation
+```
+
+## fastdeploy.vision.vis_keypoint_detection
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_keypoint_detection
+```
+
+## fastdeploy.vision.vis_face_detection
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_face_detection
+```
+
+## fastdeploy.vision.vis_face_alignment
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_face_alignment
+```
+
+## fastdeploy.vision.vis_matting
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_matting
+```
+
+## fastdeploy.vision.vis_ppocr
+
+```{eval-rst}
+.. autofunction:: fastdeploy.vision.vis_ppocr
+```
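A function like `fastdeploy.vision.vis_detection` draws each detected box whose confidence clears `score_threshold` onto a copy of the input image. A minimal NumPy sketch of that thresholding-and-drawing step (`draw_boxes` is a hypothetical stand-in for illustration, not part of the FastDeploy API):

```python
import numpy as np

def draw_boxes(im, boxes, scores, score_threshold=0.0, line_size=1):
    """Paint rectangle outlines for boxes whose score passes the threshold.

    im: HWC BGR uint8 array; boxes: list of (x1, y1, x2, y2) ints.
    Returns a copy; boxes scoring below score_threshold are skipped,
    mirroring how vis_detection hides low-confidence detections.
    """
    out = im.copy()
    green = (0, 255, 0)  # BGR
    t = line_size
    for (x1, y1, x2, y2), score in zip(boxes, scores):
        if score < score_threshold:
            continue  # low-confidence box: not shown
        out[y1:y1 + t, x1:x2] = green  # top edge
        out[y2 - t:y2, x1:x2] = green  # bottom edge
        out[y1:y2, x1:x1 + t] = green  # left edge
        out[y1:y2, x2 - t:x2] = green  # right edge
    return out
```

The real API additionally renders class labels and scores with `font_size`; this sketch keeps only the box geometry and the threshold rule.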
@@ -20,9 +20,11 @@
 #include "fastdeploy/vision/tracking/pptracking/model.h"
 
 namespace fastdeploy {
+/** \brief All C++ FastDeploy Vision Models APIs are defined inside this namespace
+ *
+ */
 namespace vision {
 
-// This class will deprecated, please not use it
 class FASTDEPLOY_DECL Visualize {
  public:
   static int num_classes_;
@@ -52,35 +54,108 @@ class FASTDEPLOY_DECL Visualize {
 
 std::vector<int> GenerateColorMap(int num_classes = 1000);
 cv::Mat RemoveSmallConnectedArea(const cv::Mat& alpha_pred, float threshold);
+/** \brief Show the visualized results for detection models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] score_threshold threshold for result scores, the bounding box will not be shown if the score is less than score_threshold
+ * \param[in] line_size line size for bounding boxes
+ * \param[in] font_size font size for text
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisDetection(const cv::Mat& im,
                                      const DetectionResult& result,
                                      float score_threshold = 0.0,
                                      int line_size = 1, float font_size = 0.5f);
+/** \brief Show the visualized results with custom labels for detection models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] labels the visualized result will show each bounding box with its class label
+ * \param[in] score_threshold threshold for result scores, the bounding box will not be shown if the score is less than score_threshold
+ * \param[in] line_size line size for bounding boxes
+ * \param[in] font_size font size for text
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisDetection(const cv::Mat& im,
                                      const DetectionResult& result,
                                      const std::vector<std::string>& labels,
                                      float score_threshold = 0.0,
                                      int line_size = 1, float font_size = 0.5f);
+/** \brief Show the visualized results for classification models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] top_k the number of returned labels, e.g., if top_k == 2, the result will include the 2 most likely class labels for the input image
+ * \param[in] score_threshold threshold for top_k scores, the class will not be shown if the score is less than score_threshold
+ * \param[in] font_size font size
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisClassification(
     const cv::Mat& im, const ClassifyResult& result, int top_k = 5,
     float score_threshold = 0.0f, float font_size = 0.5f);
+/** \brief Show the visualized results with custom labels for classification models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] labels custom labels, the visualized result will show the corresponding custom labels
+ * \param[in] top_k the number of returned labels, e.g., if top_k == 2, the result will include the 2 most likely class labels for the input image
+ * \param[in] score_threshold threshold for top_k scores, the class will not be shown if the score is less than score_threshold
+ * \param[in] font_size font size
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisClassification(
     const cv::Mat& im, const ClassifyResult& result,
     const std::vector<std::string>& labels, int top_k = 5,
     float score_threshold = 0.0f, float font_size = 0.5f);
+/** \brief Show the visualized results for face detection models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] line_size line size for bounding boxes
+ * \param[in] font_size font size for text
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisFaceDetection(const cv::Mat& im,
                                          const FaceDetectionResult& result,
                                          int line_size = 1,
                                          float font_size = 0.5f);
+/** \brief Show the visualized results for face alignment models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] line_size line size for circle points
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisFaceAlignment(const cv::Mat& im,
                                          const FaceAlignmentResult& result,
                                          int line_size = 1);
+/** \brief Show the visualized results for segmentation models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] weight transparent weight of the visualized result image
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisSegmentation(const cv::Mat& im,
                                         const SegmentationResult& result,
                                         float weight = 0.5);
+/** \brief Show the visualized results for matting models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \param[in] remove_small_connected_area if remove_small_connected_area == true, the visualized result will not include the small connected areas
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisMatting(const cv::Mat& im,
                                    const MattingResult& result,
                                    bool remove_small_connected_area = false);
+/** \brief Show the visualized results for OCR models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] result the result produced by the model
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisOcr(const cv::Mat& im, const OCRResult& ocr_result);
 
 FASTDEPLOY_DECL cv::Mat VisMOT(const cv::Mat& img, const MOTResult& results,
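The header also declares `GenerateColorMap(int num_classes = 1000)`, which supplies one distinct color per class id for the drawing routines. A widespread recipe for such palettes is the Pascal VOC bit-spreading scheme; whether FastDeploy uses exactly this recipe is an assumption, so the sketch below is illustrative only:

```python
def generate_color_map(num_classes=1000):
    """Pascal-VOC-style palette: spread the bits of the class id across
    the high bits of three channels, so nearby ids get distinct colors.

    Returns a flat list [c0_b, c0_g, c0_r, c1_b, c1_g, c1_r, ...].
    """
    color_map = [0] * (num_classes * 3)
    for i in range(num_classes):
        lab, j = i, 0
        while lab:
            color_map[i * 3 + 0] |= ((lab >> 0) & 1) << (7 - j)  # channel 0
            color_map[i * 3 + 1] |= ((lab >> 1) & 1) << (7 - j)  # channel 1
            color_map[i * 3 + 2] |= ((lab >> 2) & 1) << (7 - j)  # channel 2
            j += 1
            lab >>= 3
    return color_map
```

Class 0 maps to black and the first few classes to half-intensity primaries, which is why detection overlays from different toolkits often share the same first colors.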
@@ -93,6 +168,13 @@ FASTDEPLOY_DECL cv::Mat SwapBackground(const cv::Mat& im,
                                        const cv::Mat& background,
                                        const SegmentationResult& result,
                                        int background_label);
+/** \brief Show the visualized results for keypoint detection models
+ *
+ * \param[in] im the input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
+ * \param[in] results the result produced by the model
+ * \param[in] conf_threshold threshold for result scores, the result will not be shown if the score is less than conf_threshold
+ * \return cv::Mat type stores the visualized results
+ */
 FASTDEPLOY_DECL cv::Mat VisKeypointDetection(const cv::Mat& im,
                                              const KeyPointDetectionResult& results,
                                              float conf_threshold = 0.5f);
@@ -24,25 +24,64 @@ def vis_detection(im_data,
                   score_threshold=0.0,
                   line_size=1,
                   font_size=0.5):
+    """Show the visualized results for detection models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param det_result: the result produced by the model
+    :param labels: (list of str) the visualized result will show each bounding box with its class label
+    :param score_threshold: (float) threshold for result scores, the bounding box will not be shown if the score is less than score_threshold
+    :param line_size: (float) line size for bounding boxes
+    :param font_size: (float) font size for text
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.vis_detection(im_data, det_result, labels, score_threshold,
                                   line_size, font_size)
 
 
 def vis_keypoint_detection(im_data, keypoint_det_result, conf_threshold=0.5):
+    """Show the visualized results for keypoint detection models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param keypoint_det_result: the result produced by the model
+    :param conf_threshold: (float) threshold for result scores, the result will not be shown if the score is less than conf_threshold
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.Visualize.vis_keypoint_detection(
         im_data, keypoint_det_result, conf_threshold)
 
 
 def vis_face_detection(im_data, face_det_result, line_size=1, font_size=0.5):
+    """Show the visualized results for face detection models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param face_det_result: the result produced by the model
+    :param line_size: (float) line size for bounding boxes
+    :param font_size: (float) font size for text
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.vis_face_detection(im_data, face_det_result, line_size,
                                        font_size)
 
 
 def vis_face_alignment(im_data, face_align_result, line_size=1):
+    """Show the visualized results for face alignment models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param face_align_result: the result produced by the model
+    :param line_size: (float) line size for circle points
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.vis_face_alignment(im_data, face_align_result, line_size)
 
 
 def vis_segmentation(im_data, seg_result, weight=0.5):
+    """Show the visualized results for segmentation models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param seg_result: the result produced by the model
+    :param weight: (float) transparent weight of the visualized result image
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.vis_segmentation(im_data, seg_result, weight)
 
 
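In `vis_segmentation`, `weight` is the transparency of the visualized result: the output is conventionally a convex blend of the original image and a colorized label mask. A self-contained NumPy sketch of that blend (`blend_segmentation` is an illustrative helper under that assumption, not the FastDeploy implementation):

```python
import numpy as np

def blend_segmentation(im, color_mask, weight=0.5):
    """Blend the original image with a colorized label mask.

    im, color_mask: HWC uint8 arrays of the same shape.
    weight: contribution of the original image; the mask
    contributes (1 - weight), so weight=1.0 shows only the input.
    """
    out = (im.astype(np.float32) * weight
           + color_mask.astype(np.float32) * (1.0 - weight))
    return np.clip(out, 0, 255).astype(np.uint8)
```

With `weight=0.5` every pixel is the midpoint of the image and the mask color, which is why segmentation overlays look half-transparent by default.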
@@ -57,6 +96,13 @@ def vis_matting_alpha(im_data,
 
 
 def vis_matting(im_data, matting_result, remove_small_connected_area=False):
+    """Show the visualized results for matting models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param matting_result: the result produced by the model
+    :param remove_small_connected_area: (bool) if True, the visualized result will not include the small connected areas
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.vis_matting(im_data, matting_result,
                                 remove_small_connected_area)
 
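`vis_matting` consumes a matting result whose per-pixel alpha can be visualized by compositing the image over a solid background with that alpha. A minimal NumPy sketch (`composite_matting` and the green background are illustrative choices, not FastDeploy's internals):

```python
import numpy as np

def composite_matting(im, alpha, bg_color=(0, 255, 0)):
    """Composite the image over a solid background weighted by alpha.

    im: HWC uint8 BGR image; alpha: HW float array in [0, 1], where 1
    means fully foreground. Pixels with alpha 0 show only the background.
    """
    bg = np.empty_like(im, dtype=np.float32)
    bg[...] = bg_color
    a = alpha[..., None].astype(np.float32)  # HW -> HW1 for broadcasting
    out = im.astype(np.float32) * a + bg * (1.0 - a)
    return out.astype(np.uint8)
```

Removing small connected areas before this step (the `remove_small_connected_area` option) simply zeroes out tiny isolated blobs in `alpha` so they do not leak into the composite.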
@@ -105,6 +151,12 @@ def swap_background(im_data,
 
 
 def vis_ppocr(im_data, det_result):
+    """Show the visualized results for OCR models
+
+    :param im_data: (numpy.ndarray) the input image data, 3-D array with layout HWC, BGR format
+    :param det_result: the result produced by the model
+    :return: (numpy.ndarray) image with visualized results
+    """
     return C.vision.vis_ppocr(im_data, det_result)