[Doc] Add Python comments for external models (#408)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* modify python file

* delete license in fastdeploy/

* re-push the conflicted part

* README.md modified

* README.md modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* YOLOv7

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of the class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problems on Linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug when the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix wrong variable name option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)


* Yolor (#16)

* Develop (#11) (#12)


* Develop (#13)


* documents

* Develop (#14)


* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* modify yolor comments

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
Authored by ziqi-jin on 2022-10-25 21:32:53 +08:00; committed by GitHub
parent 718dc3218f
commit 1f39b4f411
46 changed files with 1039 additions and 239 deletions


@@ -0,0 +1,34 @@
# Face Detection API
## fastdeploy.vision.facedet.RetinaFace
```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.RetinaFace
:members:
:inherited-members:
```
## fastdeploy.vision.facedet.SCRFD
```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.SCRFD
:members:
:inherited-members:
```
## fastdeploy.vision.facedet.UltraFace
```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.UltraFace
:members:
:inherited-members:
```
## fastdeploy.vision.facedet.YOLOv5Face
```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.YOLOv5Face
:members:
:inherited-members:
```
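
A minimal Python usage sketch for the face detection classes listed above; the model path and image name are placeholders, not from this PR, and the `predict` signature follows the headers changed later in this commit:

```python
import cv2
import fastdeploy as fd

# Load an SCRFD face detector from a local ONNX file (hypothetical path).
model = fd.vision.facedet.SCRFD("scrfd.onnx")

# predict() expects an HWC, BGR numpy array such as cv2.imread() returns.
im = cv2.imread("face.jpg")
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.4)

# FaceDetectionResult: parallel lists of boxes and scores, plus optional landmarks.
print(result)
```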


@@ -0,0 +1,41 @@
# Face Recognition API
## fastdeploy.vision.faceid.AdaFace
```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.AdaFace
:members:
:inherited-members:
```
## fastdeploy.vision.faceid.CosFace
```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.CosFace
:members:
:inherited-members:
```
## fastdeploy.vision.faceid.ArcFace
```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.ArcFace
:members:
:inherited-members:
```
## fastdeploy.vision.faceid.PartialFC
```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.PartialFC
:members:
:inherited-members:
```
## fastdeploy.vision.faceid.VPL
```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.VPL
:members:
:inherited-members:
```
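
As the FaceRecognitionResult comment later in this commit notes, the embedding can be used to measure face similarity. A hedged sketch, assuming an ArcFace ONNX file and two pre-cropped face images (all file names are placeholders):

```python
import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.faceid.ArcFace("arcface.onnx")  # hypothetical local model path

def embed(image_path):
    # predict() returns a FaceRecognitionResult; `embedding` is a plain float vector.
    return np.asarray(model.predict(cv2.imread(image_path)).embedding)

a = embed("face_a.jpg")
b = embed("face_b.jpg")
# Cosine similarity between the two embeddings; larger means more likely the same identity.
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)
```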


@@ -18,4 +18,6 @@ FastDeploy
image_classification.md
keypoint_detection.md
matting.md
+face_recognition.md
+face_detection.md
vision_results_en.md


@@ -1,3 +1,17 @@
# Matting API
-comming soon...
+## fastdeploy.vision.matting.MODNet
+```{eval-rst}
+.. autoclass:: fastdeploy.vision.matting.MODNet
+:members:
+:inherited-members:
+```
+## fastdeploy.vision.matting.PPMatting
+```{eval-rst}
+.. autoclass:: fastdeploy.vision.matting.PPMatting
+:members:
+:inherited-members:
+```
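
A short, hedged sketch of the matting API documented above, assuming a MODNet ONNX file and a portrait image (both file names are placeholders):

```python
import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.matting.MODNet("modnet.onnx")        # hypothetical path
result = model.predict(cv2.imread("portrait.jpg"))     # MattingResult

# `alpha` is a flat list of h*w values in [0, 1]; `shape` holds (h, w).
h, w = result.shape[0], result.shape[1]
alpha = np.asarray(result.alpha, dtype=np.float32).reshape(h, w)
cv2.imwrite("alpha.png", (alpha * 255).astype(np.uint8))
```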


@@ -63,3 +63,93 @@
:members:
:inherited-members:
```
## fastdeploy.vision.detection.NanoDetPlus
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.NanoDetPlus
:members:
:inherited-members:
```
## fastdeploy.vision.detection.ScaledYOLOv4
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.ScaledYOLOv4
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOR
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOR
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOv5
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv5
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOv5Lite
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv5Lite
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOv6
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv6
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOv7
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv7
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOR
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOR
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOv7End2EndORT
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv7End2EndORT
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOv7End2EndTRT
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv7End2EndTRT
:members:
:inherited-members:
```
## fastdeploy.vision.detection.YOLOX
```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOX
:members:
:inherited-members:
```
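
The detection classes above share one constructor/predict pattern; a hedged sketch using YOLOv7 (model and image paths are placeholders):

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv7("yolov7.onnx")  # hypothetical path
result = model.predict(cv2.imread("street.jpg"), conf_threshold=0.25, nms_iou_threshold=0.5)

# DetectionResult keeps parallel lists: one box/score/label id per detected object.
for box, score, label_id in zip(result.boxes, result.scores, result.label_ids):
    print(label_id, score, box)  # box is [xmin, ymin, xmax, ymax]
```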


@@ -25,7 +25,7 @@ namespace vision {
 *
 */
namespace classification {
-/*! @brief ResNet series model
+/*! @brief Torchvision ResNet series model
 */
class FASTDEPLOY_DECL ResNet : public FastDeployModel {
 public:
@@ -44,17 +44,18 @@ class FASTDEPLOY_DECL ResNet : public FastDeployModel {
  virtual std::string ModelName() const { return "ResNet"; }
  /** \brief Predict for the input "im", the result will be saved in "result".
   *
-   * \param[in] im Input image for inference.
+   * \param[in] im The input image data, comes from cv::imread(), is a 3-D array with layout HWC, BGR format
   * \param[in] result Saving the inference result.
   * \param[in] topk The length of return values, e.g., if topk==2, the result will include the 2 most possible class label for input image.
   */
  virtual bool Predict(cv::Mat* im, ClassifyResult* result, int topk = 1);
-  /// Tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// Mean parameters for normalize
+  /// Mean parameters for normalize, size should be the the same as channels
  std::vector<float> mean_vals;
-  /// Std parameters for normalize
+  /// Std parameters for normalize, size should be the the same as channels
  std::vector<float> std_vals;
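
A hedged Python sketch of the members documented above, assuming the ResNet wrapper exposes them as writable properties in the same way as the detection wrappers later in this commit (model path and values are illustrative):

```python
import cv2
import fastdeploy as fd

model = fd.vision.classification.ResNet("resnet50.onnx")  # hypothetical ONNX export

# Resize target plus per-channel normalization parameters (ImageNet-style values shown).
model.size = [224, 224]
model.mean_vals = [0.485, 0.456, 0.406]
model.std_vals = [0.229, 0.224, 0.225]

result = model.predict(cv2.imread("cat.jpg"), topk=2)
print(result)  # ClassifyResult with the two most probable labels
```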


@@ -154,88 +154,117 @@ struct FASTDEPLOY_DECL OCRResult : public BaseResult {
  std::string Str();
};
+/*! @brief Face detection result structure for all the face detection models
+ */
struct FASTDEPLOY_DECL FaceDetectionResult : public BaseResult {
-  // box: xmin, ymin, xmax, ymax
+  /** \brief All the detected object boxes for an input image, the size of `boxes` is the number of detected objects, and the element of `boxes` is a array of 4 float values, means [xmin, ymin, xmax, ymax]
+   */
  std::vector<std::array<float, 4>> boxes;
-  // landmark: x, y, landmarks may empty if the
-  // model don't detect face with landmarks.
-  // Note, one face might have multiple landmarks,
-  // such as 5/19/21/68/98/..., etc.
+  /** \brief
+   * If the model detect face with landmarks, every detected object box correspoing to a landmark, which is a array of 2 float values, means location [x,y]
+   */
  std::vector<std::array<float, 2>> landmarks;
+  /** \brief
+   * Indicates the confidence of all targets detected from a single image, and the number of elements is consistent with boxes.size()
+   */
  std::vector<float> scores;
  ResultType type = ResultType::FACE_DETECTION;
-  // set landmarks_per_face manually in your post processes.
+  /** \brief
+   * `landmarks_per_face` indicates the number of face landmarks for each detected face
+   * if the model's output contains face landmarks (such as YOLOv5Face, SCRFD, ...)
+   */
  int landmarks_per_face;
  FaceDetectionResult() { landmarks_per_face = 0; }
  FaceDetectionResult(const FaceDetectionResult& res);
+  /// Clear detection result
  void Clear();
  void Reserve(int size);
  void Resize(int size);
+  /// Debug function, convert the result to string to print
  std::string Str();
};
+/*! @brief Segmentation result structure for all the segmentation models
+ */
struct FASTDEPLOY_DECL SegmentationResult : public BaseResult {
-  // mask
+  /** \brief
+   * `label_map` stores the pixel-level category labels for input image. the number of pixels is equal to label_map.size()
+   */
  std::vector<uint8_t> label_map;
+  /** \brief
+   * `score_map` stores the probability of the predicted label for each pixel of input image.
+   */
  std::vector<float> score_map;
+  /// The output shape, means [H, W]
  std::vector<int64_t> shape;
  bool contain_score_map = false;
  ResultType type = ResultType::SEGMENTATION;
+  /// Clear detection result
  void Clear();
  void Reserve(int size);
  void Resize(int size);
+  /// Debug function, convert the result to string to print
  std::string Str();
};
+/*! @brief Face recognition result structure for all the Face recognition models
+ */
struct FASTDEPLOY_DECL FaceRecognitionResult : public BaseResult {
-  // face embedding vector with 128/256/512 ... dim
+  /** \brief The feature embedding that represents the final extraction of the face recognition model can be used to calculate the feature similarity between faces.
+   */
  std::vector<float> embedding;
  ResultType type = ResultType::FACE_RECOGNITION;
  FaceRecognitionResult() {}
  FaceRecognitionResult(const FaceRecognitionResult& res);
+  /// Clear detection result
  void Clear();
  void Reserve(int size);
  void Resize(int size);
+  /// Debug function, convert the result to string to print
  std::string Str();
};
+/*! @brief Matting result structure for all the Matting models
+ */
struct FASTDEPLOY_DECL MattingResult : public BaseResult {
-  // alpha matte and fgr (predicted foreground: HWC/BGR float32)
+  /** \brief
+  `alpha` is a one-dimensional vector, which is the predicted alpha transparency value. The range of values is [0., 1.], and the length is hxw. h, w are the height and width of the input image
+   */
  std::vector<float> alpha;       // h x w
+  /** \brief
+  If the model can predict foreground, `foreground` save the predicted foreground image, the shape is [hight,width,channel] generally.
+   */
  std::vector<float> foreground;  // h x w x c (c=3 default)
-  // height, width, channel for foreground and alpha
-  // must be (h,w,c) and setup before Reserve and Resize
-  // c is only for foreground if contain_foreground is true.
+  /** \brief
+   * The shape of output result, when contain_foreground == false, shape only contains (h, w), when contain_foreground == true, shape contains (h, w, c), and c is generally 3
+   */
  std::vector<int64_t> shape;
+  /** \brief
+  If the model can predict alpha matte and foreground, contain_foreground = true, default false
+   */
  bool contain_foreground = false;
  ResultType type = ResultType::MATTING;
  MattingResult() {}
  MattingResult(const MattingResult& res);
+  /// Clear detection result
  void Clear();
  void Reserve(int size);
  void Resize(int size);
+  /// Debug function, convert the result to string to print
  std::string Str();
};
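
The fields documented above map directly onto the Python result objects; a small sketch that walks a FaceDetectionResult, assuming it came from a detector that outputs landmarks (e.g. SCRFD):

```python
def print_face_result(result):
    """Walk a FaceDetectionResult: `boxes` and `scores` are parallel lists, and each
    detected face owns `landmarks_per_face` consecutive [x, y] landmark points."""
    lpf = result.landmarks_per_face
    for i, (box, score) in enumerate(zip(result.boxes, result.scores)):
        points = result.landmarks[i * lpf:(i + 1) * lpf] if lpf > 0 else []
        print(box, score, points)
```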


@@ -53,21 +53,23 @@ class FASTDEPLOY_DECL NanoDetPlus : public FastDeployModel {
                       float conf_threshold = 0.35f,
                       float nms_iou_threshold = 0.5f);
-  /// tuple of input size (width, height), e.g (320, 320)
+  /*! @brief
+  Argument for image preprocessing step, tuple of input size (width, height), e.g (320, 320)
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /*! @brief
-  keep aspect ratio or not when perform resize operation. This option is set as `false` by default in NanoDet-Plus
-  */
+  // keep aspect ratio or not when perform resize operation.
+  // This option is set as `false` by default in NanoDet-Plus
  bool keep_ratio;
-  /*! @brief
-  downsample strides for NanoDet-Plus to generate anchors, will take (8, 16, 32, 64) as default values
-  */
+  // downsample strides for NanoDet-Plus to generate anchors,
+  // will take (8, 16, 32, 64) as default values
  std::vector<int> downsample_strides;
-  /// for offseting the boxes by classes when using NMS, default 4096
+  // for offseting the boxes by classes when using NMS, default 4096
  float max_wh;
-  /// reg_max for GFL regression, default 7
+  /*! @brief
+  Argument for image postprocessing step, reg_max for GFL regression, default 7
+  */
  int reg_max;
 private:


@@ -50,23 +50,23 @@ class FASTDEPLOY_DECL ScaledYOLOv4 : public FastDeployModel {
                       float conf_threshold = 0.25,
                       float nms_iou_threshold = 0.5);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
-  /// for offseting the boxes by classes when using NMS
+  // for offseting the boxes by classes when using NMS
  float max_wh;
 private:


@@ -1,4 +1,5 @@
-// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. //NOLINT
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -48,23 +49,24 @@ class FASTDEPLOY_DECL YOLOR : public FastDeployModel {
                       float conf_threshold = 0.25,
                       float nms_iou_threshold = 0.5);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
-  /// for offseting the boxes by classes when using NMS
+  // for offseting the boxes by classes when using NMS
  float max_wh;
 private:


@@ -77,23 +77,24 @@ class FASTDEPLOY_DECL YOLOv5 : public FastDeployModel {
                       float conf_threshold, float nms_iou_threshold, bool multi_label,
                       float max_wh = 7680.0);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size_;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value_;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad_;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad_;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up_;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride_;
-  /// for offseting the boxes by classes when using NMS
+  // for offseting the boxes by classes when using NMS
  float max_wh_;
  /// for different strategies to get boxes when postprocessing
  bool multi_label_;


@@ -53,31 +53,30 @@ class FASTDEPLOY_DECL YOLOv5Lite : public FastDeployModel {
  void UseCudaPreprocessing(int max_img_size = 3840 * 2160);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
-  /// for offseting the boxes by classes when using NMS
+  // for offseting the boxes by classes when using NMS
  float max_wh;
-  /*! @brief
-  downsample strides for YOLOv5Lite to generate anchors, will take (8,16,32) as default values, might have stride=64.
-  */
+  // downsample strides for YOLOv5Lite to generate anchors,
+  // will take (8,16,32) as default values, might have stride=64.
  std::vector<int> downsample_strides;
-  /*! @brief
-  anchors parameters, downsample_strides will take (8,16,32), each stride has three anchors with width and hight
-  */
+  // anchors parameters, downsample_strides will take (8,16,32),
+  // each stride has three anchors with width and hight
  std::vector<std::vector<float>> anchor_config;
  /*! @brief
  whether the model_file was exported with decode module. The official


@@ -31,10 +31,12 @@ void BindYOLOv5Lite(pybind11::module& m) {
      .def("use_cuda_preprocessing",
           [](vision::detection::YOLOv5Lite& self, int max_image_size) {
             self.UseCudaPreprocessing(max_image_size);
           })
      .def_readwrite("size", &vision::detection::YOLOv5Lite::size)
      .def_readwrite("padding_value",
                     &vision::detection::YOLOv5Lite::padding_value)
+      .def_readwrite("downsample_strides",
+                     &vision::detection::YOLOv5Lite::downsample_strides)
      .def_readwrite("is_mini_pad", &vision::detection::YOLOv5Lite::is_mini_pad)
      .def_readwrite("is_no_pad", &vision::detection::YOLOv5Lite::is_no_pad)
      .def_readwrite("is_scale_up", &vision::detection::YOLOv5Lite::is_scale_up)


@@ -56,25 +56,25 @@ class FASTDEPLOY_DECL YOLOv6 : public FastDeployModel {
  void UseCudaPreprocessing(int max_img_size = 3840 * 2160);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
-  /*! @brief
-  for offseting the boxes by classes when using NMS, default 4096 in meituan/YOLOv6
-  */
+  // for offseting the boxes by classes when using NMS,
+  // default 4096 in meituan/YOLOv6
  float max_wh;
 private:


@@ -53,23 +53,24 @@ class FASTDEPLOY_DECL YOLOv7 : public FastDeployModel {
  void UseCudaPreprocessing(int max_img_size = 3840 * 2160);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
-  /// for offseting the boxes by classes when using NMS
+  // for offseting the boxes by classes when using NMS
  float max_wh;
 private:


@@ -47,21 +47,22 @@ class FASTDEPLOY_DECL YOLOv7End2EndORT : public FastDeployModel {
  virtual bool Predict(cv::Mat* im, DetectionResult* result,
                       float conf_threshold = 0.25);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
 private:


@@ -52,21 +52,22 @@ class FASTDEPLOY_DECL YOLOv7End2EndTRT : public FastDeployModel {
  void UseCudaPreprocessing(int max_img_size = 3840 * 2160);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
 private:


@@ -51,9 +51,11 @@ class FASTDEPLOY_DECL YOLOX : public FastDeployModel {
                       float conf_threshold = 0.25,
                       float nms_iou_threshold = 0.5);
-  /// tuple of (width, height)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
  /*! @brief
  whether the model_file was exported with decode module. The official
@@ -62,11 +64,10 @@ class FASTDEPLOY_DECL YOLOX : public FastDeployModel {
  was exported with decode module.
  */
  bool is_decode_exported;
-  /*! @brief
-  downsample strides for YOLOX to generate anchors, will take (8,16,32) as default values, might have stride=64
-  */
+  // downsample strides for YOLOX to generate anchors,
+  // will take (8,16,32) as default values, might have stride=64
  std::vector<int> downsample_strides;
-  /// for offseting the boxes by classes when using NMS, default 4096
+  // for offseting the boxes by classes when using NMS, default 4096
  float max_wh;
 private:


@@ -52,19 +52,25 @@ class FASTDEPLOY_DECL RetinaFace : public FastDeployModel {
                       float conf_threshold = 0.25f,
                       float nms_iou_threshold = 0.4f);
-  /// tuple of (width, height), default (640, 640)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default (640, 640)
+  */
  std::vector<int> size;
  /*! @brief
-  variance in RetinaFace's prior-box(anchor) generate process, default (0.1, 0.2)
+  Argument for image postprocessing step, variance in RetinaFace's prior-box(anchor) generate process, default (0.1, 0.2)
  */
  std::vector<float> variance;
  /*! @brief
-  downsample strides (namely, steps) for RetinaFace to generate anchors, will take (8,16,32) as default values
+  Argument for image postprocessing step, downsample strides (namely, steps) for RetinaFace to generate anchors, will take (8,16,32) as default values
  */
  std::vector<int> downsample_strides;
-  /// min sizes, width and height for each anchor
+  /*! @brief
+  Argument for image postprocessing step, min sizes, width and height for each anchor
+  */
  std::vector<std::vector<int>> min_sizes;
-  /// landmarks_per_face, default 5 in RetinaFace
+  /*! @brief
+  Argument for image postprocessing step, landmarks_per_face, default 5 in RetinaFace
+  */
  int landmarks_per_face;
 private:


@@ -51,33 +51,40 @@ class FASTDEPLOY_DECL SCRFD : public FastDeployModel {
                       float conf_threshold = 0.25f,
                       float nms_iou_threshold = 0.4f);
-  /// tuple of (width, height), default (640, 640)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default (640, 640)
+  */
  std::vector<int> size;
-  /// padding value, size should be the same as channels
+  // padding value, size should be the same as channels
  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
+  // only pad to the minimum rectange which height and width is times of stride
  bool is_mini_pad;
-  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
-  */
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
  /*! @brief
-  downsample strides (namely, steps) for SCRFD to generate anchors, will take (8,16,32) as default values
+  Argument for image postprocessing step, downsample strides (namely, steps) for SCRFD to generate anchors, will take (8,16,32) as default values
  */
  std::vector<int> downsample_strides;
-  /// landmarks_per_face, default 5 in SCRFD
+  /*! @brief
+  Argument for image postprocessing step, landmarks_per_face, default 5 in SCRFD
+  */
  int landmarks_per_face;
-  /// the outputs of onnx file with key points features or not
+  /*! @brief
+  Argument for image postprocessing step, the outputs of onnx file with key points features or not
+  */
  bool use_kps;
-  /// the upperbond number of boxes processed by nms
+  /*! @brief
+  Argument for image postprocessing step, the upperbond number of boxes processed by nms
+  */
  int max_nms;
-  /// number anchors of each stride
+  /// Argument for image postprocessing step, anchor number of each stride
  unsigned int num_anchors;
 private:


@@ -52,7 +52,9 @@ class FASTDEPLOY_DECL UltraFace : public FastDeployModel {
                       float conf_threshold = 0.7f,
                       float nms_iou_threshold = 0.3f);
-  /// tuple of (width, height), default (320, 240)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default (320, 240)
+  */
  std::vector<int> size;
 private:


@@ -50,26 +50,27 @@ class FASTDEPLOY_DECL YOLOv5Face : public FastDeployModel {
                       float conf_threshold = 0.25,
                       float nms_iou_threshold = 0.5);
-  /// tuple of (width, height)
-  std::vector<int> size;
-  /// padding value, size should be the same as channels
-  std::vector<float> padding_value;
-  /// only pad to the minimum rectange which height and width is times of stride
-  bool is_mini_pad;
  /*! @brief
-  while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize
  */
+  std::vector<int> size;
+  // padding value, size should be the same as channels
+  std::vector<float> padding_value;
+  // only pad to the minimum rectange which height and width is times of stride
+  bool is_mini_pad;
+  // while is_mini_pad = false and is_no_pad = true,
+  // will resize the image to the set size
  bool is_no_pad;
-  /*! @brief
-  if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
-  */
+  // if is_scale_up is false, the input image only can be zoom out,
+  // the maximum resize scale cannot exceed 1.0
  bool is_scale_up;
-  /// padding stride, for is_mini_pad
+  // padding stride, for is_mini_pad
  int stride;
  /*! @brief
-  setup the number of landmarks for per face (if have), default 5 in
+  Argument for image postprocessing step, setup the number of landmarks for per face (if have), default 5 in
  official yolov5face note that, the outupt tensor's shape must be:
  (1,n,4+1+2*landmarks_per_face+1=box+obj+landmarks+cls)
  */


@@ -40,15 +40,21 @@ class FASTDEPLOY_DECL InsightFaceRecognitionModel : public FastDeployModel {
  virtual std::string ModelName() const { return "deepinsight/insightface"; }
-  /// tuple of (width, height), default (112, 112)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default (112, 112)
+  */
  std::vector<int> size;
-  /// alpha values for normalization
+  /// Argument for image preprocessing step, alpha values for normalization
  std::vector<float> alpha;
-  /// beta values for normalization
+  /// Argument for image preprocessing step, beta values for normalization
  std::vector<float> beta;
-  /// whether to swap the B and R channel, such as BGR->RGB, default true.
+  /*! @brief
+  Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
+  */
  bool swap_rb;
-  /// whether to apply l2 normalize to embedding values, default;
+  /*! @brief
+  Argument for image postprocessing step, whether to apply l2 normalize to embedding values, default false;
+  */
  bool l2_normalize;
  /** \brief Predict the face recognition result for an input image
   *


@@ -39,13 +39,21 @@ class FASTDEPLOY_DECL MODNet : public FastDeployModel {
  std::string ModelName() const { return "matting/MODNet"; }
-  /// tuple of (width, height), default (256, 256)
+  /*! @brief
+  Argument for image preprocessing step, tuple of (width, height), decide the target size after resize, default (256, 256)
+  */
  std::vector<int> size;
-  /// parameters for normalization
+  /*! @brief
+  Argument for image preprocessing step, parameters for normalization, size should be the the same as channels
+  */
  std::vector<float> alpha;
-  /// parameters for normalization
+  /*! @brief
+  Argument for image preprocessing step, parameters for normalization, size should be the the same as channels
+  */
  std::vector<float> beta;
-  /// whether to swap the B and R channel, such as BGR->RGB, default true.
+  /*! @brief
+  Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
+  */
  bool swap_rb;
  /** \brief Predict the matting result for an input image
   *


@@ -24,6 +24,13 @@ class NanoDetPlus(FastDeployModel):
                  params_file="",
                  runtime_option=None,
                  model_format=ModelFormat.ONNX):
+        """Load a NanoDetPlus model exported by NanoDet.
+        :param model_file: (str)Path of model file, e.g ./nanodet.onnx
+        :param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
+        :param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
+        :param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
+        """
         # Initialize backend_option via the base class;
         # the initialized option is stored in self._runtime_option
         super(NanoDetPlus, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class NanoDetPlus(FastDeployModel):
         assert self.initialized, "NanoDetPlus initialize failed."

     def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
+        """Detect an input image
+        :param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
+        :param conf_threshold: confidence threashold for postprocessing, default is 0.25
+        :param nms_iou_threshold: iou threashold for NMS, default is 0.5
+        :return: DetectionResult
+        """
         return self._model.predict(input_image, conf_threshold,
                                    nms_iou_threshold)
@@ -41,26 +55,36 @@ class NanoDetPlus(FastDeployModel):
     # Most of these are preprocessing-related; e.g. model.size = [416, 416]
     # changes the resize target during preprocessing (if the model supports it)
     @property
     def size(self):
+        """
+        Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
+        """
         return self._model.size

     @property
     def padding_value(self):
+        # padding value, size should be the same as channels
         return self._model.padding_value

     @property
     def keep_ratio(self):
+        # keep aspect ratio or not when perform resize operation. This option is set as false by default in NanoDet-Plus
         return self._model.keep_ratio

     @property
     def downsample_strides(self):
+        # downsample strides for NanoDet-Plus to generate anchors, will take (8, 16, 32, 64) as default values
         return self._model.downsample_strides

     @property
     def max_wh(self):
+        # for offseting the boxes by classes when using NMS, default 4096
         return self._model.max_wh

     @property
     def reg_max(self):
+        """
+        reg_max for GFL regression, default 7
+        """
         return self._model.reg_max

     @size.setter


@@ -24,6 +24,13 @@ class ScaledYOLOv4(FastDeployModel):
                  params_file="",
                  runtime_option=None,
                  model_format=ModelFormat.ONNX):
+        """Load a ScaledYOLOv4 model exported by ScaledYOLOv4.
+        :param model_file: (str)Path of model file, e.g ./scaled_yolov4.onnx
+        :param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
+        :param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
+        :param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
+        """
         # Initialize backend_option via the base class;
         # the initialized option is stored in self._runtime_option
         super(ScaledYOLOv4, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class ScaledYOLOv4(FastDeployModel):
         assert self.initialized, "ScaledYOLOv4 initialize failed."

     def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
+        """Detect an input image
+        :param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
+        :param conf_threshold: confidence threashold for postprocessing, default is 0.25
+        :param nms_iou_threshold: iou threashold for NMS, default is 0.5
+        :return: DetectionResult
+        """
         return self._model.predict(input_image, conf_threshold,
                                    nms_iou_threshold)
@@ -41,30 +55,39 @@ class ScaledYOLOv4(FastDeployModel):
     # Most of these are preprocessing-related; e.g. model.size = [1280, 1280]
     # changes the resize target during preprocessing (if the model supports it)
     @property
     def size(self):
+        """
+        Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
+        """
         return self._model.size

     @property
     def padding_value(self):
+        # padding value, size should be the same as channels
         return self._model.padding_value

     @property
     def is_no_pad(self):
+        # while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
         return self._model.is_no_pad

     @property
     def is_mini_pad(self):
+        # only pad to the minimum rectange which height and width is times of stride
         return self._model.is_mini_pad

     @property
     def is_scale_up(self):
+        # if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
         return self._model.is_scale_up

     @property
     def stride(self):
+        # padding stride, for is_mini_pad
         return self._model.stride

     @property
     def max_wh(self):
+        # for offseting the boxes by classes when using NMS
         return self._model.max_wh

     @size.setter
@@ -92,19 +115,21 @@ class ScaledYOLOv4(FastDeployModel):
     @is_mini_pad.setter
     def is_mini_pad(self, value):
         assert isinstance(
-            value, bool), "The value to set `is_mini_pad` must be type of bool."
+            value,
+            bool), "The value to set `is_mini_pad` must be type of bool."
         self._model.is_mini_pad = value

     @is_scale_up.setter
     def is_scale_up(self, value):
         assert isinstance(
-            value, bool), "The value to set `is_scale_up` must be type of bool."
+            value,
+            bool), "The value to set `is_scale_up` must be type of bool."
         self._model.is_scale_up = value

     @stride.setter
     def stride(self, value):
-        assert isinstance(value,
-                          int), "The value to set `stride` must be type of int."
+        assert isinstance(
+            value, int), "The value to set `stride` must be type of int."
         self._model.stride = value

     @max_wh.setter


@@ -24,6 +24,13 @@ class YOLOR(FastDeployModel):
                  params_file="",
                  runtime_option=None,
                  model_format=ModelFormat.ONNX):
+        """Load a YOLOR model exported by YOLOR
+        :param model_file: (str)Path of model file, e.g ./yolor.onnx
+        :param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
+        :param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
+        :param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
+        """
         # Initialize backend_option via the base class;
         # the initialized option is stored in self._runtime_option
         super(YOLOR, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class YOLOR(FastDeployModel):
         assert self.initialized, "YOLOR initialize failed."

     def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
+        """Detect an input image
+        :param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
+        :param conf_threshold: confidence threashold for postprocessing, default is 0.25
+        :param nms_iou_threshold: iou threashold for NMS, default is 0.5
+        :return: DetectionResult
+        """
         return self._model.predict(input_image, conf_threshold,
                                    nms_iou_threshold)
@@ -41,30 +55,39 @@ class YOLOR(FastDeployModel):
     # Most of these are preprocessing-related; e.g. model.size = [1280, 1280]
     # changes the resize target during preprocessing (if the model supports it)
     @property
     def size(self):
+        """
+        Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
+        """
         return self._model.size

     @property
     def padding_value(self):
+        # padding value, size should be the same as channels
         return self._model.padding_value

     @property
     def is_no_pad(self):
+        # while is_mini_pad = false and is_no_pad = true, will resize the image to the set size
         return self._model.is_no_pad

     @property
     def is_mini_pad(self):
+        # only pad to the minimum rectange which height and width is times of stride
         return self._model.is_mini_pad

     @property
     def is_scale_up(self):
+        # if is_scale_up is false, the input image only can be zoom out, the maximum resize scale cannot exceed 1.0
         return self._model.is_scale_up

     @property
     def stride(self):
+        # padding stride, for is_mini_pad
         return self._model.stride

     @property
     def max_wh(self):
+        # for offseting the boxes by classes when using NMS
         return self._model.max_wh

     @size.setter
@@ -92,19 +115,21 @@ class YOLOR(FastDeployModel):
     @is_mini_pad.setter
     def is_mini_pad(self, value):
         assert isinstance(
-            value, bool), "The value to set `is_mini_pad` must be type of bool."
+            value,
+            bool), "The value to set `is_mini_pad` must be type of bool."
         self._model.is_mini_pad = value

     @is_scale_up.setter
     def is_scale_up(self, value):
         assert isinstance(
-            value, bool), "The value to set `is_scale_up` must be type of bool."
+            value,
+            bool), "The value to set `is_scale_up` must be type of bool."
         self._model.is_scale_up = value

     @stride.setter
     def stride(self, value):
-        assert isinstance(value,
-                          int), "The value to set `stride` must be type of int."
+        assert isinstance(
+            value, int), "The value to set `stride` must be type of int."
         self._model.stride = value

     @max_wh.setter


@@ -24,6 +24,13 @@ class YOLOv5(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv5 model exported by YOLOv5.
:param model_file: (str)Path of model file, e.g. ./yolov5.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv5, self).__init__(runtime_option)
@@ -34,12 +41,16 @@ class YOLOv5(FastDeployModel):
assert self.initialized, "YOLOv5 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: iou threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
@staticmethod
def preprocess(input_image,
size=[640, 640],
@@ -69,30 +80,39 @@ class YOLOv5(FastDeployModel):
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@property
@@ -124,19 +144,21 @@ class YOLOv5(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter
@@ -148,5 +170,6 @@ class YOLOv5(FastDeployModel):
@multi_label.setter
def multi_label(self, value):
assert isinstance(
value,
bool), "The value to set `multi_label` must be type of bool."
self._model.multi_label = value
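As a quick sanity check of the predict() contract documented above, here is a minimal usage sketch, assuming fastdeploy and OpenCV are installed and that the class is exposed under fastdeploy.vision.detection; the file paths are placeholders:

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv5("./yolov5s.onnx")  # hypothetical export
assert model.initialized

im = cv2.imread("test.jpg")  # 3-D HWC array in BGR, as the docstring requires
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)
print(result)  # DetectionResult with boxes, scores and label ids
```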


@@ -24,6 +24,13 @@ class YOLOv5Lite(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv5Lite model exported by YOLOv5Lite.
:param model_file: (str)Path of model file, e.g. ./yolov5lite.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv5Lite, self).__init__(runtime_option)
@@ -34,50 +41,76 @@ class YOLOv5Lite(FastDeployModel):
assert self.initialized, "YOLOv5Lite initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: iou threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Wrappers for some properties related to the YOLOv5Lite model
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@property
def is_decode_exported(self):
"""
Whether the model_file was exported with the decode module.
The official YOLOv5Lite/export.py script will export the ONNX file without the decode module.
Please set it to True manually if the model file was exported with the decode module.
False: ONNX file without decode module. True: ONNX file with decode module.
"""
return self._model.is_decode_exported
@property
def anchor_config(self):
return self._model.anchor_config
@property
def downsample_strides(self):
"""
Downsample strides for YOLOv5Lite to generate anchors; (8, 16, 32) by default, and may include stride=64.
"""
return self._model.downsample_strides
@size.setter
def size(self, wh):
assert isinstance(wh, (list, tuple)),\
@@ -103,19 +136,21 @@ class YOLOv5Lite(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter
@@ -138,3 +173,10 @@ class YOLOv5Lite(FastDeployModel):
assert isinstance(anchor_config_val[0], list),\
"The value to set `anchor_config` must be 2-dimensions tuple or list"
self._model.anchor_config = anchor_config_val
@downsample_strides.setter
def downsample_strides(self, value):
assert isinstance(
value,
list), "The value to set `downsample_strides` must be type of list."
self._model.downsample_strides = value
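For a model exported with the decode head, the new is_decode_exported, anchor_config and downsample_strides properties need to be set before calling predict(). A sketch under those assumptions, with made-up anchor values purely for illustration:

```python
import fastdeploy as fd

model = fd.vision.detection.YOLOv5Lite("./v5lite_decoded.onnx")  # hypothetical file
model.is_decode_exported = True          # the export kept the decode module
model.downsample_strides = [8, 16, 32]   # must be a list, see the setter above
# anchor_config must be a 2-dimensional list: one row of (w, h) pairs per stride level.
model.anchor_config = [[10.0, 13.0, 16.0, 30.0, 33.0, 23.0],
                       [30.0, 61.0, 62.0, 45.0, 59.0, 119.0],
                       [116.0, 90.0, 156.0, 198.0, 373.0, 326.0]]
```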


@@ -24,6 +24,13 @@ class YOLOv6(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv6 model exported by YOLOv6.
:param model_file: (str)Path of model file, e.g. ./yolov6.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv6, self).__init__(runtime_option)
@@ -34,40 +41,53 @@ class YOLOv6(FastDeployModel):
assert self.initialized, "YOLOv6 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: iou threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Wrappers for some properties related to the YOLOv6 model
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@size.setter
@@ -95,19 +115,21 @@ class YOLOv6(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter


@@ -24,6 +24,13 @@ class YOLOv7(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv7 model exported by YOLOv7.
:param model_file: (str)Path of model file, e.g. ./yolov7.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv7, self).__init__(runtime_option)
@@ -34,40 +41,53 @@ class YOLOv7(FastDeployModel):
assert self.initialized, "YOLOv7 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: iou threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Wrappers for some properties related to the YOLOv7 model
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@size.setter
@@ -95,19 +115,21 @@ class YOLOv7(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter
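use_cuda_preprocessing() only makes sense when the runtime itself runs on GPU. A hedged sketch of how it might be wired up; the RuntimeOption calls are the standard ones, everything else (paths, device id) is illustrative:

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)  # CUDA preprocessing assumes a GPU runtime

model = fd.vision.detection.YOLOv7("./yolov7.onnx", runtime_option=option)
# Pre-allocate CUDA preprocessing buffers for images up to 4K (the default cap).
model.use_cuda_preprocessing(max_image_size=3840 * 2160)
```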


@@ -24,6 +24,13 @@ class YOLOv7End2EndORT(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv7End2EndORT model exported by YOLOv7.
:param model_file: (str)Path of model file, e.g. ./yolov7end2end_ort.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv7End2EndORT, self).__init__(runtime_option)
@@ -34,32 +41,46 @@ class YOLOv7End2EndORT(FastDeployModel):
assert self.initialized, "YOLOv7End2End initialize failed."
def predict(self, input_image, conf_threshold=0.25):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold)
# Wrappers for some model-related properties
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@size.setter
@@ -87,17 +108,19 @@ class YOLOv7End2EndORT(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
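Unlike the other detectors, the end-to-end variants embed NMS in the exported graph, so predict() only takes a confidence threshold. A minimal sketch with placeholder paths:

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv7End2EndORT("./yolov7-end2end-ort.onnx")  # hypothetical file
im = cv2.imread("test.jpg")
# No nms_iou_threshold argument: NMS already runs inside the exported model.
result = model.predict(im, conf_threshold=0.25)
print(result)
```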


@@ -24,6 +24,13 @@ class YOLOv7End2EndTRT(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv7End2EndTRT model exported by YOLOv7.
:param model_file: (str)Path of model file, e.g. ./yolov7end2end_trt.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv7End2EndTRT, self).__init__(runtime_option)
@@ -34,35 +41,46 @@ class YOLOv7End2EndTRT(FastDeployModel):
assert self.initialized, "YOLOv7End2EndTRT initialize failed."
def predict(self, input_image, conf_threshold=0.25):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold)
# Wrappers for some model-related properties
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@size.setter
@@ -90,17 +108,19 @@ class YOLOv7End2EndTRT(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value


@@ -24,6 +24,13 @@ class YOLOX(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOX model exported by YOLOX.
:param model_file: (str)Path of model file, e.g. ./yolox.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOX, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class YOLOX(FastDeployModel):
assert self.initialized, "YOLOX initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: iou threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,22 +55,35 @@ class YOLOX(FastDeployModel):
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_decode_exported(self):
"""
Whether the model_file was exported with the decode module.
The official YOLOX/tools/export_onnx.py script will export the ONNX file without the decode module.
Please set it to True manually if the model file was exported with the decode module.
"""
return self._model.is_decode_exported
@property
def downsample_strides(self):
"""
Downsample strides for YOLOX to generate anchors; (8, 16, 32) by default, and may include stride=64.
"""
return self._model.downsample_strides
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@size.setter
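The same decode-related switches exist for YOLOX. A sketch assuming a hypothetical export that kept the decode module, contrary to the official export_onnx.py default:

```python
import fastdeploy as fd

model = fd.vision.detection.YOLOX("./yolox_s_decoded.onnx")  # hypothetical file
model.is_decode_exported = True         # default exports strip the decode module
model.downsample_strides = [8, 16, 32]  # extend with 64 for variants that use it
```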


@@ -24,6 +24,13 @@ class RetinaFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a RetinaFace model exported by RetinaFace.
:param model_file: (str)Path of model file, e.g. ./retinaface.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(RetinaFace, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class RetinaFace(FastDeployModel):
assert self.initialized, "RetinaFace initialize failed."
def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.7
:param nms_iou_threshold: iou threshold for NMS, default is 0.3
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,22 +55,37 @@ class RetinaFace(FastDeployModel):
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [640, 480], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def variance(self):
"""
Argument for image postprocessing step, variance used in RetinaFace's prior-box (anchor) generation process, default (0.1, 0.2)
"""
return self._model.variance
@property
def downsample_strides(self):
"""
Argument for image postprocessing step, downsample strides (namely, steps) for RetinaFace to generate anchors, will take (8, 16, 32) as default values
"""
return self._model.downsample_strides
@property
def min_sizes(self):
"""
Argument for image postprocessing step, min sizes, width and height for each anchor
"""
return self._model.min_sizes
@property
def landmarks_per_face(self):
"""
Argument for image postprocessing step, landmarks_per_face, default 5 in RetinaFace
"""
return self._model.landmarks_per_face
@size.setter
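The face detectors return a FaceDetectionResult rather than a DetectionResult. A minimal sketch of the documented call, assuming the class is exposed under fastdeploy.vision.facedet and that the result carries boxes, scores and per-face landmarks; paths are placeholders:

```python
import cv2
import fastdeploy as fd

model = fd.vision.facedet.RetinaFace("./retinaface.onnx")  # hypothetical path
im = cv2.imread("face.jpg")
result = model.predict(im, conf_threshold=0.7, nms_iou_threshold=0.3)
# With landmarks_per_face = 5 (the default), each detected face should carry
# five (x, y) key points in addition to its box and score.
print(result)
```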


@@ -24,16 +24,30 @@ class SCRFD(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a SCRFD model exported by SCRFD.
:param model_file: (str)Path of model file, e.g. ./scrfd.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(SCRFD, self).__init__(runtime_option)
self._model = C.vision.facedet.SCRFD(
model_file, params_file, self._runtime_option, model_format)
# Use self.initialized to check whether the whole model initialized successfully
assert self.initialized, "SCRFD initialize failed."
def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.7
:param nms_iou_threshold: iou threshold for NMS, default is 0.3
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,26 +55,34 @@ class SCRFD(FastDeployModel):
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [640, 640], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@property
@@ -108,19 +130,21 @@ class SCRFD(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@downsample_strides.setter


@@ -24,6 +24,13 @@ class UltraFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load an UltraFace model exported by UltraFace.
:param model_file: (str)Path of model file, e.g. ./ultraface.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(UltraFace, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class UltraFace(FastDeployModel):
assert self.initialized, "UltraFace initialize failed."
def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.7
:param nms_iou_threshold: iou threshold for NMS, default is 0.3
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,6 +55,9 @@ class UltraFace(FastDeployModel):
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [640, 480], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@size.setter


@@ -24,6 +24,13 @@ class YOLOv5Face(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv5Face model exported by YOLOv5Face.
:param model_file: (str)Path of model file, e.g. ./yolov5face.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv5Face, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class YOLOv5Face(FastDeployModel):
assert self.initialized, "YOLOv5Face initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: iou threshold for NMS, default is 0.5
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,30 +55,41 @@ class YOLOv5Face(FastDeployModel):
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [1280, 1280], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used together with is_mini_pad
return self._model.stride
@property
def landmarks_per_face(self):
"""
Argument for image postprocessing step, landmarks_per_face, default 5 in YOLOv5Face
"""
return self._model.landmarks_per_face
@size.setter
@@ -92,19 +117,21 @@ class YOLOv5Face(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@landmarks_per_face.setter


@@ -23,6 +23,13 @@ class AdaFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.PADDLE):
"""Load an AdaFace model exported by InsightFace.
:param model_file: (str)Path of model file, e.g. ./adaface.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(AdaFace, self).__init__(runtime_option)
@@ -33,28 +40,48 @@ class AdaFace(FastDeployModel):
assert self.initialized, "AdaFace initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Wrappers for some model-related properties
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [112, 112], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channels, such as BGR->RGB, default True
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalization to the embedding values, default False
"""
return self._model.l2_normalize
@size.setter


@@ -25,6 +25,13 @@ class ArcFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load an ArcFace model exported by InsightFace.
:param model_file: (str)Path of model file, e.g. ./arcface.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(ArcFace, self).__init__(runtime_option)
@@ -35,28 +42,48 @@ class ArcFace(FastDeployModel):
assert self.initialized, "ArcFace initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Wrappers for some model-related properties
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [112, 112], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channels, such as BGR->RGB, default True
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalization to the embedding values, default False
"""
return self._model.l2_normalize
@size.setter
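The face-recognition classes all share the predict() → FaceRecognitionResult contract documented above. A sketch of comparing two aligned face crops by cosine similarity, assuming the class lives under fastdeploy.vision.faceid, that the result exposes an embedding list, and that l2_normalize stays at its default (False) so normalization is done manually; all paths are placeholders:

```python
import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.faceid.ArcFace("./arcface.onnx")  # hypothetical path

def embed(path):
    face = cv2.imread(path)        # expected to be an aligned face crop already
    result = model.predict(face)   # FaceRecognitionResult
    v = np.asarray(result.embedding, dtype=np.float32)  # assumed field name
    return v / np.linalg.norm(v)   # manual L2 normalization

similarity = float(np.dot(embed("face_a.jpg"), embed("face_b.jpg")))
print("cosine similarity:", similarity)
```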


@@ -24,6 +24,13 @@ class CosFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a CosFace model exported by InsightFace.
:param model_file: (str)Path of model file, e.g. ./cosface.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams, if the model_format is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class function to initialize backend_option
# The initialized option is stored in self._runtime_option
super(CosFace, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class CosFace(FastDeployModel):
assert self.initialized, "CosFace initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Wrappers for some model-related properties
# Most of them are preprocessing related; the resize size used in preprocessing can be changed via e.g. model.size = [112, 112], provided the model supports it
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channels, such as BGR->RGB, default True
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalization to the embedding values, default False
"""
return self._model.l2_normalize
@size.setter


@@ -24,6 +24,13 @@ class InsightFaceRecognitionModel(FastDeployModel):
params_file="", params_file="",
runtime_option=None, runtime_option=None,
model_format=ModelFormat.ONNX): model_format=ModelFormat.ONNX):
"""Load a InsightFace model exported by InsigtFace.
:param model_file: (str)Path of model file, e.g ./arcface.onnx
:param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
"""
# 调用基函数进行backend_option的初始化 # 调用基函数进行backend_option的初始化
# 初始化后的option保存在self._runtime_option # 初始化后的option保存在self._runtime_option
super(InsightFaceRecognitionModel, self).__init__(runtime_option) super(InsightFaceRecognitionModel, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class InsightFaceRecognitionModel(FastDeployModel):
assert self.initialized, "InsightFaceRecognitionModel initialize failed." assert self.initialized, "InsightFaceRecognitionModel initialize failed."
def predict(self, input_image): def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image) return self._model.predict(input_image)
# 一些跟InsightFaceRecognitionModel模型有关的属性封装 # 一些跟InsightFaceRecognitionModel模型有关的属性封装
# 多数是预处理相关可通过修改如model.size = [112, 112]改变预处理时resize的大小前提是模型支持 # 多数是预处理相关可通过修改如model.size = [112, 112]改变预处理时resize的大小前提是模型支持
@property @property
def size(self): def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size return self._model.size
@property @property
def alpha(self): def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha return self._model.alpha
@property @property
def beta(self): def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta return self._model.beta
@property @property
def swap_rb(self): def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb return self._model.swap_rb
@property @property
def l2_normalize(self): def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False;
"""
return self._model.l2_normalize return self._model.l2_normalize
@size.setter @size.setter
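A sketch of how two returned embeddings might be compared, assuming (as in the C++ result struct) that FaceRecognitionResult exposes an embedding list; the module path, model file and image paths are illustrative.

import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.faceid.InsightFaceRecognitionModel("./arcface.onnx")

def embed(path):
    # predict() takes an HWC, BGR image and returns a FaceRecognitionResult;
    # the embedding field is assumed to hold the face feature vector.
    return np.array(model.predict(cv2.imread(path)).embedding)

a, b = embed("face_a.jpg"), embed("face_b.jpg")
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print("cosine similarity:", cosine)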


@@ -24,6 +24,13 @@ class PartialFC(FastDeployModel):
params_file="", params_file="",
runtime_option=None, runtime_option=None,
model_format=ModelFormat.ONNX): model_format=ModelFormat.ONNX):
"""Load a PartialFC model exported by InsigtFace.
:param model_file: (str)Path of model file, e.g ./partial_fc.onnx
:param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
"""
# 调用基函数进行backend_option的初始化 # 调用基函数进行backend_option的初始化
# 初始化后的option保存在self._runtime_option # 初始化后的option保存在self._runtime_option
super(PartialFC, self).__init__(runtime_option) super(PartialFC, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class PartialFC(FastDeployModel):
assert self.initialized, "PartialFC initialize failed." assert self.initialized, "PartialFC initialize failed."
def predict(self, input_image): def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image) return self._model.predict(input_image)
# 一些跟模型有关的属性封装 # 一些跟模型有关的属性封装
# 多数是预处理相关可通过修改如model.size = [112, 112]改变预处理时resize的大小前提是模型支持 # 多数是预处理相关可通过修改如model.size = [112, 112]改变预处理时resize的大小前提是模型支持
@property @property
def size(self): def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size return self._model.size
@property @property
def alpha(self): def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha return self._model.alpha
@property @property
def beta(self): def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta return self._model.beta
@property @property
def swap_rb(self): def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb return self._model.swap_rb
@property @property
def l2_normalize(self): def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False;
"""
return self._model.l2_normalize return self._model.l2_normalize
@size.setter @size.setter
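The runtime_option parameter described above can be used to move inference off the default CPU backend; a hedged sketch, where the device id and model path are illustrative.

import fastdeploy as fd

# Build a RuntimeOption and request GPU execution instead of the default CPU backend.
option = fd.RuntimeOption()
option.use_gpu(0)  # run on GPU device 0

model = fd.vision.faceid.PartialFC("./partial_fc.onnx", runtime_option=option)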


@@ -24,6 +24,13 @@ class VPL(FastDeployModel):
params_file="", params_file="",
runtime_option=None, runtime_option=None,
model_format=ModelFormat.ONNX): model_format=ModelFormat.ONNX):
"""Load a VPL model exported by InsigtFace.
:param model_file: (str)Path of model file, e.g ./vpl.onnx
:param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
"""
# 调用基函数进行backend_option的初始化 # 调用基函数进行backend_option的初始化
# 初始化后的option保存在self._runtime_option # 初始化后的option保存在self._runtime_option
super(VPL, self).__init__(runtime_option) super(VPL, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class VPL(FastDeployModel):
assert self.initialized, "VPL initialize failed." assert self.initialized, "VPL initialize failed."
def predict(self, input_image): def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image) return self._model.predict(input_image)
# 一些跟模型有关的属性封装 # 一些跟模型有关的属性封装
# 多数是预处理相关可通过修改如model.size = [112, 112]改变预处理时resize的大小前提是模型支持 # 多数是预处理相关可通过修改如model.size = [112, 112]改变预处理时resize的大小前提是模型支持
@property @property
def size(self): def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size return self._model.size
@property @property
def alpha(self): def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha return self._model.alpha
@property @property
def beta(self): def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta return self._model.beta
@property @property
def swap_rb(self): def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb return self._model.swap_rb
@property @property
def l2_normalize(self): def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False;
"""
return self._model.l2_normalize return self._model.l2_normalize
@size.setter @size.setter
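A small sketch reading back the wrapped preprocessing parameters; the module path and file name are illustrative, and the role of alpha/beta is taken only from the docstrings above.

import fastdeploy as fd

model = fd.vision.faceid.VPL("./vpl.onnx")

# Inspect the preprocessing defaults exposed by the wrapper; alpha and beta
# are the normalization coefficients documented in the properties above.
print(model.size, model.alpha, model.beta, model.swap_rb, model.l2_normalize)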


@@ -24,6 +24,13 @@ class MODNet(FastDeployModel):
params_file="", params_file="",
runtime_option=None, runtime_option=None,
model_format=ModelFormat.ONNX): model_format=ModelFormat.ONNX):
"""Load a MODNet model exported by MODNet.
:param model_file: (str)Path of model file, e.g ./modnet.onnx
:param params_file: (str)Path of parameters file, e.g yolox/model.pdiparams, if the model_fomat is ModelFormat.ONNX, this param will be ignored, can be set as empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference this model, if it's None, will use the default backend on CPU
:param model_format: (fastdeploy.ModelForamt)Model format of the loaded model
"""
# 调用基函数进行backend_option的初始化 # 调用基函数进行backend_option的初始化
# 初始化后的option保存在self._runtime_option # 初始化后的option保存在self._runtime_option
super(MODNet, self).__init__(runtime_option) super(MODNet, self).__init__(runtime_option)
@@ -34,24 +41,41 @@ class MODNet(FastDeployModel):
assert self.initialized, "MODNet initialize failed." assert self.initialized, "MODNet initialize failed."
def predict(self, input_image): def predict(self, input_image):
""" Predict the matting result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: MattingResult
"""
return self._model.predict(input_image) return self._model.predict(input_image)
# 一些跟模型有关的属性封装 # 一些跟模型有关的属性封装
# 多数是预处理相关可通过修改如model.size = [256, 256]改变预处理时resize的大小前提是模型支持 # 多数是预处理相关可通过修改如model.size = [256, 256]改变预处理时resize的大小前提是模型支持
@property @property
def size(self): def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size return self._model.size
@property @property
def alpha(self): def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha return self._model.alpha
@property @property
def beta(self): def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta return self._model.beta
@property @property
def swap_rb(self): def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb return self._model.swap_rb
@size.setter @size.setter
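A minimal matting sketch based on the docstrings above; the fastdeploy.vision.matting module path and the file names are assumptions for illustration.

import cv2
import fastdeploy as fd

model = fd.vision.matting.MODNet("./modnet.onnx")
model.size = [256, 256]  # resize shape used during preprocessing

# predict() takes an HWC, BGR numpy array and returns a MattingResult.
result = model.predict(cv2.imread("portrait.jpg"))
print(result)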


@@ -25,6 +25,14 @@ class PPMatting(FastDeployModel):
config_file,
runtime_option=None,
model_format=ModelFormat.PADDLE):
"""Load a PPMatting model exported by PaddleSeg.
:param model_file: (str)Path of the model file, e.g. PPMatting-512/model.pdmodel
:param params_file: (str)Path of the parameters file, e.g. PPMatting-512/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param config_file: (str)Path of the configuration file for deployment, e.g. PPMatting-512/deploy.yml
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if it is None, the default CPU backend will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
super(PPMatting, self).__init__(runtime_option)
assert model_format == ModelFormat.PADDLE, "PPMatting model only support model format of ModelFormat.Paddle now."
@@ -34,5 +42,10 @@ class PPMatting(FastDeployModel):
assert self.initialized, "PPMatting model initialize failed." assert self.initialized, "PPMatting model initialize failed."
def predict(self, input_image): def predict(self, input_image):
""" Predict the matting result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: MattingResult
"""
assert input_image is not None, "The input image data is None." assert input_image is not None, "The input image data is None."
return self._model.predict(input_image) return self._model.predict(input_image)
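A usage sketch for the Paddle-format constructor above; the fastdeploy.vision.matting module path and the local PPMatting-512 directory are illustrative assumptions.

import cv2
import fastdeploy as fd

# PPMatting only supports the Paddle format, so the model, parameters and
# deploy config exported by PaddleSeg are all required.
model = fd.vision.matting.PPMatting(
    "PPMatting-512/model.pdmodel",
    "PPMatting-512/model.pdiparams",
    "PPMatting-512/deploy.yml")

result = model.predict(cv2.imread("portrait.jpg"))  # MattingResult
print(result)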