[Doc] Add default values for public variables for external models (#441)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of class

* Transfer some functions to private members of class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix wrong name of variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)

* Yolor (#16)

* Develop (#11) (#12)

* Develop (#13)

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (#14)

* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* add default values for public variables in comments

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
Author: ziqi-jin
Date: 2022-10-27 10:01:56 +08:00
Committed by: GitHub
Parent commit: 583f16afc1
Commit: b7e06b8c50
39 changed files with 114 additions and 77 deletions

@@ -56,21 +56,21 @@ class ResNet(FastDeployModel):
@property
def size(self):
"""
Returns the preprocess image size
Returns the preprocess image size, default size = [224, 224];
"""
return self._model.size
@property
def mean_vals(self):
"""
Returns the mean value of normalization
Returns the mean value of normalization, default mean_vals = [0.485f, 0.456f, 0.406f];
"""
return self._model.mean_vals
@property
def std_vals(self):
"""
Returns the std value of normalization
Returns the std value of normalization, default std_vals = [0.229f, 0.224f, 0.225f];
"""
return self._model.std_vals
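The mean/std defaults above are the usual ImageNet statistics, and they translate into a simple per-channel transform. A minimal pure-Python sketch, assuming the common convention of scaling pixels to [0, 1] before standardizing (the actual FastDeploy C++ preprocess pipeline may order these steps differently):

```python
# Sketch of mean/std normalization with the documented defaults.
# Assumption: pixels are scaled to [0, 1] first; this is illustrative,
# not FastDeploy's actual preprocessor.
MEAN_VALS = [0.485, 0.456, 0.406]  # default mean_vals
STD_VALS = [0.229, 0.224, 0.225]   # default std_vals

def normalize_pixel(rgb):
    """Normalize one RGB pixel (values in 0-255) channel by channel."""
    return [(v / 255.0 - m) / s for v, m, s in zip(rgb, MEAN_VALS, STD_VALS)]

print(normalize_pixel([124, 116, 104]))  # roughly zero-centered values
```

A pixel equal to the channel means maps to (almost exactly) zero, which is a quick sanity check on the defaults.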

@@ -52,7 +52,7 @@ class YOLOv5Cls(FastDeployModel):
@property
def size(self):
"""
Returns the preprocess image size
Returns the preprocess image size, default is (224, 224)
"""
return self._model.size

@@ -56,7 +56,7 @@ class NanoDetPlus(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (320, 320)
"""
return self._model.size

@@ -56,7 +56,8 @@ class ScaledYOLOv4(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -56,7 +56,7 @@ class YOLOR(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -81,7 +81,7 @@ class YOLOv5(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size
@@ -117,6 +117,9 @@ class YOLOv5(FastDeployModel):
@property
def multi_label(self):
"""
Argument for image preprocessing step, for different strategies to get boxes when postprocessing, default True
"""
return self._model.multi_label
@size.setter

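The multi_label flag documented above switches the box-selection strategy in postprocessing: one detection per class that passes the confidence threshold, versus only the single best class. An illustrative pure-Python sketch (a hypothetical helper, not FastDeploy's actual C++ postprocessor):

```python
# Illustrative sketch: multi_label=True emits one detection per class whose
# score passes the threshold; multi_label=False keeps only the best class.
def pick_classes(class_scores, conf_threshold=0.25, multi_label=True):
    if multi_label:
        return [i for i, s in enumerate(class_scores) if s > conf_threshold]
    best = max(range(len(class_scores)), key=lambda i: class_scores[i])
    return [best] if class_scores[best] > conf_threshold else []

print(pick_classes([0.1, 0.6, 0.7], multi_label=True))   # -> [1, 2]
print(pick_classes([0.1, 0.6, 0.7], multi_label=False))  # -> [2]
```

With multi_label=True an object scoring high on two classes yields two boxes; with False it yields at most one.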
@@ -56,7 +56,7 @@ class YOLOv5Lite(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size
@@ -96,7 +96,8 @@ class YOLOv5Lite(FastDeployModel):
whether the model_file was exported with decode module.
The official YOLOv5Lite/export.py script will export ONNX file without decode module.
Please set it 'true' manually if the model file was exported with decode module.
false : ONNX files without decode module. true : ONNX file with decode module.
False : ONNX files without decode module. True : ONNX file with decode module.
default False
"""
return self._model.is_decode_exported

@@ -56,7 +56,7 @@ class YOLOv6(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -56,7 +56,7 @@ class YOLOv7(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size
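The YOLO-family models above all default to a 640×640 preprocess size; this kind of resize is typically aspect-preserving with padding (letterboxing). A sketch of the scale/pad arithmetic under that assumption (the exact FastDeploy resize may differ, e.g. in how the padding is split between sides):

```python
def letterbox_geometry(img_w, img_h, target=(640, 640)):
    """Scale factor and total padding for an aspect-preserving resize.

    Assumed letterbox behavior, for illustration only.
    """
    tw, th = target
    scale = min(tw / img_w, th / img_h)  # fit the longer side into the target
    new_w, new_h = round(img_w * scale), round(img_h * scale)
    return scale, tw - new_w, th - new_h  # (scale, pad_w, pad_h)

print(letterbox_geometry(1280, 720))  # a 1280x720 frame scales by 0.5 and pads 280 px vertically
```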

@@ -54,7 +54,7 @@ class YOLOv7End2EndORT(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -54,7 +54,7 @@ class YOLOv7End2EndTRT(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size

@@ -56,7 +56,7 @@ class YOLOX(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640, 640]
"""
return self._model.size
@@ -71,6 +71,7 @@ class YOLOX(FastDeployModel):
whether the model_file was exported with decode module.
The official YOLOX/tools/export_onnx.py script will export ONNX file without decode module.
Please set it 'true' manually if the model file was exported with decode module.
Default False.
"""
return self._model.is_decode_exported

@@ -56,7 +56,7 @@ class RetinaFace(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (640, 640)
"""
return self._model.size
@@ -77,7 +77,7 @@ class RetinaFace(FastDeployModel):
@property
def min_sizes(self):
"""
Argument for image postprocessing step, min sizes, width and height for each anchor
Argument for image postprocessing step, min sizes, width and height for each anchor, default min_sizes = [[16, 32], [64, 128], [256, 512]]
"""
return self._model.min_sizes

@@ -56,7 +56,7 @@ class SCRFD(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (640, 640)
"""
return self._model.size
@@ -87,22 +87,40 @@ class SCRFD(FastDeployModel):
@property
def downsample_strides(self):
"""
Argument for image postprocessing step,
downsample strides (namely, steps) for SCRFD to generate anchors,
will take (8,16,32) as default values
"""
return self._model.downsample_strides
@property
def landmarks_per_face(self):
"""
Argument for image postprocessing step, landmarks_per_face, default 5 in SCRFD
"""
return self._model.landmarks_per_face
@property
def use_kps(self):
"""
Argument for image postprocessing step,
whether the outputs of the ONNX file include key point features, default True
"""
return self._model.use_kps
@property
def max_nms(self):
"""
Argument for image postprocessing step, the upper bound on the number of boxes processed by NMS, default 30000
"""
return self._model.max_nms
@property
def num_anchors(self):
"""
Argument for image postprocessing step, anchor number of each stride, default 2
"""
return self._model.num_anchors
@size.setter

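Given the SCRFD defaults documented above (downsample strides (8, 16, 32) and 2 anchors per location), the number of anchor candidates for a given input size follows directly. A small sketch of that arithmetic:

```python
def total_anchors(width=640, height=640, strides=(8, 16, 32), num_anchors=2):
    """Count anchor points across all feature-map strides (SCRFD defaults)."""
    return sum((width // s) * (height // s) * num_anchors for s in strides)

print(total_anchors())  # 16800 candidates for a 640x640 input
```

This is why max_nms (default 30000) comfortably covers a 640×640 input with the default strides.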
@@ -56,7 +56,7 @@ class UltraFace(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (320, 240)
"""
return self._model.size

@@ -56,7 +56,7 @@ class YOLOv5Face(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [640,640]
"""
return self._model.size

@@ -52,35 +52,36 @@ class AdaFace(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (112, 112)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default;
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False;
"""
return self._model.l2_normalize
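The alpha/beta defaults above (1/127.5 per channel and -1 per channel) implement y = x * alpha + beta, mapping pixel values from [0, 255] to roughly [-1, 1]. A minimal sketch of that transform:

```python
# Sketch of the documented alpha/beta normalization convention.
ALPHA = 1.0 / 127.5  # default alpha, per channel
BETA = -1.0          # default beta, per channel

def alpha_beta_normalize(pixel):
    """Apply y = x * alpha + beta to each channel of a pixel."""
    return [v * ALPHA + BETA for v in pixel]

print(alpha_beta_normalize([0, 127.5, 255]))  # about -1, 0, 1 (up to float rounding)
```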

@@ -54,35 +54,35 @@ class ArcFace(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (112, 112)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default;
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False;
"""
return self._model.l2_normalize

@@ -53,28 +53,28 @@ class CosFace(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (112, 112)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb

@@ -53,28 +53,28 @@ class InsightFaceRecognitionModel(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (112, 112)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb

@@ -53,28 +53,28 @@ class PartialFC(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (112, 112)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb

@@ -53,28 +53,28 @@ class VPL(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default (112, 112)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = [1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f]
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta values for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb

@@ -53,28 +53,28 @@ class MODNet(FastDeployModel):
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
Argument for image preprocessing step, the preprocess image size, tuple of (width, height), default size = [256,256]
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
Argument for image preprocessing step, alpha value for normalization, default alpha = {1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f}
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
Argument for image preprocessing step, beta value for normalization, default beta = {-1.f, -1.f, -1.f}
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default True.
"""
return self._model.swap_rb
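swap_rb above simply reorders color channels, since OpenCV decodes images as BGR while many models expect RGB. A trivial sketch of the channel swap:

```python
def swap_rb(pixel):
    """Swap the first and third channels (BGR -> RGB, or vice versa)."""
    c0, c1, c2 = pixel
    return (c2, c1, c0)

print(swap_rb((255, 0, 0)))  # pure blue in BGR reads as (0, 0, 255) in RGB order
```

The swap is its own inverse, so applying it twice returns the original pixel.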