[Doc] Add Python comments for external models (#408)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* file path modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* YOLOv7

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of the class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove un-need data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix wrong name of variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)


* Yolor (#16)

* Develop (#11) (#12)


* Develop (#13)


* documents

* Develop (#14)


* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* modify yolor comments

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
Commit 1f39b4f411 (parent 718dc3218f) by ziqi-jin, 2022-10-25 21:32:53 +08:00, committed by GitHub
46 changed files with 1039 additions and 239 deletions


@@ -24,6 +24,13 @@ class NanoDetPlus(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a NanoDetPlus model exported by NanoDet.
:param model_file: (str)Path of model file, e.g. ./nanodet.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(NanoDetPlus, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class NanoDetPlus(FastDeployModel):
assert self.initialized, "NanoDetPlus initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,26 +55,36 @@ class NanoDetPlus(FastDeployModel):
# Most of these are preprocessing-related; e.g. setting model.size = [416, 416] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def keep_ratio(self):
# whether to keep the aspect ratio when performing the resize operation. This option is false by default in NanoDet-Plus
return self._model.keep_ratio
@property
def downsample_strides(self):
# downsample strides used by NanoDet-Plus to generate anchors; defaults to (8, 16, 32, 64)
return self._model.downsample_strides
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS, default 4096
return self._model.max_wh
@property
def reg_max(self):
"""
reg_max for GFL regression, default 7
"""
return self._model.reg_max
@size.setter
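The properties above all follow the same delegation pattern: the Python class forwards reads, and validated writes, to the pybind-wrapped C++ model held in `self._model`. A minimal self-contained sketch of that pattern (the `_DummyBackendModel` class is a hypothetical stand-in, not part of FastDeploy):

```python
class _DummyBackendModel:
    # Hypothetical stand-in for the pybind-wrapped C++ model.
    def __init__(self):
        self.size = (320, 320)
        self.reg_max = 7


class NanoDetLikeWrapper:
    def __init__(self):
        self._model = _DummyBackendModel()

    @property
    def size(self):
        # Preprocess image size, tuple of (width, height)
        return self._model.size

    @size.setter
    def size(self, wh):
        # Validate before forwarding to the backend model
        assert isinstance(wh, (list, tuple)), \
            "The value to set `size` must be type of tuple or list."
        self._model.size = wh


model = NanoDetLikeWrapper()
model.size = [416, 416]
print(model.size, model._model.reg_max)  # [416, 416] 7
```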


@@ -24,6 +24,13 @@ class ScaledYOLOv4(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a ScaledYOLOv4 model exported by ScaledYOLOv4.
:param model_file: (str)Path of model file, e.g. ./scaled_yolov4.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(ScaledYOLOv4, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class ScaledYOLOv4(FastDeployModel):
assert self.initialized, "ScaledYOLOv4 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,30 +55,39 @@ class ScaledYOLOv4(FastDeployModel):
# Most of these are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size directly
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the maximum resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, for is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@size.setter
@@ -92,19 +115,21 @@ class ScaledYOLOv4(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value, bool), "The value to set `is_mini_pad` must be type of bool."
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value, bool), "The value to set `is_scale_up` must be type of bool."
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value,
int), "The value to set `stride` must be type of int."
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter


@@ -24,6 +24,13 @@ class YOLOR(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOR model exported by YOLOR
:param model_file: (str)Path of model file, e.g. ./yolor.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOR, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class YOLOR(FastDeployModel):
assert self.initialized, "YOLOR initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,30 +55,39 @@ class YOLOR(FastDeployModel):
# Most of these are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size directly
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the maximum resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, for is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@size.setter
@@ -92,19 +115,21 @@ class YOLOR(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value, bool), "The value to set `is_mini_pad` must be type of bool."
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value, bool), "The value to set `is_scale_up` must be type of bool."
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value,
int), "The value to set `stride` must be type of int."
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter


@@ -24,6 +24,13 @@ class YOLOv5(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv5 model exported by YOLOv5.
:param model_file: (str)Path of model file, e.g. ./yolov5.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv5, self).__init__(runtime_option)
@@ -34,12 +41,16 @@ class YOLOv5(FastDeployModel):
assert self.initialized, "YOLOv5 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
@staticmethod
def preprocess(input_image,
size=[640, 640],
@@ -69,30 +80,39 @@ class YOLOv5(FastDeployModel):
# Most of these are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size directly
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the maximum resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, for is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@property
@@ -124,19 +144,21 @@ class YOLOv5(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value, bool), "The value to set `is_mini_pad` must be type of bool."
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value, bool), "The value to set `is_scale_up` must be type of bool."
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value,
int), "The value to set `stride` must be type of int."
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter
@@ -148,5 +170,6 @@ class YOLOv5(FastDeployModel):
@multi_label.setter
def multi_label(self, value):
assert isinstance(
value, bool), "The value to set `multi_label` must be type of bool."
value,
bool), "The value to set `multi_label` must be type of bool."
self._model.multi_label = value
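To make the meaning of `conf_threshold` and `nms_iou_threshold` concrete, here is a pure-Python sketch of the postprocessing they control: confidence filtering followed by greedy NMS. This is illustrative only, not the FastDeploy implementation (which runs in C++):

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def postprocess(dets, conf_threshold=0.25, nms_iou_threshold=0.5):
    # dets: list of (box, score); drop low-confidence boxes, then greedy NMS
    dets = sorted((d for d in dets if d[1] >= conf_threshold),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < nms_iou_threshold for k in kept):
            kept.append((box, score))
    return kept


dets = [((0, 0, 10, 10), 0.9), ((1, 1, 10, 10), 0.8), ((50, 50, 60, 60), 0.1)]
print(postprocess(dets))  # [((0, 0, 10, 10), 0.9)]
```

With the default thresholds, the second box (IoU 0.81 with the first) is suppressed by NMS and the low-confidence third box is filtered out.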


@@ -24,6 +24,13 @@ class YOLOv5Lite(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv5Lite model exported by YOLOv5Lite.
:param model_file: (str)Path of model file, e.g. ./yolov5lite.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv5Lite, self).__init__(runtime_option)
@@ -34,50 +41,76 @@ class YOLOv5Lite(FastDeployModel):
assert self.initialized, "YOLOv5Lite initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Some property wrappers related to the YOLOv5Lite model
# Most of these are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size directly
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the maximum resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, for is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@property
def is_decode_exported(self):
"""
whether the model_file was exported with the decode module.
The official YOLOv5Lite/export.py script exports ONNX files without the decode module.
Please set this to True manually if the model file was exported with the decode module.
False: ONNX file without the decode module. True: ONNX file with the decode module.
"""
return self._model.is_decode_exported
@property
def anchor_config(self):
return self._model.anchor_config
@property
def downsample_strides(self):
"""
downsample strides used by YOLOv5Lite to generate anchors; defaults to (8, 16, 32), and may also include stride=64.
"""
return self._model.downsample_strides
@size.setter
def size(self, wh):
assert isinstance(wh, (list, tuple)),\
@@ -103,19 +136,21 @@ class YOLOv5Lite(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value, bool), "The value to set `is_mini_pad` must be type of bool."
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value, bool), "The value to set `is_scale_up` must be type of bool."
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value,
int), "The value to set `stride` must be type of int."
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter
@@ -138,3 +173,10 @@ class YOLOv5Lite(FastDeployModel):
assert isinstance(anchor_config_val[0], list),\
"The value to set `anchor_config` must be 2-dimensions tuple or list"
self._model.anchor_config = anchor_config_val
@downsample_strides.setter
def downsample_strides(self, value):
assert isinstance(
value,
list), "The value to set `downsample_strides` must be type of list."
self._model.downsample_strides = value
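The `downsample_strides` relate the input size to the detection feature maps the model decodes boxes from; each stride produces one grid. A small illustrative sketch (not FastDeploy code) of how strides map an input size to grid shapes:

```python
def grid_shapes(size, strides=(8, 16, 32)):
    # size: (width, height); each downsample stride yields one
    # detection feature map of shape (width // s, height // s)
    w, h = size
    return [(w // s, h // s) for s in strides]


print(grid_shapes((640, 640)))  # [(80, 80), (40, 40), (20, 20)]
```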


@@ -24,6 +24,13 @@ class YOLOv6(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv6 model exported by YOLOv6.
:param model_file: (str)Path of model file, e.g. ./yolov6.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv6, self).__init__(runtime_option)
@@ -34,40 +41,53 @@ class YOLOv6(FastDeployModel):
assert self.initialized, "YOLOv6 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Some property wrappers related to the YOLOv6 model
# Most of these are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = false and is_no_pad = true, the image will be resized to the set size directly
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is false, the input image can only be scaled down; the maximum resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, for is_mini_pad
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
@size.setter
@@ -95,19 +115,21 @@ class YOLOv6(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(
value, bool), "The value to set `is_mini_pad` must be type of bool."
value,
bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(
value, bool), "The value to set `is_scale_up` must be type of bool."
value,
bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value,
int), "The value to set `stride` must be type of int."
assert isinstance(
value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter


@@ -24,6 +24,13 @@ class YOLOv7(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv7 model exported by YOLOv7.
:param model_file: (str)Path of model file, e.g. ./yolov7.onnx
:param params_file: (str)Path of parameters file, e.g. yolox/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for the inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class constructor to initialize backend_option
# The initialized option is stored in self._runtime_option
super(YOLOv7, self).__init__(runtime_option)
@@ -34,40 +41,53 @@ class YOLOv7(FastDeployModel):
assert self.initialized, "YOLOv7 initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, a 3-D array with layout HWC in BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Some property wrappers related to the YOLOv7 model
# Most of these are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing (provided the model supports it)
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = False and is_no_pad = True, the image is resized to the set size directly, without padding
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of the stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is False, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used when is_mini_pad is True
return self._model.stride
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
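The padding flags above interact during the letterbox-style resize. A minimal pure-Python sketch (a hypothetical helper, not part of FastDeploy) of how such a resize typically computes the output shape and padding:

```python
def letterbox_shape(w, h, target=(640, 640), mini_pad=False,
                    scale_up=True, stride=32):
    # Aspect-preserving scale; with scale_up=False the image is never enlarged.
    r = min(target[0] / w, target[1] / h)
    if not scale_up:
        r = min(r, 1.0)
    new_w, new_h = round(w * r), round(h * r)
    pad_w, pad_h = target[0] - new_w, target[1] - new_h
    if mini_pad:
        # Pad only to the nearest multiple of `stride` instead of the full target.
        pad_w, pad_h = pad_w % stride, pad_h % stride
    return new_w, new_h, pad_w, pad_h
```

For a 1280x720 input and a 640x640 target, full padding gives 280 pixels of vertical pad, while mini_pad reduces it to 280 % 32 = 24.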
@size.setter
@@ -95,19 +115,21 @@ class YOLOv7(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(value, bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(value, bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@max_wh.setter


@@ -24,6 +24,13 @@ class YOLOv7End2EndORT(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv7End2EndORT model exported by YOLOv7.
:param model_file: (str)Path of the model file, e.g. ./yolov7end2end_ort.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(YOLOv7End2EndORT, self).__init__(runtime_option)
@@ -34,32 +41,46 @@ class YOLOv7End2EndORT(FastDeployModel):
assert self.initialized, "YOLOv7End2End initialize failed."
def predict(self, input_image, conf_threshold=0.25):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = False and is_no_pad = True, the image is resized to the set size directly, without padding
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of the stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is False, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used when is_mini_pad is True
return self._model.stride
@size.setter
@@ -87,17 +108,19 @@ class YOLOv7End2EndORT(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(value, bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(value, bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value, int), "The value to set `stride` must be type of int."
self._model.stride = value


@@ -24,6 +24,13 @@ class YOLOv7End2EndTRT(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv7End2EndTRT model exported by YOLOv7.
:param model_file: (str)Path of the model file, e.g. ./yolov7end2end_trt.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(YOLOv7End2EndTRT, self).__init__(runtime_option)
@@ -34,35 +41,46 @@ class YOLOv7End2EndTRT(FastDeployModel):
assert self.initialized, "YOLOv7End2EndTRT initialize failed."
def predict(self, input_image, conf_threshold=0.25):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold)
def use_cuda_preprocessing(self, max_image_size=3840 * 2160):
return self._model.use_cuda_preprocessing(max_image_size)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = False and is_no_pad = True, the image is resized to the set size directly, without padding
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of the stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is False, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used when is_mini_pad is True
return self._model.stride
@size.setter
@@ -90,17 +108,19 @@ class YOLOv7End2EndTRT(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(value, bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(value, bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value, int), "The value to set `stride` must be type of int."
self._model.stride = value


@@ -24,6 +24,13 @@ class YOLOX(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOX model exported by YOLOX.
:param model_file: (str)Path of the model file, e.g. ./yolox.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(YOLOX, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class YOLOX(FastDeployModel):
assert self.initialized, "YOLOX initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: DetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,22 +55,35 @@ class YOLOX(FastDeployModel):
# Most are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_decode_exported(self):
"""
Whether the model_file was exported with the decode module.
The official YOLOX/tools/export_onnx.py script exports the ONNX file without the decode module.
Set this to True manually if the model file was exported with the decode module.
"""
return self._model.is_decode_exported
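When is_decode_exported is False, the raw network output still has to be decoded against a per-stride grid. A pure-Python sketch of the grid table such a decode step commonly uses (the layout and the decode formula, center = (pred_xy + grid) * stride, are assumptions, not read from FastDeploy's implementation):

```python
def yolox_grids(im_size=640, strides=(8, 16, 32)):
    # One (grid_x, grid_y, stride) entry per output cell; decoding maps raw
    # predictions back to image coordinates using these values.
    grids = []
    for s in strides:
        n = im_size // s
        for gy in range(n):
            for gx in range(n):
                grids.append((gx, gy, s))
    return grids
```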
@property
def downsample_strides(self):
"""
downsample strides for YOLOX to generate anchors; defaults to (8, 16, 32), and some models may also use stride 64.
"""
return self._model.downsample_strides
@property
def max_wh(self):
# for offsetting the boxes by class when using NMS
return self._model.max_wh
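The max_wh comment above refers to a common NMS trick: shift every box by class_id * max_wh so that boxes of different classes no longer overlap, letting a single class-agnostic NMS pass work per class. An illustrative sketch (not FastDeploy's actual implementation):

```python
def offset_boxes_by_class(boxes, class_ids, max_wh=7680.0):
    # boxes: list of [x1, y1, x2, y2]; each box is translated by
    # class_id * max_wh so NMS only ever compares boxes of the same class.
    return [[x1 + c * max_wh, y1 + c * max_wh,
             x2 + c * max_wh, y2 + c * max_wh]
            for (x1, y1, x2, y2), c in zip(boxes, class_ids)]
```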
@size.setter


@@ -24,6 +24,13 @@ class RetinaFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a RetinaFace model exported by RetinaFace.
:param model_file: (str)Path of the model file, e.g. ./retinaface.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(RetinaFace, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class RetinaFace(FastDeployModel):
assert self.initialized, "RetinaFace initialize failed."
def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.7
:param nms_iou_threshold: IoU threshold for NMS, default is 0.3
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,22 +55,37 @@ class RetinaFace(FastDeployModel):
# Most are preprocessing-related; e.g. setting model.size = [640, 480] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def variance(self):
"""
Argument for image postprocessing step, the variances used in RetinaFace's prior-box (anchor) generation and decoding, default (0.1, 0.2)
"""
return self._model.variance
@property
def downsample_strides(self):
"""
Argument for image postprocessing step, downsample strides (namely, steps) for RetinaFace to generate anchors; defaults to (8, 16, 32)
"""
return self._model.downsample_strides
@property
def min_sizes(self):
"""
Argument for image postprocessing step, min sizes, width and height for each anchor
"""
return self._model.min_sizes
@property
def landmarks_per_face(self):
"""
Argument for image postprocessing step, landmarks_per_face, default 5 in RetinaFace
"""
return self._model.landmarks_per_face
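The variance, downsample_strides and min_sizes properties above feed RetinaFace's prior-box generation. A pure-Python sketch under the common defaults (the defaults and the (cx, cy, w, h) normalized layout are assumptions, not read from the model):

```python
from itertools import product

def generate_priors(im_w, im_h, steps=(8, 16, 32),
                    min_sizes=((16, 32), (64, 128), (256, 512))):
    # One set of min_sizes per downsample step; each prior is
    # (cx, cy, w, h) normalized to [0, 1].
    priors = []
    for step, sizes in zip(steps, min_sizes):
        fw, fh = -(-im_w // step), -(-im_h // step)  # ceil(im / step)
        for i, j in product(range(fh), range(fw)):
            for s in sizes:
                priors.append(((j + 0.5) * step / im_w,
                               (i + 0.5) * step / im_h,
                               s / im_w, s / im_h))
    return priors
```

At decode time, each predicted offset is applied to its prior, scaled by the variance values above.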
@size.setter


@@ -24,16 +24,30 @@ class SCRFD(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a SCRFD model exported by SCRFD.
:param model_file: (str)Path of the model file, e.g. ./scrfd.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(SCRFD, self).__init__(runtime_option)
self._model = C.vision.facedet.SCRFD(model_file, params_file, self._runtime_option, model_format)
# Use self.initialized to check whether the whole model initialized successfully
assert self.initialized, "SCRFD initialize failed."
def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.7
:param nms_iou_threshold: IoU threshold for NMS, default is 0.3
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,26 +55,34 @@ class SCRFD(FastDeployModel):
# Most are preprocessing-related; e.g. setting model.size = [640, 640] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = False and is_no_pad = True, the image is resized to the set size directly, without padding
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of the stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is False, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used when is_mini_pad is True
return self._model.stride
@property
@@ -108,19 +130,21 @@ class SCRFD(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(value, bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(value, bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@downsample_strides.setter


@@ -24,6 +24,13 @@ class UltraFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load an UltraFace model exported by UltraFace.
:param model_file: (str)Path of the model file, e.g. ./ultraface.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(UltraFace, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class UltraFace(FastDeployModel):
assert self.initialized, "UltraFace initialize failed."
def predict(self, input_image, conf_threshold=0.7, nms_iou_threshold=0.3):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.7
:param nms_iou_threshold: IoU threshold for NMS, default is 0.3
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,6 +55,9 @@ class UltraFace(FastDeployModel):
# Most are preprocessing-related; e.g. setting model.size = [640, 480] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@size.setter


@@ -24,6 +24,13 @@ class YOLOv5Face(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a YOLOv5Face model exported by YOLOv5Face.
:param model_file: (str)Path of the model file, e.g. ./yolov5face.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(YOLOv5Face, self).__init__(runtime_option)
@@ -34,6 +41,13 @@ class YOLOv5Face(FastDeployModel):
assert self.initialized, "YOLOv5Face initialize failed."
def predict(self, input_image, conf_threshold=0.25, nms_iou_threshold=0.5):
"""Detect the location and key points of human faces from an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:param conf_threshold: confidence threshold for postprocessing, default is 0.25
:param nms_iou_threshold: IoU threshold for NMS, default is 0.5
:return: FaceDetectionResult
"""
return self._model.predict(input_image, conf_threshold,
nms_iou_threshold)
@@ -41,30 +55,41 @@ class YOLOv5Face(FastDeployModel):
# Most are preprocessing-related; e.g. setting model.size = [1280, 1280] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def padding_value(self):
# padding value, size should be the same as channels
return self._model.padding_value
@property
def is_no_pad(self):
# when is_mini_pad = False and is_no_pad = True, the image is resized to the set size directly, without padding
return self._model.is_no_pad
@property
def is_mini_pad(self):
# only pad to the minimum rectangle whose height and width are multiples of the stride
return self._model.is_mini_pad
@property
def is_scale_up(self):
# if is_scale_up is False, the input image can only be scaled down; the resize scale cannot exceed 1.0
return self._model.is_scale_up
@property
def stride(self):
# padding stride, used when is_mini_pad is True
return self._model.stride
@property
def landmarks_per_face(self):
"""
Argument for image postprocessing step, landmarks_per_face, default 5 in YOLOv5Face
"""
return self._model.landmarks_per_face
@size.setter
@@ -92,19 +117,21 @@ class YOLOv5Face(FastDeployModel):
@is_mini_pad.setter
def is_mini_pad(self, value):
assert isinstance(value, bool), "The value to set `is_mini_pad` must be type of bool."
self._model.is_mini_pad = value
@is_scale_up.setter
def is_scale_up(self, value):
assert isinstance(value, bool), "The value to set `is_scale_up` must be type of bool."
self._model.is_scale_up = value
@stride.setter
def stride(self, value):
assert isinstance(value, int), "The value to set `stride` must be type of int."
self._model.stride = value
@landmarks_per_face.setter


@@ -23,6 +23,13 @@ class AdaFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.PADDLE):
"""Load an AdaFace model exported by InsightFace.
:param model_file: (str)Path of the model file, e.g. ./adaface.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(AdaFace, self).__init__(runtime_option)
@@ -33,28 +40,48 @@ class AdaFace(FastDeployModel):
assert self.initialized, "AdaFace initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [112, 112] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False
"""
return self._model.l2_normalize
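A pure-Python sketch of what the alpha/beta and l2_normalize options imply, plus the cosine-similarity comparison typically applied to recognition embeddings. The y = x * alpha + beta form of the normalization is an assumption for illustration, not taken from the source:

```python
import math

def normalize_pixel(x, alpha, beta):
    # Assumed per-channel preprocessing: y = x * alpha + beta.
    # e.g. alpha = 1/127.5, beta = -1 maps pixel range [0, 255] to [-1, 1].
    return x * alpha + beta

def l2_normalize(vec):
    # Scale an embedding to unit length, as l2_normalize=True would.
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine_similarity(a, b):
    # Dot product of unit-length embeddings: 1.0 means identical direction.
    a, b = l2_normalize(a), l2_normalize(b)
    return sum(x * y for x, y in zip(a, b))
```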
@size.setter


@@ -25,6 +25,13 @@ class ArcFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load an ArcFace model exported by InsightFace.
:param model_file: (str)Path of the model file, e.g. ./arcface.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(ArcFace, self).__init__(runtime_option)
@@ -35,28 +42,48 @@ class ArcFace(FastDeployModel):
assert self.initialized, "ArcFace initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [112, 112] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False
"""
return self._model.l2_normalize
@size.setter


@@ -24,6 +24,13 @@ class CosFace(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a CosFace model exported by InsightFace.
:param model_file: (str)Path of the model file, e.g. ./cosface.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(CosFace, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class CosFace(FastDeployModel):
assert self.initialized, "CosFace initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [112, 112] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False
"""
return self._model.l2_normalize
@size.setter


@@ -24,6 +24,13 @@ class InsightFaceRecognitionModel(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load an InsightFace model exported by InsightFace.
:param model_file: (str)Path of the model file, e.g. ./arcface.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(InsightFaceRecognitionModel, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class InsightFaceRecognitionModel(FastDeployModel):
assert self.initialized, "InsightFaceRecognitionModel initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Property wrappers for attributes of the InsightFaceRecognitionModel.
# Most are preprocessing-related; e.g. setting model.size = [112, 112] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False
"""
return self._model.l2_normalize
@size.setter


@@ -24,6 +24,13 @@ class PartialFC(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a PartialFC model exported by InsightFace.
:param model_file: (str)Path of the model file, e.g. ./partial_fc.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(PartialFC, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class PartialFC(FastDeployModel):
assert self.initialized, "PartialFC initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [112, 112] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the B and R channel, such as BGR->RGB, default true.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalize to embedding values, default False
"""
return self._model.l2_normalize
@size.setter


@@ -24,6 +24,13 @@ class VPL(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a VPL model exported by InsightFace.
:param model_file: (str)Path of the model file, e.g. ./vpl.onnx
:param params_file: (str)Path of the parameters file, e.g. yolox/model.pdiparams; if model_format is ModelFormat.ONNX, this parameter will be ignored and can be set as an empty string
:param runtime_option: (fastdeploy.RuntimeOption)RuntimeOption for inference of this model; if None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat)Model format of the loaded model
"""
# Call the base class to initialize backend_option;
# the initialized option is stored in self._runtime_option
super(VPL, self).__init__(runtime_option)
@@ -34,28 +41,48 @@ class VPL(FastDeployModel):
assert self.initialized, "VPL initialize failed."
def predict(self, input_image):
""" Predict the face recognition result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: FaceRecognitionResult
"""
return self._model.predict(input_image)
# Property wrappers for model attributes.
# Most are preprocessing-related; e.g. setting model.size = [112, 112] changes the resize size during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the preprocess image size, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the R and B channels (e.g. BGR->RGB), default True.
"""
return self._model.swap_rb
@property
def l2_normalize(self):
"""
Argument for image preprocessing step, whether to apply l2 normalization to the embedding values, default False.
"""
return self._model.l2_normalize
@size.setter
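The preprocessing and postprocessing knobs above (alpha/beta normalization, swap_rb, l2_normalize) can be sketched in plain NumPy. The alpha and beta values here are illustrative assumptions, not VPL's actual defaults:

```python
import numpy as np

# Sketch of what the properties above control; the alpha/beta defaults
# below are illustrative, not the model's actual values.
def normalize(img_bgr, alpha=(1 / 255.0,) * 3, beta=(0.0,) * 3, swap_rb=True):
    img = img_bgr[..., ::-1] if swap_rb else img_bgr  # BGR -> RGB
    return img * np.asarray(alpha) + np.asarray(beta)  # per-channel scale/shift

def l2_normalize(embedding):
    # Scale the embedding to unit length so cosine similarity
    # between two embeddings reduces to a dot product.
    return embedding / np.linalg.norm(embedding)
```

With l2_normalize enabled, downstream face comparison can score two embeddings with a plain dot product instead of a full cosine-similarity computation.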


@@ -24,6 +24,13 @@ class MODNet(FastDeployModel):
params_file="",
runtime_option=None,
model_format=ModelFormat.ONNX):
"""Load a MODNet model exported by MODNet.
:param model_file: (str) Path of the model file, e.g. ./modnet.onnx
:param params_file: (str) Path of the parameters file, e.g. modnet/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter is ignored and can be left as an empty string
:param runtime_option: (fastdeploy.RuntimeOption) RuntimeOption for inference of this model; if None, the default backend on CPU is used
:param model_format: (fastdeploy.ModelFormat) Model format of the loaded model
"""
# Call the base class constructor to initialize the backend options;
# the initialized option is stored in self._runtime_option
super(MODNet, self).__init__(runtime_option)
@@ -34,24 +41,41 @@ class MODNet(FastDeployModel):
assert self.initialized, "MODNet initialize failed."
def predict(self, input_image):
""" Predict the matting result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: MattingResult
"""
return self._model.predict(input_image)
# Wrappers for model-related properties, mostly for preprocessing.
# For example, model.size = [256, 256] changes the resize target during preprocessing, provided the model supports it.
@property
def size(self):
"""
Argument for image preprocessing step, the image size used during preprocessing, tuple of (width, height)
"""
return self._model.size
@property
def alpha(self):
"""
Argument for image preprocessing step, alpha value for normalization
"""
return self._model.alpha
@property
def beta(self):
"""
Argument for image preprocessing step, beta value for normalization
"""
return self._model.beta
@property
def swap_rb(self):
"""
Argument for image preprocessing step, whether to swap the R and B channels (e.g. BGR->RGB), default True.
"""
return self._model.swap_rb
@size.setter
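The MattingResult returned by predict is typically used to composite the foreground over a new background. A minimal sketch, with the `alpha` array standing in for the predicted matte (values in [0, 1]):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background with an alpha matte in [0, 1]."""
    a = alpha[..., None]  # HW -> HW1 so it broadcasts over the 3 channels
    return a * foreground + (1.0 - a) * background
```

Pixels where the matte is 1.0 take the foreground value, 0.0 the background, and intermediate values blend the two, which is what gives matting its soft edges around hair and fur.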


@@ -25,6 +25,14 @@ class PPMatting(FastDeployModel):
config_file,
runtime_option=None,
model_format=ModelFormat.PADDLE):
"""Load a PPMatting model exported by PaddleSeg.
:param model_file: (str) Path of the model file, e.g. PPMatting-512/model.pdmodel
:param params_file: (str) Path of the parameters file, e.g. PPMatting-512/model.pdiparams; if the model_format is ModelFormat.ONNX, this parameter is ignored and can be left as an empty string
:param config_file: (str) Path of the configuration file for deployment, e.g. PPMatting-512/deploy.yml
:param runtime_option: (fastdeploy.RuntimeOption) RuntimeOption for inference of this model; if None, the default backend on CPU is used
:param model_format: (fastdeploy.ModelFormat) Model format of the loaded model
"""
super(PPMatting, self).__init__(runtime_option)
assert model_format == ModelFormat.PADDLE, "PPMatting model only supports the ModelFormat.PADDLE model format for now."
@@ -34,5 +42,10 @@ class PPMatting(FastDeployModel):
assert self.initialized, "PPMatting model initialize failed."
def predict(self, input_image):
""" Predict the matting result for an input image
:param input_image: (numpy.ndarray)The input image data, 3-D array with layout HWC, BGR format
:return: MattingResult
"""
assert input_image is not None, "The input image data is None."
return self._model.predict(input_image)
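The input contract documented above (a non-None 3-D HWC array in BGR order) can be checked before calling predict. A hedged helper sketch, not part of the FastDeploy API:

```python
import numpy as np

def check_input(image):
    # Mirrors the documented contract: non-None, 3-D array with layout HWC.
    assert image is not None, "The input image data is None."
    assert image.ndim == 3 and image.shape[-1] == 3, \
        "Expected a 3-D HWC image with 3 (BGR) channels."
    return image
```

Note that OpenCV's imread already yields HWC uint8 arrays in BGR order, so images loaded that way satisfy this contract without conversion.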