Mirror of https://github.com/PaddlePaddle/FastDeploy.git
[YOLOv8] Add PaddleYOLOv8 models download links (#1152)
* [Model] Support PaddleYOLOv8 model
* [YOLOv8] Add PaddleYOLOv8 pybind
* [Other] Update from latest develop (#30)
  * [Backend] Remove all lite options in RuntimeOption (#1109): remove all lite options, fix code error, move pybind, fix build error
  * [Backend] Add TensorRT FP16 support for AdaptivePool2d (#1116): add FP16 CUDA kernel, fix code bug, update code
  * [Doc] Fix KunlunXin doc (#1139)
  * [Model] Support PaddleYOLOv8 model (#1136)
* [YOLOv8] Add PaddleYOLOv8 pybind11 (#1144) (#31)
* [benchmark] Add PaddleYOLOv8 to the benchmark
* [Lite] Support PaddleYOLOv8 with the Lite backend (#1145)
* [Pick] Update from latest develop (#32)
  * [Model] Support Insightface model inference on RKNPU (#1113): update cross-compilation, update issues.md and fastdeploy_init.sh, update RKNPU2 support for the Insightface model family, update docs, attempt to fix a pybind issue
  * [Other] Add function for aligning faces with five points (#1124): add the 5-point face alignment code, update code format, resolve review comments, update the example and comments
  * [Model] Add Silero VAD example (#1107): add VAD example, fix typos, rename and format files, remove the model and wav files, delete Vad.cc and Vad.h, fix max and min, add params, update README and README_CN
* [YOLOv8] Support PaddleYOLOv8 on KunlunXin & Ascend
* [YOLOv8] Add PaddleYOLOv8 model download links
* [YOLOv8] Add PaddleYOLOv8 Box AP

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: yeliang2258 <30516196+yeliang2258@users.noreply.github.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: Zheng-Bicheng <58363586+Zheng-Bicheng@users.noreply.github.com>
Co-authored-by: Jason <928090362@qq.com>
Co-authored-by: Qianhe Chen <54462604+chenqianhe@users.noreply.github.com>
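As context for the diff below, here is a minimal usage sketch of the Python API this commit wires up. The class name and the RuntimeOption methods are taken from the diff itself; the file names, the `fastdeploy` import path, and the exact constructor signature are assumptions for illustration, not code from this commit.

```python
import cv2
import fastdeploy as fd

# Assumed layout: an exported PaddleYOLOv8 model directory, as linked in the README tables below.
model_dir = "yolov8_s_500e_coco"
option = fd.RuntimeOption()
option.use_gpu()  # or option.use_kunlunxin() / option.use_ascend(), the devices added by this commit

model = fd.vision.detection.PaddleYOLOv8(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)  # DetectionResult with boxes, scores and label ids
```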
@@ -43,17 +43,22 @@ def parse_arguments():
     parser.add_argument(
         "--device",
         default="cpu",
-        help="Type of inference device, support 'cpu' or 'gpu'.")
+        help="Type of inference device, support 'cpu', 'gpu', 'kunlunxin', 'ascend' etc.")
     parser.add_argument(
         "--backend",
         type=str,
         default="default",
-        help="inference backend, default, ort, ov, trt, paddle, paddle_trt.")
+        help="inference backend, default, ort, ov, trt, paddle, paddle_trt, lite.")
     parser.add_argument(
         "--enable_trt_fp16",
         type=ast.literal_eval,
         default=False,
         help="whether enable fp16 in trt backend")
+    parser.add_argument(
+        "--enable_lite_fp16",
+        type=ast.literal_eval,
+        default=False,
+        help="whether enable fp16 in lite backend")
     parser.add_argument(
         "--enable_collect_memory_info",
         type=ast.literal_eval,
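Note that the FP16 switches use `ast.literal_eval` as the argparse type, so the command line takes Python literals such as `True` or `False`. A small self-contained sketch of that pattern (illustrative only, not repository code):

```python
import argparse
import ast

parser = argparse.ArgumentParser()
# Same pattern as the benchmark script: the literal string "True" is evaluated into a real bool.
parser.add_argument("--enable_lite_fp16", type=ast.literal_eval, default=False)
args = parser.parse_args(["--enable_lite_fp16", "True"])
print(args.enable_lite_fp16, type(args.enable_lite_fp16))  # True <class 'bool'>
```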
@@ -68,6 +73,7 @@ def build_option(args):
     device = args.device
     backend = args.backend
     enable_trt_fp16 = args.enable_trt_fp16
+    enable_lite_fp16 = args.enable_lite_fp16
     option.set_cpu_thread_num(args.cpu_num_thread)
     if device == "gpu":
         option.use_gpu()
@@ -111,9 +117,35 @@ def build_option(args):
             raise Exception(
                 "While inference with CPU, only support default/ort/ov/paddle now, {} is not supported.".
                 format(backend))
+    elif device == "kunlunxin":
+        option.use_kunlunxin()
+        if backend == "lite":
+            option.use_lite_backend()
+        elif backend == "ort":
+            option.use_ort_backend()
+        elif backend == "paddle":
+            option.use_paddle_backend()
+        elif backend == "default":
+            return option
+        else:
+            raise Exception(
+                "While inference with KunlunXin, only support default/ort/lite/paddle now, {} is not supported.".
+                format(backend))
+    elif device == "ascend":
+        option.use_ascend()
+        if backend == "lite":
+            option.use_lite_backend()
+            if enable_lite_fp16:
+                option.enable_lite_fp16()
+        elif backend == "default":
+            return option
+        else:
+            raise Exception(
+                "While inference with Ascend, only support default/lite now, {} is not supported.".
+                format(backend))
     else:
         raise Exception(
-            "Only support device CPU/GPU now, {} is not supported.".format(
+            "Only support device CPU/GPU/Kunlunxin/Ascend now, {} is not supported.".format(
                 device))

     return option
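Taken together with the new CLI flags, the Ascend branch above amounts to the following RuntimeOption calls. This is a sketch of the expected mapping for `--device ascend --backend lite --enable_lite_fp16 True`, assuming the fastdeploy Python package is installed; it is not additional code from the commit.

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.set_cpu_thread_num(8)   # --cpu_num_thread 8
option.use_ascend()            # --device ascend
option.use_lite_backend()      # --backend lite
option.enable_lite_fp16()      # --enable_lite_fp16 True
```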
@@ -19,8 +19,18 @@ Now FastDeploy supports the deployment of the following models
- [SSD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
- [YOLOv5 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
- [YOLOv6 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
- [YOLOv7 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
- [YOLOv8 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8)
- [RTMDet models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
- [CascadeRCNN models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/cascade_rcnn)
- [PSSDet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/rcnn_enhance)
- [RetinaNet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/retinanet)
- [PPYOLOESOD models](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/smalldet)
- [FCOS models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/fcos)
- [TTFNet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ttfnet)
- [TOOD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/tood)
- [GFL models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/gfl)

## Export Deployment Model

@@ -58,7 +68,22 @@ The accuracy metric is from model descriptions in PaddleDetection. Refer to them
| [yolov6_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_l_300e_coco.tgz) | 229M | Box AP 51.0% | |
| [yolov6_s_400e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_s_400e_coco.tgz) | 68M | Box AP 43.4% | |
| [yolov7_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_l_300e_coco.tgz) | 145M | Box AP 51.0% | |
| [yolov7_x_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_x_300e_coco.tgz) | 277M | Box AP 53.0% | |
| [cascade_rcnn_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_fpn_1x_coco.tgz) | 271M | Box AP 41.1% | TensorRT, ORT not supported yet |
| [cascade_rcnn_r50_vd_fpn_ssld_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.tgz) | 271M | Box AP 45.0% | TensorRT, ORT not supported yet |
| [faster_rcnn_enhance_3x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_enhance_3x_coco.tgz) | 119M | Box AP 41.5% | TensorRT, ORT not supported yet |
| [fcos_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/fcos_r50_fpn_1x_coco.tgz) | 129M | Box AP 39.6% | TensorRT not supported yet |
| [gfl_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/gfl_r50_fpn_1x_coco.tgz) | 128M | Box AP 41.0% | TensorRT not supported yet |
| [ppyoloe_crn_l_80e_sliced_visdrone_640_025](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_80e_sliced_visdrone_640_025.tgz) | 200M | Box AP 31.9% | |
| [retinanet_r101_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r101_fpn_2x_coco.tgz) | 210M | Box AP 40.6% | TensorRT, ORT not supported yet |
| [retinanet_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r50_fpn_1x_coco.tgz) | 136M | Box AP 37.5% | TensorRT, ORT not supported yet |
| [tood_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/tood_r50_fpn_1x_coco.tgz) | 130M | Box AP 42.5% | TensorRT, ORT not supported yet |
| [ttfnet_darknet53_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ttfnet_darknet53_1x_coco.tgz) | 178M | Box AP 33.5% | TensorRT, ORT not supported yet |
| [yolov8_x_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_x_500e_coco.tgz) | 265M | Box AP 53.8% | |
| [yolov8_l_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_l_500e_coco.tgz) | 173M | Box AP 52.8% | |
| [yolov8_m_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_m_500e_coco.tgz) | 99M | Box AP 50.2% | |
| [yolov8_s_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_s_500e_coco.tgz) | 43M | Box AP 44.9% | |
| [yolov8_n_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_n_500e_coco.tgz) | 13M | Box AP 37.3% | |
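The archives above are plain .tgz files, so they can be fetched and unpacked with the Python standard library. A hypothetical helper, not part of the repository:

```python
import tarfile
import urllib.request

# Download one of the archives listed above and unpack it; the extracted directory
# typically contains model.pdmodel, model.pdiparams and infer_cfg.yml for deployment.
url = "https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_s_500e_coco.tgz"
archive = "yolov8_s_500e_coco.tgz"
urllib.request.urlretrieve(url, archive)
with tarfile.open(archive) as tar:
    tar.extractall(".")
```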

## Detailed Deployment Documents

@@ -19,7 +19,8 @@
- [SSD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
- [YOLOv5 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
- [YOLOv6 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
- [YOLOv7 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
- [YOLOv8 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8)
- [RTMDet models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
- [CascadeRCNN models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/cascade_rcnn)
- [PSSDet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/rcnn_enhance)
@@ -78,7 +79,12 @@
| [retinanet_r101_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r101_fpn_2x_coco.tgz) | 210M | Box AP 40.6% | TensorRT, ORT not supported yet |
| [retinanet_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r50_fpn_1x_coco.tgz) | 136M | Box AP 37.5% | TensorRT, ORT not supported yet |
| [tood_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/tood_r50_fpn_1x_coco.tgz) | 130M | Box AP 42.5% | TensorRT, ORT not supported yet |
| [ttfnet_darknet53_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ttfnet_darknet53_1x_coco.tgz) | 178M | Box AP 33.5% | TensorRT, ORT not supported yet |
| [yolov8_x_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_x_500e_coco.tgz) | 265M | Box AP 53.8% | |
| [yolov8_l_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_l_500e_coco.tgz) | 173M | Box AP 52.8% | |
| [yolov8_m_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_m_500e_coco.tgz) | 99M | Box AP 50.2% | |
| [yolov8_s_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_s_500e_coco.tgz) | 43M | Box AP 44.9% | |
| [yolov8_n_500e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov8_n_500e_coco.tgz) | 13M | Box AP 37.3% | |

## Detailed Deployment Documents

@@ -32,6 +32,9 @@ def build_option(args):
     if args.device.lower() == "kunlunxin":
         option.use_kunlunxin()

+    if args.device.lower() == "ascend":
+        option.use_ascend()
+
     if args.device.lower() == "gpu":
         option.use_gpu()

@@ -256,6 +256,7 @@ class FASTDEPLOY_DECL PaddleYOLOv8 : public PPDetBase {
     valid_cpu_backends = {Backend::OPENVINO, Backend::ORT, Backend::PDINFER, Backend::LITE};
     valid_gpu_backends = {Backend::ORT, Backend::PDINFER, Backend::TRT};
     valid_kunlunxin_backends = {Backend::LITE};
+    valid_ascend_backends = {Backend::LITE};
     initialized = Initialize();
   }