mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-06 00:57:33 +08:00
[Doc]Add English version of documents in examples (#1070)
* Update README.md and rename README_EN.md to README_CN.md in each example directory (this pair of updates repeats for every example subdirectory) * Update README.md and README_CN.md across the example directories * Update and rename README_CN.md to README_EN.md * Update export.md * Create export_cn.md * Create README_CN.md
This commit is contained in:
@@ -1,33 +1,36 @@
|
||||
# 视觉模型部署
|
||||
English | [简体中文](README_CN.md)
|
||||
|
||||
本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
|
||||
# Visual Model Deployment
|
||||
|
||||
| 任务类型 | 说明 | 预测结果结构体 |
|
||||
This directory provides deployment examples for various vision models, covering the following task types
|
||||
|
||||
| Task Type | Description | Prediction Result Structure |
|
||||
|:-------------- |:----------------------------------- |:-------------------------------------------------------------------------------- |
|
||||
| Detection | 目标检测,输入图像,检测图像中物体位置,并返回检测框坐标及类别和置信度 | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
|
||||
| Segmentation | 语义分割,输入图像,给出图像中每个像素的分类及置信度 | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
|
||||
| Classification | 图像分类,输入图像,给出图像的分类结果和置信度 | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
|
||||
| FaceDetection | 人脸检测,输入图像,检测图像中人脸位置,并返回检测框坐标及人脸关键点 | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
|
||||
| FaceAlignment | 人脸对齐(人脸关键点检测),输入图像,返回人脸关键点 | [FaceAlignmentResult](../../docs/api/vision_results/face_alignment_result.md) |
|
||||
| KeypointDetection | 关键点检测,输入图像,返回图像中人物行为的各个关键点坐标和置信度 | [KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md) |
|
||||
| FaceRecognition | 人脸识别,输入图像,返回可用于相似度计算的人脸特征的embedding | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
|
||||
| Matting | 抠图,输入图像,返回图片的前景每个像素点的Alpha值 | [MattingResult](../../docs/api/vision_results/matting_result.md) |
|
||||
| OCR | 文本框检测,分类,文本框内容识别,输入图像,返回文本框坐标,文本框的方向类别以及框内的文本内容 | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
|
||||
| MOT | 多目标跟踪,输入图像,检测图像中物体位置,并返回检测框坐标,对象id及类别置信度 | [MOTResult](../../docs/api/vision_results/mot_result.md) |
|
||||
| HeadPose | 头部姿态估计,返回头部欧拉角 | [HeadPoseResult](../../docs/api/vision_results/headpose_result.md) |
|
||||
| Detection | Object detection. Input an image, detect the positions of objects in the image, and return the detected box coordinates, categories, and confidence scores | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
|
||||
| Segmentation | Semantic segmentation. Input the image and output the classification and confidence coefficient of each pixel | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
|
||||
| Classification | Image classification. Input the image and output the classification result and confidence coefficient of the image | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
|
||||
| FaceDetection | Face detection. Input the image, detect the position of faces in the image, and return detected box coordinates and key points of faces | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
|
||||
| FaceAlignment | Face alignment (facial keypoint detection). Input an image and return facial keypoints | [FaceAlignmentResult](../../docs/api/vision_results/face_alignment_result.md) |
|
||||
| KeypointDetection | Keypoint detection. Input an image and return the coordinates and confidence scores of human body keypoints in the image | [KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md) |
|
||||
| FaceRecognition | Face recognition. Input the image and return an embedding of facial features that can be used for similarity calculation | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
|
||||
| Matting | Matting. Input the image and return the Alpha value of each pixel in the foreground of the image | [MattingResult](../../docs/api/vision_results/matting_result.md) |
|
||||
| OCR | Text box detection, classification, and text box content recognition. Input the image and return the text box’s coordinates, orientation category, and content | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
|
||||
| MOT | Multi-object tracking. Input an image, detect the positions of objects in the image, and return the detected box coordinates, object ids, and class confidence scores | [MOTResult](../../docs/api/vision_results/mot_result.md) |
|
||||
| HeadPose | Head pose estimation. Return the head Euler angles | [HeadPoseResult](../../docs/api/vision_results/headpose_result.md) |
|
||||
|
||||
## FastDeploy API设计
|
||||
## FastDeploy API Design
|
||||
|
||||
视觉模型具有较有统一任务范式,在设计API时(包括C++/Python),FastDeploy将视觉模型的部署拆分为四个步骤
|
||||
Vision models share a largely uniform task paradigm. When designing the APIs (both C++ and Python), FastDeploy splits the deployment of a vision model into four steps
|
||||
|
||||
- 模型加载
|
||||
- 图像预处理
|
||||
- 模型推理
|
||||
- 推理结果后处理
|
||||
- Model loading
|
||||
- Image pre-processing
|
||||
- Model Inference
|
||||
- Post-processing of inference results
|
||||
|
||||
FastDeploy针对飞桨的视觉套件,以及外部热门模型,提供端到端的部署服务,用户只需准备模型,按以下步骤即可完成整个模型的部署
|
||||
For PaddlePaddle's vision suites as well as popular external models, FastDeploy provides end-to-end deployment services. Users only need to prepare the model and follow the steps below to complete the deployment
|
||||
|
||||
- 加载模型
|
||||
- 调用`predict`接口
|
||||
- Load the model
|
||||
- Call the `predict` interface (see the sketch below)
|
||||
|
||||
When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/cn/faq/how_to_change_backend.md).
|
||||
|
||||
FastDeploy在各视觉模型部署时,也支持一键切换后端推理引擎,详情参阅[如何切换模型推理引擎](../../docs/cn/faq/how_to_change_backend.md)。
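As a rough illustration of this two-step paradigm, the following C++ sketch loads an exported detection model and calls the predict interface. The model class (`fastdeploy::vision::detection::PPYOLOE`) and the file names are assumptions chosen for illustration; substitute the model and paths for your own task.

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Step 1: load the model (exported inference files plus the preprocessing config).
  fastdeploy::vision::detection::PPYOLOE model(
      "ppyoloe_crn_l_300e_coco/model.pdmodel",
      "ppyoloe_crn_l_300e_coco/model.pdiparams",
      "ppyoloe_crn_l_300e_coco/infer_cfg.yml");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize the model." << std::endl;
    return -1;
  }

  // Step 2: call the predict interface; the result comes back in the
  // task-specific structure (DetectionResult for detection models).
  cv::Mat im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;  // print boxes, labels, and scores
  return 0;
}
```

Other task types follow the same shape: only the model class and the result structure from the table above change.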
|
||||
|
34
examples/vision/README_CN.md
Normal file
@@ -0,0 +1,34 @@
|
||||
[English](README_EN.md) | 简体中文
|
||||
# 视觉模型部署
|
||||
|
||||
本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
|
||||
|
||||
| 任务类型 | 说明 | 预测结果结构体 |
|
||||
|:-------------- |:----------------------------------- |:-------------------------------------------------------------------------------- |
|
||||
| Detection | 目标检测,输入图像,检测图像中物体位置,并返回检测框坐标及类别和置信度 | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
|
||||
| Segmentation | 语义分割,输入图像,给出图像中每个像素的分类及置信度 | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
|
||||
| Classification | 图像分类,输入图像,给出图像的分类结果和置信度 | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
|
||||
| FaceDetection | 人脸检测,输入图像,检测图像中人脸位置,并返回检测框坐标及人脸关键点 | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
|
||||
| FaceAlignment | 人脸对齐(人脸关键点检测),输入图像,返回人脸关键点 | [FaceAlignmentResult](../../docs/api/vision_results/face_alignment_result.md) |
|
||||
| KeypointDetection | 关键点检测,输入图像,返回图像中人物行为的各个关键点坐标和置信度 | [KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md) |
|
||||
| FaceRecognition | 人脸识别,输入图像,返回可用于相似度计算的人脸特征的embedding | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
|
||||
| Matting | 抠图,输入图像,返回图片的前景每个像素点的Alpha值 | [MattingResult](../../docs/api/vision_results/matting_result.md) |
|
||||
| OCR | 文本框检测,分类,文本框内容识别,输入图像,返回文本框坐标,文本框的方向类别以及框内的文本内容 | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
|
||||
| MOT | 多目标跟踪,输入图像,检测图像中物体位置,并返回检测框坐标,对象id及类别置信度 | [MOTResult](../../docs/api/vision_results/mot_result.md) |
|
||||
| HeadPose | 头部姿态估计,返回头部欧拉角 | [HeadPoseResult](../../docs/api/vision_results/headpose_result.md) |
|
||||
|
||||
## FastDeploy API设计
|
||||
|
||||
视觉模型具有较有统一任务范式,在设计API时(包括C++/Python),FastDeploy将视觉模型的部署拆分为四个步骤
|
||||
|
||||
- 模型加载
|
||||
- 图像预处理
|
||||
- 模型推理
|
||||
- 推理结果后处理
|
||||
|
||||
FastDeploy针对飞桨的视觉套件,以及外部热门模型,提供端到端的部署服务,用户只需准备模型,按以下步骤即可完成整个模型的部署
|
||||
|
||||
- 加载模型
|
||||
- 调用`predict`接口
|
||||
|
||||
FastDeploy在各视觉模型部署时,也支持一键切换后端推理引擎,详情参阅[如何切换模型推理引擎](../../docs/cn/faq/how_to_change_backend.md)。
|
@@ -1,34 +1,36 @@
|
||||
# PaddleClas 模型部署
|
||||
English | [简体中文](README_CN.md)
|
||||
|
||||
## 模型版本说明
|
||||
# PaddleClas Model Deployment
|
||||
|
||||
## Model Version Description
|
||||
|
||||
- [PaddleClas Release/2.4](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4)
|
||||
|
||||
目前FastDeploy支持如下模型的部署
|
||||
Now FastDeploy supports the deployment of the following models
|
||||
|
||||
- [PP-LCNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNet.md)
|
||||
- [PP-LCNetV2系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNetV2.md)
|
||||
- [EfficientNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/EfficientNet_and_ResNeXt101_wsl.md)
|
||||
- [GhostNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [MobileNet系列模型(包含v1,v2,v3)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [ShuffleNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [SqueezeNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Others.md)
|
||||
- [Inception系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Inception.md)
|
||||
- [PP-HGNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-HGNet.md)
|
||||
- [ResNet系列模型(包含vd系列)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/ResNet_and_vd.md)
|
||||
- [PP-LCNet Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNet.md)
|
||||
- [PP-LCNetV2 Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNetV2.md)
|
||||
- [EfficientNet Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/EfficientNet_and_ResNeXt101_wsl.md)
|
||||
- [GhostNet Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [MobileNet Models (including v1, v2, v3)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [ShuffleNet Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [SqueezeNet Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Others.md)
|
||||
- [Inception Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Inception.md)
|
||||
- [PP-HGNet Models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-HGNet.md)
|
||||
- [ResNet Models (including the vd series)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/ResNet_and_vd.md)
|
||||
|
||||
## 准备PaddleClas部署模型
|
||||
## Prepare PaddleClas Deployment Model
|
||||
|
||||
PaddleClas模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
|
||||
注意:PaddleClas导出的模型仅包含`inference.pdmodel`和`inference.pdiparams`两个文件,但为了满足部署的需求,同时也需准备其提供的通用[inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml)文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息,开发者可直接下载此文件使用。但需根据自己的需求修改yaml文件中的配置参数,具体可比照PaddleClas模型训练[config](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet)中的infer部分的配置信息进行修改。
|
||||
Attention: The model exported by PaddleClas contains only two files, `inference.pdmodel` and `inference.pdiparams`. However, deployment also requires the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file provided by PaddleClas, from which FastDeploy reads the preprocessing information needed at inference time. Developers can download this file directly, but the configuration parameters in the yaml file need to be modified for their own use case, following the infer section of the PaddleClas training [config](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet).
|
||||
|
||||
|
||||
## 下载预训练模型
|
||||
## Download Pre-trained Model
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleClas导出的部分模型(含inference_cls.yaml文件),开发者可直接下载使用。
|
||||
For developers' testing, some models exported by PaddleClas (including the inference_cls.yaml file) are provided below. Developers can download them directly.
|
||||
|
||||
| 模型 | 参数文件大小 |输入Shape | Top1 | Top5 |
|
||||
| Model | Parameter File Size | Input Shape | Top1 | Top5 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- |
|
||||
| [PPLCNet_x1_0](https://bj.bcebos.com/paddlehub/fastdeploy/PPLCNet_x1_0_infer.tgz) | 12MB | 224x224 |71.32% | 90.03% |
|
||||
| [PPLCNetV2_base](https://bj.bcebos.com/paddlehub/fastdeploy/PPLCNetV2_base_infer.tgz) | 26MB | 224x224 |77.04% | 93.27% |
|
||||
@@ -50,8 +52,8 @@ PaddleClas模型导出,请参考其文档说明[模型导出](https://github.c
|
||||
| [PPHGNet_base_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/PPHGNet_base_ssld_infer.tgz) | 274MB | 224x224 | 85.0% | 97.35% |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz) | 98MB | 224x224 | 79.12% | 94.44% |
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [服务化部署](serving)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
- [Serving Deployment](serving)
|
||||
|
58
examples/vision/classification/paddleclas/README_CN.md
Normal file
@@ -0,0 +1,58 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas 模型部署
|
||||
|
||||
## 模型版本说明
|
||||
|
||||
- [PaddleClas Release/2.4](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4)
|
||||
|
||||
目前FastDeploy支持如下模型的部署
|
||||
|
||||
- [PP-LCNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNet.md)
|
||||
- [PP-LCNetV2系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNetV2.md)
|
||||
- [EfficientNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/EfficientNet_and_ResNeXt101_wsl.md)
|
||||
- [GhostNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [MobileNet系列模型(包含v1,v2,v3)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [ShuffleNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
|
||||
- [SqueezeNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Others.md)
|
||||
- [Inception系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Inception.md)
|
||||
- [PP-HGNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-HGNet.md)
|
||||
- [ResNet系列模型(包含vd系列)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/ResNet_and_vd.md)
|
||||
|
||||
## 准备PaddleClas部署模型
|
||||
|
||||
PaddleClas模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
|
||||
注意:PaddleClas导出的模型仅包含`inference.pdmodel`和`inference.pdiparams`两个文件,但为了满足部署的需求,同时也需准备其提供的通用[inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml)文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息,开发者可直接下载此文件使用。但需根据自己的需求修改yaml文件中的配置参数,具体可比照PaddleClas模型训练[config](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet)中的infer部分的配置信息进行修改。
|
||||
|
||||
|
||||
## 下载预训练模型
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleClas导出的部分模型(含inference_cls.yaml文件),开发者可直接下载使用。
|
||||
|
||||
| 模型 | 参数文件大小 |输入Shape | Top1 | Top5 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- |
|
||||
| [PPLCNet_x1_0](https://bj.bcebos.com/paddlehub/fastdeploy/PPLCNet_x1_0_infer.tgz) | 12MB | 224x224 |71.32% | 90.03% |
|
||||
| [PPLCNetV2_base](https://bj.bcebos.com/paddlehub/fastdeploy/PPLCNetV2_base_infer.tgz) | 26MB | 224x224 |77.04% | 93.27% |
|
||||
| [EfficientNetB7](https://bj.bcebos.com/paddlehub/fastdeploy/EfficientNetB7_infer.tgz) | 255MB | 600x600 | 84.3% | 96.9% |
|
||||
| [EfficientNetB0_small](https://bj.bcebos.com/paddlehub/fastdeploy/EfficientNetB0_small_infer.tgz)| 18MB | 224x224 | 75.8% | 75.8% |
|
||||
| [GhostNet_x1_3_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/GhostNet_x1_3_ssld_infer.tgz) | 29MB | 224x224 | 75.7% | 92.5% |
|
||||
| [GhostNet_x0_5](https://bj.bcebos.com/paddlehub/fastdeploy/GhostNet_x0_5_infer.tgz) | 10MB | 224x224 | 66.8% | 86.9% |
|
||||
| [MobileNetV1_x0_25](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV1_x0_25_infer.tgz) | 1.9MB | 224x224 | 51.4% | 75.5% |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV1_ssld_infer.tgz) | 17MB | 224x224 | 77.9% | 93.9% |
|
||||
| [MobileNetV2_x0_25](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV2_x0_25_infer.tgz) | 5.9MB | 224x224 | 53.2% | 76.5% |
|
||||
| [MobileNetV2_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV2_ssld_infer.tgz) | 14MB | 224x224 | 76.74% | 93.39% |
|
||||
| [MobileNetV3_small_x0_35_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV3_small_x0_35_ssld_infer.tgz) | 6.4MB | 224x224 | 55.55% | 77.71% |
|
||||
| [MobileNetV3_large_x1_0_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV3_large_x1_0_ssld_infer.tgz) | 22MB | 224x224 | 78.96% | 94.48% |
|
||||
| [ShuffleNetV2_x0_25](https://bj.bcebos.com/paddlehub/fastdeploy/ShuffleNetV2_x0_25_infer.tgz) | 2.4MB | 224x224 | 49.9% | 73.79% |
|
||||
| [ShuffleNetV2_x2_0](https://bj.bcebos.com/paddlehub/fastdeploy/ShuffleNetV2_x2_0_infer.tgz) | 29MB | 224x224 | 73.15% | 91.2% |
|
||||
| [SqueezeNet1_1](https://bj.bcebos.com/paddlehub/fastdeploy/SqueezeNet1_1_infer.tgz) | 4.8MB | 224x224 | 60.1% | 81.9% |
|
||||
| [InceptionV3](https://bj.bcebos.com/paddlehub/fastdeploy/InceptionV3_infer.tgz) | 92MB | 299x299 | 79.14% | 94.59% |
|
||||
| [PPHGNet_tiny_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/PPHGNet_tiny_ssld_infer.tgz) | 57MB | 224x224 | 81.95% | 96.12% |
|
||||
| [PPHGNet_base_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/PPHGNet_base_ssld_infer.tgz) | 274MB | 224x224 | 85.0% | 97.35% |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz) | 98MB | 224x224 | 79.12% | 94.44% |
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [服务化部署](serving)
|
@@ -1,11 +1,12 @@
|
||||
# PaddleClas 量化模型在 A311D 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PaddleClas 量化模型到 A311D 上。
|
||||
English | [简体中文](README_CN.md)
|
||||
# Deploy Quantized PaddleClas Models on A311D
|
||||
FastDeploy now supports deploying quantized PaddleClas models to the A311D based on Paddle Lite.
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
For model quantization and downloads of quantized models, refer to [Model Quantization](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Tutorials
|
||||
|
||||
在 A311D 上只支持 C++ 的部署。
|
||||
Only C++ deployment is supported on A311D.
|
||||
|
||||
- [C++部署](cpp)
|
||||
- [C++ deployment](cpp)
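Under the hood, the C++ deployment linked above configures the runtime for Paddle Lite before loading the quantized model. The snippet below is only a minimal sketch of that idea; the `UseTimVX()` helper and the model/file names are assumptions for illustration, so follow the linked C++ document for the authoritative steps.

```c++
#include "fastdeploy/vision.h"

int main() {
  // Sketch only: target the A311D NPU through the Paddle Lite (TIM-VX) backend.
  fastdeploy::RuntimeOption option;
  option.UseTimVX();  // assumed helper for TIM-VX devices such as the A311D

  // Load the quantized PaddleClas model with this runtime option.
  fastdeploy::vision::classification::PaddleClasModel model(
      "ResNet50_vd_quant/inference.pdmodel",
      "ResNet50_vd_quant/inference.pdiparams",
      "ResNet50_vd_quant/inference_cls.yaml", option);

  cv::Mat im = cv::imread("ILSVRC2012_val_00000010.jpeg");
  fastdeploy::vision::ClassifyResult res;
  model.Predict(&im, &res);
  return 0;
}
```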
|
||||
|
12
examples/vision/classification/paddleclas/a311d/README_CN.md
Normal file
@@ -0,0 +1,12 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas 量化模型在 A311D 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PaddleClas 量化模型到 A311D 上。
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
在 A311D 上只支持 C++ 的部署。
|
||||
|
||||
- [C++部署](cpp)
|
@@ -1,81 +1,82 @@
|
||||
## 图像分类 PaddleClas Android Demo 使用文档
|
||||
English | [简体中文](README_CN.md)
|
||||
## PaddleClas Android Demo Tutorial
|
||||
|
||||
在 Android 上实现实时的图像分类功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
|
||||
This demo performs real-time image classification on Android. It is easy to use and open to extension; for example, you can run your own trained model in it.
|
||||
|
||||
## 环境准备
|
||||
## Prepare the Environment
|
||||
|
||||
1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
|
||||
2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
|
||||
1. Install Android Studio in your local environment. Refer to the [Android Studio official website](https://developer.android.com/studio) for the detailed installation guide.
|
||||
2. Prepare an Android phone and turn on the USB debug mode: `Settings -> Find developer options -> Open developer options and USB debug mode`
|
||||
|
||||
## 部署步骤
|
||||
## Deployment steps
|
||||
|
||||
1. 目标检测 PaddleClas Demo 位于 `fastdeploy/examples/vision/classification/paddleclas/android` 目录
|
||||
2. 用 Android Studio 打开 paddleclas/android 工程
|
||||
3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
|
||||
1. The image classification PaddleClas demo is located in the `fastdeploy/examples/vision/classification/paddleclas/android` directory
|
||||
2. Open paddleclas/android project with Android Studio
|
||||
3. Connect the phone to the computer, turn on USB debug mode and file transfer mode, and connect your phone to Android Studio (allow the phone to install software from USB)
|
||||
|
||||
<p align="center">
|
||||
<img width="1280" alt="image" src="https://user-images.githubusercontent.com/31974251/197338597-2c9e1cf0-569b-49b9-a7fb-cdec71921af8.png">
|
||||
</p>
|
||||
|
||||
> **注意:**
|
||||
>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
|
||||
> **Attention:**
|
||||
>> If you encounter an NDK configuration error while importing, building, or running the project, open `File > Project Structure > SDK Location` and change `Android SDK location` to the SDK path configured on your machine.
|
||||
|
||||
4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库,需要联网)
|
||||
成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
|
||||
4. Click the Run button to automatically compile the APP and install it to the phone. (This step automatically downloads the pre-compiled FastDeploy Android library and requires an Internet connection.)
|
||||
After success, the effect is as follows. Figure 1: the APP installed on the phone; Figure 2: the APP after opening, which automatically recognizes and labels the objects in the image; Figure 3: the APP settings page, opened via the settings icon in the upper right corner, where different options can be tried.
|
||||
|
||||
| APP 图标 | APP 效果 | APP设置项
|
||||
| APP Icon | APP Effect | APP Settings
|
||||
| --- | --- | --- |
|
||||
|  |  |  |
|
||||
|
||||
## PaddleClasModel Java API 说明
|
||||
- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleClasModel初始化参数说明如下:
|
||||
- modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
|
||||
- paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
|
||||
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
|
||||
- labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 imagenet1k_label_list.txt,每一行包含一个label
|
||||
- option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
|
||||
## PaddleClasModel Java API Description
|
||||
- Model initialization API: the model can be initialized in two ways: either directly through the constructor, or by calling the init function at the appropriate point in the program. The PaddleClasModel initialization parameters are described below:
|
||||
- modelFile: String. Model file path in paddle format, such as model.pdmodel
|
||||
- paramFile: String. Parameter file path in paddle format, such as model.pdiparams
|
||||
- configFile: String. Preprocessing file for model inference, such as infer_cfg.yml
|
||||
- labelFile: String. This optional parameter indicates the path of the label file and is used for visualization, such as imagenet1k_label_list.txt, each line containing one label
|
||||
- option: RuntimeOption. Optional parameter for model initialization. The default runtime options are used if this parameter is not passed.
|
||||
|
||||
```java
|
||||
// 构造函数: constructor w/o label file
|
||||
public PaddleClasModel(); // 空构造函数,之后可以调用init初始化
|
||||
// Constructor: constructor w/o label file
|
||||
public PaddleClasModel(); // An empty constructor, which can be initialized by calling init
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile);
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile);
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
// 手动调用init初始化: call init manually w/o label file
|
||||
// Call init manually for initialization: call init manually w/o label file
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
```
|
||||
- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
|
||||
- Model prediction API: the prediction API includes a direct-prediction API and a prediction-with-visualization API. Direct prediction returns only the inference result, without saving an image or rendering the result to a Bitmap. Prediction with visualization predicts the result and also visualizes it: the visualized image is saved to the specified path and the visualization is rendered onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can then be displayed in the camera view.
|
||||
```java
|
||||
// 直接预测:不保存图片以及不渲染结果到Bitmap上
|
||||
// Direct prediction: No image saving and no result rendering to Bitmap
|
||||
public ClassifyResult predict(Bitmap ARGB8888Bitmap);
|
||||
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
|
||||
// Prediction and visualization: Predict and visualize the results, save the visualized image to the specified path, and render the visualized results on Bitmap
|
||||
public ClassifyResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold)
|
||||
```
|
||||
- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
|
||||
- Model resource release API: Call release() API to release model resources. Return true for successful release and false for failure; call initialized() to determine whether the model was initialized successfully, with true indicating successful initialization and false indicating failure.
|
||||
```java
|
||||
public boolean release(); // 释放native资源
|
||||
public boolean initialized(); // 检查是否初始化成功
|
||||
public boolean release(); // Release native resources
|
||||
public boolean initialized(); // Check if initialization is successful
|
||||
```
|
||||
- RuntimeOption设置说明
|
||||
- RuntimeOption settings
|
||||
```java
|
||||
public void enableLiteFp16(); // 开启fp16精度推理
|
||||
public void disableLiteFP16(); // 关闭fp16精度推理
|
||||
public void setCpuThreadNum(int threadNum); // 设置线程数
|
||||
public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
|
||||
public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
|
||||
public void enableRecordTimeOfRuntime(); // 是否打印模型运行耗时
|
||||
public void enableLiteFp16(); // Enable fp16 precision inference
public void disableLiteFP16(); // Disable fp16 precision inference
|
||||
public void setCpuThreadNum(int threadNum); // Set thread numbers
|
||||
public void setLitePowerMode(LitePowerMode mode); // Set power mode
|
||||
public void setLitePowerMode(String modeStr); // Set power mode through character string
|
||||
public void enableRecordTimeOfRuntime(); // Whether to record and print model inference time
|
||||
```
|
||||
|
||||
- 模型结果ClassifyResult说明
|
||||
- Model ClassifyResult
|
||||
```java
|
||||
public float[] mScores; // [n] 得分
|
||||
public int[] mLabelIds; // [n] 分类ID
|
||||
public boolean initialized(); // 检测结果是否有效
|
||||
public float[] mScores; // [n] Score
|
||||
public int[] mLabelIds; // [n] Classification ID
|
||||
public boolean initialized(); // Whether the result is valid or not
|
||||
```
|
||||
|
||||
- 模型调用示例1:使用构造函数以及默认的RuntimeOption
|
||||
- Model calling example 1: use the constructor and the default RuntimeOption
|
||||
```java
|
||||
import java.nio.ByteBuffer;
|
||||
import android.graphics.Bitmap;
|
||||
@@ -84,67 +85,67 @@ import android.opengl.GLES20;
|
||||
import com.baidu.paddle.fastdeploy.vision.ClassifyResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.classification.PaddleClasModel;
|
||||
|
||||
// 初始化模型
|
||||
// Initialize the model
|
||||
PaddleClasModel model = new PaddleClasModel("MobileNetV1_x0_25_infer/inference.pdmodel",
|
||||
"MobileNetV1_x0_25_infer/inference.pdiparams",
|
||||
"MobileNetV1_x0_25_infer/inference_cls.yml");
|
||||
|
||||
// 读取图片: 以下仅为读取Bitmap的伪代码
|
||||
// Read the image: The following is merely the pseudo code to read the Bitmap
|
||||
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
|
||||
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
|
||||
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
|
||||
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
|
||||
|
||||
// 模型推理
|
||||
// Model inference
|
||||
ClassifyResult result = model.predict(ARGB8888ImageBitmap);
|
||||
|
||||
// 释放模型资源
|
||||
// Release model resources
|
||||
model.release();
|
||||
```
|
||||
|
||||
- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
|
||||
- Model calling example 2: manually call init at the appropriate point in the program, with a custom RuntimeOption
|
||||
```java
|
||||
// import 同上 ...
|
||||
// import is as the above...
|
||||
import com.baidu.paddle.fastdeploy.RuntimeOption;
|
||||
import com.baidu.paddle.fastdeploy.LitePowerMode;
|
||||
import com.baidu.paddle.fastdeploy.vision.ClassifyResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.classification.PaddleClasModel;
|
||||
// 新建空模型
|
||||
// Create an empty model
|
||||
PaddleClasModel model = new PaddleClasModel();
|
||||
// 模型路径
|
||||
// Model path
|
||||
String modelFile = "MobileNetV1_x0_25_infer/inference.pdmodel";
|
||||
String paramFile = "MobileNetV1_x0_25_infer/inference.pdiparams";
|
||||
String configFile = "MobileNetV1_x0_25_infer/inference_cls.yml";
|
||||
// 指定RuntimeOption
|
||||
// Specify RuntimeOption
|
||||
RuntimeOption option = new RuntimeOption();
|
||||
option.setCpuThreadNum(2);
|
||||
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
|
||||
option.enableRecordTimeOfRuntime();
|
||||
option.enableLiteFp16();
|
||||
// 使用init函数初始化
|
||||
// Use init function for initialization
|
||||
model.init(modelFile, paramFile, configFile, option);
|
||||
// Bitmap读取、模型预测、资源释放 同上 ...
|
||||
// Bitmap reading, model prediction, and resource release are as above ...
|
||||
```
|
||||
更详细的用法请参考 [MainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/classification/ClassificationMainActivity.java) 中的用法
|
||||
Refer to [MainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/classification/ClassificationMainActivity.java) for more information
|
||||
|
||||
## 替换 FastDeploy 预测库和模型
|
||||
替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-xxx-shared`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models/MobileNetV1_x0_25_infer`。
|
||||
- 替换FastDeploy预测库的步骤:
|
||||
- 下载或编译最新的FastDeploy Android预测库,解压缩后放在 `app/libs` 目录下;
|
||||
- 修改 `app/src/main/cpp/CMakeLists.txt` 中的预测库路径,指向您下载或编译的预测库路径。如:
|
||||
## Replace FastDeploy Prediction Library and Models
|
||||
It’s simple to replace the FastDeploy prediction library and models. The prediction library is located at `app/libs/fastdeploy-android-xxx-shared`, where `xxx` represents the version of your prediction library. The models are located at `app/src/main/assets/models/MobileNetV1_x0_25_infer`.
|
||||
- Steps to replace FastDeploy prediction library:
|
||||
- Download or compile the latest FastDeploy Android SDK, unzip and place it in the `app/libs`;
|
||||
- Modify the prediction library path in `app/src/main/cpp/CMakeLists.txt` so that it points to the library you downloaded or compiled. For example:
|
||||
```cmake
|
||||
set(FastDeploy_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../libs/fastdeploy-android-xxx-shared")
|
||||
```
|
||||
- 替换PaddleClas模型的步骤:
|
||||
- 将您的PaddleClas分类模型放在 `app/src/main/assets/models` 目录下;
|
||||
- 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
|
||||
- Steps to replace PaddleClas models:
|
||||
- Put your PaddleClas model in `app/src/main/assets/models`;
|
||||
- Modify the default value of the model path in `app/src/main/res/values/strings.xml`. For example,
|
||||
```xml
|
||||
<!-- 将这个路径指修改成您的模型,如 models/MobileNetV2_x0_25_infer -->
|
||||
<!-- Change this path to your model, such as models/MobileNetV2_x0_25_infer -->
|
||||
<string name="CLASSIFICATION_MODEL_DIR_DEFAULT">models/MobileNetV1_x0_25_infer</string>
|
||||
<string name="CLASSIFICATION_LABEL_PATH_DEFAULT">labels/imagenet1k_label_list.txt</string>
|
||||
```
|
||||
|
||||
## 更多参考文档
|
||||
如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
|
||||
- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
|
||||
- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
||||
## More Reference Documents
|
||||
For more FastDeploy Java API documentation and for how to access the FastDeploy C++ API via JNI, refer to:
|
||||
- [Use FastDeploy Java SDK in Android](../../../../../java/android/)
|
||||
- [Use FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
||||
|
151
examples/vision/classification/paddleclas/android/README_CN.md
Normal file
@@ -0,0 +1,151 @@
|
||||
[English](README.md) | 简体中文
|
||||
## 图像分类 PaddleClas Android Demo 使用文档
|
||||
|
||||
在 Android 上实现实时的图像分类功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
|
||||
|
||||
## 环境准备
|
||||
|
||||
1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
|
||||
2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
|
||||
|
||||
## 部署步骤
|
||||
|
||||
1. 目标检测 PaddleClas Demo 位于 `fastdeploy/examples/vision/classification/paddleclas/android` 目录
|
||||
2. 用 Android Studio 打开 paddleclas/android 工程
|
||||
3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
|
||||
|
||||
<p align="center">
|
||||
<img width="1280" alt="image" src="https://user-images.githubusercontent.com/31974251/197338597-2c9e1cf0-569b-49b9-a7fb-cdec71921af8.png">
|
||||
</p>
|
||||
|
||||
> **注意:**
|
||||
>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
|
||||
|
||||
4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库,需要联网)
|
||||
成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
|
||||
|
||||
| APP 图标 | APP 效果 | APP设置项
|
||||
| --- | --- | --- |
|
||||
|  |  |  |
|
||||
|
||||
## PaddleClasModel Java API 说明
|
||||
- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleClasModel初始化参数说明如下:
|
||||
- modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
|
||||
- paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
|
||||
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
|
||||
- labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 imagenet1k_label_list.txt,每一行包含一个label
|
||||
- option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
|
||||
|
||||
```java
|
||||
// 构造函数: constructor w/o label file
|
||||
public PaddleClasModel(); // 空构造函数,之后可以调用init初始化
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile);
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile);
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
// 手动调用init初始化: call init manually w/o label file
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
```
|
||||
- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
|
||||
```java
|
||||
// 直接预测:不保存图片以及不渲染结果到Bitmap上
|
||||
public ClassifyResult predict(Bitmap ARGB8888Bitmap);
|
||||
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
|
||||
public ClassifyResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold)
|
||||
```
|
||||
- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
|
||||
```java
|
||||
public boolean release(); // 释放native资源
|
||||
public boolean initialized(); // 检查是否初始化成功
|
||||
```
|
||||
- RuntimeOption设置说明
|
||||
```java
|
||||
public void enableLiteFp16(); // 开启fp16精度推理
|
||||
public void disableLiteFP16(); // 关闭fp16精度推理
|
||||
public void setCpuThreadNum(int threadNum); // 设置线程数
|
||||
public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
|
||||
public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
|
||||
public void enableRecordTimeOfRuntime(); // 是否打印模型运行耗时
|
||||
```
|
||||
|
||||
- 模型结果ClassifyResult说明
|
||||
```java
|
||||
public float[] mScores; // [n] 得分
|
||||
public int[] mLabelIds; // [n] 分类ID
|
||||
public boolean initialized(); // 检测结果是否有效
|
||||
```
|
||||
|
||||
- 模型调用示例1:使用构造函数以及默认的RuntimeOption
|
||||
```java
|
||||
import java.nio.ByteBuffer;
|
||||
import android.graphics.Bitmap;
|
||||
import android.opengl.GLES20;
|
||||
|
||||
import com.baidu.paddle.fastdeploy.vision.ClassifyResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.classification.PaddleClasModel;
|
||||
|
||||
// 初始化模型
|
||||
PaddleClasModel model = new PaddleClasModel("MobileNetV1_x0_25_infer/inference.pdmodel",
|
||||
"MobileNetV1_x0_25_infer/inference.pdiparams",
|
||||
"MobileNetV1_x0_25_infer/inference_cls.yml");
|
||||
|
||||
// 读取图片: 以下仅为读取Bitmap的伪代码
|
||||
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
|
||||
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
|
||||
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
|
||||
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
|
||||
|
||||
// 模型推理
|
||||
ClassifyResult result = model.predict(ARGB8888ImageBitmap);
|
||||
|
||||
// 释放模型资源
|
||||
model.release();
|
||||
```
|
||||
|
||||
- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
|
||||
```java
|
||||
// import 同上 ...
|
||||
import com.baidu.paddle.fastdeploy.RuntimeOption;
|
||||
import com.baidu.paddle.fastdeploy.LitePowerMode;
|
||||
import com.baidu.paddle.fastdeploy.vision.ClassifyResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.classification.PaddleClasModel;
|
||||
// 新建空模型
|
||||
PaddleClasModel model = new PaddleClasModel();
|
||||
// 模型路径
|
||||
String modelFile = "MobileNetV1_x0_25_infer/inference.pdmodel";
|
||||
String paramFile = "MobileNetV1_x0_25_infer/inference.pdiparams";
|
||||
String configFile = "MobileNetV1_x0_25_infer/inference_cls.yml";
|
||||
// 指定RuntimeOption
|
||||
RuntimeOption option = new RuntimeOption();
|
||||
option.setCpuThreadNum(2);
|
||||
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
|
||||
option.enableRecordTimeOfRuntime();
|
||||
option.enableLiteFp16();
|
||||
// 使用init函数初始化
|
||||
model.init(modelFile, paramFile, configFile, option);
|
||||
// Bitmap读取、模型预测、资源释放 同上 ...
|
||||
```
|
||||
更详细的用法请参考 [MainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/classification/ClassificationMainActivity.java) 中的用法
|
||||
|
||||
## 替换 FastDeploy 预测库和模型
|
||||
替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-xxx-shared`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models/MobileNetV1_x0_25_infer`。
|
||||
- 替换FastDeploy预测库的步骤:
|
||||
- 下载或编译最新的FastDeploy Android预测库,解压缩后放在 `app/libs` 目录下;
|
||||
- 修改 `app/src/main/cpp/CMakeLists.txt` 中的预测库路径,指向您下载或编译的预测库路径。如:
|
||||
```cmake
|
||||
set(FastDeploy_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../libs/fastdeploy-android-xxx-shared")
|
||||
```
|
||||
- 替换PaddleClas模型的步骤:
|
||||
- 将您的PaddleClas分类模型放在 `app/src/main/assets/models` 目录下;
|
||||
- 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
|
||||
```xml
|
||||
<!-- 将这个路径指修改成您的模型,如 models/MobileNetV2_x0_25_infer -->
|
||||
<string name="CLASSIFICATION_MODEL_DIR_DEFAULT">models/MobileNetV1_x0_25_infer</string>
|
||||
<string name="CLASSIFICATION_LABEL_PATH_DEFAULT">labels/imagenet1k_label_list.txt</string>
|
||||
```
|
||||
|
||||
## 更多参考文档
|
||||
如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
|
||||
- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
|
||||
- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
@@ -1,52 +1,48 @@
|
||||
# PaddleClas C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleClas C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成PaddleClas系列模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides `infer.cc`, an example that quickly deploys PaddleClas models on CPU/GPU, as well as on GPU with TensorRT acceleration.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps.
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上ResNet50_vd推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking ResNet50_vd inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
# Download ResNet50_vd model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 2
|
||||
# IPU推理
|
||||
# IPU inference
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 3
|
||||
# KunlunXin XPU推理
|
||||
# KunlunXin XPU inference
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 4
|
||||
# Huawei Ascend NPU推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 5
|
||||
```
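The trailing number in the commands above selects the inference device/backend inside `infer.cc`. As a rough, non-authoritative sketch, flags 0-2 typically map to RuntimeOption settings along these lines (the exact mapping lives in `infer.cc`):

```c++
#include "fastdeploy/vision.h"

// Illustrative mapping from the demo's numeric flag to RuntimeOption settings.
// UseCpu/UseGpu/UseTrtBackend are standard RuntimeOption helpers; the mapping
// itself is a sketch, see infer.cc for the actual logic.
fastdeploy::RuntimeOption BuildOption(int flag) {
  fastdeploy::RuntimeOption option;
  if (flag == 1) {
    option.UseGpu();         // flag 1: plain GPU inference
  } else if (flag == 2) {
    option.UseGpu();
    option.UseTrtBackend();  // flag 2: GPU with TensorRT acceleration
  } else {
    option.UseCpu();         // flag 0: CPU inference (the default)
  }
  return option;
}
```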
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
|
||||
- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
|
||||
## PaddleClas C++ Interface
|
||||
|
||||
## PaddleClas C++接口
|
||||
|
||||
### PaddleClas类
|
||||
### PaddleClas Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::classification::PaddleClasModel(
|
||||
@@ -57,32 +53,32 @@ fastdeploy::vision::classification::PaddleClasModel(
|
||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||
```
|
||||
|
||||
PaddleClas模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
PaddleClas model loading and initialization, where model_file and params_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA) for more information
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Inference deployment configuration file
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, in which case the default configuration is used
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict function
|
||||
|
||||
> ```c++
|
||||
> PaddleClasModel::Predict(cv::Mat* im, ClassifyResult* result, int topk = 1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Takes an image as input and directly outputs the classification result (see the sketch after the parameter list below).
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 分类结果,包括label_id,以及相应的置信度, ClassifyResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
> > * **im**: Input image, which must be in HWC, BGR format
|
||||
> > * **result**: The classification result, including label_id, and the corresponding confidence. Refer to [Visual Model Prediction Results](../../../../../docs/api/vision_results/) for the description of ClassifyResult
|
||||
> > * **topk**(int): Return the topk classification results with the highest prediction probability. Default 1
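Putting the constructor and `Predict` together, a minimal sketch of the call flow (file names follow the download commands above; error handling kept brief):

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Load the exported PaddleClas model together with its preprocessing config.
  fastdeploy::vision::classification::PaddleClasModel model(
      "ResNet50_vd_infer/inference.pdmodel",
      "ResNet50_vd_infer/inference.pdiparams",
      "ResNet50_vd_infer/inference_cls.yaml");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize PaddleClasModel." << std::endl;
    return -1;
  }

  // Read the test image (OpenCV loads it as HWC, BGR) and run prediction.
  cv::Mat im = cv::imread("ILSVRC2012_val_00000010.jpeg");
  fastdeploy::vision::ClassifyResult res;
  if (!model.Predict(&im, &res, 5)) {  // keep the top-5 results
    std::cerr << "Prediction failed." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;  // print label_ids and scores
  return 0;
}
```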
|
||||
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Visual Model prediction results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
89
examples/vision/classification/paddleclas/cpp/README_CN.md
Normal file
@@ -0,0 +1,89 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成PaddleClas系列模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上ResNet50_vd推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 0
|
||||
# GPU推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 2
|
||||
# IPU推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 3
|
||||
# KunlunXin XPU推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 4
|
||||
# Huawei Ascend NPU推理
|
||||
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 5
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
|
||||
- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
|
||||
|
||||
## PaddleClas C++接口
|
||||
|
||||
### PaddleClas类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::classification::PaddleClasModel(
|
||||
const string& model_file,
|
||||
const string& params_file,
|
||||
const string& config_file,
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||
```
|
||||
|
||||
PaddleClas模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> PaddleClasModel::Predict(cv::Mat* im, ClassifyResult* result, int topk = 1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 分类结果,包括label_id,以及相应的置信度, ClassifyResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,37 +1,36 @@
|
||||
# PaddleClas模型 Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleClas Model Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps.
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install the FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成ResNet50_vd在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
This directory provides `infer.py`, a quick example of deploying ResNet50_vd on CPU/GPU, or on GPU with TensorRT acceleration. Run the following script to complete the deployment
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download deployment example code
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/classification/paddleclas/python
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
# Download the ResNet50_vd model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device cpu --topk 1
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu --topk 1
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
# Use TensorRT inference on GPU (Note: the first TensorRT run serializes the model, which takes some time. Please be patient.)
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True --topk 1
|
||||
# IPU推理(注意:IPU推理首次运行会有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
# IPU inference (Note: the first IPU run serializes the model, which takes some time. Please be patient.)
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ipu --topk 1
|
||||
# 昆仑芯XPU推理
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device kunlunxin --topk 1
|
||||
# 华为昇腾NPU推理
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ascend --topk 1
|
||||
# KunlunXin XPU inference
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device kunlunxin --topk 1
# Huawei Ascend NPU inference
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ascend --topk 1
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
The result returned after running is as follows
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 153,
|
||||
@@ -39,43 +38,43 @@ scores: 0.686229,
|
||||
)
|
||||
```
|
||||
|
||||
## PaddleClasModel Python接口
|
||||
## PaddleClasModel Python Interface
|
||||
|
||||
```python
|
||||
fd.vision.classification.PaddleClasModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleClas模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
Loads and initializes the PaddleClas model, where model_file and params_file are the Paddle inference files exported from the trained model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA) for details
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Inference deployment configuration file
|
||||
> * **runtime_option**(RuntimeOption): Backend Inference configuration. None by default. (use the default configuration)
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
### predict函数
|
||||
### predict function
|
||||
|
||||
> ```python
|
||||
> PaddleClasModel.predict(input_image, topk=1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出分类topk结果。
|
||||
> Model prediction interface. Takes an image as input and directly returns the top-k classification results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
> > * **input_image**(np.ndarray): Input image, which must be in HWC layout with BGR channel order
|
||||
> > * **topk**(int): Return the topk classification results with the highest prediction probability. Default 1
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.ClassifyResult` structure. Refer to [Visual Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
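A minimal usage sketch of the interface documented above (not part of the original example; it assumes the ResNet50_vd_infer model and test image downloaded by the commands earlier on this page):

```python
import cv2
import fastdeploy as fd

# Load the model; runtime_option is omitted, so the default backend configuration is used.
model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml")

im = cv2.imread("ILSVRC2012_val_00000010.jpeg")  # HWC layout, BGR channel order
result = model.predict(im, topk=5)               # returns fastdeploy.vision.ClassifyResult
print(result)                                    # prints label_ids and scores
```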
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other documents
|
||||
|
||||
- [PaddleClas 模型介绍](..)
|
||||
- [PaddleClas C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [PaddleClas Model Description](..)
|
||||
- [PaddleClas C++ Deployment](../cpp)
|
||||
- [Model prediction results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
@@ -0,0 +1,82 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas模型 Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成ResNet50_vd在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/classification/paddleclas/python
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device cpu --topk 1
|
||||
# GPU推理
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu --topk 1
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True --topk 1
|
||||
# IPU推理(注意:IPU推理首次运行会有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ipu --topk 1
|
||||
# 昆仑芯XPU推理
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device kunlunxin --topk 1
|
||||
# 华为昇腾NPU推理
|
||||
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ascend --topk 1
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 153,
|
||||
scores: 0.686229,
|
||||
)
|
||||
```
|
||||
|
||||
## PaddleClasModel Python接口
|
||||
|
||||
```python
|
||||
fd.vision.classification.PaddleClasModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleClas模型加载和初始化,其中model_file, params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理部署配置文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> PaddleClasModel.predict(input_image, topk=1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类topk结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [PaddleClas 模型介绍](..)
|
||||
- [PaddleClas C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,50 +1,51 @@
|
||||
# PaddleClas 量化模型部署
|
||||
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
|
||||
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleClas Quantized Model Deployment
|
||||
FastDeploy supports the deployment of quantized models and provides a convenient tool for automatic model compression.
|
||||
Users can either quantize their own models with this tool and deploy them, or directly deploy the quantized models provided by FastDeploy.
|
||||
|
||||
## FastDeploy一键模型自动化压缩工具
|
||||
FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
|
||||
详细教程请见: [一键模型自动化压缩工具](../../../../../tools/common_tools/auto_compression/)
|
||||
注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
|
||||
## FastDeploy one-click auto-compression tool
|
||||
FastDeploy provides a one-click auto-compression tool that allows users to quantize models by simply entering a configuration file.
|
||||
Refer to [one-click auto-compression tool](../../../../../tools/common_tools/auto_compression/) for details.
|
||||
Note: The quantized classification model still requires the inference_cls.yaml file from the FP32 model folder. A model folder produced by your own quantization does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.
|
||||
|
||||
## 下载量化完成的PaddleClas模型
|
||||
用户也可以直接下载下表中的量化模型进行部署.
|
||||
## Download the quantized PaddleClas model
|
||||
Users can also directly download the quantized models in the table below.
|
||||
|
||||
Benchmark表格说明:
|
||||
- Runtime时延为模型在各种Runtime上的推理时延,包含CPU->GPU数据拷贝,GPU推理,GPU->CPU数据拷贝时间. 不包含模型各自的前后处理时间.
|
||||
- 端到端时延为模型在实际推理场景中的时延, 包含模型的前后处理.
|
||||
- 所测时延均为推理1000次后求得的平均值, 单位是毫秒.
|
||||
- INT8 + FP16 为在推理INT8量化模型的同时, 给Runtime 开启FP16推理选项
|
||||
- INT8 + FP16 + PM, 为在推理INT8量化模型和开启FP16的同时, 开启使用Pinned Memory的选项,可加速GPU->CPU数据拷贝的速度
|
||||
- 最大加速比, 为FP32时延除以INT8推理的最快时延,得到最大加速比.
|
||||
- 策略为量化蒸馏训练时, 采用少量无标签数据集训练得到量化模型, 并在全量验证集上验证精度, INT8精度并不代表最高的INT8精度.
|
||||
- CPU为Intel(R) Xeon(R) Gold 6271C, 所有测试中固定CPU线程数为1. GPU为Tesla T4, TensorRT版本8.4.15.
|
||||
Benchmark table description:
|
||||
- Runtime latency: model’s inference latency on multiple Runtimes, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the pre and post processing time of the model.
|
||||
- End2End latency: model’s latency in the actual inference scenario, including the pre and post processing time of the model.
|
||||
- All reported latencies are averages over 1000 inference runs, measured in milliseconds.
|
||||
- INT8 + FP16: FP16 inference is enabled in the Runtime while running the INT8 quantized model
|
||||
- INT8 + FP16 + PM: Use Pinned Memory to speed up the GPU->CPU data copy while inferring the INT8 quantization model with FP16 turned on.
|
||||
- Maximum speedup ratio: the FP32 latency divided by the fastest (lowest) INT8 latency; e.g. for ResNet50_vd on TensorRT, 3.55 ms / 0.98 ms ≈ 3.62.
|
||||
- For the quantized distillation training strategy, the quantized model is obtained with a small amount of unlabeled data, and accuracy is verified on the full validation set; the reported INT8 accuracy therefore does not represent the best achievable INT8 accuracy.
|
||||
- The CPU is Intel(R) Xeon(R) Gold 6271C, and the number of CPU threads is fixed to 1. The GPU is Tesla T4 with TensorRT version 8.4.15.
|
||||
|
||||
### Runtime Benchmark
|
||||
| 模型 |推理后端 |部署硬件 | FP32 Runtime时延 | INT8 Runtime时延 | INT8 + FP16 Runtime时延 | INT8+FP16+PM Runtime时延 | 最大加速比 | FP32 Top1 | INT8 Top1 | 量化方式 |
|
||||
| Model |Inference Backend |Deployment Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8+FP16+PM Runtime Latency | Maximum Speedup Ratio | FP32 Top1 | INT8 Top1 | Quantization Method |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 3.55 | 0.99|0.98|1.06 | 3.62 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 3.46 |None |0.87|1.03 | 3.98 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 76.14 | 35.43 |None|None | 2.15 | 79.12 | 78.87| 离线量化|
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 76.21 | 24.01 |None|None | 3.17 | 79.12 | 78.55 | 离线量化|
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 0.91 | 0.43 |0.49 | 0.54 | 2.12 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 0.88| None| 0.49|0.51 | 1.80 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 30.53 | 9.59|None|None | 3.18 |77.89 | 75.09 |离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 12.29 | 4.68 | None|None|2.62 |77.89 | 71.36 |离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 3.55 | 0.99|0.98|1.06 | 3.62 | 79.12 | 79.06 | Offline |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 3.46 |None |0.87|1.03 | 3.98 | 79.12 | 79.06 | Offline |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 76.14 | 35.43 |None|None | 2.15 | 79.12 | 78.87| Offline|
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 76.21 | 24.01 |None|None | 3.17 | 79.12 | 78.55 | Offline|
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 0.91 | 0.43 |0.49 | 0.54 | 2.12 |77.89 | 76.86 | Offline |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 0.88| None| 0.49|0.51 | 1.80 |77.89 | 76.86 | Offline |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 30.53 | 9.59|None|None | 3.18 |77.89 | 75.09 |Offline |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 12.29 | 4.68 | None|None|2.62 |77.89 | 71.36 |Offline |
|
||||
|
||||
### 端到端 Benchmark
|
||||
| 模型 |推理后端 |部署硬件 | FP32 End2End时延 | INT8 End2End时延 | INT8 + FP16 End2End时延 | INT8+FP16+PM End2End时延 | 最大加速比 | FP32 Top1 | INT8 Top1 | 量化方式 |
|
||||
### End2End Benchmark
|
||||
| Model |Inference Backend |Deployment Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8+FP16+PM End2End Latency | Maximum Speedup Ratio | FP32 Top1 | INT8 Top1 | Quantization Method |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 4.92| 2.28|2.24|2.23 | 2.21 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 4.48|None |2.09|2.10 | 2.14 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 77.43 | 41.90 |None|None | 1.85 | 79.12 | 78.87| 离线量化|
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 80.60 | 27.75 |None|None | 2.90 | 79.12 | 78.55 | 离线量化|
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 2.19 | 1.48|1.57| 1.57 | 1.48 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 2.04| None| 1.47|1.45 | 1.41 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 34.02 | 12.97|None|None | 2.62 |77.89 | 75.09 |离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 16.31 | 7.42 | None|None| 2.20 |77.89 | 71.36 |离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 4.92| 2.28|2.24|2.23 | 2.21 | 79.12 | 79.06 | Offline |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 4.48|None |2.09|2.10 | 2.14 | 79.12 | 79.06 | Offline |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 77.43 | 41.90 |None|None | 1.85 | 79.12 | 78.87| Offline|
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 80.60 | 27.75 |None|None | 2.90 | 79.12 | 78.55 | Offline|
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 2.19 | 1.48|1.57| 1.57 | 1.48 |77.89 | 76.86 | Offline |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 2.04| None| 1.47|1.45 | 1.41 |77.89 | 76.86 | Offline |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 34.02 | 12.97|None|None | 2.62 |77.89 | 75.09 |Offline |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 16.31 | 7.42 | None|None| 2.20 |77.89 | 71.36 |Offline |
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Tutorials
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
@@ -0,0 +1,51 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas 量化模型部署
|
||||
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
|
||||
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
|
||||
|
||||
## FastDeploy一键模型自动化压缩工具
|
||||
FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
|
||||
详细教程请见: [一键模型自动化压缩工具](../../../../../tools/common_tools/auto_compression/)
|
||||
注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
|
||||
|
||||
## 下载量化完成的PaddleClas模型
|
||||
用户也可以直接下载下表中的量化模型进行部署.
|
||||
|
||||
Benchmark表格说明:
|
||||
- Runtime时延为模型在各种Runtime上的推理时延,包含CPU->GPU数据拷贝,GPU推理,GPU->CPU数据拷贝时间. 不包含模型各自的前后处理时间.
|
||||
- 端到端时延为模型在实际推理场景中的时延, 包含模型的前后处理.
|
||||
- 所测时延均为推理1000次后求得的平均值, 单位是毫秒.
|
||||
- INT8 + FP16 为在推理INT8量化模型的同时, 给Runtime 开启FP16推理选项
|
||||
- INT8 + FP16 + PM, 为在推理INT8量化模型和开启FP16的同时, 开启使用Pinned Memory的选项,可加速GPU->CPU数据拷贝的速度
|
||||
- 最大加速比, 为FP32时延除以INT8推理的最快时延,得到最大加速比.
|
||||
- 策略为量化蒸馏训练时, 采用少量无标签数据集训练得到量化模型, 并在全量验证集上验证精度, INT8精度并不代表最高的INT8精度.
|
||||
- CPU为Intel(R) Xeon(R) Gold 6271C, 所有测试中固定CPU线程数为1. GPU为Tesla T4, TensorRT版本8.4.15.
|
||||
|
||||
### Runtime Benchmark
|
||||
| 模型 |推理后端 |部署硬件 | FP32 Runtime时延 | INT8 Runtime时延 | INT8 + FP16 Runtime时延 | INT8+FP16+PM Runtime时延 | 最大加速比 | FP32 Top1 | INT8 Top1 | 量化方式 |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 3.55 | 0.99|0.98|1.06 | 3.62 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 3.46 |None |0.87|1.03 | 3.98 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 76.14 | 35.43 |None|None | 2.15 | 79.12 | 78.87| 离线量化|
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 76.21 | 24.01 |None|None | 3.17 | 79.12 | 78.55 | 离线量化|
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 0.91 | 0.43 |0.49 | 0.54 | 2.12 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 0.88| None| 0.49|0.51 | 1.80 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 30.53 | 9.59|None|None | 3.18 |77.89 | 75.09 |离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 12.29 | 4.68 | None|None|2.62 |77.89 | 71.36 |离线量化 |
|
||||
|
||||
### 端到端 Benchmark
|
||||
| 模型 |推理后端 |部署硬件 | FP32 End2End时延 | INT8 End2End时延 | INT8 + FP16 End2End时延 | INT8+FP16+PM End2End时延 | 最大加速比 | FP32 Top1 | INT8 Top1 | 量化方式 |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | TensorRT | GPU | 4.92| 2.28|2.24|2.23 | 2.21 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle-TensorRT | GPU | 4.48|None |2.09|2.10 | 2.14 | 79.12 | 79.06 | 离线量化 |
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | ONNX Runtime | CPU | 77.43 | 41.90 |None|None | 1.85 | 79.12 | 78.87| 离线量化|
|
||||
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar) | Paddle Inference | CPU | 80.60 | 27.75 |None|None | 2.90 | 79.12 | 78.55 | 离线量化|
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | TensorRT | GPU | 2.19 | 1.48|1.57| 1.57 | 1.48 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle-TensorRT | GPU | 2.04| None| 1.47|1.45 | 1.41 |77.89 | 76.86 | 离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | ONNX Runtime | CPU | 34.02 | 12.97|None|None | 2.62 |77.89 | 75.09 |离线量化 |
|
||||
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/mobilenetv1_ssld_ptq.tar) | Paddle Inference | CPU | 16.31 | 7.42 | None|None| 2.20 |77.89 | 71.36 |离线量化 |
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
@@ -1,18 +1,19 @@
|
||||
# PaddleClas 模型RKNPU2部署
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleClas Model RKNPU2 Deployment
|
||||
|
||||
## 转换模型
|
||||
下面以 ResNet50_vd为例子,教大家如何转换分类模型到RKNN模型。
|
||||
## Convert the model
|
||||
Taking ResNet50_vd as an example, this document demonstrates how to convert a classification model to an RKNN model.
|
||||
|
||||
### 导出ONNX模型
|
||||
### Export the ONNX model
|
||||
```bash
|
||||
# 安装 paddle2onnx
|
||||
# Install paddle2onnx
|
||||
pip install paddle2onnx
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
# Download ResNet50_vd model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
|
||||
# 静态图转ONNX模型,注意,这里的save_file请和压缩包名对齐
|
||||
# Convert the static graph to an ONNX model. Note: keep save_file consistent with the archive name
|
||||
paddle2onnx --model_dir ResNet50_vd_infer \
|
||||
--model_filename inference.pdmodel \
|
||||
--params_filename inference.pdiparams \
|
||||
@@ -21,16 +22,16 @@ paddle2onnx --model_dir ResNet50_vd_infer \
|
||||
--opset_version 10 \
|
||||
--enable_onnx_checker True
|
||||
|
||||
# 固定shape,注意这里的inputs得对应netron.app展示的 inputs 的 name,有可能是image 或者 x
|
||||
# Fix the input shape. Note: the key 'inputs' must match the input name shown in netron.app, which may be image or x
|
||||
python -m paddle2onnx.optimize --input_model ResNet50_vd_infer/ResNet50_vd_infer.onnx \
|
||||
--output_model ResNet50_vd_infer/ResNet50_vd_infer.onnx \
|
||||
--input_shape_dict "{'inputs':[1,3,224,224]}"
|
||||
```
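If you would rather not open netron.app, the following optional Python helper (an addition for illustration; it assumes the `onnx` package is installed) lists the graph's real input names so you know which key to use in `--input_shape_dict`:

```python
import onnx

model = onnx.load("ResNet50_vd_infer/ResNet50_vd_infer.onnx")
# Some exporters also list weights in graph.input, so filter them out first.
initializers = {t.name for t in model.graph.initializer}
for inp in model.graph.input:
    if inp.name not in initializers:
        print(inp.name)  # e.g. "inputs", "image" or "x"
```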
|
||||
|
||||
### 编写模型导出配置文件
|
||||
以转化RK3588的RKNN模型为例子,我们需要编辑tools/rknpu2/config/ResNet50_vd_infer_rknn.yaml,来转换ONNX模型到RKNN模型。
|
||||
### Write the model export configuration file
|
||||
Taking the RK3588 RKNN model as an example, we need to edit tools/rknpu2/config/ResNet50_vd_infer_rknn.yaml to convert the ONNX model to an RKNN model.
|
||||
|
||||
如果你需要在NPU上执行normalize操作,请根据你的模型配置normalize参数,例如:
|
||||
If you need to perform the normalize operation on NPU, configure the normalize parameters based on your model. For example:
|
||||
```yaml
|
||||
model_path: ./ResNet50_vd_infer/ResNet50_vd_infer.onnx
|
||||
output_folder: ./ResNet50_vd_infer
|
||||
@@ -49,7 +50,7 @@ do_quantization: False
|
||||
dataset: "./ResNet50_vd_infer/dataset.txt"
|
||||
```
|
||||
|
||||
**在CPU上做normalize**可以参考以下yaml:
|
||||
To **normalize on CPU**, refer to the following yaml:
|
||||
```yaml
|
||||
model_path: ./ResNet50_vd_infer/ResNet50_vd_infer.onnx
|
||||
output_folder: ./ResNet50_vd_infer
|
||||
@@ -67,17 +68,17 @@ outputs_nodes:
|
||||
do_quantization: False
|
||||
dataset: "./ResNet50_vd_infer/dataset.txt"
|
||||
```
|
||||
这里我们选择在NPU上执行normalize操作.
|
||||
In this example we choose to perform the normalize operation on the NPU.
|
||||
|
||||
|
||||
### ONNX模型转RKNN模型
|
||||
### From ONNX model to RKNN model
|
||||
```shell
|
||||
python tools/rknpu2/export.py \
|
||||
--config_path tools/rknpu2/config/ResNet50_vd_infer_rknn.yaml \
|
||||
--target_platform rk3588
|
||||
```
|
||||
|
||||
## 其他链接
|
||||
- [Cpp部署](./cpp)
|
||||
- [Python部署](./python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
## Other Links
|
||||
- [Cpp Deployment](./cpp)
|
||||
- [Python Deployment](./python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
|
@@ -0,0 +1,84 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas 模型RKNPU2部署
|
||||
|
||||
## 转换模型
|
||||
下面以 ResNet50_vd为例子,教大家如何转换分类模型到RKNN模型。
|
||||
|
||||
### 导出ONNX模型
|
||||
```bash
|
||||
# 安装 paddle2onnx
|
||||
pip install paddle2onnx
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
|
||||
# 静态图转ONNX模型,注意,这里的save_file请和压缩包名对齐
|
||||
paddle2onnx --model_dir ResNet50_vd_infer \
|
||||
--model_filename inference.pdmodel \
|
||||
--params_filename inference.pdiparams \
|
||||
--save_file ResNet50_vd_infer/ResNet50_vd_infer.onnx \
|
||||
--enable_dev_version True \
|
||||
--opset_version 10 \
|
||||
--enable_onnx_checker True
|
||||
|
||||
# 固定shape,注意这里的inputs得对应netron.app展示的 inputs 的 name,有可能是image 或者 x
|
||||
python -m paddle2onnx.optimize --input_model ResNet50_vd_infer/ResNet50_vd_infer.onnx \
|
||||
--output_model ResNet50_vd_infer/ResNet50_vd_infer.onnx \
|
||||
--input_shape_dict "{'inputs':[1,3,224,224]}"
|
||||
```
|
||||
|
||||
### 编写模型导出配置文件
|
||||
以转化RK3588的RKNN模型为例子,我们需要编辑tools/rknpu2/config/ResNet50_vd_infer_rknn.yaml,来转换ONNX模型到RKNN模型。
|
||||
|
||||
如果你需要在NPU上执行normalize操作,请根据你的模型配置normalize参数,例如:
|
||||
```yaml
|
||||
model_path: ./ResNet50_vd_infer/ResNet50_vd_infer.onnx
|
||||
output_folder: ./ResNet50_vd_infer
|
||||
mean:
|
||||
-
|
||||
- 123.675
|
||||
- 116.28
|
||||
- 103.53
|
||||
std:
|
||||
-
|
||||
- 58.395
|
||||
- 57.12
|
||||
- 57.375
|
||||
outputs_nodes:
|
||||
do_quantization: False
|
||||
dataset: "./ResNet50_vd_infer/dataset.txt"
|
||||
```
|
||||
|
||||
**在CPU上做normalize**可以参考以下yaml:
|
||||
```yaml
|
||||
model_path: ./ResNet50_vd_infer/ResNet50_vd_infer.onnx
|
||||
output_folder: ./ResNet50_vd_infer
|
||||
mean:
|
||||
-
|
||||
- 0
|
||||
- 0
|
||||
- 0
|
||||
std:
|
||||
-
|
||||
- 1
|
||||
- 1
|
||||
- 1
|
||||
outputs_nodes:
|
||||
do_quantization: False
|
||||
dataset: "./ResNet50_vd_infer/dataset.txt"
|
||||
```
|
||||
这里我们选择在NPU上执行normalize操作.
|
||||
|
||||
|
||||
### ONNX模型转RKNN模型
|
||||
```shell
|
||||
python tools/rknpu2/export.py \
|
||||
--config_path tools/rknpu2/config/ResNet50_vd_infer_rknn.yaml \
|
||||
--target_platform rk3588
|
||||
```
|
||||
|
||||
## 其他链接
|
||||
- [Cpp部署](./cpp)
|
||||
- [Python部署](./python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
@@ -1,11 +1,12 @@
|
||||
# PaddleClas 量化模型在 RV1126 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PaddleClas 量化模型到 RV1126 上。
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleClas Quantized Model Deployment on RV1126
|
||||
FastDeploy currently supports deploying PaddleClas quantized models on RV1126 based on Paddle Lite.
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Tutorials
|
||||
|
||||
在 RV1126 上只支持 C++ 的部署。
|
||||
Only C++ deployment is supported on RV1126.
|
||||
|
||||
- [C++部署](cpp)
|
||||
- [C++ Deployment](cpp)
|
||||
|
@@ -0,0 +1,12 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas 量化模型在 RV1126 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PaddleClas 量化模型到 RV1126 上。
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
在 RV1126 上只支持 C++ 的部署。
|
||||
|
||||
- [C++部署](cpp)
|
@@ -1,50 +1,51 @@
|
||||
# PaddleClas 服务化部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleClas Service Deployment Example
|
||||
|
||||
在服务化部署前,需确认
|
||||
Before the service deployment, please confirm
|
||||
|
||||
- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
|
||||
- 1. Refer to [FastDeploy Service Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
|
||||
|
||||
|
||||
## 启动服务
|
||||
## Start the Service
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download the example code for deployment
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/classification/paddleclas/serving
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
# Download ResNet50_vd model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# 将配置文件放入预处理目录
|
||||
# Put the configuration file into the preprocessing directory
|
||||
mv ResNet50_vd_infer/inference_cls.yaml models/preprocess/1/inference_cls.yaml
|
||||
|
||||
# 将模型放入 models/runtime/1目录下, 并重命名为model.pdmodel和model.pdiparams
|
||||
# Place the model files under models/runtime/1 and rename them to model.pdmodel and model.pdiparams
|
||||
mv ResNet50_vd_infer/inference.pdmodel models/runtime/1/model.pdmodel
|
||||
mv ResNet50_vd_infer/inference.pdiparams models/runtime/1/model.pdiparams
|
||||
|
||||
# 拉取fastdeploy镜像(x.y.z为镜像版本号,需参照serving文档替换为数字)
|
||||
# GPU镜像
|
||||
# Pull the fastdeploy image (x.y.z is the image version; replace it with an actual version number according to the serving documentation)
|
||||
# GPU image
|
||||
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
|
||||
# CPU镜像
|
||||
# CPU image
|
||||
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10
|
||||
|
||||
# 运行容器.容器名字为 fd_serving, 并挂载当前目录为容器的 /serving 目录
|
||||
# Run a container named fd_serving and mount the current directory to the container's /serving directory
|
||||
nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash
|
||||
|
||||
# 启动服务(不设置CUDA_VISIBLE_DEVICES环境变量,会拥有所有GPU卡的调度权限)
|
||||
# Start the service (if the CUDA_VISIBLE_DEVICES environment variable is not set, the service can schedule all GPU cards)
|
||||
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --backend-config=python,shm-default-byte-size=10485760
|
||||
```
|
||||
>> **注意**:
|
||||
>> **Attention**:
|
||||
|
||||
>> 拉取其他硬件上的镜像请看[服务化部署主文档](../../../../../serving/README_CN.md)
|
||||
>> To pull images for other hardware, refer to the [Serving Deployment main document](../../../../../serving/README_CN.md)
|
||||
|
||||
>> 执行fastdeployserver启动服务出现"Address already in use", 请使用`--grpc-port`指定端口号来启动服务,同时更改客户端示例中的请求端口号.
|
||||
>> If "Address already in use" appears when running fastdeployserver to start the service, use `--grpc-port` to specify the port number and change the request port number in the client demo.
|
||||
|
||||
>> 其他启动参数可以使用 fastdeployserver --help 查看
|
||||
>> Other startup parameters can be checked by fastdeployserver --help
|
||||
|
||||
服务启动成功后, 会有以下输出:
|
||||
After the service starts successfully, the following output is printed:
|
||||
```
|
||||
......
|
||||
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
|
||||
@@ -53,26 +54,26 @@ I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0
|
||||
```
|
||||
|
||||
|
||||
## 客户端请求
|
||||
## Client Request
|
||||
|
||||
在物理机器中执行以下命令,发送grpc请求并输出结果
|
||||
Execute the following commands on the host machine to send a gRPC request and print the result
|
||||
```
|
||||
#下载测试图片
|
||||
# Download test images
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
#安装客户端依赖
|
||||
# Install client dependencies
|
||||
python3 -m pip install tritonclient\[all\]
|
||||
|
||||
# 发送请求
|
||||
# Send the request
|
||||
python3 paddlecls_grpc_client.py
|
||||
```
|
||||
|
||||
发送请求成功后,会返回json格式的检测结果并打印输出:
|
||||
After the request succeeds, the result is returned in JSON format and printed:
|
||||
```
|
||||
output_name: CLAS_RESULT
|
||||
{'label_ids': [153], 'scores': [0.6862289905548096]}
|
||||
```
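For reference, the request issued by `paddlecls_grpc_client.py` roughly follows the hypothetical sketch below. The input tensor name `INPUT`, its datatype, and the model name `paddlecls` are assumptions for illustration only; check the actual client script and `models/*/config.pbtxt` for the real names. Only the server address and the output name `CLAS_RESULT` come from the output shown above.

```python
import cv2
import numpy as np
import tritonclient.grpc as grpcclient

# 8001 is the gRPC port reported in the server log above.
client = grpcclient.InferenceServerClient("localhost:8001")

im = cv2.imread("ILSVRC2012_val_00000010.jpeg")
data = np.expand_dims(im, axis=0).astype(np.uint8)  # assumed input layout: [1, H, W, 3]

# "INPUT" and "paddlecls" are placeholders; read the real tensor and model names
# from paddlecls_grpc_client.py and models/*/config.pbtxt before running this.
inputs = [grpcclient.InferInput("INPUT", list(data.shape), "UINT8")]
inputs[0].set_data_from_numpy(data)
outputs = [grpcclient.InferRequestedOutput("CLAS_RESULT")]

response = client.infer(model_name="paddlecls", inputs=inputs, outputs=outputs)
print(response.as_numpy("CLAS_RESULT"))
```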
|
||||
|
||||
## 配置修改
|
||||
## Configuration Change
|
||||
|
||||
当前默认配置在GPU上运行TensorRT引擎, 如果要在CPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
|
||||
The current default configuration runs the TensorRT engine on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for more information.
|
||||
|
@@ -0,0 +1,79 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleClas 服务化部署示例
|
||||
|
||||
在服务化部署前,需确认
|
||||
|
||||
- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
|
||||
|
||||
|
||||
## 启动服务
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/classification/paddleclas/serving
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
|
||||
tar -xvf ResNet50_vd_infer.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# 将配置文件放入预处理目录
|
||||
mv ResNet50_vd_infer/inference_cls.yaml models/preprocess/1/inference_cls.yaml
|
||||
|
||||
# 将模型放入 models/runtime/1目录下, 并重命名为model.pdmodel和model.pdiparams
|
||||
mv ResNet50_vd_infer/inference.pdmodel models/runtime/1/model.pdmodel
|
||||
mv ResNet50_vd_infer/inference.pdiparams models/runtime/1/model.pdiparams
|
||||
|
||||
# 拉取fastdeploy镜像(x.y.z为镜像版本号,需参照serving文档替换为数字)
|
||||
# GPU镜像
|
||||
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
|
||||
# CPU镜像
|
||||
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10
|
||||
|
||||
# 运行容器.容器名字为 fd_serving, 并挂载当前目录为容器的 /serving 目录
|
||||
nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash
|
||||
|
||||
# 启动服务(不设置CUDA_VISIBLE_DEVICES环境变量,会拥有所有GPU卡的调度权限)
|
||||
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --backend-config=python,shm-default-byte-size=10485760
|
||||
```
|
||||
>> **注意**:
|
||||
|
||||
>> 拉取其他硬件上的镜像请看[服务化部署主文档](../../../../../serving/README_CN.md)
|
||||
|
||||
>> 执行fastdeployserver启动服务出现"Address already in use", 请使用`--grpc-port`指定端口号来启动服务,同时更改客户端示例中的请求端口号.
|
||||
|
||||
>> 其他启动参数可以使用 fastdeployserver --help 查看
|
||||
|
||||
服务启动成功后, 会有以下输出:
|
||||
```
|
||||
......
|
||||
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
|
||||
I0928 04:51:15.785177 206 http_server.cc:2815] Started HTTPService at 0.0.0.0:8000
|
||||
I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
|
||||
```
|
||||
|
||||
|
||||
## 客户端请求
|
||||
|
||||
在物理机器中执行以下命令,发送grpc请求并输出结果
|
||||
```
|
||||
#下载测试图片
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
#安装客户端依赖
|
||||
python3 -m pip install tritonclient\[all\]
|
||||
|
||||
# 发送请求
|
||||
python3 paddlecls_grpc_client.py
|
||||
```
|
||||
|
||||
发送请求成功后,会返回json格式的检测结果并打印输出:
|
||||
```
|
||||
output_name: CLAS_RESULT
|
||||
{'label_ids': [153], 'scores': [0.6862289905548096]}
|
||||
```
|
||||
|
||||
## 配置修改
|
||||
|
||||
当前默认配置在GPU上运行TensorRT引擎, 如果要在CPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
|
@@ -1,36 +1,35 @@
|
||||
English | [简体中文](README_CN.md)
|
||||
# MobileNet Front-end Deployment Example
|
||||
|
||||
# MobileNet 前端部署示例
|
||||
|
||||
本节介绍部署PaddleClas的图像分类mobilenet模型在浏览器中运行,以及@paddle-js-models/mobilenet npm包中的js接口。
|
||||
This document describes how to run PaddleClas MobileNet image classification models in the browser, and the JS interfaces provided by the @paddle-js-models/mobilenet npm package.
|
||||
|
||||
|
||||
## 前端部署图像分类模型
|
||||
## Front-end Deployment of Image Classification Model
|
||||
|
||||
图像分类模型web demo使用[**参考文档**](../../../../application/js/web_demo/)
|
||||
To use the web demo of image classification models, refer to [**Reference Document**](../../../../application/js/web_demo/)
|
||||
|
||||
|
||||
## MobileNet js接口
|
||||
## MobileNet js Interface
|
||||
|
||||
```
|
||||
import * as mobilenet from "@paddle-js-models/mobilenet";
|
||||
# mobilenet模型加载和初始化
|
||||
# mobilenet model loading and initialization
|
||||
await mobilenet.load()
|
||||
# mobilenet模型执行预测,并获得分类的类别
|
||||
# mobilenet model performs the prediction and obtains the classification result
|
||||
const res = await mobilenet.classify(img);
|
||||
console.log(res);
|
||||
```
|
||||
|
||||
**load()函数参数**
|
||||
**load() function parameter**
|
||||
|
||||
> * **Config**(dict): 图像分类模型配置参数,默认值为 {Path: 'https://paddlejs.bj.bcebos.com/models/fuse/mobilenet/mobileNetV2_fuse_activation/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; 其中,modelPath为js模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差。
|
||||
> * **Config**(dict): The configuration parameter for the image classification model. Default {Path: 'https://paddlejs.bj.bcebos.com/models/fuse/mobilenet/mobileNetV2_fuse_activation/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; Among them, modelPath is the path of the js model, fill is the padding value in the image pre-processing, and mean/std are the mean and standard deviation in the pre-processing
|
||||
|
||||
|
||||
**classify()函数参数**
|
||||
> * **img**(HTMLImageElement): 输入图像参数,类型为HTMLImageElement。
|
||||
**classify() function parameter**
|
||||
> * **img**(HTMLImageElement): The input image, of type HTMLImageElement.
|
||||
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [PaddleClas模型 python部署](../../paddleclas/python/)
|
||||
- [PaddleClas模型 C++部署](../cpp/)
|
||||
- [PaddleClas model python deployment](../../paddleclas/python/)
|
||||
- [PaddleClas model C++ deployment](../cpp/)
|
||||
|
examples/vision/classification/paddleclas/web/README_CN.md (new file, 36 lines)
@@ -0,0 +1,36 @@
|
||||
[English](README.md) | 简体中文
|
||||
# MobileNet 前端部署示例
|
||||
|
||||
本节介绍部署PaddleClas的图像分类mobilenet模型在浏览器中运行,以及@paddle-js-models/mobilenet npm包中的js接口。
|
||||
|
||||
|
||||
## 前端部署图像分类模型
|
||||
|
||||
图像分类模型web demo使用[**参考文档**](../../../../application/js/web_demo/)
|
||||
|
||||
|
||||
## MobileNet js接口
|
||||
|
||||
```
|
||||
import * as mobilenet from "@paddle-js-models/mobilenet";
|
||||
# mobilenet模型加载和初始化
|
||||
await mobilenet.load()
|
||||
# mobilenet模型执行预测,并获得分类的类别
|
||||
const res = await mobilenet.classify(img);
|
||||
console.log(res);
|
||||
```
|
||||
|
||||
**load()函数参数**
|
||||
|
||||
> * **Config**(dict): 图像分类模型配置参数,默认值为 {Path: 'https://paddlejs.bj.bcebos.com/models/fuse/mobilenet/mobileNetV2_fuse_activation/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; 其中,modelPath为js模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差。
|
||||
|
||||
|
||||
**classify()函数参数**
|
||||
> * **img**(HTMLImageElement): 输入图像参数,类型为HTMLImageElement。
|
||||
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [PaddleClas模型 python部署](../../paddleclas/python/)
|
||||
- [PaddleClas模型 C++部署](../cpp/)
|
@@ -1,41 +1,42 @@
|
||||
# ResNet准备部署模型
|
||||
English | [简体中文](README_CN.md)
|
||||
# ResNet Ready-to-deploy Model
|
||||
|
||||
- ResNet部署实现来自[Torchvision](https://github.com/pytorch/vision/tree/v0.12.0)的代码,和[基于ImageNet2012的预训练模型](https://github.com/pytorch/vision/tree/v0.12.0)。
|
||||
- ResNet deployment is based on the code of [Torchvision](https://github.com/pytorch/vision/tree/v0.12.0) and the [pre-trained models on ImageNet2012](https://github.com/pytorch/vision/tree/v0.12.0).
|
||||
|
||||
- (1)[官方库](https://github.com/pytorch/vision/tree/v0.12.0)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
|
||||
- (2)自己数据训练的ResNet模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
- (1) The *.pt models provided by the [official repository](https://github.com/pytorch/vision/tree/v0.12.0) can be deployed after the [Export the ONNX Model](#导出ONNX模型) step;
- (2) ResNet models trained on your own data should follow [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B), and then refer to the [Detailed Deployment Tutorials](#详细部署文档) to complete the deployment.
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
## Export the ONNX Model
|
||||
|
||||
|
||||
导入[Torchvision](https://github.com/pytorch/vision/tree/v0.12.0),加载预训练模型,并进行模型转换,具体转换步骤如下。
|
||||
Import [Torchvision](https://github.com/pytorch/vision/tree/v0.12.0), load the pre-trained model, and convert it as shown in the following steps.
|
||||
|
||||
```python
|
||||
import torch
|
||||
import torchvision.models as models
|
||||
|
||||
model = models.resnet50(pretrained=True)
|
||||
batch_size = 1 #批处理大小
|
||||
input_shape = (3, 224, 224) #输入数据,改成自己的输入shape
|
||||
batch_size = 1 #Batch size
|
||||
input_shape = (3, 224, 224) # Input shape; change it to your own input shape
|
||||
# Set the model to inference mode
|
||||
model.eval()
|
||||
x = torch.randn(batch_size, *input_shape) # 生成张量
|
||||
export_onnx_file = "ResNet50.onnx" # 目的ONNX文件名
|
||||
x = torch.randn(batch_size, *input_shape) # Generate tensor
|
||||
export_onnx_file = "ResNet50.onnx" # Purpose ONNX file name
|
||||
torch.onnx.export(model,
|
||||
x,
|
||||
export_onnx_file,
|
||||
opset_version=12,
|
||||
input_names=["input"], # 输入名
|
||||
output_names=["output"], # 输出名
|
||||
dynamic_axes={"input":{0:"batch_size"}, # 批处理变量
|
||||
input_names=["input"], # Input name
|
||||
output_names=["output"], # Output name
|
||||
dynamic_axes={"input":{0:"batch_size"}, # Batch variables
|
||||
"output":{0:"batch_size"}})
|
||||
```
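As an optional sanity check that is not part of the original export steps (it assumes the `onnx` and `onnxruntime` packages are installed), the exported file can be validated and run once before deployment:

```python
import numpy as np
import onnx
import onnxruntime as ort

onnx.checker.check_model(onnx.load("ResNet50.onnx"))  # structural validation

sess = ort.InferenceSession("ResNet50.onnx")
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)
(logits,) = sess.run(["output"], {"input": dummy})    # names match the export code above
print(logits.shape)                                    # expected: (1, 1000)
```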
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
## Download Pre-trained ONNX Model
|
||||
|
||||
为了方便开发者的测试,下面提供了ResNet导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 |
|
||||
For developers' convenience, the exported ResNet models are provided below and can be downloaded directly. (The accuracy in the table below comes from the official source repository.)
|
||||
| Model | Size | Accuracy |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |
|
||||
| [ResNet-18](https://bj.bcebos.com/paddlehub/fastdeploy/resnet18.onnx) | 45MB | |
|
||||
| [ResNet-34](https://bj.bcebos.com/paddlehub/fastdeploy/resnet34.onnx) | 84MB | |
|
||||
@@ -43,11 +44,11 @@
|
||||
| [ResNet-101](https://bj.bcebos.com/paddlehub/fastdeploy/resnet101.onnx) | 170MB | |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
||||
## 版本说明
|
||||
## Release Note
|
||||
|
||||
- 本版本文档和代码基于[Torchvision v0.12.0](https://github.com/pytorch/vision/tree/v0.12.0) 编写
|
||||
- Document and code are based on [Torchvision v0.12.0](https://github.com/pytorch/vision/tree/v0.12.0)
|
||||
|
examples/vision/classification/resnet/README_CN.md (new file, 54 lines)
@@ -0,0 +1,54 @@
|
||||
[English](README.md) | 简体中文
|
||||
# ResNet准备部署模型
|
||||
|
||||
- ResNet部署实现来自[Torchvision](https://github.com/pytorch/vision/tree/v0.12.0)的代码,和[基于ImageNet2012的预训练模型](https://github.com/pytorch/vision/tree/v0.12.0)。
|
||||
|
||||
- (1)[官方库](https://github.com/pytorch/vision/tree/v0.12.0)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
|
||||
- (2)自己数据训练的ResNet模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
|
||||
|
||||
导入[Torchvision](https://github.com/pytorch/vision/tree/v0.12.0),加载预训练模型,并进行模型转换,具体转换步骤如下。
|
||||
|
||||
```python
|
||||
import torch
|
||||
import torchvision.models as models
|
||||
|
||||
model = models.resnet50(pretrained=True)
|
||||
batch_size = 1 #批处理大小
|
||||
input_shape = (3, 224, 224) #输入数据,改成自己的输入shape
|
||||
# #set the model to inference mode
|
||||
model.eval()
|
||||
x = torch.randn(batch_size, *input_shape) # 生成张量
|
||||
export_onnx_file = "ResNet50.onnx" # 目的ONNX文件名
|
||||
torch.onnx.export(model,
|
||||
x,
|
||||
export_onnx_file,
|
||||
opset_version=12,
|
||||
input_names=["input"], # 输入名
|
||||
output_names=["output"], # 输出名
|
||||
dynamic_axes={"input":{0:"batch_size"}, # 批处理变量
|
||||
"output":{0:"batch_size"}})
|
||||
```
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
|
||||
为了方便开发者的测试,下面提供了ResNet导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |
|
||||
| [ResNet-18](https://bj.bcebos.com/paddlehub/fastdeploy/resnet18.onnx) | 45MB | |
|
||||
| [ResNet-34](https://bj.bcebos.com/paddlehub/fastdeploy/resnet34.onnx) | 84MB | |
|
||||
| [ResNet-50](https://bj.bcebos.com/paddlehub/fastdeploy/resnet50.onnx) | 98MB | |
|
||||
| [ResNet-101](https://bj.bcebos.com/paddlehub/fastdeploy/resnet101.onnx) | 170MB | |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
|
||||
## 版本说明
|
||||
|
||||
- 本版本文档和代码基于[Torchvision v0.12.0](https://github.com/pytorch/vision/tree/v0.12.0) 编写
|
@@ -1,42 +1,43 @@
|
||||
# ResNet C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# ResNet C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成ResNet系列模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides `infer.cc`, a quick example of deploying ResNet models on CPU/GPU, or on GPU with TensorRT acceleration.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps.
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上 ResNet50 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking ResNet50 inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载ResNet模型文件和测试图片
|
||||
# Download the ResNet50 model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo resnet50.onnx ILSVRC2012_val_00000010.jpeg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo resnet50.onnx ILSVRC2012_val_00000010.jpeg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT Inference on GPU
|
||||
./infer_demo resnet50.onnx ILSVRC2012_val_00000010.jpeg 2
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands work on Linux or MacOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## ResNet C++接口
|
||||
## ResNet C++ Interface
|
||||
|
||||
### ResNet类
|
||||
### ResNet Class
|
||||
|
||||
```c++
|
||||
|
||||
@@ -48,29 +49,29 @@ fastdeploy::vision::classification::ResNet(
|
||||
```
|
||||
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk = 1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input images and output results directly.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 分类结果,包括label_id,以及相应的置信度, ClassifyResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
> > * **im**: Input image. Note that it must be in HWC layout, BGR format
|
||||
> > * **result**: The classification result, including label_id, and the corresponding confidence. Refer to [Visual Model Prediction Results](../../../../../docs/api/vision_results/) for the description of ClassifyResult
|
||||
> > * **topk**(int): Return the topk classification results with the highest prediction probability. Default 1
|
||||
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/classification/resnet/cpp/README_CN.md (new file, 77 lines)
@@ -0,0 +1,77 @@
|
||||
[English](README.md) | 简体中文
|
||||
# ResNet C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成ResNet系列模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上 ResNet50 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载ResNet模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo resnet50.onnx ILSVRC2012_val_00000010.jpeg 0
|
||||
# GPU推理
|
||||
./infer_demo resnet50.onnx ILSVRC2012_val_00000010.jpeg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo resnet50.onnx ILSVRC2012_val_00000010.jpeg 2
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## ResNet C++接口
|
||||
|
||||
### ResNet类
|
||||
|
||||
```c++
|
||||
|
||||
fastdeploy::vision::classification::ResNet(
|
||||
const std::string& model_file,
|
||||
const std::string& params_file = "",
|
||||
const RuntimeOption& custom_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk = 1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 分类结果,包括label_id,以及相应的置信度, ClassifyResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,30 +1,31 @@
|
||||
# ResNet模型 Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# ResNet Model Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成ResNet50_vd在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
This directory provides an example in which `infer.py` quickly finishes the deployment of ResNet50_vd on CPU/GPU, as well as on GPU accelerated by TensorRT. Run the following script to complete the deployment
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download deployment example code
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/classification/resnet/python
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
# Download the ResNet50_vd model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer.py --model resnet50.onnx --image ILSVRC2012_val_00000010.jpeg --device cpu --topk 1
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer.py --model resnet50.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --topk 1
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
# Use TensorRT inference on GPU (Attention: It is somewhat time-consuming for the operation of model serialization when running TensorRT inference for the first time. Please be patient.)
|
||||
python infer.py --model resnet50.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True --topk 1
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
The result returned after running is as follows
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 332,
|
||||
@@ -32,41 +33,41 @@ scores: 0.825349,
|
||||
)
|
||||
```
|
||||
|
||||
## ResNet Python接口
|
||||
## ResNet Python Interface
|
||||
|
||||
```python
|
||||
fd.vision.classification.ResNet(model_file, params_file, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
### predict函数
|
||||
### predict Function
|
||||
|
||||
> ```python
|
||||
> ResNet.predict(input_image, topk=1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类结果。
|
||||
> Model prediction interface. Takes an image as input and directly returns the classification result.
|
||||
>
|
||||
> **参数**
|
||||
> **parameter**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
> > * **input_image**(np.ndarray): Input data. Note that it must be in HWC layout, BGR format
|
||||
> > * **topk**(int): Return the topk classification results with the highest prediction probability. Default 1
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.ClassifyResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
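
Putting the constructor and `predict` described above together, a minimal usage sketch might look like the following (the image path and the empty `params_file` for the ONNX model are illustrative assumptions, not part of the original example):

```python
import cv2
import fastdeploy as fd

# Load the exported ONNX model; params_file is assumed empty for ONNX models.
model = fd.vision.classification.ResNet("resnet50.onnx", "")

# predict() expects an HWC, BGR np.ndarray, which cv2.imread provides.
im = cv2.imread("ILSVRC2012_val_00000010.jpeg")

# Return the top-5 classification results.
result = model.predict(im, topk=5)
print(result)  # fastdeploy.vision.ClassifyResult with label_ids and scores
```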
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [ResNet 模型介绍](..)
|
||||
- [ResNet C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [ResNet Model Description](..)
|
||||
- [ResNet C++ Deployment](../cpp)
|
||||
- [Model prediction results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/classification/resnet/python/README_CN.md (new file, 73 lines)
@@ -0,0 +1,73 @@
|
||||
[English](README.md) | 简体中文
|
||||
# ResNet模型 Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成ResNet50_vd在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/classification/resnet/python
|
||||
|
||||
# 下载ResNet50_vd模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model resnet50.onnx --image ILSVRC2012_val_00000010.jpeg --device cpu --topk 1
|
||||
# GPU推理
|
||||
python infer.py --model resnet50.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --topk 1
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer.py --model resnet50.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True --topk 1
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 332,
|
||||
scores: 0.825349,
|
||||
)
|
||||
```
|
||||
|
||||
## ResNet Python接口
|
||||
|
||||
```python
|
||||
fd.vision.classification.ResNet(model_file, params_file, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> ResNet.predict(input_image, topk=1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [ResNet 模型介绍](..)
|
||||
- [ResNet C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,14 +1,16 @@
|
||||
# YOLOv5Cls准备部署模型
|
||||
English | [简体中文](README_CN.md)
|
||||
|
||||
- YOLOv5Cls v6.2部署模型实现来自[YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2),和[基于ImageNet的预训练模型](https://github.com/ultralytics/yolov5/releases/tag/v6.2)
|
||||
- (1)[官方库](https://github.com/ultralytics/yolov5/releases/tag/v6.2)提供的*-cls.pt模型,使用[YOLOv5](https://github.com/ultralytics/yolov5)中的`export.py`导出ONNX文件后,可直接进行部署;
|
||||
- (2)开发者基于自己数据训练的YOLOv5Cls v6.2模型,可使用[YOLOv5](https://github.com/ultralytics/yolov5)中的`export.py`导出ONNX文件后,完成部署。
|
||||
# YOLOv5Cls Ready-to-deploy Model
|
||||
|
||||
- YOLOv5Cls v6.2 model deployment is based on [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2) and [Pre-trained Models on ImageNet](https://github.com/ultralytics/yolov5/releases/tag/v6.2)
|
||||
- (1) The *-cls.pt models provided by the [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v6.2) can be deployed directly after exporting the ONNX file with `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5);
- (2) YOLOv5Cls v6.2 models trained on your own data can be deployed after exporting the ONNX file with `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5).
|
||||
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
## Download Pre-trained ONNX Model
|
||||
|
||||
为了方便开发者的测试,下面提供了YOLOv5Cls导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度(top1) | 精度(top5) |
|
||||
For developers' testing, models exported by YOLOv5Cls are provided below. Developers can download them directly. (The model accuracy in the following table is derived from the source official repository)
|
||||
| Model | Size | Accuracy(top1) | Accuracy(top5) |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [YOLOv5n-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n-cls.onnx) | 9.6MB | 64.6% | 85.4% |
|
||||
| [YOLOv5s-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s-cls.onnx) | 21MB | 71.5% | 90.2% |
|
||||
@@ -17,11 +19,11 @@
|
||||
| [YOLOv5x-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5x-cls.onnx) | 184MB | 79.0% | 94.4% |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
||||
## 版本说明
|
||||
## Release Note
|
||||
|
||||
- 本版本文档和代码基于[YOLOv5 v6.2](https://github.com/ultralytics/yolov5/tree/v6.2) 编写
|
||||
- Document and code are based on [YOLOv5 v6.2](https://github.com/ultralytics/yolov5/tree/v6.2).
|
||||
|
examples/vision/classification/yolov5cls/README_CN.md (new file, 28 lines)
@@ -0,0 +1,28 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOv5Cls准备部署模型
|
||||
|
||||
- YOLOv5Cls v6.2部署模型实现来自[YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2),和[基于ImageNet的预训练模型](https://github.com/ultralytics/yolov5/releases/tag/v6.2)
|
||||
- (1)[官方库](https://github.com/ultralytics/yolov5/releases/tag/v6.2)提供的*-cls.pt模型,使用[YOLOv5](https://github.com/ultralytics/yolov5)中的`export.py`导出ONNX文件后,可直接进行部署;
|
||||
- (2)开发者基于自己数据训练的YOLOv5Cls v6.2模型,可使用[YOLOv5](https://github.com/ultralytics/yolov5)中的`export.py`导出ONNX文件后,完成部署。
|
||||
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
|
||||
为了方便开发者的测试,下面提供了YOLOv5Cls导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度(top1) | 精度(top5) |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [YOLOv5n-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n-cls.onnx) | 9.6MB | 64.6% | 85.4% |
|
||||
| [YOLOv5s-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s-cls.onnx) | 21MB | 71.5% | 90.2% |
|
||||
| [YOLOv5m-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5m-cls.onnx) | 50MB | 75.9% | 92.9% |
|
||||
| [YOLOv5l-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5l-cls.onnx) | 102MB | 78.0% | 94.0% |
|
||||
| [YOLOv5x-cls](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5x-cls.onnx) | 184MB | 79.0% | 94.4% |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
|
||||
## 版本说明
|
||||
|
||||
- 本版本文档和代码基于[YOLOv5 v6.2](https://github.com/ultralytics/yolov5/tree/v6.2) 编写
|
@@ -1,37 +1,38 @@
|
||||
# YOLOv5Cls C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# YOLOv5Cls C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成YOLOv5Cls在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides an example in which `infer.cc` quickly finishes the deployment of YOLOv5Cls models on CPU/GPU, as well as on GPU accelerated by TensorRT.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的yolov5模型文件和测试图片
|
||||
# Download the official converted yolov5 model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n-cls.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo yolov5n-cls.onnx ILSVRC2012_val_00000010.jpeg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo yolov5n-cls.onnx ILSVRC2012_val_00000010.jpeg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT Inference on GPU
|
||||
./infer_demo yolov5n-cls.onnx ILSVRC2012_val_00000010.jpeg 2
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
The result returned after running is as follows
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 265,
|
||||
@@ -39,12 +40,12 @@ scores: 0.196327,
|
||||
)
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## YOLOv5Cls C++接口
|
||||
## YOLOv5Cls C++ Interface
|
||||
|
||||
### YOLOv5Cls类
|
||||
### YOLOv5Cls Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::classification::YOLOv5Cls(
|
||||
@@ -54,37 +55,36 @@ fastdeploy::vision::classification::YOLOv5Cls(
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
YOLOv5Cls模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
YOLOv5Cls model loading and initialization, where model_file is the exported model in ONNX format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> YOLOv5Cls::Predict(cv::Mat* im, int topk = 1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类topk结果。
|
||||
> Model prediction interface. Input images and output classification topk results directly.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
> > * **input_image**(np.ndarray): Input data. Note that it must be in HWC layout, BGR format
|
||||
> > * **topk**(int): Return the topk classification results with the highest prediction probability. Default 1
|
||||
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.ClassifyResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [YOLOv5Cls 模型介绍](..)
|
||||
- [YOLOv5Cls Python部署](../python)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [YOLOv5Cls Model Description](..)
|
||||
- [YOLOv5Cls Python Deployment](../python)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/classification/yolov5cls/cpp/README_CN.md (new file, 91 lines)
@@ -0,0 +1,91 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOv5Cls C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成YOLOv5Cls在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的yolov5模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n-cls.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo yolov5n-cls.onnx ILSVRC2012_val_00000010.jpeg 0
|
||||
# GPU推理
|
||||
./infer_demo yolov5n-cls.onnx ILSVRC2012_val_00000010.jpeg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo yolov5n-cls.onnx ILSVRC2012_val_00000010.jpeg 2
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 265,
|
||||
scores: 0.196327,
|
||||
)
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## YOLOv5Cls C++接口
|
||||
|
||||
### YOLOv5Cls类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::classification::YOLOv5Cls(
|
||||
const string& model_file,
|
||||
const string& params_file = "",
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
YOLOv5Cls模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> YOLOv5Cls::Predict(cv::Mat* im, int topk = 1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类topk结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [YOLOv5Cls 模型介绍](..)
|
||||
- [YOLOv5Cls Python部署](../python)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,30 +1,31 @@
|
||||
# YOLOv5Cls Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# YOLOv5Cls Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps.
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成YOLOv5Cls在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
This directory provides an example in which `infer.py` quickly finishes the deployment of YOLOv5Cls on CPU/GPU, as well as on GPU accelerated by TensorRT. Run the following script to complete the deployment
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download deployment example code
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/classification/yolov5cls/python/
|
||||
|
||||
#下载 YOLOv5Cls 模型文件和测试图片
|
||||
# Download the YOLOv5Cls model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n-cls.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer.py --model yolov5n-cls.onnx --image ILSVRC2012_val_00000010.jpeg --device cpu --topk 1
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer.py --model yolov5n-cls.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --topk 1
|
||||
# GPU上使用TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
python infer.py --model yolov5n-cls.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
The result returned after running is as follows
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 265,
|
||||
@@ -32,42 +33,42 @@ scores: 0.196327,
|
||||
)
|
||||
```
|
||||
|
||||
## YOLOv5Cls Python接口
|
||||
## YOLOv5Cls Python Interface
|
||||
|
||||
```python
|
||||
fastdeploy.vision.classification.YOLOv5Cls(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
YOLOv5Cls模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
YOLOv5Cls model loading and initialization, where model_file is the exported model in ONNX format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
### predict函数
|
||||
### predict Function
|
||||
|
||||
> ```python
|
||||
> YOLOv5Cls.predict(image_data, topk=1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类topk结果。
|
||||
> Model prediction interface. Input images and output classification topk results directly.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
> > * **input_image**(np.ndarray): Input data. Note that it must be in HWC layout, BGR format
|
||||
> > * **topk**(int): Return the topk classification results with the highest prediction probability. Default 1
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.ClassifyResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
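
As a rough sketch of how the `runtime_option` parameter can drive the GPU/TensorRT path shown in the bash commands above (the `use_gpu`/`use_trt_backend` option names are assumptions based on the FastDeploy Python API and may differ across versions):

```python
import cv2
import fastdeploy as fd

# Mirror the `--device gpu --use_trt True` flags with a RuntimeOption (assumed API).
option = fd.RuntimeOption()
option.use_gpu()          # run on the GPU
option.use_trt_backend()  # the first run serializes the TensorRT engine, which takes a while

model = fd.vision.classification.YOLOv5Cls("yolov5n-cls.onnx",
                                            runtime_option=option)

im = cv2.imread("ILSVRC2012_val_00000010.jpeg")  # HWC, BGR input
result = model.predict(im, topk=1)
print(result.label_ids, result.scores)
```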
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [YOLOv5Cls 模型介绍](..)
|
||||
- [YOLOv5Cls C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [YOLOv5Cls Model Description](..)
|
||||
- [YOLOv5Cls C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/classification/yolov5cls/python/README_CN.md (new file, 74 lines)
@@ -0,0 +1,74 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOv5Cls Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成YOLOv5Cls在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/classification/yolov5cls/python/
|
||||
|
||||
#下载 YOLOv5Cls 模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n-cls.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model yolov5n-cls.onnx --image ILSVRC2012_val_00000010.jpeg --device cpu --topk 1
|
||||
# GPU推理
|
||||
python infer.py --model yolov5n-cls.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --topk 1
|
||||
# GPU上使用TensorRT推理
|
||||
python infer.py --model yolov5n-cls.onnx --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成后返回结果如下所示
|
||||
```bash
|
||||
ClassifyResult(
|
||||
label_ids: 265,
|
||||
scores: 0.196327,
|
||||
)
|
||||
```
|
||||
|
||||
## YOLOv5Cls Python接口
|
||||
|
||||
```python
|
||||
fastdeploy.vision.classification.YOLOv5Cls(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
YOLOv5Cls模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> YOLOv5Cls.predict(image_data, topk=1)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出分类topk结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **topk**(int):返回预测概率最高的topk个分类结果,默认为1
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.ClassifyResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [YOLOv5Cls 模型介绍](..)
|
||||
- [YOLOv5Cls C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,20 +1,22 @@
|
||||
# 目标检测模型
|
||||
English | [简体中文](README_CN.md)
|
||||
|
||||
FastDeploy目前支持如下目标检测模型部署
|
||||
# Object Detection Model
|
||||
|
||||
| 模型 | 说明 | 模型格式 | 版本 |
|
||||
FastDeploy currently supports the deployment of the following object detection models
|
||||
|
||||
| Model | Description | Model format | Version |
|
||||
| :--- | :--- | :------- | :--- |
|
||||
| [PaddleDetection/PP-YOLOE](./paddledetection) | PP-YOLOE(含P-PYOLOE+)系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/PicoDet](./paddledetection) | PicoDet系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/YOLOX](./paddledetection) | Paddle版本的YOLOX系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/YOLOv3](./paddledetection) | YOLOv3系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/PP-YOLO](./paddledetection) | PP-YOLO系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/FasterRCNN](./paddledetection) | FasterRCNN系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [WongKinYiu/YOLOv7](./yolov7) | YOLOv7、YOLOv7-X等系列模型 | ONNX | [Release/v0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1) |
|
||||
| [RangiLyu/NanoDetPlus](./nanodet_plus) | NanoDetPlus 系列模型 | ONNX | [Release/v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) |
|
||||
| [ultralytics/YOLOv5](./yolov5) | YOLOv5 系列模型 | ONNX | [Release/v7.0](https://github.com/ultralytics/yolov5/tree/v7.0) |
|
||||
| [ppogg/YOLOv5-Lite](./yolov5lite) | YOLOv5-Lite 系列模型 | ONNX | [Release/v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) |
|
||||
| [meituan/YOLOv6](./yolov6) | YOLOv6 系列模型 | ONNX | [Release/0.1.0](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) |
|
||||
| [WongKinYiu/YOLOR](./yolor) | YOLOR 系列模型 | ONNX | [Release/weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) |
|
||||
| [Megvii-BaseDetection/YOLOX](./yolox) | YOLOX 系列模型 | ONNX | [Release/v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) |
|
||||
| [WongKinYiu/ScaledYOLOv4](./scaledyolov4) | ScaledYOLOv4 系列模型 | ONNX | [CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) |
|
||||
| [PaddleDetection/PP-YOLOE](./paddledetection) | PP-YOLOE(including P-PYOLOE+) models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/PicoDet](./paddledetection) | PicoDet models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/YOLOX](./paddledetection) | YOLOX models of Paddle version | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/YOLOv3](./paddledetection) | YOLOv3 models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/PP-YOLO](./paddledetection) | PP-YOLO models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/FasterRCNN](./paddledetection) | FasterRCNN models | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [WongKinYiu/YOLOv7](./yolov7) | YOLOv7、YOLOv7-X models | ONNX | [Release/v0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1) |
|
||||
| [RangiLyu/NanoDetPlus](./nanodet_plus) | NanoDetPlus models | ONNX | [Release/v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) |
|
||||
| [ultralytics/YOLOv5](./yolov5) | YOLOv5 models | ONNX | [Release/v7.0](https://github.com/ultralytics/yolov5/tree/v7.0) |
|
||||
| [ppogg/YOLOv5-Lite](./yolov5lite) | YOLOv5-Lite models | ONNX | [Release/v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) |
|
||||
| [meituan/YOLOv6](./yolov6) | YOLOv6 models | ONNX | [Release/0.1.0](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) |
|
||||
| [WongKinYiu/YOLOR](./yolor) | YOLOR models | ONNX | [Release/weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) |
|
||||
| [Megvii-BaseDetection/YOLOX](./yolox) | YOLOX models | ONNX | [Release/v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) |
|
||||
| [WongKinYiu/ScaledYOLOv4](./scaledyolov4) | ScaledYOLOv4 models | ONNX | [CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) |
|
||||
|
examples/vision/detection/README_CN.md (new file, 21 lines)
@@ -0,0 +1,21 @@
|
||||
[English](README.md) | 简体中文
|
||||
# 目标检测模型
|
||||
|
||||
FastDeploy目前支持如下目标检测模型部署
|
||||
|
||||
| 模型 | 说明 | 模型格式 | 版本 |
|
||||
| :--- | :--- | :------- | :--- |
|
||||
| [PaddleDetection/PP-YOLOE](./paddledetection) | PP-YOLOE(含P-PYOLOE+)系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/PicoDet](./paddledetection) | PicoDet系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/YOLOX](./paddledetection) | Paddle版本的YOLOX系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/YOLOv3](./paddledetection) | YOLOv3系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/PP-YOLO](./paddledetection) | PP-YOLO系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [PaddleDetection/FasterRCNN](./paddledetection) | FasterRCNN系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) |
|
||||
| [WongKinYiu/YOLOv7](./yolov7) | YOLOv7、YOLOv7-X等系列模型 | ONNX | [Release/v0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1) |
|
||||
| [RangiLyu/NanoDetPlus](./nanodet_plus) | NanoDetPlus 系列模型 | ONNX | [Release/v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) |
|
||||
| [ultralytics/YOLOv5](./yolov5) | YOLOv5 系列模型 | ONNX | [Release/v7.0](https://github.com/ultralytics/yolov5/tree/v7.0) |
|
||||
| [ppogg/YOLOv5-Lite](./yolov5lite) | YOLOv5-Lite 系列模型 | ONNX | [Release/v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) |
|
||||
| [meituan/YOLOv6](./yolov6) | YOLOv6 系列模型 | ONNX | [Release/0.1.0](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) |
|
||||
| [WongKinYiu/YOLOR](./yolor) | YOLOR 系列模型 | ONNX | [Release/weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) |
|
||||
| [Megvii-BaseDetection/YOLOX](./yolox) | YOLOX 系列模型 | ONNX | [Release/v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) |
|
||||
| [WongKinYiu/ScaledYOLOv4](./scaledyolov4) | ScaledYOLOv4 系列模型 | ONNX | [CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) |
|
@@ -1,13 +1,13 @@
|
||||
# FastestDet C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# FastestDet C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成FastestDet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides an example in which `infer.cc` quickly finishes the deployment of FastestDet on CPU/GPU, as well as on GPU accelerated by TensorRT.
|
||||
Before deployment, confirm the following two steps
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试
|
||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
@@ -17,29 +17,29 @@ tar xvf fastdeploy-linux-x64-1.0.3.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-1.0.3
|
||||
make -j
|
||||
|
||||
#下载官方转换好的FastestDet模型文件和测试图片
|
||||
# Download the official converted FastestDet model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/FastestDet.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo FastestDet.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo FastestDet.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
./infer_demo FastestDet.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/44280887/206176291-61eb118b-391b-4431-b79e-a393b9452138.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands only work on Linux or macOS. For how to use the SDK on Windows, refer to:
|
||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## FastestDet C++接口
|
||||
## FastestDet C++ Interface
|
||||
|
||||
### FastestDet类
|
||||
### FastestDet Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::FastestDet(
|
||||
@@ -49,16 +49,16 @@ fastdeploy::vision::detection::FastestDet(
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
FastestDet模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
FastestDet model loading and initialization, where model_file is the exported model in ONNX format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> FastestDet::Predict(cv::Mat* im, DetectionResult* result,
|
||||
@@ -66,22 +66,22 @@ FastestDet模型加载和初始化,其中model_file为导出的ONNX模型格
|
||||
> float nms_iou_threshold = 0.45)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
> > * **im**: Input image. Note that it must be in HWC layout, BGR format
|
||||
> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
|
||||
> > * **conf_threshold**: Filtering threshold of detection box confidence
|
||||
> > * **nms_iou_threshold**: iou threshold during NMS processing
|
||||
|
||||
### 类成员变量
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
### Class Member Variable
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[352, 352]
|
||||
> > * **size**(vector<int>): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [352, 352]
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/detection/fastestdet/cpp/README_CN.md (new file, 88 lines)
@@ -0,0 +1,88 @@
|
||||
[English](README.md) | 简体中文
|
||||
# FastestDet C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成FastestDet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-1.0.3.tgz
|
||||
tar xvf fastdeploy-linux-x64-1.0.3.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-1.0.3
|
||||
make -j
|
||||
|
||||
#下载官方转换好的FastestDet模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/FastestDet.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo FastestDet.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
./infer_demo FastestDet.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo FastestDet.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/44280887/206176291-61eb118b-391b-4431-b79e-a393b9452138.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## FastestDet C++接口
|
||||
|
||||
### FastestDet类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::FastestDet(
|
||||
const string& model_file,
|
||||
const string& params_file = "",
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
FastestDet模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> FastestDet::Predict(cv::Mat* im, DetectionResult* result,
|
||||
> float conf_threshold = 0.65,
|
||||
> float nms_iou_threshold = 0.45)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
|
||||
### 类成员变量
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[352, 352]
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,74 +1,75 @@
|
||||
# FastestDet Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# FastestDet Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成FastestDet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
This directory provides an example in which `infer.py` quickly finishes the deployment of FastestDet on CPU/GPU, as well as on GPU accelerated by TensorRT. Run the following script to complete the deployment
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download the example code for deployment
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/fastestdet/python/
|
||||
|
||||
#下载fastestdet模型文件和测试图片
|
||||
# Download fastestdet model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/FastestDet.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer.py --model FastestDet.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer.py --model FastestDet.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
python infer.py --model FastestDet.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/44280887/206176291-61eb118b-391b-4431-b79e-a393b9452138.jpg">
|
||||
|
||||
## FastestDet Python接口
|
||||
## FastestDet Python Interface
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.FastestDet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
FastestDet模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
FastestDet model loading and initialization, where model_file is the exported model in ONNX format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
### predict函数
|
||||
### predict function
|
||||
|
||||
> ```python
|
||||
> FastestDet.predict(image_data)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **image_data**(np.ndarray): Input data. Note that it must be in HWC layout, BGR format
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its structure
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
### Class Member Property
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[352, 352]
|
||||
> > * **size**(list[int]): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [352, 352]
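
Combining the constructor, `predict`, and the `size` pre-processing member described above, a minimal sketch could look like this (the direct `model.size` attribute access and the image path are assumptions for illustration):

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.FastestDet("FastestDet.onnx")

# `size` is documented above as a pre-processing member; keep or override the
# default [width, height] resize target here (attribute access path is assumed).
model.size = [352, 352]

im = cv2.imread("000000014439.jpg")  # HWC, BGR np.ndarray
result = model.predict(im)           # fastdeploy.vision.DetectionResult
print(result)                        # boxes, scores, label_ids
```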
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [FastestDet 模型介绍](..)
|
||||
- [FastestDet C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [FastestDet Model Description](..)
|
||||
- [FastestDet C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/detection/fastestdet/python/README_CN.md (new file, 75 lines)
@@ -0,0 +1,75 @@
|
||||
[English](README.md) | 简体中文
|
||||
# FastestDet Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成FastestDet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/fastestdet/python/
|
||||
|
||||
#下载fastestdet模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/FastestDet.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model FastestDet.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model FastestDet.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
python infer.py --model FastestDet.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/44280887/206176291-61eb118b-391b-4431-b79e-a393b9452138.jpg">
|
||||
|
||||
## FastestDet Python接口
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.FastestDet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
FastestDet模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> FastestDet.predict(image_data)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[352, 352]
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [FastestDet 模型介绍](..)
|
||||
- [FastestDet C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,26 +1,27 @@
|
||||
# NanoDetPlus准备部署模型
|
||||
English | [简体中文](README_CN.md)
|
||||
# NanoDetPlus Ready-to-deploy Model
|
||||
|
||||
|
||||
- NanoDetPlus部署实现来自[NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) 的代码,基于coco的[预训练模型](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)。
|
||||
- NanoDetPlus deployment is based on the code of [NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) and coco's [Pre-trained Model](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1).
|
||||
|
||||
- (1)[官方库](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)提供的*.onnx可直接进行部署;
|
||||
- (2)开发者自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
- (1)The *.onnx provided by [official repository](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can directly conduct the deployment;
|
||||
- (2)Models trained by developers should export ONNX models. Please refer to [Detailed Deployment Documents](#详细部署文档) for deployment.
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
## Download Pre-trained ONNX Model
|
||||
|
||||
为了方便开发者的测试,下面提供了NanoDetPlus导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 |
|
||||
For developers' testing, models exported by NanoDetPlus are provided below. Developers can download them directly. (The model accuracy in the following table is derived from the source official repository)
|
||||
| Model | Size | Accuracy |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |
|
||||
| [NanoDetPlus_320](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx ) | 4.6MB | 27.0% |
|
||||
| [NanoDetPlus_320_sim](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320-sim.onnx) | 4.6MB | 27.0% |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
||||
|
||||
## 版本说明
|
||||
## Release Note
|
||||
|
||||
- 本版本文档和代码基于[NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) 编写
|
||||
- Document and code are based on [NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1)
|
||||
|
27
examples/vision/detection/nanodet_plus/README_CN.md
Normal file
27
examples/vision/detection/nanodet_plus/README_CN.md
Normal file
@@ -0,0 +1,27 @@
|
||||
[English](README.md) | 简体中文
|
||||
# NanoDetPlus准备部署模型
|
||||
|
||||
|
||||
- NanoDetPlus部署实现来自[NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) 的代码,基于coco的[预训练模型](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)。
|
||||
|
||||
- (1)[官方库](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)提供的*.onnx可直接进行部署;
|
||||
- (2)开发者自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
|
||||
为了方便开发者的测试,下面提供了NanoDetPlus导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |
|
||||
| [NanoDetPlus_320](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx ) | 4.6MB | 27.0% |
|
||||
| [NanoDetPlus_320_sim](https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320-sim.onnx) | 4.6MB | 27.0% |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
|
||||
|
||||
## 版本说明
|
||||
|
||||
- 本版本文档和代码基于[NanoDetPlus v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) 编写
|
@@ -1,46 +1,47 @@
|
||||
# NanoDetPlus C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# NanoDetPlus C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成NanoDetPlus在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides the example `infer.cc` to quickly finish the deployment of NanoDetPlus on CPU/GPU, as well as on GPU with TensorRT acceleration.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的NanoDetPlus模型文件和测试图片
|
||||
# Download the official converted NanoDetPlus model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands only work for Linux or MacOS. For how to use the SDK on Windows, refer to:
|
||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## NanoDetPlus C++接口
|
||||
## NanoDetPlus C++ Interface
|
||||
|
||||
### NanoDetPlus类
|
||||
### NanoDetPlus Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::NanoDetPlus(
|
||||
@@ -50,16 +51,16 @@ fastdeploy::vision::detection::NanoDetPlus(
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
NanoDetPlus model loading and initialization, where model_file is the exported model in ONNX format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. Simply pass an empty string when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> NanoDetPlus::Predict(cv::Mat* im, DetectionResult* result,
|
||||
@@ -67,27 +68,27 @@ NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格
|
||||
> float nms_iou_threshold = 0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface, which takes an input image and directly outputs detection results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
> > * **im**: Input image. Note that it must be in HWC layout with BGR channel order
|
||||
> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
|
||||
> > * **conf_threshold**: Filtering threshold of detection box confidence
|
||||
> > * **nms_iou_threshold**: iou threshold during NMS processing
|
||||
|
||||
### 类成员变量
|
||||
### Class Member Variable
|
||||
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[320, 320]
|
||||
> > * **padding_value**(vector<float>): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[0, 0, 0]
|
||||
> > * **keep_ratio**(bool): 通过此参数指定resize时是否保持宽高比例不变,默认是fasle.
|
||||
> > * **reg_max**(int): GFL回归中的reg_max参数,默认是7.
|
||||
> > * **downsample_strides**(vector<int>): 通过此参数可以修改生成anchor的特征图的下采样倍数, 包含三个整型元素, 分别表示默认的生成anchor的下采样倍数, 默认值为[8, 16, 32, 64]
|
||||
> > * **size**(vector<int>): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [320, 320]
|
||||
> > * **padding_value**(vector<float>): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [0, 0, 0]
|
||||
> > * **keep_ratio**(bool): Whether to keep the aspect ratio unchanged during resize. Default false
|
||||
> > * **reg_max**(int): The reg_max parameter in GFL regression. Default 7
|
||||
> > * **downsample_strides**(vector<int>): This parameter is used to change the down-sampling multiple of the feature maps that generate anchors, containing four integer elements that represent the default down-sampling multiples for generating anchors. Default value [8, 16, 32, 64]
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
94
examples/vision/detection/nanodet_plus/cpp/README_CN.md
Normal file
94
examples/vision/detection/nanodet_plus/cpp/README_CN.md
Normal file
@@ -0,0 +1,94 @@
|
||||
[English](README.md) | 简体中文
|
||||
# NanoDetPlus C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成NanoDetPlus在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的NanoDetPlus模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo nanodet-plus-m_320.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## NanoDetPlus C++接口
|
||||
|
||||
### NanoDetPlus类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::NanoDetPlus(
|
||||
const string& model_file,
|
||||
const string& params_file = "",
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> NanoDetPlus::Predict(cv::Mat* im, DetectionResult* result,
|
||||
> float conf_threshold = 0.25,
|
||||
> float nms_iou_threshold = 0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
|
||||
### 类成员变量
|
||||
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[320, 320]
|
||||
> > * **padding_value**(vector<float>): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[0, 0, 0]
|
||||
> > * **keep_ratio**(bool): 通过此参数指定resize时是否保持宽高比例不变,默认是fasle.
|
||||
> > * **reg_max**(int): GFL回归中的reg_max参数,默认是7.
|
||||
> > * **downsample_strides**(vector<int>): 通过此参数可以修改生成anchor的特征图的下采样倍数, 包含三个整型元素, 分别表示默认的生成anchor的下采样倍数, 默认值为[8, 16, 32, 64]
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,81 +1,81 @@
|
||||
# NanoDetPlus Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# NanoDetPlus Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, confirm the following two steps
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成NanoDetPlus在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
This directory provides the example `infer.py` to quickly finish the deployment of NanoDetPlus on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete the deployment
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download the example code for deployment
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/nanodet_plus/python/
|
||||
|
||||
#下载NanoDetPlus模型文件和测试图片
|
||||
# Download NanoDetPlus model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">
|
||||
|
||||
## NanoDetPlus Python接口
|
||||
## NanoDetPlus Python Interface
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
NanoDetPlus model loading and initialization, where model_file is the exported model in ONNX format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
### predict函数
|
||||
### predict function
|
||||
|
||||
> ```python
|
||||
> NanoDetPlus.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
> Model prediction interface, which takes an input image and directly outputs detection results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **conf_threshold**(float): 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**(float): NMS处理过程中iou阈值
|
||||
> > * **image_data**(np.ndarray): Input data. Note that it must be in HWC layout with BGR channel order
|
||||
> > * **conf_threshold**(float): Filtering threshold of detection box confidence
|
||||
> > * **nms_iou_threshold**(float): iou threshold during NMS processing
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
### Class Member Property
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[320, 320]
|
||||
> > * **padding_value**(list[float]): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[0, 0, 0]
|
||||
> > * **keep_ratio**(bool): 通过此参数指定resize时是否保持宽高比例不变,默认是fasle.
|
||||
> > * **reg_max**(int): GFL回归中的reg_max参数,默认是7.
|
||||
> > * **downsample_strides**(list[int]): 通过此参数可以修改生成anchor的特征图的下采样倍数, 包含四个整型元素, 分别表示默认的生成anchor的下采样倍数, 默认值为[8, 16, 32, 64]
|
||||
> > * **size**(list[int]): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [320, 320]
|
||||
> > * **padding_value**(list[float]): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [0, 0, 0]
|
||||
> > * **keep_ratio**(bool): Whether to keep the aspect ratio unchanged during resize. Default false
|
||||
> > * **reg_max**(int): The reg_max parameter in GFL regression. Default 7.
|
||||
> > * **downsample_strides**(list[int]): This parameter is used to change the down-sampling multiple of the feature map that generates anchor, containing four integer elements that represent the default down-sampling multiple for generating anchor. Default [8, 16, 32, 64]
|
||||
|
||||
|
||||
|
||||
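The following is a minimal sketch combining the constructor, `predict`, and the pre-processing attributes described above. It assumes the `nanodet-plus-m_320.onnx` model and test image from the earlier commands are present, that `fastdeploy` and `opencv-python` are installed, and that the pre-processing attributes are exposed directly on the model object as documented; the GPU/TensorRT lines are optional and only apply to a GPU build.

```python
import cv2
import fastdeploy as fd

# Optional: configure the backend; the default RuntimeOption runs on CPU.
option = fd.RuntimeOption()
# option.use_gpu(0)          # run on GPU 0
# option.use_trt_backend()   # accelerate with TensorRT on GPU

# Load the exported ONNX model; params_file is not needed for ONNX.
model = fd.vision.detection.NanoDetPlus(
    "nanodet-plus-m_320.onnx", runtime_option=option)

# Pre-processing attributes documented above can be adjusted if needed,
# e.g. keep the aspect ratio during resize.
model.keep_ratio = True

# Predict with explicit confidence / NMS IoU thresholds.
im = cv2.imread("000000014439.jpg")  # HWC, BGR
result = model.predict(im, conf_threshold=0.35, nms_iou_threshold=0.5)

# Save a visualization of the DetectionResult.
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```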
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [NanoDetPlus 模型介绍](..)
|
||||
- [NanoDetPlus C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [NanoDetPlus Model Description](..)
|
||||
- [NanoDetPlus C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
82
examples/vision/detection/nanodet_plus/python/README_CN.md
Normal file
82
examples/vision/detection/nanodet_plus/python/README_CN.md
Normal file
@@ -0,0 +1,82 @@
|
||||
[English](README.md) | 简体中文
|
||||
# NanoDetPlus Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成NanoDetPlus在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/nanodet_plus/python/
|
||||
|
||||
#下载NanoDetPlus模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/nanodet-plus-m_320.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
python infer.py --model nanodet-plus-m_320.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">
|
||||
|
||||
## NanoDetPlus Python接口
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.NanoDetPlus(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
NanoDetPlus模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> NanoDetPlus.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **conf_threshold**(float): 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**(float): NMS处理过程中iou阈值
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[320, 320]
|
||||
> > * **padding_value**(list[float]): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[0, 0, 0]
|
||||
> > * **keep_ratio**(bool): 通过此参数指定resize时是否保持宽高比例不变,默认是fasle.
|
||||
> > * **reg_max**(int): GFL回归中的reg_max参数,默认是7.
|
||||
> > * **downsample_strides**(list[int]): 通过此参数可以修改生成anchor的特征图的下采样倍数, 包含四个整型元素, 分别表示默认的生成anchor的下采样倍数, 默认值为[8, 16, 32, 64]
|
||||
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [NanoDetPlus 模型介绍](..)
|
||||
- [NanoDetPlus C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,65 +1,56 @@
|
||||
# PaddleDetection模型部署
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleDetection Model Deployment
|
||||
|
||||
## 模型版本说明
|
||||
## Model Description
|
||||
|
||||
- [PaddleDetection Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
|
||||
|
||||
## 支持模型列表
|
||||
## List of Supported Models
|
||||
|
||||
目前FastDeploy支持如下模型的部署
|
||||
Now FastDeploy supports the deployment of the following models
|
||||
|
||||
- [PP-YOLOE(含PP-YOLOE+)系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
||||
- [PicoDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
|
||||
- [PP-YOLO系列模型(含v2)](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyolo)
|
||||
- [YOLOv3系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/yolov3)
|
||||
- [YOLOX系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/yolox)
|
||||
- [FasterRCNN系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/faster_rcnn)
|
||||
- [MaskRCNN系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/mask_rcnn)
|
||||
- [SSD系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
|
||||
- [YOLOv5系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
|
||||
- [YOLOv6系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
|
||||
- [YOLOv7系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
|
||||
- [RTMDet系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
|
||||
- [CascadeRCNN系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/cascade_rcnn)
|
||||
- [PSSDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/rcnn_enhance)
|
||||
- [RetinaNet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/retinanet)
|
||||
- [PPYOLOESOD系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/smalldet)
|
||||
- [FCOS系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/fcos)
|
||||
- [TTFNet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ttfnet)
|
||||
- [TOOD系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/tood)
|
||||
- [GFL系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/gfl)
|
||||
- [PP-YOLOE(including PP-YOLOE+) models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
||||
- [PicoDet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
|
||||
- [PP-YOLO models(including v2)](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyolo)
|
||||
- [YOLOv3 models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/yolov3)
|
||||
- [YOLOX models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/yolox)
|
||||
- [FasterRCNN models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/faster_rcnn)
|
||||
- [MaskRCNN models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/mask_rcnn)
|
||||
- [SSD models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
|
||||
- [YOLOv5 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
|
||||
- [YOLOv6 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
|
||||
- [YOLOv7 models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
|
||||
- [RTMDet models](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
|
||||
|
||||
## Export Deployment Model
|
||||
|
||||
## 导出部署模型
|
||||
Before deployment, PaddleDetection needs to be exported into the deployment model. Refer to [Export Models](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md) for more details.
|
||||
|
||||
在部署前,需要先将PaddleDetection导出成部署模型,导出步骤参考文档[导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md)
|
||||
**Attention**
|
||||
- Do not remove the NMS operation when exporting the model; just export it as normal
|
||||
- If you are running a native TensorRT backend (not a Paddle Inference backend), do not add the --trt parameter
|
||||
- Do not add the parameter `fuse_normalize=True` when exporting the model
|
||||
|
||||
**注意**
|
||||
- 在导出模型时不要进行NMS的去除操作,正常导出即可
|
||||
- 如果用于跑原生TensorRT后端(非Paddle Inference后端),不要添加--trt参数
|
||||
- 导出模型时,不要添加`fuse_normalize=True`参数
|
||||
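As a quick sanity check after export, the directory produced by PaddleDetection (model.pdmodel, model.pdiparams and infer_cfg.yml) can be loaded with the FastDeploy Python API. The sketch below assumes a PP-YOLOE export named `ppyoloe_crn_l_300e_coco` (for example, one of the pre-trained archives listed in the next section) plus installed `fastdeploy` and `opencv-python`; other model series use the corresponding classes under `fastdeploy.vision.detection`.

```python
import cv2
import fastdeploy as fd

# Paths inside an exported (or downloaded) PaddleDetection deployment directory.
model_dir = "ppyoloe_crn_l_300e_coco"
model_file = f"{model_dir}/model.pdmodel"
params_file = f"{model_dir}/model.pdiparams"
config_file = f"{model_dir}/infer_cfg.yml"  # pre-processing config exported alongside the model

# Load the PP-YOLOE model; the config file drives FastDeploy's pre-processing.
model = fd.vision.detection.PPYOLOE(model_file, params_file, config_file)

# Run inference on a test image (HWC, BGR as returned by cv2.imread).
im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)

# Visualize and save the detections.
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```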
## Download Pre-trained Model
|
||||
|
||||
## 下载预训练模型
|
||||
For developers' testing, models exported by PaddleDetection are provided below. Developers can download them directly.
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleDetection导出的各系列模型,开发者可直接下载使用。
|
||||
The accuracy metric is from model descriptions in PaddleDetection. Refer to them for details.
|
||||
|
||||
其中精度指标来源于PaddleDetection中对各模型的介绍,详情各参考PaddleDetection中的说明。
|
||||
|
||||
|
||||
| 模型 | 参数大小 | 精度 | 备注 |
|
||||
| Model | Parameter Size | Accuracy | Note |
|
||||
|:---------------------------------------------------------------- |:----- |:----- | :------ |
|
||||
| [picodet_l_320_coco_lcnet](https://bj.bcebos.com/paddlehub/fastdeploy/picodet_l_320_coco_lcnet.tgz) |23MB | Box AP 42.6% |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz) |200MB | Box AP 51.4% |
|
||||
| [ppyoloe_plus_crn_m_80e_coco](https://bj.bcebos.com/fastdeploy/models/ppyoloe_plus_crn_m_80e_coco.tgz) |83.3MB | Box AP 49.8% |
|
||||
| [ppyolo_r50vd_dcn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyolo_r50vd_dcn_1x_coco.tgz) | 180MB | Box AP 44.8% | 暂不支持TensorRT |
|
||||
| [ppyolov2_r101vd_dcn_365e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyolov2_r101vd_dcn_365e_coco.tgz) | 282MB | Box AP 49.7% | 暂不支持TensorRT |
|
||||
| [ppyolo_r50vd_dcn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyolo_r50vd_dcn_1x_coco.tgz) | 180MB | Box AP 44.8% | TensorRT not supported yet |
|
||||
| [ppyolov2_r101vd_dcn_365e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyolov2_r101vd_dcn_365e_coco.tgz) | 282MB | Box AP 49.7% | TensorRT not supported yet |
|
||||
| [yolov3_darknet53_270e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov3_darknet53_270e_coco.tgz) |237MB | Box AP 39.1% | |
|
||||
| [yolox_s_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s_300e_coco.tgz) | 35MB | Box AP 40.4% | |
|
||||
| [faster_rcnn_r50_vd_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_r50_vd_fpn_2x_coco.tgz) | 160MB | Box AP 40.8%| 暂不支持TensorRT |
|
||||
| [mask_rcnn_r50_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/mask_rcnn_r50_1x_coco.tgz) | 128M | Box AP 37.4%, Mask AP 32.8%| 暂不支持TensorRT、ORT |
|
||||
| [ssd_mobilenet_v1_300_120e_voc](https://bj.bcebos.com/paddlehub/fastdeploy/ssd_mobilenet_v1_300_120e_voc.tgz) | 24.9M | Box AP 73.8%| 暂不支持TensorRT、ORT |
|
||||
| [ssd_vgg16_300_240e_voc](https://bj.bcebos.com/paddlehub/fastdeploy/ssd_vgg16_300_240e_voc.tgz) | 106.5M | Box AP 77.8%| 暂不支持TensorRT、ORT |
|
||||
| [ssdlite_mobilenet_v1_300_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ssdlite_mobilenet_v1_300_coco.tgz) | 29.1M | | 暂不支持TensorRT、ORT |
|
||||
| [faster_rcnn_r50_vd_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_r50_vd_fpn_2x_coco.tgz) | 160MB | Box AP 40.8%| TensorRT not supported yet |
|
||||
| [mask_rcnn_r50_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/mask_rcnn_r50_1x_coco.tgz) | 128M | Box AP 37.4%, Mask AP 32.8% | TensorRT/ORT not supported yet |
|
||||
| [ssd_mobilenet_v1_300_120e_voc](https://bj.bcebos.com/paddlehub/fastdeploy/ssd_mobilenet_v1_300_120e_voc.tgz) | 24.9M | Box AP 73.8% | TensorRT/ORT not supported yet |
|
||||
| [ssd_vgg16_300_240e_voc](https://bj.bcebos.com/paddlehub/fastdeploy/ssd_vgg16_300_240e_voc.tgz) | 106.5M | Box AP 77.8% | TensorRT/ORT not supported yet |
|
||||
| [ssdlite_mobilenet_v1_300_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ssdlite_mobilenet_v1_300_coco.tgz) | 29.1M | | TensorRT/ORT not supported yet |
|
||||
| [rtmdet_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/rtmdet_l_300e_coco.tgz) | 224M | Box AP 51.2%| |
|
||||
| [rtmdet_s_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/rtmdet_s_300e_coco.tgz) | 42M | Box AP 44.5%| |
|
||||
| [yolov5_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5_l_300e_coco.tgz) | 183M | Box AP 48.9%| |
|
||||
@@ -68,18 +59,8 @@
|
||||
| [yolov6_s_400e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_s_400e_coco.tgz) | 68M | Box AP 43.4%| |
|
||||
| [yolov7_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_l_300e_coco.tgz) | 145M | Box AP 51.0%| |
|
||||
| [yolov7_x_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_x_300e_coco.tgz) | 277M | Box AP 53.0%| |
|
||||
| [cascade_rcnn_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_fpn_1x_coco.tgz) | 271M | Box AP 41.1%| 暂不支持TensorRT、ORT |
|
||||
| [cascade_rcnn_r50_vd_fpn_ssld_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.tgz) | 271M | Box AP 45.0%| 暂不支持TensorRT、ORT |
|
||||
| [faster_rcnn_enhance_3x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_enhance_3x_coco.tgz) | 119M | Box AP 41.5%| 暂不支持TensorRT、ORT |
|
||||
| [fcos_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/fcos_r50_fpn_1x_coco.tgz) | 129M | Box AP 39.6%| 暂不支持TensorRT |
|
||||
| [gfl_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/gfl_r50_fpn_1x_coco.tgz) | 128M | Box AP 41.0%| 暂不支持TensorRT |
|
||||
| [ppyoloe_crn_l_80e_sliced_visdrone_640_025](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_80e_sliced_visdrone_640_025.tgz) | 200M | Box AP 31.9%| |
|
||||
| [retinanet_r101_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r101_fpn_2x_coco.tgz) | 210M | Box AP 40.6%| 暂不支持TensorRT、ORT |
|
||||
| [retinanet_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r50_fpn_1x_coco.tgz) | 136M | Box AP 37.5%| 暂不支持TensorRT、ORT |
|
||||
| [tood_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/tood_r50_fpn_1x_coco.tgz) | 130M | Box AP 42.5%| 暂不支持TensorRT、ORT |
|
||||
| [ttfnet_darknet53_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ttfnet_darknet53_1x_coco.tgz) | 178M | Box AP 33.5%| 暂不支持TensorRT、ORT |
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
86
examples/vision/detection/paddledetection/README_CN.md
Normal file
86
examples/vision/detection/paddledetection/README_CN.md
Normal file
@@ -0,0 +1,86 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleDetection模型部署
|
||||
|
||||
## 模型版本说明
|
||||
|
||||
- [PaddleDetection Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)
|
||||
|
||||
## 支持模型列表
|
||||
|
||||
目前FastDeploy支持如下模型的部署
|
||||
|
||||
- [PP-YOLOE(含PP-YOLOE+)系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
||||
- [PicoDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
|
||||
- [PP-YOLO系列模型(含v2)](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyolo)
|
||||
- [YOLOv3系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/yolov3)
|
||||
- [YOLOX系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/yolox)
|
||||
- [FasterRCNN系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/faster_rcnn)
|
||||
- [MaskRCNN系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/mask_rcnn)
|
||||
- [SSD系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ssd)
|
||||
- [YOLOv5系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov5)
|
||||
- [YOLOv6系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov6)
|
||||
- [YOLOv7系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov7)
|
||||
- [RTMDet系列模型](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/rtmdet)
|
||||
- [CascadeRCNN系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/cascade_rcnn)
|
||||
- [PSSDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/rcnn_enhance)
|
||||
- [RetinaNet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/retinanet)
|
||||
- [PPYOLOESOD系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/smalldet)
|
||||
- [FCOS系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/fcos)
|
||||
- [TTFNet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/ttfnet)
|
||||
- [TOOD系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/tood)
|
||||
- [GFL系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/gfl)
|
||||
|
||||
|
||||
## 导出部署模型
|
||||
|
||||
在部署前,需要先将PaddleDetection导出成部署模型,导出步骤参考文档[导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md)
|
||||
|
||||
**注意**
|
||||
- 在导出模型时不要进行NMS的去除操作,正常导出即可
|
||||
- 如果用于跑原生TensorRT后端(非Paddle Inference后端),不要添加--trt参数
|
||||
- 导出模型时,不要添加`fuse_normalize=True`参数
|
||||
|
||||
## 下载预训练模型
|
||||
|
||||
为了方便开发者的测试,下面提供了PaddleDetection导出的各系列模型,开发者可直接下载使用。
|
||||
|
||||
其中精度指标来源于PaddleDetection中对各模型的介绍,详情各参考PaddleDetection中的说明。
|
||||
|
||||
|
||||
| 模型 | 参数大小 | 精度 | 备注 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- | :------ |
|
||||
| [picodet_l_320_coco_lcnet](https://bj.bcebos.com/paddlehub/fastdeploy/picodet_l_320_coco_lcnet.tgz) |23MB | Box AP 42.6% |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz) |200MB | Box AP 51.4% |
|
||||
| [ppyoloe_plus_crn_m_80e_coco](https://bj.bcebos.com/fastdeploy/models/ppyoloe_plus_crn_m_80e_coco.tgz) |83.3MB | Box AP 49.8% |
|
||||
| [ppyolo_r50vd_dcn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyolo_r50vd_dcn_1x_coco.tgz) | 180MB | Box AP 44.8% | 暂不支持TensorRT |
|
||||
| [ppyolov2_r101vd_dcn_365e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyolov2_r101vd_dcn_365e_coco.tgz) | 282MB | Box AP 49.7% | 暂不支持TensorRT |
|
||||
| [yolov3_darknet53_270e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov3_darknet53_270e_coco.tgz) |237MB | Box AP 39.1% | |
|
||||
| [yolox_s_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolox_s_300e_coco.tgz) | 35MB | Box AP 40.4% | |
|
||||
| [faster_rcnn_r50_vd_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_r50_vd_fpn_2x_coco.tgz) | 160MB | Box AP 40.8%| 暂不支持TensorRT |
|
||||
| [mask_rcnn_r50_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/mask_rcnn_r50_1x_coco.tgz) | 128M | Box AP 37.4%, Mask AP 32.8%| 暂不支持TensorRT、ORT |
|
||||
| [ssd_mobilenet_v1_300_120e_voc](https://bj.bcebos.com/paddlehub/fastdeploy/ssd_mobilenet_v1_300_120e_voc.tgz) | 24.9M | Box AP 73.8%| 暂不支持TensorRT、ORT |
|
||||
| [ssd_vgg16_300_240e_voc](https://bj.bcebos.com/paddlehub/fastdeploy/ssd_vgg16_300_240e_voc.tgz) | 106.5M | Box AP 77.8%| 暂不支持TensorRT、ORT |
|
||||
| [ssdlite_mobilenet_v1_300_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ssdlite_mobilenet_v1_300_coco.tgz) | 29.1M | | 暂不支持TensorRT、ORT |
|
||||
| [rtmdet_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/rtmdet_l_300e_coco.tgz) | 224M | Box AP 51.2%| |
|
||||
| [rtmdet_s_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/rtmdet_s_300e_coco.tgz) | 42M | Box AP 44.5%| |
|
||||
| [yolov5_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5_l_300e_coco.tgz) | 183M | Box AP 48.9%| |
|
||||
| [yolov5_s_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5_s_300e_coco.tgz) | 31M | Box AP 37.6%| |
|
||||
| [yolov6_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_l_300e_coco.tgz) | 229M | Box AP 51.0%| |
|
||||
| [yolov6_s_400e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6_s_400e_coco.tgz) | 68M | Box AP 43.4%| |
|
||||
| [yolov7_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_l_300e_coco.tgz) | 145M | Box AP 51.0%| |
|
||||
| [yolov7_x_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_x_300e_coco.tgz) | 277M | Box AP 53.0%| |
|
||||
| [cascade_rcnn_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_fpn_1x_coco.tgz) | 271M | Box AP 41.1%| 暂不支持TensorRT、ORT |
|
||||
| [cascade_rcnn_r50_vd_fpn_ssld_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/cascade_rcnn_r50_vd_fpn_ssld_2x_coco.tgz) | 271M | Box AP 45.0%| 暂不支持TensorRT、ORT |
|
||||
| [faster_rcnn_enhance_3x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/faster_rcnn_enhance_3x_coco.tgz) | 119M | Box AP 41.5%| 暂不支持TensorRT、ORT |
|
||||
| [fcos_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/fcos_r50_fpn_1x_coco.tgz) | 129M | Box AP 39.6%| 暂不支持TensorRT |
|
||||
| [gfl_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/gfl_r50_fpn_1x_coco.tgz) | 128M | Box AP 41.0%| 暂不支持TensorRT |
|
||||
| [ppyoloe_crn_l_80e_sliced_visdrone_640_025](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_80e_sliced_visdrone_640_025.tgz) | 200M | Box AP 31.9%| |
|
||||
| [retinanet_r101_fpn_2x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r101_fpn_2x_coco.tgz) | 210M | Box AP 40.6%| 暂不支持TensorRT、ORT |
|
||||
| [retinanet_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/retinanet_r50_fpn_1x_coco.tgz) | 136M | Box AP 37.5%| 暂不支持TensorRT、ORT |
|
||||
| [tood_r50_fpn_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/tood_r50_fpn_1x_coco.tgz) | 130M | Box AP 42.5%| 暂不支持TensorRT、ORT |
|
||||
| [ttfnet_darknet53_1x_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ttfnet_darknet53_1x_coco.tgz) | 178M | Box AP 33.5%| 暂不支持TensorRT、ORT |
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
@@ -1,11 +1,12 @@
|
||||
# PP-YOLOE 量化模型在 A311D 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 A311D 上。
|
||||
English | [简体中文](README_CN.md)
|
||||
# Deploy the Quantized PP-YOLOE Model on A311D
|
||||
Now FastDeploy supports the deployment of PP-YOLOE quantification model to A311D on Paddle Lite.
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
For how to quantize the model and download quantized models, refer to [Model Quantization](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Tutorials
|
||||
|
||||
在 A311D 上只支持 C++ 的部署。
|
||||
Only C++ deployment is supported on A311D.
|
||||
|
||||
- [C++部署](cpp)
|
||||
- [C++ deployment](cpp)
|
||||
|
12
examples/vision/detection/paddledetection/a311d/README_CN.md
Normal file
12
examples/vision/detection/paddledetection/a311d/README_CN.md
Normal file
@@ -0,0 +1,12 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PP-YOLOE 量化模型在 A311D 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 A311D 上。
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
在 A311D 上只支持 C++ 的部署。
|
||||
|
||||
- [C++部署](cpp)
|
@@ -1,87 +1,88 @@
|
||||
# 目标检测 PicoDet Android Demo 使用文档
|
||||
English | [简体中文](README_CN.md)
|
||||
# Object Detection PicoDet Android Demo Tutorial
|
||||
|
||||
在 Android 上实现实时的目标检测功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
|
||||
This demo performs real-time object detection on Android. It offers good usability and openness; for example, you can run your own trained models in the demo.
|
||||
|
||||
## 环境准备
|
||||
## Prepare the Environment
|
||||
|
||||
1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
|
||||
2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
|
||||
1. Install Android Studio in your local environment. Refer to the [Android Studio official website](https://developer.android.com/studio) for detailed installation instructions.
|
||||
2. Prepare an Android phone and turn on the USB debug mode: `Settings -> Find developer options -> Open developer options and USB debug mode`
|
||||
|
||||
## 部署步骤
|
||||
## Deployment Steps
|
||||
|
||||
1. 目标检测 PicoDet Demo 位于 `fastdeploy/examples/vision/detection/paddledetection/android` 目录
|
||||
2. 用 Android Studio 打开 paddledetection/android 工程
|
||||
3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
|
||||
1. The target detection PicoDet Demo is located in the `fastdeploy/examples/vision/detection/paddledetection/android` directory
|
||||
2. Open paddledetection/android project with Android Studio
|
||||
3. Connect the phone to the computer, turn on USB debug mode and file transfer mode, and connect your phone to Android Studio (allow the phone to install software from USB)
|
||||
|
||||
<p align="center">
|
||||
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
|
||||
</p>
|
||||
|
||||
> **注意:**
|
||||
>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
|
||||
> **Attention:**
|
||||
>> If you encounter an NDK configuration error while importing, compiling, or running the project, open `File > Project Structure > SDK Location` and change `Android SDK location` to the SDK path configured on your machine.
|
||||
|
||||
4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
|
||||
成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
|
||||
4. Click the Run button to automatically compile the APP and install it to the phone. (The process will automatically download the pre-compiled FastDeploy Android library and model files; an Internet connection is required.)
|
||||
After success, the effect is as follows. Figure 1: the APP installed on the phone; Figure 2: the effect after opening the APP, which automatically recognizes and marks objects in the image; Figure 3: the APP settings page, where you can tap the settings icon in the upper right corner to try different options.
|
||||
|
||||
| APP 图标 | APP 效果 | APP设置项
|
||||
| APP Icon | APP Effect | APP Settings
|
||||
| --- | --- | --- |
|
||||
| <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203261763-a7513df7-e0ab-42e5-ad50-79ed7e8c8cd2.gif"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg"> |
|
||||
|
||||
|
||||
### PicoDet Java API 说明
|
||||
- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PicoDet初始化参数说明如下:
|
||||
- modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
|
||||
- paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
|
||||
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
|
||||
- labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 coco_label_list.txt,每一行包含一个label
|
||||
- option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
|
||||
### PicoDet Java API Description
|
||||
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The PicoDet initialization parameters are as follows:
|
||||
- modelFile: String. Model file path in paddle format, such as model.pdmodel
|
||||
- paramFile: String. Parameter file path in paddle format, such as model.pdiparams
|
||||
- configFile: String. Preprocessing file for model inference, such as infer_cfg.yml
|
||||
- labelFile: String. This optional parameter indicates the path of the label file and is used for visualization, such as coco_label_list.txt, each line containing one label
|
||||
- option: RuntimeOption. Optional parameter for model initialization. If this parameter is not passed, the default runtime option is used.
|
||||
|
||||
```java
|
||||
// 构造函数: constructor w/o label file
|
||||
public PicoDet(); // 空构造函数,之后可以调用init初始化
|
||||
// Constructor: constructor w/o label file
|
||||
public PicoDet(); // An empty constructor, which can be initialized by calling init
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile);
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile);
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
// 手动调用init初始化: call init manually w/o label file
|
||||
// Call init manually for initialization: call init manually w/o label file
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
```
|
||||
- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
|
||||
- Model prediction API: it includes an API for direct prediction and an API with visualization. Direct prediction means that only the inference result is returned, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means that the result is both predicted and visualized: the visualized image is saved to the specified path and the visualized result is rendered onto a Bitmap (currently Bitmaps in ARGB8888 format are supported), which can then be displayed in the camera preview.
|
||||
```java
|
||||
// 直接预测:不保存图片以及不渲染结果到Bitmap上
|
||||
// Direct prediction: No image saving and no result rendering to Bitmap
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap);
|
||||
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
|
||||
// Prediction and visualization: Predict and visualize the results, save the visualized image to the specified path, and render the visualized results on Bitmap
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // 只渲染 不保存图片
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Render without saving images
|
||||
```
|
||||
- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
|
||||
- Model resource release API: Call release() API to release model resources. Return true for successful release and false for failure; call initialized() to determine whether the model was initialized successfully, with true indicating successful initialization and false indicating failure.
|
||||
```java
|
||||
public boolean release(); // 释放native资源
|
||||
public boolean initialized(); // 检查是否初始化成功
|
||||
public boolean release(); // Release native resources
|
||||
public boolean initialized(); // Check if the initialization is successful
|
||||
```
|
||||
|
||||
- RuntimeOption设置说明
|
||||
- RuntimeOption settings
|
||||
```java
|
||||
public void enableLiteFp16(); // 开启fp16精度推理
|
||||
public void disableLiteFP16(); // 关闭fp16精度推理
|
||||
public void setCpuThreadNum(int threadNum); // 设置线程数
|
||||
public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
|
||||
public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
|
||||
public void enableLiteFp16(); // Enable fp16 accuracy inference
|
||||
public void disableLiteFP16(); // Disable fp16 accuracy inference
|
||||
public void setCpuThreadNum(int threadNum); // Set thread numbers
|
||||
public void setLitePowerMode(LitePowerMode mode); // Set power mode
|
||||
public void setLitePowerMode(String modeStr); // Set power mode through character string
|
||||
```
|
||||
|
||||
- 模型结果DetectionResult说明
|
||||
- Model DetectionResult
|
||||
```java
|
||||
public class DetectionResult {
|
||||
public float[][] mBoxes; // [n,4] 检测框 (x1,y1,x2,y2)
|
||||
public float[] mScores; // [n] 每个检测框得分(置信度,概率值)
|
||||
public int[] mLabelIds; // [n] 分类ID
|
||||
public boolean initialized(); // 检测结果是否有效
|
||||
public float[][] mBoxes; // [n,4] Detection box (x1,y1,x2,y2)
|
||||
public float[] mScores; // [n] Score (confidence, probability)
|
||||
public int[] mLabelIds; // [n] Classification ID
|
||||
public boolean initialized(); // Whether the result is valid
|
||||
}
|
||||
```
|
||||
其他参考:C++/Python对应的DetectionResult说明: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
|
||||
Refer to [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md) for C++/Python DetectionResult
|
||||
|
||||
- 模型调用示例1:使用构造函数以及默认的RuntimeOption
|
||||
- Model Calling Example 1: Using Constructor and the default RuntimeOption
|
||||
```java
|
||||
import java.nio.ByteBuffer;
|
||||
import android.graphics.Bitmap;
|
||||
@@ -90,63 +91,63 @@ import android.opengl.GLES20;
|
||||
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
|
||||
|
||||
// 初始化模型
|
||||
// Initialize the model
|
||||
PicoDet model = new PicoDet("picodet_s_320_coco_lcnet/model.pdmodel",
|
||||
"picodet_s_320_coco_lcnet/model.pdiparams",
|
||||
"picodet_s_320_coco_lcnet/infer_cfg.yml");
|
||||
|
||||
// 读取图片: 以下仅为读取Bitmap的伪代码
|
||||
// Read the image: The following is merely the pseudo code to read Bitmap
|
||||
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
|
||||
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
|
||||
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
|
||||
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
|
||||
|
||||
// 模型推理
|
||||
// Model inference
|
||||
DetectionResult result = model.predict(ARGB8888ImageBitmap);
|
||||
|
||||
// 释放模型资源
|
||||
// Release model resources
|
||||
model.release();
|
||||
```
|
||||
|
||||
- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
|
||||
- Model calling example 2: Manually call init at an appropriate point in your program and customize the RuntimeOption
|
||||
```java
|
||||
// import 同上 ...
|
||||
// import is as the above...
|
||||
import com.baidu.paddle.fastdeploy.RuntimeOption;
|
||||
import com.baidu.paddle.fastdeploy.LitePowerMode;
|
||||
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
|
||||
// 新建空模型
|
||||
// Create an empty model
|
||||
PicoDet model = new PicoDet();
|
||||
// 模型路径
|
||||
// Model path
|
||||
String modelFile = "picodet_s_320_coco_lcnet/model.pdmodel";
|
||||
String paramFile = "picodet_s_320_coco_lcnet/model.pdiparams";
|
||||
String configFile = "picodet_s_320_coco_lcnet/infer_cfg.yml";
|
||||
// 指定RuntimeOption
|
||||
// Specify RuntimeOption
|
||||
RuntimeOption option = new RuntimeOption();
|
||||
option.setCpuThreadNum(2);
|
||||
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
|
||||
option.enableLiteFp16();
|
||||
// 使用init函数初始化
|
||||
// Use init function for initialization
|
||||
model.init(modelFile, paramFile, configFile, option);
|
||||
// Bitmap读取、模型预测、资源释放 同上 ...
|
||||
// Bitmap reading, model prediction, and resource release are as above...
|
||||
```
|
||||
更详细的用法请参考 [DetectionMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/detection/DetectionMainActivity.java) 中的用法
|
||||
Refer to [DetectionMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/detection/DetectionMainActivity.java) for more information.
|
||||
|
||||
## 替换 FastDeploy SDK和模型
|
||||
替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models/picodet_s_320_coco_lcnet`。
|
||||
- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
|
||||
- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
|
||||
## Replace FastDeploy SDK and Models
|
||||
It’s simple to replace the FastDeploy prediction library and models. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` represents the version of your prediction library. The models are located at `app/src/main/assets/models/picodet_s_320_coco_lcnet`.
|
||||
- Replace the FastDeploy Android SDK: Download or compile the latest FastDeploy Android SDK, unzip it, and place it in the `app/libs` directory. For detailed configuration, refer to
|
||||
- [FastDeploy Java SDK in Android](../../../../../java/android/)
|
||||
|
||||
- 替换PicoDet模型的步骤:
|
||||
- 将您的PicoDet模型放在 `app/src/main/assets/models` 目录下;
|
||||
- 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
|
||||
- Steps to replace PicoDet models:
|
||||
- Put your PicoDet model in `app/src/main/assets/models`;
|
||||
- Modify the default value of the model path in `app/src/main/res/values/strings.xml`. For example:
|
||||
```xml
|
||||
<!-- 将这个路径指修改成您的模型,如 models/picodet_l_320_coco_lcnet -->
|
||||
<!-- Change this path to your model, such as models/picodet_l_320_coco_lcnet -->
|
||||
<string name="DETECTION_MODEL_DIR_DEFAULT">models/picodet_s_320_coco_lcnet</string>
|
||||
<string name="DETECTION_LABEL_PATH_DEFAULT">labels/coco_label_list.txt</string>
|
||||
```
|
||||
|
||||
## 更多参考文档
|
||||
如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
|
||||
- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
|
||||
- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
||||
## More Reference Documents
|
||||
For more FastDeploy Java API documents and information on how to access the FastDeploy C++ API via JNI, refer to:
|
||||
- [FastDeploy Java SDK in Android](../../../../../java/android/)
|
||||
- [FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
||||
|
153
examples/vision/detection/paddledetection/android/README_CN.md
Normal file
@@ -0,0 +1,153 @@
|
||||
[English](README.md) | 简体中文
|
||||
# 目标检测 PicoDet Android Demo 使用文档
|
||||
|
||||
在 Android 上实现实时的目标检测功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
|
||||
|
||||
## 环境准备
|
||||
|
||||
1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Studio 官网](https://developer.android.com/studio)。
|
||||
2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
|
||||
|
||||
## 部署步骤
|
||||
|
||||
1. 目标检测 PicoDet Demo 位于 `fastdeploy/examples/vision/detection/paddledetection/android` 目录
|
||||
2. 用 Android Studio 打开 paddledetection/android 工程
|
||||
3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
|
||||
|
||||
<p align="center">
|
||||
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
|
||||
</p>
|
||||
|
||||
> **注意:**
|
||||
>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Android SDK location` 为您本机配置的 SDK 所在路径。
|
||||
|
||||
4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
|
||||
成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
|
||||
|
||||
| APP 图标 | APP 效果 | APP设置项
|
||||
| --- | --- | --- |
|
||||
| <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/203261763-a7513df7-e0ab-42e5-ad50-79ed7e8c8cd2.gif"> | <img width="300" height="500" alt="image" src="https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg"> |
|
||||
|
||||
|
||||
### PicoDet Java API 说明
|
||||
- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PicoDet初始化参数说明如下:
|
||||
- modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
|
||||
- paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
|
||||
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
|
||||
- labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 coco_label_list.txt,每一行包含一个label
|
||||
- option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
|
||||
|
||||
```java
|
||||
// 构造函数: constructor w/o label file
|
||||
public PicoDet(); // 空构造函数,之后可以调用init初始化
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile);
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile);
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
// 手动调用init初始化: call init manually w/o label file
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
|
||||
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
|
||||
```
|
||||
- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
|
||||
```java
|
||||
// 直接预测:不保存图片以及不渲染结果到Bitmap上
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap);
|
||||
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
|
||||
public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // 只渲染 不保存图片
|
||||
```
|
||||
- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
|
||||
```java
|
||||
public boolean release(); // 释放native资源
|
||||
public boolean initialized(); // 检查是否初始化成功
|
||||
```
|
||||
|
||||
- RuntimeOption设置说明
|
||||
```java
|
||||
public void enableLiteFp16(); // 开启fp16精度推理
|
||||
public void disableLiteFP16(); // 关闭fp16精度推理
|
||||
public void setCpuThreadNum(int threadNum); // 设置线程数
|
||||
public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
|
||||
public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
|
||||
```
|
||||
|
||||
- 模型结果DetectionResult说明
|
||||
```java
|
||||
public class DetectionResult {
|
||||
public float[][] mBoxes; // [n,4] 检测框 (x1,y1,x2,y2)
|
||||
public float[] mScores; // [n] 每个检测框得分(置信度,概率值)
|
||||
public int[] mLabelIds; // [n] 分类ID
|
||||
public boolean initialized(); // 检测结果是否有效
|
||||
}
|
||||
```
|
||||
其他参考:C++/Python对应的DetectionResult说明: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
|
||||
|
||||
- 模型调用示例1:使用构造函数以及默认的RuntimeOption
|
||||
```java
|
||||
import java.nio.ByteBuffer;
|
||||
import android.graphics.Bitmap;
|
||||
import android.opengl.GLES20;
|
||||
|
||||
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
|
||||
|
||||
// 初始化模型
|
||||
PicoDet model = new PicoDet("picodet_s_320_coco_lcnet/model.pdmodel",
|
||||
"picodet_s_320_coco_lcnet/model.pdiparams",
|
||||
"picodet_s_320_coco_lcnet/infer_cfg.yml");
|
||||
|
||||
// 读取图片: 以下仅为读取Bitmap的伪代码
|
||||
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
|
||||
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
|
||||
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
|
||||
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
|
||||
|
||||
// 模型推理
|
||||
DetectionResult result = model.predict(ARGB8888ImageBitmap);
|
||||
|
||||
// 释放模型资源
|
||||
model.release();
|
||||
```
|
||||
|
||||
- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
|
||||
```java
|
||||
// import 同上 ...
|
||||
import com.baidu.paddle.fastdeploy.RuntimeOption;
|
||||
import com.baidu.paddle.fastdeploy.LitePowerMode;
|
||||
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
|
||||
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
|
||||
// 新建空模型
|
||||
PicoDet model = new PicoDet();
|
||||
// 模型路径
|
||||
String modelFile = "picodet_s_320_coco_lcnet/model.pdmodel";
|
||||
String paramFile = "picodet_s_320_coco_lcnet/model.pdiparams";
|
||||
String configFile = "picodet_s_320_coco_lcnet/infer_cfg.yml";
|
||||
// 指定RuntimeOption
|
||||
RuntimeOption option = new RuntimeOption();
|
||||
option.setCpuThreadNum(2);
|
||||
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
|
||||
option.enableLiteFp16();
|
||||
// 使用init函数初始化
|
||||
model.init(modelFile, paramFile, configFile, option);
|
||||
// Bitmap读取、模型预测、资源释放 同上 ...
|
||||
```
|
||||
更详细的用法请参考 [DetectionMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/detection/DetectionMainActivity.java) 中的用法
|
||||
|
||||
## 替换 FastDeploy SDK和模型
|
||||
替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models/picodet_s_320_coco_lcnet`。
|
||||
- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
|
||||
- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
|
||||
|
||||
- 替换PicoDet模型的步骤:
|
||||
- 将您的PicoDet模型放在 `app/src/main/assets/models` 目录下;
|
||||
- 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
|
||||
```xml
|
||||
<!-- 将这个路径指修改成您的模型,如 models/picodet_l_320_coco_lcnet -->
|
||||
<string name="DETECTION_MODEL_DIR_DEFAULT">models/picodet_s_320_coco_lcnet</string>
|
||||
<string name="DETECTION_LABEL_PATH_DEFAULT">labels/coco_label_list.txt</string>
|
||||
```
|
||||
|
||||
## 更多参考文档
|
||||
如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
|
||||
- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
|
||||
- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
@@ -1,54 +1,48 @@
|
||||
# PaddleDetection C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleDetection C++ Deployment Example
|
||||
|
||||
本目录下提供`infer_xxx.cc`快速完成PaddleDetection模型包括PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet/CascadeRCNN/PSSDet/RetinaNet/PPYOLOESOD/FCOS/TTFNet/TOOD/GFL在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides `infer_xxx.cc` examples that quickly deploy PaddleDetection models, including PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet/CascadeRCNN/PSSDet/RetinaNet/PPYOLOESOD/FCOS/TTFNet/TOOD/GFL, on CPU/GPU as well as on GPU with TensorRT acceleration.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, two steps require confirmation
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
# 以ppyoloe为例进行推理部署
# Take ppyoloe as an example for inference deployment
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载PPYOLOE模型文件和测试图片
|
||||
# Download the PPYOLOE model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT Inference on GPU
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
|
||||
# 昆仑芯XPU推理
# KunlunXin XPU inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 3
# 华为昇腾推理
# Huawei Ascend NPU inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 4
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands work for Linux or macOS. For how to use the SDK on Windows, refer to:
|
||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
If you deploy with a Huawei Ascend NPU, refer to the following document to initialize the deployment environment before deployment:
- [How to deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
|
||||
## PaddleDetection C++ Interface
|
||||
|
||||
## PaddleDetection C++接口
|
||||
### Model Class
|
||||
|
||||
### 模型类
|
||||
|
||||
PaddleDetection目前支持6种模型系列,类名分别为`PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, `FasterRCNN`,`SSD`,`PaddleYOLOv5`,`PaddleYOLOv6`,`PaddleYOLOv7`,`RTMDet`,`CascadeRCNN`,`PSSDet`,`RetinaNet`,`PPYOLOESOD`,`FCOS`,`TTFNet`,`TOOD`,`GFL`所有类名的构造函数和预测函数在参数上完全一致,本文档以PPYOLOE为例讲解API
|
||||
PaddleDetection currently supports multiple model series, with class names `PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, `FasterRCNN`, `SSD`, `PaddleYOLOv5`, `PaddleYOLOv6`, `PaddleYOLOv7`, `RTMDet`, `CascadeRCNN`, `PSSDet`, `RetinaNet`, `PPYOLOESOD`, `FCOS`, `TTFNet`, `TOOD`, and `GFL`. The constructors and prediction functions of all these classes take exactly the same parameters. This document takes PPYOLOE as an example to introduce the API
|
||||
```c++
|
||||
fastdeploy::vision::detection::PPYOLOE(
|
||||
const string& model_file,
|
||||
@@ -60,28 +54,28 @@ fastdeploy::vision::detection::PPYOLOE(
|
||||
|
||||
PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的Paddle模型格式。
Loading and initialization of the PaddleDetection PPYOLOE model, where model_file is in the exported Paddle model format.
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 配置文件路径,即PaddleDetection导出的部署yaml文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Configuration file path, i.e. the deployment yaml file exported by PaddleDetection
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> PPYOLOE::Predict(cv::Mat* im, DetectionResult* result)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input an image and get the detection result directly.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **im**: Input image. Note that it must be in HWC layout and BGR format
|
||||
> > * **result**: Detection result, including detection box and confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for DetectionResult
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
88
examples/vision/detection/paddledetection/cpp/README_CN.md
Normal file
@@ -0,0 +1,88 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleDetection C++部署示例
|
||||
|
||||
本目录下提供`infer_xxx.cc`快速完成PaddleDetection模型包括PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet/CascadeRCNN/PSSDet/RetinaNet/PPYOLOESOD/FCOS/TTFNet/TOOD/GFL在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
# 以ppyoloe为例进行推理部署
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
# 下载PPYOLOE模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 0
|
||||
# GPU推理
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
|
||||
# 昆仑芯XPU推理
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 3
|
||||
# 华为昇腾推理
|
||||
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 4
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
|
||||
- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
|
||||
|
||||
## PaddleDetection C++接口
|
||||
|
||||
### 模型类
|
||||
|
||||
PaddleDetection目前支持6种模型系列,类名分别为`PPYOLOE`, `PicoDet`, `PaddleYOLOX`, `PPYOLO`, `FasterRCNN`,`SSD`,`PaddleYOLOv5`,`PaddleYOLOv6`,`PaddleYOLOv7`,`RTMDet`,`CascadeRCNN`,`PSSDet`,`RetinaNet`,`PPYOLOESOD`,`FCOS`,`TTFNet`,`TOOD`,`GFL`所有类名的构造函数和预测函数在参数上完全一致,本文档以PPYOLOE为例讲解API
|
||||
```c++
|
||||
fastdeploy::vision::detection::PPYOLOE(
|
||||
const string& model_file,
|
||||
const string& params_file,
|
||||
const string& config_file
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||
```
|
||||
|
||||
PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的Paddle模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 配置文件路径,即PaddleDetection导出的部署yaml文件
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> PPYOLOE::Predict(cv::Mat* im, DetectionResult* result)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,40 +1,37 @@
|
||||
# PaddleDetection Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleDetection Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, two steps require confirmation.
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer_xxx.py`快速完成PPYOLOE/PicoDet等模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
This directory provides `infer_xxx.py` examples that quickly deploy PPYOLOE/PicoDet and other models on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete the deployment
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download deployment example code
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/detection/paddledetection/python/
|
||||
|
||||
#下载PPYOLOE模型文件和测试图片
|
||||
# Download the PPYOLOE model file and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
# TensorRT inference on GPU (Note: the first TensorRT run serializes the model, which takes some time. Please be patient.)
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True
|
||||
# 昆仑芯XPU推理
# KunlunXin XPU inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device kunlunxin
# 华为昇腾推理
# Huawei Ascend NPU inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device ascend
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg", width=480px, height=320px />
|
||||
</div>
|
||||
|
||||
## PaddleDetection Python接口
|
||||
## PaddleDetection Python Interface
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
@@ -49,46 +46,38 @@ fastdeploy.vision.detection.PaddleYOLOv5(model_file, params_file, config_file, r
|
||||
fastdeploy.vision.detection.PaddleYOLOv6(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PaddleYOLOv7(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.RTMDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.CascadeRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PSSDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.RetinaNet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PPYOLOESOD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.FCOS(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.TTFNet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.TOOD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.GFL(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleDetection模型加载和初始化,其中model_file, params_file为导出的Paddle部署模型格式, config_file为PaddleDetection同时导出的部署配置yaml文件
|
||||
PaddleDetection model loading and initialization, where model_file and params_file are in the exported Paddle deployment model format, and config_file is the deployment configuration yaml file exported by PaddleDetection at the same time
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理配置yaml文件路径
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path
|
||||
> * **config_file**(str): Inference configuration yaml file path
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default. (use the default configuration)
|
||||
> * **model_format**(ModelFormat): Model format. Paddle format by default
|
||||
|
||||
### predict函数
|
||||
### predict Function
|
||||
|
||||
PaddleDetection中各个模型,包括PPYOLOE/PicoDet/PaddleYOLOX/YOLOv3/PPYOLO/FasterRCNN,均提供如下同样的成员函数用于进行图像的检测
|
||||
PaddleDetection models, including PPYOLOE/PicoDet/PaddleYOLOX/YOLOv3/PPYOLO/FasterRCNN, all provide the following member functions for image detection
|
||||
> ```python
|
||||
> PPYOLOE.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input an image and get the detection result directly.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **image_data**(np.ndarray): Input image data. Note that it must be in HWC layout and BGR format
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
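
Putting the interface and predict descriptions above together, a minimal usage sketch looks like the following (paths assume the ppyoloe_crn_l_300e_coco model and test image downloaded earlier; the `fd.vision.vis_detection` visualization helper is an assumption to verify against `infer_ppyoloe.py`):

```python
import cv2
import fastdeploy as fd

model_dir = "ppyoloe_crn_l_300e_coco"
# Load the exported Paddle model with the default RuntimeOption
model = fd.vision.detection.PPYOLOE(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/infer_cfg.yml")

# predict expects an HWC, BGR image (what cv2.imread returns)
im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)

# Optional visualization (assumed helper name)
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```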
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [PaddleDetection 模型介绍](..)
|
||||
- [PaddleDetection C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [PaddleDetection Model Description](..)
|
||||
- [PaddleDetection C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
@@ -0,0 +1,95 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleDetection Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer_xxx.py`快速完成PPYOLOE/PicoDet等模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/detection/paddledetection/python/
|
||||
|
||||
#下载PPYOLOE模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||
|
||||
# CPU推理
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True
|
||||
# 昆仑芯XPU推理
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device kunlunxin
|
||||
# 华为昇腾推理
|
||||
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device ascend
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
<div align="center">
|
||||
<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg", width=480px, height=320px />
|
||||
</div>
|
||||
|
||||
## PaddleDetection Python接口
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PicoDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PaddleYOLOX(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.YOLOv3(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PPYOLO(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.FasterRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.MaskRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.SSD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PaddleYOLOv5(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PaddleYOLOv6(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PaddleYOLOv7(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.RTMDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.CascadeRCNN(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PSSDet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.RetinaNet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.PPYOLOESOD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.FCOS(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.TTFNet(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.TOOD(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
fastdeploy.vision.detection.GFL(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
|
||||
```
|
||||
|
||||
PaddleDetection模型加载和初始化,其中model_file, params_file为导出的Paddle部署模型格式, config_file为PaddleDetection同时导出的部署配置yaml文件
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径
|
||||
> * **config_file**(str): 推理配置yaml文件路径
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为Paddle
|
||||
|
||||
### predict函数
|
||||
|
||||
PaddleDetection中各个模型,包括PPYOLOE/PicoDet/PaddleYOLOX/YOLOv3/PPYOLO/FasterRCNN,均提供如下同样的成员函数用于进行图像的检测
|
||||
> ```python
|
||||
> PPYOLOE.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [PaddleDetection 模型介绍](..)
|
||||
- [PaddleDetection C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,45 +1,46 @@
|
||||
# PaddleDetection 量化模型部署
|
||||
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
|
||||
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleDetection Quantized Model Deployment
FastDeploy supports the deployment of quantized models and provides a one-click tool for automatic model compression.
Users can use this tool to quantize a model themselves before deployment, or directly download and deploy the quantized models provided by FastDeploy.
|
||||
|
||||
## FastDeploy一键模型自动化压缩工具
|
||||
FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
|
||||
详细教程请见: [一键模型自动化压缩工具](../../../../../tools/common_tools/auto_compression/)
|
||||
## FastDeploy one-click model auto-compression tool
|
||||
FastDeploy provides a one-click auto-compression tool that allows users to quantize models by simply entering a configuration file.
|
||||
Refer to [one-click auto-compression tool](../../../../../tools/common_tools/auto_compression/) for details.
|
||||
|
||||
## 下载量化完成的PP-YOLOE-l模型
|
||||
用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)
|
||||
## Download the quantized PP-YOLOE-l model
|
||||
Users can also directly download the quantized models in the table below. (Click the model name to download it)
|
||||
|
||||
|
||||
Benchmark表格说明:
|
||||
- Runtime时延为模型在各种Runtime上的推理时延,包含CPU->GPU数据拷贝,GPU推理,GPU->CPU数据拷贝时间. 不包含模型各自的前后处理时间.
|
||||
- 端到端时延为模型在实际推理场景中的时延, 包含模型的前后处理.
|
||||
- 所测时延均为推理1000次后求得的平均值, 单位是毫秒.
|
||||
- INT8 + FP16 为在推理INT8量化模型的同时, 给Runtime 开启FP16推理选项
|
||||
- INT8 + FP16 + PM, 为在推理INT8量化模型和开启FP16的同时, 开启使用Pinned Memory的选项,可加速GPU->CPU数据拷贝的速度
|
||||
- 最大加速比, 为FP32时延除以INT8推理的最快时延,得到最大加速比.
|
||||
- 策略为量化蒸馏训练时, 采用少量无标签数据集训练得到量化模型, 并在全量验证集上验证精度, INT8精度并不代表最高的INT8精度.
|
||||
- CPU为Intel(R) Xeon(R) Gold 6271C, 所有测试中固定CPU线程数为1. GPU为Tesla T4, TensorRT版本8.4.15.
|
||||
Benchmark table description:
- Runtime latency: the model's inference latency on various Runtimes, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the model's pre/post-processing time.
- End2End latency: the model's latency in a real inference scenario, including pre/post-processing.
- All measured latencies are averages over 1000 inference runs, in milliseconds.
- INT8 + FP16: the FP16 inference option is enabled for the Runtime while inferring the INT8 quantized model.
- INT8 + FP16 + PM: Pinned Memory is enabled in addition to FP16 while inferring the INT8 quantized model, which speeds up GPU->CPU data copies.
- Maximum speedup ratio: the FP32 latency divided by the fastest (lowest) INT8 inference latency.
- The quantization strategy is quantized distillation training: a small amount of unlabeled data is used to train the quantized model, and accuracy is verified on the full validation set. The reported INT8 accuracy is not necessarily the best achievable INT8 accuracy.
- The CPU is an Intel(R) Xeon(R) Gold 6271C with the number of CPU threads fixed to 1 in all tests. The GPU is a Tesla T4 with TensorRT version 8.4.15.
|
||||
|
||||
|
||||
#### Runtime Benchmark
|
||||
| Model | Inference Backend | Deployment Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8 + FP16 + PM Runtime Latency | Maximum Speedup Ratio | FP32 mAP | INT8 mAP | Quantization Method |
| ------------------- | ----------------- | ------------------- | -------- | -------- | -------- | --------- | -------- | ----- | ----- | ----- |
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | TensorRT | GPU | 27.90 | 6.39 | 6.44 | 5.95 | 4.67 | 51.4 | 50.7 | Quantized distillation training |
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | Paddle-TensorRT | GPU | 30.89 | None | 13.78 | 14.01 | 2.24 | 51.4 | 50.5 | Quantized distillation training |
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | ONNX Runtime | CPU | 1057.82 | 449.52 | None | None | 2.35 | 51.4 | 50.0 | Quantized distillation training |
|
||||
|
||||
NOTE:
|
||||
- TensorRT比Paddle-TensorRT快的原因是在runtime移除了multiclass_nms3算子
|
||||
- The reason why TensorRT is faster than Paddle-TensorRT is that the multiclass_nms3 operator is removed during runtime
|
||||
|
||||
#### End2End Benchmark
| Model | Inference Backend | Deployment Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8 + FP16 + PM End2End Latency | Maximum Speedup Ratio | FP32 mAP | INT8 mAP | Quantization Method |
| ------------------- | ----------------- | ------------------- | -------- | -------- | -------- | --------- | -------- | ----- | ----- | ----- |
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | TensorRT | GPU | 35.75 | 15.42 | 20.70 | 20.85 | 2.32 | 51.4 | 50.7 | Quantized distillation training |
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | Paddle-TensorRT | GPU | 33.48 | None | 18.47 | 18.03 | 1.81 | 51.4 | 50.5 | Quantized distillation training |
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | ONNX Runtime | CPU | 1067.17 | 461.037 | None | None | 2.31 | 51.4 | 50.0 | Quantized distillation training |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Tutorials
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
@@ -0,0 +1,46 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleDetection 量化模型部署
|
||||
FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
|
||||
用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
|
||||
|
||||
## FastDeploy一键模型自动化压缩工具
|
||||
FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
|
||||
详细教程请见: [一键模型自动化压缩工具](../../../../../tools/common_tools/auto_compression/)
|
||||
|
||||
## 下载量化完成的PP-YOLOE-l模型
|
||||
用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)
|
||||
|
||||
|
||||
Benchmark表格说明:
|
||||
- Runtime时延为模型在各种Runtime上的推理时延,包含CPU->GPU数据拷贝,GPU推理,GPU->CPU数据拷贝时间. 不包含模型各自的前后处理时间.
|
||||
- 端到端时延为模型在实际推理场景中的时延, 包含模型的前后处理.
|
||||
- 所测时延均为推理1000次后求得的平均值, 单位是毫秒.
|
||||
- INT8 + FP16 为在推理INT8量化模型的同时, 给Runtime 开启FP16推理选项
|
||||
- INT8 + FP16 + PM, 为在推理INT8量化模型和开启FP16的同时, 开启使用Pinned Memory的选项,可加速GPU->CPU数据拷贝的速度
|
||||
- 最大加速比, 为FP32时延除以INT8推理的最快时延,得到最大加速比.
|
||||
- 策略为量化蒸馏训练时, 采用少量无标签数据集训练得到量化模型, 并在全量验证集上验证精度, INT8精度并不代表最高的INT8精度.
|
||||
- CPU为Intel(R) Xeon(R) Gold 6271C, 所有测试中固定CPU线程数为1. GPU为Tesla T4, TensorRT版本8.4.15.
|
||||
|
||||
|
||||
#### Runtime Benchmark
|
||||
| 模型 |推理后端 |部署硬件 | FP32 Runtime时延 | INT8 Runtime时延 | INT8 + FP16 Runtime时延 | INT8+FP16+PM Runtime时延 | 最大加速比 | FP32 mAP | INT8 mAP | 量化方式 |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar ) | TensorRT | GPU | 27.90 | 6.39 |6.44|5.95 | 4.67 | 51.4 | 50.7 | 量化蒸馏训练 |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar ) | Paddle-TensorRT | GPU | 30.89 |None | 13.78 |14.01 | 2.24 | 51.4 | 50.5 | 量化蒸馏训练 |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | ONNX Runtime | CPU | 1057.82 | 449.52 |None|None | 2.35 |51.4 | 50.0 |量化蒸馏训练 |
|
||||
|
||||
NOTE:
|
||||
- TensorRT比Paddle-TensorRT快的原因是在runtime移除了multiclass_nms3算子
|
||||
|
||||
#### 端到端 Benchmark
|
||||
| 模型 |推理后端 |部署硬件 | FP32 End2End时延 | INT8 End2End时延 | INT8 + FP16 End2End时延 | INT8+FP16+PM End2End时延 | 最大加速比 | FP32 mAP | INT8 mAP | 量化方式 |
|
||||
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar ) | TensorRT | GPU | 35.75 | 15.42 |20.70|20.85 | 2.32 | 51.4 | 50.7 | 量化蒸馏训练 |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar ) | Paddle-TensorRT | GPU | 33.48 |None | 18.47 |18.03 | 1.81 | 51.4 | 50.5 | 量化蒸馏训练 |
|
||||
| [ppyoloe_crn_l_300e_coco](https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar) | ONNX Runtime | CPU | 1067.17 | 461.037 |None|None | 2.31 |51.4 | 50.0 |量化蒸馏训练 |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
@@ -1,45 +1,45 @@
|
||||
# PaddleDetection RKNPU2部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleDetection RKNPU2 Deployment Example
|
||||
|
||||
## 支持模型列表
|
||||
## List of Supported Models
|
||||
|
||||
目前FastDeploy支持如下模型的部署
|
||||
- [PicoDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
|
||||
Now FastDeploy supports the deployment of the following models
|
||||
- [PicoDet models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
|
||||
|
||||
## 准备PaddleDetection部署模型以及转换模型
|
||||
RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下:
|
||||
* Paddle动态图模型转换为ONNX模型,请参考[PaddleDetection导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md)
|
||||
,注意在转换时请设置**export.nms=True**.
|
||||
* ONNX模型转换RKNN模型的过程,请参考[转换文档](../../../../../docs/cn/faq/rknpu2/export.md)进行转换。
|
||||
## Prepare PaddleDetection Deployment Models and Convert Them
Before deploying on RKNPU, the Paddle model needs to be converted to an RKNN model:
* Convert the Paddle dynamic graph model to an ONNX model. Refer to [PaddleDetection Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md), and note that **export.nms=True** must be set during export.
* Convert the ONNX model to an RKNN model. Refer to the [Conversion Document](../../../../../docs/cn/faq/rknpu2/export.md).
|
||||
|
||||
|
||||
## 模型转换example
|
||||
以下步骤均在Ubuntu电脑上完成,请参考配置文档完成转换模型环境配置。下面以Picodet-s为例子,教大家如何转换PaddleDetection模型到RKNN模型。
|
||||
|
||||
### 导出ONNX模型
|
||||
## Model Conversion Example
The following steps are all performed on an Ubuntu machine; refer to the configuration document to set up the model conversion environment. Taking PicoDet-s as an example, this section shows how to convert a PaddleDetection model to an RKNN model.
### Export the ONNX Model
|
||||
```bash
|
||||
# 下载Paddle静态图模型并解压
|
||||
# Download the Paddle static graph model and unzip it
|
||||
wget https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet.tar
|
||||
tar xvf picodet_s_416_coco_lcnet.tar
|
||||
|
||||
# 静态图转ONNX模型,注意,这里的save_file请和压缩包名对齐
|
||||
# Convert the static graph model to ONNX. Note: save_file must match the archive name
|
||||
paddle2onnx --model_dir picodet_s_416_coco_lcnet \
|
||||
--model_filename model.pdmodel \
|
||||
--params_filename model.pdiparams \
|
||||
--save_file picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx \
|
||||
--enable_dev_version True
|
||||
|
||||
# 固定shape
|
||||
# Fix shape
|
||||
python -m paddle2onnx.optimize --input_model picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx \
|
||||
--output_model picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx \
|
||||
--input_shape_dict "{'image':[1,3,416,416]}"
|
||||
```
|
||||
|
||||
### 编写模型导出配置文件
|
||||
以转化RK3568的RKNN模型为例子,我们需要编辑tools/rknpu2/config/RK3568/picodet_s_416_coco_lcnet.yaml,来转换ONNX模型到RKNN模型。
|
||||
### Write the model export configuration file
|
||||
Taking the conversion to an RK3568 RKNN model as an example, we need to edit tools/rknpu2/config/RK3568/picodet_s_416_coco_lcnet.yaml to convert the ONNX model to an RKNN model.
|
||||
|
||||
**修改normalize参数**
|
||||
**Modify normalize parameter**
|
||||
|
||||
如果你需要在NPU上执行normalize操作,请根据你的模型配置normalize参数,例如:
|
||||
If you need to perform the normalize operation on NPU, configure the normalize parameters based on your model. For example:
|
||||
```yaml
|
||||
model_path: ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx
|
||||
output_folder: ./picodet_s_416_coco_lcnet
|
||||
@@ -50,13 +50,13 @@ normalize:
|
||||
outputs: ['tmp_17','p2o.Concat.9']
|
||||
```
|
||||
|
||||
**修改outputs参数**
|
||||
由于Paddle2ONNX版本的不同,转换模型的输出节点名称也有所不同,请使用[Netron](https://netron.app),并找到以下蓝色方框标记的NonMaxSuppression节点,红色方框的节点名称即为目标名称。
|
||||
**Modify outputs parameter**
|
||||
The output node names of the converted model vary with the Paddle2ONNX version. Open the model in [Netron](https://netron.app), find the NonMaxSuppression node marked by the blue box below; the node names in the red boxes are the target names.
|
||||
|
||||
例如,使用Netron可视化后,得到以下图片:
|
||||
For example, we can obtain the following image after visualization with Netron:
|
||||

|
||||
|
||||
找到蓝色方框标记的NonMaxSuppression节点,可以看到红色方框标记的两个节点名称为tmp_17和p2o.Concat.9,因此需要修改outputs参数,修改后如下:
|
||||
Find the NonMaxSuppression node marked by the blue box, and we can see that the two nodes marked by the red boxes are named tmp_17 and p2o.Concat.9. So the outputs parameter needs to be modified as follows:
|
||||
```yaml
|
||||
model_path: ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx
|
||||
output_folder: ./picodet_s_416_coco_lcnet
|
||||
@@ -65,22 +65,22 @@ normalize: None
|
||||
outputs: ['tmp_17','p2o.Concat.9']
|
||||
```
|
||||
|
||||
### 转换模型
|
||||
### Convert the Model
|
||||
```bash
|
||||
|
||||
# ONNX模型转RKNN模型
|
||||
# 转换模型,模型将生成在picodet_s_320_coco_lcnet_non_postprocess目录下
|
||||
# Convert the ONNX model to an RKNN model
|
||||
# The transformed model is in the picodet_s_320_coco_lcnet_non_postprocess directory
|
||||
python tools/rknpu2/export.py --config_path tools/rknpu2/config/picodet_s_416_coco_lcnet.yaml \
|
||||
--target_platform rk3588
|
||||
```
|
||||
|
||||
### 修改模型运行时的配置文件
|
||||
### Modify the Model's Runtime Configuration File
|
||||
|
||||
配置文件中,我们只需要修改**Preprocess**下的**Normalize**和**Permute**.
|
||||
In the config file, we only need to modify **Normalize** and **Permute** under **Preprocess**.
|
||||
|
||||
**删除Permute**
|
||||
**Remove Permute**
|
||||
|
||||
RKNPU只支持NHWC的输入格式,因此需要删除Permute操作.删除后,配置文件Precess部分后如下:
|
||||
Since RKNPU only supports NHWC input, the Permute operation must be removed. After removal, the Preprocess section of the config file looks like this:
|
||||
```yaml
|
||||
Preprocess:
|
||||
- interp: 2
|
||||
@@ -101,9 +101,9 @@ Preprocess:
|
||||
type: NormalizeImage
|
||||
```
|
||||
|
||||
**根据模型转换文件决定是否删除Normalize**
|
||||
**Decide whether to remove Normalize based on the model conversion config file**
|
||||
|
||||
RKNPU支持使用NPU进行Normalize操作,如果你在导出模型时配置了Normalize参数,请删除**Normalize**.删除后配置文件Precess部分如下:
|
||||
RKNPU supports running Normalize on the NPU. If you configured the normalize parameters when exporting the model, remove **Normalize**. After removal, the Preprocess section of the config file looks like this:
|
||||
```yaml
|
||||
Preprocess:
|
||||
- interp: 2
|
||||
@@ -114,7 +114,7 @@ Preprocess:
|
||||
type: Resize
|
||||
```
|
||||
|
||||
## 其他链接
|
||||
- [Cpp部署](./cpp)
|
||||
- [Python部署](./python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
## Other Links
|
||||
- [Cpp Deployment](./cpp)
|
||||
- [Python Deployment](./python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
|
121
examples/vision/detection/paddledetection/rknpu2/README_CN.md
Normal file
@@ -0,0 +1,121 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleDetection RKNPU2部署示例
|
||||
|
||||
## 支持模型列表
|
||||
|
||||
目前FastDeploy支持如下模型的部署
|
||||
- [PicoDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
|
||||
|
||||
## 准备PaddleDetection部署模型以及转换模型
|
||||
RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下:
|
||||
* Paddle动态图模型转换为ONNX模型,请参考[PaddleDetection导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md)
|
||||
,注意在转换时请设置**export.nms=True**.
|
||||
* ONNX模型转换RKNN模型的过程,请参考[转换文档](../../../../../docs/cn/faq/rknpu2/export.md)进行转换。
|
||||
|
||||
|
||||
## 模型转换example
|
||||
以下步骤均在Ubuntu电脑上完成,请参考配置文档完成转换模型环境配置。下面以Picodet-s为例子,教大家如何转换PaddleDetection模型到RKNN模型。
|
||||
|
||||
### 导出ONNX模型
|
||||
```bash
|
||||
# 下载Paddle静态图模型并解压
|
||||
wget https://paddledet.bj.bcebos.com/deploy/Inference/picodet_s_416_coco_lcnet.tar
|
||||
tar xvf picodet_s_416_coco_lcnet.tar
|
||||
|
||||
# 静态图转ONNX模型,注意,这里的save_file请和压缩包名对齐
|
||||
paddle2onnx --model_dir picodet_s_416_coco_lcnet \
|
||||
--model_filename model.pdmodel \
|
||||
--params_filename model.pdiparams \
|
||||
--save_file picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx \
|
||||
--enable_dev_version True
|
||||
|
||||
# 固定shape
|
||||
python -m paddle2onnx.optimize --input_model picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx \
|
||||
--output_model picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx \
|
||||
--input_shape_dict "{'image':[1,3,416,416]}"
|
||||
```
|
||||
|
||||
### 编写模型导出配置文件
|
||||
以转化RK3568的RKNN模型为例子,我们需要编辑tools/rknpu2/config/RK3568/picodet_s_416_coco_lcnet.yaml,来转换ONNX模型到RKNN模型。
|
||||
|
||||
**修改normalize参数**
|
||||
|
||||
如果你需要在NPU上执行normalize操作,请根据你的模型配置normalize参数,例如:
|
||||
```yaml
|
||||
model_path: ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx
|
||||
output_folder: ./picodet_s_416_coco_lcnet
|
||||
target_platform: RK3568
|
||||
normalize:
|
||||
mean: [[0.485,0.456,0.406]]
|
||||
std: [[0.229,0.224,0.225]]
|
||||
outputs: ['tmp_17','p2o.Concat.9']
|
||||
```
|
||||
|
||||
**修改outputs参数**
|
||||
由于Paddle2ONNX版本的不同,转换模型的输出节点名称也有所不同,请使用[Netron](https://netron.app),并找到以下蓝色方框标记的NonMaxSuppression节点,红色方框的节点名称即为目标名称。
|
||||
|
||||
例如,使用Netron可视化后,得到以下图片:
|
||||

|
||||
|
||||
找到蓝色方框标记的NonMaxSuppression节点,可以看到红色方框标记的两个节点名称为tmp_17和p2o.Concat.9,因此需要修改outputs参数,修改后如下:
|
||||
```yaml
|
||||
model_path: ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet.onnx
|
||||
output_folder: ./picodet_s_416_coco_lcnet
|
||||
target_platform: RK3568
|
||||
normalize: None
|
||||
outputs: ['tmp_17','p2o.Concat.9']
|
||||
```
|
||||
|
||||
### 转换模型
|
||||
```bash
|
||||
|
||||
# ONNX模型转RKNN模型
|
||||
# 转换模型,模型将生成在picodet_s_320_coco_lcnet_non_postprocess目录下
|
||||
python tools/rknpu2/export.py --config_path tools/rknpu2/config/picodet_s_416_coco_lcnet.yaml \
|
||||
--target_platform rk3588
|
||||
```
|
||||
|
||||
### 修改模型运行时的配置文件
|
||||
|
||||
配置文件中,我们只需要修改**Preprocess**下的**Normalize**和**Permute**.
|
||||
|
||||
**删除Permute**
|
||||
|
||||
RKNPU只支持NHWC的输入格式,因此需要删除Permute操作.删除后,配置文件Precess部分后如下:
|
||||
```yaml
|
||||
Preprocess:
|
||||
- interp: 2
|
||||
keep_ratio: false
|
||||
target_size:
|
||||
- 416
|
||||
- 416
|
||||
type: Resize
|
||||
- is_scale: true
|
||||
mean:
|
||||
- 0.485
|
||||
- 0.456
|
||||
- 0.406
|
||||
std:
|
||||
- 0.229
|
||||
- 0.224
|
||||
- 0.225
|
||||
type: NormalizeImage
|
||||
```
|
||||
|
||||
**根据模型转换文件决定是否删除Normalize**
|
||||
|
||||
RKNPU支持使用NPU进行Normalize操作,如果你在导出模型时配置了Normalize参数,请删除**Normalize**.删除后配置文件Precess部分如下:
|
||||
```yaml
|
||||
Preprocess:
|
||||
- interp: 2
|
||||
keep_ratio: false
|
||||
target_size:
|
||||
- 416
|
||||
- 416
|
||||
type: Resize
|
||||
```
|
||||
|
||||
## 其他链接
|
||||
- [Cpp部署](./cpp)
|
||||
- [Python部署](./python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
@@ -1,11 +1,12 @@
|
||||
# PP-YOLOE 量化模型在 RV1126 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 RV1126 上。
|
||||
English | [简体中文](README_CN.md)
|
||||
# Deploy PP-YOLOE Quantized Model on RV1126
FastDeploy now supports deploying PP-YOLOE quantized models to RV1126 based on Paddle Lite.
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
For model quantization and the download of quantized models, refer to [Model Quantization](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Tutorials
|
||||
|
||||
在 RV1126 上只支持 C++ 的部署。
|
||||
Only C++ deployment is supported on RV1126.
|
||||
|
||||
- [C++部署](cpp)
|
||||
- [C++ Deployment](cpp)
|
||||
|
@@ -0,0 +1,12 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PP-YOLOE 量化模型在 RV1126 上的部署
|
||||
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 RV1126 上。
|
||||
|
||||
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
在 RV1126 上只支持 C++ 的部署。
|
||||
|
||||
- [C++部署](cpp)
|
@@ -1,64 +1,65 @@
|
||||
# PaddleDetection 服务化部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# PaddleDetection Serving Deployment Example
|
||||
|
||||
本文档以PP-YOLOE模型(ppyoloe_crn_l_300e_coco)为例,进行详细介绍。其他PaddleDetection模型都已支持服务化部署,只需将下述命令中的模型和配置名字修改成要部署模型的名字。
|
||||
This document uses the PP-YOLOE model (ppyoloe_crn_l_300e_coco) as an example for a detailed walkthrough. All other PaddleDetection models also support serving deployment; just replace the model and config names in the commands below with those of the model to deploy.
|
||||
|
||||
PaddleDetection模型导出和预训练模型下载请看[PaddleDetection模型部署](../README.md)文档。
|
||||
For PaddleDetection model export and download of pre-trained models, refer to [PaddleDetection Model Deployment](../README.md).
|
||||
|
||||
在服务化部署前,需确认
|
||||
Before serving deployment, confirm
|
||||
|
||||
- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
|
||||
- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
|
||||
|
||||
|
||||
## 启动服务
|
||||
## Start Service
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download the example code for deployment
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/detection/paddledetection/serving
|
||||
|
||||
#下载PPYOLOE模型文件和测试图片
|
||||
# Download PPYOLOE model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||
|
||||
# 将配置文件放入预处理目录
|
||||
# Put the configuration file into the preprocessing directory
|
||||
mv ppyoloe_crn_l_300e_coco/infer_cfg.yml models/preprocess/1/
|
||||
|
||||
# 将模型放入 models/runtime/1目录下, 并重命名为model.pdmodel和model.pdiparams
|
||||
# Place the model under models/runtime/1 and rename them to model.pdmodel and model.pdiparams
|
||||
mv ppyoloe_crn_l_300e_coco/model.pdmodel models/runtime/1/model.pdmodel
|
||||
mv ppyoloe_crn_l_300e_coco/model.pdiparams models/runtime/1/model.pdiparams
|
||||
|
||||
# 将ppdet和runtime中的ppyoloe配置文件重命名成标准的config名字
|
||||
# 其他模型比如faster_rcnn就将faster_rcnn_config.pbtxt重命名为config.pbtxt
|
||||
# Rename the ppyoloe config files in ppdet and runtime to standard config names
|
||||
# For other models such as faster_rcnn, rename faster_rcnn_config.pbtxt to config.pbtxt
|
||||
cp models/ppdet/ppyoloe_config.pbtxt models/ppdet/config.pbtxt
|
||||
cp models/runtime/ppyoloe_runtime_config.pbtxt models/runtime/config.pbtxt
|
||||
|
||||
# 注意: 由于mask_rcnn模型多一个输出,需要将后处理目录(models/postprocess)中的mask_config.pbtxt重命名为config.pbtxt
|
||||
# Attention: Given that the mask_rcnn model has one more output, we need to rename mask_config.pbtxt to config.pbtxt in the postprocess directory (models/postprocess)
|
||||
|
||||
# 拉取fastdeploy镜像(x.y.z为镜像版本号,需替换成fastdeploy版本数字)
|
||||
# GPU镜像
|
||||
# Pull the FastDeploy image (x.y.z is the image version; replace it with the actual FastDeploy version number)
|
||||
# GPU image
|
||||
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
|
||||
# CPU镜像
|
||||
# CPU image
|
||||
docker pull paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10
|
||||
|
||||
|
||||
# 运行容器.容器名字为 fd_serving, 并挂载当前目录为容器的 /serving 目录
|
||||
# Run a container named fd_serving and mount the current directory to /serving inside the container
|
||||
nvidia-docker run -it --net=host --name fd_serving --shm-size="1g" -v `pwd`/:/serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash
|
||||
|
||||
# 启动服务(不设置CUDA_VISIBLE_DEVICES环境变量,会拥有所有GPU卡的调度权限)
|
||||
# Start the service (if the CUDA_VISIBLE_DEVICES environment variable is not set, the server can schedule all GPU cards)
|
||||
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models
|
||||
```
|
||||
>> **注意**:
|
||||
>> **Attention**:
|
||||
|
||||
>> 由于mask_rcnn模型多一个输出,部署mask_rcnn需要将后处理目录(models/postprocess)中的mask_config.pbtxt重命名为config.pbtxt
|
||||
>> Given that the mask_rcnn model has one more output, we need to rename mask_config.pbtxt to config.pbtxt in the postprocess directory (models/postprocess)
|
||||
|
||||
>> 拉取镜像请看[服务化部署主文档](../../../../../serving/README_CN.md)
|
||||
>> To pull images, refer to [Service Deployment Master Document](../../../../../serving/README_CN.md)
|
||||
|
||||
>> 执行fastdeployserver启动服务出现"Address already in use", 请使用`--grpc-port`指定grpc端口号来启动服务,同时更改客户端示例中的请求端口号.
|
||||
>> If "Address already in use" appears when running fastdeployserver to start the service, use `--grpc-port` to specify the port number and change the request port number in the client demo.
|
||||
|
||||
>> 其他启动参数可以使用 fastdeployserver --help 查看
|
||||
>> Other startup parameters can be viewed with `fastdeployserver --help`
|
||||
|
||||
服务启动成功后, 会有以下输出:
|
||||
When the service starts successfully, the following output will be shown:
|
||||
```
|
||||
......
|
||||
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
|
||||
@@ -67,21 +68,21 @@ I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0
|
||||
```
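
Once this output appears, readiness can also be checked programmatically before sending requests. The sketch below is only an illustration: it assumes `tritonclient[all]` is installed on the host, gRPC is exposed on the default port 8001, and the ensemble model is named `ppdet` (matching the `models/ppdet` directory above); adjust these if your setup differs.

```python
# Minimal readiness check against the Triton-based fastdeployserver.
# Assumptions: tritonclient[all] installed, gRPC on default port 8001,
# ensemble model name "ppdet" (matching models/ppdet above).
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="127.0.0.1:8001")
print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready("ppdet"))
```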
|
||||
|
||||
|
||||
## 客户端请求
|
||||
## Client Request
|
||||
|
||||
在物理机器中执行以下命令,发送grpc请求并输出结果
|
||||
Execute the following commands on the host machine to send a gRPC request and print the results
|
||||
```
|
||||
#下载测试图片
|
||||
# Download test images
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
#安装客户端依赖
|
||||
# Install client dependencies
|
||||
python3 -m pip install tritonclient[all]
|
||||
|
||||
# 发送请求
|
||||
# Send requests
|
||||
python3 paddledet_grpc_client.py
|
||||
```
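
For reference, `paddledet_grpc_client.py` builds a request roughly like the sketch below. The model and tensor names here are assumptions (model `ppdet`, input tensor `INPUT` taking a batched uint8 HWC image, output tensor `DET_RESULT` as printed in the sample output below); check the config.pbtxt files for the actual names.

```python
# Sketch of a gRPC client request to the serving pipeline started above.
# Assumptions (verify against the config.pbtxt files): model name "ppdet",
# input tensor "INPUT" (uint8, NHWC), output tensor "DET_RESULT".
import cv2
import numpy as np
import tritonclient.grpc as grpcclient
from tritonclient.utils import np_to_triton_dtype

client = grpcclient.InferenceServerClient(url="127.0.0.1:8001")

img = cv2.imread("000000014439.jpg")                  # BGR, HWC
batch = np.expand_dims(img, axis=0).astype(np.uint8)  # add batch dimension

inputs = [grpcclient.InferInput("INPUT", list(batch.shape),
                                np_to_triton_dtype(batch.dtype))]
inputs[0].set_data_from_numpy(batch)
outputs = [grpcclient.InferRequestedOutput("DET_RESULT")]

response = client.infer(model_name="ppdet", inputs=inputs, outputs=outputs)
print("output_name: DET_RESULT")
print(response.as_numpy("DET_RESULT"))
```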
|
||||
|
||||
发送请求成功后,会返回json格式的检测结果并打印输出:
|
||||
After the request is sent successfully, the detection result is returned in JSON format and printed:
|
||||
```
|
||||
output_name: DET_RESULT
|
||||
[[159.93016052246094, 82.35527038574219, 199.8546600341797, 164.68682861328125],
|
||||
@@ -89,6 +90,6 @@ output_name: DET_RESULT
|
||||
[60.200584411621094, 123.73260498046875, 108.83859252929688, 169.07467651367188]]
|
||||
```
|
||||
|
||||
## 配置修改
|
||||
## Configuration Change
|
||||
|
||||
当前默认配置在GPU上运行Paddle引擎, 如果要在CPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
|
||||
The current default configuration runs the Paddle engine on GPU. To run it on CPU or with other inference engines, modify the configuration in `models/runtime/config.pbtxt`. Refer to the [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for details.
|
||||
|
@@ -0,0 +1,95 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PaddleDetection 服务化部署示例
|
||||
|
||||
本文档以PP-YOLOE模型(ppyoloe_crn_l_300e_coco)为例,进行详细介绍。其他PaddleDetection模型都已支持服务化部署,只需将下述命令中的模型和配置名字修改成要部署模型的名字。
|
||||
|
||||
PaddleDetection模型导出和预训练模型下载请看[PaddleDetection模型部署](../README.md)文档。
|
||||
|
||||
在服务化部署前,需确认
|
||||
|
||||
- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
|
||||
|
||||
|
||||
## 启动服务
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/detection/paddledetection/serving
|
||||
|
||||
#下载PPYOLOE模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||
|
||||
# 将配置文件放入预处理目录
|
||||
mv ppyoloe_crn_l_300e_coco/infer_cfg.yml models/preprocess/1/
|
||||
|
||||
# 将模型放入 models/runtime/1目录下, 并重命名为model.pdmodel和model.pdiparams
|
||||
mv ppyoloe_crn_l_300e_coco/model.pdmodel models/runtime/1/model.pdmodel
|
||||
mv ppyoloe_crn_l_300e_coco/model.pdiparams models/runtime/1/model.pdiparams
|
||||
|
||||
# 将ppdet和runtime中的ppyoloe配置文件重命名成标准的config名字
|
||||
# 其他模型比如faster_rcnn就将faster_rcnn_config.pbtxt重命名为config.pbtxt
|
||||
cp models/ppdet/ppyoloe_config.pbtxt models/ppdet/config.pbtxt
|
||||
cp models/runtime/ppyoloe_runtime_config.pbtxt models/runtime/config.pbtxt
|
||||
|
||||
# 注意: 由于mask_rcnn模型多一个输出,需要将后处理目录(models/postprocess)中的mask_config.pbtxt重命名为config.pbtxt
|
||||
|
||||
# 拉取fastdeploy镜像(x.y.z为镜像版本号,需替换成fastdeploy版本数字)
|
||||
# GPU镜像
|
||||
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
|
||||
# CPU镜像
|
||||
docker pull paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10
|
||||
|
||||
|
||||
# 运行容器.容器名字为 fd_serving, 并挂载当前目录为容器的 /serving 目录
|
||||
nvidia-docker run -it --net=host --name fd_serving --shm-size="1g" -v `pwd`/:/serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash
|
||||
|
||||
# 启动服务(不设置CUDA_VISIBLE_DEVICES环境变量,会拥有所有GPU卡的调度权限)
|
||||
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models
|
||||
```
|
||||
>> **注意**:
|
||||
|
||||
>> 由于mask_rcnn模型多一个输出,部署mask_rcnn需要将后处理目录(models/postprocess)中的mask_config.pbtxt重命名为config.pbtxt
|
||||
|
||||
>> 拉取镜像请看[服务化部署主文档](../../../../../serving/README_CN.md)
|
||||
|
||||
>> 执行fastdeployserver启动服务出现"Address already in use", 请使用`--grpc-port`指定grpc端口号来启动服务,同时更改客户端示例中的请求端口号.
|
||||
|
||||
>> 其他启动参数可以使用 fastdeployserver --help 查看
|
||||
|
||||
服务启动成功后, 会有以下输出:
|
||||
```
|
||||
......
|
||||
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
|
||||
I0928 04:51:15.785177 206 http_server.cc:2815] Started HTTPService at 0.0.0.0:8000
|
||||
I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
|
||||
```
|
||||
|
||||
|
||||
## 客户端请求
|
||||
|
||||
在物理机器中执行以下命令,发送grpc请求并输出结果
|
||||
```
|
||||
#下载测试图片
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
#安装客户端依赖
|
||||
python3 -m pip install tritonclient[all]
|
||||
|
||||
# 发送请求
|
||||
python3 paddledet_grpc_client.py
|
||||
```
|
||||
|
||||
发送请求成功后,会返回json格式的检测结果并打印输出:
|
||||
```
|
||||
output_name: DET_RESULT
|
||||
[[159.93016052246094, 82.35527038574219, 199.8546600341797, 164.68682861328125],
|
||||
... ...,
|
||||
[60.200584411621094, 123.73260498046875, 108.83859252929688, 169.07467651367188]]
|
||||
```
|
||||
|
||||
## 配置修改
|
||||
|
||||
当前默认配置在GPU上运行Paddle引擎, 如果要在CPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
|
@@ -1,18 +1,19 @@
|
||||
# RKYOLO准备部署模型
|
||||
English | [简体中文](README_CN.md)
|
||||
|
||||
RKYOLO参考[rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo)的代码
|
||||
对RKYOLO系列模型进行了封装,目前支持RKYOLOV5系列模型的部署。
|
||||
# RKYOLO Ready-to-deploy Model
|
||||
|
||||
## 支持模型列表
|
||||
RKYOLO models are encapsulated with reference to the code of [rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo). Now we support the deployment of RKYOLOV5 models.
|
||||
|
||||
## List of Supported Models
|
||||
|
||||
* RKYOLOV5
|
||||
|
||||
## 模型转换example
|
||||
## Model Conversion Example
|
||||
|
||||
请参考[RKNN_model_convert](https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo/RKNN_model_convert)
|
||||
Please refer to [RKNN_model_convert](https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo/RKNN_model_convert)
|
||||
|
||||
|
||||
## 其他链接
|
||||
- [Cpp部署](./cpp)
|
||||
- [Python部署](./python)
|
||||
- [视觉模型预测结果](../../../../docs/api/vision_results/)
|
||||
## Other Links
|
||||
- [Cpp deployment](./cpp)
|
||||
- [Python deployment](./python)
|
||||
- [Visual model predicting results](../../../../docs/api/vision_results/)
|
||||
|
examples/vision/detection/rkyolo/README_CN.md (new file, 19 lines)
@@ -0,0 +1,19 @@
|
||||
[English](README.md) | 简体中文
|
||||
# RKYOLO准备部署模型
|
||||
|
||||
RKYOLO参考[rknn_model_zoo](https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo)的代码
|
||||
对RKYOLO系列模型进行了封装,目前支持RKYOLOV5系列模型的部署。
|
||||
|
||||
## 支持模型列表
|
||||
|
||||
* RKYOLOV5
|
||||
|
||||
## 模型转换example
|
||||
|
||||
请参考[RKNN_model_convert](https://github.com/airockchip/rknn_model_zoo/tree/main/models/CV/object_detection/yolo/RKNN_model_convert)
|
||||
|
||||
|
||||
## 其他链接
|
||||
- [Cpp部署](./cpp)
|
||||
- [Python部署](./python)
|
||||
- [视觉模型预测结果](../../../../docs/api/vision_results/)
|
@@ -1,48 +1,51 @@
|
||||
# ScaledYOLOv4准备部署模型
|
||||
English | [简体中文](README_CN.md)
|
||||
# ScaledYOLOv4 Ready-to-deploy Model
|
||||
|
||||
- ScaledYOLOv4部署实现来自[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4)的代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/ScaledYOLOv4)。
|
||||
- The ScaledYOLOv4 deployment is based on the code of [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) and [Pre-trained Model on COCO](https://github.com/WongKinYiu/ScaledYOLOv4).
|
||||
|
||||
- (1)[官方库](https://github.com/WongKinYiu/ScaledYOLOv4)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
|
||||
- (2)自己数据训练的ScaledYOLOv4模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
- (1)The *.pt provided by the [Official Repository](https://github.com/WongKinYiu/ScaledYOLOv4) can be deployed after [exporting the ONNX model](#导出ONNX模型);
|
||||
- (2)For ScaledYOLOv4 models trained on your own data, export the ONNX model as described in [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B), then refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
## Export the ONNX Model
|
||||
|
||||
|
||||
访问[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4)官方github库,按照指引下载安装,下载`scaledyolov4.pt` 模型,利用 `models/export.py` 得到`onnx`格式文件。如果您导出的`onnx`模型出现问题,可以参考[ScaledYOLOv4#401](https://github.com/WongKinYiu/ScaledYOLOv4/issues/401)的解决办法
|
||||
Visit the official [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) GitHub repository, follow the instructions to download and install it, download the `scaledyolov4.pt` model, and use `models/export.py` to obtain a file in `onnx` format. If you run into problems with the exported `onnx` model, refer to [ScaledYOLOv4#401](https://github.com/WongKinYiu/ScaledYOLOv4/issues/401) for a solution.
|
||||
|
||||
|
||||
```bash
|
||||
#下载ScaledYOLOv4模型文件
|
||||
# Download the ScaledYOLOv4 model file
|
||||
# Download from Google Drive: https://drive.google.com/file/d/1aXZZE999sHMP1gev60XhNChtHPRMH3Fz/view?usp=sharing
|
||||
|
||||
# 导出onnx格式文件
|
||||
# Export the file in onnx format
|
||||
python models/export.py --weights PATH/TO/scaledyolov4-xx.pt --img-size 640
|
||||
```
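
Before moving on to deployment, a quick structural check of the exported file can catch export problems early. The sketch below is optional; it assumes the `onnx` Python package is installed and uses `scaled_yolov4-p5.onnx` only as an example file name.

```python
# Optional sanity check on the exported ONNX file.
# Assumptions: pip install onnx; "scaled_yolov4-p5.onnx" is an example name.
import onnx

model = onnx.load("scaled_yolov4-p5.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed

for inp in model.graph.input:
    dims = [d.dim_value or d.dim_param for d in inp.type.tensor_type.shape.dim]
    print("input:", inp.name, dims)
for out in model.graph.output:
    dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
    print("output:", out.name, dims)
```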
|
||||
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
## Download Pre-trained ONNX Model
|
||||
|
||||
为了方便开发者的测试,下面提供了ScaledYOLOv4导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 | 备注 |
|
||||
For developers' testing, models exported by ScaledYOLOv4 are provided below. Developers can download them directly. (The accuracy in the following table is derived from the source official repository)
|
||||
| Model | Size | Accuracy | Note |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [ScaledYOLOv4-P5-896](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5-896.onnx) | 271MB | 51.2% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5+BoF-896](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5_-896.onnx) | 271MB | 51.7% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6-1280.onnx) | 487MB | 53.9% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6+BoF-1280](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_-1280.onnx) | 487MB | 54.4% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P7-1536](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7-1536.onnx) | 1.1GB | 55.0% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx) | 271MB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5_.onnx) | 271MB | -| 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6.onnx) | 487MB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_.onnx) | 487MB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P7](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7.onnx) | 1.1GB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5-896](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5-896.onnx) | 271MB | 51.2% | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5+BoF-896](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5_-896.onnx) | 271MB | 51.7% | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6-1280.onnx) | 487MB | 53.9% | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6+BoF-1280](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_-1280.onnx) | 487MB | 54.4% | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P7-1536](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7-1536.onnx) | 1.1GB | 55.0% | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx) | 271MB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5_.onnx) | 271MB | -| This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6.onnx) | 487MB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_.onnx) | 487MB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P7](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7.onnx) | 1.1GB | - | This model file is sourced from [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
||||
|
||||
## 版本说明
|
||||
## Release Note
|
||||
|
||||
- 本版本文档和代码基于[ScaledYOLOv4 CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) 编写
|
||||
- Document and code are based on [ScaledYOLOv4 CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415)
|
||||
|
examples/vision/detection/scaledyolov4/README_CN.md (new file, 49 lines)
@@ -0,0 +1,49 @@
|
||||
[English](README.md) | 简体中文
|
||||
# ScaledYOLOv4准备部署模型
|
||||
|
||||
- ScaledYOLOv4部署实现来自[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4)的代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/ScaledYOLOv4)。
|
||||
|
||||
- (1)[官方库](https://github.com/WongKinYiu/ScaledYOLOv4)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
|
||||
- (2)自己数据训练的ScaledYOLOv4模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
|
||||
|
||||
访问[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4)官方github库,按照指引下载安装,下载`scaledyolov4.pt` 模型,利用 `models/export.py` 得到`onnx`格式文件。如果您导出的`onnx`模型出现问题,可以参考[ScaledYOLOv4#401](https://github.com/WongKinYiu/ScaledYOLOv4/issues/401)的解决办法
|
||||
|
||||
```bash
|
||||
#下载ScaledYOLOv4模型文件
|
||||
# Download from Google Drive: https://drive.google.com/file/d/1aXZZE999sHMP1gev60XhNChtHPRMH3Fz/view?usp=sharing
|
||||
|
||||
# 导出onnx格式文件
|
||||
python models/export.py --weights PATH/TO/scaledyolov4-xx.pt --img-size 640
|
||||
```
|
||||
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
|
||||
为了方便开发者的测试,下面提供了ScaledYOLOv4导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 | 备注 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [ScaledYOLOv4-P5-896](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5-896.onnx) | 271MB | 51.2% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5+BoF-896](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5_-896.onnx) | 271MB | 51.7% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6-1280.onnx) | 487MB | 53.9% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6+BoF-1280](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_-1280.onnx) | 487MB | 54.4% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P7-1536](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7-1536.onnx) | 1.1GB | 55.0% | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx) | 271MB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P5+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5_.onnx) | 271MB | -| 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6.onnx) | 487MB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P6+BoF](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p6_.onnx) | 487MB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
| [ScaledYOLOv4-P7](https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p7.onnx) | 1.1GB | - | 此模型文件来源于[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4),GPL-3.0 License |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
|
||||
|
||||
## 版本说明
|
||||
|
||||
- 本版本文档和代码基于[ScaledYOLOv4 CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) 编写
|
@@ -1,46 +1,47 @@
|
||||
# ScaledYOLOv4 C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# ScaledYOLOv4 C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成ScaledYOLOv4在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides an example in which `infer.cc` quickly completes the deployment of ScaledYOLOv4 on CPU/GPU, as well as on GPU with TensorRT acceleration.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, two steps require confirmation
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的ScaledYOLOv4模型文件和测试图片
|
||||
# Download the official converted ScaledYOLOv4 model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo scaled_yolov4-p5.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo scaled_yolov4-p5.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
./infer_demo scaled_yolov4-p5.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301908-7027cf41-af51-4485-bd32-87aca0e77336.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above commands work for Linux or MacOS. For how to use the SDK on Windows, refer to:
|
||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## ScaledYOLOv4 C++接口
|
||||
## ScaledYOLOv4 C++ Interface
|
||||
|
||||
### ScaledYOLOv4类
|
||||
### ScaledYOLOv4 Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::ScaledYOLOv4(
|
||||
@@ -50,16 +51,16 @@ fastdeploy::vision::detection::ScaledYOLOv4(
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
ScaledYOLOv4 model loading and initialization, where model_file is the exported model in ONNX format.
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> ScaledYOLOv4::Predict(cv::Mat* im, DetectionResult* result,
|
||||
@@ -67,26 +68,26 @@ ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格
|
||||
> float nms_iou_threshold = 0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
> > * **im**: Input image, which must be in HWC, BGR format
|
||||
> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
|
||||
> > * **conf_threshold**: Filtering threshold of detection box confidence
|
||||
> > * **nms_iou_threshold**: iou threshold during NMS processing
|
||||
|
||||
### 类成员变量
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
### Class Member Variable
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(vector<float>): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=ture` 表示不使用填充的方式,默认值为`is_no_pad=false`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=false`
|
||||
> > * **stride**(int): 配合`stris_mini_pad`成员变量使用, 默认值为`stride=32`
|
||||
> > * **size**(vector<int>): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [640, 640]
|
||||
> > * **padding_value**(vector<float>): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [114, 114, 114]
|
||||
> > * **is_no_pad**(bool): Specify whether the image is resized without padding. `is_no_pad=true` means no padding is used. Default `is_no_pad=false`
|
||||
> > * **is_mini_pad**(bool): This parameter sets the width and height of the image after resize to the value nearest to the `size` member variable and to the point where the padded pixel size is divisible by the `stride` member variable. Default `is_mini_pad=false`
|
||||
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/detection/scaledyolov4/cpp/README_CN.md (new file, 93 lines)
@@ -0,0 +1,93 @@
|
||||
[English](README.md) | 简体中文
|
||||
# ScaledYOLOv4 C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成ScaledYOLOv4在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的ScaledYOLOv4模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo scaled_yolov4-p5.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
./infer_demo scaled_yolov4-p5.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo scaled_yolov4-p5.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301908-7027cf41-af51-4485-bd32-87aca0e77336.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## ScaledYOLOv4 C++接口
|
||||
|
||||
### ScaledYOLOv4类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::ScaledYOLOv4(
|
||||
const string& model_file,
|
||||
const string& params_file = "",
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> ScaledYOLOv4::Predict(cv::Mat* im, DetectionResult* result,
|
||||
> float conf_threshold = 0.25,
|
||||
> float nms_iou_threshold = 0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
|
||||
### 类成员变量
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(vector<float>): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=ture` 表示不使用填充的方式,默认值为`is_no_pad=false`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=false`
|
||||
> > * **stride**(int): 配合`stris_mini_pad`成员变量使用, 默认值为`stride=32`
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,84 +1,82 @@
|
||||
# ScaledYOLOv4 Python部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# ScaledYOLOv4 Python Deployment Example
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, two steps require confirmation
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成ScaledYOLOv4在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
This directory provides an example in which `infer.py` quickly completes the deployment of ScaledYOLOv4 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete the deployment
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
# Download the example code for deployment
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/scaledyolov4/python/
|
||||
|
||||
#下载scaledyolov4模型文件和测试图片
|
||||
# Download scaledyolov4 model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301908-7027cf41-af51-4485-bd32-87aca0e77336.jpg">
|
||||
|
||||
## ScaledYOLOv4 Python接口
|
||||
## ScaledYOLOv4 Python Interface
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
ScaledYOLOv4 model loading and initialization, among which model_file is the exported ONNX model format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
### predict函数
|
||||
### predict function
|
||||
|
||||
> ```python
|
||||
> ScaledYOLOv4.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Takes an input image and directly returns the detection result (see the usage sketch after this section).
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **conf_threshold**(float): 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**(float): NMS处理过程中iou阈值
|
||||
> > * **image_data**(np.ndarray): Input image, which must be in HWC, BGR format
|
||||
> > * **conf_threshold**(float): Filtering threshold of detection box confidence
|
||||
> > * **nms_iou_threshold**(float): iou threshold during NMS processing
|
||||
|
||||
> **返回**
|
||||
> **Return**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > Return `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
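
Putting the pieces above together, a minimal prediction script looks roughly like the sketch below. It mirrors what `infer.py` does in simplified form; the visualization helper `fastdeploy.vision.vis_detection` is assumed to be available in your FastDeploy version.

```python
# Minimal usage sketch for the Python interface documented above.
# Assumes FastDeploy and OpenCV are installed and the files from the
# commands above (scaled_yolov4-p5.onnx, 000000014439.jpg) are present.
import cv2
import fastdeploy as fd

model = fd.vision.detection.ScaledYOLOv4("scaled_yolov4-p5.onnx")

im = cv2.imread("000000014439.jpg")  # HWC, BGR
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)
print(result)                        # fastdeploy.vision.DetectionResult

# Visualization helper; name assumed, check your FastDeploy version.
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```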
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
### Class Member Property
|
||||
|
||||
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results (see the sketch after this list)
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(list[float]): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=True` 表示不使用填充的方式,默认值为`is_no_pad=False`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=False`
|
||||
> > * **stride**(int): 配合`stris_mini_padide`成员变量使用, 默认值为`stride=32`
|
||||
> > * **size**(list[int]): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [640, 640]
|
||||
> > * **padding_value**(list[float]): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [114, 114, 114]
|
||||
> > * **is_no_pad**(bool): Specify whether the image is resized without padding. `is_no_pad=True` means no padding is used. Default `is_no_pad=False`
|
||||
> > * **is_mini_pad**(bool): This parameter sets the width and height of the image after resize to the value nearest to the `size` member variable and to the point where the padded pixel size is divisible by the `stride` member variable. Default `is_mini_pad=False`
|
||||
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
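
As an illustration of the member properties listed above, the sketch below adjusts the preprocessing before calling `predict`. The property names follow this document; whether direct assignment is accepted may depend on your FastDeploy version.

```python
# Sketch: adjusting the documented preprocessing properties before prediction.
# Property names (size, padding_value, is_no_pad, is_mini_pad, stride) follow
# the list above; behavior may vary across FastDeploy versions.
import cv2
import fastdeploy as fd

model = fd.vision.detection.ScaledYOLOv4("scaled_yolov4-p5.onnx")
model.size = [416, 416]                      # resize target [width, height]
model.padding_value = [114.0, 114.0, 114.0]  # per-channel padding value
model.is_mini_pad = True                     # pad only to a multiple of stride
model.stride = 32

im = cv2.imread("000000014439.jpg")
print(model.predict(im))
```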
|
||||
|
||||
|
||||
|
||||
## 其它文档
|
||||
## Other Documents
|
||||
|
||||
- [ScaledYOLOv4 模型介绍](..)
|
||||
- [ScaledYOLOv4 C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [ScaledYOLOv4 Model Description](..)
|
||||
- [ScaledYOLOv4 C++ Deployment](../cpp)
|
||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|
examples/vision/detection/scaledyolov4/python/README_CN.md (new file, 85 lines)
@@ -0,0 +1,85 @@
|
||||
[English](README.md) | 简体中文
|
||||
# ScaledYOLOv4 Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成ScaledYOLOv4在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/scaledyolov4/python/
|
||||
|
||||
#下载scaledyolov4模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/scaled_yolov4-p5.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
python infer.py --model scaled_yolov4-p5.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301908-7027cf41-af51-4485-bd32-87aca0e77336.jpg">
|
||||
|
||||
## ScaledYOLOv4 Python接口
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.ScaledYOLOv4(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
ScaledYOLOv4模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> ScaledYOLOv4.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **conf_threshold**(float): 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**(float): NMS处理过程中iou阈值
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(list[float]): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=True` 表示不使用填充的方式,默认值为`is_no_pad=False`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=False`
|
||||
> > * **stride**(int): 配合`stris_mini_padide`成员变量使用, 默认值为`stride=32`
|
||||
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [ScaledYOLOv4 模型介绍](..)
|
||||
- [ScaledYOLOv4 C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,46 +1,47 @@
|
||||
# YOLOR准备部署模型
|
||||
English | [简体中文](README_CN.md)
|
||||
# YOLOR Ready-to-deploy Model
|
||||
|
||||
- YOLOR部署实现来自[YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights)的代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/yolor/releases/tag/weights)。
|
||||
- The YOLOR deployment is based on the code of [YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights) and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolor/releases/tag/weights).
|
||||
|
||||
- (1)[官方库](https://github.com/WongKinYiu/yolor/releases/tag/weights)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署,*.pose模型不支持部署;
|
||||
- (2)自己数据训练的YOLOR模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
- (1)The *.pt provided by the [Official Repository](https://github.com/WongKinYiu/yolor/releases/tag/weights) can be deployed after [exporting the ONNX model](#导出ONNX模型); deployment of *.pose models is not supported;
|
||||
- (2)For YOLOR models trained on your own data, export the ONNX model as described in [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B), then refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
## Export the ONNX Model
|
||||
|
||||
|
||||
访问[YOLOR](https://github.com/WongKinYiu/yolor)官方github库,按照指引下载安装,下载`yolor.pt` 模型,利用 `models/export.py` 得到`onnx`格式文件。如果您导出的`onnx`模型出现精度不达标或者是数据维度的问题,可以参考[yolor#32](https://github.com/WongKinYiu/yolor/issues/32)的解决办法
|
||||
Visit the official [YOLOR](https://github.com/WongKinYiu/yolor) github repository, follow the guidelines to download the `yolor.pt` model, and employ `models/export.py` to get the file in `onnx` format. If the exported `onnx` model has a substandard accuracy or other problems about data dimension, you can refer to [yolor#32](https://github.com/WongKinYiu/yolor/issues/32) for the solution.
|
||||
|
||||
```bash
|
||||
#下载yolor模型文件
|
||||
# Download yolor model file
|
||||
wget https://github.com/WongKinYiu/yolor/releases/download/weights/yolor-d6-paper-570.pt
|
||||
|
||||
# 导出onnx格式文件
|
||||
# Export the file in onnx format
|
||||
python models/export.py --weights PATH/TO/yolor-xx-xx-xx.pt --img-size 640
|
||||
```
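
Since dimension and accuracy issues with the exported model are the most common failure mode (see the yolor#32 link above), a quick shape check with onnxruntime can be useful. This is only a sketch: it assumes `onnxruntime` is installed, uses an example file name, and feeds a dummy 640x640 input matching the export size above.

```python
# Quick shape check of an exported YOLOR ONNX file with onnxruntime.
# Assumptions: pip install onnxruntime; the file name below is an example.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolor-p6-paper-541-640-640.onnx",
                            providers=["CPUExecutionProvider"])

inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Feed a dummy image-shaped tensor to confirm the graph runs end to end.
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
for out_meta, out in zip(sess.get_outputs(), outputs):
    print("output:", out_meta.name, out.shape)
```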
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
## Download Pre-trained ONNX Model
|
||||
|
||||
为了方便开发者的测试,下面提供了YOLOR导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 | 备注 |
|
||||
For developers' testing, models exported by YOLOR are provided below. Developers can download them directly. (The accuracy in the following table is derived from the source official repository)
|
||||
| Model | Size | Accuracy | Note |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [YOLOR-P6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-1280-1280.onnx) | 143MB | 54.1% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-W6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-w6-paper-555-1280-1280.onnx) | 305MB | 55.5% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-E6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-e6-paper-564-1280-1280.onnx ) | 443MB | 56.4% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-1280-1280.onnx) | 580MB | 57.0% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-1280-1280.onnx) | 580MB | 57.3% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-P6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-640-640.onnx) | 143MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-W6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-w6-paper-555-640-640.onnx) | 305MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-E6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-e6-paper-564-640-640.onnx ) | 443MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-640-640.onnx) | 580MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-640-640.onnx) | 580MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-P6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-1280-1280.onnx) | 143MB | 54.1% | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-W6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-w6-paper-555-1280-1280.onnx) | 305MB | 55.5% | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-E6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-e6-paper-564-1280-1280.onnx ) | 443MB | 56.4% | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-1280-1280.onnx) | 580MB | 57.0% | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-1280-1280.onnx) | 580MB | 57.3% | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-P6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-640-640.onnx) | 143MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-W6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-w6-paper-555-640-640.onnx) | 305MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-E6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-e6-paper-564-640-640.onnx ) | 443MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-640-640.onnx) | 580MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-640-640.onnx) | 580MB | - | This model file is sourced from [YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
## Detailed Deployment Documents
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
- [Python Deployment](python)
|
||||
- [C++ Deployment](cpp)
|
||||
|
||||
## 版本说明
|
||||
## Release Note
|
||||
|
||||
- 本版本文档和代码基于[YOLOR weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) 编写
|
||||
- Document and code are based on [YOLOR weights](https://github.com/WongKinYiu/yolor/releases/tag/weights)
|
||||
|
examples/vision/detection/yolor/README_CN.md (new file, 48 lines)
@@ -0,0 +1,48 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOR准备部署模型
|
||||
|
||||
- YOLOR部署实现来自[YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights)的代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/yolor/releases/tag/weights)。
|
||||
|
||||
- (1)[官方库](https://github.com/WongKinYiu/yolor/releases/tag/weights)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署,*.pose模型不支持部署;
|
||||
- (2)自己数据训练的YOLOR模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
|
||||
|
||||
访问[YOLOR](https://github.com/WongKinYiu/yolor)官方github库,按照指引下载安装,下载`yolor.pt` 模型,利用 `models/export.py` 得到`onnx`格式文件。如果您导出的`onnx`模型出现精度不达标或者是数据维度的问题,可以参考[yolor#32](https://github.com/WongKinYiu/yolor/issues/32)的解决办法
|
||||
|
||||
```bash
|
||||
#下载yolor模型文件
|
||||
wget https://github.com/WongKinYiu/yolor/releases/download/weights/yolor-d6-paper-570.pt
|
||||
|
||||
# 导出onnx格式文件
|
||||
python models/export.py --weights PATH/TO/yolor-xx-xx-xx.pt --img-size 640
|
||||
```
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
|
||||
为了方便开发者的测试,下面提供了YOLOR导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 | 备注 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [YOLOR-P6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-1280-1280.onnx) | 143MB | 54.1% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-W6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-w6-paper-555-1280-1280.onnx) | 305MB | 55.5% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-E6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-e6-paper-564-1280-1280.onnx ) | 443MB | 56.4% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-1280-1280.onnx) | 580MB | 57.0% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6-1280](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-1280-1280.onnx) | 580MB | 57.3% | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-P6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-640-640.onnx) | 143MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-W6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-w6-paper-555-640-640.onnx) | 305MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-E6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-e6-paper-564-640-640.onnx ) | 443MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-570-640-640.onnx) | 580MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
| [YOLOR-D6](https://bj.bcebos.com/paddlehub/fastdeploy/yolor-d6-paper-573-640-640.onnx) | 580MB | - | 此模型文件来源于[YOLOR](https://github.com/WongKinYiu/yolor),GPL-3.0 License |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
|
||||
## 版本说明
|
||||
|
||||
- 本版本文档和代码基于[YOLOR weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) 编写
|
||||
|
@@ -1,46 +1,47 @@
|
||||
# YOLOR C++部署示例
|
||||
English | [简体中文](README_CN.md)
|
||||
# YOLOR C++ Deployment Example
|
||||
|
||||
本目录下提供`infer.cc`快速完成YOLOR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
This directory provides an example in which `infer.cc` quickly completes the deployment of YOLOR on CPU/GPU, as well as on GPU with TensorRT acceleration.
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
Before deployment, two steps require confirmation
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
# Download the FastDeploy precompiled library. Users can choose your appropriate version in the `FastDeploy Precompiled Library` mentioned above
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的YOLOR模型文件和测试图片
|
||||
# Download the official converted YOLOR model files and test images
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-640-640.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
# CPU inference
|
||||
./infer_demo yolor-p6-paper-541-640-640.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
# GPU inference
|
||||
./infer_demo yolor-p6-paper-541-640-640.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
# TensorRT inference on GPU
|
||||
./infer_demo yolor-p6-paper-541-640-640.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
The visualized result after running is as follows
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301926-fa3711bf-5984-4e61-9c98-7fdeacb622e9.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## YOLOR C++接口
|
||||
## YOLOR C++ Interface
|
||||
|
||||
### YOLOR类
|
||||
### YOLOR Class
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::YOLOR(
|
||||
@@ -50,16 +51,16 @@ fastdeploy::vision::detection::YOLOR(
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
YOLOR模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
YOLOR model loading and initialization, among which model_file is the exported ONNX model format
|
||||
|
||||
**参数**
|
||||
**Parameter**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
> * **model_file**(str): Model file path
|
||||
> * **params_file**(str): Parameter file path. Merely passing an empty string when the model is in ONNX format
|
||||
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
|
||||
> * **model_format**(ModelFormat): Model format. ONNX format by default
|
||||
|
||||
#### Predict函数
|
||||
#### Predict Function
|
||||
|
||||
> ```c++
|
||||
> YOLOR::Predict(cv::Mat* im, DetectionResult* result,
|
||||
@@ -67,26 +68,26 @@ YOLOR模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
> float nms_iou_threshold = 0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
> Model prediction interface. Input images and output detection results.
|
||||
>
|
||||
> **参数**
|
||||
> **Parameter**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
> > * **im**: Input images in HWC or BGR format
|
||||
> > * **result**: Detection results, including detection box and confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
|
||||
> > * **conf_threshold**: Filtering threshold of detection box confidence
|
||||
> > * **nms_iou_threshold**: iou threshold during NMS processing
|
||||
|
||||
### 类成员变量
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
### Class Member Variable
|
||||
#### Pre-processing Parameter
|
||||
Users can modify the following pre-processing parameters to their needs, which affects the final inference and deployment results
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(vector<float>): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=ture` 表示不使用填充的方式,默认值为`is_no_pad=false`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=false`
|
||||
> > * **stride**(int): 配合`stris_mini_pad`成员变量使用, 默认值为`stride=32`
|
||||
> > * **size**(vector<int>): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [640, 640]
|
||||
> > * **padding_value**(vector<float>): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [114, 114, 114]
|
||||
> > * **is_no_pad**(bool): Specify whether to resize the image through padding. `is_no_pad=ture` represents no paddling. Default `is_no_pad=false`
|
||||
> > * **is_mini_pad**(bool): This parameter sets the width and height of the image after resize to the value nearest to the `size` member variable and to the point where the padded pixel size is divisible by the `stride` member variable. Default `is_mini_pad=false`
|
||||
> > * **stride**(int): Used with the `stris_mini_pad` member variable. Default `stride=32`
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
- [Model Description](../../)
|
||||
- [Python Deployment](../python)
|
||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
||||
|

`examples/vision/detection/yolor/cpp/README_CN.md` (new file, 93 lines): the Simplified Chinese version of the YOLOR C++ deployment example above; its commands, interface description, and links mirror the English README.

English | [简体中文](README_CN.md)

# YOLOR Python Deployment Example

Two steps before deployment:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides `infer.py` to quickly finish the deployment of YOLOR on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following script to finish the deployment:

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolor/python/

# Download the YOLOR model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolor-p6-paper-541-640-640.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
python infer.py --model yolor-p6-paper-541-640-640.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolor-p6-paper-541-640-640.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolor-p6-paper-541-640-640.onnx --image 000000014439.jpg --device gpu --use_trt True
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184301926-fa3711bf-5984-4e61-9c98-7fdeacb622e9.jpg">

## YOLOR Python Interface

```python
fastdeploy.vision.detection.YOLOR(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```

YOLOR model loading and initialization, where model_file is the exported ONNX model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set it when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX by default

### predict function

> ```python
> YOLOR.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface. It takes an input image and directly outputs the detection results.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data. Note that it must be in HWC, BGR format
> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Return**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.

### Class Member Properties

#### Pre-processing Parameters

Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results (a usage sketch follows this list)

> > * **size**(list[int]): The target size used in the resize during preprocessing, containing two integer elements for [width, height]. Default [640, 640]
> > * **padding_value**(list[float]): The padding value applied to images during resize, containing three floating-point elements representing the values of the three channels. Default [114, 114, 114]
> > * **is_no_pad**(bool): Whether to resize the image without padding. `is_no_pad=True` means no padding is used. Default `is_no_pad=False`
> > * **is_mini_pad**(bool): Whether to set the width and height after resize to the values closest to the `size` member variable while keeping the padded pixel size divisible by the `stride` member variable. Default `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
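
Putting the pieces above together, the following is a minimal sketch of the Python flow. It assumes the model file and test image from the bash example have already been downloaded and that the pre-processing parameters are exposed as the properties documented above.

```python
import cv2
import fastdeploy as fd

# Load the exported ONNX model; params_file is not needed for ONNX
model = fd.vision.detection.YOLOR("yolor-p6-paper-541-640-640.onnx")

# Pre-processing parameters are exposed as properties, e.g. the resize target
print(model.size)  # expected to be [640, 640] for this export

im = cv2.imread("000000014439.jpg")  # HWC, BGR image
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)

# Visualize and save the detections
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```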

## Other Documents

- [YOLOR Model Description](..)
- [YOLOR C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)

`examples/vision/detection/yolor/python/README_CN.md` (new file, 82 lines): the Simplified Chinese version of the YOLOR Python deployment example above; its commands, interface description, and links mirror the English README.

English | [简体中文](README_CN.md)

# YOLOv5 Ready-to-deploy Model

- The YOLOv5 v7.0 deployment model is implemented from [YOLOv5](https://github.com/ultralytics/yolov5/tree/v7.0) and the [pre-trained models based on COCO](https://github.com/ultralytics/yolov5/releases/tag/v7.0)
- (1) The *.onnx provided by the [official repository](https://github.com/ultralytics/yolov5/releases/tag/v7.0) can be deployed directly;
- (2) A YOLOv5 v7.0 model trained on your own data can be deployed after exporting an ONNX file with `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5).


## Download Pre-trained ONNX Model

For developers' convenience, the models exported from YOLOv5 are provided below and can be downloaded directly. (The accuracy in the table comes from the official source repository.)

| Model | Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- |:---- |
| [YOLOv5n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5n.onnx) | 7.6MB | 28.0% | This model file is sourced from [YOLOv5](https://github.com/ultralytics/yolov5), GPL-3.0 License |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx) | 28MB | 37.4% | This model file is sourced from [YOLOv5](https://github.com/ultralytics/yolov5), GPL-3.0 License |
| [YOLOv5m](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5m.onnx) | 82MB | 45.4% | This model file is sourced from [YOLOv5](https://github.com/ultralytics/yolov5), GPL-3.0 License |
| [YOLOv5l](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5l.onnx) | 178MB | 49.0% | This model file is sourced from [YOLOv5](https://github.com/ultralytics/yolov5), GPL-3.0 License |
| [YOLOv5x](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5x.onnx) | 332MB | 50.7% | This model file is sourced from [YOLOv5](https://github.com/ultralytics/yolov5), GPL-3.0 License |


## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
- [Serving Deployment](serving)

## Release Note

- Document and code are based on [YOLOv5 v7.0](https://github.com/ultralytics/yolov5/tree/v7.0)

`examples/vision/detection/yolov5/README_CN.md` (new file, 29 lines): the Simplified Chinese version of the YOLOv5 model README above, with the same model table, deployment links, and release note.

English | [简体中文](README_CN.md)

# Deploy YOLOv5 Quantized Models on A311D

FastDeploy now supports deploying [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) quantized models to A311D based on Paddle Lite.

For model quantization and downloads of quantized models, refer to [Model Quantization](../quantize/README.md).

## Detailed Deployment Tutorials

Only C++ deployment is supported on A311D.

- [C++ Deployment](cpp)

`examples/vision/detection/yolov5/a311d/README_CN.md` (new file, 9 lines): the Simplified Chinese version of the A311D deployment README above.

English | [简体中文](README_CN.md)

# YOLOv5 C++ Deployment Example

This directory provides `infer.cc` to quickly finish the deployment of YOLOv5 on CPU/GPU, as well as GPU deployment accelerated by TensorRT.

Two steps before deployment:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to compile and test. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the officially converted yolov5 Paddle model files and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
tar -xvf yolov5s_infer.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# CPU inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 0
# GPU inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 1
# TensorRT inference on GPU
./infer_paddle_demo yolov5s_infer 000000014439.jpg 2
# KunlunXin XPU inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 3
# Huawei Ascend inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 4
```

The above commands run inference with the Paddle model. To run inference with the ONNX model instead, follow these steps:

```bash
# 1. Download the officially converted yolov5 ONNX model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov5s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov5s.onnx 000000014439.jpg 2
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

The above commands only apply to Linux or MacOS. For SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on Huawei Ascend NPU, refer to the following document to initialize the deployment environment first:
- [How to deploy on Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## YOLOv5 C++ Interface

### YOLOv5 Class

```c++
fastdeploy::vision::detection::YOLOv5(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

YOLOv5 model loading and initialization, where model_file is the exported ONNX model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX by default

#### Predict Function

> ```c++
> YOLOv5::Predict(cv::Mat* im, DetectionResult* result,
>                 float conf_threshold = 0.25,
>                 float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface. It takes an input image and directly outputs the detection results.
>
> **Parameters**
>
> > * **im**: Input image. Note that it must be in HWC, BGR format
> > * **result**: Detection results, including detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for DetectionResult
> > * **conf_threshold**: Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS

### Class Member Variables

#### Pre-processing Parameters

Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results

> > * **size**(vector<int>): The target size used in the resize during preprocessing, containing two integer elements for [width, height]. Default [640, 640]
> > * **padding_value**(vector<float>): The padding value applied to images during resize, containing three floating-point elements representing the values of the three channels. Default [114, 114, 114]
> > * **is_no_pad**(bool): Whether to resize the image without padding. `is_no_pad=true` means no padding is used. Default `is_no_pad=false`
> > * **is_mini_pad**(bool): Whether to set the width and height after resize to the values closest to the `size` member variable while keeping the padded pixel size divisible by the `stride` member variable. Default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`

- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)

`examples/vision/detection/yolov5/cpp/README_CN.md` (new file, 113 lines): the Simplified Chinese version of the YOLOv5 C++ deployment example above; its commands, interface description, and links mirror the English README.

English | [简体中文](README_CN.md)

# YOLOv5 Python Deployment Example

Two steps before deployment:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides `infer.py` to quickly finish the deployment of YOLOv5 on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following script to finish the deployment:

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5/python/

# Download the yolov5 model files and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
tar -xf yolov5s_infer.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device ascend
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

## YOLOv5 Python Interface

```python
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```

YOLOv5 model loading and initialization, where model_file is the exported ONNX model.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set it when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX by default

### predict function

> ```python
> YOLOv5.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface. It takes an input image and directly outputs the detection results.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data. Note that it must be in HWC, BGR format
> > * **conf_threshold**(float): Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Return**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.

### Class Member Properties

#### Pre-processing Parameters

Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results (a usage sketch follows this list)

> > * **size**(list[int]): The target size used in the resize during preprocessing, containing two integer elements for [width, height]. Default [640, 640]
> > * **padding_value**(list[float]): The padding value applied to images during resize, containing three floating-point elements representing the values of the three channels. Default [114, 114, 114]
> > * **is_no_pad**(bool): Whether to resize the image without padding. `is_no_pad=True` means no padding is used. Default `is_no_pad=False`
> > * **is_mini_pad**(bool): Whether to set the width and height after resize to the values closest to the `size` member variable while keeping the padded pixel size divisible by the `stride` member variable. Default `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
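
As a complement to `infer.py`, the sketch below shows how the constructor arguments documented above are typically combined to load the Paddle-format `yolov5s_infer` model used earlier and to read fields from the returned `DetectionResult`. The file names inside the extracted archive (`model.pdmodel`, `model.pdiparams`) are an assumption and should be checked against the actual directory.

```python
import os

import cv2
import fastdeploy as fd

# Assumed layout of the extracted yolov5s_infer archive
model_dir = "yolov5s_infer"
model_file = os.path.join(model_dir, "model.pdmodel")
params_file = os.path.join(model_dir, "model.pdiparams")

# Run on GPU; the default backend is used unless configured otherwise
option = fd.RuntimeOption()
option.use_gpu()

model = fd.vision.detection.YOLOv5(
    model_file,
    params_file,
    runtime_option=option,
    model_format=fd.ModelFormat.PADDLE)

im = cv2.imread("000000014439.jpg")
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)

# DetectionResult exposes parallel lists of boxes, scores and label ids
for box, score, label in zip(result.boxes, result.scores, result.label_ids):
    print(label, score, box)  # box is [xmin, ymin, xmax, ymax]
```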

## Other Documents

- [YOLOv5 Model Description](..)
- [YOLOv5 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)

`examples/vision/detection/yolov5/python/README_CN.md` (new file, 87 lines): the Simplified Chinese version of the YOLOv5 Python deployment example above; its commands, interface description, and links mirror the English README.

English | [简体中文](README_CN.md)

# YOLOv5 Quantized Model Deployment

FastDeploy supports the deployment of quantized models and provides a one-click model quantization tool.
Users can use the one-click model quantization tool to quantize and deploy models themselves, or directly download the quantized models provided by FastDeploy for deployment.

## FastDeploy One-Click Model Quantization Tool

FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
For a detailed tutorial, please refer to: [One-Click Model Quantization Tool](../../../../../tools/common_tools/auto_compression/)

## Download Quantized YOLOv5s Model

Users can also directly download the quantized models in the table below for deployment. (Click the model name to download.)

| Model | Inference Backend | Hardware | FP32 Latency (ms) | INT8 Latency (ms) | Speedup | FP32 mAP | INT8 mAP | Method |
| ----------------------------------------------------------------------- | ----------------- | -------- | ------------------------- | -------------------------- | ------------------ | -------- | -------- | ------------------------------- |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | TensorRT | GPU | 8.79 | 5.17 | 1.70 | 37.6 | 36.6 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | Paddle Inference | CPU | 217.05 | 133.31 | 1.63 | 37.6 | 36.8 | Quantized distillation training |

The data in the table above shows the end-to-end inference performance of FastDeploy deployment before and after model quantization; a minimal loading sketch follows these notes.

- The test images are from COCO val2017.
- The latencies are inference latencies measured on the listed runtimes, in milliseconds.
- The CPU is Intel(R) Xeon(R) Gold 6271C and the GPU is Tesla T4, with TensorRT version 8.4.15; the number of CPU threads is fixed to 1 in all tests.
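
To verify that a downloaded quantized model loads and runs, the Python API from the python/ directory can be used. The sketch below is only an illustration: the file names inside `yolov5s_quant.tar` and the choice of the TensorRT backend are assumptions and may need adjusting for your environment.

```python
import os

import cv2
import fastdeploy as fd

# Assumed layout after extracting yolov5s_quant.tar; check the actual file names
model_dir = "yolov5s_quant"
model_file = os.path.join(model_dir, "model.pdmodel")
params_file = os.path.join(model_dir, "model.pdiparams")

# TensorRT backend on GPU; the INT8 ops come from the quantized model itself
option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()

model = fd.vision.detection.YOLOv5(
    model_file,
    params_file,
    runtime_option=option,
    model_format=fd.ModelFormat.PADDLE)

im = cv2.imread("000000014439.jpg")
print(model.predict(im))
```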

## More Detailed Tutorials

- [Python Deployment](python)
- [C++ Deployment](cpp)

`examples/vision/detection/yolov5/quantize/README_CN.md` (new file, 45 lines) is the Simplified Chinese version of the quantized-model README above. Besides the content already shown in English, it documents the detailed Runtime and end-to-end benchmarks, translated here:

Notes on the benchmark tables:

- Runtime latency is the model's inference latency on each runtime, including the CPU->GPU data copy, GPU inference, and GPU->CPU data copy, but excluding the model's pre- and post-processing.
- End-to-end latency is the latency of the model in an actual inference scenario, including pre- and post-processing.
- All latencies are averages over 1000 inference runs, in milliseconds.
- INT8 + FP16 means the FP16 inference option is enabled in the runtime while running the INT8 quantized model.
- INT8 + FP16 + PM additionally enables pinned memory, which speeds up the GPU->CPU data copy.
- The maximum speedup ratio is the FP32 latency divided by the fastest INT8 latency.
- With quantized distillation training, the quantized model is trained on a small amount of unlabeled data and the accuracy is verified on the full validation set, so the INT8 accuracy does not represent the best achievable INT8 accuracy.
- The CPU is Intel(R) Xeon(R) Gold 6271C with the CPU thread count fixed to 1 in all tests; the GPU is Tesla T4 with TensorRT 8.4.15.

#### Runtime Benchmark
| Model | Inference Backend | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8 + FP16 + PM Runtime Latency | Max Speedup | FP32 mAP | INT8 mAP | Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | TensorRT | GPU | 7.87 | 4.51 | 4.31 | 3.17 | 2.48 | 37.6 | 36.7 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | Paddle-TensorRT | GPU | 7.99 | None | 4.46 | 3.31 | 2.41 | 37.6 | 36.8 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | ONNX Runtime | CPU | 176.41 | 91.90 | None | None | 1.90 | 37.6 | 33.1 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | Paddle Inference | CPU | 213.73 | 130.19 | None | None | 1.64 | 37.6 | 35.2 | Quantized distillation training |

#### End-to-End Benchmark
| Model | Inference Backend | Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8 + FP16 + PM End2End Latency | Max Speedup | FP32 mAP | INT8 mAP | Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | TensorRT | GPU | 24.61 | 21.20 | 20.78 | 20.94 | 1.18 | 37.6 | 36.7 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | Paddle-TensorRT | GPU | 23.53 | None | 21.98 | 19.84 | 1.28 | 37.6 | 36.8 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | ONNX Runtime | CPU | 197.32 | 110.99 | None | None | 1.78 | 37.6 | 33.1 | Quantized distillation training |
| [YOLOv5s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar) | Paddle Inference | CPU | 235.73 | 144.82 | None | None | 1.63 | 37.6 | 35.2 | Quantized distillation training |
@@ -1,8 +1,12 @@
English | [简体中文](README_CN.md)
# Deploy YOLOv5 Quantized Model on RV1126

FastDeploy currently supports deploying [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) quantized models to RV1126 based on Paddle Lite.

For model quantization and downloads of quantized models, refer to [Model Quantization](../quantize/README.md)

## Detailed Deployment Tutorials

Only C++ deployment is supported on RV1126.

- [C++ Deployment](cpp)
examples/vision/detection/yolov5/rv1126/README_CN.md (new file)
@@ -0,0 +1,9 @@
[English](README.md) | 简体中文
# Deploy YOLOv5 Quantized Model on RV1126

FastDeploy currently supports deploying [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) quantized models to RV1126 based on Paddle Lite.

## Detailed Deployment Tutorials

Only C++ deployment is supported on RV1126.

- [C++ Deployment](cpp)
@@ -1,38 +1,27 @@
English | [简体中文](README_CN.md)
# YOLOv5 Serving Deployment Demo

## Launch Serving

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/serving/

# Download the yolov5 model file
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx

# Save the model under models/infer/1 and rename it model.onnx
mv yolov5s.onnx models/infer/1/model.onnx

# Pull the fastdeploy image; x.y.z is the FastDeploy version, e.g. 1.0.0
docker pull paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10

# Run the container. The container is named fd_serving, and the current directory is mounted as the container's /yolov5_serving directory
nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/yolov5_serving paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash

# Start the service (without setting the CUDA_VISIBLE_DEVICES environment variable, the server has scheduling rights to all GPU cards)
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=models --backend-config=python,shm-default-byte-size=10485760
```
>> **Note**: If "Address already in use" appears, start the service with `--grpc-port` to specify a different port, and change the request port in yolov5_grpc_client.py accordingly.

The following output is printed once serving is launched:

```
......
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
@@ -40,27 +29,30 @@ I0928 04:51:15.785177 206 http_server.cc:2815] Started HTTPService at 0.0.0.0:80
I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
```

## Client Requests

Execute the following commands on the physical machine to send a grpc request and print the result:

```
# Download the test image
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# Install client-side dependencies
python3 -m pip install tritonclient\[all\]

# Send the request
python3 yolov5_grpc_client.py
```

When the request is sent successfully, the detection results are returned in json format and printed out:

```
output_name: detction_result
{'boxes': [[268.48028564453125, 81.05305480957031, 298.69476318359375, 169.43902587890625], [104.73116302490234, 45.66197204589844, 127.58382415771484, 93.44938659667969], [378.9093933105469, 39.75013732910156, 395.6086120605469, 84.24342346191406], [158.552978515625, 80.36149597167969, 199.18576049804688, 168.18191528320312], [414.37530517578125, 90.94805908203125, 506.3218994140625, 280.40521240234375], [364.00341796875, 56.608917236328125, 381.97857666015625, 115.96823120117188], [351.7251281738281, 42.635345458984375, 366.9103088378906, 98.04837036132812], [505.8882751464844, 114.36674499511719, 593.1248779296875, 275.99530029296875], [327.7086181640625, 38.36369323730469, 346.84991455078125, 80.89302062988281], [583.493408203125, 114.53289794921875, 612.3546142578125, 175.87353515625], [186.4706573486328, 44.941375732421875, 199.6645050048828, 61.037628173828125], [169.6158905029297, 48.01460266113281, 178.1415557861328, 60.88859558105469], [25.81019401550293, 117.19969177246094, 59.88878631591797, 152.85012817382812], [352.1452941894531, 46.71272277832031, 381.9460754394531, 106.75212097167969], [1.875, 150.734375, 37.96875, 173.78125], [464.65728759765625, 15.901412963867188, 472.512939453125, 34.11640930175781], [64.625, 135.171875, 84.5, 154.40625], [57.8125, 151.234375, 103.0, 174.15625], [165.890625, 88.609375, 527.90625, 339.953125], [101.40625, 152.5625, 118.890625, 169.140625]], 'scores': [0.8965693116188049, 0.8695310950279236, 0.8684297800064087, 0.8429877758026123, 0.8358422517776489, 0.8151364326477051, 0.8089362382888794, 0.801361083984375, 0.7947245836257935, 0.7606497406959534, 0.6325908303260803, 0.6139386892318726, 0.5906146764755249, 0.505328893661499, 0.40457233786582947, 0.3460320234298706, 0.33283042907714844, 0.3325657248497009, 0.2594234347343445, 0.25389009714126587], 'label_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 0, 24, 24, 33, 24], 'masks': [], 'contain_masks': False}
```
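The shipped `yolov5_grpc_client.py` handles this request; the snippet below is only a rough sketch of what such a client does with the tritonclient gRPC API. The model name (`yolov5`) and the tensor names (`INPUT`, `DETECTION_RESULT`) are placeholders, not values taken from this example; the real names are defined by the config.pbtxt files under `models/`.

```python
import cv2
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to the gRPC endpoint opened by fastdeployserver (default port 8001)
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Read the test image and add a batch dimension: shape [1, H, W, 3], dtype uint8
im = cv2.imread("000000014439.jpg")
data = np.expand_dims(im, axis=0)

# Input/output tensor names and the model name below are assumptions for illustration
inputs = [grpcclient.InferInput("INPUT", list(data.shape), "UINT8")]
inputs[0].set_data_from_numpy(data)
outputs = [grpcclient.InferRequestedOutput("DETECTION_RESULT")]

response = client.infer(model_name="yolov5", inputs=inputs, outputs=outputs)
print(response.as_numpy("DETECTION_RESULT"))
```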
## Modify Configs

The default configuration runs the ONNXRuntime engine on CPU. To run on GPU or with another inference engine, modify the configuration in `models/runtime/config.pbtxt`; see the [Configs File](../../../../../serving/docs/zh_CN/model_configuration.md) for details.
@@ -1,26 +1,39 @@
[English](README.md) | 简体中文
# YOLOv5 Serving Deployment Example

Before serving deployment, confirm:

- 1. For the hardware/software requirements of the serving image and the image pull commands, refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md)

## Launch Serving

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/serving/

# Download the yolov5 model file
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx

# Put the model under the models/runtime/1 directory and rename it model.onnx
mv yolov5s.onnx models/runtime/1/model.onnx

# Pull the fastdeploy image (x.y.z is the image version number; replace it with a number according to the serving docs)
# GPU image
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
# CPU image
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-cpu-only-21.10

# Run the container. The container is named fd_serving, and the current directory is mounted as the container's /yolov5_serving directory
nvidia-docker run -it --net=host --name fd_serving -v `pwd`/:/yolov5_serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash

# Start the service (without setting the CUDA_VISIBLE_DEVICES environment variable, the server has scheduling rights to all GPU cards)
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/yolov5_serving/models --backend-config=python,shm-default-byte-size=10485760
```
>> **Note**: If "Address already in use" appears, start the service with `--grpc-port` to specify a different port, and change the request port in yolov5_grpc_client.py accordingly.

After the service starts successfully, the following output appears:

```
......
I0928 04:51:15.784517 206 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
@@ -28,30 +41,27 @@ I0928 04:51:15.785177 206 http_server.cc:2815] Started HTTPService at 0.0.0.0:80
I0928 04:51:15.826578 206 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
```

## Client Requests

Execute the following commands on the physical machine to send a grpc request and print the result:

```
# Download the test image
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# Install client-side dependencies
python3 -m pip install tritonclient[all]

# Send the request
python3 yolov5_grpc_client.py
```

When the request is sent successfully, the detection results are returned in json format and printed out:

```
output_name: detction_result
{'boxes': [[268.48028564453125, 81.05305480957031, 298.69476318359375, 169.43902587890625], [104.73116302490234, 45.66197204589844, 127.58382415771484, 93.44938659667969], [378.9093933105469, 39.75013732910156, 395.6086120605469, 84.24342346191406], [158.552978515625, 80.36149597167969, 199.18576049804688, 168.18191528320312], [414.37530517578125, 90.94805908203125, 506.3218994140625, 280.40521240234375], [364.00341796875, 56.608917236328125, 381.97857666015625, 115.96823120117188], [351.7251281738281, 42.635345458984375, 366.9103088378906, 98.04837036132812], [505.8882751464844, 114.36674499511719, 593.1248779296875, 275.99530029296875], [327.7086181640625, 38.36369323730469, 346.84991455078125, 80.89302062988281], [583.493408203125, 114.53289794921875, 612.3546142578125, 175.87353515625], [186.4706573486328, 44.941375732421875, 199.6645050048828, 61.037628173828125], [169.6158905029297, 48.01460266113281, 178.1415557861328, 60.88859558105469], [25.81019401550293, 117.19969177246094, 59.88878631591797, 152.85012817382812], [352.1452941894531, 46.71272277832031, 381.9460754394531, 106.75212097167969], [1.875, 150.734375, 37.96875, 173.78125], [464.65728759765625, 15.901412963867188, 472.512939453125, 34.11640930175781], [64.625, 135.171875, 84.5, 154.40625], [57.8125, 151.234375, 103.0, 174.15625], [165.890625, 88.609375, 527.90625, 339.953125], [101.40625, 152.5625, 118.890625, 169.140625]], 'scores': [0.8965693116188049, 0.8695310950279236, 0.8684297800064087, 0.8429877758026123, 0.8358422517776489, 0.8151364326477051, 0.8089362382888794, 0.801361083984375, 0.7947245836257935, 0.7606497406959534, 0.6325908303260803, 0.6139386892318726, 0.5906146764755249, 0.505328893661499, 0.40457233786582947, 0.3460320234298706, 0.33283042907714844, 0.3325657248497009, 0.2594234347343445, 0.25389009714126587], 'label_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 0, 24, 24, 33, 24], 'masks': [], 'contain_masks': False}
```

## Modify Configs

The current default configuration runs the ONNXRuntime engine on CPU. To run on GPU or with another inference engine, modify the configuration in `models/runtime/config.pbtxt`; see the [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for details.
@@ -1,71 +1,72 @@
English | [简体中文](README_CN.md)
# YOLOv5Lite Ready-to-Deploy Model

- The YOLOv5Lite deployment is based on the code of [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) and its [pre-trained models on COCO](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4).

- (1) The *.pt files provided by the [official repository](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) can be deployed after the [Export the ONNX Model](#导出ONNX模型) step;
- (2) YOLOv5Lite models trained on your own data should follow [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B) and then the [Detailed Deployment Documents](#详细部署文档) to complete the deployment.

## Export the ONNX Model

- Auto-acquisition
  Visit the official [YOLOv5Lite](https://github.com/ppogg/YOLOv5-Lite) GitHub repository, follow the guidelines to download the `yolov5-lite-xx.onnx` model (Tip: the officially provided ONNX files currently do not contain the decode module)
```bash
# Download the yolov5-lite model file (.onnx)
Download from https://drive.google.com/file/d/1bJByk9eoS6pv8Z3N4bcLRCV3i7uk24aU/view
# The official repo also supports downloading from Baidu Cloud
```

- Manual acquisition

  Visit the official [YOLOv5Lite](https://github.com/ppogg/YOLOv5-Lite) GitHub repository, follow the guidelines to download the `yolov5-lite-xx.pt` model, and use `export.py` to obtain a file in `onnx` format.

- Export an ONNX file that contains the decode module

  First modify the code following the solution in [YOLOv5-Lite#189](https://github.com/ppogg/YOLOv5-Lite/pull/189).

```bash
# Download the yolov5-lite model file (.pt)
Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
# The official repo also supports downloading from Baidu Cloud

# Export a file in onnx format
python export.py --grid --dynamic --concat --weights PATH/TO/yolov5-lite-xx.pt
```
- Export an ONNX file without the decode module (no code changes required)

```bash
# Download the yolov5-lite model file
Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
# The official repo also supports downloading from Baidu Cloud

# Export a file in onnx format
python export.py --grid --dynamic --weights PATH/TO/yolov5-lite-xx.pt
```

## Download Pre-trained ONNX Models

For developers' testing, the models exported by YOLOv5Lite are provided below and can be downloaded and used directly. (The accuracy in the following table is taken from the official source repository)
| Model | Size | Accuracy | Note |
|:--- |:--- |:--- |:--- |
| [YOLOv5Lite-e](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-e-sim-320.onnx) | 3.1MB | 35.1% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |
| [YOLOv5Lite-s](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-s-sim-416.onnx) | 6.3MB | 42.0% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |
| [YOLOv5Lite-c](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-c-sim-512.onnx) | 18MB | 50.9% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |
| [YOLOv5Lite-g](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx) | 21MB | 57.6% | This model file is sourced from [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- This document and code are based on [YOLOv5-Lite v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)
examples/vision/detection/yolov5lite/README_CN.md (new file)
@@ -0,0 +1,72 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOv5Lite准备部署模型
|
||||
|
||||
- YOLOv5Lite部署实现来自[YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)
|
||||
代码,和[基于COCO的预训练模型](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)。
|
||||
|
||||
- (1)[官方库](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
|
||||
- (2)自己数据训练的YOLOv5Lite模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。
|
||||
|
||||
|
||||
## 导出ONNX模型
|
||||
|
||||
- 自动获取
|
||||
访问[YOLOv5Lite](https://github.com/ppogg/YOLOv5-Lite)
|
||||
官方github库,按照指引下载安装,下载`yolov5-lite-xx.onnx` 模型(Tips:官方提供的ONNX文件目前是没有decode模块的)
|
||||
```bash
|
||||
#下载yolov5-lite模型文件(.onnx)
|
||||
Download from https://drive.google.com/file/d/1bJByk9eoS6pv8Z3N4bcLRCV3i7uk24aU/view
|
||||
官方Repo也支持百度云下载
|
||||
```
|
||||
|
||||
- 手动获取
|
||||
|
||||
访问[YOLOv5Lite](https://github.com/ppogg/YOLOv5-Lite)
|
||||
官方github库,按照指引下载安装,下载`yolov5-lite-xx.pt` 模型,利用 `export.py` 得到`onnx`格式文件。
|
||||
|
||||
- 导出含有decode模块的ONNX文件
|
||||
|
||||
首先需要参考[YOLOv5-Lite#189](https://github.com/ppogg/YOLOv5-Lite/pull/189)的解决办法,修改代码。
|
||||
|
||||
```bash
|
||||
#下载yolov5-lite模型文件(.pt)
|
||||
Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
|
||||
官方Repo也支持百度云下载
|
||||
|
||||
# 导出onnx格式文件
|
||||
python export.py --grid --dynamic --concat --weights PATH/TO/yolov5-lite-xx.pt
|
||||
|
||||
|
||||
```
|
||||
- 导出无decode模块的ONNX文件(不需要修改代码)
|
||||
|
||||
```bash
|
||||
#下载yolov5-lite模型文件
|
||||
Download from https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view
|
||||
官方Repo也支持百度云下载
|
||||
|
||||
# 导出onnx格式文件
|
||||
python export.py --grid --dynamic --weights PATH/TO/yolov5-lite-xx.pt
|
||||
|
||||
```
|
||||
|
||||
## 下载预训练ONNX模型
|
||||
|
||||
为了方便开发者的测试,下面提供了YOLOv5Lite导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
|
||||
| 模型 | 大小 | 精度 | 备注 |
|
||||
|:---------------------------------------------------------------- |:----- |:----- |:----- |
|
||||
| [YOLOv5Lite-e](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-e-sim-320.onnx) | 3.1MB | 35.1% | 此模型文件来源于[YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite),GPL-3.0 License |
|
||||
| [YOLOv5Lite-s](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-s-sim-416.onnx) | 6.3MB | 42.0% | 此模型文件来源于[YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite),GPL-3.0 License |
|
||||
| [YOLOv5Lite-c](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-c-sim-512.onnx) | 18MB | 50.9% | 此模型文件来源于[YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite),GPL-3.0 License |
|
||||
| [YOLOv5Lite-g](https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx) | 21MB | 57.6% | 此模型文件来源于[YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite),GPL-3.0 License |
|
||||
|
||||
|
||||
## 详细部署文档
|
||||
|
||||
- [Python部署](python)
|
||||
- [C++部署](cpp)
|
||||
|
||||
|
||||
## 版本说明
|
||||
|
||||
- 本版本文档和代码基于[YOLOv5-Lite v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) 编写
|
@@ -1,46 +1,47 @@
English | [简体中文](README_CN.md)
# YOLOv5Lite C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes the deployment of YOLOv5Lite on CPU/GPU, and on GPU accelerated by TensorRT.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment; refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library; users can choose an appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the officially converted YOLOv5Lite model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
./infer_demo v5Lite-g-sim-640.onnx 000000014439.jpg 0
# GPU inference
./infer_demo v5Lite-g-sim-640.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo v5Lite-g-sim-640.onnx 000000014439.jpg 2
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184301943-263c8153-a52a-4533-a7c1-ee86d05d314b.jpg">

The above commands only apply to Linux or MacOS. For the SDK usage on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## YOLOv5Lite C++ Interface

### YOLOv5Lite Class

```c++
fastdeploy::vision::detection::YOLOv5Lite(
@@ -50,16 +51,16 @@ fastdeploy::vision::detection::YOLOv5Lite(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

YOLOv5Lite model loading and initialization, where model_file is the exported ONNX model format.

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default

#### Predict Function

> ```c++
> YOLOv5Lite::Predict(cv::Mat* im, DetectionResult* result,
@@ -67,26 +68,26 @@ YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格
>                     float conf_threshold = 0.25,
>                     float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface. Input an image and output the detection result directly.
>
> **Parameters**
>
> > * **im**: Input image. Note that it must be in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of DetectionResult
> > * **conf_threshold**: Filtering threshold for detection box confidence
> > * **nms_iou_threshold**: IoU threshold used during NMS
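To show how the class and Predict fit together, here is a minimal sketch of an infer.cc-style CPU program. It is an illustration rather than the example shipped in this directory, and assumes the model and image files downloaded above sit in the working directory.

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Load the exported ONNX model (CPU, default runtime option)
  auto model = fastdeploy::vision::detection::YOLOv5Lite("v5Lite-g-sim-640.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize model." << std::endl;
    return -1;
  }

  // Read the test image (HWC, BGR) and run prediction
  cv::Mat im = cv::imread("000000014439.jpg");
  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }

  // Print and visualize the detection result
  std::cout << res.Str() << std::endl;
  cv::Mat vis = fastdeploy::vision::VisDetection(im, res);
  cv::imwrite("vis_result.jpg", vis);
  return 0;
}
```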
### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(vector<int>): This parameter changes the target size used during the resize in preprocessing, containing two integer elements for [width, height]. Default value is [640, 640]
> > * **padding_value**(vector<float>): This parameter changes the padding value used when the image is resized, containing three floating-point elements that represent the values of the three channels. Default value is [114, 114, 114]
> > * **is_no_pad**(bool): Specifies whether the image is resized without padding. `is_no_pad=true` means no padding is used. Default value is `is_no_pad=false`
> > * **is_mini_pad**(bool): Sets the width and height after resize to the values closest to the `size` member variable such that the number of padded pixels is divisible by the `stride` member variable. Default value is `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default value is `stride=32`
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/detection/yolov5lite/cpp/README_CN.md (new file)
@@ -0,0 +1,93 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOv5Lite C++部署示例
|
||||
|
||||
本目录下提供`infer.cc`快速完成YOLOv5Lite在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||
|
||||
```bash
|
||||
mkdir build
|
||||
cd build
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
#下载官方转换好的YOLOv5Lite模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
|
||||
# CPU推理
|
||||
./infer_demo v5Lite-g-sim-640.onnx 000000014439.jpg 0
|
||||
# GPU推理
|
||||
./infer_demo v5Lite-g-sim-640.onnx 000000014439.jpg 1
|
||||
# GPU上TensorRT推理
|
||||
./infer_demo v5Lite-g-sim-640.onnx 000000014439.jpg 2
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301943-263c8153-a52a-4533-a7c1-ee86d05d314b.jpg">
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
## YOLOv5Lite C++接口
|
||||
|
||||
### YOLOv5Lite类
|
||||
|
||||
```c++
|
||||
fastdeploy::vision::detection::YOLOv5Lite(
|
||||
const string& model_file,
|
||||
const string& params_file = "",
|
||||
const RuntimeOption& runtime_option = RuntimeOption(),
|
||||
const ModelFormat& model_format = ModelFormat::ONNX)
|
||||
```
|
||||
|
||||
YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
|
||||
|
||||
#### Predict函数
|
||||
|
||||
> ```c++
|
||||
> YOLOv5Lite::Predict(cv::Mat* im, DetectionResult* result,
|
||||
> float conf_threshold = 0.25,
|
||||
> float nms_iou_threshold = 0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测接口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **im**: 输入图像,注意需为HWC,BGR格式
|
||||
> > * **result**: 检测结果,包括检测框,各个框的置信度, DetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
> > * **conf_threshold**: 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**: NMS处理过程中iou阈值
|
||||
|
||||
### 类成员变量
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(vector<float>): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=ture` 表示不使用填充的方式,默认值为`is_no_pad=false`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=false`
|
||||
> > * **stride**(int): 配合`stris_mini_pad`成员变量使用, 默认值为`stride=32`
|
||||
|
||||
- [模型介绍](../../)
|
||||
- [Python部署](../python)
|
||||
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,81 +1,82 @@
English | [简体中文](README_CN.md)
# YOLOv5Lite Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package; refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides an example in which `infer.py` quickly finishes the deployment of YOLOv5Lite on CPU/GPU, and on GPU accelerated by TensorRT. Run the following script to complete it:

```bash
# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5lite/python/

# Download the YOLOv5Lite model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device gpu --use_trt True
```
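The `--device` and `--use_trt` flags of `infer.py` correspond to a RuntimeOption that is passed to the model when it is created. The snippet below is a minimal sketch of that mapping rather than a copy of infer.py; the RuntimeOption method names are assumed to be available in the installed FastDeploy version, and `images` is assumed to be the ONNX input tensor name.

```python
import fastdeploy as fd

option = fd.RuntimeOption()
# --device gpu
option.use_gpu(0)
# --use_trt True: switch the backend to TensorRT and declare the input shape
option.use_trt_backend()
option.set_trt_input_shape("images", [1, 3, 640, 640])

model = fd.vision.detection.YOLOv5Lite(
    "v5Lite-g-sim-640.onnx", runtime_option=option)
```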
The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184301943-263c8153-a52a-4533-a7c1-ee86d05d314b.jpg">

## YOLOv5Lite Python Interface

```python
fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```

YOLOv5Lite model loading and initialization, where model_file is the exported ONNX model format

**Parameters**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. There is no need to set it when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default

### predict function

> ```python
> YOLOv5Lite.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface. Input an image and output the detection result directly.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input data. Note that it must be in HWC, BGR format
> > * **conf_threshold**(float): Filtering threshold for detection box confidence
> > * **nms_iou_threshold**(float): IoU threshold used during NMS

> **Return**
>
> > Returns a `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
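Putting the interface together, a minimal end-to-end sketch looks like the following. It is an illustration rather than the shipped infer.py, and it assumes `vis_detection` is available under `fastdeploy.vision` in the installed version.

```python
import cv2
import fastdeploy as fd

# Load the exported ONNX model on CPU with the default runtime option
model = fd.vision.detection.YOLOv5Lite("v5Lite-g-sim-640.onnx")

# Read the test image (HWC, BGR) and run prediction
im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)  # DetectionResult: boxes, scores, label_ids, ...

# Draw the detection boxes and save the visualization
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```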
### Class Member Properties
#### Pre-processing Parameters
Users can modify the following pre-processing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(list[int]): This parameter changes the target size used during the resize in preprocessing, containing two integer elements for [width, height]. Default value is [640, 640]
> > * **padding_value**(list[float]): This parameter changes the padding value used when the image is resized, containing three floating-point elements that represent the values of the three channels. Default value is [114, 114, 114]
> > * **is_no_pad**(bool): Specifies whether the image is resized without padding. `is_no_pad=True` means no padding is used. Default value is `is_no_pad=False`
> > * **is_mini_pad**(bool): Sets the width and height after resize to the values closest to the `size` member variable such that the number of padded pixels is divisible by the `stride` member variable. Default value is `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default value is `stride=32`

## Other Documents

- [YOLOv5Lite Model Description](..)
- [YOLOv5Lite C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
examples/vision/detection/yolov5lite/python/README_CN.md (new file)
@@ -0,0 +1,82 @@
|
||||
[English](README.md) | 简体中文
|
||||
# YOLOv5Lite Python部署示例
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
|
||||
本目录下提供`infer.py`快速完成YOLOv5Lite在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
|
||||
|
||||
```bash
|
||||
#下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd examples/vision/detection/yolov5lite/python/
|
||||
|
||||
#下载YOLOv5Lite模型文件和测试图片
|
||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/v5Lite-g-sim-640.onnx
|
||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||
|
||||
# CPU推理
|
||||
python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device cpu
|
||||
# GPU推理
|
||||
python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device gpu
|
||||
# GPU上使用TensorRT推理
|
||||
python infer.py --model v5Lite-g-sim-640.onnx --image 000000014439.jpg --device gpu --use_trt True
|
||||
```
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301943-263c8153-a52a-4533-a7c1-ee86d05d314b.jpg">
|
||||
|
||||
## YOLOv5Lite Python接口
|
||||
|
||||
```python
|
||||
fastdeploy.vision.detection.YOLOv5Lite(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
|
||||
```
|
||||
|
||||
YOLOv5Lite模型加载和初始化,其中model_file为导出的ONNX模型格式
|
||||
|
||||
**参数**
|
||||
|
||||
> * **model_file**(str): 模型文件路径
|
||||
> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
|
||||
> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
|
||||
> * **model_format**(ModelFormat): 模型格式,默认为ONNX
|
||||
|
||||
### predict函数
|
||||
|
||||
> ```python
|
||||
> YOLOv5Lite.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
|
||||
> ```
|
||||
>
|
||||
> 模型预测结口,输入图像直接输出检测结果。
|
||||
>
|
||||
> **参数**
|
||||
>
|
||||
> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
|
||||
> > * **conf_threshold**(float): 检测框置信度过滤阈值
|
||||
> > * **nms_iou_threshold**(float): NMS处理过程中iou阈值
|
||||
|
||||
> **返回**
|
||||
>
|
||||
> > 返回`fastdeploy.vision.DetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
|
||||
|
||||
### 类成员属性
|
||||
#### 预处理参数
|
||||
用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
|
||||
|
||||
> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[640, 640]
|
||||
> > * **padding_value**(list[float]): 通过此参数可以修改图片在resize时候做填充(padding)的值, 包含三个浮点型元素, 分别表示三个通道的值, 默认值为[114, 114, 114]
|
||||
> > * **is_no_pad**(bool): 通过此参数让图片是否通过填充的方式进行resize, `is_no_pad=True` 表示不使用填充的方式,默认值为`is_no_pad=False`
|
||||
> > * **is_mini_pad**(bool): 通过此参数可以将resize之后图像的宽高这是为最接近`size`成员变量的值, 并且满足填充的像素大小是可以被`stride`成员变量整除的。默认值为`is_mini_pad=False`
|
||||
> > * **stride**(int): 配合`stris_mini_padide`成员变量使用, 默认值为`stride=32`
|
||||
|
||||
|
||||
|
||||
## 其它文档
|
||||
|
||||
- [YOLOv5Lite 模型介绍](..)
|
||||
- [YOLOv5Lite C++部署](../cpp)
|
||||
- [模型预测结果说明](../../../../../docs/api/vision_results/)
|
||||
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
|
@@ -1,30 +1,33 @@
English | [简体中文](README_CN.md)

# YOLOv6 Ready-to-Deploy Model

- The YOLOv6 deployment is based on [YOLOv6](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) and its [pre-trained models on COCO](https://github.com/meituan/YOLOv6/releases/tag/0.1.0).

- (1) The *.onnx files provided by the [official repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
- (2) Models trained by developers themselves should be exported to ONNX and then deployed following the [Detailed Deployment Documents](#详细部署文档).

## Download Pre-trained ONNX Models

For developers' testing, the models exported by YOLOv6 are provided below and can be downloaded and used directly. (The accuracy in the following table is taken from the official source repository)
| Model | Size | Accuracy | Note |
|:--- |:--- |:--- |:--- |
| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s.onnx) | 66MB | 43.1% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |
| [YOLOv6s_640](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s-640x640.onnx) | 66MB | 43.1% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |
| [YOLOv6t](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6t.onnx) | 58MB | 41.3% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |
| [YOLOv6n](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6n.onnx) | 17MB | 35.0% | This model file is sourced from [YOLOv6](https://github.com/meituan/YOLOv6), GPL-3.0 License |

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- This document and code are based on [YOLOv6 0.1.0](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)
Some files were not shown because too many files have changed in this diff.