Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-12-24

docs/usage/model.md (new file)
# FastDeploy Models

The following models are currently supported:

- [fastdeploy.vision.ppcls.Model](vision/ppcls.md): all classification models in PaddleClas
- [fastdeploy.vision.ultralytics.YOLOv5](vision/ultralytics.md): the [ultralytics/yolov5](https://github.com/ultralytics/yolov5) model

See each model's API documentation and examples for usage details. Every model runs with a default Runtime configuration; this document explains how to modify a model's backend configuration. The following Python code runs a YOLOv5 model:
```
import fastdeploy as fd
import cv2

model = fd.vision.ultralytics.YOLOv5("yolov5s.onnx")

im = cv2.imread('bus.jpg')
result = model.predict(im)

print(model.runtime_option)
```
`print(model.runtime_option)` prints the following information:

```
RuntimeOption(
  backend : Backend.ORT                # current inference backend is ONNXRuntime
  cpu_thread_num : 8                   # number of CPU threads used for inference (effective only when running on CPU)
  device : Device.GPU                  # current inference device is GPU
  device_id : 0                        # current inference device id is 0
  model_file : yolov5s.onnx            # path to the model file
  model_format : Frontend.ONNX         # model format, ONNX in this case
  ort_execution_mode : -1              # ONNXRuntime backend parameter; -1 means default
  ort_graph_opt_level : -1             # ONNXRuntime backend parameter; -1 means default
  ort_inter_op_num_threads : -1        # ONNXRuntime backend parameter; -1 means default
  params_file :                        # parameters file (ONNX models have none)
  trt_enable_fp16 : False              # TensorRT parameter
  trt_enable_int8 : False              # TensorRT parameter
  trt_fixed_shape : {}                 # TensorRT parameter
  trt_max_batch_size : 32              # TensorRT parameter
  trt_max_shape : {}                   # TensorRT parameter
  trt_max_workspace_size : 1073741824  # TensorRT parameter
  trt_min_shape : {}                   # TensorRT parameter
  trt_opt_shape : {}                   # TensorRT parameter
  trt_serialize_file :                 # TensorRT parameter
)
```
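The `trt_max_workspace_size` value shown above is in bytes; the default of 1073741824 is exactly 1 GiB. A quick arithmetic check:

```python
# trt_max_workspace_size is given in bytes; 1073741824 bytes is 2**30, i.e. 1 GiB.
workspace_bytes = 1073741824
print(workspace_bytes == 1 << 30)      # True
print(workspace_bytes // (1024 ** 2))  # 1024 (MiB)
```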
Note that parameters whose names start with `ort` are specific to the ONNXRuntime backend, while those starting with `trt` are specific to the TensorRT backend. For details on each backend and its parameters, see [RuntimeOption](runtime_option.md).

## Switching the Inference Backend

In general, users only need to care about which Device inference runs on. For finer-grained control, a specific Backend can also be chosen for the Device, but note that the two must be compatible: Backend::TRT only supports Device GPU, while Backend::ORT supports both CPU and GPU.
```
import fastdeploy as fd

option = fd.RuntimeOption()
option.device = fd.Device.CPU
option.cpu_thread_num = 12

model = fd.vision.ultralytics.YOLOv5("yolov5s.onnx", option)
print(model.runtime_option)
```
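The Device/Backend compatibility rule above can be sketched as a small lookup. This is purely illustrative: the table and function below just encode the rule stated in this document and are not part of the FastDeploy API.

```python
# Encodes the compatibility rule: Backend TRT runs only on GPU,
# while Backend ORT runs on both CPU and GPU. Illustrative only.
SUPPORTED_DEVICES = {
    "TRT": {"GPU"},
    "ORT": {"CPU", "GPU"},
}

def is_valid_combination(backend: str, device: str) -> bool:
    """Return True if the given backend supports the given device."""
    return device in SUPPORTED_DEVICES.get(backend, set())

print(is_valid_combination("TRT", "GPU"))  # True
print(is_valid_combination("TRT", "CPU"))  # False
```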
docs/usage/vision/ppcls.md (new file)
# PaddleClas Classification Model Inference

For exporting PaddleClas models, see [PaddleClas](https://github.com/PaddlePaddle/PaddleClas.git).

## Python API

### Model Class
```
fastdeploy.vision.ppcls.Model(model_file, params_file, config_file, runtime_option=None, model_format=fastdeploy.Frontend.PADDLE)
```
**Parameters**

> * **model_file**(str): model file, e.g. resnet50/inference.pdmodel
> * **params_file**(str): parameters file, e.g. resnet50/inference.pdiparams
> * **config_file**(str): configuration file, one of the inference configuration files provided by PaddleClas, e.g. [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/deploy/configs/inference_cls.yaml)
> * **runtime_option**(fd.RuntimeOption): backend inference configuration; defaults to None, which uses the default configuration
> * **model_format**(fd.Frontend): model format; PaddleClas models are always Frontend.PADDLE

#### predict Method
```
Model.predict(image_data, topk=1)
```
> **Parameters**
>
> > * **image_data**(np.ndarray): input data; note it must be in HWC, RGB format
> > * **topk**(int): return the top-k classification results

> **Return Value**
>
> > * **result**(ClassifyResult): a struct with two list members, `label_ids` and `scores`, holding the predicted class ids and the confidence score of each class
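The shape of this top-k result can be illustrated with plain NumPy. This is a sketch of the concept, not the FastDeploy implementation: given a vector of per-class scores, the k highest-scoring class ids and their scores correspond to `label_ids` and `scores`.

```python
import numpy as np

def topk_result(scores: np.ndarray, topk: int = 1):
    """Return (label_ids, scores) for the topk highest-scoring classes,
    mimicking the layout of ClassifyResult described above."""
    ids = np.argsort(scores)[::-1][:topk]  # class ids, best first
    return ids.tolist(), scores[ids].tolist()

probs = np.array([0.05, 0.7, 0.05, 0.2])
label_ids, top_scores = topk_result(probs, topk=2)
print(label_ids, top_scores)  # [1, 3] [0.7, 0.2]
```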
### Example

> ```
> import fastdeploy.vision as vis
> import cv2
>
> model = vis.ppcls.Model("resnet50/inference.pdmodel", "resnet50/inference.pdiparams", "resnet50/inference_cls.yaml")
> im = cv2.imread("test.jpeg")
> result = model.predict(im, topk=5)
> print(result.label_ids[0], result.scores[0])
> ```
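One note on the HWC, RGB requirement: OpenCV's `imread` returns images in HWC layout but with BGR channel order. If a manual conversion to RGB is needed, it is a flip of the last axis, sketched here with NumPy:

```python
import numpy as np

# A 2x2 image in HWC layout with BGR channel order (as cv2.imread returns).
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# Reversing the last axis swaps the B and R channels, yielding RGB
# in the same HWC layout.
rgb = bgr[..., ::-1]

print(rgb[0, 0].tolist())  # [0, 0, 255]: the blue pixel, now in RGB order
```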
## C++ API

Include the header `#include "fastdeploy/vision.h"`.

### Model Class
```
fastdeploy::vision::ppcls::Model(
  const std::string& model_file,
  const std::string& params_file,
  const std::string& config_file,
  const RuntimeOption& custom_option = RuntimeOption(),
  const Frontend& model_format = Frontend::PADDLE)
```
**Parameters**

> * **model_file**: model file, e.g. resnet50/inference.pdmodel
> * **params_file**: parameters file, e.g. resnet50/inference.pdiparams
> * **config_file**: configuration file, one of the inference configuration files provided by PaddleClas, e.g. [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/deploy/configs/inference_cls.yaml)
> * **custom_option**: backend inference configuration; if not set, the default configuration is used
> * **model_format**: model format; PaddleClas models are always Frontend::PADDLE

#### Predict Method
```
bool Model::Predict(cv::Mat* im, ClassifyResult* result, int topk = 1)
```
> **Parameters**
>
> > * **im**: input image data; must be in HWC, RGB format (note: the passed-in im is modified during preprocessing)
> > * **result**: classification result
> > * **topk**: return the top-k classification results

> **Return Value**
>
> > true or false, indicating whether the prediction succeeded
### Example
> ```
> #include <iostream>
> #include "fastdeploy/vision.h"
>
> int main() {
>   namespace vis = fastdeploy::vision;
>   auto model = vis::ppcls::Model("resnet50/inference.pdmodel", "resnet50/inference.pdiparams", "resnet50/inference_cls.yaml");
>
>   if (!model.Initialized()) {
>     std::cerr << "Initialize failed." << std::endl;
>     return -1;
>   }
>
>   cv::Mat im = cv::imread("test.jpeg");
>
>   vis::ClassifyResult res;
>   if (!model.Predict(&im, &res, 5)) {
>     std::cerr << "Prediction failed." << std::endl;
>     return -1;
>   }
>
>   std::cout << res.label_ids[0] << " " << res.scores[0] << std::endl;
>   return 0;
> }
> ```