mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-07 01:22:59 +08:00
Optimize ocr system code (#209)
* Support PPYOLOE plus model
* Optimize ocr system code
* modify example code
* fix patchelf of openvino
* optimize demo code of ocr
* remove debug code
* update demo code of ocr

Co-authored-by: Jack Zhou &lt;zhoushunjie@baidu.com&gt;
This commit is contained in:
18
docs/runtime/README.md
Normal file
@@ -0,0 +1,18 @@
# FastDeploy Inference Backends

FastDeploy currently integrates multiple inference backends. The table below lists each integrated backend, together with the platforms, hardware, and model formats it supports in FastDeploy.

| Inference Backend | Supported Platforms | Supported Hardware | Supported Model Formats |
| :---------------- | :------------------ | :----------------- | :---------------------- |
| Paddle Inference | Windows(x64)/Linux(x64) | GPU/CPU | Paddle |
| ONNX Runtime | Windows(x64)/Linux(x64/aarch64) | GPU/CPU | Paddle/ONNX |
| TensorRT | Windows(x64)/Linux(x64/jetson) | GPU | Paddle/ONNX |
| OpenVINO | Windows(x64)/Linux(x64) | CPU | Paddle/ONNX |
| Poros [in progress] | Linux(x64) | CPU/GPU | TorchScript |

The backends in FastDeploy are independent of each other; when building from source, users can choose to enable one or more of them. The `Runtime` module in FastDeploy provides a unified API across all backends; for how to use it, see the [FastDeploy Runtime usage guide](usage.md).

## Other Documents

- [Building FastDeploy](../compile)
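The compile-time backend selection described above can be sketched as a CMake build; this is a minimal sketch, assuming the `ENABLE_*_BACKEND` switch names from FastDeploy's CMake options, and should be checked against the build guide linked above.

```shell
# Sketch of a source build enabling two backends.
# Flag names are assumptions based on FastDeploy's CMake options;
# verify them against the build documentation.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy && mkdir build && cd build
cmake .. \
  -DENABLE_ORT_BACKEND=ON \
  -DENABLE_OPENVINO_BACKEND=ON \
  -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy
make -j8 && make install
```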
44
docs/runtime/usage.md
Normal file
@@ -0,0 +1,44 @@
# FastDeploy Runtime Usage Guide

`Runtime` is FastDeploy's model inference module. It currently integrates multiple backends, so through one unified API users can quickly run models in different formats on various hardware, platforms, and backends. This document demonstrates inference on different hardware and backends with the examples below.

## CPU Inference

Python example

```python
import fastdeploy as fd
import numpy as np

option = fd.RuntimeOption()
# Set the model paths
option.set_model_path("resnet50/inference.pdmodel", "resnet50/inference.pdiparams")
# Use the OpenVINO backend
option.use_openvino_backend()
# Initialize the runtime
runtime = fd.Runtime(option)
# Get the input name
runtime_input_info = runtime.get_input_info(0)
input_name = runtime_input_info.name
# Build input data and run inference
results = runtime.infer({input_name: np.random.rand(1, 3, 224, 224).astype("float32")})
```
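The example above feeds random data; in practice the input is usually an image converted into a 1x3xHxW float32 batch. A minimal NumPy sketch of that conversion (`to_nchw` is a hypothetical helper, not part of the FastDeploy API, and the scaling to [0, 1] is illustrative):

```python
import numpy as np

def to_nchw(img):
    """Turn an HxWx3 uint8 image array into the 1x3xHxW float32 batch
    the Runtime example feeds. Hypothetical helper, not part of the
    FastDeploy API; scaling to [0, 1] is illustrative only."""
    x = img.astype("float32") / 255.0   # uint8 [0, 255] -> float32 [0, 1]
    x = np.transpose(x, (2, 0, 1))      # HWC -> CHW
    return np.expand_dims(x, 0)         # add batch dimension -> NCHW

# Example: a dummy 224x224 RGB image
batch = to_nchw(np.zeros((224, 224, 3), dtype="uint8"))
```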

## GPU Inference

```python
import fastdeploy as fd
import numpy as np

option = fd.RuntimeOption()
# Set the model paths
option.set_model_path("resnet50/inference.pdmodel", "resnet50/inference.pdiparams")
# Use the GPU, device id 0
option.use_gpu(0)
# Use the Paddle Inference backend
option.use_paddle_backend()
# Initialize the runtime
runtime = fd.Runtime(option)
# Get the input name
input_name = runtime.get_input_info(0).name
# Build input data and run inference
results = runtime.infer({input_name: np.random.rand(1, 3, 224, 224).astype("float32")})
```

For more Python/C++ inference examples, see [FastDeploy/examples/runtime](../../examples/runtime)
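The two examples above differ only in which `use_*_backend()` setter is called on the `RuntimeOption`. A small sketch of dispatching that choice from a string, e.g. a CLI flag; the OpenVINO and Paddle setter names follow the examples above, while `use_ort_backend` and `use_trt_backend` are assumed by analogy:

```python
# Hypothetical helper for switching FastDeploy backends by name.
# Setter names for "openvino"/"paddle" follow the examples above;
# "ort"/"trt" entries are assumptions by analogy.
_BACKEND_SETTERS = {
    "openvino": "use_openvino_backend",
    "paddle":   "use_paddle_backend",
    "ort":      "use_ort_backend",
    "trt":      "use_trt_backend",
}

def select_backend(option, name):
    """Call the matching use_*_backend() setter on a RuntimeOption."""
    if name not in _BACKEND_SETTERS:
        raise ValueError(f"unknown backend: {name!r}")
    getattr(option, _BACKEND_SETTERS[name])()
```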