PaddleOCR High-Performance All-Scenario Model Deployment with FastDeploy

Contents

1. Introduction to FastDeploy

FastDeploy is an easy-to-use, flexible, and highly efficient AI inference deployment toolkit for all scenarios, supporting cloud, edge, and on-device deployment. With FastDeploy, PaddleOCR models can be deployed quickly and easily on 10+ hardware platforms, including X86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, Kunlunxin XPU, Ascend NPU, SOPHGO TPU, and Rockchip NPU, using a range of inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNXRuntime, SOPHGO, and RKNPU2.

2. PaddleOCR Model Deployment

2.1 Supported Hardware

| Hardware | Supported | Guide (Python / C++) |
|:---|:---:|:---:|
| X86 CPU | ✓ | Link |
| NVIDIA GPU | ✓ | Link |
| Phytium CPU | ✓ | Link |
| ARM CPU | ✓ | Link |
| Intel GPU (integrated) | ✓ | Link |
| Intel GPU (discrete) | ✓ | Link |
| Kunlunxin XPU | ✓ | Link |
| Ascend NPU | ✓ | Link |
| SOPHGO TPU | ✓ | Link |
| Rockchip NPU | ✓ | Link |

2.2 Detailed Documentation

2.3 More Deployment Options

3. FAQ

If you run into problems, check the FAQ collection, search the existing FastDeploy issues, or open a new issue in the FastDeploy repository:

- FAQ collection
- FastDeploy issues