[Docs] Pick PPOCR fastdeploy docs from PaddleOCR (#1534)
* Pick PPOCR fastdeploy docs from PaddleOCR
* improve ppocr
* improve readme
* remove old PP-OCRv2 and PP-OCRv3 folders
* rename kunlun to kunlunxin

---------

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
examples/vision/ocr/PP-OCR/README.md (new file, 88 lines)
@@ -0,0 +1,88 @@
# High-Performance All-Scenario PaddleOCR Deployment — FastDeploy

## Contents

- [Introduction to FastDeploy](#FastDeploy介绍)
- [Deploying PaddleOCR Models](#PaddleOCR模型部署)
- [FAQ](#常见问题)

## 1. Introduction to FastDeploy

<div id="FastDeploy介绍"></div>

**[⚡️FastDeploy](https://github.com/PaddlePaddle/FastDeploy)** is an **all-scenario**, **easy-to-use and flexible**, **highly efficient** AI inference deployment tool that covers cloud, edge, and device targets. With FastDeploy, PaddleOCR models can be deployed quickly on 10+ kinds of hardware, including x86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, Kunlunxin, Ascend, Sophgo, and Rockchip, using inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, SOPHGO, and RKNPU2.

<div align="center">
<img src="https://user-images.githubusercontent.com/31974251/224941235-d5ea4ed0-7626-4c62-8bbd-8e4fad1e72ad.png" >
</div>

## 2. Deploying PaddleOCR Models

<div id="PaddleOCR模型部署"></div>

### 2.1 Supported Hardware

|Hardware|Supported|Guide|Python|C++|
|:---:|:---:|:---:|:---:|:---:|
|x86 CPU|✅|[Link](./cpu-gpu)|✅|✅|
|NVIDIA GPU|✅|[Link](./cpu-gpu)|✅|✅|
|Phytium CPU|✅|[Link](./cpu-gpu)|✅|✅|
|ARM CPU|✅|[Link](./cpu-gpu)|✅|✅|
|Intel GPU (integrated)|✅|[Link](./cpu-gpu)|✅|✅|
|Intel GPU (discrete)|✅|[Link](./cpu-gpu)|✅|✅|
|Kunlunxin|✅|[Link](./kunlunxin)|✅|✅|
|Ascend|✅|[Link](./ascend)|✅|✅|
|Sophgo|✅|[Link](./sophgo)|✅|✅|
|Rockchip|✅|[Link](./rockchip)|✅|✅|

### 2.2 Detailed Deployment Docs

- x86 CPU
  - [Model preparation](./cpu-gpu)
  - [Python deployment example](./cpu-gpu/python/)
  - [C++ deployment example](./cpu-gpu/cpp/)
- NVIDIA GPU
  - [Model preparation](./cpu-gpu)
  - [Python deployment example](./cpu-gpu/python/)
  - [C++ deployment example](./cpu-gpu/cpp/)
- Phytium CPU
  - [Model preparation](./cpu-gpu)
  - [Python deployment example](./cpu-gpu/python/)
  - [C++ deployment example](./cpu-gpu/cpp/)
- ARM CPU
  - [Model preparation](./cpu-gpu)
  - [Python deployment example](./cpu-gpu/python/)
  - [C++ deployment example](./cpu-gpu/cpp/)
- Intel GPU
  - [Model preparation](./cpu-gpu)
  - [Python deployment example](./cpu-gpu/python/)
  - [C++ deployment example](./cpu-gpu/cpp/)
- Kunlunxin XPU
  - [Model preparation](./kunlunxin)
  - [Python deployment example](./kunlunxin/python/)
  - [C++ deployment example](./kunlunxin/cpp/)
- Huawei Ascend
  - [Model preparation](./ascend)
  - [Python deployment example](./ascend/python/)
  - [C++ deployment example](./ascend/cpp/)
- Sophgo
  - [Model preparation](./sophgo/)
  - [Python deployment example](./sophgo/python/)
  - [C++ deployment example](./sophgo/cpp/)
- Rockchip
  - [Model preparation](./rockchip/)
  - [Python deployment example](./rockchip/rknpu2/)
  - [C++ deployment example](./rockchip/rknpu2/)

### 2.3 More Deployment Options

- [Android ARM CPU deployment](./android)
- [Serving deployment](./serving)
- [Web deployment](./web)

## 3. FAQ

<div id="常见问题"></div>

If you run into a problem, check the FAQ collection, search existing FastDeploy issues, *or file an [issue](https://github.com/PaddlePaddle/FastDeploy/issues) with FastDeploy*:

[FAQ collection](https://github.com/PaddlePaddle/FastDeploy/tree/develop/docs/cn/faq)
[FastDeploy issues](https://github.com/PaddlePaddle/FastDeploy/issues)
@@ -1,7 +1,7 @@
 [English](README.md) | 简体中文
-# OCR Text Recognition Android Demo Guide
+# PaddleOCR Android Demo Guide

-This demo runs real-time OCR text recognition on Android. It is easy to use and open to extension, e.g. you can run your own trained model in it.
+This demo runs real-time PaddleOCR text recognition on Android. It is easy to use and open to extension, e.g. you can run your own trained model in it.

 ## Environment Setup

@@ -10,9 +10,8 @@

 ## Deployment Steps

-1. The OCR text recognition demo is located in the `fastdeploy/examples/vision/ocr/PP-OCRv3/android` directory
-2. Open the PP-OCRv3/android project with Android Studio
-3. Connect your phone to the computer, enable USB debugging and file transfer mode, and connect your device in Android Studio (the phone must allow installing apps over USB)
+1. Open the PP-OCRv3/android project with Android Studio
+2. Connect your phone to the computer, enable USB debugging and file transfer mode, and connect your device in Android Studio (the phone must allow installing apps over USB)

 <p align="center">
 <img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">

@@ -186,7 +185,7 @@ model.init(detModel, clsModel, recModel);

 ## Replacing the FastDeploy SDK and Models
 Replacing the FastDeploy prediction library and models is straightforward. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` is the version of the prediction library you are using. The models are located at `app/src/main/assets/models`.
 - Replace the FastDeploy Android SDK: download or build the latest FastDeploy Android SDK and extract it into the `app/libs` directory; for detailed configuration see:
-  - [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
+  - [Use the FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)

 - Steps to replace the OCR models:
   - Put your OCR models under the `app/src/main/assets/models` directory;

@@ -219,5 +218,6 @@ predictor.init(detModel, recModel);

 ## Further Reading
 If you want to learn more about the FastDeploy Java API, or how to call the FastDeploy C++ API through JNI, see:
-- [Use the FastDeploy Java SDK on Android](../../../../../java/android/)
-- [Use the FastDeploy C++ SDK on Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [Use the FastDeploy Java SDK on Android](https://github.com/PaddlePaddle/FastDeploy/tree/develop/java/android)
+- [Use the FastDeploy C++ SDK on Android](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_cpp_sdk_on_android.md)
+- To tune pre/post-processing hyperparameters, use the text detection or recognition model alone, or deploy other models, see [PP-OCR deployment on CPU/GPU](../../cpu-gpu/python/README.md)
@@ -1,20 +1,23 @@
 [English](README.md) | 简体中文
-# PaddleOCR Model Deployment
-
-## PaddleOCR is a serial pipeline of several models, consisting of
-- Text detection `DBDetector`
-- [Optional] Orientation classification `Classifer`, which corrects the image orientation before text recognition
-- Text recognition `Recognizer`, which recognizes the text in the image
-
-Depending on the scenario, FastDeploy provides the OCR deployments summarized below. Users need to download all 3 models plus the dictionary file (or 2, since the classifier is optional) to run the full OCR prediction pipeline.
+# Deploying PaddleOCR Models on Huawei Ascend — FastDeploy
+
+## 1. Overview
+PaddleOCR models can be deployed on Huawei Ascend via FastDeploy
+
+## 2. Supported Models
+
+### PP-OCR Chinese and English series
+The download links in the table below are provided by the PaddleOCR model zoo; see the [PP-OCR model list](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/models_list.md)
+
-| OCR version | Text box detection | Orientation classifier | Text recognition | Dictionary | Notes |
+| PaddleOCR version | Text box detection | Orientation classifier | Text recognition | Dictionary | Notes |
 |:----|:----|:----|:----|:----|:--------|
 | ch_PP-OCRv3[recommended] |[ch_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | Original ultra-lightweight OCRv3 model; supports Chinese, English, and multilingual text detection |
 | en_PP-OCRv3[recommended] |[en_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [en_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) | [en_dict.txt](https://bj.bcebos.com/paddlehub/fastdeploy/en_dict.txt) | Original ultra-lightweight OCRv3 model for English and digits; apart from the training data of the detection and recognition models, it is the same as the Chinese model |
 | ch_PP-OCRv2 |[ch_PP-OCRv2_det](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv2_rec](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | Original ultra-lightweight OCRv2 model; supports Chinese, English, and multilingual text detection |
 | ch_PP-OCRv2_mobile |[ch_ppocr_mobile_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_mobile_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | Original ultra-lightweight OCRv2 model; supports Chinese, English, and multilingual text detection, and is even lighter than PP-OCRv2 |
 | ch_PP-OCRv2_server |[ch_ppocr_server_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_server_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) |[ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2 server-grade model; supports Chinese, English, and multilingual text detection; larger than the ultra-lightweight models but more accurate|

+## 3. Deployment Examples
+- [Python deployment](python)
+- [C++ deployment](cpp)
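Each archive in the table above unpacks to a directory containing an `inference.pdmodel` / `inference.pdiparams` pair, which the deployment scripts in this PR resolve relative to the model directory. A minimal plain-Python sketch of that lookup (the directory name below is just an example):

```python
import os

def model_files(model_dir):
    """Return the (model, params) file pair inside an extracted model directory."""
    return (os.path.join(model_dir, "inference.pdmodel"),
            os.path.join(model_dir, "inference.pdiparams"))

# e.g. after `tar -xvf ch_PP-OCRv3_det_infer.tar`
print(model_files("ch_PP-OCRv3_det_infer"))
```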
@@ -1,4 +1,4 @@
-PROJECT(infer_demo C)
+PROJECT(infer_demo C CXX)
 CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

 # Path to the downloaded and extracted FastDeploy library
@@ -9,5 +9,6 @@ include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
 # Add the FastDeploy header directories
 include_directories(${FASTDEPLOY_INCS})

-add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.c)
+add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
+# Link against the FastDeploy library
 target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/ocr/PP-OCR/ascend/cpp/README.md (new file, 63 lines)
@@ -0,0 +1,63 @@
[English](README.md) | 简体中文
# PP-OCRv3 Ascend C++ Deployment Example

This directory provides `infer.cc` for deploying PP-OCRv3 on Huawei Ascend AI processors.

## 1. Environment Setup
Confirm the following two steps before deployment:
- 1. Build the FastDeploy prediction library for the Huawei Ascend AI processor yourself; see [Building for Huawei Ascend AI processors](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
- 2. The environment must be initialized at deployment time; see [How to deploy on Huawei Ascend AI processors with C++](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_ascend.md)

## 2. Model Preparation
Before deployment, prepare the inference models you need; you can download them from the [PaddleOCR model list supported by FastDeploy](../README.md).

## 3. Running the Example
```bash
# Download the example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/ocr/PP-OCR/ascend/cpp

# If you prefer to get the example code from PaddleOCR, run
git clone https://github.com/PaddlePaddle/PaddleOCR.git
# Note: if the fastdeploy test code below is missing on the current branch, switch to the dygraph branch
git checkout dygraph
cd PaddleOCR/deploy/fastdeploy/ascend/cpp

mkdir build
cd build
# Build infer_demo against the compiled FastDeploy library
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-ascend
make -j

# Download the PP-OCRv3 text detection model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar
# Download the text orientation classifier model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
# Download the PP-OCRv3 text recognition model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar -xvf ch_PP-OCRv3_rec_infer.tar

# Download the test image and dictionary file
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# Initialize the environment as described in the docs above, then run
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg

# NOTE: to run prediction over a series of images, prepare them with one uniform size, e.g. N images all of size A * B.
```

The visualized result looks like this:

<div align="center">
<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">
</div>

## 4. More Guides
- [PP-OCR C++ API reference](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1ocr.html)
- [Overview of deploying PaddleOCR models with FastDeploy](../../)
- [PP-OCRv3 Python deployment](../python)
- To tune pre/post-processing hyperparameters, use the text detection or recognition model alone, or deploy other models, see [PP-OCR deployment on CPU/GPU](../../cpu-gpu/python/README.md)
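The NOTE above, that a series of inputs must share one uniform size, can be pre-checked with a few lines of plain Python before launching the demo. This sketch is illustrative and independent of FastDeploy; `shapes` is an assumed stand-in for the (height, width) pairs of your images:

```python
def all_same_size(shapes):
    """Return True when every (height, width) pair matches the first one.

    With static shape inference enabled (required on Ascend), a run over
    many consecutive images expects them all to share a single size.
    """
    return len(set(shapes)) <= 1

print(all_same_size([(720, 1280), (720, 1280)]))  # True
print(all_same_size([(720, 1280), (480, 640)]))   # False
```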
@@ -19,12 +19,12 @@ const char sep = '\\';
 const char sep = '/';
 #endif

-void InitAndInfer(const std::string& det_model_dir,
-                  const std::string& cls_model_dir,
-                  const std::string& rec_model_dir,
-                  const std::string& rec_label_file,
-                  const std::string& image_file,
-                  const fastdeploy::RuntimeOption& option) {
+void AscendInfer(const std::string &det_model_dir,
+                 const std::string &cls_model_dir,
+                 const std::string &rec_model_dir,
+                 const std::string &rec_label_file,
+                 const std::string &image_file) {
   auto det_model_file = det_model_dir + sep + "inference.pdmodel";
   auto det_params_file = det_model_dir + sep + "inference.pdiparams";

@@ -34,6 +34,9 @@ void InitAndInfer(const std::string& det_model_dir,
   auto rec_model_file = rec_model_dir + sep + "inference.pdmodel";
   auto rec_params_file = rec_model_dir + sep + "inference.pdiparams";

+  fastdeploy::RuntimeOption option;
+  option.UseAscend();
+
   auto det_option = option;
   auto cls_option = option;
   auto rec_option = option;

@@ -45,9 +48,7 @@ void InitAndInfer(const std::string& det_model_dir,
   auto rec_model = fastdeploy::vision::ocr::Recognizer(
       rec_model_file, rec_params_file, rec_label_file, rec_option);

-  // Users could enable static shape infer for rec model when deploying PP-OCR
-  // on hardware which can not support dynamic shape infer well, like the
-  // Huawei Ascend series.
+  // When deploying on Ascend, the rec model must enable static shape infer as below.
   rec_model.GetPreprocessor().SetStaticShapeInfer(true);

   assert(det_model.Initialized());

@@ -56,16 +57,16 @@ void InitAndInfer(const std::string& det_model_dir,

   // The classification model is optional, so the PP-OCR can also be connected
   // in series as follows
-  // auto ppocr_v2 = fastdeploy::pipeline::PPOCRv2(&det_model, &rec_model);
-  auto ppocr_v2 =
-      fastdeploy::pipeline::PPOCRv2(&det_model, &cls_model, &rec_model);
+  // auto ppocr_v3 = fastdeploy::pipeline::PPOCRv3(&det_model, &rec_model);
+  auto ppocr_v3 =
+      fastdeploy::pipeline::PPOCRv3(&det_model, &cls_model, &rec_model);

   // When users enable static shape infer for the rec model, the batch size of
   // the cls and rec models must be set to 1.
-  ppocr_v2.SetClsBatchSize(1);
-  ppocr_v2.SetRecBatchSize(1);
+  ppocr_v3.SetClsBatchSize(1);
+  ppocr_v3.SetRecBatchSize(1);

-  if (!ppocr_v2.Initialized()) {
+  if (!ppocr_v3.Initialized()) {
     std::cerr << "Failed to initialize PP-OCR." << std::endl;
     return;
   }

@@ -73,7 +74,7 @@ void InitAndInfer(const std::string& det_model_dir,
   auto im = cv::imread(image_file);

   fastdeploy::vision::OCRResult result;
-  if (!ppocr_v2.Predict(im, &result)) {
+  if (!ppocr_v3.Predict(im, &result)) {
     std::cerr << "Failed to predict." << std::endl;
     return;
   }

@@ -85,36 +86,23 @@ void InitAndInfer(const std::string& det_model_dir,
   std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
 }

-int main(int argc, char* argv[]) {
-  if (argc < 7) {
+int main(int argc, char *argv[]) {
+  if (argc < 6) {
     std::cout << "Usage: infer_demo path/to/det_model path/to/cls_model "
                  "path/to/rec_model path/to/rec_label_file path/to/image "
-                 "run_option, "
-                 "e.g ./infer_demo ./ch_PP-OCRv2_det_infer "
-                 "./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer "
-                 "./ppocr_keys_v1.txt ./12.jpg 0"
-              << std::endl;
-    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
-                 "with ascend."
+                 "e.g ./infer_demo ./ch_PP-OCRv3_det_infer "
+                 "./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer "
+                 "./ppocr_keys_v1.txt ./12.jpg"
               << std::endl;
     return -1;
   }

-  fastdeploy::RuntimeOption option;
-  int flag = std::atoi(argv[6]);
-
-  if (flag == 0) {
-    option.UseCpu();
-  } else if (flag == 1) {
-    option.UseAscend();
-  }
-
   std::string det_model_dir = argv[1];
   std::string cls_model_dir = argv[2];
   std::string rec_model_dir = argv[3];
   std::string rec_label_file = argv[4];
   std::string test_image = argv[5];
-  InitAndInfer(det_model_dir, cls_model_dir, rec_model_dir, rec_label_file,
-               test_image, option);
+  AscendInfer(det_model_dir, cls_model_dir, rec_model_dir, rec_label_file,
+              test_image);
   return 0;
 }
examples/vision/ocr/PP-OCR/ascend/python/README.md (new file, 55 lines)
@@ -0,0 +1,55 @@
[English](README.md) | 简体中文
# PP-OCRv3 Ascend Python Deployment Example

This directory provides `infer.py` for deploying PP-OCRv3 on Huawei Ascend AI processors.

## 1. Environment Setup
Before deployment, build and install the FastDeploy Python wheel for the Huawei Ascend AI processor yourself; see [Building for Huawei Ascend AI processors](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)

## 2. Model Preparation
Before deployment, prepare the inference models you need; you can download them from the [PaddleOCR model list supported by FastDeploy](../README.md).

## 3. Running the Example
```bash
# Download the example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/ocr/PP-OCR/ascend/python

# If you prefer to get the example code from PaddleOCR, run
git clone https://github.com/PaddlePaddle/PaddleOCR.git
# Note: if the fastdeploy test code below is missing on the current branch, switch to the dygraph branch
git checkout dygraph
cd PaddleOCR/deploy/fastdeploy/ascend/python

# Download the PP-OCRv3 text detection model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar
# Download the text orientation classifier model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
# Download the PP-OCRv3 text recognition model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar -xvf ch_PP-OCRv3_rec_infer.tar

# Download the test image and dictionary file
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg
# NOTE: to run prediction over a series of images, prepare them with one uniform size, e.g. N images all of size A * B.
```

The visualized result looks like this:

<div align="center">
<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">
</div>

## 4. More Guides
- [PP-OCR Python API reference](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/ocr.html)
- [Overview of deploying PaddleOCR models with FastDeploy](../../)
- [PP-OCRv3 C++ deployment](../cpp)
- To tune pre/post-processing hyperparameters, use the text detection or recognition model alone, or deploy other models, see [PP-OCR deployment on CPU/GPU](../../cpu-gpu/python/README.md)

## 5. FAQ
- [How to convert vision model prediction results to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)
@@ -37,16 +37,6 @@ def parse_arguments():
         help="Path of Recognization model of PPOCR.")
     parser.add_argument(
         "--image", type=str, required=True, help="Path of test image file.")
-    parser.add_argument(
-        "--device",
-        type=str,
-        default='cpu',
-        help="Type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.")
-    parser.add_argument(
-        "--cpu_thread_num",
-        type=int,
-        default=9,
-        help="Number of threads while inference on CPU.")
     return parser.parse_args()


@@ -56,8 +46,6 @@ def build_option(args):
     cls_option = fd.RuntimeOption()
     rec_option = fd.RuntimeOption()

-    # Currently, Ascend is the only hardware that needs static shape inference for PP-OCR.
-    if args.device.lower() == "ascend":
-        det_option.use_ascend()
-        cls_option.use_ascend()
-        rec_option.use_ascend()
+    det_option.use_ascend()
+    cls_option.use_ascend()
+    rec_option.use_ascend()

@@ -67,13 +55,12 @@

 args = parse_arguments()

 # Detection model: detects the text boxes
 det_model_file = os.path.join(args.det_model, "inference.pdmodel")
 det_params_file = os.path.join(args.det_model, "inference.pdiparams")
 # Classification model: orientation classification, optional
 cls_model_file = os.path.join(args.cls_model, "inference.pdmodel")
 cls_params_file = os.path.join(args.cls_model, "inference.pdiparams")
 # Recognition model: text recognition
 rec_model_file = os.path.join(args.rec_model, "inference.pdmodel")
 rec_params_file = os.path.join(args.rec_model, "inference.pdiparams")
 rec_label_file = args.rec_label_file

@@ -89,26 +76,28 @@ cls_model = fd.vision.ocr.Classifier(
 rec_model = fd.vision.ocr.Recognizer(
     rec_model_file, rec_params_file, rec_label_file, runtime_option=rec_option)

-# Enable static shape inference for the rec model
+# Rec model enables static shape infer.
+# When deploying on Ascend, it must be true.
 rec_model.preprocessor.static_shape_infer = True

-# Create the PP-OCR pipeline chaining the 3 models; cls_model is optional and may be set to None
+# Create PP-OCRv3; if cls_model is not needed,
+# just set cls_model=None.
 ppocr_v3 = fd.vision.ocr.PPOCRv3(
     det_model=det_model, cls_model=cls_model, rec_model=rec_model)

-# The batch size of the cls and rec models must be set to 1 when static shape inference is enabled
+# The batch size must be set to 1 when static shape infer is enabled.
 ppocr_v3.cls_batch_size = 1
 ppocr_v3.rec_batch_size = 1

-# Prepare the input image
+# Prepare the image.
 im = cv2.imread(args.image)

-# Predict and print the results
+# Predict and print the results.
 result = ppocr_v3.predict(im)

 print(result)

-# Visualize the results
+# Visualize the output.
 vis_im = fd.vision.vis_ppocr(im, result)
 cv2.imwrite("visualized_result.jpg", vis_im)
 print("Visualized result save in ./visualized_result.jpg")
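The script above composes detection, the optional classifier, and recognition into one pipeline via `fd.vision.ocr.PPOCRv3`. Stripped of the FastDeploy specifics, the control flow amounts to the following plain-Python sketch; the stage callables here are hypothetical stand-ins, not the FastDeploy API:

```python
def run_ppocr(image, detect, recognize, classify=None):
    """Chain PP-OCR stages: detect text boxes, optionally fix each crop's
    orientation, then recognize the text in every crop."""
    results = []
    for box, crop in detect(image):
        if classify is not None:  # the classifier stage is optional
            crop = classify(crop)
        results.append((box, recognize(crop)))
    return results

# Toy stand-ins to show the data flow (not real models).
detect = lambda img: [((0, 0, 10, 10), "crop-a"), ((0, 20, 10, 30), "crop-b")]
classify = lambda crop: crop.upper()      # pretend orientation fix
recognize = lambda crop: f"text[{crop}]"  # pretend text recognition

print(run_ppocr("img", detect, recognize, classify))
```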
examples/vision/ocr/PP-OCR/cpu-gpu/README.md (new file, 26 lines)
@@ -0,0 +1,26 @@
[English](README.md) | 简体中文

# Deploying PaddleOCR Models on CPU and GPU — FastDeploy

## 1. Overview
PaddleOCR models can be deployed quickly on NVIDIA GPU, x86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete/integrated) hardware via FastDeploy

## 2. Supported PaddleOCR Inference Models

The inference models below have been tested with FastDeploy; the download links are provided by the PaddleOCR model zoo.
For more models, see the [PP-OCR model list](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/models_list.md); you are welcome to try them.

| PaddleOCR version | Text box detection | Orientation classifier | Text recognition | Dictionary | Notes |
|:----|:----|:----|:----|:----|:--------|
| ch_PP-OCRv3[recommended] |[ch_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | Original ultra-lightweight OCRv3 model; supports Chinese, English, and multilingual text detection |
| en_PP-OCRv3[recommended] |[en_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [en_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) | [en_dict.txt](https://bj.bcebos.com/paddlehub/fastdeploy/en_dict.txt) | Original ultra-lightweight OCRv3 model for English and digits; apart from the training data of the detection and recognition models, it is the same as the Chinese model |
| ch_PP-OCRv2 |[ch_PP-OCRv2_det](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv2_rec](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | Original ultra-lightweight OCRv2 model; supports Chinese, English, and multilingual text detection |
| ch_PP-OCRv2_mobile |[ch_ppocr_mobile_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_mobile_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | Original ultra-lightweight OCRv2 model; supports Chinese, English, and multilingual text detection, and is even lighter than PP-OCRv2 |
| ch_PP-OCRv2_server |[ch_ppocr_server_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_server_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) |[ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2 server-grade model; supports Chinese, English, and multilingual text detection; larger than the ultra-lightweight models but more accurate|

## 3. Deployment Examples
- [Python deployment](python)
- [C++ deployment](cpp)
- [C deployment](c)
- [C# deployment](csharp)
examples/vision/ocr/PP-OCRv3/c/README_CN.md → examples/vision/ocr/PP-OCR/cpu-gpu/c/README.md (74 lines, mode: Normal file → Executable file)
@@ -1,57 +1,73 @@
|
||||
[English](README.md) | 简体中文
|
||||
# PPOCRv3 C部署示例
|
||||
# PaddleOCR CPU-GPU C部署示例
|
||||
|
||||
本目录下提供`infer.c`来调用C API快速完成PPOCRv3模型在CPU/GPU上部署的示例。
|
||||
本目录下提供`infer.c`来调用C API快速完成PP-OCRv3模型在CPU/GPU上部署的示例。
|
||||
|
||||
在部署前,需确认以下两个步骤
|
||||
|
||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
## 1. 说明
|
||||
PaddleOCR支持利用FastDeploy在NVIDIA GPU、X86 CPU、飞腾CPU、ARM CPU、Intel GPU(独立显卡/集成显卡)硬件上快速部署OCR模型.
|
||||
|
||||
## 2. 部署环境准备
|
||||
在部署前,需确认软硬件环境,同时下载预编译部署库,参考[FastDeploy安装文档](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装)安装FastDeploy预编译库.
|
||||
以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.4以上(x.x.x>=1.0.4)
|
||||
|
||||
## 3. 部署模型准备
|
||||
在部署前, 请准备好您所需要运行的推理模型, 您可以在[FastDeploy支持的PaddleOCR模型列表](../README.md)中下载所需模型.
|
||||
|
||||
## 4.运行部署示例
|
||||
```bash
|
||||
# 下载部署示例代码
|
||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||
cd FastDeploy/examples/vision/ocr/PP-OCR/cpu-gpu/c
|
||||
|
||||
# 如果您希望从PaddleOCR下载示例代码,请运行
|
||||
git clone https://github.com/PaddlePaddle/PaddleOCR.git
|
||||
# 注意:如果当前分支找不到下面的fastdeploy测试代码,请切换到dygraph分支
|
||||
git checkout dygraph
|
||||
cd PaddleOCR/deploy/fastdeploy/cpu-gpu/c
|
||||
|
||||
mkdir build
|
||||
cd build
|
||||
|
||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||
|
||||
# 编译Demo
|
||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||
make -j
|
||||
|
||||
|
||||
# 下载模型,图片和字典文件
|
||||
# 下载PP-OCRv3文字检测模型
|
||||
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
|
||||
tar -xvf ch_PP-OCRv3_det_infer.tar
|
||||
|
||||
# 下载文字方向分类器模型
|
||||
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
|
||||
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
|
||||
|
||||
# 下载PP-OCRv3文字识别模型
|
||||
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
|
||||
tar -xvf ch_PP-OCRv3_rec_infer.tar
|
||||
|
||||
# 下载预测图片与字典文件
|
||||
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
|
||||
|
||||
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
|
||||
|
||||
# CPU推理
|
||||
# 在CPU上使用Paddle Inference推理
|
||||
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
|
||||
# GPU推理
|
||||
# 在GPU上使用Paddle Inference推理
|
||||
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
|
||||
```
|
||||
|
||||
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
|
||||
- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
||||
|
||||
如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
|
||||
- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
|
||||
|
||||
运行完成可视化结果如下图所示
|
||||
|
||||
<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">
|
||||
|
||||
|
||||
## PPOCRv3 C API接口
|
||||
## 5. PP-OCRv3 C API接口简介
|
||||
下面提供了PP-OCRv3的C API简介
|
||||
|
||||
- 如果用户想要更换部署后端或进行其他定制化操作, 请查看[C Runtime API](https://baidu-paddle.github.io/fastdeploy-api/c/html/runtime__option_8h.html).
|
||||
- 更多 PP-OCR C API 请查看 [C PP-OCR API](https://github.com/PaddlePaddle/FastDeploy/blob/develop/c_api/fastdeploy_capi/vision/ocr/ppocr/model.h)
|
||||
|
||||
### 配置
|
||||
|
||||
@@ -159,7 +175,7 @@ FD_C_PPOCRv3Wrapper* FD_C_CreatePPOCRv3Wrapper(
|
||||
FD_C_RecognizerWrapper* rec_model
|
||||
)
|
||||
```
|
||||
> Creates a PP-OCRv3 model and returns a pointer for operating on it.
>
> **Parameters**
>
> ...
>
> **Returns**
>
> * **fd_c_ppocrv3_wrapper**(FD_C_PPOCRv3Wrapper*): pointer to the PP-OCRv3 model object

### Reading and writing images

```c
FD_C_Mat FD_C_Imread(const char* imgpath)
...
FD_C_Bool FD_C_Imwrite(const char* savepath, FD_C_Mat img);
```

> ...
> * **result**(FD_C_Bool): indicates whether the operation succeeded

### The Predict function

```c
FD_C_Bool FD_C_PPOCRv3WrapperPredict(
    ...
)
```

> Prediction interface: takes an input image and directly produces the result.
>
> **Parameters**
> * **fd_c_ppocrv3_wrapper**(FD_C_PPOCRv3Wrapper*): pointer to the PP-OCRv3 model
> * **img**(FD_C_Mat): pointer to the input image (a cv::Mat object), which can be obtained with FD_C_Imread
> * **result**(FD_C_OCRResult*): the OCR prediction result, including the box positions from the detection model, the orientation labels from the classification model, and the recognized text from the recognition model; see [Vision model prediction results](../../../../../docs/api/vision_results/) for a description of OCRResult

### The prediction result

```c
FD_C_Mat FD_C_VisOcr(FD_C_Mat im, FD_C_OCRResult* ocr_result)
```

> ...
> * **vis_im**(FD_C_Mat): pointer to the visualized image

## 6. Other documents

- [Overview of deploying PaddleOCR models with FastDeploy](../../)
- [PP-OCRv3 Python deployment](../python)
- [PP-OCRv3 C++ deployment](../cpp)
- [PP-OCRv3 C# deployment](../csharp)
```c
#ifdef _WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif

void CpuInfer(const char *det_model_dir, const char *cls_model_dir,
              const char *rec_model_dir, const char *rec_label_file,
              const char *image_file) {
  char det_model_file[100];
  char det_params_file[100];
  // ...
  snprintf(rec_params_file, max_size, "%s%c%s", rec_model_dir, sep,
           "inference.pdiparams");

  FD_C_RuntimeOptionWrapper *det_option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapper *cls_option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapper *rec_option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseCpu(det_option);
  FD_C_RuntimeOptionWrapperUseCpu(cls_option);
  FD_C_RuntimeOptionWrapperUseCpu(rec_option);

  FD_C_DBDetectorWrapper *det_model = FD_C_CreateDBDetectorWrapper(
      det_model_file, det_params_file, det_option, FD_C_ModelFormat_PADDLE);
  FD_C_ClassifierWrapper *cls_model = FD_C_CreateClassifierWrapper(
      cls_model_file, cls_params_file, cls_option, FD_C_ModelFormat_PADDLE);
  FD_C_RecognizerWrapper *rec_model = FD_C_CreateRecognizerWrapper(
      rec_model_file, rec_params_file, rec_label_file, rec_option,
      FD_C_ModelFormat_PADDLE);

  FD_C_PPOCRv3Wrapper *ppocr_v3 =
      FD_C_CreatePPOCRv3Wrapper(det_model, cls_model, rec_model);
  if (!FD_C_PPOCRv3WrapperInitialized(ppocr_v3)) {
    printf("Failed to initialize.\n");
    // ...
  }

  FD_C_Mat im = FD_C_Imread(image_file);

  FD_C_OCRResult *result = (FD_C_OCRResult *)malloc(sizeof(FD_C_OCRResult));

  if (!FD_C_PPOCRv3WrapperPredict(ppocr_v3, im, result)) {
    printf("Failed to predict.\n");
    // ...
  }
  // ...
  FD_C_DestroyMat(vis_im);
}

void GpuInfer(const char *det_model_dir, const char *cls_model_dir,
              const char *rec_model_dir, const char *rec_label_file,
              const char *image_file) {
  char det_model_file[100];
  char det_params_file[100];
  // ...
  snprintf(rec_params_file, max_size, "%s%c%s", rec_model_dir, sep,
           "inference.pdiparams");

  FD_C_RuntimeOptionWrapper *det_option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapper *cls_option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapper *rec_option = FD_C_CreateRuntimeOptionWrapper();
  FD_C_RuntimeOptionWrapperUseGpu(det_option, 0);
  FD_C_RuntimeOptionWrapperUseGpu(cls_option, 0);
  FD_C_RuntimeOptionWrapperUseGpu(rec_option, 0);

  FD_C_DBDetectorWrapper *det_model = FD_C_CreateDBDetectorWrapper(
      det_model_file, det_params_file, det_option, FD_C_ModelFormat_PADDLE);
  FD_C_ClassifierWrapper *cls_model = FD_C_CreateClassifierWrapper(
      cls_model_file, cls_params_file, cls_option, FD_C_ModelFormat_PADDLE);
  FD_C_RecognizerWrapper *rec_model = FD_C_CreateRecognizerWrapper(
      rec_model_file, rec_params_file, rec_label_file, rec_option,
      FD_C_ModelFormat_PADDLE);

  FD_C_PPOCRv3Wrapper *ppocr_v3 =
      FD_C_CreatePPOCRv3Wrapper(det_model, cls_model, rec_model);
  if (!FD_C_PPOCRv3WrapperInitialized(ppocr_v3)) {
    printf("Failed to initialize.\n");
    // ...
  }

  FD_C_Mat im = FD_C_Imread(image_file);

  FD_C_OCRResult *result = (FD_C_OCRResult *)malloc(sizeof(FD_C_OCRResult));

  if (!FD_C_PPOCRv3WrapperPredict(ppocr_v3, im, result)) {
    printf("Failed to predict.\n");
    // ...
  }
  // ...
  FD_C_DestroyMat(im);
  FD_C_DestroyMat(vis_im);
}

int main(int argc, char *argv[]) {
  if (argc < 7) {
    printf(
        "Usage: infer_demo path/to/det_model path/to/cls_model "
        // ...
```
New file: examples/vision/ocr/PP-OCR/cpu-gpu/cpp/CMakeLists.txt
```cmake
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Path to the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy header directories
include_directories(${FASTDEPLOY_INCS})

# PP-OCR
add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})

# Det only
add_executable(infer_det ${PROJECT_SOURCE_DIR}/infer_det.cc)
target_link_libraries(infer_det ${FASTDEPLOY_LIBS})

# Cls only
add_executable(infer_cls ${PROJECT_SOURCE_DIR}/infer_cls.cc)
target_link_libraries(infer_cls ${FASTDEPLOY_LIBS})

# Rec only
add_executable(infer_rec ${PROJECT_SOURCE_DIR}/infer_rec.cc)
target_link_libraries(infer_rec ${FASTDEPLOY_LIBS})
```
New file: examples/vision/ocr/PP-OCR/cpu-gpu/cpp/README.md
[English](README.md) | 简体中文
# PaddleOCR CPU-GPU C++ Deployment Example

This directory provides `infer.cc`, an example that quickly deploys PP-OCRv3 on CPU/GPU, as well as on GPU with Paddle-TensorRT acceleration.

## 1. Overview
PaddleOCR models can be quickly deployed with FastDeploy on NVIDIA GPU, X86 CPU, Phytium CPU, ARM CPU, and Intel GPU (discrete or integrated graphics) hardware.

## 2. Preparing the deployment environment
Before deployment, confirm your software and hardware environment and download the precompiled deployment library; see the [FastDeploy installation docs](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#FastDeploy预编译库安装) to install the precompiled FastDeploy library.

## 3. Preparing the model
Before deployment, prepare the inference model you want to run; the required models can be downloaded from the [list of PaddleOCR models supported by FastDeploy](../README.md).

## 4. Running the deployment example
Taking inference on Linux as an example, run the following commands in this directory to build and test. FastDeploy 1.0.0 or later (x.x.x>=1.0.0) is required for this model.

```bash
# Download the example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/ocr/PP-OCR/cpu-gpu/cpp

# If you would rather get the example code from PaddleOCR, run
git clone https://github.com/PaddlePaddle/PaddleOCR.git
# Note: if the fastdeploy test code below is missing from the current branch, switch to the dygraph branch
git checkout dygraph
cd PaddleOCR/deploy/fastdeploy/cpu-gpu/cpp

# Download the precompiled FastDeploy library; pick a suitable version from the precompiled libraries mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz

# Build the deployment example
mkdir build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the PP-OCRv3 text detection model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar
# Download the text orientation classifier model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
# Download the PP-OCRv3 text recognition model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar -xvf ch_PP-OCRv3_rec_infer.tar

# Download the test image and the dictionary file
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# Run the deployment example
# Paddle Inference on CPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
# OpenVINO on CPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
# ONNX Runtime on CPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 2
# Paddle Lite on CPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
# Paddle Inference on GPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
# Paddle TensorRT on GPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 5
# ONNX Runtime on GPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 6
# NVIDIA TensorRT on GPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 7

# FastDeploy also supports running the text detection, text classification,
# and text recognition models individually. If needed, prepare suitable images
# and refer to infer.cc to configure your own hardware and inference backend.

# Text detection model only, on CPU
./infer_det ./ch_PP-OCRv3_det_infer ./12.jpg 0

# Text orientation classification model only, on CPU
./infer_cls ./ch_ppocr_mobile_v2.0_cls_infer ./12.jpg 0

# Text recognition model only, on CPU
./infer_rec ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
```

The visualized result looks like this:
<div align="center">
<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">
</div>

- Note: the commands above work only on Linux and macOS; for using the SDK on Windows, see [How to use the FastDeploy C++ SDK on Windows](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/use_sdk_on_windows.md)
- For using other inference backends and other hardware with FastDeploy, see [How to switch the model inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)

## 5. Example options
`infer_demo` takes six arguments: the text detection model, the text classification model, the text recognition model, the test image, the dictionary file, and a final numeric option.
The table below explains the meaning of that numeric option.

|Option|Meaning|
|:---:|:---:|
|0| Paddle Inference on CPU |
|1| OpenVINO on CPU |
|2| ONNX Runtime on CPU |
|3| Paddle Lite on CPU |
|4| Paddle Inference on GPU |
|5| Paddle TensorRT on GPU |
|6| ONNX Runtime on GPU |
|7| NVIDIA TensorRT on GPU |

For using other inference backends and other hardware with FastDeploy, see [How to switch the model inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)
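The final numeric argument simply selects which runtime configuration `infer.cc` applies. A sketch of that dispatch (the function name and returned strings are illustrative; the real code configures a runtime option object rather than returning a label, and only the number-to-backend mapping comes from the table above):

```c
/* Map the demo's numeric option to the backend it selects, following the
   option table. Illustrative sketch only. */
const char *backend_for_option(int option) {
  switch (option) {
    case 0: return "Paddle Inference on CPU";
    case 1: return "OpenVINO on CPU";
    case 2: return "ONNX Runtime on CPU";
    case 3: return "Paddle Lite on CPU";
    case 4: return "Paddle Inference on GPU";
    case 5: return "Paddle TensorRT on GPU";
    case 6: return "ONNX Runtime on GPU";
    case 7: return "NVIDIA TensorRT on GPU";
    default: return "unknown option";
  }
}
```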

## 6. More guides

### 6.1 Deploying PP-OCRv2 series models with C++
The `infer.cc` in this directory uses PP-OCRv3 as its example. To use PP-OCRv2 instead, create the PP-OCRv2 pipeline as shown below.

```cpp
// This line creates a PP-OCRv3 model
auto ppocr_v3 = fastdeploy::pipeline::PPOCRv3(&det_model, &cls_model, &rec_model);
// Simply change PPOCRv3 to PPOCRv2 to create a PP-OCRv2 model; call the
// subsequent APIs through ppocr_v2
auto ppocr_v2 = fastdeploy::pipeline::PPOCRv2(&det_model, &cls_model, &rec_model);

// When deploying PP-OCRv2 with TensorRT, the TensorRT input shape of the Rec
// model must also be changed: set the H dimension to 32 and adjust the W
// dimension as needed.
rec_option.SetTrtInputShape("x", {1, 3, 32, 10}, {rec_batch_size, 3, 32, 320},
                            {rec_batch_size, 3, 32, 2304});
```
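The H dimension differs because the PP-OCRv2 recognizer consumes crops of height 32 while PP-OCRv3 uses height 48; the W dimension is dynamic because each text crop is resized to the fixed height while keeping its aspect ratio. A sketch of that width computation (`rec_resized_width` is a hypothetical helper, with the cap matching the maximum shape above):

```c
/* Width of a text crop after resizing to a fixed recognition height while
   keeping the aspect ratio, capped at max_w. Hypothetical helper that
   illustrates why the TensorRT "W" dimension is dynamic. */
int rec_resized_width(int h, int w, int rec_h, int max_w) {
  int out_w = (int)((double)w * rec_h / h + 0.5); /* round to nearest */
  return out_w > max_w ? max_w : out_w;
}
```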

### 6.2 Disabling the text orientation classifier in PP-OCRv2/v3 pipelines

In PP-OCRv3/v2 the text orientation classifier is optional; choose whether to use it as follows.
```cpp
// With the Cls model
auto ppocr_v3 = fastdeploy::pipeline::PPOCRv3(&det_model, &cls_model, &rec_model);

// Without the Cls model
auto ppocr_v3 = fastdeploy::pipeline::PPOCRv3(&det_model, &rec_model);

// When the Cls model is not used, delete or comment out the code related to it
```

### 6.3 Changing pre-/post-processing hyperparameters
The example code shows the interfaces for changing pre-/post-processing hyperparameters, set to their default values. The hyperparameters FastDeploy exposes have the same meaning as those documented in [PaddleOCR inference model arguments](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/inference_args.md). For further customization, read the [PP-OCR series C++ API reference](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1ocr.html).

```cpp
// Set the detection model's max_side_len
det_model.GetPreprocessor().SetMaxSideLen(960);
// Others...
```

### 6.4 Other guides
- [Overview of deploying PaddleOCR models with FastDeploy](../../)
- [PP-OCRv3 Python deployment](../python)
- [PP-OCRv3 C deployment](../c)
- [PP-OCRv3 C# deployment](../csharp)

## 7. FAQ
- PaddleOCR can run on the multiple inference backends supported by FastDeploy, as shown in the table below; for how to switch backends, see [How to switch the model inference backend](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/how_to_change_backend.md)

|Hardware|Supported backends|
|:---:|:---:|
|X86 CPU| Paddle Inference, ONNX Runtime, OpenVINO |
|ARM CPU| Paddle Lite |
|Phytium CPU| ONNX Runtime |
|NVIDIA GPU| Paddle Inference, ONNX Runtime, TensorRT |

- [Using Intel GPUs (discrete/integrated graphics)](https://github.com/PaddlePaddle/FastDeploy/blob/develop/tutorials/intel_gpu/README.md)
- [Building the CPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/cpu.md)
- [Building the GPU deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/gpu.md)
- [Building the Jetson deployment library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/jetson.md)