mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-10-06 17:17:14 +08:00
[Docs] Improve docs related to Ascend inference (#1227)
* Add Readme for vision results
* Add comments to create API docs
* Improve OCR comments
* Fix OCR Readme
* Fix PPOCR readme
* Fix merge conflicts
* Improve ascend readme
@@ -118,5 +118,13 @@ FastDeploy has now integrated FlyCV, which users can enable on supported hardware platforms

## 6. Ascend Deployment Demo Reference

| Model Series | C++ Deployment Example | Python Deployment Example |
| :----------- | :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README_CN.md) |
| PaddleDetection | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/paddledetection/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/paddledetection/python/README_CN.md) |
| PaddleSeg | [Ascend NPU C++ Deployment Example](../../../examples/vision/segmentation/paddleseg/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/segmentation/paddleseg/python/README_CN.md) |
| PaddleOCR | [Ascend NPU C++ Deployment Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/ocr/PP-OCRv3/python/README_CN.md) |
| YOLOv5 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov5/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov5/python/README_CN.md) |
| YOLOv6 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov6/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov6/python/README_CN.md) |
| YOLOv7 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov7/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov7/python/README_CN.md) |
@@ -117,6 +117,12 @@ In end-to-end model inference, the pre-processing and post-processing phases are

## Deployment demo reference

| Model | C++ Example | Python Example |
| :----------- | :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Example](../../../examples/vision/classification/paddleclas/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/classification/paddleclas/python/README.md) |
| PaddleDetection | [Ascend NPU C++ Example](../../../examples/vision/detection/paddledetection/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/paddledetection/python/README.md) |
| PaddleSeg | [Ascend NPU C++ Example](../../../examples/vision/segmentation/paddleseg/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/segmentation/paddleseg/python/README.md) |
| PaddleOCR | [Ascend NPU C++ Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/ocr/PP-OCRv3/python/README.md) |
| YOLOv5 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov5/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov5/python/README.md) |
| YOLOv6 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov6/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov6/python/README.md) |
| YOLOv7 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov7/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov7/python/README.md) |
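Every example linked above enables the NPU the same way: build a `RuntimeOption`, call `UseAscend()`, and hand the option to the model constructor (the same call appears in `infer_static_shape.cc` further down this commit). A minimal C++ sketch of the pattern, with placeholder file paths for an exported PaddleClas model:

```cpp
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseAscend();  // run inference on the Huawei Ascend NPU

  // Placeholder paths; substitute any exported PaddleClas model
  auto model = fastdeploy::vision::classification::PaddleClasModel(
      "model/inference.pdmodel", "model/inference.pdiparams",
      "model/inference_cls.yaml", option);
  if (!model.Initialized()) return -1;

  auto im = cv::imread("test.jpg");
  fastdeploy::vision::ClassifyResult res;
  if (!model.Predict(&im, &res)) return -1;
  std::cout << res.Str() << std::endl;
  return 0;
}
```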
@@ -1,7 +1,7 @@

English | [简体中文](README_CN.md)
# PaddleDetection C++ Deployment Example

This directory provides examples in which `infer_xxx.cc` quickly completes the deployment of PaddleDetection models, including PPYOLOE/PicoDet/YOLOX/YOLOv3/PPYOLO/FasterRCNN/YOLOv5/YOLOv6/YOLOv7/RTMDet, on CPU/GPU, as well as on GPU accelerated by TensorRT.

Before deployment, confirm the following two steps:
@@ -15,13 +15,13 @@ ppyoloe is taken as an example for inference deployment

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the PPYOLOE model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```
@@ -33,12 +33,16 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz

```bash
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
# TensorRT inference on GPU
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
# KunlunXin XPU inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 3
# Huawei Ascend inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 4
```
The above commands work on Linux and macOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## PaddleDetection C++ Interface

### Model Class
@@ -56,7 +60,7 @@ Loading and initializing PaddleDetection PPYOLOE model, where the format of mode

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Configuration file path, i.e. the deployment yaml file exported by PaddleDetection
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
@@ -73,7 +77,7 @@ Loading and initializing PaddleDetection PPYOLOE model, where the format of mode

> **Parameter**
>
> > * **im**: Input image in HWC layout, BGR format
> > * **result**: Detection result, including the detection box and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for the description of DetectionResult
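As a hedged sketch of how these arguments fit together (not code from this commit; `VisDetection` is assumed to be the visualization helper used by the bundled demos):

```cpp
#include <iostream>
#include "fastdeploy/vision.h"

// Sketch only: `model` is an initialized PPYOLOE instance as constructed above
void PredictAndPrint(fastdeploy::vision::detection::PPYOLOE& model) {
  auto im = cv::imread("000000014439.jpg");  // test image downloaded earlier
  fastdeploy::vision::DetectionResult result;
  if (!model.Predict(&im, &result)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << result.Str() << std::endl;  // boxes, scores, label ids
  auto vis_im = fastdeploy::vision::VisDetection(im, result);
  cv::imwrite("vis_result.jpg", vis_im);
}
```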
- [Model Description](../../)
- [Python Deployment](../python)
@@ -9,11 +9,11 @@ Before deployment, two steps require confirmation.

This directory provides examples in which `infer_xxx.py` quickly completes the deployment of PPYOLOE/PicoDet models on CPU/GPU, as well as on GPU accelerated by TensorRT. The script is as follows

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/paddledetection/python/

# Download the PPYOLOE model file and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```
@@ -24,6 +24,10 @@ python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439

```bash
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu
# TensorRT inference on GPU (Note: the first TensorRT run spends some time on model serialization. Please be patient.)
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device ascend
```

The visualized result after running is as follows
@@ -31,7 +35,7 @@ The visualized result after running is as follows

<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg" width=480px height=320px />
</div>

## PaddleDetection Python Interface

```python
fastdeploy.vision.detection.PPYOLOE(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
@@ -52,7 +56,7 @@ PaddleDetection model loading and initialization, among which model_file and par

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference configuration yaml file path
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
@@ -12,12 +12,12 @@ Taking the CPU inference on Linux as an example, the compilation test can be com

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# Download the officially converted yolov5 Paddle model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
tar -xvf yolov5s_infer.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
@@ -31,11 +31,13 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000

```bash
./infer_paddle_demo yolov5s_infer 000000014439.jpg 2
# KunlunXin XPU inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 3
# Huawei Ascend inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 4
```

The above steps apply to the inference of Paddle models. To run inference with ONNX models instead, follow these steps:
```bash
# 1. Download the officially converted yolov5 ONNX model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
@@ -53,7 +55,7 @@ The visualized result after running is as follows

The above commands work on Linux and macOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## YOLOv5 C++ Interface

### YOLOv5 Class
@@ -69,7 +71,7 @@ YOLOv5 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
> * **model_format**(ModelFormat): Model format. ONNX format by default
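Because `model_format` defaults to ONNX, the Paddle model downloaded above must be loaded with the format passed explicitly. A sketch of both constructions (inner file names are assumed from the usual export layout):

```cpp
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseAscend();

  // ONNX model: params_file stays empty, model_format defaults to ONNX
  auto onnx_model = fastdeploy::vision::detection::YOLOv5(
      "yolov5s.onnx", "", option);

  // Paddle model: pass both files and select the Paddle format explicitly
  // (inner file names assumed from the usual export layout)
  auto paddle_model = fastdeploy::vision::detection::YOLOv5(
      "yolov5s_infer/model.pdmodel", "yolov5s_infer/model.pdiparams",
      option, fastdeploy::ModelFormat::PADDLE);
  return 0;
}
```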
@@ -22,17 +22,19 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000

```bash
python infer.py --model yolov5s_infer --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device ascend
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">

## YOLOv5 Python Interface

```python
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
@@ -42,7 +44,7 @@ YOLOv5 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
> * **model_format**(ModelFormat): Model format. ONNX format by default
@@ -23,6 +23,9 @@ python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --d

```bash
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device gpu
# KunlunXin XPU inference
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device ascend
```

If you want to verify the inference of ONNX models, refer to the following command:
@@ -34,7 +37,7 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000

```bash
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov6s.onnx --image 000000014439.jpg --device gpu --use_trt True
```
@@ -42,7 +45,7 @@ The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/67993288/184301725-390e4abb-db2b-482d-931d-469381322626.jpg">

## YOLOv6 Python Interface

```python
fastdeploy.vision.detection.YOLOv6(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
@@ -52,7 +55,7 @@ YOLOv6 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
> * **model_format**(ModelFormat): Model format. ONNX format by default
@@ -1,7 +1,7 @@

English | [简体中文](README_CN.md)
# YOLOv7 C++ Deployment Example

This directory provides examples in which `infer.cc` quickly completes the deployment of YOLOv7 on CPU/GPU, as well as on GPU accelerated by TensorRT.

Before deployment, confirm the following two steps:
@@ -13,7 +13,7 @@ Taking the CPU inference on Linux as an example, the compilation test can be com

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
```
@@ -29,10 +29,12 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000

```bash
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 1
# KunlunXin XPU inference
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 2
# Huawei Ascend inference
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 3
```

If you want to verify the inference of ONNX models, refer to the following command:
```bash
# Download the officially converted yolov7 ONNX model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```
@@ -52,7 +54,7 @@ The visualized result after running is as follows

The above commands work on Linux and macOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)

## YOLOv7 C++ Interface

### YOLOv7 Class
@@ -68,7 +70,7 @@ YOLOv7 model loading and initialization, among which model_file is the exported

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
> * **model_format**(ModelFormat): Model format. ONNX format by default
@@ -86,7 +88,7 @@ YOLOv7 model loading and initialization, among which model_file is the exported

> **Parameter**
>
> > * **im**: Input image in HWC layout, BGR format
> > * **result**: Detection results, including the detection box and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of DetectionResult
> > * **conf_threshold**: Confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS processing
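Both thresholds are trailing arguments of `Predict`, so a call that tightens the confidence filter looks roughly like this (a sketch; `model` and `im` as set up earlier):

```cpp
// Sketch only: `model` is an initialized YOLOv7 instance, `im` a cv::Mat
fastdeploy::vision::DetectionResult res;
// Keep boxes with confidence >= 0.3 and run NMS at IoU 0.5
if (!model.Predict(&im, &res, 0.3f, 0.5f)) {
  std::cerr << "Failed to predict." << std::endl;
}
```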
@@ -1,7 +1,7 @@

English | [简体中文](README_CN.md)
# PPOCRv2 C++ Deployment Example

This directory provides examples in which `infer.cc` quickly completes the deployment of PPOCRv2 on CPU/GPU, as well as on GPU accelerated by TensorRT.

Two steps before deployment:
@@ -13,7 +13,7 @@ Taking the CPU inference on Linux as an example, the compilation test can be com

```
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
```
@@ -54,7 +54,7 @@ The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/109218879/185826024-f7593a0c-1bd2-4a60-b76c-15588484fa08.jpg">

## PPOCRv2 C++ Interface

### PPOCRv2 Class
@@ -98,7 +98,7 @@ The initialization of PPOCRv2, consisting of detection and recognition models (N

> > * **result**: OCR prediction results, including the position of the detection box from the detection model, the direction classification from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of OCRResult
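A hedged sketch of consuming the `OCRResult` after a successful `Predict` (`ppocr_v2` and `im` as in the demo source below; `VisOcr` is assumed to be the OCR visualization helper):

```cpp
// Sketch only: `ppocr_v2` and `im` as in infer_static_shape.cc below
fastdeploy::vision::OCRResult result;
if (ppocr_v2.Predict(im, &result)) {
  std::cout << result.Str() << std::endl;  // boxes, cls labels, texts, scores
  auto vis_im = fastdeploy::vision::VisOcr(im, result);  // assumed helper
  cv::imwrite("vis_result.jpg", vis_im);
}
```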
## DBDetector C++ Interface

### DBDetector Class
@@ -112,7 +112,7 @@ DBDetector model loading and initialization. The model is in paddle format.

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
> * **model_format**(ModelFormat): Model format. Paddle format by default
@@ -139,7 +139,7 @@ Users can modify the following pre-processing parameters to their needs, which a

> > * **max_side_len**(int): Maximum size of the long side of the input image before detection. The long side is resized to this size when it exceeds the value, and the short side is scaled in equal proportion. Default 960
> > * **det_db_thresh**(double): Binarization threshold for the prediction map produced by DB models. Default 0.3
> > * **det_db_box_thresh**(double): Threshold for the output boxes of DB models, below which a predicted box is discarded. Default 0.6
> > * **det_db_unclip_ratio**(double): Expansion ratio of the DB model output box. Default 1.5
> > * **det_db_score_mode**(string): How the average score of a text box is computed in DB post-processing. Default slow, which computes the average score over the polygon area
> > * **use_dilation**(bool): Whether to dilate the feature map from the detection. Default False
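This commit only confirms the `GetPreprocessor().SetStaticShapeInfer(...)` accessor; assuming the parameters above follow the same setter pattern (the names below are an assumption, not taken from this diff), tuning would look roughly like:

```cpp
// Assumed setter names, mirroring the SetStaticShapeInfer accessor pattern;
// verify against the installed FastDeploy headers before relying on them.
det_model.GetPreprocessor().SetMaxSideLen(960);
det_model.GetPostprocessor().SetDetDBThresh(0.3);
det_model.GetPostprocessor().SetDetDBBoxThresh(0.6);
det_model.GetPostprocessor().SetDetDBUnclipRatio(1.5);
```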
examples/vision/ocr/PP-OCRv2/cpp/infer_static_shape.cc (41 lines changed, Executable file → Normal file)
@@ -19,7 +19,12 @@ const char sep = '\\';
const char sep = '/';
#endif

void InitAndInfer(const std::string& det_model_dir,
                  const std::string& cls_model_dir,
                  const std::string& rec_model_dir,
                  const std::string& rec_label_file,
                  const std::string& image_file,
                  const fastdeploy::RuntimeOption& option) {
  auto det_model_file = det_model_dir + sep + "inference.pdmodel";
  auto det_params_file = det_model_dir + sep + "inference.pdiparams";
@@ -33,33 +38,40 @@ void InitAndInfer(const std::string& det_model_dir, const std::string& cls_model
  auto cls_option = option;
  auto rec_option = option;

  auto det_model = fastdeploy::vision::ocr::DBDetector(
      det_model_file, det_params_file, det_option);
  auto cls_model = fastdeploy::vision::ocr::Classifier(
      cls_model_file, cls_params_file, cls_option);
  auto rec_model = fastdeploy::vision::ocr::Recognizer(
      rec_model_file, rec_params_file, rec_label_file, rec_option);

  // Users could enable static shape infer for the rec model when deploying
  // PP-OCR on hardware which cannot support dynamic shape infer well, like
  // the Huawei Ascend series.
  rec_model.GetPreprocessor().SetStaticShapeInfer(true);

  assert(det_model.Initialized());
  assert(cls_model.Initialized());
  assert(rec_model.Initialized());

  // The classification model is optional, so PP-OCR can also be connected in
  // series as follows:
  // auto ppocr_v2 = fastdeploy::pipeline::PPOCRv2(&det_model, &rec_model);
  auto ppocr_v2 =
      fastdeploy::pipeline::PPOCRv2(&det_model, &cls_model, &rec_model);

  // When users enable static shape infer for the rec model, the batch size of
  // the cls and rec models must be set to 1.
  ppocr_v2.SetClsBatchSize(1);
  ppocr_v2.SetRecBatchSize(1);

  if (!ppocr_v2.Initialized()) {
    std::cerr << "Failed to initialize PP-OCR." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::OCRResult result;
  if (!ppocr_v2.Predict(im, &result)) {
    std::cerr << "Failed to predict." << std::endl;
@@ -92,7 +104,7 @@ int main(int argc, char* argv[]) {
  int flag = std::atoi(argv[6]);

  if (flag == 0) {
    option.UseCpu();
  } else if (flag == 1) {
    option.UseAscend();
  }
@@ -102,6 +114,7 @@ int main(int argc, char* argv[]) {
  std::string rec_model_dir = argv[3];
  std::string rec_label_file = argv[4];
  std::string test_image = argv[5];
  InitAndInfer(det_model_dir, cls_model_dir, rec_model_dir, rec_label_file,
               test_image, option);
  return 0;
}
@@ -1,7 +1,7 @@

English | [简体中文](README_CN.md)
# PPOCRv3 C++ Deployment Example

This directory provides examples in which `infer.cc` quickly completes the deployment of PPOCRv3 on CPU/GPU, as well as on GPU accelerated by TensorRT.

Two steps before deployment:
@@ -13,7 +13,7 @@ Taking the CPU inference on Linux as an example, the compilation test can be com

```
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
```
@@ -44,6 +44,8 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_

```
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
# KunlunXin XPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
# Huawei Ascend inference requires infer_static_shape_demo. To predict images continuously, prepare the input images at a uniform size.
./infer_static_shape_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```
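The static-shape demo differs from the regular one mainly in the calls sketched below, mirroring the PP-OCRv2 `infer_static_shape.cc` changes later in this commit (`rec_model` and `ppocr_v3` are built as in the regular demo):

```cpp
// Sketch: rec_model / ppocr_v3 constructed as in the regular demo.
// Ascend handles dynamic input shapes poorly, so fix the rec model's shape:
rec_model.GetPreprocessor().SetStaticShapeInfer(true);
// With static shape infer enabled, cls/rec batch sizes must be set to 1:
ppocr_v3.SetClsBatchSize(1);
ppocr_v3.SetRecBatchSize(1);
```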
The above commands work on Linux and macOS. For SDK usage on Windows, refer to:
@@ -1,7 +1,7 @@

English | [简体中文](README_CN.md)
# PaddleSeg C++ Deployment Example

This directory provides examples in which `infer.cc` quickly completes the deployment of Unet on CPU/GPU, as well as on GPU accelerated by TensorRT.

Before deployment, confirm the following two steps:
@@ -15,7 +15,7 @@ Taking the inference on Linux as an example, the compilation test can be complet

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
```
@@ -35,6 +35,8 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

```bash
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
# KunlunXin XPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
# Huawei Ascend inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 4
```

The visualized result after running is as follows
@@ -45,7 +47,7 @@ The visualized result after running is as follows

The above commands work on Linux and macOS. For SDK usage on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## PaddleSeg C++ Interface

### PaddleSeg Class
@@ -62,7 +64,7 @@ PaddleSegModel model loading and initialization, among which model_file is the e

**Parameter**

> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path
> * **config_file**(str): Inference deployment configuration file
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default (uses the default configuration)
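Putting these parameters together, a hedged C++ sketch of Ascend deployment for the Unet model used above (inner file names are assumed from the archive layout):

```cpp
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseAscend();

  // Inner file names assumed from the archive layout
  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      "Unet_cityscapes_without_argmax_infer/model.pdmodel",
      "Unet_cityscapes_without_argmax_infer/model.pdiparams",
      "Unet_cityscapes_without_argmax_infer/deploy.yaml", option);
  if (!model.Initialized()) return -1;

  auto im = cv::imread("cityscapes_demo.png");
  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(&im, &res)) return -1;
  std::cout << res.Str() << std::endl;
  return 0;
}
```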