[Docs] Improve docs related to Ascend inference (#1227)

* Add Readme for vision results

* Add comments to create API docs

* Improve OCR comments

* fix conflict

* Fix OCR Readme

* Fix PPOCR readme

* Fix PPOCR readme

* fix conflict

* Improve ascend readme

* Improve ascend readme

* Improve ascend readme

* Improve ascend readme
yunyaoXYY
2023-02-04 17:03:03 +08:00
committed by GitHub
parent 522e96bce8
commit 870551f3f5
12 changed files with 106 additions and 58 deletions


@@ -118,5 +118,13 @@ FastDeploy has now integrated FlyCV; users can use it on supported hardware platforms
## 6. Ascend Deployment Demo Reference
- To deploy the PaddleClas classification model on Huawei Ascend NPU with C++, refer to: [PaddleClas Huawei Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README.md)
- To deploy the PaddleClas classification model on Huawei Ascend NPU with Python, refer to: [PaddleClas Huawei Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README.md)
| Model Series | C++ Deployment Example | Python Deployment Example |
| :-----------| :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README_CN.md) |
| PaddleDetection | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/paddledetection/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/paddledetection/python/README_CN.md) |
| PaddleSeg | [Ascend NPU C++ Deployment Example](../../../examples/vision/segmentation/paddleseg/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/segmentation/paddleseg/python/README_CN.md) |
| PaddleOCR | [Ascend NPU C++ Deployment Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/ocr/PP-OCRv3/python/README_CN.md) |
| YOLOv5 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov5/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov5/python/README_CN.md) |
| YOLOv6 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov6/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov6/python/README_CN.md) |
| YOLOv7 | [Ascend NPU C++ Deployment Example](../../../examples/vision/detection/yolov7/cpp/README_CN.md) | [Ascend NPU Python Deployment Example](../../../examples/vision/detection/yolov7/python/README_CN.md) |


@@ -117,6 +117,12 @@ In end-to-end model inference, the pre-processing and post-processing phases are
## Deployment demo reference
- To deploy the PaddleClas classification model on Huawei Ascend NPU using C++, refer to: [PaddleClas Huawei Ascend NPU C++ Deployment Example](../../../examples/vision/classification/paddleclas/cpp/README.md)
- To deploy the PaddleClas classification model on Huawei Ascend NPU using Python, refer to: [PaddleClas Huawei Ascend NPU Python Deployment Example](../../../examples/vision/classification/paddleclas/python/README.md)
| Model | C++ Example | Python Example |
| :-----------| :-------- | :--------------- |
| PaddleClas | [Ascend NPU C++ Example](../../../examples/vision/classification/paddleclas/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/classification/paddleclas/python/README.md) |
| PaddleDetection | [Ascend NPU C++ Example](../../../examples/vision/detection/paddledetection/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/paddledetection/python/README.md) |
| PaddleSeg | [Ascend NPU C++ Example](../../../examples/vision/segmentation/paddleseg/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/segmentation/paddleseg/python/README.md) |
| PaddleOCR | [Ascend NPU C++ Example](../../../examples/vision/ocr/PP-OCRv3/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/ocr/PP-OCRv3/python/README.md) |
| YOLOv5 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov5/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov5/python/README.md) |
| YOLOv6 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov6/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov6/python/README.md) |
| YOLOv7 | [Ascend NPU C++ Example](../../../examples/vision/detection/yolov7/cpp/README.md) | [Ascend NPU Python Example](../../../examples/vision/detection/yolov7/python/README.md) |
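
All of the linked Python examples follow the same pattern: a runtime device is selected on a `RuntimeOption` object before the model is created. Below is a minimal, hypothetical sketch of that dispatch step; the method names (`use_cpu`, `use_gpu`, `use_trt_backend`, `use_kunlunxin`, `use_ascend`) are assumptions modeled on the `--device` flags used in the example commands, not verified FastDeploy API.

```python
def apply_device(option, device, use_trt=False):
    """Configure a RuntimeOption-like object for the requested device.

    Hypothetical sketch: the option method names mirror the --device
    flags in the example commands and are assumptions, not verified API.
    """
    if device == "cpu":
        option.use_cpu()
    elif device == "gpu":
        option.use_gpu()
        if use_trt:
            # TensorRT runs on top of the GPU backend.
            option.use_trt_backend()
    elif device == "kunlunxin":
        option.use_kunlunxin()
    elif device == "ascend":
        option.use_ascend()
    else:
        raise ValueError(f"unsupported device: {device}")
    return option
```

The demos then pass the configured option into the model constructor, so the same script covers every backend in the table above.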


@@ -33,6 +33,10 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 1
# TensorRT Inference on GPU
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 2
# Kunlunxin XPU Inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 3
# Huawei Ascend Inference
./infer_ppyoloe_demo ./ppyoloe_crn_l_300e_coco 000000014439.jpg 4
```
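
The trailing integer in the commands above selects the runtime. A small sketch of that mapping follows; flag 0 (plain CPU inference) is an assumption based on the usual demo layout, since the hunk shown here starts at flag 1.

```python
# Mapping of the demo's trailing integer flag to the runtime it selects,
# per the comments in the commands above. Flag 0 (CPU) is an assumption.
RUN_MODES = {
    0: "CPU",
    1: "GPU",
    2: "GPU + TensorRT",
    3: "KunlunXin XPU",
    4: "Huawei Ascend",
}

def describe_run_mode(flag):
    """Return a human-readable name for a demo run-mode flag."""
    try:
        return RUN_MODES[flag]
    except KeyError:
        raise ValueError(f"unknown run mode: {flag}") from None
```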
The above commands work on Linux and macOS. For the SDK usage pattern on Windows, refer to:


@@ -24,6 +24,10 @@ python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu
# TensorRT inference on GPU. Note: the first TensorRT run spends extra time on model serialization; please be patient.
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device gpu --use_trt True
# Kunlunxin XPU Inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device kunlunxin
# Huawei Ascend Inference
python infer_ppyoloe.py --model_dir ppyoloe_crn_l_300e_coco --image 000000014439.jpg --device ascend
```
The visualized result after running is as follows


@@ -31,6 +31,8 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
./infer_paddle_demo yolov5s_infer 000000014439.jpg 2
# KunlunXin XPU inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 3
# Huawei Ascend Inference
./infer_paddle_demo yolov5s_infer 000000014439.jpg 4
```
The above steps apply to inference with Paddle models. To run inference with ONNX models, follow these steps:


@@ -26,6 +26,8 @@ python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend Inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device ascend
```
The visualized result after running is as follows


@@ -23,6 +23,9 @@ python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --d
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device gpu
# KunlunXin XPU inference
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend Inference
python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device ascend
```
If you want to verify the inference of ONNX models, refer to the following command:
```bash


@@ -29,6 +29,8 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 1
# KunlunXin XPU inference
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 2
# Huawei Ascend inference
./infer_paddle_model_demo yolov7_infer 000000014439.jpg 3
```
If you want to verify the inference of ONNX models, refer to the following command:
```bash

examples/vision/ocr/PP-OCRv2/cpp/infer_static_shape.cc Executable file → Normal file

@@ -19,7 +19,12 @@ const char sep = '\\';
const char sep = '/';
#endif
void InitAndInfer(const std::string& det_model_dir, const std::string& cls_model_dir, const std::string& rec_model_dir, const std::string& rec_label_file, const std::string& image_file, const fastdeploy::RuntimeOption& option) {
void InitAndInfer(const std::string& det_model_dir,
const std::string& cls_model_dir,
const std::string& rec_model_dir,
const std::string& rec_label_file,
const std::string& image_file,
const fastdeploy::RuntimeOption& option) {
auto det_model_file = det_model_dir + sep + "inference.pdmodel";
auto det_params_file = det_model_dir + sep + "inference.pdiparams";
@@ -33,11 +38,15 @@ void InitAndInfer(const std::string& det_model_dir, const std::string& cls_model
auto cls_option = option;
auto rec_option = option;
auto det_model = fastdeploy::vision::ocr::DBDetector(det_model_file, det_params_file, det_option);
auto cls_model = fastdeploy::vision::ocr::Classifier(cls_model_file, cls_params_file, cls_option);
auto rec_model = fastdeploy::vision::ocr::Recognizer(rec_model_file, rec_params_file, rec_label_file, rec_option);
auto det_model = fastdeploy::vision::ocr::DBDetector(
det_model_file, det_params_file, det_option);
auto cls_model = fastdeploy::vision::ocr::Classifier(
cls_model_file, cls_params_file, cls_option);
auto rec_model = fastdeploy::vision::ocr::Recognizer(
rec_model_file, rec_params_file, rec_label_file, rec_option);
// Users could enable static shape infer for rec model when deploying PP-OCR on hardware
// Users could enable static shape infer for rec model when deploying PP-OCR
// on hardware
// which cannot support dynamic shape infer well, like the Huawei Ascend series.
rec_model.GetPreprocessor().SetStaticShapeInfer(true);
@@ -45,15 +54,18 @@ void InitAndInfer(const std::string& det_model_dir, const std::string& cls_model
assert(cls_model.Initialized());
assert(rec_model.Initialized());
// The classification model is optional, so the PP-OCR can also be connected in series as follows
// The classification model is optional, so the PP-OCR can also be connected
// in series as follows
// auto ppocr_v2 = fastdeploy::pipeline::PPOCRv2(&det_model, &rec_model);
auto ppocr_v2 = fastdeploy::pipeline::PPOCRv2(&det_model, &cls_model, &rec_model);
auto ppocr_v2 =
fastdeploy::pipeline::PPOCRv2(&det_model, &cls_model, &rec_model);
// When users enable static shape infer for rec model, the batch size of cls and rec model must be set to 1.
// When users enable static shape infer for rec model, the batch size of cls
// and rec model must be set to 1.
ppocr_v2.SetClsBatchSize(1);
ppocr_v2.SetRecBatchSize(1);
if(!ppocr_v2.Initialized()){
if (!ppocr_v2.Initialized()) {
std::cerr << "Failed to initialize PP-OCR." << std::endl;
return;
}
@@ -102,6 +114,7 @@ int main(int argc, char* argv[]) {
std::string rec_model_dir = argv[3];
std::string rec_label_file = argv[4];
std::string test_image = argv[5];
InitAndInfer(det_model_dir, cls_model_dir, rec_model_dir, rec_label_file, test_image, option);
InitAndInfer(det_model_dir, cls_model_dir, rec_model_dir, rec_label_file,
test_image, option);
return 0;
}
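
The two constraints the comments above describe (static shape infer for the rec model on hardware like Huawei Ascend, and cls/rec batch sizes of 1 whenever it is enabled) can be captured in a small validation sketch. The function name is hypothetical, for illustration only:

```python
def validate_ppocr_config(static_shape_infer, cls_batch_size, rec_batch_size):
    """Check the PP-OCR constraint described in the comments above.

    When static shape infer is enabled for the rec model (as required on
    hardware that handles dynamic shapes poorly, e.g. Huawei Ascend),
    the cls and rec batch sizes must both be 1.
    """
    if static_shape_infer and (cls_batch_size != 1 or rec_batch_size != 1):
        raise ValueError(
            "with static shape infer enabled, cls and rec batch size must be 1")
    return True
```

This mirrors the `SetClsBatchSize(1)` / `SetRecBatchSize(1)` calls in the C++ diff above.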


@@ -44,6 +44,8 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
# KunlunXin XPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
# Huawei Ascend inference: use infer_static_shape_demo. To predict images continuously, first resize the input images to a uniform size.
./infer_static_shape_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```
The above commands work on Linux and macOS. For the SDK on Windows, refer to:


@@ -35,6 +35,8 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
# KunlunXin XPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
# Huawei Ascend Inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 4
```
The visualized result after running is as follows