[Docs] Improve docs related to Ascend inference (#1227)

* Add Readme for vision results

* Add comments to create API docs

* Improve OCR comments

* fix conflict

* Fix OCR Readme

* Fix PPOCR readme

* fix conflict

* Improve ascend readme

yunyaoXYY
2023-02-04 17:03:03 +08:00
committed by GitHub
parent 522e96bce8
commit 870551f3f5
12 changed files with 106 additions and 58 deletions


@@ -22,17 +22,19 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
python infer.py --model yolov5s_infer --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov5s_infer --image 000000014439.jpg --device gpu --use_trt True
# KunlunXin XPU inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend Inference
python infer.py --model yolov5s_infer --image 000000014439.jpg --device ascend
```
The visualized result after running is as follows:
<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">
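Each `--device` flag above corresponds to a backend selection on FastDeploy's `RuntimeOption`. Below is a minimal sketch of the dispatch a script like `infer.py` might perform; the `use_*` option methods are assumed from FastDeploy's Python API, and the file paths are illustrative, not taken from this example.

```python
import cv2
import fastdeploy as fd

def build_option(device, use_trt=False):
    # Map the --device flag to a FastDeploy RuntimeOption
    option = fd.RuntimeOption()
    if device == "gpu":
        option.use_gpu()
        if use_trt:
            option.use_trt_backend()
    elif device == "kunlunxin":
        option.use_kunlunxin()
    elif device == "ascend":
        option.use_ascend()
    else:
        option.use_cpu()
    return option

# Illustrative paths; adjust to the actual exported model files
model = fd.vision.detection.YOLOv5(
    "yolov5s_infer/yolov5s.onnx",
    runtime_option=build_option("ascend"))
im = cv2.imread("000000014439.jpg")
result = model.predict(im)
vis = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis)
```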
## YOLOv5 Python Interface
```python
fastdeploy.vision.detection.YOLOv5(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
@@ -42,7 +44,7 @@ YOLOv5 model loading and initialization, among which model_file is the exported
**Parameters**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, in which case the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
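
As a hedged usage sketch of this interface targeting Ascend (the paths and the Paddle-format variant are illustrative assumptions, not prescribed by this example):

```python
import fastdeploy as fd
from fastdeploy import ModelFormat, RuntimeOption

option = RuntimeOption()
option.use_ascend()  # run inference on Huawei Ascend

# Illustrative paths; a Paddle-format export needs both the model and
# params files plus an explicit model_format
model = fd.vision.detection.YOLOv5(
    "yolov5s_infer/model.pdmodel",
    params_file="yolov5s_infer/model.pdiparams",
    runtime_option=option,
    model_format=ModelFormat.PADDLE)
```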