[Other] Change all XPU to KunlunXin (#973)

* [FlyCV] Bump up FlyCV -> official release 1.0.0

* XPU to KunlunXin

* update

* update model link

* update doc

* update device

* update code

* remove useless code

Co-authored-by: DefTruth <qiustudent_r@163.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
This commit is contained in:
yeliang2258
2022-12-27 10:02:02 +08:00
committed by GitHub
parent 6078bd9657
commit 45865c8724
111 changed files with 369 additions and 368 deletions

@@ -12,7 +12,7 @@
 git clone https://github.com/PaddlePaddle/FastDeploy.git
 cd examples/vision/detection/yolov6/python/
-https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_infer.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_infer.tar
 tar -xf yolov6s_infer.tar
 wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
@@ -20,8 +20,8 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
 python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device cpu
 # GPU inference
 python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device gpu
-# XPU inference
-python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device xpu
+# KunlunXin XPU inference
+python infer_paddle_model.py --model yolov6s_infer --image 000000014439.jpg --device kunlunxin
 ```
 To verify inference with the ONNX model, refer to the following commands:
 ```bash

examples/vision/detection/yolov6/python/infer.py Normal file → Executable file
@@ -16,7 +16,7 @@ def parse_arguments():
         "--device",
         type=str,
         default='cpu',
-        help="Type of inference device, support 'cpu', 'xpu' or 'gpu'.")
+        help="Type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.")
     return parser.parse_args()
@@ -25,8 +25,8 @@ def build_option(args):
     if args.device.lower() == "gpu":
         option.use_gpu(0)
-    if args.device.lower() == "kunlunxin":
-        option.use_xpu()
+    if args.device.lower() == "kunlunxin":
+        option.use_kunlunxin()
     return option