[Other] Change all XPU to KunlunXin (#973)

* [FlyCV] Bump up FlyCV -> official release 1.0.0

* XPU to KunlunXin

* update

* update model link

* update doc

* update device

* update code

* useless code

Co-authored-by: DefTruth <qiustudent_r@163.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Authored by yeliang2258, committed via GitHub on 2022-12-27 10:02:02 +08:00. Parent 6078bd9657, commit 45865c8724. 111 changed files with 369 additions and 368 deletions.


@@ -25,8 +25,8 @@ python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --imag
 python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device gpu
 # Run TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
 python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device gpu --use_trt True
-# XPU inference
-python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device xpu
+# KunlunXin XPU inference
+python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device kunlunxin
 ```
 After the run completes, the visualized result is shown in the image below


@@ -17,7 +17,7 @@ def parse_arguments():
         "--device",
         type=str,
         default='cpu',
-        help="type of inference device, support 'cpu', 'xpu' or 'gpu'.")
+        help="type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.")
     parser.add_argument(
         "--use_trt",
         type=ast.literal_eval,
@@ -32,8 +32,8 @@ def build_tinypose_option(args):
     if args.device.lower() == "gpu":
         option.use_gpu()
-    if args.device.lower() == "xpu":
-        option.use_xpu()
+    if args.device.lower() == "kunlunxin":
+        option.use_kunlunxin()
     if args.use_trt:
         option.use_trt_backend()
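The rename pattern in the hunks above — a new device string on the CLI plus a matching `use_kunlunxin()` call on the runtime option — can be sketched as a self-contained example. Note this uses a stand-in `RuntimeOption` class for illustration, not the real `fastdeploy.RuntimeOption`; only the `--device` argument and the `use_*` method names mirror the diffed code:

```python
import argparse


class RuntimeOption:
    """Illustrative stand-in for fastdeploy.RuntimeOption: records the device."""

    def __init__(self):
        self.device = "cpu"

    def use_gpu(self):
        self.device = "gpu"

    def use_kunlunxin(self):
        # This commit renames use_xpu() to use_kunlunxin().
        self.device = "kunlunxin"


def parse_arguments(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--device",
        type=str,
        default="cpu",
        help="type of inference device, support 'cpu', 'kunlunxin' or 'gpu'.")
    return parser.parse_args(argv)


def build_option(args):
    option = RuntimeOption()
    if args.device.lower() == "gpu":
        option.use_gpu()
    if args.device.lower() == "kunlunxin":
        option.use_kunlunxin()
    return option
```

Running with `--device kunlunxin` now selects the KunlunXin backend, while the old `--device xpu` string is simply not matched and falls through to the CPU default.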