[Backend] Add Huawei Ascend NPU deployment using PaddleLite CANN. (#757)

* Add Huawei Ascend NPU deployment through PaddleLite CANN

* Add NNAdapter interface for paddlelite

* Modify Huawei Ascend CMake

* Update the way of compiling the Huawei Ascend NPU deployment

* remove UseLiteBackend in UseCANN

* Support compiling the Python wheel

* Change names of nnadapter API

* Add nnadapter pybind and remove unused APIs

* Support Python deployment on Huawei Ascend NPU

* Add model support for Ascend

* Add PPOCR rec resize for Ascend

* fix conflict for ascend

* Rename CANN to Ascend

* Rename CANN to Ascend

* Improve ascend

* fix ascend bug

* improve ascend docs

* improve ascend docs

* improve ascend docs

* Improve Ascend

* Improve Ascend

* Move ascend python demo

* Improve ascend

* Improve ascend

* Improve ascend

* Improve ascend

* Improve ascend

* Improve ascend

* Improve ascend

* Improve ascend
Author: yunyaoXYY
Date: 2022-12-26 10:18:34 +08:00
Committed by: GitHub
Parent: 2d3d941372
Commit: d45382e3cc
42 changed files with 714 additions and 29 deletions


@@ -34,11 +34,16 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 3
# KunlunXin XPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 4
# Huawei Ascend NPU inference
./infer_demo ResNet50_vd_infer ILSVRC2012_val_00000010.jpeg 5
```
The above commands only work on Linux or macOS. For how to use the SDK on Windows, please refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

If you deploy on a Huawei Ascend NPU, please refer to the following document to initialize the deployment environment before deployment:
- [How to deploy on Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)

## PaddleClas C++ Interface

### PaddleClas Class


@@ -148,6 +148,31 @@ void TrtInfer(const std::string& model_dir, const std::string& image_file) {
  std::cout << res.Str() << std::endl;
}

void AscendInfer(const std::string& model_dir, const std::string& image_file) {
  auto model_file = model_dir + sep + "inference.pdmodel";
  auto params_file = model_dir + sep + "inference.pdiparams";
  auto config_file = model_dir + sep + "inference_cls.yaml";

  auto option = fastdeploy::RuntimeOption();
  option.UseAscend();

  auto model = fastdeploy::vision::classification::PaddleClasModel(
      model_file, params_file, config_file, option);

  assert(model.Initialized());

  auto im = cv::imread(image_file);

  fastdeploy::vision::ClassifyResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  std::cout << res.Str() << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
@@ -169,6 +194,8 @@ int main(int argc, char* argv[]) {
    IpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 4) {
    XpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 5) {
    AscendInfer(argv[1], argv[2]);
  }
  return 0;
}


@@ -27,6 +27,8 @@ python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg -
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ipu --topk 1
# XPU inference
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device xpu --topk 1
# Huawei Ascend NPU inference
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device ascend --topk 1
```
After the run completes, the returned result is as shown below.


@@ -17,7 +17,8 @@ def parse_arguments():
"--device",
type=str,
default='cpu',
help="Type of inference device, support 'cpu' or 'gpu' or 'ipu'.")
help="Type of inference device, support 'cpu' or 'gpu' or 'ipu' or 'xpu' or 'ascend' ."
)
parser.add_argument(
"--use_trt",
type=ast.literal_eval,
@@ -38,6 +39,9 @@ def build_option(args):
    if args.device.lower() == "xpu":
        option.use_xpu()

    if args.device.lower() == "ascend":
        option.use_ascend()

    if args.use_trt:
        option.use_trt_backend()
    return option