[XPU] Support XPU via Paddle Inference backend (#1987)

* [backend] Support XPU via Paddle Inference backend

* [XPU] support XPU benchmark via paddle inference

* [benchmark] add xpu paddle h2d config files
Authored by DefTruth on 2023-05-25 14:13:40 +08:00, committed by GitHub
parent 24f32d10a7
commit 49c033a828
16 changed files with 262 additions and 57 deletions

@@ -60,7 +60,9 @@ DEFINE_int32(device_id, -1,
"Optional, set specific device id for GPU/XPU, default -1."
"will force to override the value in config file "
"eg, 0/1/2/...");
DEFINE_bool(enable_log_info, false,
"Optional, whether to enable log info for paddle backend,"
"default false.");
static void PrintUsage() {
std::cout << "Usage: infer_demo --model model_path --image img_path "