Fix set_paddle_mkldnn python interface (#328)

* fd serving: add dockerfile

* fix enable_paddle_mkldnn

* delete disable_paddle_mkldnn

* fix python set_paddle_mkldnn

Co-authored-by: Jason <jiangjiajun@baidu.com>
Author: heliqi
Date: 2022-10-08 03:49:40 -05:00
Committed by: GitHub
Parent: d57e997fa0
Commit: a3fa5989d2

3 changed files with 4 additions and 4 deletions


@@ -66,7 +66,7 @@ use_openvino_backend()
 Inference with OpenVINO backend (CPU supported, Paddle/ONNX model format supported)
 ```
-set_paddle_mkldnn()
+set_paddle_mkldnn(pd_mkldnn=True)
 ```
 When using the Paddle Inference backend, this switch turns MKLDNN inference acceleration on the CPU on or off (on by default)


@@ -73,7 +73,7 @@ use_openvino_backend()
 Inference with OpenVINO backend (CPU supported, Paddle/ONNX model format supported)
 ```
-set_paddle_mkldnn()
+set_paddle_mkldnn(pd_mkldnn=True)
 ```
 When using the Paddle Inference backend, this parameter determines whether the MKLDNN inference acceleration on the CPU is on or off. It is on by default.
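
A minimal usage sketch of this switch (a sketch only, assuming the fastdeploy Python package; use_paddle_backend() is assumed to exist alongside the use_openvino_backend() shown in the hunk headers):

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_paddle_backend()               # MKLDNN applies only to the Paddle Inference backend
option.set_paddle_mkldnn(pd_mkldnn=True)  # keep MKLDNN CPU acceleration on (the default)
```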


@@ -85,8 +85,8 @@ class RuntimeOption:
     def use_lite_backend(self):
         return self._option.use_lite_backend()
-    def set_paddle_mkldnn(self):
-        return self._option.set_paddle_mkldnn()
+    def set_paddle_mkldnn(self, pd_mkldnn=True):
+        return self._option.set_paddle_mkldnn(pd_mkldnn)
     def enable_paddle_log_info(self):
         return self._option.enable_paddle_log_info()
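
Before this commit, the Python wrapper accepted no argument, so callers could not pass False through to the backend to turn MKLDNN off. A self-contained sketch of the fixed forwarding pattern (using a hypothetical stub in place of the pybind-wrapped C++ option object, which is not reproduced here):

```python
class _COption:
    """Hypothetical stub standing in for the C++-backed runtime option."""
    def set_paddle_mkldnn(self, pd_mkldnn=True):
        self.paddle_mkldnn = pd_mkldnn

class RuntimeOption:
    def __init__(self):
        self._option = _COption()

    # Fixed wrapper: the flag is accepted and forwarded to the binding,
    # matching the underlying signature instead of swallowing the argument.
    def set_paddle_mkldnn(self, pd_mkldnn=True):
        return self._option.set_paddle_mkldnn(pd_mkldnn)

opt = RuntimeOption()
opt.set_paddle_mkldnn(False)               # MKLDNN can now be disabled from Python
assert opt._option.paddle_mkldnn is False
```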