Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2025-10-06 00:57:33 +08:00)
Fd serving add docker images correlation and docs (#311)
* fd serving add dockerfile
* fix enable_paddle_mkldnn
* delete disable_paddle_mkldnn

Co-authored-by: Jason <jiangjiajun@baidu.com>
@@ -73,8 +73,7 @@ use_openvino_backend()
 Inference with OpenVINO backend (CPU supported, Paddle/ONNX model format supported)
 
 ```
-enable_paddle_mkldnn()
-disable_paddle_mkldnn()
+set_paddle_mkldnn()
 ```
 
 When using the Paddle Inference backend, this parameter determines whether the MKLDNN inference acceleration on the CPU is on or off. It is on by default.
@@ -204,8 +203,7 @@ void UseOpenVINOBackend()
 Inference with OpenVINO backend (CPU supported, Paddle/ONNX model format supported)
 
 ```
-void EnablePaddleMKLDNN()
-void DisablePaddleMKLDNN()
+void SetPaddleMKLDNN(bool pd_mkldnn = true)
 ```
 
 When using the Paddle Inference backend, this parameter determines whether the MKLDNN inference acceleration on the CPU is on or off. It is on by default.
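For context on the doc change above, here is a minimal Python sketch of how the consolidated switch could be used. Only set_paddle_mkldnn() comes from the diff itself; the surrounding RuntimeOption calls (use_cpu(), use_paddle_backend()) are assumptions about the rest of the FastDeploy API and may differ by version. The C++ counterpart documented in the second hunk is SetPaddleMKLDNN(bool pd_mkldnn = true).

```
import fastdeploy as fd

# Configure a runtime option for CPU inference with the Paddle Inference backend.
# use_cpu() / use_paddle_backend() are assumed helper names, not confirmed by this diff.
option = fd.RuntimeOption()
option.use_cpu()
option.use_paddle_backend()

# MKLDNN acceleration is on by default; the single set_paddle_mkldnn() switch
# replaces the former enable_paddle_mkldnn()/disable_paddle_mkldnn() pair.
option.set_paddle_mkldnn()
```

Collapsing the enable/disable pair into one setter mirrors the C++ signature, which exposes the flag explicitly, and keeps the Python and C++ docs consistent.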