# How to Change Model Inference Backend
FastDeploy supports various backends, including

- OpenVINO (supports Paddle/ONNX formats, CPU inference only)
- ONNX Runtime (supports Paddle/ONNX formats, inference on CPU/GPU)
- TensorRT (supports Paddle/ONNX formats, GPU inference only)
- Paddle Inference (supports Paddle format, inference on CPU/GPU)

The backend used by any model can be changed via `RuntimeOption`.
**Python**
```python
import fastdeploy as fd
option = fd.RuntimeOption()
# Change CPU/GPU
option.use_cpu()
option.use_gpu()
# Change the Backend
option.use_paddle_backend() # Paddle Inference
option.use_trt_backend() # TensorRT
option.use_openvino_backend() # OpenVINO
option.use_ort_backend() # ONNX Runtime
```
**C++**
```C++
fastdeploy::RuntimeOption option;
// Change CPU/GPU
option.UseCpu();
option.UseGpu();
// Change the Backend
option.UsePaddleBackend(); // Paddle Inference
option.UseTrtBackend(); // TensorRT
option.UseOpenVINOBackend(); // OpenVINO
option.UseOrtBackend(); // ONNX Runtime
```
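Putting the two steps together, a configured `RuntimeOption` is passed to a model at construction time through its `runtime_option` parameter. The sketch below is illustrative, assuming a PP-YOLOE detection model and hypothetical local model file paths; any other vision model from `FastDeploy/examples/vision` follows the same pattern.

```python
import fastdeploy as fd

# Configure the runtime: CPU inference with the ONNX Runtime backend
option = fd.RuntimeOption()
option.use_cpu()
option.use_ort_backend()

# Hypothetical model file paths, shown for illustration only
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)
```

If the chosen backend is unavailable in the current build, FastDeploy falls back to a supported backend for that hardware, so the option acts as a preference rather than a hard requirement.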
For more specific demos, please refer to the Python or C++ inference code for each model under `FastDeploy/examples/vision`.
For more deployment methods, please refer to the FastDeploy API tutorials:
- [Python API]()
- [C++ API]()