[English](../../en/faq/how_to_change_backend.md) | 中文
# How to Change the Model Inference Backend

Vision models in FastDeploy support multiple inference backends, including:

- OpenVINO (supports models in Paddle/ONNX formats; CPU inference only)
- ONNX Runtime (supports models in Paddle/ONNX formats; CPU/GPU inference)
- TensorRT (supports models in Paddle/ONNX formats; GPU inference only)
- Paddle Inference (supports models in Paddle format; CPU/GPU inference)

For all models, the backend is switched through `RuntimeOption`.

**Python**
```python
import fastdeploy as fd
option = fd.RuntimeOption()
# Switch between CPU and GPU
option.use_cpu()
option.use_gpu()
# Switch between backends
option.use_paddle_backend() # Paddle Inference
option.use_trt_backend() # TensorRT
option.use_openvino_backend() # OpenVINO
option.use_ort_backend() # ONNX Runtime
```
**C++**
```C++
fastdeploy::RuntimeOption option;
// Switch between CPU and GPU
option.UseCpu();
option.UseGpu();
// Switch between backends
option.UsePaddleBackend(); // Paddle Inference
option.UseTrtBackend(); // TensorRT
option.UseOpenVINOBackend(); // OpenVINO
option.UseOrtBackend(); // ONNX Runtime
```
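The configured `RuntimeOption` only takes effect once it is passed to a model when the model is created. Below is a minimal Python sketch assuming a PP-YOLOE detection model; the model and image paths are placeholders, and the GPU + TensorRT choice is just one of the combinations listed above.

```python
import cv2
import fastdeploy as fd

# Configure the runtime: run on GPU with the TensorRT backend
# (any device/backend combination from above works the same way)
option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()

# Pass the option to the model at construction time; the paths below are
# placeholders for an exported PP-YOLOE detection model
model = fd.vision.detection.PPYOLOE(
    "ppyoloe_crn_l_300e_coco/model.pdmodel",
    "ppyoloe_crn_l_300e_coco/model.pdiparams",
    "ppyoloe_crn_l_300e_coco/infer_cfg.yml",
    runtime_option=option)

# Run inference on an image and print the detection result
im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)
```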
For concrete examples, refer to the Python or C++ inference code of each model under `FastDeploy/examples/vision`.
For more `RuntimeOption` configuration options, refer to the FastDeploy API documentation:
- [Python API]()
- [C++ API]()