English | [中文](../../cn/faq/how_to_change_backend.md)

# How to Change Model Inference Backend

FastDeploy supports various backends, including:
- OpenVINO (supports Paddle/ONNX formats, CPU inference only)
- ONNX Runtime (supports Paddle/ONNX formats, CPU/GPU inference)
- TensorRT (supports Paddle/ONNX formats, GPU inference only)
- Paddle Inference (supports Paddle format, CPU/GPU inference)

All models can switch their inference backend via `RuntimeOption`.

**Python**
```python
import fastdeploy as fd
option = fd.RuntimeOption()

# Choose CPU or GPU
option.use_cpu()
option.use_gpu()

# Choose the backend
option.use_paddle_backend()   # Paddle Inference
option.use_trt_backend()      # TensorRT
option.use_openvino_backend() # OpenVINO
option.use_ort_backend()      # ONNX Runtime
```

**C++**
```C++
fastdeploy::RuntimeOption option;

// Choose CPU or GPU
option.UseCpu();
option.UseGpu();

// Choose the backend
option.UsePaddleBackend();   // Paddle Inference
option.UseTrtBackend();      // TensorRT
option.UseOpenVINOBackend(); // OpenVINO
option.UseOrtBackend();      // ONNX Runtime
```

For more specific demos, please refer to the Python or C++ inference code for different models under `FastDeploy/examples/vision`.
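
As a quick illustration of how a configured option is applied, the sketch below passes a `RuntimeOption` to a model at construction time. It assumes a PaddleClas classification model (`fd.vision.classification.PaddleClasModel`); the model, config, and image file paths are placeholders.

```python
import cv2
import fastdeploy as fd

# Configure the runtime: GPU + ONNX Runtime backend (any combination above works)
option = fd.RuntimeOption()
option.use_gpu()
option.use_ort_backend()

# Placeholder paths -- replace with a real exported PaddleClas inference model
model = fd.vision.classification.PaddleClasModel(
    "model.pdmodel",
    "model.pdiparams",
    "inference_cls.yaml",
    runtime_option=option)

im = cv2.imread("test.jpg")  # placeholder test image
print(model.predict(im))
```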

For more deployment methods, please refer to the FastDeploy API tutorials.

- [Python API]()
- [C++ API]()