# How to Change Model Inference Backend

FastDeploy supports multiple inference backends:

- OpenVINO (supports Paddle/ONNX model formats, CPU inference only)
- ONNX Runtime (supports Paddle/ONNX model formats, CPU/GPU inference)
- TensorRT (supports Paddle/ONNX model formats, GPU inference only)
- Paddle Inference (supports Paddle model format, CPU/GPU inference)

All models can switch their inference backend through `RuntimeOption`.

**Python**
```python
import fastdeploy as fd
option = fd.RuntimeOption()

# Select the device (use one of the following)
option.use_cpu()
option.use_gpu()

# Select the backend
option.use_paddle_backend()   # Paddle Inference
option.use_trt_backend()      # TensorRT
option.use_openvino_backend() # OpenVINO
option.use_ort_backend()      # ONNX Runtime
```
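
Once configured, the `RuntimeOption` is passed to a model when it is constructed. Below is a minimal sketch assuming a PP-YOLOE detection model exported in Paddle format; the file paths are placeholders:

```python
import cv2
import fastdeploy as fd

# Run on GPU with the TensorRT backend (one illustrative choice).
option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()

# Placeholder paths: point these at a real exported PP-YOLOE model.
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("test.jpg")  # placeholder test image
result = model.predict(im)
print(result)
```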

**C++**
```C++
fastdeploy::RuntimeOption option;

// Select the device (use one of the following)
option.UseCpu();
option.UseGpu();

// Select the backend
option.UsePaddleBackend();   // Paddle Inference
option.UseTrtBackend();      // TensorRT
option.UseOpenVINOBackend(); // OpenVINO
option.UseOrtBackend();      // ONNX Runtime
```

For more complete demos, refer to the Python and C++ inference examples for each model under `FastDeploy/examples/vision`.
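
For a model in ONNX format, the format is specified explicitly at load time. A minimal sketch, assuming a YOLOv5 model exported as `yolov5s.onnx` (a placeholder path), running on CPU with OpenVINO:

```python
import fastdeploy as fd

# CPU inference with the OpenVINO backend.
option = fd.RuntimeOption()
option.use_cpu()
option.use_openvino_backend()

# An ONNX model is loaded with model_format=fd.ModelFormat.ONNX.
model = fd.vision.detection.YOLOv5(
    "yolov5s.onnx",
    runtime_option=option,
    model_format=fd.ModelFormat.ONNX)
```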

For more deployment options, please refer to the FastDeploy API tutorials:

- [Python API]()
- [C++ API]()