	English | 简体中文
# FastDeploy Runtime examples

The FastDeploy Runtime examples are as follows:
## Python Example

| Example Code | Programming Language | Description |
|---|---|---|
| python/infer_paddle_paddle_inference.py | Python | Deploy a Paddle model with Paddle Inference (CPU/GPU) |
| python/infer_paddle_tensorrt.py | Python | Deploy a Paddle model with TensorRT (GPU) |
| python/infer_paddle_openvino.py | Python | Deploy a Paddle model with OpenVINO (CPU) |
| python/infer_paddle_onnxruntime.py | Python | Deploy a Paddle model with ONNX Runtime (CPU/GPU) |
| python/infer_onnx_openvino.py | Python | Deploy an ONNX model with OpenVINO (CPU) |
| python/infer_onnx_tensorrt.py | Python | Deploy an ONNX model with TensorRT (GPU) |
| python/infer_onnx_onnxruntime.py | Python | Deploy an ONNX model with ONNX Runtime (CPU/GPU) |
| python/infer_torchscript_poros.py | Python | Deploy a TorchScript model with Poros Runtime (CPU/GPU) |
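
Regardless of backend, the Python examples share the same pattern: configure a `fd.RuntimeOption` (model path, backend, device), build a `fd.Runtime` from it, and call `infer()` with a dict of named input tensors. Below is a minimal sketch of that pattern; the model paths and input shape are placeholders (assuming a MobileNet-style Paddle classification model), and you should consult the example files above for the exact calls used with each backend:

```python
# Minimal sketch of the common pattern used by the Python examples.
# Model paths and input shape are placeholders for your own model.
import numpy as np
import fastdeploy as fd

option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")
option.use_ort_backend()  # or use_openvino_backend(), use_trt_backend(), ...
option.use_cpu()          # or use_gpu() for GPU-capable backends

runtime = fd.Runtime(option)

# infer() takes a dict keyed by input tensor name and returns the output tensors.
input_name = runtime.get_input_info(0).name
data = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = runtime.infer({input_name: data})
print(outputs[0].shape)
```

Switching backends only changes the `use_*_backend()` and `use_cpu()`/`use_gpu()` calls; the load-and-infer flow stays the same, which is the point of the Runtime abstraction.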
## C++ Example

| Example Code | Programming Language | Description |
|---|---|---|
| cpp/infer_paddle_paddle_inference.cc | C++ | Deploy a Paddle model with Paddle Inference (CPU/GPU) |
| cpp/infer_paddle_tensorrt.cc | C++ | Deploy a Paddle model with TensorRT (GPU) |
| cpp/infer_paddle_openvino.cc | C++ | Deploy a Paddle model with OpenVINO (CPU) |
| cpp/infer_paddle_onnxruntime.cc | C++ | Deploy a Paddle model with ONNX Runtime (CPU/GPU) |
| cpp/infer_onnx_openvino.cc | C++ | Deploy an ONNX model with OpenVINO (CPU) |
| cpp/infer_onnx_tensorrt.cc | C++ | Deploy an ONNX model with TensorRT (GPU) |
| cpp/infer_onnx_onnxruntime.cc | C++ | Deploy an ONNX model with ONNX Runtime (CPU/GPU) |
| cpp/infer_torchscript_poros.cc | C++ | Deploy a TorchScript model with Poros Runtime (CPU/GPU) |
