# FastDeploy Runtime examples
The FastDeploy Runtime inference examples are listed below.
## Python Examples
| Example Code | Programming Language | Description |
| :------- | :------- | :---- |
| python/infer_paddle_paddle_inference.py | Python | Deploy Paddle model with Paddle Inference (CPU/GPU) |
| python/infer_paddle_tensorrt.py | Python | Deploy Paddle model with TensorRT (GPU) |
| python/infer_paddle_openvino.py | Python | Deploy Paddle model with OpenVINO (CPU) |
| python/infer_paddle_onnxruntime.py | Python | Deploy Paddle model with ONNX Runtime (CPU/GPU) |
| python/infer_onnx_openvino.py | Python | Deploy ONNX model with OpenVINO (CPU) |
| python/infer_onnx_tensorrt.py | Python | Deploy ONNX model with TensorRT (GPU) |
| python/infer_onnx_onnxruntime.py | Python | Deploy ONNX model with ONNX Runtime (CPU/GPU) |
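
The Python scripts above all follow the same basic pattern: configure a `RuntimeOption` with the model files and a backend, build a `Runtime`, and call `infer`. The sketch below illustrates that pattern; it assumes the FastDeploy Runtime Python API, and the model path and input tensor name are placeholders. Method names can differ between FastDeploy versions, so refer to the example scripts listed above for exact usage.

```python
import numpy as np
import fastdeploy as fd

# Configure the runtime: model files plus the inference backend.
# The model path is a placeholder; point it at a real Paddle model.
option = fd.RuntimeOption()
option.set_model_path("model/inference.pdmodel", "model/inference.pdiparams")
option.use_ort_backend()  # or use_openvino_backend() / use_trt_backend(), as in the table above
option.use_cpu()          # or use_gpu() for GPU-capable backends

# Build the runtime and run inference on random data.
runtime = fd.Runtime(option)
data = np.random.rand(1, 3, 224, 224).astype("float32")
# "inputs" is a placeholder; use the model's actual input tensor name.
outputs = runtime.infer({"inputs": data})
print(outputs[0].shape)  # shape of the first output tensor
```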
## C++ Examples
| Example Code | Programming Language | Description |
| :------- | :------- | :---- |
| cpp/infer_paddle_paddle_inference.cc | C++ | Deploy Paddle model with Paddle Inference (CPU/GPU) |
| cpp/infer_paddle_tensorrt.cc | C++ | Deploy Paddle model with TensorRT (GPU) |
| cpp/infer_paddle_openvino.cc | C++ | Deploy Paddle model with OpenVINO (CPU) |
| cpp/infer_paddle_onnxruntime.cc | C++ | Deploy Paddle model with ONNX Runtime (CPU/GPU) |
| cpp/infer_onnx_openvino.cc | C++ | Deploy ONNX model with OpenVINO (CPU) |
| cpp/infer_onnx_tensorrt.cc | C++ | Deploy ONNX model with TensorRT (GPU) |
| cpp/infer_onnx_onnxruntime.cc | C++ | Deploy ONNX model with ONNX Runtime (CPU/GPU) |
## Detailed Deployment Documentation
- [Python Deployment](python)
- [C++ Deployment](cpp)