# FastDeploy Runtime examples
The FastDeploy Runtime inference examples are listed below.
## Python Examples
| Example Code | Programming Language | Description |
| :------- | :------- | :---- |
| python/infer_paddle_paddle_inference.py | Python | Deploy Paddle model with Paddle Inference (CPU/GPU) |
| python/infer_paddle_tensorrt.py | Python | Deploy Paddle model with TensorRT (GPU) |
| python/infer_paddle_openvino.py | Python | Deploy Paddle model with OpenVINO (CPU) |
| python/infer_paddle_onnxruntime.py | Python | Deploy Paddle model with ONNX Runtime (CPU/GPU) |
| python/infer_onnx_openvino.py | Python | Deploy ONNX model with OpenVINO (CPU) |
| python/infer_onnx_tensorrt.py | Python | Deploy ONNX model with TensorRT (GPU) |
| python/infer_onnx_onnxruntime.py | Python | Deploy ONNX model with ONNX Runtime (CPU/GPU) |
| python/infer_torchscript_poros.py | Python | Deploy TorchScript model with Poros Runtime (CPU/GPU) |
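
All of these scripts share the same flow: build a `RuntimeOption`, point it at a model, pick a backend and device, then call `Runtime.infer`. Below is a minimal sketch of that pattern; the model path and input shape are placeholders, and the backend/device setters follow the FastDeploy Runtime Python API used in these examples:

```python
import numpy as np
import fastdeploy as fd

# Describe the model and choose a device/backend.
option = fd.RuntimeOption()
option.set_model_path("model/inference.pdmodel",   # placeholder Paddle model files
                      "model/inference.pdiparams")
option.use_cpu()
option.use_ort_backend()  # or e.g. use_openvino_backend(); use_trt_backend() requires use_gpu()

# Build the runtime and run inference on a random input.
runtime = fd.Runtime(option)
input_name = runtime.get_input_info(0).name
data = np.random.rand(1, 3, 224, 224).astype("float32")  # placeholder input shape
outputs = runtime.infer({input_name: data})
print(outputs[0].shape)
```

For ONNX models the scripts pass the `.onnx` path together with the ONNX model format to `set_model_path`; each script in the table above shows the exact calls for its backend.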
## C++ Examples
| Example Code | Programming Language | Description |
| :------- | :------- | :---- |
| cpp/infer_paddle_paddle_inference.cc | C++ | Deploy Paddle model with Paddle Inference (CPU/GPU) |
| cpp/infer_paddle_tensorrt.cc | C++ | Deploy Paddle model with TensorRT (GPU) |
| cpp/infer_paddle_openvino.cc | C++ | Deploy Paddle model with OpenVINO (CPU) |
| cpp/infer_paddle_onnxruntime.cc | C++ | Deploy Paddle model with ONNX Runtime (CPU/GPU) |
| cpp/infer_onnx_openvino.cc | C++ | Deploy ONNX model with OpenVINO (CPU) |
| cpp/infer_onnx_tensorrt.cc | C++ | Deploy ONNX model with TensorRT (GPU) |
| cpp/infer_onnx_onnxruntime.cc | C++ | Deploy ONNX model with ONNX Runtime (CPU/GPU) |
| cpp/infer_torchscript_poros.cc | C++ | Deploy TorchScript model with Poros Runtime (CPU/GPU) |
## Detailed Deployment Documentation
- [Python Deployment](python)
- [C++ Deployment](cpp)