
Simplified Chinese | English
## Tutorials

### Install

- Install FastDeploy Prebuilt Libraries
- Build and Install FastDeploy Library on GPU Platform
- Build and Install FastDeploy Library on CPU Platform
- Build and Install FastDeploy Library on IPU Platform
- Build and Install FastDeploy Library on Nvidia Jetson Platform
- Build and Install FastDeploy Library on Android Platform
- Build and Install FastDeploy Serving Deployment Image
### Quick Start Demos

- Python Deployment Demo (see the sketch after this list)
- C++ Deployment Demo
- A Quick Start on Runtime Python
- A Quick Start on Runtime C++
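
The Python deployment demo linked above comes down to a few calls. Below is a minimal sketch assuming the `fastdeploy` pip package and a PP-YOLOE detection model exported as `model.pdmodel` / `model.pdiparams` / `infer_cfg.yml` (placeholder file names for whatever model you actually have on disk):

```python
import cv2
import fastdeploy as fd

# Pick the device and inference backend before building the model
# (ONNX Runtime on CPU in this sketch).
option = fd.RuntimeOption()
option.use_cpu()
option.use_ort_backend()

# Wrap the exported PP-YOLOE model; file names are placeholders.
model = fd.vision.detection.PPYOLOE(
    "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
    runtime_option=option)

# Run inference on one image and visualize the DetectionResult.
im = cv2.imread("test.jpg")
result = model.predict(im)
print(result)

vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized.jpg", vis_im)
```

The C++ demo follows the same structure: a `RuntimeOption`, a model wrapper, `Predict`, and a visualization helper.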
### API

### Performance Optimization

### Frequently Asked Questions
1. How to Change Inference Backends (see the sketch after this list)
2. How to Use the FastDeploy C++ SDK on Windows
3. How to Use the FastDeploy C++ SDK on Android
4. Tricks of TensorRT
5. How to Develop a New Model
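
For item 1, switching inference backends in the Python API is typically done through `RuntimeOption` before the model object is created. The sketch below shows the commonly used switches; which backends are actually available depends on how your FastDeploy package was built:

```python
import fastdeploy as fd

# RuntimeOption selects the device and the inference backend.
option = fd.RuntimeOption()

# Device selection.
option.use_gpu(0)        # run on GPU 0; use option.use_cpu() for CPU

# Backend selection (pick one; availability depends on the build).
option.use_trt_backend()         # TensorRT (GPU)
# option.use_paddle_backend()    # Paddle Inference
# option.use_ort_backend()       # ONNX Runtime
# option.use_openvino_backend()  # OpenVINO (CPU)

# The option is then passed to any model wrapper, e.g.:
# model = fd.vision.detection.PPYOLOE(
#     "model.pdmodel", "model.pdiparams", "infer_cfg.yml",
#     runtime_option=option)
```

If a requested backend is not compiled into the installed package, FastDeploy falls back to one of the model's valid backends, so checking the runtime log is a quick way to confirm which backend is actually in use.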