
English | 中文
# Install FastDeploy - Tutorials

- Install Prebuilt FastDeploy
- Build FastDeploy and Install
  - Build and Install on GPU Platform
  - Build and Install on CPU Platform
  - Build and Install on IPU Platform
  - Build and Install on Nvidia Jetson Platform
  - Build and Install on Android Platform
  - Build and Install on RV1126 Platform
  - Build and Install on RK3588 Platform
  - Build and Install on A311D Platform
  - Build and Install on KunlunXin XPU Platform
  - Build and Install on Huawei Ascend Platform
  - Build and Install on SOPHGO Platform
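
For orientation, a minimal sketch of a from-source CPU build on Linux is shown below. It assumes git, CMake, and a C++ toolchain are already installed; the platform-specific guides above remain the authoritative references for prerequisites and flags.

```bash
# Minimal sketch of a from-source CPU build on Linux.
# Assumptions: git, cmake, and a C++ toolchain are installed;
# see the platform-specific guides above for authoritative steps.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy && mkdir build && cd build

# Enable the ONNX Runtime backend and the vision module (see the
# build options table below), installing into a local directory.
cmake .. \
  -DENABLE_ORT_BACKEND=ON \
  -DENABLE_VISION=ON \
  -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy
make -j8
make install
```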
## Build options
Option | Description |
---|---|
ENABLE_ORT_BACKEND | Default OFF; whether to enable the ONNX Runtime backend (CPU/GPU) |
ENABLE_PADDLE_BACKEND | Default OFF; whether to enable the Paddle Inference backend (CPU/GPU) |
ENABLE_TRT_BACKEND | Default OFF; whether to enable the TensorRT backend (GPU) |
ENABLE_OPENVINO_BACKEND | Default OFF; whether to enable the OpenVINO backend (CPU) |
ENABLE_VISION | Default OFF; whether to enable the vision model deployment module |
ENABLE_TEXT | Default OFF; whether to enable the text model deployment module |
WITH_GPU | Default OFF; must be ON when building for GPU |
WITH_KUNLUNXIN | Default OFF; must be ON when deploying on KunlunXin XPU |
WITH_TIMVX | Default OFF; must be ON when deploying on RV1126/RV1109/A311D |
WITH_ASCEND | Default OFF; must be ON when deploying on Huawei Ascend |
CUDA_DIRECTORY | Default /usr/local/cuda; when building for GPU, this defines the path of CUDA (>=11.2) |
TRT_DIRECTORY | When building with ENABLE_TRT_BACKEND=ON, this defines the path of TensorRT (>=8.4) |
ORT_DIRECTORY | [Optional] When building with ENABLE_ORT_BACKEND=ON, this defines the path of ONNX Runtime; if not set, the ONNX Runtime library is downloaded automatically |
OPENCV_DIRECTORY | [Optional] When building with ENABLE_VISION=ON, this defines the path of OpenCV; if not set, the OpenCV library is downloaded automatically |
OPENVINO_DIRECTORY | [Optional] When building with ENABLE_OPENVINO_BACKEND=ON, this defines the path of OpenVINO; if not set, the OpenVINO library is downloaded automatically |
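
As an illustration of how these options combine, a configure step for a GPU build with the Paddle Inference and TensorRT backends might look like the sketch below; the TensorRT path is a placeholder and must point at a local TensorRT (>=8.4) installation.

```bash
# Hypothetical GPU configure step combining options from the table above.
# /opt/TensorRT-8.4 is a placeholder path; CUDA_DIRECTORY keeps its
# default value here but is shown explicitly for clarity.
cmake .. \
  -DWITH_GPU=ON \
  -DENABLE_PADDLE_BACKEND=ON \
  -DENABLE_TRT_BACKEND=ON \
  -DENABLE_VISION=ON \
  -DCUDA_DIRECTORY=/usr/local/cuda \
  -DTRT_DIRECTORY=/opt/TensorRT-8.4
```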