English | 中文

Install FastDeploy - Tutorials

Install Prebuilt FastDeploy

Build and Install FastDeploy

Build options

| Option | Description |
|--------|-------------|
| ENABLE_ORT_BACKEND | Default OFF, whether to enable the ONNX Runtime backend (CPU/GPU) |
| ENABLE_PADDLE_BACKEND | Default OFF, whether to enable the Paddle Inference backend (CPU/GPU) |
| ENABLE_TRT_BACKEND | Default OFF, whether to enable the TensorRT backend (GPU) |
| ENABLE_OPENVINO_BACKEND | Default OFF, whether to enable the OpenVINO backend (CPU) |
| ENABLE_VISION | Default OFF, whether to enable the vision models deployment module |
| ENABLE_TEXT | Default OFF, whether to enable the text models deployment module |
| WITH_GPU | Default OFF, must be ON when building for GPU |
| WITH_KUNLUNXIN | Default OFF, must be ON when deploying on KunlunXin XPU |
| WITH_TIMVX | Default OFF, must be ON when deploying on RV1126/RV1109/A311D |
| WITH_ASCEND | Default OFF, must be ON when deploying on Huawei Ascend |
| CUDA_DIRECTORY | Default /usr/local/cuda; when building for GPU, this defines the path of CUDA (>= 11.2) |
| TRT_DIRECTORY | When building with ENABLE_TRT_BACKEND=ON, this defines the path of TensorRT (>= 8.4) |
| ORT_DIRECTORY | [Optional] When building with ENABLE_ORT_BACKEND=ON, this defines the path of ONNX Runtime; if not set, the ONNX Runtime library is downloaded automatically |
| OPENCV_DIRECTORY | [Optional] When building with ENABLE_VISION=ON, this defines the path of OpenCV; if not set, the OpenCV library is downloaded automatically |
| OPENVINO_DIRECTORY | [Optional] When building with ENABLE_OPENVINO_BACKEND=ON, this defines the path of OpenVINO; if not set, the OpenVINO library is downloaded automatically |
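
For reference, a build from source on Linux might look like the minimal sketch below. The selected options, the install prefix, and the CUDA path are illustrative only; toggle the options from the table above as needed and see the build tutorial linked earlier for platform-specific instructions.

```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy && mkdir build && cd build

# Example configuration: ONNX Runtime backend + vision module on GPU.
# Any option from the table above can be passed as -D<OPTION>=ON/OFF.
cmake .. \
  -DENABLE_ORT_BACKEND=ON \
  -DENABLE_VISION=ON \
  -DWITH_GPU=ON \
  -DCUDA_DIRECTORY=/usr/local/cuda \
  -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy

make -j8
make install
```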