
English | 中文

# Install FastDeploy - Tutorials

- Install Prebuilt FastDeploy
- Build FastDeploy and Install

## Build options

| Option | Description |
|:---|:---|
| ENABLE_ORT_BACKEND | Default OFF, whether to enable the ONNX Runtime backend (CPU/GPU) |
| ENABLE_PADDLE_BACKEND | Default OFF, whether to enable the Paddle Inference backend (CPU/GPU) |
| ENABLE_TRT_BACKEND | Default OFF, whether to enable the TensorRT backend (GPU) |
| ENABLE_OPENVINO_BACKEND | Default OFF, whether to enable the OpenVINO backend (CPU) |
| ENABLE_VISION | Default OFF, whether to enable the vision models deployment module |
| ENABLE_TEXT | Default OFF, whether to enable the text models deployment module |
| WITH_GPU | Default OFF, must be ON when building with GPU support |
| WITH_KUNLUNXIN | Default OFF, must be ON when deploying on KunlunXin XPU |
| WITH_TIMVX | Default OFF, must be ON when deploying on RV1126/RV1109/A311D |
| WITH_ASCEND | Default OFF, must be ON when deploying on Huawei Ascend |
| CUDA_DIRECTORY | Default /usr/local/cuda; when building with GPU support, this defines the path of CUDA (>= 11.2) |
| TRT_DIRECTORY | When building with ENABLE_TRT_BACKEND=ON, this defines the path of TensorRT (>= 8.4) |
| ORT_DIRECTORY | [Optional] When building with ENABLE_ORT_BACKEND=ON, this defines the path of ONNX Runtime; if not set, the ONNX Runtime library is downloaded automatically |
| OPENCV_DIRECTORY | [Optional] When building with ENABLE_VISION=ON, this defines the path of OpenCV; if not set, the OpenCV library is downloaded automatically |
| OPENVINO_DIRECTORY | [Optional] When building with ENABLE_OPENVINO_BACKEND=ON, this defines the path of OpenVINO; if not set, the OpenVINO library is downloaded automatically |
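To illustrate how these options combine, the snippet below sketches a source build with the GPU backends enabled. It is a minimal sketch, not the official build recipe: the TensorRT path and the install prefix are illustrative placeholders that depend on your machine, and the set of `-D` flags should be adjusted to the backends you actually need.

```shell
# Hypothetical GPU build from source; paths below are placeholders.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build

# Enable the desired backends via the options from the table above.
cmake .. \
  -DENABLE_ORT_BACKEND=ON \
  -DENABLE_PADDLE_BACKEND=ON \
  -DENABLE_TRT_BACKEND=ON \
  -DENABLE_VISION=ON \
  -DWITH_GPU=ON \
  -DCUDA_DIRECTORY=/usr/local/cuda \
  -DTRT_DIRECTORY=/opt/TensorRT-8.4 \
  -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy

make -j8
make install
```

Options left unset keep their defaults from the table (for example, `ORT_DIRECTORY` is omitted here, so the ONNX Runtime library would be downloaded automatically).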