English | 中文

FastDeploy Installation

Install Prebuilt FastDeploy Libraries

Build and Install from Source

FastDeploy Compile Options

| Option | Description |
|:--|:--|
| ENABLE_ORT_BACKEND | Default OFF. Whether to build with the ONNX Runtime backend (recommended ON for CPU/GPU deployment) |
| ENABLE_PADDLE_BACKEND | Default OFF. Whether to build with the Paddle Inference backend (recommended ON for CPU/GPU deployment) |
| ENABLE_LITE_BACKEND | Default OFF. Whether to build with the Paddle Lite backend (must be ON when building the Android library) |
| ENABLE_RKNPU2_BACKEND | Default OFF. Whether to build with the RKNPU2 backend (recommended ON for RK3588/RK3568/RK3566) |
| ENABLE_SOPHGO_BACKEND | Default OFF. Whether to build with the SOPHGO backend; set to ON when deploying on SOPHGO TPU |
| WITH_ASCEND | Default OFF. Set to ON when deploying on Huawei Ascend NPU |
| WITH_KUNLUNXIN | Default OFF. Set to ON when deploying on KunlunXin XPU |
| WITH_TIMVX | Default OFF. Set to ON when deploying on RV1126/RV1109/A311D |
| ENABLE_TRT_BACKEND | Default OFF. Whether to build with the TensorRT backend (recommended ON for GPU deployment) |
| ENABLE_OPENVINO_BACKEND | Default OFF. Whether to build with the OpenVINO backend (recommended ON for CPU deployment) |
| ENABLE_VISION | Default OFF. Whether to build the vision model deployment module |
| ENABLE_TEXT | Default OFF. Whether to build the text (NLP) model deployment module |
| WITH_GPU | Default OFF. Set to ON when deploying on GPU |
| RKNN2_TARGET_SOC | Used only when ENABLE_RKNPU2_BACKEND=ON. No default value; must be set to RK3588 or RK356X, otherwise the build will fail |
| CUDA_DIRECTORY | Default /usr/local/cuda. Specifies the CUDA (>=11.2) path when deploying on GPU |
| TRT_DIRECTORY | Required when the TensorRT backend is enabled; specifies the TensorRT (>=8.4) path |
| ORT_DIRECTORY | Specifies a local ONNX Runtime library path when the ONNX Runtime backend is enabled; if not set, ONNX Runtime is downloaded automatically during the build |
| OPENCV_DIRECTORY | Specifies a local OpenCV library path when ENABLE_VISION=ON; if not set, OpenCV is downloaded automatically during the build |
| OPENVINO_DIRECTORY | Specifies a local OpenVINO library path when the OpenVINO backend is enabled; if not set, OpenVINO is downloaded automatically during the build |
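
For reference, below is a minimal sketch of how these options are typically passed to CMake when building the C++ SDK on a Linux x86-64 host with GPU support. The repository URL follows the public PaddlePaddle/FastDeploy GitHub project; the TensorRT path and install prefix are placeholders to adapt to your environment.

```bash
# Minimal sketch: configure and build the FastDeploy C++ SDK with GPU-related
# options from the table above. The TensorRT path and install prefix below
# are placeholders; adjust them to your local setup.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build

cmake .. \
  -DENABLE_ORT_BACKEND=ON \
  -DENABLE_PADDLE_BACKEND=ON \
  -DENABLE_TRT_BACKEND=ON \
  -DENABLE_VISION=ON \
  -DWITH_GPU=ON \
  -DCUDA_DIRECTORY=/usr/local/cuda \
  -DTRT_DIRECTORY=/opt/TensorRT-8.4.1.5 \
  -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy

make -j8
make install
```

For hardware-specific builds (for example Huawei Ascend NPU or KunlunXin XPU), replace the GPU-related flags above with the corresponding switch from the table, such as -DWITH_ASCEND=ON or -DWITH_KUNLUNXIN=ON, and refer to the platform-specific build documents listed above.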