[Doc] Change all PaddleLite or Paddle-Lite to Paddle Lite (#929)

* [FlyCV] Bump up FlyCV -> official release 1.0.0

* change PaddleLite or Paddle-Lite to Paddle Lite

* fix docs

* fix doc

Co-authored-by: DefTruth <qiustudent_r@163.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
yeliang2258
2022-12-21 14:15:50 +08:00
committed by GitHub
parent 725fe52df3
commit b42ec302e6
21 changed files with 104 additions and 86 deletions


@@ -1,9 +1,9 @@
# 晶晨 A311D 部署环境编译安装
FastDeploy 基于 Paddle-Lite 后端支持在晶晨 NPU 上进行部署推理。
更多详细的信息请参考:[PaddleLite部署示例](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html)。
FastDeploy 基于 Paddle Lite 后端支持在晶晨 NPU 上进行部署推理。
更多详细的信息请参考:[Paddle Lite部署示例](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html)。
本文档介绍如何编译基于 PaddleLite 的 C++ FastDeploy 交叉编译库。
本文档介绍如何编译基于 Paddle Lite 的 C++ FastDeploy 交叉编译库。
相关编译选项说明如下:
|编译选项|默认值|说明|备注|
@@ -47,7 +47,7 @@ wget -c https://mms-res.cdn.bcebos.com/cmake-3.10.3-Linux-x86_64.tar.gz && \
ln -s /opt/cmake-3.10/bin/ccmake /usr/bin/ccmake
```
## 基于 PaddleLite 的 FastDeploy 交叉编译库编译
## 基于 Paddle Lite 的 FastDeploy 交叉编译库编译
搭建好交叉编译环境之后,编译命令如下:
```bash
# Download the latest source code
@@ -67,7 +67,7 @@ cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
make -j8
make install
```
编译完成之后,会生成 fastdeploy-tmivx 目录,表示基于 PadddleLite TIM-VX 的 FastDeploy 库编译完成。
编译完成之后,会生成 fastdeploy-tmivx 目录,表示基于 Paddle Lite TIM-VX 的 FastDeploy 库编译完成。
## 准备设备运行环境
部署前要保证晶晨 Linux Kernel NPU 驱动 galcore.so 版本及所适用的芯片型号与依赖库保持一致,在部署前,请登录开发板,并通过命令行输入以下命令查询 NPU 驱动版本,晶晨建议的驱动版本为:6.4.4.3
@@ -82,7 +82,7 @@ dmesg | grep Galcore
2. 刷机,刷取 NPU 驱动版本符合要求的固件。
### 手动替换 NPU 驱动版本
1. 使用如下命令下载解压 PaddleLite demo,其中提供了现成的驱动文件
1. 使用如下命令下载解压 Paddle Lite demo,其中提供了现成的驱动文件
```bash
wget https://paddlelite-demo.bj.bcebos.com/devices/generic/PaddleLite-generic-demo.tar.gz
tar -xf PaddleLite-generic-demo.tar.gz
@@ -96,7 +96,7 @@ tar -xf PaddleLite-generic-demo.tar.gz
### 刷机
根据具体的开发板型号,向开发板卖家或官网客服索要 6.4.4.3 版本 NPU 驱动对应的固件和刷机方法。
更多细节请参考:[PaddleLite准备设备环境](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
更多细节请参考:[Paddle Lite准备设备环境](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
## 基于 FastDeploy 在 A311D 上的部署示例
1. A311D 上部署 PaddleClas 分类模型请参考:[PaddleClas 分类模型在 A311D 上的 C++ 部署示例](../../../examples/vision/classification/paddleclas/a311d/README.md)

docs/cn/build_and_install/android.md Normal file → Executable file

@@ -1,6 +1,6 @@
# Android部署库编译
FastDeploy当前在Android仅支持Paddle-Lite后端推理,支持armeabi-v7a和arm64-v8a两种cpu架构,在armv8.2架构的arm设备支持fp16精度推理。相关编译选项说明如下:
FastDeploy当前在Android仅支持Paddle Lite后端推理,支持armeabi-v7a和arm64-v8a两种cpu架构,在armv8.2架构的arm设备支持fp16精度推理。相关编译选项说明如下:
|编译选项|默认值|说明|备注|
|:---|:---|:---|:---|


@@ -1,9 +1,9 @@
# 瑞芯微 RV1126 部署环境编译安装
FastDeploy基于 Paddle-Lite 后端支持在瑞芯微(Rockchip)Soc 上进行部署推理。
更多详细的信息请参考:[PaddleLite部署示例](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html)。
FastDeploy基于 Paddle Lite 后端支持在瑞芯微(Rockchip)Soc 上进行部署推理。
更多详细的信息请参考:[Paddle Lite部署示例](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html)。
本文档介绍如何编译基于 PaddleLite 的 C++ FastDeploy 交叉编译库。
本文档介绍如何编译基于 Paddle Lite 的 C++ FastDeploy 交叉编译库。
相关编译选项说明如下:
|编译选项|默认值|说明|备注|
@@ -47,7 +47,7 @@ wget -c https://mms-res.cdn.bcebos.com/cmake-3.10.3-Linux-x86_64.tar.gz && \
ln -s /opt/cmake-3.10/bin/ccmake /usr/bin/ccmake
```
## 基于 PaddleLite 的 FastDeploy 交叉编译库编译
## 基于 Paddle Lite 的 FastDeploy 交叉编译库编译
搭建好交叉编译环境之后,编译命令如下:
```bash
# Download the latest source code
@@ -67,7 +67,7 @@ cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
make -j8
make install
```
编译完成之后,会生成 fastdeploy-tmivx 目录,表示基于 PadddleLite TIM-VX 的 FastDeploy 库编译完成。
编译完成之后,会生成 fastdeploy-tmivx 目录,表示基于 Paddle Lite TIM-VX 的 FastDeploy 库编译完成。
## 准备设备运行环境
部署前要保证芯原 Linux Kernel NPU 驱动 galcore.so 版本及所适用的芯片型号与依赖库保持一致,在部署前,请登录开发板,并通过命令行输入以下命令查询 NPU 驱动版本,Rockchip建议的驱动版本为:6.4.6.5
@@ -82,7 +82,7 @@ dmesg | grep Galcore
2. 刷机,刷取 NPU 驱动版本符合要求的固件。
### 手动替换 NPU 驱动版本
1. 使用如下命令下载解压 PaddleLite demo,其中提供了现成的驱动文件
1. 使用如下命令下载解压 Paddle Lite demo,其中提供了现成的驱动文件
```bash
wget https://paddlelite-demo.bj.bcebos.com/devices/generic/PaddleLite-generic-demo.tar.gz
tar -xf PaddleLite-generic-demo.tar.gz
@@ -96,7 +96,7 @@ tar -xf PaddleLite-generic-demo.tar.gz
### 刷机
根据具体的开发板型号,向开发板卖家或官网客服索要 6.4.6.5 版本 NPU 驱动对应的固件和刷机方法。
更多细节请参考:[PaddleLite准备设备环境](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
更多细节请参考:[Paddle Lite准备设备环境](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
## 基于 FastDeploy 在 RV1126 上的部署示例
1. RV1126 上部署 PaddleClas 分类模型请参考:[PaddleClas 分类模型在 RV1126 上的 C++ 部署示例](../../../examples/vision/classification/paddleclas/rv1126/README.md)


@@ -1,9 +1,9 @@
# 昆仑芯 XPU 部署环境编译安装
FastDeploy 基于 Paddle-Lite 后端支持在昆仑芯 XPU 上进行部署推理。
更多详细的信息请参考:[PaddleLite部署示例](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xpu.html#xpu)。
FastDeploy 基于 Paddle Lite 后端支持在昆仑芯 XPU 上进行部署推理。
更多详细的信息请参考:[Paddle Lite部署示例](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xpu.html#xpu)。
本文档介绍如何编译基于 PaddleLite 的 C++ FastDeploy 编译库。
本文档介绍如何编译基于 Paddle Lite 的 C++ FastDeploy 编译库。
相关编译选项说明如下:
|编译选项|默认值|说明|备注|
@@ -23,7 +23,7 @@ FastDeploy 基于 Paddle-Lite 后端支持在昆仑芯 XPU 上进行部署推理
| OPENVINO_DIRECTORY | 当开启OpenVINO后端时, 用于指定用户本地的OpenVINO库路径, 如果不指定, 编译过程会自动下载OpenVINO库 |
更多编译选项请参考[FastDeploy编译选项说明](./README.md)
## 基于 PaddleLite 的 C++ FastDeploy 库编译
## 基于 Paddle Lite 的 C++ FastDeploy 库编译
- OS: Linux
- gcc/g++: version >= 8.2
- cmake: version >= 3.15
@@ -52,7 +52,7 @@ cmake -DWITH_XPU=ON \
make -j8
make install
```
编译完成之后,会生成 fastdeploy-xpu 目录,表示基于 PadddleLite 的 FastDeploy 库编译完成。
编译完成之后,会生成 fastdeploy-xpu 目录,表示基于 Paddle Lite 的 FastDeploy 库编译完成。
## Python 编译
编译命令如下:


@@ -1,8 +1,8 @@
# How to Build A311D Deployment Environment
FastDeploy supports AI deployment on Amlogic SoC based on Paddle-Lite backend. For more detailed information, please refer to: [PaddleLite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html).
FastDeploy supports AI deployment on Amlogic SoC based on Paddle Lite backend. For more detailed information, please refer to: [Paddle Lite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html).
This document describes how to compile the PaddleLite-based C++ FastDeploy cross-compilation library.
This document describes how to compile the Paddle Lite based C++ FastDeploy cross-compilation library.
The relevant compilation options are described as follows:
|Compile Options|Default Values|Description|Remarks|
@@ -46,7 +46,7 @@ wget -c https://mms-res.cdn.bcebos.com/cmake-3.10.3-Linux-x86_64.tar.gz && \
ln -s /opt/cmake-3.10/bin/ccmake /usr/bin/ccmake
```
## FastDeploy cross-compilation library compilation based on PaddleLite
## FastDeploy cross-compilation library compilation based on Paddle Lite
After setting up the cross-compilation environment, the compilation command is as follows:
```bash
# Download the latest source code
@@ -66,7 +66,7 @@ cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
make -j8
make install
```
After the compilation is complete, the fastdeploy-tmivx directory will be generated, indicating that the FastDeploy library based on PadddleLite TIM-VX has been compiled.
After the compilation is complete, the fastdeploy-tmivx directory will be generated, indicating that the FastDeploy library based on Paddle Lite TIM-VX has been compiled.
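For orientation, the end-to-end cross-compilation flow abbreviated above might look roughly like the sketch below. Only `CMAKE_TOOLCHAIN_FILE`, `make -j8` and `make install` appear in this document; `WITH_TIMVX`, `TARGET_ABI` and the install prefix are assumptions and should be checked against the compile options table.

```bash
# Hypothetical sketch of the cross-compilation flow described above.
# Only the toolchain file, make -j8 and make install are taken from the text;
# WITH_TIMVX, TARGET_ABI and the install prefix are assumptions.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
      -DWITH_TIMVX=ON \
      -DTARGET_ABI=arm64 \
      -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-tmivx \
      ..
make -j8
make install   # produces the fastdeploy-tmivx directory mentioned above
```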
## Prepare the Soc environment
Before deployment, ensure that the version of the driver galcore.so of the Verisilicon Linux Kernel NPU meets the requirements. Before deployment, please log in to the development board, and enter the following command through the command line to query the NPU driver version. The recommended version of the Amlogic driver is: 6.4.4.3
@@ -80,7 +80,7 @@ There are two ways to modify the current NPU driver version:
2. flash the machine, and flash the firmware that meets the requirements of the NPU driver version.
### Manually replace the NPU driver version
1. Use the following command to download and decompress the PaddleLite demo, which provides ready-made driver files
1. Use the following command to download and decompress the Paddle Lite demo, which provides ready-made driver files
```bash
wget https://paddlelite-demo.bj.bcebos.com/devices/generic/PaddleLite-generic-demo.tar.gz
tar -xf PaddleLite-generic-demo.tar.gz
@@ -93,7 +93,7 @@ tar -xf PaddleLite-generic-demo.tar.gz
### flash
According to the specific development board model, ask the development board seller or the official website customer service for the firmware and flashing method corresponding to the 6.4.4.3 version of the NPU driver.
For more details, please refer to: [PaddleLite prepares the device environment](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
For more details, please refer to: [Paddle Lite prepares the device environment](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
## Deployment example based on FastDeploy on A311D
1. For deploying the PaddleClas classification model on A311D, please refer to: [C++ deployment example of PaddleClas classification model on A311D](../../../examples/vision/classification/paddleclas/a311d/README.md)

docs/en/build_and_install/android.md Normal file → Executable file

@@ -1,12 +1,12 @@
# How to Build FastDeploy Android C++ SDK
FastDeploy supports Paddle-Lite backend on Android. It supports both armeabi-v7a and arm64-v8a cpu architectures, and supports fp16 precision inference on the armv8.2 architecture. The relevant compilation options are described as follows:
FastDeploy supports Paddle Lite backend on Android. It supports both armeabi-v7a and arm64-v8a cpu architectures, and supports fp16 precision inference on the armv8.2 architecture. The relevant compilation options are described as follows:
|Option|Default|Description|Remark|
|:---|:---|:---|:---|
|ENABLE_LITE_BACKEND|OFF|It needs to be set to ON when compiling the Android library| - |
|WITH_OPENCV_STATIC|OFF|Whether to use the OpenCV static library| - |
|WITH_LITE_STATIC|OFF|Whether to use the Paddle-Lite static library| NOT Support now |
|WITH_LITE_STATIC|OFF|Whether to use the Paddle Lite static library| NOT Support now |
Please reference [FastDeploy Compile Options](./README.md) for more details.
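As a rough illustration of how these options are typically combined, the sketch below configures an arm64-v8a build with the Paddle Lite backend enabled. `ENABLE_LITE_BACKEND` and `WITH_OPENCV_STATIC` come from the table above; the toolchain-related flags are the standard Android NDK CMake arguments, and the directory layout is an assumption.

```bash
# Illustrative Android configure-and-build step (assumed layout).
# ENABLE_LITE_BACKEND and WITH_OPENCV_STATIC are the documented options;
# the toolchain flags are the standard Android NDK CMake arguments.
cd FastDeploy && mkdir build-android && cd build-android
cmake -DCMAKE_TOOLCHAIN_FILE=${ANDROID_NDK}/build/cmake/android.toolchain.cmake \
      -DANDROID_ABI=arm64-v8a \
      -DANDROID_PLATFORM=android-21 \
      -DENABLE_LITE_BACKEND=ON \
      -DWITH_OPENCV_STATIC=OFF \
      ..
make -j8
make install
```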


@@ -1,8 +1,8 @@
# How to Build RV1126 Deployment Environment
FastDeploy supports AI deployment on Rockchip Soc based on Paddle-Lite backend. For more detailed information, please refer to: [PaddleLite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html).
FastDeploy supports AI deployment on Rockchip Soc based on Paddle Lite backend. For more detailed information, please refer to: [Paddle Lite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html).
This document describes how to compile the PaddleLite-based C++ FastDeploy cross-compilation library.
This document describes how to compile the Paddle Lite based C++ FastDeploy cross-compilation library.
The relevant compilation options are described as follows:
|Compile Options|Default Values|Description|Remarks|
@@ -46,7 +46,7 @@ wget -c https://mms-res.cdn.bcebos.com/cmake-3.10.3-Linux-x86_64.tar.gz && \
ln -s /opt/cmake-3.10/bin/ccmake /usr/bin/ccmake
```
## FastDeploy cross-compilation library compilation based on PaddleLite
## FastDeploy cross-compilation library compilation based on Paddle Lite
After setting up the cross-compilation environment, the compilation command is as follows:
```bash
# Download the latest source code
@@ -66,7 +66,7 @@ cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
make -j8
make install
```
After the compilation is complete, the fastdeploy-tmivx directory will be generated, indicating that the FastDeploy library based on PadddleLite TIM-VX has been compiled.
After the compilation is complete, the fastdeploy-tmivx directory will be generated, indicating that the FastDeploy library based on Paddle Lite TIM-VX has been compiled.
## Prepare the Soc environment
Before deployment, ensure that the version of the driver galcore.so of the Verisilicon Linux Kernel NPU meets the requirements. Before deployment, please log in to the development board, and enter the following command through the command line to query the NPU driver version. The recommended version of the Rockchip driver is: 6.4.6.5
@@ -80,7 +80,7 @@ There are two ways to modify the current NPU driver version:
2. flash the machine, and flash the firmware that meets the requirements of the NPU driver version.
### Manually replace the NPU driver version
1. Use the following command to download and decompress the PaddleLite demo, which provides ready-made driver files
1. Use the following command to download and decompress the Paddle Lite demo, which provides ready-made driver files
```bash
wget https://paddlelite-demo.bj.bcebos.com/devices/generic/PaddleLite-generic-demo.tar.gz
tar -xf PaddleLite-generic-demo.tar.gz
@@ -93,7 +93,7 @@ tar -xf PaddleLite-generic-demo.tar.gz
### flash
According to the specific development board model, ask the development board seller or the official website customer service for the firmware and flashing method corresponding to the 6.4.6.5 version of the NPU driver.
For more details, please refer to: [PaddleLite prepares the device environment](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
For more details, please refer to: [Paddle Lite prepares the device environment](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html#zhunbeishebeihuanjing)
## Deployment example based on FastDeploy on RV1126
1. For deploying the PaddleClas classification model on RV1126, please refer to: [C++ deployment example of PaddleClas classification model on RV1126](../../../examples/vision/classification/paddleclas/rv1126/README.md)


@@ -1,8 +1,8 @@
# How to Build KunlunXin XPU Deployment Environment
FastDeploy supports AI deployment on KunlunXin XPU based on Paddle-Lite backend. For more detailed information, please refer to: [PaddleLite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xpu.html#xpu).
FastDeploy supports AI deployment on KunlunXin XPU based on Paddle Lite backend. For more detailed information, please refer to: [Paddle Lite Deployment Example](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xpu.html#xpu).
This document describes how to compile the C++ FastDeploy library based on PaddleLite.
This document describes how to compile the C++ FastDeploy library based on Paddle Lite.
The relevant compilation options are described as follows:
|Compile Options|Default Values|Description|Remarks|
@@ -24,7 +24,7 @@ The configuration for third libraries(Optional, if the following option is not d
For more compilation options, please refer to [Description of FastDeploy compilation options](./README.md)
## C++ FastDeploy library compilation based on PaddleLite
## C++ FastDeploy library compilation based on Paddle Lite
- OS: Linux
- gcc/g++: version >= 8.2
- cmake: version >= 3.15
@@ -55,7 +55,7 @@ cmake -DWITH_XPU=ON \
make -j8
make install
```
After the compilation is complete, the fastdeploy-xpu directory will be generated, indicating that the PadddleLite-based FastDeploy library has been compiled.
After the compilation is complete, the fastdeploy-xpu directory will be generated, indicating that the Paddle Lite based FastDeploy library has been compiled.
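For context, a minimal sketch of the native build this section describes is shown below. `WITH_XPU=ON`, `make -j8` and `make install` are taken from the text above; the install prefix is an assumption chosen to match the fastdeploy-xpu output directory.

```bash
# Minimal sketch of the KunlunXin XPU build described above.
# WITH_XPU=ON is from the text; the install prefix is an assumption.
cd FastDeploy && mkdir build && cd build
cmake -DWITH_XPU=ON \
      -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-xpu \
      ..
make -j8
make install
```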
## Python compile
The compilation command is as follows:

examples/application/js/converter/DEVELOPMENT.md Normal file → Executable file

@@ -64,7 +64,7 @@ Parameter | description
--modelPath | The model file path, used when the weight file is merged.
--paramPath | The weight file path, used when the weight file is merged.
--outputDir | `Necessary`, the output model directory generated after converting.
--disableOptimize | Whether to disable model optimization: `1` to disable, `0` to optimize (requires installing PaddleLite), default 0.
--disableOptimize | Whether to disable model optimization: `1` to disable, `0` to optimize (requires installing Paddle Lite), default 0.
--logModelInfo | Whether to print model structure information: `0` means not to print, `1` means to print, default 0.
--sliceDataSize | Shard size (in KB) of each weight file. Default size is 4096.
--useGPUOpt | Whether to use gpu opt, default is False.
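Putting these options together, a typical invocation could look like the sketch below; the input and output paths are placeholders.

```bash
# Example invocation using the options documented above (paths are placeholders).
# --disableOptimize=0 keeps the optimization step enabled, which requires Paddle Lite.
python convertToPaddleJSModel.py \
    --inputDir="./fluid_model" \
    --outputDir="./paddlejs_model" \
    --disableOptimize=0 \
    --sliceDataSize=4096 \
    --logModelInfo=1
```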

examples/application/js/converter/DEVELOPMENT_cn.md Normal file → Executable file

@@ -63,7 +63,7 @@ python convertToPaddleJSModel.py --inputDir=<fluid_model_directory> --outputDir=
--modelPath | fluid 模型文件所在路径,使用合并参数文件时使用该参数
--paramPath | fluid 参数文件所在路径,使用合并参数文件时使用该参数
--outputDir | `必要参数` Paddle.js 模型输出路径
--disableOptimize | 是否关闭模型优化, `1` 为关闭优化,`0` 为开启优化(需安装 PaddleLite ),默认执行优化
--disableOptimize | 是否关闭模型优化, `1` 为关闭优化,`0` 为开启优化(需安装 Paddle Lite ),默认执行优化
--logModelInfo | 是否打印模型结构信息, `0` 为不打印, `1` 为打印,默认不打印
--sliceDataSize | 分片输出 Paddle.js 参数文件时每片文件的大小,单位 KB,默认 4096
--useGPUOpt | 是否开启模型 GPU 优化,默认不开启(当模型准备运行在 webgl/webgpu 计算方案时,可以设置为 True 开启,在 wasm/plainjs 方案,则不用开启)


@@ -9,19 +9,20 @@ import stat
import traceback
import copy
def cleanTempModel(optimizedModelTempDir):
""" 清理opt优化完的临时模型文件 """
if os.path.exists(optimizedModelTempDir):
print("Cleaning optimized temporary model...")
shutil.rmtree(optimizedModelTempDir, onerror=grantWritePermission)
def grantWritePermission(func, path, execinfo):
""" 文件授权 """
os.chmod(path, stat.S_IWRITE)
func(path)
def main():
"""
Example:
@@ -29,20 +30,41 @@ def main():
"""
try:
p = argparse.ArgumentParser(description='转化为PaddleJS模型参数解析')
p.add_argument('--inputDir', help='fluid模型所在目录。当且仅当使用分片参数文件时使用该参数。将过滤modelPath和paramsPath参数且模型文件名必须为`__model__`', required=False)
p.add_argument('--modelPath', help='fluid模型文件所在路径使用合并参数文件时使用该参数', required=False)
p.add_argument('--paramPath', help='fluid参数文件所在路径使用合并参数文件时使用该参数', required=False)
p.add_argument("--outputDir", help='paddleJS模型输出路径必要参数', required=True)
p.add_argument("--disableOptimize", type=int, default=0, help='是否关闭模型优化非必要参数1为关闭优化0为开启优化默认开启优化', required=False)
p.add_argument("--logModelInfo", type=int, default=0, help='是否输出模型结构信息非必要参数0为不输出1为输出默认不输出', required=False)
p.add_argument("--sliceDataSize", type=int, default=4096, help='分片输出参数文件时每片文件的大小单位KB非必要参数默认4096KB', required=False)
p.add_argument(
'--inputDir',
help='fluid模型所在目录。当且仅当使用分片参数文件时使用该参数。将过滤modelPath和paramsPath参数且模型文件名必须为`__model__`',
required=False)
p.add_argument(
'--modelPath', help='fluid模型文件所在路径使用合并参数文件时使用该参数', required=False)
p.add_argument(
'--paramPath', help='fluid参数文件所在路径使用合并参数文件时使用该参数', required=False)
p.add_argument(
"--outputDir", help='paddleJS模型输出路径必要参数', required=True)
p.add_argument(
"--disableOptimize",
type=int,
default=0,
help='是否关闭模型优化非必要参数1为关闭优化0为开启优化默认开启优化',
required=False)
p.add_argument(
"--logModelInfo",
type=int,
default=0,
help='是否输出模型结构信息非必要参数0为不输出1为输出默认不输出',
required=False)
p.add_argument(
"--sliceDataSize",
type=int,
default=4096,
help='分片输出参数文件时每片文件的大小单位KB非必要参数默认4096KB',
required=False)
p.add_argument('--useGPUOpt', help='转换模型是否执行GPU优化方法', required=False)
args = p.parse_args()
# 获取当前用户使用的 python 解释器 bin 位置
pythonCmd = sys.executable
# TODO: 由于PaddleLite和PaddlePaddle存在包冲突,因此将整个模型转换工具拆成两个python文件,由一个入口python文件通过命令行调用
# TODO: 由于Paddle Lite和PaddlePaddle存在包冲突,因此将整个模型转换工具拆成两个python文件,由一个入口python文件通过命令行调用
# 区分本地执行和命令行执行
if os.path.exists("optimizeModel.py"):
optimizeCmd = pythonCmd + " optimizeModel.py"
@@ -76,7 +98,6 @@ def main():
args.modelPath = os.path.join(optimizedModelTempDir, "model")
args.paramPath = os.path.join(optimizedModelTempDir, "params")
print("============Convert Model Args=============")
if inputDir:
print("inputDir: " + inputDir)
@@ -88,14 +109,14 @@ def main():
print("enableLogModelInfo: " + str(enableLogModelInfo))
print("sliceDataSize:" + str(sliceDataSize))
print("Starting...")
if enableOptimization:
print("Optimizing model...")
for param in ["inputDir", "modelPath", "paramPath", "outputDir"]:
if optArgs.__dict__[param]:
# 用""框起命令参数值,解决路径中的空格问题
optimizeCmd += " --" + param + "="+ '"' + str(optArgs.__dict__[param]) + '"'
optimizeCmd += " --" + param + "=" + '"' + str(
optArgs.__dict__[param]) + '"'
os.system(optimizeCmd)
try:
os.listdir(optimizedModelTempDir)
@@ -110,13 +131,16 @@ def main():
else:
print("\n\033[32mOptimizing model successfully.\033[0m")
else:
print("\033[33mYou choosed not to optimize model, consequently, optimizing model is skiped.\033[0m")
print(
"\033[33mYou choosed not to optimize model, consequently, optimizing model is skiped.\033[0m"
)
print("\nConverting model...")
for param in args.__dict__:
if args.__dict__[param]:
# 用""框起参数,解决路径中的空格问题
convertCmd += " --" + param + "=" + '"' + str(args.__dict__[param]) + '"'
convertCmd += " --" + param + "=" + '"' + str(args.__dict__[
param]) + '"'
os.system(convertCmd)
try:
file = os.listdir(outputDir)

examples/application/js/converter/fuseOps.py Normal file → Executable file

@@ -1,20 +1,12 @@
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
def opListFuse(ops):
""" 算子融合 """
fuseOpList = [
'relu',
'relu6',
'leaky_relu',
'scale',
'sigmoid',
'hard_sigmoid',
'pow',
'sqrt',
'tanh',
'hard_swish',
'dropout'
'relu', 'relu6', 'leaky_relu', 'scale', 'sigmoid', 'hard_sigmoid',
'pow', 'sqrt', 'tanh', 'hard_swish', 'dropout'
]
# 判断op是否为单节点
@@ -37,39 +29,41 @@ def opListFuse(ops):
else:
return False
for index in reversed(range(len(ops))):
if index > 0:
op = ops[index]
# 兼容paddlelite 算子融合字段
# 兼容 Paddle Lite 算子融合字段
if 'act_type' in op['attrs']:
name = op['attrs']['act_type']
op['attrs']['fuse_opt'] = {}
op['attrs']['fuse_opt'][name] = {}
if name == 'hard_swish':
op['attrs']['fuse_opt'][name]['offset'] = op['attrs']['hard_swish_offset']
op['attrs']['fuse_opt'][name]['scale'] = op['attrs']['hard_swish_scale']
op['attrs']['fuse_opt'][name]['threshold'] = op['attrs']['hard_swish_threshold']
op['attrs']['fuse_opt'][name]['offset'] = op['attrs'][
'hard_swish_offset']
op['attrs']['fuse_opt'][name]['scale'] = op['attrs'][
'hard_swish_scale']
op['attrs']['fuse_opt'][name]['threshold'] = op['attrs'][
'hard_swish_threshold']
if name == 'relu6':
op['attrs']['fuse_opt'][name]['threshold'] = op['attrs']['fuse_brelu_threshold']
op['attrs']['fuse_opt'][name]['threshold'] = op['attrs'][
'fuse_brelu_threshold']
for fuse in fuseOpList:
if op['type'] == fuse:
prevOp = ops[index - 1]
if opExistSingleNode(prevOp['outputs']['Out'][0]) and len(prevOp['outputs']['Out']) == 1 :
if opExistSingleNode(prevOp['outputs']['Out'][0]) and len(
prevOp['outputs']['Out']) == 1:
prevOp['attrs']['fuse_opt'] = {}
if 'fuse_opt' in op['attrs']:
prevOp['attrs']['fuse_opt'] = op['attrs']['fuse_opt']
prevOp['attrs']['fuse_opt'] = op['attrs'][
'fuse_opt']
del op['attrs']['fuse_opt']
prevOp['attrs']['fuse_opt'][fuse] = op['attrs']
prevOp['outputs']['Out'] = op['outputs']['Out']
del ops[index]


@@ -1,5 +1,5 @@
# PaddleClas 量化模型在 A311D 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 PaddleClas 量化模型到 A311D 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PaddleClas 量化模型到 A311D 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# PaddleClas 量化模型在 RV1126 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 PaddleClas 量化模型到 RV1126 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PaddleClas 量化模型到 RV1126 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# PP-YOLOE 量化模型在 A311D 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 PP-YOLOE 量化模型到 A311D 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 A311D 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# PP-YOLOE 量化模型在 RV1126 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 PP-YOLOE 量化模型到 RV1126 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-YOLOE 量化模型到 RV1126 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# YOLOv5 量化模型在 A311D 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 YOLOv5 量化模型到 A311D 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 YOLOv5 量化模型到 A311D 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# YOLOv5 量化模型在 RV1126 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 YOLOv5 量化模型到 RV1126 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 YOLOv5 量化模型到 RV1126 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# PP-LiteSeg 量化模型在 A311D 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 PP-LiteSeg 量化模型到 A311D 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-LiteSeg 量化模型到 A311D 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -1,5 +1,5 @@
# PP-LiteSeg 量化模型在 RV1126 上的部署
目前 FastDeploy 已经支持基于 PaddleLite 部署 PP-LiteSeg 量化模型到 RV1126 上。
目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-LiteSeg 量化模型到 RV1126 上。
模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)


@@ -372,7 +372,7 @@ struct FASTDEPLOY_DECL RuntimeOption {
float ipu_available_memory_proportion = 1.0;
bool ipu_enable_half_partial = false;
// ======Only for Paddle-Lite Backend=====
// ======Only for Paddle Lite Backend=====
// 0: LITE_POWER_HIGH 1: LITE_POWER_LOW 2: LITE_POWER_FULL
// 3: LITE_POWER_NO_BIND 4: LITE_POWER_RAND_HIGH
// 5: LITE_POWER_RAND_LOW