Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2025-10-04 16:22:57 +08:00)
[Doc]Update English version of some documents (#1083)
* First commit
* Add one missing translation
* deleted: docs/en/quantize.md
* Update one translation
* Update en version
* Update one translation in code
* Standardize one writing
* Standardize one writing
* Update some en version
* Fix a grammar problem
* Update en version for api/vision result
* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop
* Check the link in README in vision_results/ to the en documents
* Modify a title
* Add link to serving/docs/
* Finish translation of demo.md
* Update english version of serving/docs/
* Update title of readme
* Update some links
* Modify a title
* Update some links
* Update en version of java android README
* Modify some titles
* Modify some titles
* Modify some titles
* modify article to document
* update some english version of documents in examples
* Add english version of documents in examples/visions
* Sync to current branch
* Add english version of documents in examples
* Add english version of documents in examples
* Add english version of documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
* Update some documents in examples
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.

-- 1. Environment of software and hardware should meet the requirements. Please refer to[FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. Based on the develop environment, download the precompiled deployment library and samples code. Please refer to [FastDeploy Precompiled Library](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Based on the develop environment, download the precompiled deployment library and samples code. Please refer to [FastDeploy Precompiled Library](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides deployment examples in which `seq_cls_inferve.py` quickly finishes text classification tasks on CPU/GPU.
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.

-- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides deployment examples in which `seq_cls_inferve.py` quickly finishes text classification tasks on CPU/GPU.
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.

-- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands of serving images.
+- 1. Refer to [FastDeploy Serving Deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands of serving images.

## Prepare Models
@@ -174,4 +174,4 @@ entity: 华夏 label: LOC pos: [14, 15]
```

## Configuration Modification
-The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. please refer to the [configuration document.](../../../../serving/docs/zh_CN/model_configuration.md)
+The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. Please refer to the [configuration document](../../../../serving/docs/EN/model_configuration-en.md).
@@ -4,7 +4,7 @@
在服务化部署前,需确认

-- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
+- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../serving/README_CN.md)

## 准备模型
@@ -19,7 +19,7 @@ English | [简体中文](README_CN.md)
## Export Deployment Models

-Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
+Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).

## Download Pre-trained Models
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps need to be confirmed.

-- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. FastDeploy Python whl pacakage needs installation. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. The FastDeploy Python whl package needs to be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides an example in which `infer.py` quickly completes UIE model deployment on CPU/GPU, as well as on CPU with OpenVINO acceleration.
@@ -348,7 +348,7 @@ fd.text.uie.UIEModel(model_file,
                     schema_language=SchemaLanguage.ZH)
```

-UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).`vocab_file`refers to the vocabulary file. The vocabulary of the UIE model UIE can be downloaded in [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)
+UIEModel loading and initialization. Among them, `model_file` and `params_file` are the Paddle inference files exported from the trained model. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2). `vocab_file` refers to the vocabulary file; the vocabulary of the UIE model can be downloaded from the [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py).

**Parameter**
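For readers following this hunk, the initialization it describes maps onto a few lines of Python. The sketch below is illustrative only: the paths and schema are placeholders, and the `from fastdeploy.text import SchemaLanguage` import is an assumption inferred from the `SchemaLanguage.ZH` argument quoted above.

```python
# Minimal sketch of the UIEModel loading described above (paths and schema are
# placeholders; keyword names follow the fd.text.uie.UIEModel call in this hunk).
import fastdeploy as fd
from fastdeploy.text import SchemaLanguage  # assumed import for SchemaLanguage.ZH

model_dir = "uie-base"  # hypothetical directory holding the exported inference model
uie = fd.text.uie.UIEModel(
    model_file=f"{model_dir}/inference.pdmodel",
    params_file=f"{model_dir}/inference.pdiparams",
    vocab_file=f"{model_dir}/vocab.txt",
    schema=["时间", "选手"],            # entities to extract; replace with your own schema
    schema_language=SchemaLanguage.ZH)   # Chinese schema, as in the quoted call

# Placeholder input text; predict() returns the extracted entities per schema key.
print(uie.predict(["2月8日谷爱凌夺得北京冬奥会第三金"]))
```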
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
Before serving deployment, you need to confirm:

-- 1. You can refer to [FastDeploy serving deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands for serving images.
+- 1. You can refer to [FastDeploy serving deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands for serving images.

## Prepare models
@@ -143,4 +143,4 @@ results:

## Configuration Modification

-The current configuration is by default to run the paddle engine on CPU. If you want to run on CPU/GPU or other inference engines, modifying the configuration is needed.Please refer to [Configuration Document](../../../../serving/docs/zh_CN/model_configuration.md).
+The current configuration runs the Paddle engine on CPU by default. If you want to run on CPU/GPU or other inference engines, you need to modify the configuration. Please refer to the [Configuration Document](../../../../serving/docs/EN/model_configuration-en.md).
@@ -4,7 +4,7 @@
在服务化部署前,需确认

-- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
+- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../serving/README_CN.md)

## 准备模型
@@ -32,5 +32,5 @@ Targeted at the vision suite of PaddlePaddle and external popular models, FastDe
- Model Loading
- Calling the `predict` interface

-When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/cn/faq/how_to_change_backend.md).
+When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/en/faq/how_to_change_backend.md).
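The two bullets above (model loading, calling `predict`) plus the one-click backend switch map onto a short Python flow. The sketch below is not taken verbatim from the repository: paths are placeholders, and it assumes the `RuntimeOption` / `PaddleClasModel` APIs used by the FastDeploy examples elsewhere in this diff.

```python
# Sketch: load a vision model, switch the inference backend, and call predict.
# Paths below are placeholders (a downloaded ResNet50_vd_infer folder).
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()           # or option.use_cpu()
option.use_trt_backend()   # one-click backend switch, e.g. use_ort_backend() / use_openvino_backend()

model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option)

im = cv2.imread("ILSVRC2012_val_00000010.jpeg")
print(model.predict(im))   # ClassifyResult with label_ids / scores
```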
@@ -1,4 +1,4 @@
-[English](README_EN.md) | 简体中文
+[English](README.md) | 简体中文
# 视觉模型部署

本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
@@ -21,7 +21,7 @@ Now FastDeploy supports the deployment of the following models

## Prepare PaddleClas Deployment Model

-For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
+For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA).

Attention: The model exported by PaddleClas contains two files, `inference.pdmodel` and `inference.pdiparams`. In addition, the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file provided by PaddleClas is required for deployment: FastDeploy reads the preprocessing information it needs during inference from this yaml file. Developers can download the file directly, but they need to adjust its configuration parameters to their own needs; refer to the infer section of the PaddleClas training [config](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet).
@@ -2,7 +2,7 @@ English | [简体中文](README_CN.md)
# Deploy PaddleClas Quantification Model on A311D
Now FastDeploy supports the deployment of PaddleClas quantification model to A311D based on Paddle Lite.

-For model quantification and download, refer to [model quantification](../quantize/README.md)
+For model quantification and download, refer to [model quantification](../quantize/README.md).

## Detailed Deployment Tutorials
@@ -1,26 +1,27 @@
-# PaddleClas A311D 开发板 C++ 部署示例
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 A311D 上的部署推理加速。
+English | [简体中文](README_CN.md)
+# PaddleClas A311D Development Board C++ Deployment Example
+
+`infer.cc` in this directory can help you quickly complete the inference acceleration of PaddleClas quantization model deployment on A311D.

-## 部署准备
+## Deployment Preparations
-### FastDeploy 交叉编译环境准备
+### FastDeploy Cross-compile Environment Preparations
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).

-### 量化模型准备
+### Quantization Model Preparations
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+1. You can directly use the quantized model provided by FastDeploy for deployment.
-2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
+2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model by yourself, and use the generated quantized model for deployment. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md).

-## 在 A311D 上部署量化后的 ResNet50_Vd 分类模型
+## Deploying the Quantized ResNet50_Vd Classification Model on A311D
-请按照以下步骤完成在 A311D 上部署 ResNet50_Vd 量化模型:
+Please follow these steps to complete the deployment of the ResNet50_Vd quantization model on A311D.
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).

-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run this line:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
```

-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment in the current path.
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
@@ -31,26 +32,26 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
cp -r ILSVRC2012_val_00000010.jpeg images
```

-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following lines:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After success, an install folder will be created with a running demo and the libraries required for deployment.
```

-5. 基于 adb 工具部署 ResNet50 分类模型到晶晨 A311D,可使用如下命令:
+5. Deploy the ResNet50 classification model to the Amlogic A311D based on adb. You can run the following lines:
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# The arguments are: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```

-部署成功后运行结果如下:
+The output after successful deployment is:

<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">

-需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on A311D needs to be a quantized model. For model quantization, refer to [Model Quantization](../../../../../../docs/en/quantize.md).
@@ -0,0 +1,57 @@
[English](README.md) | 简体中文

# PaddleClas A311D 开发板 C++ 部署示例

本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 A311D 上的部署推理加速。

## 部署准备

### FastDeploy 交叉编译环境准备

1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)

### 量化模型准备

1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)

更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)

## 在 A311D 上部署量化后的 ResNet50_Vd 分类模型

请按照以下步骤完成在 A311D 上部署 ResNet50_Vd 量化模型:

1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)

2. 将编译后的库拷贝到当前目录,可使用如下命令:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
```

3. 在当前路径下载部署所需的模型和示例图片:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
cp -r resnet50_vd_ptq models
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
cp -r ILSVRC2012_val_00000010.jpeg images
```

4. 编译部署示例,可使入如下命令:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
```

5. 基于 adb 工具部署 ResNet50 分类模型到晶晨 A311D,可使用如下命令:
```bash
# 进入 install 目录
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```

部署成功后运行结果如下:

<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">

需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
@@ -148,4 +148,4 @@ set(FastDeploy_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../libs/fastdeploy-android
## More Reference Documents
For more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to:
- [Use FastDeploy Java SDK in Android](../../../../../java/android/)
-- [Use FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [Use FastDeploy C++ SDK in Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation.

-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

Taking ResNet50_vd inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -81,4 +81,4 @@ PaddleClas model loading and initialization, where model_file and params_file ar
- [Model Description](../../)
- [Python Deployment](../python)
- [Visual Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.

-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. Install the FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install the FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

This directory provides examples in which `infer.py` quickly finishes the deployment of ResNet50_vd on CPU/GPU and on GPU accelerated by TensorRT. The script is as follows
@@ -77,4 +77,4 @@ PaddleClas model loading and initialization, where model_file and params_file ar
- [PaddleClas Model Description](..)
- [PaddleClas C++ Deployment](../cpp)
- [Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
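The hunk header above mentions PaddleClas model loading and initialization with `model_file` and `params_file`, and the "Model prediction results" link describes the `ClassifyResult` these examples print. A hedged sketch of that flow follows; the paths are placeholders, and the `label_ids`/`scores` fields are assumed from the sample outputs shown later in this diff.

```python
# Sketch: initialize PaddleClasModel from the exported inference files plus the
# inference_cls.yaml preprocessing config, then read the prediction result.
import cv2
import fastdeploy as fd

model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",    # model_file
    "ResNet50_vd_infer/inference.pdiparams",  # params_file
    "ResNet50_vd_infer/inference_cls.yaml")   # preprocessing config exported with the model

result = model.predict(cv2.imread("ILSVRC2012_val_00000010.jpeg"))
# Sample outputs in this diff look like: ClassifyResult(label_ids: 153, scores: 0.684570,)
for label_id, score in zip(result.label_ids, result.scores):
    print(label_id, score)
```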
@@ -1,36 +1,37 @@
-# PaddleClas 量化模型 C++部署示例
-本目录下提供的`infer.cc`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PaddleClas Quantitative Model C++ Deployment Example
+
+`infer.cc` in this directory can help you quickly complete the inference acceleration of PaddleClas quantization model deployment on CPU/GPU.

-## 部署准备
+## Deployment Preparations
-### FastDeploy环境准备
+### FastDeploy Environment Preparations
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. For the installation of the FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

-### 量化模型准备
+### Quantized Model Preparations
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model by yourself, and use the generated quantized model for deployment. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

-## 以量化后的ResNet50_Vd模型为例, 进行部署,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+## Take the Quantized ResNet50_Vd Model as an Example for Deployment; FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0)
-在本目录执行如下命令即可完成编译,以及量化模型部署.
+Run the following commands in this directory to compile and deploy the quantized model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from the `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

-#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
+# Download the ResNet50_Vd quantized model files and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

-# 在CPU上使用ONNX Runtime推理量化模型
+# Run the quantized model with ONNX Runtime on CPU.
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 0
-# 在GPU上使用TensorRT推理量化模型
+# Run the quantized model with TensorRT on GPU.
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 1
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run the quantized model with Paddle-TensorRT on GPU.
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 2
```
@@ -0,0 +1,37 @@
[English](README.md) | 简体中文

# PaddleClas 量化模型 C++部署示例

本目录下提供的`infer.cc`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.

## 部署准备

### FastDeploy环境准备

- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

### 量化模型准备

- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)

## 以量化后的ResNet50_Vd模型为例, 进行部署,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)

在本目录执行如下命令即可完成编译,以及量化模型部署.
```bash
mkdir build
cd build
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

# 在CPU上使用ONNX Runtime推理量化模型
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 0
# 在GPU上使用TensorRT推理量化模型
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 1
# 在GPU上使用Paddle-TensorRT推理量化模型
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 2
```
@@ -1,31 +1,32 @@
-# PaddleClas 量化模型 Python部署示例
-本目录下提供的`infer.py`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PaddleClas Quantitative Model Python Deployment Example
+
+`infer.py` in this directory can help you quickly complete the inference acceleration of PaddleClas quantization model deployment on CPU/GPU.

-## 部署准备
+## Deployment Preparations
-### FastDeploy环境准备
+### FastDeploy Environment Preparations
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. For the installation of the FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).

-### 量化模型准备
+### Quantized Model Preparations
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model by yourself, and use the generated quantized model for deployment. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

-## 以量化后的ResNet50_Vd模型为例, 进行部署
+## Take the Quantized ResNet50_Vd Model as an Example for Deployment
```bash
-#下载部署示例代码
+# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/classification/paddleclas/quantize/python

-#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
+# Download the ResNet50_Vd quantized model files and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

-# 在CPU上使用ONNX Runtime推理量化模型
+# Run the quantized model with ONNX Runtime on CPU.
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device cpu --backend ort
-# 在GPU上使用TensorRT推理量化模型
+# Run the quantized model with TensorRT on GPU.
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend trt
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run the quantized model with Paddle-TensorRT on GPU.
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend pptrt
```
@@ -0,0 +1,32 @@
[English](README.md) | 简体中文

# PaddleClas 量化模型 Python部署示例

本目录下提供的`infer.py`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.

## 部署准备

### FastDeploy环境准备

- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

### 量化模型准备

- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)

## 以量化后的ResNet50_Vd模型为例, 进行部署

```bash
#下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/classification/paddleclas/quantize/python

#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

# 在CPU上使用ONNX Runtime推理量化模型
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device cpu --backend ort
# 在GPU上使用TensorRT推理量化模型
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend trt
# 在GPU上使用Paddle-TensorRT推理量化模型
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend pptrt
```
@@ -1,28 +1,29 @@
-# PaddleClas C++部署示例
+English | [简体中文](README_CN.md)
+
+# PaddleClas C++ Deployment Example

-本目录下用于展示 ResNet50_vd 模型在RKNPU2上的部署,以下的部署过程以 ResNet50_vd 为例子。
+This directory demonstrates the deployment of the ResNet50_vd model on RKNPU2. The following deployment process takes ResNet50_vd as an example.

-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:

-1. 软硬件环境满足要求
+1. The hardware and software environment meets the requirements.
-2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+2. Download the pre-compiled deployment library or compile the FastDeploy repository from scratch according to the development environment.

-以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+For the above steps, please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).

-## 生成基本目录文件
+## Generate Basic Directory Files

-该例程由以下几个部分组成
+The routine consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
+├── build # Compile folder
-├── images # 存放图片的文件夹
+├── images # Folder for images
├── infer.cc
-├── ppclas_model_dir # 存放模型文件的文件夹
+├── ppclas_model_dir # Folder for model files
-└── thirdpartys # 存放sdk的文件夹
+└── thirdpartys # Folder for the SDK
```

-首先需要先生成目录结构
+First, build the directory structure:
```bash
mkdir build
mkdir images
|
|||||||
mkdir thirdpartys
|
mkdir thirdpartys
|
||||||
```
|
```
|
||||||
|
|
||||||
## 编译
|
## Compile
|
||||||
|
|
||||||
### 编译并拷贝SDK到thirdpartys文件夹
|
### Compile and Copy SDK to folder thirdpartys
|
||||||
|
|
||||||
请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
|
Please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile SDK.After compiling, the fastdeploy-0.0.3 directory will be created in the build directory, please move it to the thirdpartys directory.
|
||||||
fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
|
|
||||||
|
|
||||||
### 拷贝模型文件,以及配置文件至model文件夹
|
### Copy model and configuration files to folder Model
|
||||||
在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
|
In the process of Paddle dynamic map model -> Paddle static map model -> ONNX mdoel, ONNX file and the corresponding yaml configuration file will be generated. Please move the configuration file to the folder model.
|
||||||
转换为RKNN后的模型文件也需要拷贝至model,转换方案: ([ResNet50_vd RKNN模型](../README.md))。
|
After converting to RKNN, the model file also needs to be copied to folder model. Please refer to ([ResNet50_vd RKNN model](../README.md)).
|
||||||
|
|
||||||
### 准备测试图片至image文件夹
|
### Prepare Test Images to folder image
|
||||||
```bash
|
```bash
|
||||||
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||||
```
|
```
|
||||||
|
|
||||||
### 编译example
|
### Compile example
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cd build
|
cd build
|
||||||
@@ -55,24 +55,23 @@ make -j8
make install
```

-## 运行例程
+## Running the Example

```bash
cd ./build/install
./rknpu_test ./ppclas_model_dir ./images/ILSVRC2012_val_00000010.jpeg
```

-## 运行结果展示
+## Results
ClassifyResult(
label_ids: 153,
scores: 0.684570,
)

-## 注意事项
+## Notes
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。
+The model input on RKNPU must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call DisablePermute (C++) or disable_permute (Python) first to disable the data-format conversion in the preprocessing stage.

-## 其它文档
+## Other Documents
-- [ResNet50_vd Python 部署](../python)
+- [ResNet50_vd Python Deployment](../python)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [Prediction Results](../../../../../../docs/api/vision_results/)
-- [转换ResNet50_vd RKNN模型文档](../README.md)
+- [Converting the ResNet50_vd RKNN Model](../README.md)
@@ -0,0 +1,77 @@
[English](README.md) | 简体中文

# PaddleClas C++部署示例

本目录下用于展示 ResNet50_vd 模型在RKNPU2上的部署,以下的部署过程以 ResNet50_vd 为例子。

在部署前,需确认以下两个步骤:

1. 软硬件环境满足要求
2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库

以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现

## 生成基本目录文件

该例程由以下几个部分组成
```text
.
├── CMakeLists.txt
├── build # 编译文件夹
├── images # 存放图片的文件夹
├── infer.cc
├── ppclas_model_dir # 存放模型文件的文件夹
└── thirdpartys # 存放sdk的文件夹
```

首先需要先生成目录结构
```bash
mkdir build
mkdir images
mkdir ppclas_model_dir
mkdir thirdpartys
```

## 编译

### 编译并拷贝SDK到thirdpartys文件夹

请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.

### 拷贝模型文件,以及配置文件至model文件夹
在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
转换为RKNN后的模型文件也需要拷贝至model,转换方案: ([ResNet50_vd RKNN模型](../README.md))。

### 准备测试图片至image文件夹
```bash
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
```

### 编译example

```bash
cd build
cmake ..
make -j8
make install
```

## 运行例程

```bash
cd ./build/install
./rknpu_test ./ppclas_model_dir ./images/ILSVRC2012_val_00000010.jpeg
```

## 运行结果展示
ClassifyResult(
label_ids: 153,
scores: 0.684570,
)

## 注意事项
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。

## 其它文档
- [ResNet50_vd Python 部署](../python)
- [模型预测结果说明](../../../../../../docs/api/vision_results/)
- [转换ResNet50_vd RKNN模型文档](../README.md)
@@ -1,23 +1,24 @@
-# PaddleClas Python部署示例
+English | [简体中文](README_CN.md)
+
+# PaddleClas Python Deployment Example

-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+- 1. The hardware and software environment meets the requirements; please refer to [Environment Requirements for FastDeploy](../../../../../../docs/en/build_and_install/rknpu2.md).

-本目录下提供`infer.py`快速完成 ResNet50_vd 在RKNPU上部署的示例。执行如下脚本即可完成
+This directory provides `infer.py` for a quick example of ResNet50_vd deployment on RKNPU. This can be done by running the following script.

```bash
-# 下载部署示例代码
+# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/classification/paddleclas/rknpu2/python

-# 下载图片
+# Download the test image.
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

-# 推理
+# Inference.
python3 infer.py --model_file ./ResNet50_vd_infer/ResNet50_vd_infer_rk3588.rknn --config_file ResNet50_vd_infer/inference_cls.yaml --image ILSVRC2012_val_00000010.jpeg

-# 运行完成后返回结果如下所示
+# The output after running is shown below.
ClassifyResult(
label_ids: 153,
scores: 0.684570,
@@ -25,11 +26,10 @@ scores: 0.684570,
```

-## 注意事项
+## Notes
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。
+The model input on RKNPU must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call DisablePermute (C++) or disable_permute (Python) first to disable the data-format conversion in the preprocessing stage.

-## 其它文档
+## Other Documents
-- [ResNet50_vd C++部署](../cpp)
+- [ResNet50_vd C++ Deployment](../cpp)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [Prediction Results](../../../../../../docs/api/vision_results/)
-- [转换ResNet50_vd RKNN模型文档](../README.md)
+- [Converting the ResNet50_vd RKNN Model](../README.md)
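The note above says RKNPU2 expects NHWC input and that normalization is baked into the converted RKNN model, so preprocessing must skip the layout permute. Below is a rough Python sketch of that call; the `use_rknpu2()`, `ModelFormat.RKNN`, and `preprocessor.disable_permute()` names follow the FastDeploy RKNPU2 examples as understood here and should be treated as assumptions, with paths copied from the commands above.

```python
# Sketch: load the converted RKNN model on RKNPU2 and disable the permute step,
# as the note above requires (normalization is already embedded in the RKNN model).
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_rknpu2()  # assumed helper selecting the RKNPU2 backend

model = fd.vision.classification.PaddleClasModel(
    "./ResNet50_vd_infer/ResNet50_vd_infer_rk3588.rknn",
    "",                                        # RKNN models have no separate params file
    "./ResNet50_vd_infer/inference_cls.yaml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)          # assumed enum for the RKNN format

model.preprocessor.disable_permute()           # keep NHWC input, per the note above
print(model.predict(cv2.imread("ILSVRC2012_val_00000010.jpeg")))
```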
@@ -0,0 +1,35 @@
[English](README.md) | 简体中文

# PaddleClas Python部署示例

在部署前,需确认以下两个步骤

- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)

本目录下提供`infer.py`快速完成 ResNet50_vd 在RKNPU上部署的示例。执行如下脚本即可完成

```bash
# 下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/classification/paddleclas/rknpu2/python

# 下载图片
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg

# 推理
python3 infer.py --model_file ./ResNet50_vd_infer/ResNet50_vd_infer_rk3588.rknn --config_file ResNet50_vd_infer/inference_cls.yaml --image ILSVRC2012_val_00000010.jpeg

# 运行完成后返回结果如下所示
ClassifyResult(
label_ids: 153,
scores: 0.684570,
)
```

## 注意事项
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。

## 其它文档
- [ResNet50_vd C++部署](../cpp)
- [模型预测结果说明](../../../../../../docs/api/vision_results/)
- [转换ResNet50_vd RKNN模型文档](../README.md)
@@ -2,7 +2,7 @@ English | [简体中文](README_CN.md)
# PaddleClas Quantification Model Deployment on RV1126
FastDeploy currently supports the deployment of PaddleClas quantification models to RV1126 based on Paddle Lite.

-For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md)
+For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md).

## Detailed Deployment Tutorials
@@ -1,26 +1,27 @@
-# PaddleClas RV1126 开发板 C++ 部署示例
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 RV1126 上的部署推理加速。
+English | [简体中文](README_CN.md)
+# PaddleClas RV1126 Development Board C++ Deployment Example
+
+`infer.cc` in this directory can help you quickly complete the inference acceleration of PaddleClas quantization model deployment on RV1126.

-## 部署准备
+## Deployment Preparations
-### FastDeploy 交叉编译环境准备
+### FastDeploy Cross-compile Environment Preparations
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+1. For the software and hardware environment, and the cross-compile environment, please refer to [Preparations for FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/rv1126.md#Cross-compilation-environment-construction).

-### 量化模型准备
+### Model Preparations
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+1. You can directly use the quantized model provided by FastDeploy for deployment.
-2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
+2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model by yourself, and use the generated quantized model for deployment. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md).

-## 在 RV1126 上部署量化后的 ResNet50_Vd 分类模型
+## Deploying the Quantized ResNet50_Vd Classification Model on RV1126
-请按照以下步骤完成在 RV1126 上部署 ResNet50_Vd 量化模型:
+Please follow these steps to complete the deployment of the ResNet50_Vd quantization model on RV1126.
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/rv1126.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).

-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run this line:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
```

-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment in the current path.
```bash
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
mkdir models && mkdir images
@@ -31,26 +32,26 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
|
|||||||
cp -r ILSVRC2012_val_00000010.jpeg images
|
cp -r ILSVRC2012_val_00000010.jpeg images
|
||||||
```
|
```
|
||||||
|
|
||||||
4. 编译部署示例,可使入如下命令:
|
4. Compile the deployment example. You can run the following lines:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
|
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
|
||||||
mkdir build && cd build
|
mkdir build && cd build
|
||||||
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
|
||||||
make -j8
|
make -j8
|
||||||
make install
|
make install
|
||||||
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
# After successful compilation, an install folder is generated, containing a runnable demo and the libraries required for deployment.
|
||||||
```
|
```
|
||||||
|
|
||||||
5. 基于 adb 工具部署 ResNet50 分类模型到 Rockchip RV1126,可使用如下命令:
|
5. Deploy the ResNet50 classification model to Rockchip RV1126 based on adb. You can run the following lines:
|
||||||
```bash
|
```bash
|
||||||
# 进入 install 目录
|
# Go to the install directory.
|
||||||
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/build/install/
|
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/build/install/
|
||||||
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
|
||||||
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
|
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
|
||||||
```
|
```
|
||||||
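Here `$DEVICE_ID` is the board serial number reported by `adb devices`. A minimal sketch, assuming a single connected board whose serial is the placeholder `1234567890`:

```bash
# List connected devices, then pass the reported serial as DEVICE_ID (placeholder value shown)
adb devices
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 1234567890
```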
|
|
||||||
部署成功后运行结果如下:
|
The output is:
|
||||||
|
|
||||||
<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">
|
||||||
|
|
||||||
需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
Please note that the model deployed on RV1126 needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md).
|
||||||
|
@@ -0,0 +1,57 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PaddleClas RV1126 开发板 C++ 部署示例
|
||||||
|
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 RV1126 上的部署推理加速。
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy 交叉编译环境准备
|
||||||
|
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
|
||||||
|
|
||||||
|
### 量化模型准备
|
||||||
|
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
|
||||||
|
2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
|
||||||
|
|
||||||
|
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
||||||
|
|
||||||
|
## 在 RV1126 上部署量化后的 ResNet50_Vd 分类模型
|
||||||
|
请按照以下步骤完成在 RV1126 上部署 ResNet50_Vd 量化模型:
|
||||||
|
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
||||||
|
|
||||||
|
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
||||||
|
```bash
|
||||||
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
|
||||||
|
```
|
||||||
|
|
||||||
|
3. 在当前路径下载部署所需的模型和示例图片:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
|
||||||
|
mkdir models && mkdir images
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
|
||||||
|
tar -xvf resnet50_vd_ptq.tar
|
||||||
|
cp -r resnet50_vd_ptq models
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
|
||||||
|
cp -r ILSVRC2012_val_00000010.jpeg images
|
||||||
|
```
|
||||||
|
|
||||||
|
4. 编译部署示例,可使入如下命令:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
|
||||||
|
mkdir build && cd build
|
||||||
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
|
||||||
|
make -j8
|
||||||
|
make install
|
||||||
|
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
||||||
|
```
|
||||||
|
|
||||||
|
5. 基于 adb 工具部署 ResNet50 分类模型到 Rockchip RV1126,可使用如下命令:
|
||||||
|
```bash
|
||||||
|
# 进入 install 目录
|
||||||
|
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/build/install/
|
||||||
|
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
||||||
|
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
|
||||||
|
```
|
||||||
|
|
||||||
|
部署成功后运行结果如下:
|
||||||
|
|
||||||
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">
|
||||||
|
|
||||||
|
需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
@@ -3,7 +3,7 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before the service deployment, please confirm
|
Before the service deployment, please confirm
|
||||||
|
|
||||||
- 1. Refer to [FastDeploy Service Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
|
- 1. Refer to [FastDeploy Service Deployment](../../../../../serving/README.md) for software and hardware environment requirements and image pull commands.
|
||||||
|
|
||||||
|
|
||||||
## Start the Service
|
## Start the Service
|
||||||
@@ -39,7 +39,7 @@ CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --bac
|
|||||||
```
|
```
|
||||||
>> **Attention**:
|
>> **Attention**:
|
||||||
|
|
||||||
>> To pull images from other hardware, refer to [Service Deployment Master Document](../../../../../serving/README_CN.md)
|
>> To pull images from other hardware, refer to [Service Deployment Master Document](../../../../../serving/README.md)
|
||||||
|
|
||||||
>> If "Address already in use" appears when running fastdeployserver to start the service, use `--grpc-port` to specify the port number and change the request port number in the client demo.
|
>> If "Address already in use" appears when running fastdeployserver to start the service, use `--grpc-port` to specify the port number and change the request port number in the client demo.
|
||||||
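A minimal sketch of that workaround, assuming port 9000 is free; the other launch flags stay as in the command shown earlier:

```bash
# Hypothetical example: start fastdeployserver on an alternative gRPC port
CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --grpc-port=9000
# The client demo must then send its requests to port 9000 instead of the default.
```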
|
|
||||||
@@ -76,4 +76,4 @@ output_name: CLAS_RESULT
|
|||||||
|
|
||||||
## Configuration Change
|
## Configuration Change
|
||||||
|
|
||||||
The current default configuration runs the TensorRT engine on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for more information.
|
The current default configuration runs the TensorRT engine on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/EN/model_configuration-en.md) for more information.
|
||||||
|
@@ -3,7 +3,7 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, the following step needs to be confirmed:
|
Before deployment, the following step needs to be confirmed:
|
||||||
|
|
||||||
- 1. Hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirement](../../../../../../docs/en/build_and_install/sophgo.md)
|
- 1. Hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirement](../../../../../../docs/en/build_and_install/sophgo.md).
|
||||||
|
|
||||||
`infer.py` in this directory provides a quick example of deployment of the ResNet50_vd model on SOPHGO TPU. Please run the following script:
|
`infer.py` in this directory provides a quick example of deployment of the ResNet50_vd model on SOPHGO TPU. Please run the following script:
|
||||||
|
|
||||||
|
@@ -1,10 +1,10 @@
|
|||||||
English | [简体中文](README_CN.md)
|
English | [简体中文](README_CN.md)
|
||||||
# ResNet Ready-to-deploy Model
|
# ResNet Ready-to-deploy Model
|
||||||
|
|
||||||
- ResNet Deployment is based on the code of [Torchvision](https://github.com/pytorch/vision/tree/v0.12.0) and [Pre-trained Models on ImageNet2012](https://github.com/pytorch/vision/tree/v0.12.0)。
|
- ResNet Deployment is based on the code of [Torchvision](https://github.com/pytorch/vision/tree/v0.12.0) and [Pre-trained Models on ImageNet2012](https://github.com/pytorch/vision/tree/v0.12.0).
|
||||||
|
|
||||||
- (1)Deployment is conducted after [Export ONNX Model](#导出ONNX模型) by the *.pt provided by [Official Repository](https://github.com/pytorch/vision/tree/v0.12.0);
|
- (1)Deployment can be conducted after the *.pt provided by the [Official Repository](https://github.com/pytorch/vision/tree/v0.12.0) is converted through [Export the ONNX Model](#Export-the-ONNX-Model);
|
||||||
- (2)The ResNet Model trained by personal data should [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Tutorials](#详细部署文档) for deployment.
|
- (2)A ResNet model trained on your own data should first be exported to ONNX (see [Export the ONNX Model](#Export-the-ONNX-Model)). Please refer to [Detailed Deployment Tutorials](#Detailed-Deployment-Documents) for deployment.
|
||||||
|
|
||||||
|
|
||||||
## Export the ONNX Model
|
## Export the ONNX Model
|
||||||
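As a hedged illustration only (the output file name, input shape, and opset are assumptions, not taken from this document), a torchvision v0.12 ResNet50 checkpoint can typically be converted with `torch.onnx.export`:

```bash
# Hedged sketch: export torchvision ResNet50 to ONNX (file name and opset are assumptions)
python -c "
import torch, torchvision
model = torchvision.models.resnet50(pretrained=True).eval()
torch.onnx.export(model, torch.randn(1, 3, 224, 224), 'resnet50.onnx', opset_version=11)
"
```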
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation.
|
Before deployment, two steps require confirmation.
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
|
|
||||||
Taking ResNet50 inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking ResNet50 inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -33,7 +33,7 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
|
|||||||
```
|
```
|
||||||
|
|
||||||
The above command works for Linux or MacOS. Refer to:
|
The above command works for Linux or MacOS. Refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows
|
||||||
|
|
||||||
## ResNet C++ Interface
|
## ResNet C++ Interface
|
||||||
|
|
||||||
@@ -74,4 +74,4 @@ fastdeploy::vision::classification::ResNet(
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
|
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of ResNet50_vd on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of ResNet50_vd on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -70,4 +70,4 @@ fd.vision.classification.ResNet(model_file, params_file, runtime_option=None, mo
|
|||||||
- [ResNet Model Description](..)
|
- [ResNet Model Description](..)
|
||||||
- [ResNet C++ Deployment](../cpp)
|
- [ResNet C++ Deployment](../cpp)
|
||||||
- [Model prediction results](../../../../../docs/api/vision_results/)
|
- [Model prediction results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -2,8 +2,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
# YOLOv5Cls Ready-to-deploy Model
|
# YOLOv5Cls Ready-to-deploy Model
|
||||||
|
|
||||||
- YOLOv5Cls v6.2 model deployment is based on [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2) and [Pre-trained Models on ImageNet](https://github.com/ultralytics/yolov5/releases/tag/v6.2)
|
- YOLOv5Cls v6.2 model deployment is based on [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2) and [Pre-trained Models on ImageNet](https://github.com/ultralytics/yolov5/releases/tag/v6.2).
|
||||||
- (1)The *-cls.pt model provided by [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v6.2) can export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5), then deployment can be conducted;
|
- (1)The *-cls.pt model provided by [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v6.2) can export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5), then deployment can be conducted;
|
||||||
- (2)The YOLOv5Cls v6.2 Model trained by personal data should export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5).
|
- (2)The YOLOv5Cls v6.2 Model trained by personal data should export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5) (see the export sketch below).
|
||||||
|
|
||||||
|
|
||||||
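A hedged sketch of that export step (the checkpoint name is an assumption; check the YOLOv5 v6.2 repository for the authoritative flags):

```bash
# Assumed export invocation for a YOLOv5 v6.2 classification checkpoint
git clone -b v6.2 https://github.com/ultralytics/yolov5.git
cd yolov5
python export.py --weights yolov5s-cls.pt --include onnx
```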
|
@@ -5,8 +5,8 @@ This directory provides examples that ` infer.cc` fast finishes the deployment o
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
|
|
||||||
Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -41,7 +41,7 @@ scores: 0.196327,
|
|||||||
```
|
```
|
||||||
|
|
||||||
The above command works for Linux or MacOS. Refer to:
|
The above command works for Linux or MacOS. Refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows.
|
||||||
|
|
||||||
## YOLOv5Cls C++ Interface
|
## YOLOv5Cls C++ Interface
|
||||||
|
|
||||||
@@ -87,4 +87,4 @@ YOLOv5Cls model loading and initialization, among which model_file is the export
|
|||||||
- [YOLOv5Cls Model Description](..)
|
- [YOLOv5Cls Model Description](..)
|
||||||
- [YOLOv5Cls Python Deployment](../python)
|
- [YOLOv5Cls Python Deployment](../python)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation.
|
Before deployment, two steps require confirmation.
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5Cls on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5Cls on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -71,4 +71,4 @@ YOLOv5Cls model loading and initialization, among which model_file is the export
|
|||||||
- [YOLOv5Cls Model Description](..)
|
- [YOLOv5Cls Model Description](..)
|
||||||
- [YOLOv5Cls C++ Deployment](../cpp)
|
- [YOLOv5Cls C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
|
|||||||
This directory provides examples that `infer.cc` fast finishes the deployment of FastestDet on CPU/GPU and GPU accelerated by TensorRT.
|
This directory provides examples that `infer.cc` fast finishes the deployment of FastestDet on CPU/GPU and GPU accelerated by TensorRT.
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory.
|
||||||
|
|
||||||
@@ -35,7 +35,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/44280887/206176291-61eb118b-391b-4431-b79e-a393b9452138.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/44280887/206176291-61eb118b-391b-4431-b79e-a393b9452138.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## FastestDet C++ Interface
|
## FastestDet C++ Interface
|
||||||
|
|
||||||
@@ -84,4 +84,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of FastestDet on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of FastestDet on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -72,4 +72,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [FastestDet Model Description](..)
|
- [FastestDet Model Description](..)
|
||||||
- [FastestDet C++ Deployment](../cpp)
|
- [FastestDet C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -5,7 +5,7 @@ English | [简体中文](README_CN.md)
|
|||||||
- NanoDetPlus deployment is based on the code of [NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) and coco's [Pre-trained Model](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1).
|
- NanoDetPlus deployment is based on the code of [NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) and coco's [Pre-trained Model](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1).
|
||||||
|
|
||||||
- (1)The *.onnx provided by [official repository](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can directly conduct the deployment;
|
- (1)The *.onnx provided by [official repository](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can directly conduct the deployment;
|
||||||
- (2)Models trained by developers should export ONNX models. Please refer to [Detailed Deployment Documents](#详细部署文档) for deployment.
|
- (2)Models trained by developers should export ONNX models. Please refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) for deployment.
|
||||||
|
|
||||||
## Download Pre-trained ONNX Model
|
## Download Pre-trained ONNX Model
|
||||||
|
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -37,7 +37,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301689-87ee5205-2eff-4204-b615-24c400f01323.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## NanoDetPlus C++ Interface
|
## NanoDetPlus C++ Interface
|
||||||
|
|
||||||
@@ -91,4 +91,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of NanoDetPlus on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of NanoDetPlus on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
```bash
|
```bash
|
||||||
@@ -78,4 +78,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [NanoDetPlus Model Description](..)
|
- [NanoDetPlus Model Description](..)
|
||||||
- [NanoDetPlus C++ Deployment](../cpp)
|
- [NanoDetPlus C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -1,29 +1,30 @@
|
|||||||
# PP-YOLOE 量化模型 C++ 部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# PP-YOLOE Quantized Model C++ Deployment Example
|
||||||
|
|
||||||
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 A311D 上的部署推理加速。
|
`infer.cc` in this directory can help you quickly complete the deployment and accelerated inference of the quantized PP-YOLOE model on A311D.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy 交叉编译环境准备
|
### FastDeploy Cross-compile Environment Preparations
|
||||||
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
|
1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction)
|
||||||
|
|
||||||
### 模型准备
|
### Model Preparations
|
||||||
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
|
1. You can directly use the quantized model provided by FastDeploy for deployment.
|
||||||
2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
2. You can export a Float32 model with PaddleDetection yourself; note that the parameter use_shared_conv=False must be set when exporting the model. For more details, refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe).
|
||||||
3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
|
3. You can use the [one-click automatic model compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
|
||||||
4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
|
4. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
|
||||||
|
|
||||||
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
For more information, please refer to [Model Quantization](../../quantize/README.md)
|
||||||
|
|
||||||
## 在 A311D 上部署量化后的 PP-YOLOE 检测模型
|
## Deploying the Quantized PP-YOLOE Detection model on A311D
|
||||||
请按照以下步骤完成在 A311D 上部署 PP-YOLOE 量化模型:
|
Please follow these steps to complete the deployment of the PP-YOLOE quantization model on A311D.
|
||||||
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
|
||||||
|
|
||||||
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
2. Copy the compiled library to the current directory. You can run this line:
|
||||||
```bash
|
```bash
|
||||||
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
||||||
```
|
```
|
||||||
|
|
||||||
3. 在当前路径下载部署所需的模型和示例图片:
|
3. Download the model and example images required for deployment to the current path:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
||||||
mkdir models && mkdir images
|
mkdir models && mkdir images
|
||||||
@@ -34,26 +35,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
|
|||||||
cp -r 000000014439.jpg images
|
cp -r 000000014439.jpg images
|
||||||
```
|
```
|
||||||
|
|
||||||
4. 编译部署示例,可使入如下命令:
|
4. Compile the deployment example. You can run the following lines:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
||||||
mkdir build && cd build
|
mkdir build && cd build
|
||||||
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
|
||||||
make -j8
|
make -j8
|
||||||
make install
|
make install
|
||||||
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
# After successful compilation, an install folder is generated, containing a runnable demo and the libraries required for deployment.
|
||||||
```
|
```
|
||||||
|
|
||||||
5. 基于 adb 工具部署 PP-YOLOE 检测模型到晶晨 A311D
|
5. Deploy the PP-YOLOE detection model to the Amlogic A311D based on adb.
|
||||||
```bash
|
```bash
|
||||||
# 进入 install 目录
|
# Go to the install directory.
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
|
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
|
||||||
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
|
||||||
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
|
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
|
||||||
```
|
```
|
||||||
|
|
||||||
部署成功后运行结果如下:
|
The output is:
|
||||||
|
|
||||||
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
|
||||||
|
|
||||||
需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
Please note that the model deployed on A311D needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md)
|
||||||
|
@@ -0,0 +1,60 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PP-YOLOE 量化模型 C++ 部署示例
|
||||||
|
|
||||||
|
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 A311D 上的部署推理加速。
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy 交叉编译环境准备
|
||||||
|
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
|
||||||
|
|
||||||
|
### 模型准备
|
||||||
|
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
|
||||||
|
2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
||||||
|
3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
|
||||||
|
4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
|
||||||
|
|
||||||
|
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
||||||
|
|
||||||
|
## 在 A311D 上部署量化后的 PP-YOLOE 检测模型
|
||||||
|
请按照以下步骤完成在 A311D 上部署 PP-YOLOE 量化模型:
|
||||||
|
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
||||||
|
|
||||||
|
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
||||||
|
```bash
|
||||||
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
||||||
|
```
|
||||||
|
|
||||||
|
3. 在当前路径下载部署所需的模型和示例图片:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
||||||
|
mkdir models && mkdir images
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
|
||||||
|
tar -xvf ppyoloe_noshare_qat.tar.gz
|
||||||
|
cp -r ppyoloe_noshare_qat models
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
cp -r 000000014439.jpg images
|
||||||
|
```
|
||||||
|
|
||||||
|
4. 编译部署示例,可使入如下命令:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
|
||||||
|
mkdir build && cd build
|
||||||
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
|
||||||
|
make -j8
|
||||||
|
make install
|
||||||
|
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
||||||
|
```
|
||||||
|
|
||||||
|
5. 基于 adb 工具部署 PP-YOLOE 检测模型到晶晨 A311D
|
||||||
|
```bash
|
||||||
|
# 进入 install 目录
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
|
||||||
|
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
||||||
|
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
|
||||||
|
```
|
||||||
|
|
||||||
|
部署成功后运行结果如下:
|
||||||
|
|
||||||
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
|
||||||
|
|
||||||
|
需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
@@ -150,4 +150,4 @@ It’s simple to replace the FastDeploy prediction library and models. The predi
|
|||||||
## More Reference Documents
|
## More Reference Documents
|
||||||
For more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to:
|
For more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to:
|
||||||
- [FastDeploy Java SDK in Android](../../../../../java/android/)
|
- [FastDeploy Java SDK in Android](../../../../../java/android/)
|
||||||
- [FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
|
- [FastDeploy C++ SDK in Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer_xxx.cc` fast finishes the deploymen
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -36,7 +36,7 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz
|
|||||||
```
|
```
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## PaddleDetection C++ Interface
|
## PaddleDetection C++ Interface
|
||||||
|
|
||||||
@@ -52,7 +52,7 @@ fastdeploy::vision::detection::PPYOLOE(
|
|||||||
const ModelFormat& model_format = ModelFormat::PADDLE)
|
const ModelFormat& model_format = ModelFormat::PADDLE)
|
||||||
```
|
```
|
||||||
|
|
||||||
PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的ONNX模型格式。
|
Loading and initializing the PaddleDetection PPYOLOE model, where model_file is in the exported ONNX model format.
|
||||||
|
|
||||||
**Parameter**
|
**Parameter**
|
||||||
|
|
||||||
@@ -78,4 +78,4 @@ PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的ON
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
|
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -1 +0,0 @@
|
|||||||
README_CN.md
|
|
@@ -0,0 +1,36 @@
|
|||||||
|
English | [简体中文](README_CN.md)
|
||||||
|
|
||||||
|
# PaddleDetection Python Simple Serving Demo
|
||||||
|
|
||||||
|
|
||||||
|
## Environment
|
||||||
|
|
||||||
|
- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
Server:
|
||||||
|
```bash
|
||||||
|
# Download demo code
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/python/serving
|
||||||
|
|
||||||
|
# Download PPYOLOE model
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
||||||
|
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
||||||
|
|
||||||
|
# Launch server, change the configurations in server.py to select hardware, backend, etc.
|
||||||
|
# and use --host, --port to specify IP and port
|
||||||
|
fastdeploy simple_serving --app server:app
|
||||||
|
```
|
||||||
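For instance, an explicit bind using the `--host` and `--port` flags mentioned above (the values shown are placeholders):

```bash
# Hypothetical launch with an explicit host and port
fastdeploy simple_serving --app server:app --host 0.0.0.0 --port 8000
```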
|
|
||||||
|
Client:
|
||||||
|
```bash
|
||||||
|
# Download demo code
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/python/serving
|
||||||
|
|
||||||
|
# Download test image
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
# Send request and get inference result (Please adapt the IP and port if necessary)
|
||||||
|
python client.py
|
||||||
|
```
|
@@ -1,4 +1,4 @@
|
|||||||
简体中文 | [English](README_EN.md)
|
简体中文 | [English](README.md)
|
||||||
|
|
||||||
# PaddleDetection Python轻量服务化部署示例
|
# PaddleDetection Python轻量服务化部署示例
|
||||||
|
|
||||||
|
@@ -1,36 +0,0 @@
|
|||||||
English | [简体中文](README_CN.md)
|
|
||||||
|
|
||||||
# PaddleDetection Python Simple Serving Demo
|
|
||||||
|
|
||||||
|
|
||||||
## Environment
|
|
||||||
|
|
||||||
- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
|
||||||
|
|
||||||
Server:
|
|
||||||
```bash
|
|
||||||
# Download demo code
|
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/python/serving
|
|
||||||
|
|
||||||
# Download PPYOLOE model
|
|
||||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
|
|
||||||
tar xvf ppyoloe_crn_l_300e_coco.tgz
|
|
||||||
|
|
||||||
# Launch server, change the configurations in server.py to select hardware, backend, etc.
|
|
||||||
# and use --host, --port to specify IP and port
|
|
||||||
fastdeploy simple_serving --app server:app
|
|
||||||
```
|
|
||||||
|
|
||||||
Client:
|
|
||||||
```bash
|
|
||||||
# Download demo code
|
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/python/serving
|
|
||||||
|
|
||||||
# Download test image
|
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
|
||||||
|
|
||||||
# Send request and get inference result (Please adapt the IP and port if necessary)
|
|
||||||
python client.py
|
|
||||||
```
|
|
@@ -1,36 +1,37 @@
|
|||||||
# PP-YOLOE-l量化模型 C++部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# PP-YOLOE-l Quantized Model C++ Deployment Example
|
||||||
|
|
||||||
本目录下提供的`infer_ppyoloe.cc`,可以帮助用户快速完成PP-YOLOE-l量化模型在CPU/GPU上的部署推理加速.
|
`infer_ppyoloe.cc` in this directory can help you quickly complete the deployment and accelerated inference of the quantized PP-YOLOE-l model on CPU/GPU.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy环境准备
|
### FastDeploy Environment Preparations
|
||||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
### 量化模型准备
|
### Quantized Model Preparations
|
||||||
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
|
||||||
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
|
- 2. You can use the [one-click automatic model compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
|
||||||
|
|
||||||
## 以量化后的PP-YOLOE-l模型为例, 进行部署。支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
## Take the Quantized PP-YOLOE-l Model as an Example for Deployment (FastDeploy 0.7.0 or higher is required, x.x.x>=0.7.0)
|
||||||
在本目录执行如下命令即可完成编译,以及量化模型部署.
|
Run the following commands in this directory to compile and deploy the quantized model.
|
||||||
```bash
|
```bash
|
||||||
mkdir build
|
mkdir build
|
||||||
cd build
|
cd build
|
||||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
|
||||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||||
make -j
|
make -j
|
||||||
|
|
||||||
#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
|
# Download the ppyoloe_crn_l_300e_coco quantized model and test images provided by FastDeploy.
|
||||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
|
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
# 在CPU上使用ONNX Runtime推理量化模型
|
# Run inference on the quantized model with ONNX Runtime on CPU.
|
||||||
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 0
|
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 0
|
||||||
# 在GPU上使用TensorRT推理量化模型
|
# Run inference on the quantized model with TensorRT on GPU.
|
||||||
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 1
|
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 1
|
||||||
# 在GPU上使用Paddle-TensorRT推理量化模型
|
# Run inference on the quantized model with Paddle-TensorRT on GPU.
|
||||||
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 2
|
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 2
|
||||||
```
|
```
|
||||||
|
@@ -0,0 +1,37 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PP-YOLOE-l量化模型 C++部署示例
|
||||||
|
|
||||||
|
本目录下提供的`infer_ppyoloe.cc`,可以帮助用户快速完成PP-YOLOE-l量化模型在CPU/GPU上的部署推理加速.
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy环境准备
|
||||||
|
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
### 量化模型准备
|
||||||
|
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
||||||
|
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
|
||||||
|
|
||||||
|
## 以量化后的PP-YOLOE-l模型为例, 进行部署。支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||||
|
在本目录执行如下命令即可完成编译,以及量化模型部署.
|
||||||
|
```bash
|
||||||
|
mkdir build
|
||||||
|
cd build
|
||||||
|
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||||
|
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||||
|
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||||
|
make -j
|
||||||
|
|
||||||
|
#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
|
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
# 在CPU上使用ONNX Runtime推理量化模型
|
||||||
|
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 0
|
||||||
|
# 在GPU上使用TensorRT推理量化模型
|
||||||
|
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 1
|
||||||
|
# 在GPU上使用Paddle-TensorRT推理量化模型
|
||||||
|
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 2
|
||||||
|
```
|
@@ -1,31 +1,32 @@
|
|||||||
# PP-YOLOE-l量化模型 Python部署示例
|
English | [简体中文](README_CN.md)
|
||||||
本目录下提供的`infer.py`,可以帮助用户快速完成PP-YOLOE量化模型在CPU/GPU上的部署推理加速.
|
# PP-YOLOE-l Quantized Model Python Deployment Example
|
||||||
|
`infer.py` in this directory can help you quickly complete the deployment and accelerated inference of the quantized PP-YOLOE model on CPU/GPU.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy环境准备
|
### FastDeploy Environment Preparations
|
||||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
### 量化模型准备
|
### Quantized Model Preparations
|
||||||
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
|
||||||
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
|
- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the generated quantized model, as shown in the sketch below. (Note: The quantized detection model still needs the infer_cfg.yml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
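For instance, assuming a self-quantized folder named `my_ppyoloe_qat_model` (hypothetical) sits next to the FP32 model folder, the copy could be done as in this minimal sketch:

```python
# Illustrative only: both folder names are hypothetical.
import shutil

shutil.copy("ppyoloe_crn_l_300e_coco/infer_cfg.yml",
            "my_ppyoloe_qat_model/infer_cfg.yml")
```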
|
||||||
|
|
||||||
|
|
||||||
## 以量化后的PP-YOLOE-l模型为例, 进行部署
|
## Take the Quantized PP-YOLOE-l Model as an Example for Deployment
|
||||||
```bash
|
```bash
|
||||||
#下载部署示例代码
|
# Download sample deployment code.
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
cd /examples/vision/detection/paddledetection/quantize/python
|
cd /examples/vision/detection/paddledetection/quantize/python
|
||||||
|
|
||||||
#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
|
# Download the ppyoloe_crn_l_300e_coco quantized model and test images provided by FastDeploy.
|
||||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
|
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
# 在CPU上使用ONNX Runtime推理量化模型
|
# Use ONNX Runtime to infer the quantized model on CPU.
|
||||||
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device cpu --backend ort
|
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device cpu --backend ort
|
||||||
# 在GPU上使用TensorRT推理量化模型
|
# Use TensorRT to infer the quantized model on GPU.
|
||||||
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend trt
|
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend trt
|
||||||
# 在GPU上使用Paddle-TensorRT推理量化模型
|
# Use Paddle-TensorRT to infer the quantized model on GPU.
|
||||||
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend pptrt
|
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend pptrt
|
||||||
```
|
```
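For reference, here is a minimal Python sketch of how the `--backend` flag typically maps to FastDeploy runtime settings. This is not the actual `infer_ppyoloe.py`; the model file names and the exact `RuntimeOption` method names are assumptions and may vary across FastDeploy versions.

```python
# Minimal sketch only -- not the actual infer_ppyoloe.py.
# File names inside the quantized model folder are assumed.
import cv2
import fastdeploy as fd

model_dir = "ppyoloe_crn_l_300e_coco_qat"
backend = "ort"  # mirrors --backend: ort / trt / pptrt

option = fd.RuntimeOption()
if backend == "ort":
    option.use_cpu()
    option.use_ort_backend()       # ONNX Runtime on CPU
elif backend == "trt":
    option.use_gpu()
    option.use_trt_backend()       # TensorRT on GPU
elif backend == "pptrt":
    option.use_gpu()
    option.use_trt_backend()
    option.enable_paddle_to_trt()  # TensorRT via Paddle Inference

model = fd.vision.detection.PPYOLOE(
    model_dir + "/model.pdmodel",
    model_dir + "/model.pdiparams",
    model_dir + "/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)
```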
|
||||||
|
@@ -0,0 +1,32 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PP-YOLOE-l量化模型 Python部署示例
|
||||||
|
本目录下提供的`infer.py`,可以帮助用户快速完成PP-YOLOE量化模型在CPU/GPU上的部署推理加速.
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy环境准备
|
||||||
|
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
### 量化模型准备
|
||||||
|
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
||||||
|
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
|
||||||
|
|
||||||
|
|
||||||
|
## 以量化后的PP-YOLOE-l模型为例, 进行部署
|
||||||
|
```bash
|
||||||
|
#下载部署示例代码
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd /examples/vision/detection/paddledetection/quantize/python
|
||||||
|
|
||||||
|
#下载FastDeploy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
|
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
# 在CPU上使用ONNX Runtime推理量化模型
|
||||||
|
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device cpu --backend ort
|
||||||
|
# 在GPU上使用TensorRT推理量化模型
|
||||||
|
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend trt
|
||||||
|
# 在GPU上使用Paddle-TensorRT推理量化模型
|
||||||
|
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend pptrt
|
||||||
|
```
|
@@ -8,9 +8,8 @@ Now FastDeploy supports the deployment of the following models
|
|||||||
|
|
||||||
## Prepare PaddleDetection deployment models and convert models
|
## Prepare PaddleDetection deployment models and convert models
|
||||||
Before RKNPU deployment, you need to transform Paddle model to RKNN model:
|
Before RKNPU deployment, you need to transform Paddle model to RKNN model:
|
||||||
* From Paddle dynamic map to ONNX model, refer to [PaddleDetection Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md)
|
* From Paddle dynamic map to ONNX model, refer to [PaddleDetection Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md), and set **export.nms=True** during transformation.
|
||||||
, and set **export.nms=True** during transformation.
|
* From ONNX model to RKNN model, refer to [Transformation Document](../../../../../docs/en/faq/rknpu2/export.md).
|
||||||
* From ONNX model to RKNN model, refer to [Transformation Document](../../../../../docs/cn/faq/rknpu2/export.md).
|
|
||||||
|
|
||||||
|
|
||||||
## Model Transformation Example
|
## Model Transformation Example
|
||||||
|
@@ -1,28 +1,29 @@
|
|||||||
# PaddleDetection C++部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# PaddleDetection Deployment Examples for C++
|
||||||
|
|
||||||
本目录下提供`infer_picodet.cc`快速完成PPDetection模型在Rockchip板子上上通过二代NPU加速部署的示例。
|
`infer_picodet.cc` in this directory provides an example of quickly completing accelerated deployment of the PPDetection model on Rockchip boards via the second-generation NPU.
|
||||||
|
|
||||||
在部署前,需确认以下两个步骤:
|
Before deployment, the following two steps need to be confirmed:
|
||||||
|
|
||||||
1. 软硬件环境满足要求
|
1. The hardware and software environment meets the requirements.
|
||||||
2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
|
2. Download the pre-compiled deployment repository or compile the FastDeploy repository from scratch according to the development environment.
|
||||||
|
|
||||||
以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
|
For the above steps, please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).
|
||||||
|
|
||||||
## 生成基本目录文件
|
## Generate Basic Directory Files
|
||||||
|
|
||||||
该例程由以下几个部分组成
|
This example consists of the following parts:
|
||||||
```text
|
```text
|
||||||
.
|
.
|
||||||
├── CMakeLists.txt
|
├── CMakeLists.txt
|
||||||
├── build # 编译文件夹
|
├── build # Compile Folder
|
||||||
├── image # 存放图片的文件夹
|
├── image # Folder for images
|
||||||
├── infer_picodet.cc
|
├── infer_picodet.cc
|
||||||
├── model # 存放模型文件的文件夹
|
├── model # Folder for models
|
||||||
└── thirdpartys # 存放sdk的文件夹
|
└── thirdpartys # Folder for the SDK
|
||||||
```
|
```
|
||||||
|
|
||||||
首先需要先生成目录结构
|
First, create the directory structure:
|
||||||
```bash
|
```bash
|
||||||
mkdir build
|
mkdir build
|
||||||
mkdir images
|
mkdir images
|
||||||
@@ -30,24 +31,23 @@ mkdir model
|
|||||||
mkdir thirdpartys
|
mkdir thirdpartys
|
||||||
```
|
```
|
||||||
|
|
||||||
## 编译
|
## Compile
|
||||||
|
|
||||||
### 编译并拷贝SDK到thirdpartys文件夹
|
### Compile and Copy the SDK to the thirdpartys Folder
|
||||||
|
|
||||||
请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
|
Please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be generated in the build directory; move it to the thirdpartys directory.
|
||||||
fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
|
|
||||||
|
|
||||||
### 拷贝模型文件,以及配置文件至model文件夹
|
### Copy the Model and Configuration Files to the model Folder
|
||||||
在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
|
During the conversion from the Paddle dynamic graph model to the Paddle static graph model and then to the ONNX model, an ONNX file and the corresponding yaml configuration file are generated. Please place the configuration file in the model folder.
|
||||||
转换为RKNN后的模型文件也需要拷贝至model。
|
The model file converted to RKNN also needs to be copied into the model folder.
|
||||||
|
|
||||||
### 准备测试图片至image文件夹
|
### Prepare Test Images in the image Folder
|
||||||
```bash
|
```bash
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
cp 000000014439.jpg ./images
|
cp 000000014439.jpg ./images
|
||||||
```
|
```
|
||||||
|
|
||||||
### 编译example
|
### Compile example
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cd build
|
cd build
|
||||||
@@ -56,7 +56,7 @@ make -j8
|
|||||||
make install
|
make install
|
||||||
```
|
```
|
||||||
|
|
||||||
## 运行例程
|
## Run the Example
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cd ./build/install
|
cd ./build/install
|
||||||
@@ -64,6 +64,6 @@ cd ./build/install
|
|||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
- [模型介绍](../../)
|
- [Model Description](../../)
|
||||||
- [Python部署](../python)
|
- [Python Deployment](../python)
|
||||||
- [视觉模型预测结果](../../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/)
|
||||||
|
@@ -0,0 +1,69 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PaddleDetection C++部署示例
|
||||||
|
|
||||||
|
本目录下提供`infer_picodet.cc`快速完成PPDetection模型在Rockchip板子上上通过二代NPU加速部署的示例。
|
||||||
|
|
||||||
|
在部署前,需确认以下两个步骤:
|
||||||
|
|
||||||
|
1. 软硬件环境满足要求
|
||||||
|
2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
|
||||||
|
|
||||||
|
以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
|
||||||
|
|
||||||
|
## 生成基本目录文件
|
||||||
|
|
||||||
|
该例程由以下几个部分组成
|
||||||
|
```text
|
||||||
|
.
|
||||||
|
├── CMakeLists.txt
|
||||||
|
├── build # 编译文件夹
|
||||||
|
├── image # 存放图片的文件夹
|
||||||
|
├── infer_picodet.cc
|
||||||
|
├── model # 存放模型文件的文件夹
|
||||||
|
└── thirdpartys # 存放sdk的文件夹
|
||||||
|
```
|
||||||
|
|
||||||
|
首先需要先生成目录结构
|
||||||
|
```bash
|
||||||
|
mkdir build
|
||||||
|
mkdir images
|
||||||
|
mkdir model
|
||||||
|
mkdir thirdpartys
|
||||||
|
```
|
||||||
|
|
||||||
|
## 编译
|
||||||
|
|
||||||
|
### 编译并拷贝SDK到thirdpartys文件夹
|
||||||
|
|
||||||
|
请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
|
||||||
|
|
||||||
|
### 拷贝模型文件,以及配置文件至model文件夹
|
||||||
|
在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
|
||||||
|
转换为RKNN后的模型文件也需要拷贝至model。
|
||||||
|
|
||||||
|
### 准备测试图片至image文件夹
|
||||||
|
```bash
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
cp 000000014439.jpg ./images
|
||||||
|
```
|
||||||
|
|
||||||
|
### 编译example
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd build
|
||||||
|
cmake ..
|
||||||
|
make -j8
|
||||||
|
make install
|
||||||
|
```
|
||||||
|
|
||||||
|
## 运行例程
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd ./build/install
|
||||||
|
./infer_picodet model/picodet_s_416_coco_lcnet images/000000014439.jpg
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
- [模型介绍](../../)
|
||||||
|
- [Python部署](../python)
|
||||||
|
- [视觉模型预测结果](../../../../../../docs/api/vision_results/)
|
@@ -1,35 +1,35 @@
|
|||||||
# PaddleDetection Python部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# PaddleDetection Deployment Examples for Python
|
||||||
|
|
||||||
在部署前,需确认以下两个步骤
|
Before deployment, the following step needs to be confirmed:
|
||||||
|
|
||||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
|
- 1. The hardware and software environment meets the requirements. Please refer to [Environment Requirements for FastDeploy](../../../../../../docs/en/build_and_install/rknpu2.md)
|
||||||
|
|
||||||
本目录下提供`infer.py`快速完成Picodet在RKNPU上部署的示例。执行如下脚本即可完成
|
This directory provides `infer.py` for a quick example of Picodet deployment on RKNPU. This can be done by running the following script.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# 下载部署示例代码
|
# Download the deployment demo code.
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/rknpu2/python
|
cd FastDeploy/examples/vision/detection/paddledetection/rknpu2/python
|
||||||
|
|
||||||
# 下载图片
|
# Download images.
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
# copy model
|
# copy model
|
||||||
cp -r ./picodet_s_416_coco_lcnet /path/to/FastDeploy/examples/vision/detection/rknpu2detection/paddledetection/python
|
cp -r ./picodet_s_416_coco_lcnet /path/to/FastDeploy/examples/vision/detection/rknpu2detection/paddledetection/python
|
||||||
|
|
||||||
# 推理
|
# Inference.
|
||||||
python3 infer.py --model_file ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn \
|
python3 infer.py --model_file ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn \
|
||||||
--config_file ./picodet_s_416_coco_lcnet/infer_cfg.yml \
|
--config_file ./picodet_s_416_coco_lcnet/infer_cfg.yml \
|
||||||
--image 000000014439.jpg
|
--image 000000014439.jpg
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
## 注意事项
|
## Notes
|
||||||
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
|
The model input on RKNPU must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call `DisableNormalizePermute` (C++) or `disable_normalize_permute` (Python) first to disable normalization and data-format conversion in the preprocessing stage.
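A minimal Python sketch of this preprocessing switch is given below. It assumes the model and image files downloaded above; the `disable_normalize_permute` method name follows the note in this section and may differ in other FastDeploy versions.

```python
# Minimal sketch, assuming the RKNN model and test image downloaded above.
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_rknpu2()  # run on the second-generation Rockchip NPU

model = fd.vision.detection.PicoDet(
    "./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn",
    "",  # RKNN models need no separate params file
    "./picodet_s_416_coco_lcnet/infer_cfg.yml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

# Disable normalization and layout conversion in preprocessing, as described
# above; both are already baked into the converted RKNN model.
model.disable_normalize_permute()

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)
```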
|
||||||
需要先调用DisableNormalizePermute(C++)或`disable_normalize_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
|
## Other Documents
|
||||||
## 其它文档
|
|
||||||
|
|
||||||
- [PaddleDetection 模型介绍](..)
|
- [PaddleDetection Model Description](..)
|
||||||
- [PaddleDetection C++部署](../cpp)
|
- [PaddleDetection C++ Deployment](../cpp)
|
||||||
- [模型预测结果说明](../../../../../../docs/api/vision_results/)
|
- [Description of the Prediction Results](../../../../../../docs/api/vision_results/)
|
||||||
- [转换PaddleDetection RKNN模型文档](../README.md)
|
- [Converting PaddleDetection RKNN model](../README.md)
|
||||||
|
@@ -0,0 +1,35 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PaddleDetection Python部署示例
|
||||||
|
|
||||||
|
在部署前,需确认以下步骤
|
||||||
|
|
||||||
|
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
|
||||||
|
|
||||||
|
本目录下提供`infer.py`快速完成Picodet在RKNPU上部署的示例。执行如下脚本即可完成
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# 下载部署示例代码
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/rknpu2/python
|
||||||
|
|
||||||
|
# 下载图片
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
# copy model
|
||||||
|
cp -r ./picodet_s_416_coco_lcnet /path/to/FastDeploy/examples/vision/detection/rknpu2detection/paddledetection/python
|
||||||
|
|
||||||
|
# 推理
|
||||||
|
python3 infer.py --model_file ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn \
|
||||||
|
--config_file ./picodet_s_416_coco_lcnet/infer_cfg.yml \
|
||||||
|
--image 000000014439.jpg
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
## 注意事项
|
||||||
|
RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisableNormalizePermute(C++)或`disable_normalize_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
|
||||||
|
## 其它文档
|
||||||
|
|
||||||
|
- [PaddleDetection 模型介绍](..)
|
||||||
|
- [PaddleDetection C++部署](../cpp)
|
||||||
|
- [模型预测结果说明](../../../../../../docs/api/vision_results/)
|
||||||
|
- [转换PaddleDetection RKNN模型文档](../README.md)
|
@@ -1,29 +1,30 @@
|
|||||||
# PP-YOLOE 量化模型 C++ 部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# PP-YOLOE Quantized Model C++ Deployment Example
|
||||||
|
|
||||||
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 RV1126 上的部署推理加速。
|
`infer.cc` in this directory helps you quickly complete accelerated inference deployment of the quantized PP-YOLOE model on RV1126.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy 交叉编译环境准备
|
### FastDeploy Cross-compile Environment Preparations
|
||||||
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
|
1. For the software and hardware environment, and the cross-compile environment, please refer to [Preparations for FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/rv1126.md#Cross-compilation-environment-construction).
|
||||||
|
|
||||||
### 模型准备
|
### Model Preparations
|
||||||
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
|
1. You can directly use the quantized model provided by FastDeploy for deployment.
|
||||||
2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
2. You can use PaddleDetection to export a Float32 model yourself. Note that you need to set the parameter use_shared_conv=False when exporting the model. For more information, see [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe).
|
||||||
3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
|
3. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the generated quantized model. (Note: The quantized detection model still needs the infer_cfg.yml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
|
||||||
4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
|
4. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
|
||||||
|
|
||||||
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
For more information, please refer to [Model Quantization](../../quantize/README.md)
|
||||||
|
|
||||||
## 在 RV1126 上部署量化后的 PP-YOLOE 检测模型
|
## Deploying the Quantized PP-YOLOE Detection Model on RV1126
|
||||||
请按照以下步骤完成在 RV1126 上部署 PP-YOLOE 量化模型:
|
Please follow these steps to complete the deployment of the quantized PP-YOLOE model on RV1126.
|
||||||
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/rv1126.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
|
||||||
|
|
||||||
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
2. Copy the compiled library to the current directory. You can run this line:
|
||||||
```bash
|
```bash
|
||||||
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
||||||
```
|
```
|
||||||
|
|
||||||
3. 在当前路径下载部署所需的模型和示例图片:
|
3. Download the model and example images required for deployment to the current path:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
||||||
mkdir models && mkdir images
|
mkdir models && mkdir images
|
||||||
@@ -34,26 +35,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
|
|||||||
cp -r 000000014439.jpg images
|
cp -r 000000014439.jpg images
|
||||||
```
|
```
|
||||||
|
|
||||||
4. 编译部署示例,可使入如下命令:
|
4. Compile the deployment example. You can run the following lines:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
||||||
mkdir build && cd build
|
mkdir build && cd build
|
||||||
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
|
||||||
make -j8
|
make -j8
|
||||||
make install
|
make install
|
||||||
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
# After successful compilation, an install folder is generated, containing the demo executable and the libraries required for deployment.
|
||||||
```
|
```
|
||||||
|
|
||||||
5. 基于 adb 工具部署 PP-YOLOE 检测模型到 Rockchip RV1126,可使用如下命令:
|
5. Deploy the PP-YOLOE detection model to Rockchip RV1126 based on adb. You can run the following lines:
|
||||||
```bash
|
```bash
|
||||||
# 进入 install 目录
|
# Go to the install directory.
|
||||||
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp/build/install/
|
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp/build/install/
|
||||||
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
# Usage: bash run_with_adb.sh <demo_to_run> <model_path> <image_path> <DEVICE_ID>
|
||||||
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
|
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
|
||||||
```
|
```
|
||||||
|
|
||||||
部署成功后运行结果如下:
|
After successful deployment, the result is as follows:
|
||||||
|
|
||||||
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
|
||||||
|
|
||||||
需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
Please note that the model deployed on RV1126 needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md)
|
||||||
|
@@ -0,0 +1,60 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# PP-YOLOE 量化模型 C++ 部署示例
|
||||||
|
|
||||||
|
本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 RV1126 上的部署推理加速。
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy 交叉编译环境准备
|
||||||
|
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
|
||||||
|
|
||||||
|
### 模型准备
|
||||||
|
1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
|
||||||
|
2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
|
||||||
|
3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
|
||||||
|
4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
|
||||||
|
|
||||||
|
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
||||||
|
|
||||||
|
## 在 RV1126 上部署量化后的 PP-YOLOE 检测模型
|
||||||
|
请按照以下步骤完成在 RV1126 上部署 PP-YOLOE 量化模型:
|
||||||
|
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
||||||
|
|
||||||
|
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
||||||
|
```bash
|
||||||
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
||||||
|
```
|
||||||
|
|
||||||
|
3. 在当前路径下载部署所需的模型和示例图片:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
||||||
|
mkdir models && mkdir images
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
|
||||||
|
tar -xvf ppyoloe_noshare_qat.tar.gz
|
||||||
|
cp -r ppyoloe_noshare_qat models
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
cp -r 000000014439.jpg images
|
||||||
|
```
|
||||||
|
|
||||||
|
4. 编译部署示例,可使入如下命令:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
|
||||||
|
mkdir build && cd build
|
||||||
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
|
||||||
|
make -j8
|
||||||
|
make install
|
||||||
|
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
||||||
|
```
|
||||||
|
|
||||||
|
5. 基于 adb 工具部署 PP-YOLOE 检测模型到 Rockchip RV1126,可使用如下命令:
|
||||||
|
```bash
|
||||||
|
# 进入 install 目录
|
||||||
|
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp/build/install/
|
||||||
|
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
||||||
|
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
|
||||||
|
```
|
||||||
|
|
||||||
|
部署成功后运行结果如下:
|
||||||
|
|
||||||
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/203708564-43c49485-9b48-4eb2-8fe7-0fa517979fff.png">
|
||||||
|
|
||||||
|
需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
@@ -7,7 +7,7 @@ For PaddleDetection model export and download of pre-trained models, refer to [P
|
|||||||
|
|
||||||
Confirm before the serving deployment
|
Confirm before the serving deployment
|
||||||
|
|
||||||
- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
|
- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README.md) for software and hardware environment requirements and image pull commands
|
||||||
|
|
||||||
|
|
||||||
## Start Service
|
## Start Service
|
||||||
@@ -92,4 +92,4 @@ output_name: DET_RESULT
|
|||||||
|
|
||||||
## Configuration Change
|
## Configuration Change
|
||||||
|
|
||||||
The current default configuration runs on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for more information.
|
The current default configuration runs on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/EN/model_configuration-en.md) for more information.
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
- The ScaledYOLOv4 deployment is based on the code of [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) and [Pre-trained Model on COCO](https://github.com/WongKinYiu/ScaledYOLOv4).
|
- The ScaledYOLOv4 deployment is based on the code of [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) and [Pre-trained Model on COCO](https://github.com/WongKinYiu/ScaledYOLOv4).
|
||||||
|
|
||||||
- (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/ScaledYOLOv4) should [Export the ONNX Model](#导出ONNX模型) to complete the deployment;
|
- (1) The *.pt provided by the [Official Repository](https://github.com/WongKinYiu/ScaledYOLOv4) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment;
|
||||||
- (2)The ScaledYOLOv4 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2) The ScaledYOLOv4 model trained on personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
## Export the ONNX Model
|
## Export the ONNX Model
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -37,7 +37,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301908-7027cf41-af51-4485-bd32-87aca0e77336.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301908-7027cf41-af51-4485-bd32-87aca0e77336.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## ScaledYOLOv4 C++ Interface
|
## ScaledYOLOv4 C++ Interface
|
||||||
|
|
||||||
@@ -90,4 +90,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of ScaledYOLOv4 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of ScaledYOLOv4 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
```bash
|
```bash
|
||||||
@@ -79,4 +79,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [ScaledYOLOv4 Model Description](..)
|
- [ScaledYOLOv4 Model Description](..)
|
||||||
- [ScaledYOLOv4 C++ Deployment](../cpp)
|
- [ScaledYOLOv4 C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
- The YOLOR deployment is based on the code of [YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights) and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolor/releases/tag/weights).
|
- The YOLOR deployment is based on the code of [YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights) and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolor/releases/tag/weights).
|
||||||
|
|
||||||
- (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/yolor/releases/tag/weights) should [Export the ONNX Model](#导出ONNX模型) to complete the deployment. The *.pose model’s deployment is not supported;
|
- (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/yolor/releases/tag/weights) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. The *.pose model’s deployment is not supported;
|
||||||
- (2)The ScaledYOLOv4 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2) The YOLOR model trained on personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
## Export the ONNX Model
|
## Export the ONNX Model
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -37,7 +37,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301926-fa3711bf-5984-4e61-9c98-7fdeacb622e9.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301926-fa3711bf-5984-4e61-9c98-7fdeacb622e9.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## YOLOR C++ Interface
|
## YOLOR C++ Interface
|
||||||
|
|
||||||
@@ -90,4 +90,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOR on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOR on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
```bash
|
```bash
|
||||||
@@ -78,4 +78,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOR Model Description](..)
|
- [YOLOR Model Description](..)
|
||||||
- [YOLOR C++ Deployment](../cpp)
|
- [YOLOR C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -1,45 +1,46 @@
|
|||||||
# YOLOv5 量化模型 C++ 部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# YOLOv5 Quantized Model C++ Deployment Example
|
||||||
|
|
||||||
本目录下提供的 `infer.cc`,可以帮助用户快速完成 YOLOv5 量化模型在 A311D 上的部署推理加速。
|
`infer.cc` in this directory helps you quickly complete accelerated inference deployment of the quantized YOLOv5 model on A311D.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy 交叉编译环境准备
|
### FastDeploy Cross-compile Environment Preparations
|
||||||
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
|
1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
|
||||||
|
|
||||||
### 量化模型准备
|
### Quantized Model Preparations
|
||||||
可以直接使用由 FastDeploy 提供的量化模型进行部署,也可以按照如下步骤准备量化模型:
|
You can directly deploy the quantized model provided by FastDeploy, or prepare a quantized model yourself as follows:
|
||||||
1. 按照 [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) 官方导出方式导出 ONNX 模型,或者直接使用如下命令下载
|
1. Export the ONNX model according to the official [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) export method, or download it directly with the following command:
|
||||||
```bash
|
```bash
|
||||||
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
|
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
|
||||||
```
|
```
|
||||||
2. 准备 300 张左右量化用的图片,也可以使用如下命令下载我们准备好的数据。
|
2. Prepare about 300 images for quantization, or use the following command to download the data we have prepared.
|
||||||
```bash
|
```bash
|
||||||
wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
|
wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
|
||||||
tar -xf COCO_val_320.tar.gz
|
tar -xf COCO_val_320.tar.gz
|
||||||
```
|
```
|
||||||
3. 使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。
|
3. Use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and deploy the generated quantized model.
|
||||||
```bash
|
```bash
|
||||||
fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
|
fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
|
||||||
```
|
```
|
||||||
4. YOLOv5 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了 YOLOv5 模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
|
4. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the YOLOv5 model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
|
||||||
```bash
|
```bash
|
||||||
# 先下载我们提供的模型,解压后将其中的 subgraph.txt 文件拷贝到新量化的模型目录中
|
# First download the model we provide, unzip it and copy the subgraph.txt file to the newly quantized model directory.
|
||||||
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
|
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
|
||||||
tar -xvf yolov5s_ptq_model.tar.gz
|
tar -xvf yolov5s_ptq_model.tar.gz
|
||||||
```
|
```
|
||||||
|
|
||||||
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
For more information, please refer to [Model Quantization](../../quantize/README.md)
|
||||||
|
|
||||||
## 在 A311D 上部署量化后的 YOLOv5 检测模型
|
## Deploying the Quantized YOLOv5 Detection Model on A311D
|
||||||
请按照以下步骤完成在 A311D 上部署 YOLOv5 量化模型:
|
Please follow these steps to complete the deployment of the quantized YOLOv5 model on A311D.
|
||||||
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
|
||||||
|
|
||||||
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
2. Copy the compiled library to the current directory. You can run this line:
|
||||||
```bash
|
```bash
|
||||||
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
||||||
```
|
```
|
||||||
|
|
||||||
3. 在当前路径下载部署所需的模型和示例图片:
|
3. Download the model and example images required for deployment to the current path:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
||||||
mkdir models && mkdir images
|
mkdir models && mkdir images
|
||||||
@@ -50,26 +51,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
|
|||||||
cp -r 000000014439.jpg images
|
cp -r 000000014439.jpg images
|
||||||
```
|
```
|
||||||
|
|
||||||
4. 编译部署示例,可使入如下命令:
|
4. Compile the deployment example. You can run the following lines:
|
||||||
```bash
|
```bash
|
||||||
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
||||||
mkdir build && cd build
|
mkdir build && cd build
|
||||||
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
|
||||||
make -j8
|
make -j8
|
||||||
make install
|
make install
|
||||||
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
# After successful compilation, an install folder is generated, containing the demo executable and the libraries required for deployment.
|
||||||
```
|
```
|
||||||
|
|
||||||
5. 基于 adb 工具部署 YOLOv5 检测模型到晶晨 A311D
|
5. Deploy the YOLOv5 detection model to A311D based on adb.
|
||||||
```bash
|
```bash
|
||||||
# 进入 install 目录
|
# Go to the install directory.
|
||||||
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
|
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
|
||||||
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
# Usage: bash run_with_adb.sh <demo_to_run> <model_path> <image_path> <DEVICE_ID>
|
||||||
bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
|
bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
|
||||||
```
|
```
|
||||||
|
|
||||||
部署成功后,vis_result.jpg 保存的结果如下:
|
After successful deployment, the result saved in vis_result.jpg is as follows:
|
||||||
|
|
||||||
<img width="640" src="https://user-images.githubusercontent.com/30516196/203706969-dd58493c-6635-4ee7-9421-41c2e0c9524b.png">
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/203706969-dd58493c-6635-4ee7-9421-41c2e0c9524b.png">
|
||||||
|
|
||||||
需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
Please note that the model deployed on A311D needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md)
|
||||||
|
examples/vision/detection/yolov5/a311d/cpp/README_CN.md
@@ -0,0 +1,76 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# YOLOv5 量化模型 C++ 部署示例
|
||||||
|
|
||||||
|
本目录下提供的 `infer.cc`,可以帮助用户快速完成 YOLOv5 量化模型在 A311D 上的部署推理加速。
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy 交叉编译环境准备
|
||||||
|
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
|
||||||
|
|
||||||
|
### 量化模型准备
|
||||||
|
可以直接使用由 FastDeploy 提供的量化模型进行部署,也可以按照如下步骤准备量化模型:
|
||||||
|
1. 按照 [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) 官方导出方式导出 ONNX 模型,或者直接使用如下命令下载
|
||||||
|
```bash
|
||||||
|
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
|
||||||
|
```
|
||||||
|
2. 准备 300 张左右量化用的图片,也可以使用如下命令下载我们准备好的数据。
|
||||||
|
```bash
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
|
||||||
|
tar -xf COCO_val_320.tar.gz
|
||||||
|
```
|
||||||
|
3. 使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。
|
||||||
|
```bash
|
||||||
|
fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
|
||||||
|
```
|
||||||
|
4. YOLOv5 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了 YOLOv5 模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
|
||||||
|
```bash
|
||||||
|
# 先下载我们提供的模型,解压后将其中的 subgraph.txt 文件拷贝到新量化的模型目录中
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
|
||||||
|
tar -xvf yolov5s_ptq_model.tar.gz
|
||||||
|
```
|
||||||
|
|
||||||
|
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
|
||||||
|
|
||||||
|
## 在 A311D 上部署量化后的 YOLOv5 检测模型
|
||||||
|
请按照以下步骤完成在 A311D 上部署 YOLOv5 量化模型:
|
||||||
|
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
|
||||||
|
|
||||||
|
2. 将编译后的库拷贝到当前目录,可使用如下命令:
|
||||||
|
```bash
|
||||||
|
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
||||||
|
```
|
||||||
|
|
||||||
|
3. 在当前路径下载部署所需的模型和示例图片:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
||||||
|
mkdir models && mkdir images
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
|
||||||
|
tar -xvf yolov5s_ptq_model.tar.gz
|
||||||
|
cp -r yolov5s_ptq_model models
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
cp -r 000000014439.jpg images
|
||||||
|
```
|
||||||
|
|
||||||
|
4. 编译部署示例,可使入如下命令:
|
||||||
|
```bash
|
||||||
|
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
|
||||||
|
mkdir build && cd build
|
||||||
|
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
|
||||||
|
make -j8
|
||||||
|
make install
|
||||||
|
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
|
||||||
|
```
|
||||||
|
|
||||||
|
5. 基于 adb 工具部署 YOLOv5 检测模型到晶晨 A311D
|
||||||
|
```bash
|
||||||
|
# 进入 install 目录
|
||||||
|
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
|
||||||
|
# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
|
||||||
|
bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
|
||||||
|
```
|
||||||
|
|
||||||
|
部署成功后,vis_result.jpg 保存的结果如下:
|
||||||
|
|
||||||
|
<img width="640" src="https://user-images.githubusercontent.com/30516196/203706969-dd58493c-6635-4ee7-9421-41c2e0c9524b.png">
|
||||||
|
|
||||||
|
需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
|
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
|
|||||||
This directory provides examples that `infer.cc` fast finishes the deployment of YOLOv5 on CPU/GPU and GPU accelerated by TensorRT.
|
This directory provides examples that `infer.cc` fast finishes the deployment of YOLOv5 on CPU/GPU and GPU accelerated by TensorRT.
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeployPrecompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -104,4 +104,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -82,4 +82,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOv5 Model Description](..)
|
- [YOLOv5 Model Description](..)
|
||||||
- [YOLOv5 C++ Deployment](../cpp)
|
- [YOLOv5 C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -1 +0,0 @@
|
|||||||
README_CN.md
|
|
examples/vision/detection/yolov5/python/serving/README.md
@@ -0,0 +1,36 @@
|
|||||||
|
English | [简体中文](README_CN.md)
|
||||||
|
|
||||||
|
# YOLOv5 Python Simple Serving Demo
|
||||||
|
|
||||||
|
|
||||||
|
## Environment
|
||||||
|
|
||||||
|
- 1. Prepare the environment and install the FastDeploy Python whl. Refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
Server:
|
||||||
|
```bash
|
||||||
|
# Download demo code
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd FastDeploy/examples/vision/detection/yolov5/python/serving
|
||||||
|
|
||||||
|
# Download model
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
|
||||||
|
tar xvf yolov5s_infer.tar
|
||||||
|
|
||||||
|
# Launch server, change the configurations in server.py to select hardware, backend, etc.
|
||||||
|
# and use --host, --port to specify IP and port
|
||||||
|
fastdeploy simple_serving --app server:app
|
||||||
|
```
|
||||||
|
|
||||||
|
Client:
|
||||||
|
```bash
|
||||||
|
# Download demo code
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd FastDeploy/examples/vision/detection/yolov5/python/serving
|
||||||
|
|
||||||
|
# Download test image
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
# Send request and get inference result (Please adapt the IP and port if necessary)
|
||||||
|
python client.py
|
||||||
|
```
|
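The client side of this demo amounts to posting the test image to the simple-serving endpoint over HTTP. Below is a minimal, hypothetical client written with `requests`; the endpoint path and the JSON payload layout are assumptions and should be checked against the `server.py`/`client.py` shipped in this directory.

```python
# Hypothetical simple-serving client (endpoint path and payload keys are assumptions).
import base64
import json

import requests

url = "http://127.0.0.1:8000/fd/yolov5s"  # assumed default host, port and app route

# Encode the test image as base64 so it can travel inside a JSON body (assumed contract).
with open("000000014439.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"data": {"image": image_b64}, "parameters": {}}

resp = requests.post(url, json=payload, timeout=30)
resp.raise_for_status()

# The server is expected to return the detection result as JSON.
print(json.dumps(resp.json(), indent=2))
```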
@@ -1,4 +1,4 @@
|
|||||||
简体中文 | [English](README_EN.md)
|
简体中文 | [English](README.md)
|
||||||
|
|
||||||
# YOLOv5 Python轻量服务化部署示例
|
# YOLOv5 Python轻量服务化部署示例
|
||||||
|
|
||||||
|
@@ -1,36 +0,0 @@
|
|||||||
English | [简体中文](README_CN.md)
|
|
||||||
|
|
||||||
# YOLOv5 Python Simple Serving Demo
|
|
||||||
|
|
||||||
|
|
||||||
## Environment
|
|
||||||
|
|
||||||
- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
|
||||||
|
|
||||||
Server:
|
|
||||||
```bash
|
|
||||||
# Download demo code
|
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
|
||||||
cd FastDeploy/examples/vision/detection/yolov5/python/serving
|
|
||||||
|
|
||||||
# Download model
|
|
||||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
|
|
||||||
tar xvf yolov5s_infer.tar
|
|
||||||
|
|
||||||
# Launch server, change the configurations in server.py to select hardware, backend, etc.
|
|
||||||
# and use --host, --port to specify IP and port
|
|
||||||
fastdeploy simple_serving --app server:app
|
|
||||||
```
|
|
||||||
|
|
||||||
Client:
|
|
||||||
```bash
|
|
||||||
# Download demo code
|
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
|
||||||
cd FastDeploy/examples/vision/detection/yolov5/python/serving
|
|
||||||
|
|
||||||
# Download test image
|
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
|
||||||
|
|
||||||
# Send request and get inference result (Please adapt the IP and port if necessary)
|
|
||||||
python client.py
|
|
||||||
```
|
|
@@ -1,37 +1,38 @@
|
|||||||
# YOLOv5量化模型 C++部署示例
|
English | [简体中文](README_CN.md)
|
||||||
|
# YOLOv5 Quantized Model C++ Deployment Example
|
||||||
|
|
||||||
本目录下提供的`infer.cc`,可以帮助用户快速完成YOLOv5s量化模型在CPU/GPU上的部署推理加速.
|
`infer.cc` in this directory can help you quickly deploy the quantized YOLOv5s model on CPU/GPU with accelerated inference.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy环境准备
|
### FastDeploy Environment Preparations
|
||||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
|
|
||||||
### 量化模型准备
|
### Quantized Model Preparations
|
||||||
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
|
||||||
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
|
- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model by yourself, and use the generated quantized model for deployment.
|
||||||
|
|
||||||
## 以量化后的YOLOv5s模型为例, 进行部署
|
## Take the Quantized YOLOv5s Model as an example for Deployment
|
||||||
在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
Run the following commands in this directory to compile and deploy the quantized model. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0).
|
||||||
```bash
|
```bash
|
||||||
mkdir build
|
mkdir build
|
||||||
cd build
|
cd build
|
||||||
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
|
||||||
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||||
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||||
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||||
make -j
|
make -j
|
||||||
|
|
||||||
#下载FastDeloy提供的yolov5s量化模型文件和测试图片
|
# Download the yolov5s quantized model and test image provided by FastDeploy.
|
||||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
|
||||||
tar -xvf yolov5s_quant.tar
|
tar -xvf yolov5s_quant.tar
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
|
||||||
# 在CPU上使用ONNX Runtime推理量化模型
|
# Use ONNX Runtime to run the quantized model on CPU.
|
||||||
./infer_demo yolov5s_quant 000000014439.jpg 0
|
./infer_demo yolov5s_quant 000000014439.jpg 0
|
||||||
# 在GPU上使用TensorRT推理量化模型
|
# Use TensorRT to run the quantized model on GPU.
|
||||||
./infer_demo yolov5s_quant 000000014439.jpg 1
|
./infer_demo yolov5s_quant 000000014439.jpg 1
|
||||||
# 在GPU上使用Paddle-TensorRT推理量化模型
|
# Use Paddle-TensorRT to run the quantized model on GPU.
|
||||||
./infer_demo yolov5s_quant 000000014439.jpg 2
|
./infer_demo yolov5s_quant 000000014439.jpg 2
|
||||||
```
|
```
|
||||||
|
38
examples/vision/detection/yolov5/quantize/cpp/README_CN.md
Normal file
@@ -0,0 +1,38 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# YOLOv5量化模型 C++部署示例
|
||||||
|
|
||||||
|
本目录下提供的`infer.cc`,可以帮助用户快速完成YOLOv5s量化模型在CPU/GPU上的部署推理加速.
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy环境准备
|
||||||
|
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
### 量化模型准备
|
||||||
|
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
||||||
|
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
|
||||||
|
|
||||||
|
## 以量化后的YOLOv5s模型为例, 进行部署
|
||||||
|
在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
|
||||||
|
```bash
|
||||||
|
mkdir build
|
||||||
|
cd build
|
||||||
|
# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
|
||||||
|
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
|
||||||
|
tar xvf fastdeploy-linux-x64-x.x.x.tgz
|
||||||
|
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
|
||||||
|
make -j
|
||||||
|
|
||||||
|
#下载FastDeloy提供的yolov5s量化模型文件和测试图片
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
|
||||||
|
tar -xvf yolov5s_quant.tar
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
|
||||||
|
# 在CPU上使用ONNX Runtime推理量化模型
|
||||||
|
./infer_demo yolov5s_quant 000000014439.jpg 0
|
||||||
|
# 在GPU上使用TensorRT推理量化模型
|
||||||
|
./infer_demo yolov5s_quant 000000014439.jpg 1
|
||||||
|
# 在GPU上使用Paddle-TensorRT推理量化模型
|
||||||
|
./infer_demo yolov5s_quant 000000014439.jpg 2
|
||||||
|
```
|
@@ -1,31 +1,32 @@
|
|||||||
|
English | [简体中文](README_CN.md)
|
||||||
# YOLOv5s量化模型 Python部署示例
|
# YOLOv5s Quantized Model Python Deployment Example
|
||||||
本目录下提供的`infer.py`,可以帮助用户快速完成YOLOv5量化模型在CPU/GPU上的部署推理加速.
|
`infer.py` in this directory can help you quickly deploy the quantized YOLOv5s model on CPU/GPU with accelerated inference.
|
||||||
|
|
||||||
## 部署准备
|
## Deployment Preparations
|
||||||
### FastDeploy环境准备
|
### FastDeploy Environment Preparations
|
||||||
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
|
||||||
|
|
||||||
### 量化模型准备
|
### Quantized Model Preparations
|
||||||
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
|
||||||
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
|
- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model by yourself, and use the generated quantized model for deployment.
|
||||||
|
|
||||||
|
|
||||||
## 以量化后的YOLOv5s模型为例, 进行部署
|
## Take the Quantized YOLOv5s Model as an example for Deployment
|
||||||
```bash
|
```bash
|
||||||
#下载部署示例代码
|
# Download sample deployment code.
|
||||||
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
cd examples/vision/detection/yolov5/quantize/python
|
cd examples/vision/detection/yolov5/quantize/python
|
||||||
|
|
||||||
#下载FastDeloy提供的yolov5s量化模型文件和测试图片
|
# Download the yolov5s quantized model and test image provided by FastDeploy.
|
||||||
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
|
||||||
tar -xvf yolov5s_quant.tar
|
tar -xvf yolov5s_quant.tar
|
||||||
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
# 在CPU上使用ONNX Runtime推理量化模型
|
# Use ONNX Runtime to run the quantized model on CPU.
|
||||||
python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
|
python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
|
||||||
# 在GPU上使用TensorRT推理量化模型
|
# Use TensorRT to run the quantized model on GPU.
|
||||||
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
|
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
|
||||||
# 在GPU上使用Paddle-TensorRT推理量化模型
|
# Use Paddle-TensorRT to run the quantized model on GPU.
|
||||||
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
|
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
|
||||||
```
|
```
|
||||||
|
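The `--device` and `--backend` flags above generally map onto FastDeploy `RuntimeOption` calls. The sketch below shows one plausible mapping for the `ort` and `trt` cases; it is an illustration under assumed defaults (including the Paddle-format layout of `yolov5s_quant/`), not the exact contents of `infer.py`.

```python
# Hedged sketch: backend selection for the quantized YOLOv5s model
# (file layout and option calls are assumptions).
import cv2
import fastdeploy as fd

def build_option(device: str, backend: str) -> fd.RuntimeOption:
    option = fd.RuntimeOption()
    if device == "gpu":
        option.use_gpu()
    else:
        option.use_cpu()
    if backend == "trt":
        option.use_trt_backend()  # TensorRT on GPU
    elif backend == "ort":
        option.use_ort_backend()  # ONNX Runtime, typically on CPU
    return option

option = build_option(device="cpu", backend="ort")

# The quantized package is assumed to contain a Paddle-format model.
model = fd.vision.detection.YOLOv5(
    "yolov5s_quant/model.pdmodel",
    "yolov5s_quant/model.pdiparams",
    runtime_option=option,
    model_format=fd.ModelFormat.PADDLE,
)

im = cv2.imread("000000014439.jpg")
print(model.predict(im))
```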
@@ -0,0 +1,32 @@
|
|||||||
|
[English](README.md) | 简体中文
|
||||||
|
# YOLOv5s量化模型 Python部署示例
|
||||||
|
本目录下提供的`infer.py`,可以帮助用户快速完成YOLOv5量化模型在CPU/GPU上的部署推理加速.
|
||||||
|
|
||||||
|
## 部署准备
|
||||||
|
### FastDeploy环境准备
|
||||||
|
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
### 量化模型准备
|
||||||
|
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
|
||||||
|
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
|
||||||
|
|
||||||
|
|
||||||
|
## 以量化后的YOLOv5s模型为例, 进行部署
|
||||||
|
```bash
|
||||||
|
#下载部署示例代码
|
||||||
|
git clone https://github.com/PaddlePaddle/FastDeploy.git
|
||||||
|
cd examples/vision/detection/yolov5/quantize/python
|
||||||
|
|
||||||
|
#下载FastDeloy提供的yolov5s量化模型文件和测试图片
|
||||||
|
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
|
||||||
|
tar -xvf yolov5s_quant.tar
|
||||||
|
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
|
||||||
|
|
||||||
|
# 在CPU上使用ONNX Runtime推理量化模型
|
||||||
|
python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
|
||||||
|
# 在GPU上使用TensorRT推理量化模型
|
||||||
|
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
|
||||||
|
# 在GPU上使用Paddle-TensorRT推理量化模型
|
||||||
|
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
|
||||||
|
```
|
@@ -55,4 +55,4 @@ output_name: detction_result
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../serving/docs/zh_CN/model_configuration.md) to modify the configs in `models/runtime/config.pbtxt`.
|
The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.
|
||||||
|
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
|
|||||||
- The YOLOv5Lite Deployment is based on the code of [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)
|
- The YOLOv5Lite Deployment is based on the code of [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)
|
||||||
and [Pre-trained Model Based on COCO](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)。
|
and [Pre-trained Model Based on COCO](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4).
|
||||||
|
|
||||||
- (1)The *.pt provided by [Official Repository](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) should [Export the ONNX Model](#导出ONNX模型)to complete the deployment;
|
- (1)The *.pt provided by [Official Repository](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment;
|
||||||
- (2)The YOLOv5Lite model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2)The YOLOv5Lite model trained on your own data should [Export the ONNX Model](#Export-the-ONNX-Model). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
## Export the ONNX Model
|
## Export the ONNX Model
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -37,7 +37,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301943-263c8153-a52a-4533-a7c1-ee86d05d314b.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301943-263c8153-a52a-4533-a7c1-ee86d05d314b.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## YOLOv5Lite C++ Interface
|
## YOLOv5Lite C++ Interface
|
||||||
|
|
||||||
@@ -90,4 +90,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5Lite on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5Lite on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -79,4 +79,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOv5Lite Model Description](..)
|
- [YOLOv5Lite Model Description](..)
|
||||||
- [YOLOv5Lite C++ Deployment](../cpp)
|
- [YOLOv5Lite C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -6,7 +6,7 @@ English | [简体中文](README_CN.md)
|
|||||||
- The YOLOv6 deployment is based on [YOLOv6](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) and [Pre-trained Model Based on COCO](https://github.com/meituan/YOLOv6/releases/tag/0.1.0).
|
- The YOLOv6 deployment is based on [YOLOv6](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) and [Pre-trained Model Based on COCO](https://github.com/meituan/YOLOv6/releases/tag/0.1.0).
|
||||||
|
|
||||||
- (1)The *.onnx provided by [Official Repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can directly conduct deployemnt;
|
- (1)The *.onnx provided by [Official Repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
|
||||||
- (2)Personal models trained by developers should export the ONNX model. Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2)Personal models trained by developers should export the ONNX model. Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -57,7 +57,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301725-390e4abb-db2b-482d-931d-469381322626.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/67993288/184301725-390e4abb-db2b-482d-931d-469381322626.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## YOLOv6 C++ Interface
|
## YOLOv6 C++ Interface
|
||||||
|
|
||||||
@@ -110,4 +110,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeployEnvironment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv6 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv6 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -93,4 +93,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOv6 Model Description](..)
|
- [YOLOv6 Model Description](..)
|
||||||
- [YOLOv6 C++ Deployment](../cpp)
|
- [YOLOv6 C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
- YOLOv7 deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branching code, and [COCO Pre-Trained Models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1).
|
- YOLOv7 deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branching code, and [COCO Pre-Trained Models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1).
|
||||||
|
|
||||||
- (1)The *.pt provided by the [Official Library](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [export ONNX model](#export ONNX model) operation; *.trt and *.pose models do not support deployment.
|
- (1)The *.pt provided by the [Official Library](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [export ONNX model](#Export-ONNX-Model) operation; *.trt and *.pose models do not support deployment.
|
||||||
- (2)As for YOLOv7 model trained on customized data, please follow the operations guidelines in [Export ONNX model](#Export-ONNX-Model) and then refer to [Detailed Deployment Tutorials](#Detailed-Deployment-Tutorials) to complete the deployment.
|
- (2)As for YOLOv7 model trained on customized data, please follow the operations guidelines in [Export ONNX model](#Export-ONNX-Model) and then refer to [Detailed Deployment Tutorials](#Detailed-Deployment-Tutorials) to complete the deployment.
|
||||||
|
|
||||||
## Export ONNX Model
|
## Export ONNX Model
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -50,7 +50,7 @@ The visualized result after running is as follows
|
|||||||
<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">
|
<img width="640" src="https://user-images.githubusercontent.com/67993288/183847558-abcd9a57-9cd9-4891-b09a-710963c99b74.jpg">
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## YOLOv7 C++ Interface
|
## YOLOv7 C++ Interface
|
||||||
|
|
||||||
@@ -103,4 +103,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -5,8 +5,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Two steps before deployment:
|
Two steps before deployment:
|
||||||
|
|
||||||
- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
|
|
||||||
This doc provides a quick `infer.py` demo of YOLOv7 deployment on CPU/GPU, and accelerated GPU deployment by TensorRT. Run the following command:
|
This doc provides a quick `infer.py` demo of YOLOv7 deployment on CPU/GPU, and accelerated GPU deployment by TensorRT. Run the following command:
|
||||||
|
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
The YOLOv7End2EndORT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1)branch code and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1). Attention: YOLOv7End2EndORT is designed for the inference of exported End2End models in the [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) version in YOLOv7. YOLOv7 class is for the inference of models without nms. YOLOv7End2EndTRT is for the inference of End2End models in the [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111) version.
|
The YOLOv7End2EndORT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branch code and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1). Attention: YOLOv7End2EndORT is designed for the inference of exported End2End models in the [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) version in YOLOv7. YOLOv7 class is for the inference of models without nms. YOLOv7End2EndTRT is for the inference of End2End models in the [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111) version.
|
||||||
|
|
||||||
- (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#导出ONNX模型) to complete the employment. The deployment of *.trt and *.pose models is not supported.
|
- (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. The deployment of *.trt and *.pose models is not supported.
|
||||||
- (2)The YOLOv7 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2)The YOLOv7 model trained on your own data should [Export the ONNX Model](#Export-the-ONNX-Model). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
## Export the ONNX Model
|
## Export the ONNX Model
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Two steps before deployment
|
Two steps before deployment
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -39,7 +39,7 @@ The visualized result after running is as follows
|
|||||||
</div>
|
</div>
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
Attention: YOLOv7End2EndORT is designed for the inference of End2End models with [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) among the YOLOv7 exported models. For models without nms, use YOLOv7 class for inference. For End2End models with [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111), use YOLOv7End2EndTRT for inference.
|
Attention: YOLOv7End2EndORT is designed for the inference of End2End models with [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) among the YOLOv7 exported models. For models without nms, use YOLOv7 class for inference. For End2End models with [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111), use YOLOv7End2EndTRT for inference.
|
||||||
|
|
||||||
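The same class-selection rule applies in Python. The sketch below is illustrative only: the ONNX file names are placeholders, and which class fits your model depends entirely on how it was exported (plain, ORT_NMS end-to-end, or TRT_NMS end-to-end).

```python
# Hedged sketch: matching the FastDeploy class to the YOLOv7 export flavor
# (file names are placeholders).
import fastdeploy as fd

# Exported without fused NMS -> use the plain YOLOv7 class.
plain_model = fd.vision.detection.YOLOv7("yolov7.onnx")

# Exported end-to-end with ORT_NMS -> use YOLOv7End2EndORT.
ort_e2e_model = fd.vision.detection.YOLOv7End2EndORT("yolov7-end2end-ort.onnx")

# Exported end-to-end with TRT_NMS -> use YOLOv7End2EndTRT on GPU with TensorRT.
trt_option = fd.RuntimeOption()
trt_option.use_gpu()
trt_option.use_trt_backend()
trt_e2e_model = fd.vision.detection.YOLOv7End2EndTRT(
    "yolov7-end2end-trt.onnx", runtime_option=trt_option
)
```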
@@ -92,4 +92,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Two steps before deployment
|
Two steps before deployment
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv7End2End on CPU/GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv7End2End on CPU/GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -83,4 +83,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOv7End2EndORT Model Description](..)
|
- [YOLOv7End2EndORT Model Description](..)
|
||||||
- [YOLOv7End2EndORT C++ Deployment](../cpp)
|
- [YOLOv7End2EndORT C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
The YOLOv7End2EndTRT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branch code and [Pre-trained Model Baesd on COCO](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1). Attention: YOLOv7End2EndTRT is designed for the inference of exported End2End models in the [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111) version in YOLOv7. YOLOv7 class is for the inference of models without nms. YOLOv7End2EndORT is for the inference of End2End models in the [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) version.
|
The YOLOv7End2EndTRT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branch code and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1). Attention: YOLOv7End2EndTRT is designed for the inference of exported End2End models in the [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111) version in YOLOv7. YOLOv7 class is for the inference of models without nms. YOLOv7End2EndORT is for the inference of End2End models in the [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) version.
|
||||||
|
|
||||||
- (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#导出ONNX模型) to complete the deployment. The deployment of *.trt and *.pose models is not supported.
|
- (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. The deployment of *.trt and *.pose models is not supported.
|
||||||
- (2)The YOLOv7 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2)The YOLOv7 model trained on your own data should [Export the ONNX Model](#Export-the-ONNX-Model). Please refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment o
|
|||||||
|
|
||||||
Two steps before deployment
|
Two steps before deployment
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -87,4 +87,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Two steps before deployment
|
Two steps before deployment
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl p ackage. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv7End2EndTRT accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv7End2EndTRT accelerated by TensorRT. The script is as follows
|
||||||
```bash
|
```bash
|
||||||
@@ -78,4 +78,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOv7End2EndTRT Model Description](..)
|
- [YOLOv7End2EndTRT Model Description](..)
|
||||||
- [YOLOv7End2EndTRT C++ Deployment](../cpp)
|
- [YOLOv7End2EndTRT C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -2,10 +2,10 @@ English | [简体中文](README_CN.md)
|
|||||||
# YOLOX Ready-to-deploy Model
|
# YOLOX Ready-to-deploy Model
|
||||||
|
|
||||||
|
|
||||||
- The YOLOX deployment is based on [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) and [coco's pre-trained models](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0)。
|
- The YOLOX deployment is based on [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) and [coco's pre-trained models](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0).
|
||||||
|
|
||||||
- (1)The *.pth provided by [Official Repository](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0) should export the ONNX model to complete the deployment;
|
- (1)The *.pth provided by [Official Repository](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0) should export the ONNX model to complete the deployment;
|
||||||
- (2)The YOLOX model trained by personal data should export the ONNX model. Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
|
- (2)The YOLOX model trained by personal data should export the ONNX model. Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
@@ -6,8 +6,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Two steps before deployment
|
Two steps before deployment
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
|
||||||
|
|
||||||
@@ -39,7 +39,7 @@ The visualized result after running is as follows
|
|||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
|
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## YOLOX C++ Interface
|
## YOLOX C++ Interface
|
||||||
|
|
||||||
@@ -94,4 +94,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Two steps before deployment
|
Two steps before deployment
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of YOLOX on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of YOLOX on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
|
||||||
|
|
||||||
@@ -77,4 +77,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [YOLOX Model Description](..)
|
- [YOLOX Model Description](..)
|
||||||
- [YOLOX C++ Deployment](../cpp)
|
- [YOLOX C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation.
|
Before deployment, two steps require confirmation.
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.2 or above (x.x.x>=1.0.2), or nightly built version is required to support this model.
|
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.2 or above (x.x.x>=1.0.2), or nightly built version is required to support this model.
|
||||||
|
|
||||||
@@ -38,7 +38,7 @@ The visualized result after running is as follows
|
|||||||
</div>
|
</div>
|
||||||
|
|
||||||
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
|
||||||
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
|
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
|
||||||
|
|
||||||
## FaceLandmark1000 C++ Interface
|
## FaceLandmark1000 C++ Interface
|
||||||
|
|
||||||
@@ -83,4 +83,4 @@ Users can modify the following pre-processing parameters to their needs, which a
|
|||||||
- [Model Description](../../)
|
- [Model Description](../../)
|
||||||
- [Python Deployment](../python)
|
- [Python Deployment](../python)
|
||||||
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
|
|||||||
|
|
||||||
Before deployment, two steps require confirmation
|
Before deployment, two steps require confirmation
|
||||||
|
|
||||||
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
|
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
|
||||||
|
|
||||||
This directory provides examples that `infer.py` fast finishes the deployment of FaceLandmark1000 models on CPU/GPU and GPU accelerated by TensorRT. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model. The script is as follows
|
This directory provides examples that `infer.py` fast finishes the deployment of FaceLandmark1000 models on CPU/GPU and GPU accelerated by TensorRT. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model. The script is as follows
|
||||||
|
|
||||||
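Loading and running FaceLandmark1000 with the Python API follows the same pattern as the other vision models. The sketch below assumes the class lives under `fd.vision.facealign` and that `vis_face_alignment` is the matching visualization helper; both names, and the file names used, should be verified against the `infer.py` in this directory.

```python
# Hedged sketch of FaceLandmark1000 inference
# (module path, helper name and file names are assumptions).
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_cpu()

# model_file points at the downloaded ONNX model; the name below is a placeholder.
model = fd.vision.facealign.FaceLandmark1000("FaceLandmark1000.onnx", runtime_option=option)

im = cv2.imread("face_input.jpg")  # placeholder test image
result = model.predict(im)         # predicted face landmarks
print(result)

vis = fd.vision.vis_face_alignment(im, result)
cv2.imwrite("visualized_result.jpg", vis)
```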
@@ -68,4 +68,4 @@ FaceLandmark1000 model loading and initialization, among which model_file is the
|
|||||||
- [FaceLandmark1000 Model Description](..)
|
- [FaceLandmark1000 Model Description](..)
|
||||||
- [FaceLandmark1000 C++ Deployment](../cpp)
|
- [FaceLandmark1000 C++ Deployment](../cpp)
|
||||||
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
- [Model Prediction Results](../../../../../docs/api/vision_results/)
|
||||||
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
|
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
|
||||||
|