[Doc] Update English version of some documents (#1083)

* First commit

* Add one missing translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize wording in one place

* Standardize wording in one place

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Point the links in the vision_results/ README to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles

* Change 'article' to 'document'

* Update the English version of some documents in examples

* Add English version of documents in examples/vision

* Sync to current branch

* Add english version of documents in examples

* Add english version of documents in examples

* Add english version of documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples
charl-u
2023-01-09 10:08:19 +08:00
committed by GitHub
parent 61c2f87e0c
commit cbf88a46fa
164 changed files with 1557 additions and 777 deletions


@@ -1,45 +1,46 @@
# YOLOv5 量化模型 C++ 部署示例
English | [简体中文](README_CN.md)
# YOLOv5 Quantized Model C++ Deployment Example
本目录下提供的 `infer.cc`,可以帮助用户快速完成 YOLOv5 量化模型在 A311D 上的部署推理加速。
The `infer.cc` provided in this directory helps you quickly complete accelerated inference deployment of the quantized YOLOv5 model on A311D.
## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
## Deployment Preparation
### FastDeploy Cross-compilation Environment Preparation
1. For the software and hardware requirements and the preparation of the cross-compilation environment, please refer to [FastDeploy Cross-compilation Environment Preparation](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
### 量化模型准备
可以直接使用由 FastDeploy 提供的量化模型进行部署,也可以按照如下步骤准备量化模型:
1. 按照 [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) 官方导出方式导出 ONNX 模型,或者直接使用如下命令下载
### Quantized Model Preparation
You can directly deploy the quantized model provided by FastDeploy, or prepare one yourself as follows:
1. Export the ONNX model following the official [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) export method, or download it directly with the following command:
```bash
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
```
2. 准备 300 张左右量化用的图片,也可以使用如下命令下载我们准备好的数据。
2. Prepare about 300 images for quantization, or download the data we have prepared with the following command:
```bash
wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
tar -xf COCO_val_320.tar.gz
```
3. 使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。
3. Use the [one-click model auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and deploy the resulting quantized model.
```bash
fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
```
4. YOLOv5 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了 YOLOv5 模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
4. The YOLOv5 model requires heterogeneous computing; see [Heterogeneous Computing](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the YOLOv5 model, you can first test with the heterogeneous file we provide to verify whether the accuracy meets the requirements.
```bash
# 先下载我们提供的模型,解压后将其中的 subgraph.txt 文件拷贝到新量化的模型目录中
# First download the model we provide, unzip it and copy the subgraph.txt file to the newly quantized model directory.
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
tar -xvf yolov5s_ptq_model.tar.gz
```
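The copy command itself is not shown above; a minimal sketch, assuming the newly quantized model from step 3 was saved to `yolov5s_ptq_model_new/`:
```bash
# Copy the heterogeneous-computing config from the downloaded model into the
# newly quantized model directory (target name assumed from step 3's --save_dir)
cp yolov5s_ptq_model/subgraph.txt yolov5s_ptq_model_new/
```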
更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
For more quantization-related information, please refer to [Model Quantization](../../quantize/README.md).
## 在 A311D 上部署量化后的 YOLOv5 检测模型
请按照以下步骤完成在 A311D 上部署 YOLOv5 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
## Deploying the Quantized YOLOv5 Detection Model on A311D
Please follow these steps to complete the deployment of the quantized YOLOv5 model on A311D.
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
2. 将编译后的库拷贝到当前目录,可使用如下命令:
2. Copy the compiled library to the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
```
3. 在当前路径下载部署所需的模型和示例图片:
3. Download the model and example image required for deployment into the current directory:
```bash
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
mkdir models && mkdir images
@@ -50,26 +51,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
cp -r 000000014439.jpg images
```
4. 编译部署示例,可使入如下命令:
4. Compile the deployment example with the following commands:
```bash
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
# After a successful build, an install folder is generated, containing the demo executable and the libraries required for deployment.
```
5. 基于 adb 工具部署 YOLOv5 检测模型到晶晨 A311D
5. Deploy the YOLOv5 detection model to the Amlogic A311D using the adb tool.
```bash
# 进入 install 目录
# Go to the install directory.
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
# 如下命令表示bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
# The command format is: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
```
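If the device ID is not known in advance, it can be looked up with adb; a minimal sketch, assuming a single connected device:
```bash
# List attached devices and take the first serial number as DEVICE_ID
DEVICE_ID=$(adb devices | sed -n '2p' | awk '{print $1}')
echo "Using device: ${DEVICE_ID}"
```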
部署成功后,vis_result.jpg 保存的结果如下:
After successful deployment, the result saved in vis_result.jpg is as follows:
<img width="640" src="https://user-images.githubusercontent.com/30516196/203706969-dd58493c-6635-4ee7-9421-41c2e0c9524b.png">
需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
Note in particular that the model deployed on A311D must be a quantized one; for model quantization, please refer to [Model Quantization](../../../../../../docs/en/quantize.md).


@@ -0,0 +1,76 @@
[English](README.md) | 简体中文
# YOLOv5 量化模型 C++ 部署示例
本目录下提供的 `infer.cc`,可以帮助用户快速完成 YOLOv5 量化模型在 A311D 上的部署推理加速。
## 部署准备
### FastDeploy 交叉编译环境准备
1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
### 量化模型准备
可以直接使用由 FastDeploy 提供的量化模型进行部署,也可以按照如下步骤准备量化模型:
1. 按照 [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) 官方导出方式导出 ONNX 模型,或者直接使用如下命令下载
```bash
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
```
2. 准备 300 张左右量化用的图片,也可以使用如下命令下载我们准备好的数据。
```bash
wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
tar -xf COCO_val_320.tar.gz
```
3. 使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。
```bash
fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
```
4. YOLOv5 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了 YOLOv5 模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
```bash
# 先下载我们提供的模型,解压后将其中的 subgraph.txt 文件拷贝到新量化的模型目录中
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
tar -xvf yolov5s_ptq_model.tar.gz
```
更多量化相关信息可查阅[模型量化](../../quantize/README.md)
## 在 A311D 上部署量化后的 YOLOv5 检测模型
请按照以下步骤完成在 A311D 上部署 YOLOv5 量化模型:
1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
2. 将编译后的库拷贝到当前目录,可使用如下命令:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
```
3. 在当前路径下载部署所需的模型和示例图片:
```bash
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
tar -xvf yolov5s_ptq_model.tar.gz
cp -r yolov5s_ptq_model models
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp -r 000000014439.jpg images
```
4. 编译部署示例,可使用如下命令:
```bash
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
```
5. 基于 adb 工具部署 YOLOv5 检测模型到晶晨 A311D
```bash
# 进入 install 目录
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
# 如下命令表示bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
```
部署成功后,vis_result.jpg 保存的结果如下:
<img width="640" src="https://user-images.githubusercontent.com/30516196/203706969-dd58493c-6635-4ee7-9421-41c2e0c9524b.png">
需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)


@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
This directory provides examples in which `infer.cc` quickly completes the deployment of YOLOv5 on CPU/GPU, as well as on GPU accelerated by TensorRT.
Before deployment, the following two steps require confirmation:
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeployPrecompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking CPU inference on Linux as an example, the compilation test can be completed by executing the commands below in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
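The commands themselves fall outside this hunk; for reference, a minimal sketch following the same pattern as the quantized-model C++ example later in this diff (the x.x.x version placeholder is an assumption to be replaced with a real release):
```bash
mkdir build && cd build
# Download the precompiled FastDeploy C++ library; x.x.x is a version placeholder
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
```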
@@ -104,4 +104,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)


@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, the following two steps require confirmation:
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples in which `infer.py` quickly completes the deployment of YOLOv5 on CPU/GPU, as well as on GPU accelerated by TensorRT. The script usage is as follows:
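The invocation itself falls outside this hunk; a minimal sketch, assuming the flag names mirror the quantized-model Python example later in this diff and that the model and test image were downloaded beforehand (file names here are assumptions):
```bash
# Run YOLOv5 inference on CPU; model/image names and flags are assumptions
python infer.py --model yolov5s.onnx --image 000000014439.jpg --device cpu
```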
@@ -82,4 +82,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOv5 Model Description](..)
- [YOLOv5 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)


@@ -1 +0,0 @@
README_CN.md


@@ -0,0 +1,36 @@
English | [简体中文](README_CN.md)
# YOLOv5 Python Simple Serving Demo
## Environment
- 1. Prepare the environment and install the FastDeploy Python whl package; refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Server:
```bash
# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/python/serving
# Download model
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
tar xvf yolov5s_infer.tar
# Launch server, change the configurations in server.py to select hardware, backend, etc.
# and use --host, --port to specify IP and port
fastdeploy simple_serving --app server:app
```
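For example, binding the server to an explicit IP and port (the values below are placeholders) might look like:
```bash
# Serve on all interfaces, port 8000 (placeholder values)
fastdeploy simple_serving --app server:app --host 0.0.0.0 --port 8000
```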
Client:
```bash
# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/python/serving
# Download test image
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# Send request and get inference result (Please adapt the IP and port if necessary)
python client.py
```


@@ -1,4 +1,4 @@
简体中文 | [English](README_EN.md)
简体中文 | [English](README.md)
# YOLOv5 Python轻量服务化部署示例


@@ -1,36 +0,0 @@
English | [简体中文](README_CN.md)
# YOLOv5 Python Simple Serving Demo
## Environment
- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Server:
```bash
# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/python/serving
# Download model
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
tar xvf yolov5s_infer.tar
# Launch server, change the configurations in server.py to select hardware, backend, etc.
# and use --host, --port to specify IP and port
fastdeploy simple_serving --app server:app
```
Client:
```bash
# Download demo code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/python/serving
# Download test image
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# Send request and get inference result (Please adapt the IP and port if necessary)
python client.py
```


@@ -1,37 +1,38 @@
# YOLOv5量化模型 C++部署示例
English | [简体中文](README_CN.md)
# YOLOv5 Quantized Model C++ Deployment Example
本目录下提供的`infer.cc`,可以帮助用户快速完成YOLOv5s量化模型在CPU/GPU上的部署推理加速.
The `infer.cc` provided in this directory helps you quickly complete accelerated inference deployment of the quantized YOLOv5s model on CPU/GPU.
## 部署准备
### FastDeploy环境准备
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl包安装参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
## Deployment Preparation
### FastDeploy Environment Preparation
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
### Quantized Model Preparation
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
- 2. Use the [one-click model auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and deploy the resulting quantized model.
## 以量化后的YOLOv5s模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
## Take the Quantized YOLOv5s Model as an Example for Deployment
Run the following commands in this directory to compile and deploy the quantized model. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0).
```bash
mkdir build
cd build
# 下载FastDeploy预编译库用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
#下载FastDeloy提供的yolov5s量化模型文件和测试图片
# Download the quantized yolov5s model file and test image provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
tar -xvf yolov5s_quant.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# 在CPU上使用ONNX Runtime推理量化模型
# Run the quantized model with ONNX Runtime on CPU.
./infer_demo yolov5s_quant 000000014439.jpg 0
# 在GPU上使用TensorRT推理量化模型
# Run the quantized model with TensorRT on GPU.
./infer_demo yolov5s_quant 000000014439.jpg 1
# 在GPU上使用Paddle-TensorRT推理量化模型
# Run the quantized model with Paddle-TensorRT on GPU.
./infer_demo yolov5s_quant 000000014439.jpg 2
```


@@ -0,0 +1,38 @@
[English](README.md) | 简体中文
# YOLOv5量化模型 C++部署示例
本目录下提供的`infer.cc`,可以帮助用户快速完成YOLOv5s量化模型在CPU/GPU上的部署推理加速.
## 部署准备
### FastDeploy环境准备
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl包安装参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
## 以量化后的YOLOv5s模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
```bash
mkdir build
cd build
# 下载FastDeploy预编译库用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
#下载FastDeploy提供的yolov5s量化模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
tar -xvf yolov5s_quant.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# 在CPU上使用ONNX Runtime推理量化模型
./infer_demo yolov5s_quant 000000014439.jpg 0
# 在GPU上使用TensorRT推理量化模型
./infer_demo yolov5s_quant 000000014439.jpg 1
# 在GPU上使用Paddle-TensorRT推理量化模型
./infer_demo yolov5s_quant 000000014439.jpg 2
```


@@ -1,31 +1,32 @@
English | [简体中文](README_CN.md)
# YOLOv5s量化模型 Python部署示例
本目录下提供的`infer.py`,可以帮助用户快速完成YOLOv5量化模型在CPU/GPU上的部署推理加速.
The `infer.py` provided in this directory helps you quickly complete accelerated inference deployment of the quantized YOLOv5s model on CPU/GPU.
## 部署准备
### FastDeploy环境准备
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl包安装参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
## Deployment Preparation
### FastDeploy Environment Preparation
- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
### Quantized Model Preparation
- 1. You can directly use the quantized model provided by FastDeploy for deployment.
- 2. Use the [one-click model auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and deploy the resulting quantized model.
## 以量化后的YOLOv5s模型为例, 进行部署
## Take the Quantized YOLOv5s Model as an Example for Deployment
```bash
#下载部署示例代码
# Download sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5/quantize/python
#下载FastDeloy提供的yolov5s量化模型文件和测试图片
# Download the quantized yolov5s model file and test image provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
tar -xvf yolov5s_quant.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# 在CPU上使用ONNX Runtime推理量化模型
# Run the quantized model with ONNX Runtime on CPU.
python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
# 在GPU上使用TensorRT推理量化模型
# Run the quantized model with TensorRT on GPU.
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
# 在GPU上使用Paddle-TensorRT推理量化模型
# Run the quantized model with Paddle-TensorRT on GPU.
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
```


@@ -0,0 +1,32 @@
[English](README.md) | 简体中文
# YOLOv5s量化模型 Python部署示例
本目录下提供的`infer.py`,可以帮助用户快速完成YOLOv5量化模型在CPU/GPU上的部署推理加速.
## 部署准备
### FastDeploy环境准备
- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. FastDeploy Python whl包安装参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
## 以量化后的YOLOv5s模型为例, 进行部署
```bash
#下载部署示例代码
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5/quantize/python
#下载FastDeploy提供的yolov5s量化模型文件和测试图片
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
tar -xvf yolov5s_quant.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# 在CPU上使用ONNX Runtime推理量化模型
python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
# 在GPU上使用TensorRT推理量化模型
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
# 在GPU上使用Paddle-TensorRT推理量化模型
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
```


@@ -55,4 +55,4 @@ output_name: detction_result
The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../serving/docs/zh_CN/model_configuration.md) to modify the configs in `models/runtime/config.pbtxt`.
The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or with other inference engines, please refer to the [Configs File](../../../../../serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.
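As an illustration only (not taken from the FastDeploy docs): in a Triton-style model config, which FastDeploy Serving builds on, moving a model from CPU to GPU is typically done through `instance_group`; a hedged sketch follows, and the exact fields should be verified against the Model Configuration document linked above.
```bash
# Hypothetical sketch: append a GPU instance_group to models/runtime/config.pbtxt
# (field names follow standard Triton conventions; verify against the linked doc)
cat >> models/runtime/config.pbtxt <<'EOF'
instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]
EOF
```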