[Doc] Add English version of documents in examples/ (#1042)

* First commit

* Fill in a missed translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one wording

* Standardize one wording

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Change the links in the README in vision_results/ to point to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles

* Change "article" to "document"

* Update some English versions of documents in examples

* Add English version of documents in examples/visions

* Sync to current branch

* Add English version of documents in examples
This commit is contained in:
charl-u
2023-01-06 09:35:12 +08:00
committed by GitHub
parent bb96a6fe8f
commit 1135d33dd7
74 changed files with 2312 additions and 575 deletions

View File

@@ -1,52 +1,53 @@
English | [简体中文](README_CN.md)
# YOLOv5 SOPHGO Deployment Example
## Supported Model List
The YOLOv5 v6.0 deployment model comes from [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0) and the [pretrained models based on COCO](https://github.com/ultralytics/yolov5/releases/tag/v6.0).
## Prepare and Convert the YOLOv5 Deployment Model
Before deploying a model on SOPHGO TPU, you need to convert the Paddle model to a bmodel. The specific steps are as follows:
- Download the pre-trained ONNX model. Please refer to [YOLOv5 Ready-to-deploy Model](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/detection/yolov5).
- Convert the ONNX model to a bmodel. Please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
## Model Conversion Example
Here we take YOLOv5s as an example to show how to convert an ONNX model to a SOPHGO-TPU model.
## Download the YOLOv5s Model
### Download the ONNX YOLOv5s Static Graph Model
```shell
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
```
### Export the bmodel
Here we take the conversion to a BM1684x bmodel as an example. You need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project. For the installation process, please refer to the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
# myname1234 is just an example; you can choose another name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
### 2. Convert the ONNX Model to a bmodel
``` shell
mkdir YOLOv5s && cd YOLOv5s
# Put test images in this folder, and also put the yolov5s.onnx downloaded in the previous step here.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
# Put in the ONNX model file yolov5s.onnx.
mkdir workspace && cd workspace
# Convert the ONNX model to an mlir model. The --output_names parameter can be viewed via NETRON.
model_transform.py \
--model_name yolov5s \
--model_def ../yolov5s.onnx \
@@ -60,7 +61,7 @@ model_transform.py \
--test_result yolov5s_top_outputs.npz \
--mlir yolov5s.mlir
# Convert the mlir model to a BM1684x F32 bmodel.
model_deploy.py \
--mlir yolov5s.mlir \
--quantize F32 \
@@ -69,7 +70,7 @@ model_deploy.py \
--test_reference yolov5s_top_outputs.npz \
--model yolov5s_1684x_f32.bmodel
```
The final bmodel, yolov5s_1684x_f32.bmodel, can run on BM1684x. If you want to further accelerate the model, you can convert the ONNX model to an INT8 bmodel. For details, please refer to the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
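If the FastDeploy Python package is installed on the device, the converted bmodel can be given a quick sanity check before moving on to the C++ or Python examples. The snippet below is only an illustrative sketch, not one of this repository's scripts; it assumes a FastDeploy build with SOPHGO support that provides `RuntimeOption.use_sophgo()` and `ModelFormat.SOPHGO`, and the file names are placeholders.

```python
import cv2
import fastdeploy as fd

# Placeholder paths -- adjust to where the converted model and a test image live.
model_file = "yolov5s_1684x_f32.bmodel"
image_file = "dog.jpg"

# Ask FastDeploy to run on the SOPHGO TPU backend (requires a build with SOPHGO support).
option = fd.RuntimeOption()
option.use_sophgo()

# A bmodel has no separate params file, so an empty string is passed.
model = fd.vision.detection.YOLOv5(
    model_file, "", runtime_option=option, model_format=fd.ModelFormat.SOPHGO)

# Run a single image through the model and print the detections.
im = cv2.imread(image_file)
print(model.predict(im))
```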
## Other Documents
- [C++ Deployment](./cpp)

View File

@@ -0,0 +1,76 @@
[English](README.md) | 简体中文
# YOLOv5 SOPHGO Deployment Example
## Supported Model List
The YOLOv5 v6.0 deployment model comes from [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0) and the [pretrained models based on COCO](https://github.com/ultralytics/yolov5/releases/tag/v6.0).
## Prepare and Convert the YOLOv5 Deployment Model
Before deploying a model on SOPHGO TPU, you need to convert the Paddle model to a bmodel. The specific steps are as follows:
- Download the pre-trained ONNX model. Please refer to [YOLOv5 Ready-to-deploy Model](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/detection/yolov5).
- Convert the ONNX model to a bmodel. Please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
## Model Conversion Example
Here we take YOLOv5s as an example to show how to convert an ONNX model to a SOPHGO-TPU model.
## Download the YOLOv5s Model
### Download the ONNX YOLOv5s Static Graph Model
```shell
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
```
### Export the bmodel
Here we take the conversion to a BM1684x bmodel as an example. You need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project. For the installation process, please refer to the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
# myname1234 is just an example; you can choose another name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
### 2. Convert the ONNX Model to a bmodel
``` shell
mkdir YOLOv5s && cd YOLOv5s
# Put test images in this folder, and also put the yolov5s.onnx downloaded in the previous step here.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
# Put in the ONNX model file yolov5s.onnx.
mkdir workspace && cd workspace
# Convert the ONNX model to an mlir model. The --output_names parameter can be viewed via NETRON.
model_transform.py \
--model_name yolov5s \
--model_def ../yolov5s.onnx \
--input_shapes [[1,3,640,640]] \
--mean 0.0,0.0,0.0 \
--scale 0.0039216,0.0039216,0.0039216 \
--keep_aspect_ratio \
--pixel_format rgb \
--output_names output,350,498,646 \
--test_input ../image/dog.jpg \
--test_result yolov5s_top_outputs.npz \
--mlir yolov5s.mlir
# Convert the mlir model to a BM1684x F32 bmodel.
model_deploy.py \
--mlir yolov5s.mlir \
--quantize F32 \
--chip bm1684x \
--test_input yolov5s_in_f32.npz \
--test_reference yolov5s_top_outputs.npz \
--model yolov5s_1684x_f32.bmodel
```
The final bmodel, yolov5s_1684x_f32.bmodel, can run on BM1684x. If you want to further accelerate the model, you can convert the ONNX model to an INT8 bmodel. For details, please refer to the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
## Other Documents
- [C++ Deployment](./cpp)

View File

@@ -1,43 +1,44 @@
English | [简体中文](README_CN.md)
# YOLOv5 C++ Deployment Example
`infer.cc` in this directory provides a quick example of accelerated deployment of the yolov5s model on a SOPHGO BM1684x board.
Before deployment, the following two steps need to be confirmed:
1. The hardware and software environment meets the requirements.
2. The FastDeploy repository is compiled from scratch according to the development environment.
For the above steps, please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md).
## Generate Basic Directory Files
This routine consists of the following parts:
```text
.
├── CMakeLists.txt
├── build    # Build folder
├── image    # Folder for images
├── infer.cc
└── model    # Folder for model files
```
## Compile
### Compile and Copy the SDK to the thirdpartys Folder
Please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be created in the build directory.
### Copy the Model and Configuration Files to the model Folder
Convert the Paddle model to a SOPHGO bmodel. For the conversion steps, please refer to [this document](../README.md).
Copy the converted SOPHGO bmodel file into the model folder.
### Prepare Test Images in the image Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp 000000014439.jpg ./images
```
### Compile the Example
```bash
cd build
@@ -45,12 +46,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
## Run the Routine
```bash
./infer_demo model images/000000014439.jpg
```
- [Model Description](../../)
- [Model Conversion](../)

View File

@@ -0,0 +1,57 @@
[English](README.md) | 简体中文
# YOLOv5 C++ Deployment Example
`infer.cc` in this directory provides a quick example of accelerated deployment of the yolov5s model on a SOPHGO BM1684x board.
Before deployment, the following two steps need to be confirmed:
1. The hardware and software environment meets the requirements.
2. The FastDeploy repository is compiled from scratch according to the development environment.
For the above steps, please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/cn/build_and_install/sophgo.md).
## Generate Basic Directory Files
This routine consists of the following parts:
```text
.
├── CMakeLists.txt
├── build    # Build folder
├── image    # Folder for images
├── infer.cc
└── model    # Folder for model files
```
## Compile
### Compile and Copy the SDK to the thirdpartys Folder
Please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/cn/build_and_install/sophgo.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be created in the build directory.
### Copy the Model and Configuration Files to the model Folder
Convert the Paddle model to a SOPHGO bmodel. For the conversion steps, please refer to [this document](../README.md).
Copy the converted SOPHGO bmodel file into the model folder.
### Prepare Test Images in the image Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp 000000014439.jpg ./images
```
### Compile the Example
```bash
cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
## Run the Routine
```bash
./infer_demo model images/000000014439.jpg
```
- [Model Description](../../)
- [Model Conversion](../)

View File

@@ -1,23 +1,24 @@
English | [简体中文](README_CN.md)
# YOLOv5 Python Deployment Example
Before deployment, the following step needs to be confirmed:
- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/sophgo.md).
`infer.py` in this directory provides a quick example of deploying the YOLOv5 model on SOPHGO TPU. Run the following script:
```bash
# Download the sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/sophgo/python
# Download the test image.
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# Inference.
python3 infer.py --model_file ./bmodel/yolov5s_1684x_f32.bmodel --image 000000014439.jpg
# The returned result is shown below.
DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
268.480255,81.053055, 298.694794, 169.439026, 0.896569, 0
104.731163,45.661972, 127.583824, 93.449387, 0.869531, 0
@@ -41,6 +42,6 @@ DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
101.406250,152.562500, 118.890625, 169.140625, 0.253891, 24
```
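For reference, the core of `infer.py` follows the standard FastDeploy detection workflow. The sketch below is an illustration rather than the maintained script; it assumes a FastDeploy Python package built with SOPHGO support (`RuntimeOption.use_sophgo()`, `ModelFormat.SOPHGO`) and uses `fd.vision.vis_detection` to draw the results.

```python
import cv2
import fastdeploy as fd

model_file = "./bmodel/yolov5s_1684x_f32.bmodel"  # bmodel converted as described in ../README.md
image_file = "000000014439.jpg"

# Select the SOPHGO TPU backend.
option = fd.RuntimeOption()
option.use_sophgo()

# A bmodel has no separate params file, so an empty string is passed.
model = fd.vision.detection.YOLOv5(
    model_file, "", runtime_option=option, model_format=fd.ModelFormat.SOPHGO)

# Run inference and print the DetectionResult shown above.
im = cv2.imread(image_file)
result = model.predict(im)
print(result)

# Draw the detections and save the visualization.
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```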
## Other Documents
- [YOLOv5 C++ Deployment](../cpp)
- [Converting the YOLOv5 SOPHGO Model](../README.md)

View File

@@ -0,0 +1,47 @@
[English](README.md) | 简体中文
# YOLOv5 Python Deployment Example
Before deployment, the following step needs to be confirmed:
- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/sophgo.md).
`infer.py` in this directory provides a quick example of deploying the YOLOv5 model on SOPHGO TPU. Run the following script:
```bash
# Download the sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/sophgo/python
# Download the test image.
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# Inference.
python3 infer.py --model_file ./bmodel/yolov5s_1684x_f32.bmodel --image 000000014439.jpg
# The returned result is shown below.
DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
268.480255,81.053055, 298.694794, 169.439026, 0.896569, 0
104.731163,45.661972, 127.583824, 93.449387, 0.869531, 0
378.909363,39.750137, 395.608643, 84.243454, 0.868430, 0
158.552979,80.361511, 199.185760, 168.181915, 0.842988, 0
414.375305,90.948090, 506.321899, 280.405182, 0.835842, 0
364.003448,56.608932, 381.978607, 115.968216, 0.815136, 0
351.725128,42.635330, 366.910309, 98.048386, 0.808936, 0
505.888306,114.366791, 593.124878, 275.995270, 0.801361, 0
327.708618,38.363693, 346.849915, 80.893021, 0.794725, 0
583.493408,114.532883, 612.354614, 175.873535, 0.760649, 0
186.470657,44.941360, 199.664505, 61.037643, 0.632591, 0
169.615891,48.014603, 178.141556, 60.888596, 0.613938, 0
25.810200,117.199692, 59.888783, 152.850128, 0.590614, 0
352.145294,46.712723, 381.946075, 106.752151, 0.505329, 0
1.875000,150.734375, 37.968750, 173.781250, 0.404573, 24
464.657288,15.901413, 472.512939, 34.116409, 0.346033, 0
64.625000,135.171875, 84.500000, 154.406250, 0.332831, 24
57.812500,151.234375, 103.000000, 174.156250, 0.332566, 24
165.906250,88.609375, 527.906250, 339.953125, 0.259424, 33
101.406250,152.562500, 118.890625, 169.140625, 0.253891, 24
```
## Other Documents
- [YOLOv5 C++ Deployment](../cpp)
- [Converting the YOLOv5 SOPHGO Model](../README.md)