[Doc]Update English version of some documents (#1083)

* First commit

* Add a missing translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one writing

* Standardize one writing

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Checkout the link in README in vision_results/ to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles

* modify article to document

* update some english version of documents in examples

* Add english version of documents in examples/visions

* Sync to current branch

* Add english version of documents in examples

* Add english version of documents in examples

* Add english version of documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples

* Update some documents in examples
This commit is contained in:
charl-u
2023-01-09 10:08:19 +08:00
committed by GitHub
parent 61c2f87e0c
commit cbf88a46fa
164 changed files with 1557 additions and 777 deletions


@@ -1,26 +1,27 @@
English | [简体中文](README_CN.md)

# PaddleClas A311D Development Board C++ Deployment Example
`infer.cc` in this directory helps you quickly deploy a quantized PaddleClas model on the A311D for accelerated inference.
## Deployment Preparations
### FastDeploy Cross-compilation Environment Preparations
1. For the software and hardware requirements and how to set up the cross-compilation environment, please refer to [FastDeploy Cross-compilation Environment Preparations](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
### Quantized Model Preparations
1. You can directly deploy the quantized models provided by FastDeploy.
2. Alternatively, you can quantize a model yourself with the [one-click model auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy, and deploy the resulting quantized model. (Note: the quantized classification model still requires the inference_cls.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
For more information on quantization, please refer to [Model Quantization](../../quantize/README.md).
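The yaml-copy step mentioned in the note above can be sketched as follows. The folder names are hypothetical placeholders; substitute your own FP32 and quantized model directories:

```shell
# Hypothetical folder names; substitute your actual model directories.
FP32_DIR=ResNet50_vd_infer    # FP32 model folder (contains inference_cls.yaml)
QUANT_DIR=ResNet50_vd_quant   # folder produced by the auto-compression tool

# For illustration only: create the folders so this sketch is self-contained.
mkdir -p "${FP32_DIR}" "${QUANT_DIR}"
touch "${FP32_DIR}/inference_cls.yaml"

# The self-quantized folder lacks inference_cls.yaml; copy it from the FP32 folder.
cp "${FP32_DIR}/inference_cls.yaml" "${QUANT_DIR}/"
```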
## Deploying the Quantized ResNet50_Vd Classification Model on A311D
Please follow these steps to deploy the quantized ResNet50_Vd model on the A311D:
1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
2. Copy the compiled library to the current directory. You can run the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
```
3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
@@ -31,26 +32,26 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
cp -r ILSVRC2012_val_00000010.jpeg images
```
4. Compile the deployment example. You can run the following commands:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
# After a successful build, an install folder is generated, containing the runnable demo and the libraries required for deployment.
```
5. Deploy the ResNet50 classification model to the Amlogic A311D using the adb tool. You can run the following commands:
```bash
# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
# 如下命令表示bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
# The following line represents: bash run_with_adb.sh, demo needed to run, model path, image path, DEVICE ID.
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```
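For reference, an adb-based deployment script such as run_with_adb.sh typically pushes the demo binary, model folder, and test image to the board and then runs the demo there. The sketch below is a dry-run illustration under an assumed on-device directory and a hypothetical device serial, not the actual script shipped with FastDeploy:

```shell
DEMO=infer_demo
MODEL=resnet50_vd_ptq
IMAGE=ILSVRC2012_val_00000010.jpeg
DEVICE_ID=0123456789                     # hypothetical device serial
REMOTE_DIR=/data/local/tmp/fastdeploy    # assumed working directory on the board

# Dry-run wrapper: prints each adb command instead of executing it.
# Replace `echo adb` with `adb` to run against a real device.
run() { echo "adb -s ${DEVICE_ID} $*"; }

run push "${DEMO}" "${REMOTE_DIR}/"      # copy the demo binary to the device
run push "${MODEL}" "${REMOTE_DIR}/"     # copy the quantized model folder
run push "${IMAGE}" "${REMOTE_DIR}/"     # copy the test image
run shell "cd ${REMOTE_DIR} && ./${DEMO} ${MODEL} ${IMAGE}"
```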
After successful deployment, the result is as follows:
<img width="640" src="https://user-images.githubusercontent.com/30516196/200767389-26519e50-9e4f-4fe1-8d52-260718f73476.png">
Please note that models deployed on the A311D must be quantized. For model quantization, please refer to [Model Quantization](../../../../../../docs/en/quantize.md).