* results);
+> ```
+>
+> 模型预测入口,输入一组图像并输出风格迁移后的结果。
+>
+> **参数**
+>
+> > * **images**: 输入数据,一组图像数据,注意需为HWC,BGR格式
+> > * **results**: 风格转换后的一组图像,BGR格式
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/generation/anemigan/python/README.md b/examples/vision/generation/anemigan/python/README.md
index 9c4562402..1217b5625 100644
--- a/examples/vision/generation/anemigan/python/README.md
+++ b/examples/vision/generation/anemigan/python/README.md
@@ -1,70 +1,71 @@
-# AnimeGAN Python部署示例
+English | [简体中文](README_CN.md)
+# AnimeGAN Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成AnimeGAN在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides `infer.py`, an example that quickly deploys AnimeGAN on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
```bash
-# 下载部署示例代码
+# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/generation/anemigan/python
-# 下载准备好的测试图片
+# Download prepared test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/style_transfer_testimg.jpg
-# CPU推理
+# CPU inference
python infer.py --model animegan_v1_hayao_60 --image style_transfer_testimg.jpg --device cpu
-# GPU推理
+# GPU inference
python infer.py --model animegan_v1_hayao_60 --image style_transfer_testimg.jpg --device gpu
```
-## AnimeGAN Python接口
+## AnimeGAN Python Interface
```python
fd.vision.generation.AnimeGAN(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-AnimeGAN模型加载和初始化,其中model_file和params_file为用于Paddle inference的模型结构文件和参数文件。
+AnimeGAN model loading and initialization, where model_file and params_file are the model structure file and parameter file for Paddle inference.
-**参数**
+**Parameters**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. PADDLE format by default
-### predict函数
+### predict function
> ```python
> AnimeGAN.predict(input_image)
> ```
>
-> 模型预测入口,输入图像输出风格迁移后的结果。
+> Model prediction interface. Takes an image as input and returns the style transfer result.
>
-> **参数**
+> **Parameters**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **input_image**(np.ndarray): Input data, which must be in HWC layout, BGR format
-> **返回** np.ndarray, 风格转换后的图像,BGR格式
+> **Return** np.ndarray, the image after style transfer in BGR format
-### batch_predict函数
+### batch_predict function
> ```python
-> AnimeGAN.batch_predict函数(input_images)
+> AnimeGAN.batch_predict(input_images)
> ```
>
-> 模型预测入口,输入一组图像并输出风格迁移后的结果。
+> Model prediction interface. Takes a list of images as input and returns the style transfer results.
>
-> **参数**
+> **Parameters**
>
-> > * **input_images**(list(np.ndarray)): 输入数据,一组图像数据,注意需为HWC,BGR格式
+> > * **input_images**(list(np.ndarray)): Input data, a list of images, each in HWC layout, BGR format
-> **返回** list(np.ndarray), 风格转换后的一组图像,BGR格式
+> **Return** list(np.ndarray), a list of images after style transfer, in BGR format
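+
+Below is a minimal usage sketch of the interface described above. It assumes FastDeploy is installed as the `fastdeploy` package and that the exported Paddle model files are available locally; the file paths are illustrative only.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Backend configuration; option.use_gpu() would switch inference to GPU
+option = fd.RuntimeOption()
+option.use_cpu()
+
+# Paths are illustrative; point them at your exported Paddle model files
+model = fd.vision.generation.AnimeGAN(
+    "animegan_v1_hayao_60/model.pdmodel",
+    "animegan_v1_hayao_60/model.pdiparams",
+    runtime_option=option)
+
+im = cv2.imread("style_transfer_testimg.jpg")  # HWC layout, BGR format
+styled = model.predict(im)                     # np.ndarray, BGR format
+cv2.imwrite("styled_result.jpg", styled)
+
+# batch_predict takes a list of images and returns a list of results
+styled_list = model.batch_predict([im, im])
+```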
-## 其它文档
+## Other Documents
-- [风格迁移 模型介绍](..)
-- [C++部署](../cpp)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Style Transfer Model Description](..)
+- [C++ Deployment](../cpp)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/generation/anemigan/python/README_CN.md b/examples/vision/generation/anemigan/python/README_CN.md
new file mode 100644
index 000000000..f69c6b46f
--- /dev/null
+++ b/examples/vision/generation/anemigan/python/README_CN.md
@@ -0,0 +1,71 @@
+[English](README.md) | 简体中文
+# AnimeGAN Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成AnimeGAN在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/generation/anemigan/python
+# 下载准备好的测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/style_transfer_testimg.jpg
+
+# CPU推理
+python infer.py --model animegan_v1_hayao_60 --image style_transfer_testimg.jpg --device cpu
+# GPU推理
+python infer.py --model animegan_v1_hayao_60 --image style_transfer_testimg.jpg --device gpu
+```
+
+## AnimeGAN Python接口
+
+```python
+fd.vision.generation.AnimeGAN(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+AnimeGAN模型加载和初始化,其中model_file和params_file为用于Paddle inference的模型结构文件和参数文件。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+
+### predict函数
+
+> ```python
+> AnimeGAN.predict(input_image)
+> ```
+>
+> 模型预测入口,输入图像输出风格迁移后的结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回** np.ndarray, 风格转换后的图像,BGR格式
+
+### batch_predict函数
+> ```python
+> AnimeGAN.batch_predict函数(input_images)
+> ```
+>
+> 模型预测入口,输入一组图像并输出风格迁移后的结果。
+>
+> **参数**
+>
+> > * **input_images**(list(np.ndarray)): 输入数据,一组图像数据,注意需为HWC,BGR格式
+
+> **返回** list(np.ndarray), 风格转换后的一组图像,BGR格式
+
+## 其它文档
+
+- [风格迁移 模型介绍](..)
+- [C++部署](../cpp)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/headpose/README.md b/examples/vision/headpose/README.md
index b727752e0..eec488e8d 100755
--- a/examples/vision/headpose/README.md
+++ b/examples/vision/headpose/README.md
@@ -1,7 +1,8 @@
-# 头部姿态模型
+English | [简体中文](README_CN.md)
+# Head Pose Model
-FastDeploy目前支持如下头部姿态模型部署
+FastDeploy currently supports the deployment of the following head pose models
-| 模型 | 说明 | 模型格式 | 版本 |
+| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
-| [omasaht/headpose-fsanet-pytorch](./fsanet) | FSANet 系列模型 | ONNX | [CommitID:002549c](https://github.com/omasaht/headpose-fsanet-pytorch/commit/002549c) |
+| [omasaht/headpose-fsanet-pytorch](./fsanet) | FSANet models | ONNX | [CommitID:002549c](https://github.com/omasaht/headpose-fsanet-pytorch/commit/002549c) |
diff --git a/examples/vision/headpose/README_CN.md b/examples/vision/headpose/README_CN.md
new file mode 100644
index 000000000..66ba5e13b
--- /dev/null
+++ b/examples/vision/headpose/README_CN.md
@@ -0,0 +1,8 @@
+[English](README.md) | 简体中文
+# 头部姿态模型
+
+FastDeploy目前支持如下头部姿态模型部署
+
+| 模型 | 说明 | 模型格式 | 版本 |
+| :--- | :--- | :------- | :--- |
+| [omasaht/headpose-fsanet-pytorch](./fsanet) | FSANet 系列模型 | ONNX | [CommitID:002549c](https://github.com/omasaht/headpose-fsanet-pytorch/commit/002549c) |
diff --git a/examples/vision/headpose/fsanet/README.md b/examples/vision/headpose/fsanet/README.md
index 8cddca2cc..ee52924a2 100644
--- a/examples/vision/headpose/fsanet/README.md
+++ b/examples/vision/headpose/fsanet/README.md
@@ -1,25 +1,25 @@
-# FSANet 模型部署
+English | [简体中文](README_CN.md)
+# FSANet Model Deployment
-## 模型版本说明
+## Model Description
- [FSANet](https://github.com/omasaht/headpose-fsanet-pytorch/commit/002549c)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+FastDeploy currently supports the deployment of the following models
-- [FSANet 模型](https://github.com/omasaht/headpose-fsanet-pytorch)
+- [FSANet model](https://github.com/omasaht/headpose-fsanet-pytorch)
-## 下载预训练模型
+## Download Pre-trained Models
-为了方便开发者的测试,下面提供了PFLD导出的各系列模型,开发者可直接下载使用。
-
-| 模型 | 参数大小 | 精度 | 备注 |
+For developers' testing, the exported FSANet models are provided below. Developers can download and use them directly.
+
+| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [fsanet-1x1.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-1x1.onnx) | 1.2M | - |
| [fsanet-var.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-var.onnx) | 1.2MB | - |
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/headpose/fsanet/README_CN.md b/examples/vision/headpose/fsanet/README_CN.md
new file mode 100644
index 000000000..8a14d4f4b
--- /dev/null
+++ b/examples/vision/headpose/fsanet/README_CN.md
@@ -0,0 +1,26 @@
+[English](README.md) | 简体中文
+# FSANet 模型部署
+
+## 模型版本说明
+
+- [FSANet](https://github.com/omasaht/headpose-fsanet-pytorch/commit/002549c)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [FSANet 模型](https://github.com/omasaht/headpose-fsanet-pytorch)
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了PFLD导出的各系列模型,开发者可直接下载使用。
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:---------------------------------------------------------------- |:----- |:----- | :------ |
+| [fsanet-1x1.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-1x1.onnx) | 1.2M | - |
+| [fsanet-var.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-var.onnx) | 1.2MB | - |
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/headpose/fsanet/cpp/README.md b/examples/vision/headpose/fsanet/cpp/README.md
index 1d1b1e943..7b7a766e2 100755
--- a/examples/vision/headpose/fsanet/cpp/README.md
+++ b/examples/vision/headpose/fsanet/cpp/README.md
@@ -1,46 +1,47 @@
-# FSANet C++部署示例
+English | [简体中文](README_CN.md)
+# FSANet C++ Deployment Example
-本目录下提供`infer.cc`快速完成FSANet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides `infer.cc`, an example that quickly deploys FSANet on CPU/GPU, and on GPU with TensorRT acceleration.
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.2以上(x.x.x>=1.0.2), 或使用nightly built版本
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 1.0.2 or above (x.x.x>=1.0.2), or the nightly build, is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-#下载官方转换好的 FSANet 模型文件和测试图片
+# Download the official converted FSANet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-var.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/headpose_input.png
-# CPU推理
+# CPU inference
./infer_demo --model fsanet-var.onnx --image headpose_input.png --device cpu
-# GPU推理
+# GPU inference
./infer_demo --model fsanet-var.onnx --image headpose_input.png --device gpu
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo --model fsanet-var.onnx --image headpose_input.png --device gpu --backend trt
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands work for Linux or MacOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## FSANet C++接口
+## FSANet C++ Interface
-### FSANet 类
+### FSANet Class
```c++
fastdeploy::vision::headpose::FSANet(
@@ -49,28 +50,28 @@ fastdeploy::vision::headpose::FSANet(
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
-FSANet模型加载和初始化,其中model_file为导出的ONNX模型格式。
-**参数**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
-#### Predict函数
+FSANet model loading and initialization, where model_file is the exported ONNX model.
+**Parameters**
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. ONNX format by default
+#### Predict Function
> ```c++
> FSANet::Predict(cv::Mat* im, HeadPoseResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出头部姿态预测结果。
+> Model prediction interface. Takes an image as input and directly outputs the head pose prediction result.
>
-> **参数**
+> **Parameters**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 头部姿态预测结果, HeadPoseResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
-### 类成员变量
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
-> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[112, 112]
+> > * **im**: Input image, which must be in HWC layout, BGR format
+> > * **result**: Head pose prediction results. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of HeadPoseResult
+### Class Member Variable
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results
+> > * **size**(vector<int>): This parameter changes the resize size used during pre-processing, containing two integer elements for [width, height]. Default value is [112, 112]
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/headpose/fsanet/cpp/README_CN.md b/examples/vision/headpose/fsanet/cpp/README_CN.md
new file mode 100644
index 000000000..d8b938531
--- /dev/null
+++ b/examples/vision/headpose/fsanet/cpp/README_CN.md
@@ -0,0 +1,77 @@
+[English](README.md) | 简体中文
+# FSANet C++部署示例
+
+本目录下提供`infer.cc`快速完成FSANet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.2以上(x.x.x>=1.0.2), 或使用nightly built版本
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+#下载官方转换好的 FSANet 模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-var.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/headpose_input.png
+# CPU推理
+./infer_demo --model fsanet-var.onnx --image headpose_input.png --device cpu
+# GPU推理
+./infer_demo --model fsanet-var.onnx --image headpose_input.png --device gpu
+# GPU上TensorRT推理
+./infer_demo --model fsanet-var.onnx --image headpose_input.png --device gpu --backend trt
+```
+
+运行完成可视化结果如下图所示
+
+
+

+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## FSANet C++接口
+
+### FSANet 类
+
+```c++
+fastdeploy::vision::headpose::FSANet(
+ const string& model_file,
+ const string& params_file = "",
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::ONNX)
+```
+FSANet模型加载和初始化,其中model_file为导出的ONNX模型格式。
+**参数**
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
+#### Predict函数
+> ```c++
+> FSANet::Predict(cv::Mat* im, HeadPoseResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出头部姿态预测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 头部姿态预测结果, HeadPoseResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+### 类成员变量
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[112, 112]
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/headpose/fsanet/python/README.md b/examples/vision/headpose/fsanet/python/README.md
index 7863fb1f1..c1f6aa35b 100644
--- a/examples/vision/headpose/fsanet/python/README.md
+++ b/examples/vision/headpose/fsanet/python/README.md
@@ -1,67 +1,68 @@
-# FSANet Python部署示例
+English | [简体中文](README_CN.md)
+# FSANet Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成FSANet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例,保证 FastDeploy 版本 >= 0.6.0 支持FSANet模型。执行如下脚本即可完成
+This directory provides `infer.py`, an example that quickly deploys FSANet on CPU/GPU, and on GPU with TensorRT acceleration. FastDeploy version 0.6.0 or above is required to support this model. Run the following script to complete the deployment
```bash
-#下载部署示例代码
+# Download deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/headpose/fsanet/python
-# 下载FSANet模型文件和测试图片
-## 原版ONNX模型
+# Download the FSANet model files and test images
+## Original ONNX Model
wget https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-var.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/headpose_input.png
-# CPU推理
+# CPU inference
python infer.py --model fsanet-var.onnx --image headpose_input.png --device cpu
-# GPU推理
+# GPU inference
python infer.py --model fsanet-var.onnx --image headpose_input.png --device gpu
-# TRT推理
+# TRT inference
python infer.py --model fsanet-var.onnx --image headpose_input.png --device gpu --backend trt
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## FSANet Python接口
+## FSANet Python Interface
```python
fd.vision.headpose.FSANet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
-FSANet 模型加载和初始化,其中model_file为导出的ONNX模型格式
+FSANet model loading and initialization, where model_file is the exported ONNX model
-**参数**
+**Parameters**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为ONNX
-### predict函数
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. ONNX format by default
+### predict function
> ```python
> FSANet.predict(input_image)
> ```
>
-> 模型预测结口,输入图像直接输出头部姿态预测结果。
+> Model prediction interface. Takes an image as input and directly outputs the head pose prediction result.
>
-> **参数**
+> **Parameters**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
-> **返回**
+> > * **input_image**(np.ndarray): Input data, which must be in HWC layout, BGR format
+> **Return**
>
-> > 返回`fastdeploy.vision.HeadPoseResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.HeadPoseResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure
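+
+Below is a minimal usage sketch of the interface described above, following the download commands earlier in this document (the RuntimeOption setup is an assumption shown for illustration and can be omitted):
+
+```python
+import cv2
+import fastdeploy as fd
+
+option = fd.RuntimeOption()
+option.use_cpu()  # option.use_gpu() switches inference to GPU
+
+# ONNX model, so no params_file is needed
+model = fd.vision.headpose.FSANet("fsanet-var.onnx", runtime_option=option)
+
+im = cv2.imread("headpose_input.png")  # HWC layout, BGR format
+result = model.predict(im)             # fastdeploy.vision.HeadPoseResult
+print(result)
+```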
-## 其它文档
+## Other Documents
-- [FSANet 模型介绍](..)
-- [FSANet C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [FSANet Model Description](..)
+- [FSANet C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/headpose/fsanet/python/README_CN.md b/examples/vision/headpose/fsanet/python/README_CN.md
new file mode 100644
index 000000000..26952db2f
--- /dev/null
+++ b/examples/vision/headpose/fsanet/python/README_CN.md
@@ -0,0 +1,68 @@
+[English](README.md) | 简体中文
+# FSANet Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成FSANet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例,保证 FastDeploy 版本 >= 0.6.0 支持FSANet模型。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/headpose/fsanet/python
+
+# 下载FSANet模型文件和测试图片
+## 原版ONNX模型
+wget https://bj.bcebos.com/paddlehub/fastdeploy/fsanet-var.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/headpose_input.png
+# CPU推理
+python infer.py --model fsanet-var.onnx --image headpose_input.png --device cpu
+# GPU推理
+python infer.py --model fsanet-var.onnx --image headpose_input.png --device gpu
+# TRT推理
+python infer.py --model fsanet-var.onnx --image headpose_input.png --device gpu --backend trt
+```
+
+运行完成可视化结果如下图所示
+
+
+

+
+
+## FSANet Python接口
+
+```python
+fd.vision.headpose.FSANet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
+```
+
+FSANet 模型加载和初始化,其中model_file为导出的ONNX模型格式
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为ONNX
+### predict函数
+
+> ```python
+> FSANet.predict(input_image)
+> ```
+>
+> 模型预测结口,输入图像直接输出头部姿态预测结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> **返回**
+>
+> > 返回`fastdeploy.vision.HeadPoseResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+## 其它文档
+
+- [FSANet 模型介绍](..)
+- [FSANet C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/README.md b/examples/vision/keypointdetection/README.md
index bcc368e91..16d1c6f64 100644
--- a/examples/vision/keypointdetection/README.md
+++ b/examples/vision/keypointdetection/README.md
@@ -1,3 +1,4 @@
+English | [简体中文](README_CN.md)
# 关键点检测模型
FastDeploy目前支持两种关键点检测任务方式的部署
diff --git a/examples/vision/keypointdetection/README_CN.md b/examples/vision/keypointdetection/README_CN.md
new file mode 100644
index 000000000..3cc3f31b5
--- /dev/null
+++ b/examples/vision/keypointdetection/README_CN.md
@@ -0,0 +1,18 @@
+[English](README.md) | 简体中文
+# 关键点检测模型
+
+FastDeploy目前支持两种关键点检测任务方式的部署
+
+| 任务 | 说明 | 模型格式 | 示例 | 版本 |
+| :---| :--- | :--- | :------- | :--- |
+| 单人关键点检测 | 部署PP-TinyPose系列模型,输入图像仅包含单人 | Paddle | 参考[tinypose目录](./tiny_pose/) | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
+| 单人/多人关键点检测 | 部署PicoDet + PP-TinyPose的模型串联任务,输入图像先通过检测模型,得到独立的人像子图后,再经过PP-TinyPose模型检测关键点 | Paddle | 参考[det_keypoint_unite目录](./det_keypoint_unite/) |[Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
+
+# 预训练模型准备
+本文档提供了如下预训练模型,开发者可直接下载使用
+| 模型 | 说明 | 模型格式 | 版本 |
+| :--- | :--- | :------- | :--- |
+| [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 单人关键点检测模型 | Paddle | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
+| [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 单人关键点检测模型 | Paddle | [Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
+| [PicoDet-S-Lcnet-Pedestrian-192x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_192x192_infer.tgz) + [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 单人关键点检测串联配置 | Paddle |[Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
+| [PicoDet-S-Lcnet-Pedestrian-320x320](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz) + [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 多人关键点检测串联配置 | Paddle |[Release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose) |
diff --git a/examples/vision/keypointdetection/det_keypoint_unite/README.md b/examples/vision/keypointdetection/det_keypoint_unite/README.md
index c323a5c1c..46c647c08 100644
--- a/examples/vision/keypointdetection/det_keypoint_unite/README.md
+++ b/examples/vision/keypointdetection/det_keypoint_unite/README.md
@@ -1,38 +1,39 @@
-# PP-PicoDet + PP-TinyPose 联合部署(Pipeline)
+English | [简体中文](README_CN.md)
+# PP-PicoDet + PP-TinyPose Co-deployment (Pipeline)
-## 模型版本说明
+## Model Description
- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)
-目前FastDeploy支持如下模型的部署
+FastDeploy currently supports the deployment of the following models
-- [PP-PicoDet + PP-TinyPose系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+- [PP-PicoDet + PP-TinyPose Models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
-## 准备PP-TinyPose部署模型
+## Prepare PP-TinyPose Deployment Model
-PP-TinyPose以及PP-PicoDet模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
+To export the PP-TinyPose and PP-PicoDet models, please refer to [Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
-**注意**:导出的推理模型包含`model.pdmodel`、`model.pdiparams`和`infer_cfg.yml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息。
+**Attention**: The exported inference model contains three files: `model.pdmodel`, `model.pdiparams` and `infer_cfg.yml`. FastDeploy reads the pre-processing information needed for inference from the yaml file.
-## 下载预训练模型
+## Download Pre-trained Model
-为了方便开发者的测试,下面提供了PP-PicoDet + PP-TinyPose(Pipeline)导出的部分模型,开发者可直接下载使用。
+For developers' testing, some of the models exported from PP-PicoDet + PP-TinyPose (Pipeline) are provided below. Developers can download and use them directly.
-| 应用场景 | 模型 | 参数文件大小 | AP(业务数据集) | AP(COCO Val 单人/多人) | 单人/多人推理耗时 (FP32) | 单人/多人推理耗时(FP16) |
+| Application Scenario | Model | Parameter File Size | AP (Business Dataset) | AP (COCO Val Single/Multi-person) | Single/Multi-person Inference Time (FP32) | Single/Multi-person Inference Time (FP16) |
|:-------------------------------|:--------------------------------- |:----- |:----- | :----- | :----- | :----- |
-| 单人模型配置 |[PicoDet-S-Lcnet-Pedestrian-192x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_192x192_infer.tgz) + [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 4.6MB + 5.3MB | 86.2% | 52.8% | 12.90ms | 9.61ms |
-| 多人模型配置 |[PicoDet-S-Lcnet-Pedestrian-320x320](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz) + [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 4.6M + 5.3MB | 85.7% | 49.9% | 47.63ms | 34.62ms |
+| Single-person Model Configuration |[PicoDet-S-Lcnet-Pedestrian-192x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_192x192_infer.tgz) + [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 4.6MB + 5.3MB | 86.2% | 52.8% | 12.90ms | 9.61ms |
+| Multi-person Model Configuration |[PicoDet-S-Lcnet-Pedestrian-320x320](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz) + [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 4.6M + 5.3MB | 85.7% | 49.9% | 47.63ms | 34.62ms |
-**说明**
-- 关键点检测模型的精度指标是基于对应行人检测模型检测得到的检测框。
-- 精度测试中去除了flip操作,且检测置信度阈值要求0.5。
-- 速度测试环境为qualcomm snapdragon 865,采用arm8下4线程推理。
-- Pipeline速度包含模型的预处理、推理及后处理部分。
-- 精度测试中,为了公平比较,多人数据去除了6人以上(不含6人)的图像。
+**Note**
+- The accuracy metrics of the keypoint detection model are based on the detection boxes obtained from the corresponding pedestrian detection model.
+- The flip operation is removed in the accuracy test, and the detection confidence threshold is set to 0.5.
+- The speed test environment is Qualcomm Snapdragon 865 with 4-thread inference on arm8.
+- The pipeline speed covers the pre-processing, inference, and post-processing of the models.
+- In the accuracy test, images with more than 6 people were removed from the multi-person data for a fair comparison.
-更多信息请参考:[PP-TinyPose 官方文档](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+For more information, refer to the [PP-TinyPose official documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/keypointdetection/det_keypoint_unite/README_CN.md b/examples/vision/keypointdetection/det_keypoint_unite/README_CN.md
new file mode 100644
index 000000000..ab7dba213
--- /dev/null
+++ b/examples/vision/keypointdetection/det_keypoint_unite/README_CN.md
@@ -0,0 +1,39 @@
+[English](README.md) | 简体中文
+# PP-PicoDet + PP-TinyPose 联合部署(Pipeline)
+
+## 模型版本说明
+
+- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)
+
+目前FastDeploy支持如下模型的部署
+
+- [PP-PicoDet + PP-TinyPose系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+
+## 准备PP-TinyPose部署模型
+
+PP-TinyPose以及PP-PicoDet模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
+
+**注意**:导出的推理模型包含`model.pdmodel`、`model.pdiparams`和`infer_cfg.yml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息。
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了PP-PicoDet + PP-TinyPose(Pipeline)导出的部分模型,开发者可直接下载使用。
+
+| 应用场景 | 模型 | 参数文件大小 | AP(业务数据集) | AP(COCO Val 单人/多人) | 单人/多人推理耗时 (FP32) | 单人/多人推理耗时(FP16) |
+|:-------------------------------|:--------------------------------- |:----- |:----- | :----- | :----- | :----- |
+| 单人模型配置 |[PicoDet-S-Lcnet-Pedestrian-192x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_192x192_infer.tgz) + [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 4.6MB + 5.3MB | 86.2% | 52.8% | 12.90ms | 9.61ms |
+| 多人模型配置 |[PicoDet-S-Lcnet-Pedestrian-320x320](https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz) + [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 4.6M + 5.3MB | 85.7% | 49.9% | 47.63ms | 34.62ms |
+
+**说明**
+- 关键点检测模型的精度指标是基于对应行人检测模型检测得到的检测框。
+- 精度测试中去除了flip操作,且检测置信度阈值要求0.5。
+- 速度测试环境为qualcomm snapdragon 865,采用arm8下4线程推理。
+- Pipeline速度包含模型的预处理、推理及后处理部分。
+- 精度测试中,为了公平比较,多人数据去除了6人以上(不含6人)的图像。
+
+更多信息请参考:[PP-TinyPose 官方文档](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/keypointdetection/det_keypoint_unite/cpp/README.md b/examples/vision/keypointdetection/det_keypoint_unite/cpp/README.md
index 57c513cda..e0f392351 100755
--- a/examples/vision/keypointdetection/det_keypoint_unite/cpp/README.md
+++ b/examples/vision/keypointdetection/det_keypoint_unite/cpp/README.md
@@ -1,53 +1,53 @@
-# PP-PicoDet + PP-TinyPose (Pipeline) C++部署示例
+English | [简体中文](README_CN.md)
+# PP-PicoDet + PP-TinyPose (Pipeline) C++ Deployment Example
-本目录下提供`det_keypoint_unite_infer.cc`快速完成多人模型配置 PP-PicoDet + PP-TinyPose 在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图多人关键点检测`示例。执行如下脚本即可完成
->> **注意**: PP-TinyPose单模型独立部署,请参考[PP-TinyPose 单模型](../../tiny_pose/cpp/README.md)
+This directory provides `det_keypoint_unite_infer.cc`, a `multi-person keypoint detection in a single image` example that quickly deploys the multi-person configuration PP-PicoDet + PP-TinyPose on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
+>> **Attention**: For standalone deployment of the PP-TinyPose single model, refer to [PP-TinyPose Single Model](../../tiny_pose/cpp/README.md)
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载PP-TinyPose和PP-PicoDet模型文件和测试图片
+# Download the PP-TinyPose and PP-PicoDet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg
-# CPU推理
+# CPU inference
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 0
-# GPU推理
+# GPU inference
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 2
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 3
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands work for Linux or MacOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## PP-TinyPose C++接口
+## PP-TinyPose C++ Interface
-### PP-TinyPose类
+### PP-TinyPose Class
```c++
fastdeploy::pipeline::PPTinyPose(
@@ -55,32 +55,31 @@ fastdeploy::pipeline::PPTinyPose(
fastdeploy::vision::keypointdetection::PPTinyPose* pptinypose_model)
```
-PPTinyPose Pipeline模型加载和初始化。
+PPTinyPose Pipeline model loading and initialization.
-**参数**
+**Parameters**
-> * **model_det_modelfile**(fastdeploy::vision::detection): 初始化后的检测模型,参考[PP-TinyPose](../../tiny_pose/README.md)
-> * **pptinypose_model**(fastdeploy::vision::keypointdetection): 初始化后的检测模型[Detection](../../../detection/paddledetection/README.md),暂时只提供PaddleDetection系列
+> * **model_det_modelfile**(fastdeploy::vision::detection): Initialized detection model. Refer to [PP-TinyPose](../../tiny_pose/README.md)
+> * **pptinypose_model**(fastdeploy::vision::keypointdetection): Initialized detection model [Detection](../../../detection/paddledetection/README.md). Currently only PaddleDetection series is available.
-#### Predict函数
+#### Predict Function
> ```c++
> PPTinyPose::Predict(cv::Mat* im, KeyPointDetectionResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出关键点检测结果。
+> Model prediction interface. Takes an image as input and directly outputs the keypoint detection results.
>
-> **参数**
+> **Parameters**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 关键点检测结果,包括关键点的坐标以及关键点对应的概率值, KeyPointDetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **im**: Input image, which must be in HWC layout, BGR format
+> > * **result**: Keypoint detection results, including the keypoint coordinates and the corresponding probability values. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of KeyPointDetectionResult
-### 类成员属性
-#### 后处理参数
-> > * **detection_model_score_threshold**(bool):
-输入PP-TinyPose模型前,Detectin模型过滤检测框的分数阈值
+### Class Member Property
+#### Post-processing Parameter
+> > * **detection_model_score_threshold**(bool): Score threshold of the Detection model for filtering detection boxes before they are fed to the PP-TinyPose model
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/det_keypoint_unite/cpp/README_CN.md b/examples/vision/keypointdetection/det_keypoint_unite/cpp/README_CN.md
new file mode 100644
index 000000000..b6eafe475
--- /dev/null
+++ b/examples/vision/keypointdetection/det_keypoint_unite/cpp/README_CN.md
@@ -0,0 +1,87 @@
+[English](README.md) | 简体中文
+# PP-PicoDet + PP-TinyPose (Pipeline) C++部署示例
+
+本目录下提供`det_keypoint_unite_infer.cc`快速完成多人模型配置 PP-PicoDet + PP-TinyPose 在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图多人关键点检测`示例。执行如下脚本即可完成
+>> **注意**: PP-TinyPose单模型独立部署,请参考[PP-TinyPose 单模型](../../tiny_pose/cpp/README.md)
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+
+以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载PP-TinyPose和PP-PicoDet模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
+tar -xvf PP_TinyPose_256x192_infer.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
+tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg
+
+# CPU推理
+./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 0
+# GPU推理
+./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 1
+# GPU上TensorRT推理
+./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 2
+# 昆仑芯XPU推理
+./infer_demo PP_PicoDet_V2_S_Pedestrian_320x320_infer PP_TinyPose_256x192_infer 000000018491.jpg 3
+```
+
+运行完成可视化结果如下图所示
+
+

+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## PP-TinyPose C++接口
+
+### PP-TinyPose类
+
+```c++
+fastdeploy::pipeline::PPTinyPose(
+ fastdeploy::vision::detection::PPYOLOE* det_model,
+ fastdeploy::vision::keypointdetection::PPTinyPose* pptinypose_model)
+```
+
+PPTinyPose Pipeline模型加载和初始化。
+
+**参数**
+
+> * **model_det_modelfile**(fastdeploy::vision::detection): 初始化后的检测模型,参考[PP-TinyPose](../../tiny_pose/README.md)
+> * **pptinypose_model**(fastdeploy::vision::keypointdetection): 初始化后的检测模型[Detection](../../../detection/paddledetection/README.md),暂时只提供PaddleDetection系列
+
+#### Predict函数
+
+> ```c++
+> PPTinyPose::Predict(cv::Mat* im, KeyPointDetectionResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出关键点检测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 关键点检测结果,包括关键点的坐标以及关键点对应的概率值, KeyPointDetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 后处理参数
+> > * **detection_model_score_threshold**(bool):
+输入PP-TinyPose模型前,Detectin模型过滤检测框的分数阈值
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/det_keypoint_unite/python/README.md b/examples/vision/keypointdetection/det_keypoint_unite/python/README.md
index a6366b800..48621705c 100755
--- a/examples/vision/keypointdetection/det_keypoint_unite/python/README.md
+++ b/examples/vision/keypointdetection/det_keypoint_unite/python/README.md
@@ -1,76 +1,77 @@
-# PP-PicoDet + PP-TinyPose (Pipeline) Python部署示例
+English | [简体中文](README_CN.md)
+# PP-PicoDet + PP-TinyPose (Pipeline) Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`det_keypoint_unite_infer.py`快速完成多人模型配置 PP-PicoDet + PP-TinyPose 在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图多人关键点检测`示例。执行如下脚本即可完成
->> **注意**: PP-TinyPose单模型独立部署,请参考[PP-TinyPose 单模型](../../tiny_pose//python/README.md)
+This directory provides `det_keypoint_unite_infer.py`, a `multi-person keypoint detection in a single image` example that quickly deploys the multi-person configuration PP-PicoDet + PP-TinyPose on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete the deployment
+>> **Attention**: For standalone deployment of PP-TinyPose single model, refer to [PP-TinyPose Single Model](../../tiny_pose//python/README.md)
```bash
-#下载部署示例代码
+# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/keypointdetection/det_keypoint_unite/python
-# 下载PP-TinyPose模型文件和测试图片
+# Download PP-TinyPose model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg
-# CPU推理
+# CPU inference
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device cpu
-# GPU推理
+# GPU inference
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Note: the first run of TensorRT inference serializes the model, which takes some time; please be patient)
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device gpu --use_trt True
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device kunlunxin
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## PPTinyPosePipeline Python接口
+## PPTinyPosePipeline Python Interface
```python
fd.pipeline.PPTinyPose(det_model=None, pptinypose_model=None)
```
-PPTinyPosePipeline模型加载和初始化,其中det_model是使用`fd.vision.detection.PicoDet`[参考Detection文档](../../../detection/paddledetection/python/)初始化的检测模型,pptinypose_model是使用`fd.vision.keypointdetection.PPTinyPose`[参考PP-TinyPose文档](../../tiny_pose/python/)初始化的检测模型
+PPTinyPosePipeline model loading and initialization, where det_model is the detection model initialized with `fd.vision.detection.PicoDet` ([refer to the Detection document](../../../detection/paddledetection/python/)) and pptinypose_model is the keypoint detection model initialized with `fd.vision.keypointdetection.PPTinyPose` ([refer to the PP-TinyPose document](../../tiny_pose/python/))
-**参数**
+**Parameters**
-> * **det_model**(str): 初始化后的检测模型
-> * **pptinypose_model**(str): 初始化后的PP-TinyPose模型
+> * **det_model**(str): Initialized detection model
+> * **pptinypose_model**(str): Initialized PP-TinyPose model
-### predict函数
+### predict function
> ```python
> PPTinyPosePipeline.predict(input_image)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Takes an image as input and directly outputs the keypoint detection results.
>
-> **参数**
+> **Parameters**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **input_image**(np.ndarray): Input data, which must be in HWC layout, BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.KeyPointDetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.KeyPointDetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
-### 类成员属性
-#### 后处理参数
+### Class Member Property
+#### Post-processing Parameter
> > * **detection_model_score_threshold**(bool):
-输入PP-TinyPose模型前,Detectin模型过滤检测框的分数阈值
+Score threshold of the Detection model for filtering detection boxes before they are fed to the PP-TinyPose model
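+
+Below is a minimal end-to-end sketch of the pipeline interface described above. It assumes the model directories downloaded earlier in this document, each containing `model.pdmodel`, `model.pdiparams` and `infer_cfg.yml`; the threshold value is illustrative only.
+
+```python
+import cv2
+import fastdeploy as fd
+
+option = fd.RuntimeOption()
+option.use_cpu()  # use_gpu() / use_trt_backend() enable the GPU / TensorRT variants
+
+# Detection model (PicoDet) and keypoint model (PP-TinyPose); the directory
+# names follow the download commands above
+det_model = fd.vision.detection.PicoDet(
+    "PP_PicoDet_V2_S_Pedestrian_320x320_infer/model.pdmodel",
+    "PP_PicoDet_V2_S_Pedestrian_320x320_infer/model.pdiparams",
+    "PP_PicoDet_V2_S_Pedestrian_320x320_infer/infer_cfg.yml",
+    runtime_option=option)
+tinypose_model = fd.vision.keypointdetection.PPTinyPose(
+    "PP_TinyPose_256x192_infer/model.pdmodel",
+    "PP_TinyPose_256x192_infer/model.pdiparams",
+    "PP_TinyPose_256x192_infer/infer_cfg.yml",
+    runtime_option=option)
+
+pipeline = fd.pipeline.PPTinyPose(det_model=det_model, pptinypose_model=tinypose_model)
+pipeline.detection_model_score_threshold = 0.5  # filter detection boxes before PP-TinyPose
+
+im = cv2.imread("000000018491.jpg")  # HWC layout, BGR format
+result = pipeline.predict(im)        # fastdeploy.vision.KeyPointDetectionResult
+print(result)
+```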
-## 其它文档
+## Other Documents
-- [Pipeline 模型介绍](..)
-- [Pipeline C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Pipeline Model Description](..)
+- [Pipeline C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/det_keypoint_unite/python/README_CN.md b/examples/vision/keypointdetection/det_keypoint_unite/python/README_CN.md
new file mode 100644
index 000000000..7f994d826
--- /dev/null
+++ b/examples/vision/keypointdetection/det_keypoint_unite/python/README_CN.md
@@ -0,0 +1,77 @@
+[English](README.md) | 简体中文
+# PP-PicoDet + PP-TinyPose (Pipeline) Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`det_keypoint_unite_infer.py`快速完成多人模型配置 PP-PicoDet + PP-TinyPose 在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图多人关键点检测`示例。执行如下脚本即可完成
+>> **注意**: PP-TinyPose单模型独立部署,请参考[PP-TinyPose 单模型](../../tiny_pose//python/README.md)
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/keypointdetection/det_keypoint_unite/python
+
+# 下载PP-TinyPose模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
+tar -xvf PP_TinyPose_256x192_infer.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
+tar -xvf PP_PicoDet_V2_S_Pedestrian_320x320_infer.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/000000018491.jpg
+# CPU推理
+python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device cpu
+# GPU推理
+python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device gpu --use_trt True
+# 昆仑芯XPU推理
+python det_keypoint_unite_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --det_model_dir PP_PicoDet_V2_S_Pedestrian_320x320_infer --image 000000018491.jpg --device kunlunxin
+```
+
+运行完成可视化结果如下图所示
+
+

+
+
+## PPTinyPosePipeline Python接口
+
+```python
+fd.pipeline.PPTinyPose(det_model=None, pptinypose_model=None)
+```
+
+PPTinyPosePipeline模型加载和初始化,其中det_model是使用`fd.vision.detection.PicoDet`[参考Detection文档](../../../detection/paddledetection/python/)初始化的检测模型,pptinypose_model是使用`fd.vision.keypointdetection.PPTinyPose`[参考PP-TinyPose文档](../../tiny_pose/python/)初始化的检测模型
+
+**参数**
+
+> * **det_model**(str): 初始化后的检测模型
+> * **pptinypose_model**(str): 初始化后的PP-TinyPose模型
+
+### predict函数
+
+> ```python
+> PPTinyPosePipeline.predict(input_image)
+> ```
+>
+> 模型预测结口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.KeyPointDetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 后处理参数
+> > * **detection_model_score_threshold**(bool):
+输入PP-TinyPose模型前,Detectin模型过滤检测框的分数阈值
+
+## 其它文档
+
+- [Pipeline 模型介绍](..)
+- [Pipeline C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/tiny_pose/README.md b/examples/vision/keypointdetection/tiny_pose/README.md
index 2166a1c03..99ec0e365 100644
--- a/examples/vision/keypointdetection/tiny_pose/README.md
+++ b/examples/vision/keypointdetection/tiny_pose/README.md
@@ -1,37 +1,39 @@
-# PP-TinyPose 模型部署
+English | [简体中文](README_CN.md)
+# PP-TinyPose Model Deployment
-## 模型版本说明
+## Model Description
- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)
-目前FastDeploy支持如下模型的部署
+FastDeploy currently supports the deployment of the following models
-- [PP-TinyPose系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+- [PP-TinyPose models](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
-## 准备PP-TinyPose部署模型
+## Prepare PP-TinyPose Deployment Model
-PP-TinyPose模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
+To export the PP-TinyPose model, please refer to [Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
-**注意**:PP-TinyPose导出的模型包含`model.pdmodel`、`model.pdiparams`和`infer_cfg.yml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息。
+**Attention**: The exported PP-TinyPose model contains three files: `model.pdmodel`, `model.pdiparams` and `infer_cfg.yml`. FastDeploy reads the pre-processing information needed for inference from the yaml file.
-## 下载预训练模型
+## Download Pre-trained Model
-为了方便开发者的测试,下面提供了PP-TinyPose导出的部分模型,开发者可直接下载使用。
+For developers' testing, some of the exported PP-TinyPose models are provided below. Developers can download and use them directly.
-| 模型 | 参数文件大小 |输入Shape | AP(业务数据集) | AP(COCO Val) | FLOPS | 单人推理耗时 (FP32) | 单人推理耗时(FP16) |
+| Model | Parameter File Size | Input Shape | AP (Business Dataset) | AP (COCO Val) | FLOPS | Single-person Inference Time (FP32) | Single-person Inference Time (FP16) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | :----- | :----- |
| [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 5.3MB | 128x96 | 84.3% | 58.4% | 81.56 M | 4.57ms | 3.27ms |
| [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 5.3M | 256x96 | 91.0% | 68.3% | 326.24M | 14.07ms | 8.33ms |
-**说明**
-- 关键点检测模型使用`COCO train2017`和`AI Challenger trainset`作为训练集。使用`COCO person keypoints val2017`作为测试集。
-- 关键点检测模型的精度指标所依赖的检测框为ground truth标注得到。
-- 推理速度测试环境为 Qualcomm Snapdragon 865,采用arm8下4线程推理得到。
+**Note**
+- The keypoint detection model uses `COCO train2017` and `AI Challenger trainset` as the training sets and `COCO person keypoints val2017` as the test set.
+- The accuracy metrics of the keypoint detection model are based on detection boxes obtained from the ground truth annotations.
+- The inference speed test environment is Qualcomm Snapdragon 865 with 4-thread inference on arm8.
-更多信息请参考:[PP-TinyPose 官方文档](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
-## 详细部署文档
+For more information, refer to the [PP-TinyPose official documentation](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
-- [Python部署](python)
-- [C++部署](cpp)
+## Detailed Deployment Tutorials
+
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/keypointdetection/tiny_pose/README_CN.md b/examples/vision/keypointdetection/tiny_pose/README_CN.md
new file mode 100644
index 000000000..8a8c92ab7
--- /dev/null
+++ b/examples/vision/keypointdetection/tiny_pose/README_CN.md
@@ -0,0 +1,38 @@
+[English](README.md) | 简体中文
+# PP-TinyPose 模型部署
+
+## 模型版本说明
+
+- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)
+
+目前FastDeploy支持如下模型的部署
+
+- [PP-TinyPose系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+
+## 准备PP-TinyPose部署模型
+
+PP-TinyPose模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
+
+**注意**:PP-TinyPose导出的模型包含`model.pdmodel`、`model.pdiparams`和`infer_cfg.yml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息。
+
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了PP-TinyPose导出的部分模型,开发者可直接下载使用。
+
+| 模型 | 参数文件大小 |输入Shape | AP(业务数据集) | AP(COCO Val) | FLOPS | 单人推理耗时 (FP32) | 单人推理耗时(FP16) |
+|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- | :----- | :----- |
+| [PP-TinyPose-128x96](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_128x96_infer.tgz) | 5.3MB | 128x96 | 84.3% | 58.4% | 81.56 M | 4.57ms | 3.27ms |
+| [PP-TinyPose-256x192](https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz) | 5.3MB | 256x192 | 91.0% | 68.3% | 326.24M | 14.07ms | 8.33ms |
+
+**说明**
+- 关键点检测模型使用`COCO train2017`和`AI Challenger trainset`作为训练集。使用`COCO person keypoints val2017`作为测试集。
+- 关键点检测模型的精度指标所依赖的检测框为ground truth标注得到。
+- 推理速度测试环境为 Qualcomm Snapdragon 865,采用arm8下4线程推理得到。
+
+更多信息请参考:[PP-TinyPose 官方文档](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/configs/keypoint/tiny_pose/README.md)
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/keypointdetection/tiny_pose/cpp/README.md b/examples/vision/keypointdetection/tiny_pose/cpp/README.md
index 867e4251c..b6e12268b 100755
--- a/examples/vision/keypointdetection/tiny_pose/cpp/README.md
+++ b/examples/vision/keypointdetection/tiny_pose/cpp/README.md
@@ -1,52 +1,53 @@
-# PP-TinyPose C++部署示例
+English | [简体中文](README_CN.md)
+# PP-TinyPose C++ Deployment Example
-本目录下提供`pptinypose_infer.cc`快速完成PP-TinyPose在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图单人关键点检测`示例
->> **注意**: PP-Tinypose单模型目前只支持单图单人关键点检测,因此输入的图片应只包含一个人或者进行过裁剪的图像。多人关键点检测请参考[PP-TinyPose Pipeline](../../det_keypoint_unite/cpp/README.md)
+This directory provides the `single-person keypoint detection in a single image` example in which `pptinypose_infer.cc` fast finishes the deployment of PP-TinyPose on CPU/GPU and GPU accelerated by TensorRT.
+>> **Attention**: The PP-TinyPose single model currently only supports single-person keypoint detection in a single image. Therefore, the input image should contain only one person or be cropped in advance. For multi-person keypoint detection, refer to [PP-TinyPose Pipeline](../../det_keypoint_unite/cpp/README.md)
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载PP-TinyPose模型文件和测试图片
+# Download PP-TinyPose model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/hrnet_demo.jpg
-# CPU推理
+# CPU inference
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 0
-# GPU推理
+# GPU inference
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 2
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 3
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above command works for Linux or MacOS. For how to use the SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## PP-TinyPose C++接口
+## PP-TinyPose C++ Interface
-### PP-TinyPose类
+### PP-TinyPose Class
```c++
fastdeploy::vision::keypointdetection::PPTinyPose(
@@ -57,34 +58,34 @@ fastdeploy::vision::keypointdetection::PPTinyPose(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-PPTinyPose模型加载和初始化,其中model_file为导出的Paddle模型格式。
+PPTinyPose model loading and initialization, among which model_file is the exported Paddle model.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict function
> ```c++
> PPTinyPose::Predict(cv::Mat* im, KeyPointDetectionResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出关键点检测结果。
+> Model prediction interface. Input images and output keypoint detection results.
>
-> **参数**
+> **Parameter**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 关键点检测结果,包括关键点的坐标以及关键点对应的概率值, KeyPointDetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **im**: Input image. It must be in HWC, BGR format
+> > * **result**: Keypoint detection results, including coordinates and the corresponding probability value. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of KeyPointDetectionResult
-### 类成员属性
-#### 后处理参数
-> > * **use_dark**(bool): 是否使用DARK进行后处理[参考论文](https://arxiv.org/abs/1910.06278)
+### Class Member Property
+#### Post-processing Parameter
+> > * **use_dark**(bool): Whether to use DARK for post-processing. Refer to [Reference Paper](https://arxiv.org/abs/1910.06278)
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/tiny_pose/cpp/README_CN.md b/examples/vision/keypointdetection/tiny_pose/cpp/README_CN.md
new file mode 100644
index 000000000..388b8816d
--- /dev/null
+++ b/examples/vision/keypointdetection/tiny_pose/cpp/README_CN.md
@@ -0,0 +1,91 @@
+[English](README.md) | 简体中文
+# PP-TinyPose C++部署示例
+
+本目录下提供`pptinypose_infer.cc`快速完成PP-TinyPose在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图单人关键点检测`示例
+>> **注意**: PP-Tinypose单模型目前只支持单图单人关键点检测,因此输入的图片应只包含一个人或者进行过裁剪的图像。多人关键点检测请参考[PP-TinyPose Pipeline](../../det_keypoint_unite/cpp/README.md)
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+
+以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载PP-TinyPose模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
+tar -xvf PP_TinyPose_256x192_infer.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/hrnet_demo.jpg
+
+
+# CPU推理
+./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 0
+# GPU推理
+./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 1
+# GPU上TensorRT推理
+./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 2
+# 昆仑芯XPU推理
+./infer_tinypose_demo PP_TinyPose_256x192_infer hrnet_demo.jpg 3
+```
+
+运行完成可视化结果如下图所示
+
+

+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## PP-TinyPose C++接口
+
+### PP-TinyPose类
+
+```c++
+fastdeploy::vision::keypointdetection::PPTinyPose(
+ const string& model_file,
+ const string& params_file = "",
+ const string& config_file,
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+PPTinyPose模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> PPTinyPose::Predict(cv::Mat* im, KeyPointDetectionResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出关键点检测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 关键点检测结果,包括关键点的坐标以及关键点对应的概率值, KeyPointDetectionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 后处理参数
+> > * **use_dark**(bool): 是否使用DARK进行后处理[参考论文](https://arxiv.org/abs/1910.06278)
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/tiny_pose/python/README.md b/examples/vision/keypointdetection/tiny_pose/python/README.md
index 4ac811bca..22b5b89dd 100755
--- a/examples/vision/keypointdetection/tiny_pose/python/README.md
+++ b/examples/vision/keypointdetection/tiny_pose/python/README.md
@@ -1,81 +1,81 @@
-# PP-TinyPose Python部署示例
+English | [简体中文](README_CN.md)
+# PP-TinyPose Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`pptinypose_infer.py`快速完成PP-TinyPose在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图单人关键点检测`示例。执行如下脚本即可完成
-
->> **注意**: PP-Tinypose单模型目前只支持单图单人关键点检测,因此输入的图片应只包含一个人或者进行过裁剪的图像。多人关键点检测请参考[PP-TinyPose Pipeline](../../det_keypoint_unite/python/README.md)
+
+This directory provides the `single-person keypoint detection in a single image` example in which `pptinypose_infer.py` fast finishes the deployment of PP-TinyPose on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
+
+>> **Attention**: The PP-TinyPose single model currently only supports single-person keypoint detection in a single image. Therefore, the input image should contain only one person or be cropped in advance. For multi-person keypoint detection, refer to [PP-TinyPose Pipeline](../../det_keypoint_unite/python/README.md)
```bash
-#下载部署示例代码
+# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/keypointdetection/tiny_pose/python
-# 下载PP-TinyPose模型文件和测试图片
+# Download PP-TinyPose model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
tar -xvf PP_TinyPose_256x192_infer.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/hrnet_demo.jpg
-# CPU推理
+# CPU inference
python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device cpu
-# GPU推理
+# GPU inference
python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Attention: when TensorRT inference is run for the first time, the model is serialized, which takes a while. Please be patient.)
python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device gpu --use_trt True
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device kunlunxin
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## PP-TinyPose Python接口
+## PP-TinyPose Python Interface
```python
fd.vision.keypointdetection.PPTinyPose(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-PP-TinyPose模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
+PP-TinyPose model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### predict函数
+### predict function
> ```python
> PPTinyPose.predict(input_image)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input images and output detection results.
>
-> **参数**
+> **Parameter**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **input_image**(np.ndarray): Input data. It must be in HWC, BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.KeyPointDetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.KeyPointDetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
-### 类成员属性
-#### 后处理参数
-用户可按照自己的实际需求,修改下列后处理参数,从而影响最终的推理和部署效果
+### Class Member Property
+#### Post-processing Parameter
+Users can modify the following post-processing parameters to suit their needs, which affects the final inference and deployment results
-> > * **use_dark**(bool): 是否使用DARK进行后处理[参考论文](https://arxiv.org/abs/1910.06278)
+> > * **use_dark**(bool): Whether to use DARK for post-processing. Refer to [Reference Paper](https://arxiv.org/abs/1910.06278)
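+
+For reference, the minimal sketch below ties the interface above together. It assumes the `PP_TinyPose_256x192_infer` directory and `hrnet_demo.jpg` downloaded in the quick-start commands, uses OpenCV only to read the image, and treats `use_dark` as a plain attribute following the member-property list above:
+
+```python
+import cv2
+import fastdeploy as fd
+
+model_dir = "PP_TinyPose_256x192_infer"
+model = fd.vision.keypointdetection.PPTinyPose(
+    model_dir + "/model.pdmodel",
+    model_dir + "/model.pdiparams",
+    model_dir + "/infer_cfg.yml")
+
+# Optional: enable DARK post-processing
+model.use_dark = True
+
+im = cv2.imread("hrnet_demo.jpg")  # HWC, BGR, as required by predict
+result = model.predict(im)         # fastdeploy.vision.KeyPointDetectionResult
+print(result)
+```
+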
-## 其它文档
+## Other Documents
-- [PP-TinyPose 模型介绍](..)
-- [PP-TinyPose C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-TinyPose Model Description](..)
+- [PP-TinyPose C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/keypointdetection/tiny_pose/python/README_CN.md b/examples/vision/keypointdetection/tiny_pose/python/README_CN.md
new file mode 100644
index 000000000..1879a898b
--- /dev/null
+++ b/examples/vision/keypointdetection/tiny_pose/python/README_CN.md
@@ -0,0 +1,82 @@
+[English](README.md) | 简体中文
+# PP-TinyPose Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`pptinypose_infer.py`快速完成PP-TinyPose在CPU/GPU,以及GPU上通过TensorRT加速部署的`单图单人关键点检测`示例。执行如下脚本即可完成
+
+>> **注意**: PP-Tinypose单模型目前只支持单图单人关键点检测,因此输入的图片应只包含一个人或者进行过裁剪的图像。多人关键点检测请参考[PP-TinyPose Pipeline](../../det_keypoint_unite/python/README.md)
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/keypointdetection/tiny_pose/python
+
+# 下载PP-TinyPose模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_TinyPose_256x192_infer.tgz
+tar -xvf PP_TinyPose_256x192_infer.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/hrnet_demo.jpg
+
+# CPU推理
+python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device cpu
+# GPU推理
+python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device gpu --use_trt True
+# 昆仑芯XPU推理
+python pptinypose_infer.py --tinypose_model_dir PP_TinyPose_256x192_infer --image hrnet_demo.jpg --device kunlunxin
+```
+
+运行完成可视化结果如下图所示
+
+

+
+
+## PP-TinyPose Python接口
+
+```python
+fd.vision.keypointdetection.PPTinyPose(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+PP-TinyPose模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/EXPORT_MODEL.md)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> PPTinyPose.predict(input_image)
+> ```
+>
+> 模型预测结口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.KeyPointDetectionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 后处理参数
+用户可按照自己的实际需求,修改下列后处理参数,从而影响最终的推理和部署效果
+
+> > * **use_dark**(bool): 是否使用DARK进行后处理[参考论文](https://arxiv.org/abs/1910.06278)
+
+
+## 其它文档
+
+- [PP-TinyPose 模型介绍](..)
+- [PP-TinyPose C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/README.md b/examples/vision/matting/README.md
index f7582fcdf..d058a35c3 100755
--- a/examples/vision/matting/README.md
+++ b/examples/vision/matting/README.md
@@ -1,11 +1,12 @@
-# 抠图模型
+English | [简体中文](README_CN.md)
+# Matting Model
-FastDeploy目前支持如下抠图模型部署
+Now FastDeploy supports the deployment of the following matting models
-| 模型 | 说明 | 模型格式 | 版本 |
+| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
-| [ZHKKKe/MODNet](./modnet) | MODNet 系列模型 | ONNX | [CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) |
-| [PeterL1n/RobustVideoMatting](./rvm) | RobustVideoMatting 系列模型 | ONNX | [CommitID:81a1093](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093) |
-| [PaddleSeg/PP-Matting](./ppmatting) | PP-Matting 系列模型 | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
-| [PaddleSeg/PP-HumanMatting](./ppmatting) | PP-HumanMatting 系列模型 | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
-| [PaddleSeg/ModNet](./ppmatting) | ModNet 系列模型 | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [ZHKKKe/MODNet](./modnet) | MODNet models | ONNX | [CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) |
+| [PeterL1n/RobustVideoMatting](./rvm) | RobustVideoMatting models | ONNX | [CommitID:81a1093](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093) |
+| [PaddleSeg/PP-Matting](./ppmatting) | PP-Matting models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [PaddleSeg/PP-HumanMatting](./ppmatting) | PP-HumanMatting models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [PaddleSeg/ModNet](./ppmatting) | ModNet models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
diff --git a/examples/vision/matting/README_CN.md b/examples/vision/matting/README_CN.md
new file mode 100644
index 000000000..90abbcf73
--- /dev/null
+++ b/examples/vision/matting/README_CN.md
@@ -0,0 +1,12 @@
+[English](README.md) | 简体中文
+# 抠图模型
+
+FastDeploy目前支持如下抠图模型部署
+
+| 模型 | 说明 | 模型格式 | 版本 |
+| :--- | :--- | :------- | :--- |
+| [ZHKKKe/MODNet](./modnet) | MODNet 系列模型 | ONNX | [CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) |
+| [PeterL1n/RobustVideoMatting](./rvm) | RobustVideoMatting 系列模型 | ONNX | [CommitID:81a1093](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093) |
+| [PaddleSeg/PP-Matting](./ppmatting) | PP-Matting 系列模型 | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [PaddleSeg/PP-HumanMatting](./ppmatting) | PP-HumanMatting 系列模型 | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [PaddleSeg/ModNet](./ppmatting) | ModNet 系列模型 | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
diff --git a/examples/vision/matting/modnet/README.md b/examples/vision/matting/modnet/README.md
index 2edf99b7e..78219aff9 100644
--- a/examples/vision/matting/modnet/README.md
+++ b/examples/vision/matting/modnet/README.md
@@ -1,25 +1,26 @@
-# MODNet准备部署模型
+English | [简体中文](README_CN.md)
+# MODNet Ready-to-deploy Model
- [MODNet](https://github.com/ZHKKKe/MODNet/commit/28165a4)
- - (1)[官方库](https://github.com/ZHKKKe/MODNet/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
- - (2)开发者基于自己数据训练的MODNet模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。
+  - (1) The *.pt files provided by the [Official Library](https://github.com/ZHKKKe/MODNet/) can be deployed after the [Export ONNX Model](#export-onnx-model) step;
+  - (2) For MODNet models trained on your own data, follow the guidelines in [Export ONNX Model](#export-onnx-model) to complete the deployment.
-## 导出ONNX模型
+## Export ONNX Model
-访问[MODNet](https://github.com/ZHKKKe/MODNet)官方github库,按照指引下载安装,下载模型文件,利用 `onnx/export_onnx.py` 得到`onnx`格式文件。
+Visit the official [MODNet](https://github.com/ZHKKKe/MODNet) GitHub repository, download and install it following the instructions there, download the model file, and use `onnx/export_onnx.py` to obtain the model in `onnx` format.
-* 导出onnx格式文件
+* Export files in onnx format
```bash
python -m onnx.export_onnx \
--ckpt-path=pretrained/modnet_photographic_portrait_matting.ckpt \
--output-path=pretrained/modnet_photographic_portrait_matting.onnx
```
-## 下载预训练ONNX模型
+## Download Pre-trained ONNX Model
-为了方便开发者的测试,下面提供了MODNet导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
-| 模型 | 大小 | 精度 |
+For developers' testing, models exported by MODNet are provided below. Developers can download them directly. (The accuracy in the following table is derived from the source official repository)
+| Model | Size | Accuracy |
|:---------------------------------------------------------------- |:----- |:----- |
| [modnet_photographic](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting.onnx) | 25MB | - |
| [modnet_webcam](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting.onnx) | 25MB | -|
@@ -33,12 +34,12 @@
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
-## 版本说明
+## Release Note
-- 本版本文档和代码基于[MODNet CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) 编写
+- This tutorial and related code are written based on [MODNet CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4)
diff --git a/examples/vision/matting/modnet/README_CN.md b/examples/vision/matting/modnet/README_CN.md
new file mode 100644
index 000000000..d491c0411
--- /dev/null
+++ b/examples/vision/matting/modnet/README_CN.md
@@ -0,0 +1,45 @@
+[English](README.md) | 简体中文
+# MODNet准备部署模型
+
+- [MODNet](https://github.com/ZHKKKe/MODNet/commit/28165a4)
+ - (1)[官方库](https://github.com/ZHKKKe/MODNet/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;
+ - (2)开发者基于自己数据训练的MODNet模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。
+
+## 导出ONNX模型
+
+
+访问[MODNet](https://github.com/ZHKKKe/MODNet)官方github库,按照指引下载安装,下载模型文件,利用 `onnx/export_onnx.py` 得到`onnx`格式文件。
+
+* 导出onnx格式文件
+ ```bash
+ python -m onnx.export_onnx \
+ --ckpt-path=pretrained/modnet_photographic_portrait_matting.ckpt \
+ --output-path=pretrained/modnet_photographic_portrait_matting.onnx
+ ```
+
+## 下载预训练ONNX模型
+
+为了方便开发者的测试,下面提供了MODNet导出的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
+| 模型 | 大小 | 精度 |
+|:---------------------------------------------------------------- |:----- |:----- |
+| [modnet_photographic](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting.onnx) | 25MB | - |
+| [modnet_webcam](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting.onnx) | 25MB | -|
+| [modnet_photographic_256](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting-256x256.onnx) | 25MB | - |
+| [modnet_webcam_256](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting-256x256.onnx) | 25MB | - |
+| [modnet_photographic_512](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting-512x512.onnx) | 25MB | - |
+| [modnet_webcam_512](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting-512x512.onnx) | 25MB | - |
+| [modnet_photographic_1024](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting-1024x1024.onnx) | 25MB | - |
+| [modnet_webcam_1024](https://bj.bcebos.com/paddlehub/fastdeploy/modnet_webcam_portrait_matting-1024x1024.onnx) | 25MB | -|
+
+
+
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
+
+
+## 版本说明
+
+- 本版本文档和代码基于[MODNet CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) 编写
diff --git a/examples/vision/matting/modnet/cpp/README.md b/examples/vision/matting/modnet/cpp/README.md
index 25f37c107..37964404b 100644
--- a/examples/vision/matting/modnet/cpp/README.md
+++ b/examples/vision/matting/modnet/cpp/README.md
@@ -1,39 +1,40 @@
-# MODNet C++部署示例
+English | [简体中文](README_CN.md)
+# MODNet C++ Deployment Example
-本目录下提供`infer.cc`快速完成MODNet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides examples that `infer.cc` fast finishes the deployment of MODNet on CPU/GPU and GPU accelerated by TensorRT.
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-#下载官方转换好的MODNet模型文件和测试图片
+# Download the official converted MODNet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-# CPU推理
+# CPU inference
./infer_demo modnet_photographic_portrait_matting.onnx matting_input.jpg matting_bgr.jpg 0
-# GPU推理
+# GPU inference
./infer_demo modnet_photographic_portrait_matting.onnx matting_input.jpg matting_bgr.jpg 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo modnet_photographic_portrait_matting.onnx matting_input.jpg matting_bgr.jpg 2
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows

@@ -42,12 +43,12 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above command works for Linux or MacOS. For how to use the SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## MODNet C++接口
+## MODNet C++ Interface
-### MODNet类
+### MODNet Class
```c++
fastdeploy::vision::matting::MODNet(
@@ -57,16 +58,16 @@ fastdeploy::vision::matting::MODNet(
const ModelFormat& model_format = ModelFormat::ONNX)
```
-MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式。
+MODNet model loading and initialization, among which model_file is the exported ONNX model.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. ONNX format by default
-#### Predict函数
+#### Predict Function
> ```c++
> MODNet::Predict(cv::Mat* im, MattingResult* result,
@@ -74,26 +75,26 @@ MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式。
> float nms_iou_threshold = 0.5)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input images and output detection results.
>
-> **参数**
+> **Parameter**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 检测结果,包括检测框,各个框的置信度, MattingResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
-> > * **conf_threshold**: 检测框置信度过滤阈值
-> > * **nms_iou_threshold**: NMS处理过程中iou阈值
+> > * **im**: Input image. It must be in HWC, BGR format
+> > * **result**: Detection results, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult
+> > * **conf_threshold**: Filtering threshold of detection box confidence
+> > * **nms_iou_threshold**: IoU threshold used during NMS
-### 类成员变量
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Variable
+#### Pre-processing Parameter
+Users can modify the following pre-processing parameters to suit their needs, which affects the final inference and deployment results
-> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[256, 256]
-> > * **alpha**(vector<float>): 预处理归一化的alpha值,计算公式为`x'=x*alpha+beta`,alpha默认为[1. / 127.5, 1.f / 127.5, 1. / 127.5]
-> > * **beta**(vector<float>): 预处理归一化的beta值,计算公式为`x'=x*alpha+beta`,beta默认为[-1.f, -1.f, -1.f]
-> > * **swap_rb**(bool): 预处理是否将BGR转换成RGB,默认true
+> > * **size**(vector<int>): This parameter changes the size of the resize used during preprocessing, containing two integer elements for [width, height] with default value [256, 256]
+> > * **alpha**(vector&lt;float&gt;): Normalization alpha used in pre-processing, applied as `x'=x*alpha+beta`. alpha defaults to [1. / 127.5, 1.f / 127.5, 1. / 127.5]
+> > * **beta**(vector&lt;float&gt;): Normalization beta used in pre-processing, applied as `x'=x*alpha+beta`. beta defaults to [-1.f, -1.f, -1.f]
+> > * **swap_rb**(bool): Whether to convert BGR to RGB in pre-processing. Default True
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/modnet/cpp/README_CN.md b/examples/vision/matting/modnet/cpp/README_CN.md
new file mode 100644
index 000000000..9073bd2ec
--- /dev/null
+++ b/examples/vision/matting/modnet/cpp/README_CN.md
@@ -0,0 +1,100 @@
+[English](README.md) | 简体中文
+# MODNet C++部署示例
+
+本目录下提供`infer.cc`快速完成MODNet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+#下载官方转换好的MODNet模型文件和测试图片
+
+wget https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+
+
+# CPU推理
+./infer_demo modnet_photographic_portrait_matting.onnx matting_input.jpg matting_bgr.jpg 0
+# GPU推理
+./infer_demo modnet_photographic_portrait_matting.onnx matting_input.jpg matting_bgr.jpg 1
+# GPU上TensorRT推理
+./infer_demo modnet_photographic_portrait_matting.onnx matting_input.jpg matting_bgr.jpg 2
+```
+
+运行完成可视化结果如下图所示
+
+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## MODNet C++接口
+
+### MODNet类
+
+```c++
+fastdeploy::vision::matting::MODNet(
+ const string& model_file,
+ const string& params_file = "",
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::ONNX)
+```
+
+MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
+
+#### Predict函数
+
+> ```c++
+> MODNet::Predict(cv::Mat* im, MattingResult* result,
+> float conf_threshold = 0.25,
+> float nms_iou_threshold = 0.5)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 检测结果,包括检测框,各个框的置信度, MattingResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **conf_threshold**: 检测框置信度过滤阈值
+> > * **nms_iou_threshold**: NMS处理过程中iou阈值
+
+### 类成员变量
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+
+> > * **size**(vector<int>): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[256, 256]
+> > * **alpha**(vector<float>): 预处理归一化的alpha值,计算公式为`x'=x*alpha+beta`,alpha默认为[1. / 127.5, 1.f / 127.5, 1. / 127.5]
+> > * **beta**(vector<float>): 预处理归一化的beta值,计算公式为`x'=x*alpha+beta`,beta默认为[-1.f, -1.f, -1.f]
+> > * **swap_rb**(bool): 预处理是否将BGR转换成RGB,默认true
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/modnet/python/README.md b/examples/vision/matting/modnet/python/README.md
index d84d95ac5..44ba8801e 100755
--- a/examples/vision/matting/modnet/python/README.md
+++ b/examples/vision/matting/modnet/python/README.md
@@ -1,31 +1,32 @@
-# MODNet Python部署示例
+English | [简体中文](README_CN.md)
+# MODNet Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成MODNet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides examples that `infer.py` fast finishes the deployment of MODNet on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
```bash
-#下载部署示例代码
+# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/modnet/python/
-#下载modnet模型文件和测试图片
+# Download modnet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-# CPU推理
+# CPU inference
python infer.py --model modnet_photographic_portrait_matting.onnx --image matting_input.jpg --bg matting_bgr.jpg --device cpu
-# GPU推理
+# GPU inference
python infer.py --model modnet_photographic_portrait_matting.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu
-# GPU上使用TensorRT推理
+# TensorRT inference on GPU
python infer.py --model modnet_photographic_portrait_matting.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows

@@ -34,52 +35,51 @@ python infer.py --model modnet_photographic_portrait_matting.onnx --image mattin
-## MODNet Python接口
+## MODNet Python Interface
```python
fastdeploy.vision.matting.MODNet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
-MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式
+MODNet model loading and initialization, among which model_file is the exported ONNX model.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为ONNX
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. ONNX format by default
-### predict函数
+### predict function
> ```python
> MODNet.predict(image_data)
> ```
>
-> 模型预测结口,输入图像直接输出抠图结果。
+> Model prediction interface. Input images and output matting results.
>
-> **参数**
+> **Parameter**
>
-> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **image_data**(np.ndarray): Input data. It must be in HWC, BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.MattingResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.MattingResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
-### 类成员属性
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Property
+#### Pre-processing Parameter
+Users can modify the following pre-processing parameters to suit their needs, which affects the final inference and deployment results
-
-> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[256, 256]
-> > * **alpha**(list[float]): 预处理归一化的alpha值,计算公式为`x'=x*alpha+beta`,alpha默认为[1. / 127.5, 1.f / 127.5, 1. / 127.5]
-> > * **beta**(list[float]): 预处理归一化的beta值,计算公式为`x'=x*alpha+beta`,beta默认为[-1.f, -1.f, -1.f]
-> > * **swap_rb**(bool): 预处理是否将BGR转换成RGB,默认True
+> > * **size**(list[int]): This parameter changes the size of the resize during preprocessing, containing two integer elements for [width, height] with default value [256, 256]
+> > * **alpha**(list[float]): Normalization alpha used in pre-processing, applied as `x'=x*alpha+beta`. alpha defaults to [1. / 127.5, 1.f / 127.5, 1. / 127.5]
+> > * **beta**(list[float]): Normalization beta used in pre-processing, applied as `x'=x*alpha+beta`. beta defaults to [-1.f, -1.f, -1.f]
+> > * **swap_rb**(bool): Whether to convert BGR to RGB in pre-processing. Default True
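+
+For reference, a minimal sketch combining the interface above is given below. It reuses the model file and test image downloaded in the quick-start commands, and treats `size` and `swap_rb` as plain attributes following the member-property list above:
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.matting.MODNet("modnet_photographic_portrait_matting.onnx")
+
+# Optional pre-processing tweaks: resize target and BGR->RGB conversion
+model.size = [256, 256]
+model.swap_rb = True
+
+im = cv2.imread("matting_input.jpg")  # HWC, BGR, as required by predict
+result = model.predict(im)            # fastdeploy.vision.MattingResult
+print(result)
+```
+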
-## 其它文档
+## Other Documents
-- [MODNet 模型介绍](..)
-- [MODNet C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [MODNet Model Description](..)
+- [MODNet C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/modnet/python/README_CN.md b/examples/vision/matting/modnet/python/README_CN.md
new file mode 100644
index 000000000..066750d25
--- /dev/null
+++ b/examples/vision/matting/modnet/python/README_CN.md
@@ -0,0 +1,86 @@
+[English](README.md) | 简体中文
+# MODNet Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成MODNet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/matting/modnet/python/
+
+#下载modnet模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/modnet_photographic_portrait_matting.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+
+# CPU推理
+python infer.py --model modnet_photographic_portrait_matting.onnx --image matting_input.jpg --bg matting_bgr.jpg --device cpu
+# GPU推理
+python infer.py --model modnet_photographic_portrait_matting.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu
+# GPU上使用TensorRT推理
+python infer.py --model modnet_photographic_portrait_matting.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
+```
+
+运行完成可视化结果如下图所示
+
+
+
+## MODNet Python接口
+
+```python
+fastdeploy.vision.matting.MODNet(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
+```
+
+MODNet模型加载和初始化,其中model_file为导出的ONNX模型格式
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为ONNX
+
+### predict函数
+
+> ```python
+> MODNet.predict(image_data)
+> ```
+>
+> 模型预测结口,输入图像直接输出抠图结果。
+>
+> **参数**
+>
+> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.MattingResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+
+> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[256, 256]
+> > * **alpha**(list[float]): 预处理归一化的alpha值,计算公式为`x'=x*alpha+beta`,alpha默认为[1. / 127.5, 1.f / 127.5, 1. / 127.5]
+> > * **beta**(list[float]): 预处理归一化的beta值,计算公式为`x'=x*alpha+beta`,beta默认为[-1.f, -1.f, -1.f]
+> > * **swap_rb**(bool): 预处理是否将BGR转换成RGB,默认True
+
+
+
+## 其它文档
+
+- [MODNet 模型介绍](..)
+- [MODNet C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/ppmatting/README.md b/examples/vision/matting/ppmatting/README.md
index de5391f03..a2cbdc346 100644
--- a/examples/vision/matting/ppmatting/README.md
+++ b/examples/vision/matting/ppmatting/README.md
@@ -1,31 +1,31 @@
-# PP-Matting模型部署
+English | [简体中文](README_CN.md)
+# PP-Matting Model Deployment
-## 模型版本说明
+## Model Version
- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+Now FastDeploy supports the deployment of the following models
-- [PP-Matting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-- [PP-HumanMatting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-- [ModNet系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [PP-Matting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [PP-HumanMatting models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [ModNet models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-## 导出部署模型
+## Export Deployment Model
-在部署前,需要先将PP-Matting导出成部署模型,导出步骤参考文档[导出模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)(Tips:导出PP-Matting系列模型和PP-HumanMatting系列模型需要设置导出脚本的`--input_shape`参数)
+Before deployment, PP-Matting needs to be exported into the deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information. (Tips: You need to set the `--input_shape` parameter of the export script when exporting PP-Matting and PP-HumanMatting models)
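+
+As a rough sketch of what the exported directory is used for, the snippet below loads it with the Python API (the file names inside the archive, `model.pdmodel`, `model.pdiparams` and `deploy.yaml`, are assumptions based on a typical PaddleSeg export; see the Python deployment doc linked below for the full interface):
+
+```python
+import fastdeploy as fd
+
+# Placeholder: one of the exported or pre-trained directories listed below
+model_dir = "PP-Matting-512"
+model = fd.vision.matting.PPMatting(
+    model_dir + "/model.pdmodel",
+    model_dir + "/model.pdiparams",
+    model_dir + "/deploy.yaml")
+```
+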
-## 下载预训练模型
+## Download Pre-trained Models
-为了方便开发者的测试,下面提供了PP-Matting导出的各系列模型,开发者可直接下载使用。
+For developers' testing, models exported by PP-Matting are provided below. Developers can download and use them directly.
-其中精度指标来源于PP-Matting中对各模型的介绍(未提供精度数据),详情各参考PP-Matting中的说明。
+The accuracy metrics come from the model descriptions in PP-Matting (no accuracy data is provided there). Refer to the PP-Matting documentation for details.
-
-| 模型 | 参数大小 | 精度 | 备注 |
+| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
@@ -36,7 +36,7 @@
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/matting/ppmatting/README_CN.md b/examples/vision/matting/ppmatting/README_CN.md
new file mode 100644
index 000000000..a1c9801aa
--- /dev/null
+++ b/examples/vision/matting/ppmatting/README_CN.md
@@ -0,0 +1,43 @@
+[English](README.md) | 简体中文
+# PP-Matting模型部署
+
+## 模型版本说明
+
+- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [PP-Matting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [PP-HumanMatting系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [ModNet系列模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+
+
+## 导出部署模型
+
+在部署前,需要先将PP-Matting导出成部署模型,导出步骤参考文档[导出模型](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)(Tips:导出PP-Matting系列模型和PP-HumanMatting系列模型需要设置导出脚本的`--input_shape`参数)
+
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了PP-Matting导出的各系列模型,开发者可直接下载使用。
+
+其中精度指标来源于PP-Matting中对各模型的介绍(未提供精度数据),详情各参考PP-Matting中的说明。
+
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:---------------------------------------------------------------- |:----- |:----- | :------ |
+| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
+| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
+| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - |
+| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - |
+| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
+| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |
+
+
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/matting/ppmatting/cpp/README.md b/examples/vision/matting/ppmatting/cpp/README.md
index 21fd779be..f8a4088eb 100755
--- a/examples/vision/matting/ppmatting/cpp/README.md
+++ b/examples/vision/matting/ppmatting/cpp/README.md
@@ -1,41 +1,41 @@
-# PP-Matting C++部署示例
+English | [简体中文](README_CN.md)
+# PP-Matting C++ Deployment Example
-本目录下提供`infer.cc`快速完成PP-Matting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides examples that `infer.cc` fast finishes the deployment of PP-Matting on CPU/GPU and GPU accelerated by TensorRT.
+Before deployment, two steps require confirmation
-在部署前,需确认以下两个步骤
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-
-以Linux上 PP-Matting 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the PP-Matting inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载PP-Matting模型文件和测试图片
+# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-# CPU推理
+# CPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
-# GPU推理
+# GPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above command works for Linux or MacOS. For how to use the SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## PP-Matting C++接口
+## PP-Matting C++ Interface
-### PPMatting类
+### PPMatting Class
```c++
fastdeploy::vision::matting::PPMatting(
@@ -59,35 +59,35 @@ fastdeploy::vision::matting::PPMatting(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-PP-Matting模型加载和初始化,其中model_file为导出的Paddle模型格式。
+PP-Matting model loading and initialization, among which model_file is the exported Paddle model.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict Function
> ```c++
> PPMatting::Predict(cv::Mat* im, MattingResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input images and output detection results.
>
-> **参数**
+> **Parameter**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 分割结果,包括分割预测的标签以及标签对应的概率值, MattingResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **im**: Input image. It must be in HWC, BGR format
+> > * **result**: The segmentation result, including the predicted label and the corresponding probability of the label. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult
-### 类成员属性
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Property
+#### Pre-processing Parameter
+Users can modify the following pre-processing parameters to suit their needs, which affects the final inference and deployment results
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/ppmatting/cpp/README_CN.md b/examples/vision/matting/ppmatting/cpp/README_CN.md
new file mode 100644
index 000000000..38e2e592a
--- /dev/null
+++ b/examples/vision/matting/ppmatting/cpp/README_CN.md
@@ -0,0 +1,94 @@
+[English](README.md) | 简体中文
+# PP-Matting C++部署示例
+
+本目录下提供`infer.cc`快速完成PP-Matting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上 PP-Matting 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载PP-Matting模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
+tar -xvf PP-Matting-512.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+
+
+# CPU推理
+./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
+# GPU推理
+./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
+# GPU上TensorRT推理
+./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2
+# 昆仑芯XPU推理
+./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 3
+```
+
+运行完成可视化结果如下图所示
+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## PP-Matting C++接口
+
+### PPMatting类
+
+```c++
+fastdeploy::vision::matting::PPMatting(
+ const string& model_file,
+ const string& params_file = "",
+ const string& config_file,
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+PP-Matting模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> PPMatting::Predict(cv::Mat* im, MattingResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 分割结果,包括分割预测的标签以及标签对应的概率值, MattingResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/ppmatting/python/README.md b/examples/vision/matting/ppmatting/python/README.md
index c0791d5d6..ed91d1db2 100755
--- a/examples/vision/matting/ppmatting/python/README.md
+++ b/examples/vision/matting/ppmatting/python/README.md
@@ -1,80 +1,81 @@
-# PP-Matting Python部署示例
+English | [简体中文](README_CN.md)
+# PP-Matting Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-
-本目录下提供`infer.py`快速完成PP-Matting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+This directory provides an example `infer.py` to quickly finish the deployment of PP-Matting on CPU/GPU, and on GPU accelerated by TensorRT. Run the following script to complete it
```bash
-#下载部署示例代码
+# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/ppmatting/python
-# 下载PP-Matting模型文件和测试图片
+# Download PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
-# CPU推理
+# CPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
-# GPU推理
+# GPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Note: when running TensorRT inference for the first time, the model is serialized, which takes some time. Please be patient.)
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## PP-Matting Python接口
+## PP-Matting Python Interface
```python
fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-PP-Matting模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+PP-Matting model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### predict函数
+### predict function
> ```python
> PPMatting.predict(input_image)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input an image and output the matting result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **input_image**(np.ndarray): Input data; must be in HWC layout, BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.MattingResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.MattingResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
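+
+As a minimal usage sketch of the interface above (not part of the official scripts; the file names inside the PP-Matting-512 archive are assumed here to be model.pdmodel, model.pdiparams and deploy.yaml), loading the model and running `predict` on a single image looks roughly like this:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Assumed paths: the PP-Matting-512 archive downloaded by the commands above.
+model = fd.vision.matting.PPMatting(
+    "PP-Matting-512/model.pdmodel",
+    "PP-Matting-512/model.pdiparams",
+    "PP-Matting-512/deploy.yaml")
+
+# cv2.imread returns an HWC, BGR image, which is what predict expects.
+im = cv2.imread("matting_input.jpg")
+result = model.predict(im)  # fastdeploy.vision.MattingResult
+print(result)
+```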
-### 类成员属性
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Variables
+
+#### Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their actual needs, which will affect the final inference and deployment results
-## 其它文档
+## Other Documents
-- [PP-Matting 模型介绍](..)
-- [PP-Matting C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-Matting Model Description](..)
+- [PP-Matting C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/ppmatting/python/README_CN.md b/examples/vision/matting/ppmatting/python/README_CN.md
new file mode 100644
index 000000000..cdfd7d378
--- /dev/null
+++ b/examples/vision/matting/ppmatting/python/README_CN.md
@@ -0,0 +1,81 @@
+[English](README.md) | 简体中文
+# PP-Matting Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成PP-Matting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/matting/ppmatting/python
+
+# 下载PP-Matting模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
+tar -xvf PP-Matting-512.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+# CPU推理
+python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device cpu
+# GPU推理
+python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
+# 昆仑芯XPU推理
+python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bgr.jpg --device kunlunxin
+```
+
+运行完成可视化结果如下图所示
+
+## PP-Matting Python接口
+
+```python
+fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+PP-Matting模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> PPMatting.predict(input_image)
+> ```
+>
+> 模型预测结口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.MattingResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+
+
+## 其它文档
+
+- [PP-Matting 模型介绍](..)
+- [PP-Matting C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/rvm/README.md b/examples/vision/matting/rvm/README.md
index 56d371c5c..320c52bea 100755
--- a/examples/vision/matting/rvm/README.md
+++ b/examples/vision/matting/rvm/README.md
@@ -1,30 +1,31 @@
-# RobustVideoMatting 模型部署
+English | [简体中文](README_CN.md)
+# RobustVideoMatting Model Deployment
-## 模型版本说明
+## Model Version Description
- [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+FastDeploy currently supports the deployment of the following models
-- [RobustVideoMatting 模型](https://github.com/PeterL1n/RobustVideoMatting)
+- [RobustVideoMatting model](https://github.com/PeterL1n/RobustVideoMatting)
-## 下载预训练模型
+## Download Pre-trained Models
-为了方便开发者的测试,下面提供了RobustVideoMatting导出的各系列模型,开发者可直接下载使用。
+For developers' testing, models exported by RobustVideoMatting are provided below. Developers can download and use them directly.
-| 模型 | 参数大小 | 精度 | 备注 |
+| Model | Parameter Size | Accuracy | Note |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
-| [rvm_mobilenetv3_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx) | 15MB ||exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
-| [rvm_resnet50_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_fp32.onnx) | 103MB | |exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
-| [rvm_mobilenetv3_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx) | 15MB | |exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
-| [rvm_resnet50_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_trt.onnx) | 103MB | | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_mobilenetv3_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx) | 15MB | - | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093), GPL-3.0 License |
+| [rvm_resnet50_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_fp32.onnx) | 103MB | - | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093), GPL-3.0 License |
+| [rvm_mobilenetv3_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx) | 15MB | - | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093), GPL-3.0 License |
+| [rvm_resnet50_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_trt.onnx) | 103MB | - | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093), GPL-3.0 License |
**Note**:
-- 如果要使用 TensorRT 进行推理,需要下载后缀为 trt 的 onnx 模型文件
+- To use TensorRT for inference, you need to download the onnx model files with the trt suffix.
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/matting/rvm/README_CN.md b/examples/vision/matting/rvm/README_CN.md
new file mode 100644
index 000000000..bf2b57a1e
--- /dev/null
+++ b/examples/vision/matting/rvm/README_CN.md
@@ -0,0 +1,31 @@
+[English](README.md) | 简体中文
+# RobustVideoMatting 模型部署
+
+## 模型版本说明
+
+- [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [RobustVideoMatting 模型](https://github.com/PeterL1n/RobustVideoMatting)
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了RobustVideoMatting导出的各系列模型,开发者可直接下载使用。
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:---------------------------------------------------------------- |:----- |:----- | :------ |
+| [rvm_mobilenetv3_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx) | 15MB ||exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_resnet50_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_fp32.onnx) | 103MB | |exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_mobilenetv3_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx) | 15MB | |exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_resnet50_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_trt.onnx) | 103MB | | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+
+**Note**:
+- 如果要使用 TensorRT 进行推理,需要下载后缀为 trt 的 onnx 模型文件
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/matting/rvm/cpp/README.md b/examples/vision/matting/rvm/cpp/README.md
index d8e00400c..3b9c842cd 100755
--- a/examples/vision/matting/rvm/cpp/README.md
+++ b/examples/vision/matting/rvm/cpp/README.md
@@ -1,41 +1,42 @@
-# RobustVideoMatting C++部署示例
+English | [简体中文](README_CN.md)
+# RobustVideoMatting C++ Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上 RobustVideoMatting 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the RobustVideoMatting inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
-本目录下提供`infer.cc`快速完成RobustVideoMatting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides an example `infer.cc` to quickly finish the deployment of RobustVideoMatting on CPU/GPU, and on GPU accelerated by TensorRT. Run the following commands to complete it
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载RobustVideoMatting模型文件和测试图片以及视频
-## 原版ONNX模型
+# Download RobustVideoMatting model files, test images and videos
+## Original ONNX model
wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx
-## 为加载TRT特殊处理ONNX模型
+## ONNX model specially processed for TRT loading
wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/video.mp4
-# CPU推理
+# CPU inference
./infer_demo rvm_mobilenetv3_fp32.onnx matting_input.jpg matting_bgr.jpg 0
-# GPU推理
+# GPU inference
./infer_demo rvm_mobilenetv3_fp32.onnx matting_input.jpg matting_bgr.jpg 1
-# TRT推理
+# TRT inference
./infer_demo rvm_mobilenetv3_trt.onnx matting_input.jpg matting_bgr.jpg 2
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## RobustVideoMatting C++接口
+## RobustVideoMatting C++ Interface
```c++
fastdeploy::vision::matting::RobustVideoMatting(
@@ -56,32 +57,32 @@ fastdeploy::vision::matting::RobustVideoMatting(
const ModelFormat& model_format = ModelFormat::ONNX)
```
-RobustVideoMatting模型加载和初始化,其中model_file为导出的ONNX模型格式。
+RobustVideoMatting model loading and initialization, among which model_file is the exported ONNX model format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. ONNX format by default
-#### Predict函数
+#### Predict Function
> ```c++
> RobustVideoMatting::Predict(cv::Mat* im, MattingResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出抠图结果。
+> Model prediction interface. Input images and output matting results.
>
-> **参数**
+> **Parameter**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 抠图结果, MattingResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **im**: Input image; must be in HWC layout, BGR format
+> > * **result**: Matting result. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of MattingResult
-## 其它文档
+## Other Documents
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/rvm/cpp/README_CN.md b/examples/vision/matting/rvm/cpp/README_CN.md
new file mode 100644
index 000000000..24e5c71b8
--- /dev/null
+++ b/examples/vision/matting/rvm/cpp/README_CN.md
@@ -0,0 +1,88 @@
+[English](README.md) | 简体中文
+# RobustVideoMatting C++部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上 RobustVideoMatting 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+本目录下提供`infer.cc`快速完成RobustVideoMatting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载RobustVideoMatting模型文件和测试图片以及视频
+## 原版ONNX模型
+wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx
+## 为加载TRT特殊处理ONNX模型
+wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/video.mp4
+
+# CPU推理
+./infer_demo rvm_mobilenetv3_fp32.onnx matting_input.jpg matting_bgr.jpg 0
+# GPU推理
+./infer_demo rvm_mobilenetv3_fp32.onnx matting_input.jpg matting_bgr.jpg 1
+# TRT推理
+./infer_demo rvm_mobilenetv3_trt.onnx matting_input.jpg matting_bgr.jpg 2
+```
+
+运行完成可视化结果如下图所示
+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## RobustVideoMatting C++接口
+
+```c++
+fastdeploy::vision::matting::RobustVideoMatting(
+ const string& model_file,
+ const string& params_file = "",
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::ONNX)
+```
+
+RobustVideoMatting模型加载和初始化,其中model_file为导出的ONNX模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为ONNX格式
+
+#### Predict函数
+
+> ```c++
+> RobustVideoMatting::Predict(cv::Mat* im, MattingResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出抠图结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 抠图结果, MattingResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+
+## 其它文档
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/rvm/export.md b/examples/vision/matting/rvm/export.md
index 85167754d..eeef9d9e4 100755
--- a/examples/vision/matting/rvm/export.md
+++ b/examples/vision/matting/rvm/export.md
@@ -1,20 +1,21 @@
-# RobustVideoMatting 支持TRT的动态ONNX导出
+English | [简体中文](README_CN.md)
+# Dynamic ONNX Export of RobustVideoMatting with TRT Support
-## 环境依赖
+## Environment Dependencies
- python >= 3.5
- pytorch 1.12.0
- onnx 1.10.0
- onnxsim 0.4.8
-## 步骤一:拉取 RobustVideoMatting onnx 分支代码
+## Step 1: Pull the RobustVideoMatting onnx branch code
```shell
git clone -b onnx https://github.com/PeterL1n/RobustVideoMatting.git
cd RobustVideoMatting
```
-## 步骤二:去掉 downsample_ratio 动态输入
+## Step 2: Remove downsample_ratio dynamic input
在```model/model.py```中,将 ```downsample_ratio``` 输入去掉,如下图所示
@@ -49,9 +50,9 @@ def forward(self, src, r1, r2, r3, r4,
return [seg, *rec]
```
-## 步骤三:修改导出 ONNX 脚本
+## Step 3: Modify the ONNX export script
-修改```export_onnx.py```脚本,去掉```downsample_ratio```输入
+Modify ```export_onnx.py``` script to remove the ```downsample_ratio``` input
```python
def export(self):
@@ -89,7 +90,7 @@ def export(self):
})
```
-运行下列命令
+Run the following commands
```shell
python export_onnx.py \
@@ -102,15 +103,15 @@ python export_onnx.py \
```
**Note**:
-- trt关于多输入ONNX模型的dynamic shape,如果x0和x1的shape不同,不能都以height、width去表示,要以height0、height1去区分,要不然build engine阶段会出错
+- For the dynamic shapes of a multi-input ONNX model in TRT, if the shapes of x0 and x1 differ, they cannot both be described with height and width; use distinct names such as height0 and height1, otherwise errors occur at the engine-build stage. This is illustrated in the sketch below.
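+
+A hedged illustration of this naming rule (a toy two-input model, not RVM's actual `export_onnx.py`): each input gets its own dynamic-axis names so that TRT can build the engine without a shape conflict.
+
+```python
+import torch
+import torch.nn as nn
+
+# Toy two-input model used only to illustrate the axis-naming rule.
+class TwoInput(nn.Module):
+    def forward(self, x0, x1):
+        return x0.mean() + x1.mean()
+
+x0 = torch.randn(1, 3, 192, 192)
+x1 = torch.randn(1, 16, 96, 96)  # different spatial shape from x0
+
+torch.onnx.export(
+    TwoInput(), (x0, x1), "two_input.onnx",
+    input_names=["x0", "x1"],
+    output_names=["y"],
+    # Distinct names per input (height0/width0 vs height1/width1);
+    # reusing "height"/"width" for both inputs breaks the TRT engine build.
+    dynamic_axes={
+        "x0": {2: "height0", 3: "width0"},
+        "x1": {2: "height1", 3: "width1"},
+    },
+    opset_version=12)
+```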
-## 步骤四:使用onnxsim简化
+## Step 4: Simplify with onnxsim
-安装 onnxsim,并简化步骤三导出的 ONNX 模型
+Install onnxsim and simplify the ONNX model exported in step 3
```shell
pip install onnxsim
onnxsim rvm_mobilenetv3.onnx rvm_mobilenetv3_trt.onnx
```
-```rvm_mobilenetv3_trt.onnx```即为可运行 TRT 后端的动态 shape 的 ONNX 模型
+```rvm_mobilenetv3_trt.onnx``` is the dynamic-shape ONNX model that can run on the TRT backend
diff --git a/examples/vision/matting/rvm/export_cn.md b/examples/vision/matting/rvm/export_cn.md
new file mode 100644
index 000000000..bf2b57a1e
--- /dev/null
+++ b/examples/vision/matting/rvm/export_cn.md
@@ -0,0 +1,31 @@
+[English](README.md) | 简体中文
+# RobustVideoMatting 模型部署
+
+## 模型版本说明
+
+- [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [RobustVideoMatting 模型](https://github.com/PeterL1n/RobustVideoMatting)
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了RobustVideoMatting导出的各系列模型,开发者可直接下载使用。
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:---------------------------------------------------------------- |:----- |:----- | :------ |
+| [rvm_mobilenetv3_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx) | 15MB ||exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_resnet50_fp32.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_fp32.onnx) | 103MB | |exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_mobilenetv3_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx) | 15MB | |exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+| [rvm_resnet50_trt.onnx](https://bj.bcebos.com/paddlehub/fastdeploy/rvm_resnet50_trt.onnx) | 103MB | | exported from [RobustVideoMatting](https://github.com/PeterL1n/RobustVideoMatting/commit/81a1093),GPL-3.0 License |
+
+**Note**:
+- 如果要使用 TensorRT 进行推理,需要下载后缀为 trt 的 onnx 模型文件
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/matting/rvm/python/README.md b/examples/vision/matting/rvm/python/README.md
index 5b3676c08..4ab70abc9 100755
--- a/examples/vision/matting/rvm/python/README.md
+++ b/examples/vision/matting/rvm/python/README.md
@@ -1,44 +1,45 @@
-# RobustVideoMatting Python部署示例
+English | [简体中文](README_CN.md)
+# RobustVideoMatting Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成RobustVideoMatting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides an example `infer.py` to quickly finish the deployment of RobustVideoMatting on CPU/GPU, and on GPU accelerated by TensorRT. Run the following script to complete it
```bash
-#下载部署示例代码
+# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/rvm/python
-# 下载RobustVideoMatting模型文件和测试图片以及视频
-## 原版ONNX模型
+# Download RobustVideoMatting model files, test images and videos
+## Original ONNX Model
wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx
-## 为加载TRT特殊处理ONNX模型
+## ONNX model specially processed for TRT loading
wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/video.mp4
-# CPU推理
-## 图片
+# CPU inference
+## image
python infer.py --model rvm_mobilenetv3_fp32.onnx --image matting_input.jpg --bg matting_bgr.jpg --device cpu
-## 视频
+## video
python infer.py --model rvm_mobilenetv3_fp32.onnx --video video.mp4 --bg matting_bgr.jpg --device cpu
-# GPU推理
-## 图片
+# GPU inference
+## image
python infer.py --model rvm_mobilenetv3_fp32.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu
-## 视频
+## video
python infer.py --model rvm_mobilenetv3_fp32.onnx --video video.mp4 --bg matting_bgr.jpg --device gpu
-# TRT推理
-## 图片
+# TRT inference
+## image
python infer.py --model rvm_mobilenetv3_trt.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
-## 视频
+## video
python infer.py --model rvm_mobilenetv3_trt.onnx --video video.mp4 --bg matting_bgr.jpg --device gpu --use_trt True
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## RobustVideoMatting Python接口
+## RobustVideoMatting Python Interface
```python
fd.vision.matting.RobustVideoMatting(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
-RobustVideoMatting模型加载和初始化,其中model_file为导出的ONNX模型格式
+RobustVideoMatting model loading and initialization, among which model_file is the exported ONNX model format
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为ONNX
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. ONNX format by default
-### predict函数
+### predict function
> ```python
> RobustVideoMatting.predict(input_image)
> ```
>
-> 模型预测结口,输入图像直接输出抠图结果。
+> Model prediction interface. Input images and output matting results.
>
-> **参数**
+> **Parameter**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **input_image**(np.ndarray): Input data; must be in HWC layout, BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.MattingResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.MattingResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
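+
+As a quick, assumption-based sketch (not part of the repo's scripts), the interface above can be used on the files downloaded by the commands earlier roughly as follows:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# For an ONNX model, only the model file path is required.
+model = fd.vision.matting.RobustVideoMatting("rvm_mobilenetv3_fp32.onnx")
+
+# cv2.imread returns an HWC, BGR image, which is what predict expects.
+im = cv2.imread("matting_input.jpg")
+result = model.predict(im)  # fastdeploy.vision.MattingResult
+print(result)
+```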
-## 其它文档
+## Other Documents
-- [RobustVideoMatting 模型介绍](..)
-- [RobustVideoMatting C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [RobustVideoMatting Model Description](..)
+- [RobustVideoMatting C++ Deployment](../cpp)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/matting/rvm/python/README_CN.md b/examples/vision/matting/rvm/python/README_CN.md
new file mode 100644
index 000000000..d70ec5250
--- /dev/null
+++ b/examples/vision/matting/rvm/python/README_CN.md
@@ -0,0 +1,89 @@
+[English](README.md) | 简体中文
+# RobustVideoMatting Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成RobustVideoMatting在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/matting/rvm/python
+
+# 下载RobustVideoMatting模型文件和测试图片以及视频
+## 原版ONNX模型
+wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_fp32.onnx
+## 为加载TRT特殊处理ONNX模型
+wget https://bj.bcebos.com/paddlehub/fastdeploy/rvm_mobilenetv3_trt.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
+wget https://bj.bcebos.com/paddlehub/fastdeploy/video.mp4
+
+# CPU推理
+## 图片
+python infer.py --model rvm_mobilenetv3_fp32.onnx --image matting_input.jpg --bg matting_bgr.jpg --device cpu
+## 视频
+python infer.py --model rvm_mobilenetv3_fp32.onnx --video video.mp4 --bg matting_bgr.jpg --device cpu
+# GPU推理
+## 图片
+python infer.py --model rvm_mobilenetv3_fp32.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu
+## 视频
+python infer.py --model rvm_mobilenetv3_fp32.onnx --video video.mp4 --bg matting_bgr.jpg --device gpu
+# TRT推理
+## 图片
+python infer.py --model rvm_mobilenetv3_trt.onnx --image matting_input.jpg --bg matting_bgr.jpg --device gpu --use_trt True
+## 视频
+python infer.py --model rvm_mobilenetv3_trt.onnx --video video.mp4 --bg matting_bgr.jpg --device gpu --use_trt True
+```
+
+运行完成可视化结果如下图所示
+
+
+## RobustVideoMatting Python接口
+
+```python
+fd.vision.matting.RobustVideoMatting(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
+```
+
+RobustVideoMatting模型加载和初始化,其中model_file为导出的ONNX模型格式
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为ONNX
+
+### predict函数
+
+> ```python
+> RobustVideoMatting.predict(input_image)
+> ```
+>
+> 模型预测结口,输入图像直接输出抠图结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.MattingResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+
+## 其它文档
+
+- [RobustVideoMatting 模型介绍](..)
+- [RobustVideoMatting C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv2/android/README.md b/examples/vision/ocr/PP-OCRv2/android/README.md
index 4181a93e1..52edf3a81 100644
--- a/examples/vision/ocr/PP-OCRv2/android/README.md
+++ b/examples/vision/ocr/PP-OCRv2/android/README.md
@@ -1,94 +1,95 @@
-# OCR文字识别 Android Demo 使用文档
+English | [简体中文](README_CN.md)
+# OCR Text Recognition Android Demo Tutorial
-在 Android 上实现实时的OCR文字识别功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
+This demo performs real-time OCR text recognition on Android. It is easy to use and open: for example, you can run your own trained model in the demo.
-## 环境准备
+## Prepare the Environment
-1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
-2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+1. Install Android Studio in your local environment. Refer to [Android Studio Official Website](https://developer.android.com/studio) for detailed tutorial.
+2. Prepare an Android phone and enable USB debugging. To enable it: `Settings -> Find developer options -> Turn on developer options and USB debugging`
-## 部署步骤
+## Deployment Steps
-1. OCR文字识别 Demo 位于 `fastdeploy/examples/vision/ocr/PP-OCRv2/android` 目录
-2. 用 Android Studio 打开 PP-OCRv2/android 工程
-3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+1. The OCR text recognition Demo is located in the `fastdeploy/examples/vision/ocr/PP-OCRv2/android` directory
+2. Open PP-OCRv2/android project with Android Studio
+3. Connect the phone to the computer, turn on USB debug mode and file transfer mode, and connect your phone to Android Studio (allow the phone to install software from USB)
-> **注意:**
->> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
+> **Attention:**
+>> If you encounter an NDK configuration error while importing, compiling, or running the project, open ` File > Project Structure > SDK Location` and set `Android SDK location` to the path of the SDK configured on your machine.
-4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
- 成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
+4. Click the Run button to automatically compile the APP and install it to the phone. (The process will automatically download the pre-compiled FastDeploy Android library and model files. Internet is required).
+The result is as follows. Figure 1: the APP installed on the phone; Figure 2: the effect after opening the APP, which automatically recognizes and marks the objects in the image; Figure 3: the APP settings. Click the settings icon in the upper right corner to try different options.
-| APP 图标 | APP 效果 | APP设置项
+| APP Icon | APP Effect | APP Settings
| --- | --- | --- |
|  |  |  |
-### PP-OCRv2 Java API 说明
+### PP-OCRv2 Java API Description
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。 PP-OCR初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 ppocr_keys_v1.txt,每一行包含一个label
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
- 与其他模型不同的是,PP-OCRv2 包含 DBDetector、Classifier和Recognizer等基础模型,以及pipeline类型。
+- Model initialization API: there are two ways to initialize the model. One is to initialize directly through the constructor; the other is to call the init function at an appropriate point in the program. The PP-OCR initialization parameters are described as follows.
+ - modelFile: String. Model file path in paddle format, e.g. model.pdmodel
+ - paramFile: String. Parameter file path in paddle format, e.g. model.pdiparams
+ - labelFile: String. Optional parameter indicating the path of the label file used for visualization, e.g. ppocr_keys_v1.txt, with one label per line
+ - option: RuntimeOption. Optional parameter for model initialization. The default runtime options are used if it is not passed. Unlike other models, PP-OCRv2 contains base models such as DBDetector, Classifier and Recognizer, as well as the pipeline type.
+
```java
-// 构造函数: constructor w/o label file
+// Constructor: constructor w/o label file
public DBDetector(String modelFile, String paramsFile);
public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
public Classifier(String modelFile, String paramsFile);
public Classifier(String modelFile, String paramsFile, RuntimeOption option);
public Recognizer(String modelFile, String paramsFile, String labelPath);
public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
-public PPOCRv2(); // 空构造函数,之后可以调用init初始化
+public PPOCRv2(); // An empty constructor, which can be initialized by calling init
// Constructor w/o classifier
public PPOCRv2(DBDetector detModel, Recognizer recModel);
public PPOCRv2(DBDetector detModel, Classifier clsModel, Recognizer recModel);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model Prediction API: the prediction API includes an API for direct prediction and an API with visualization. Direct prediction only predicts the inference result, without saving the image or rendering the result to a Bitmap. Prediction with visualization predicts the result, visualizes it, saves the visualized image to the specified path, and renders the result on a Bitmap (currently Bitmap in ARGB8888 format is supported); the Bitmap can then be displayed in the camera view.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Direct prediction: No image saving and no result rendering to Bitmap
public OCRResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Prediction and visualization: Predict and visualize the results, save the visualized image to the specified path, and render the visualized results on Bitmap
public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
-public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // 只渲染 不保存图片
+public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // Render without saving images
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: Call release() API to release model resources. Return true for successful release and false for failure; call initialized() to determine whether the model was initialized successfully, with true indicating successful initialization and false indicating failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources
+public boolean initialized(); // Check if initialization was successful
```
-- RuntimeOption设置说明
+- RuntimeOption settings
```java
-public void enableLiteFp16(); // 开启fp16精度推理
-public void disableLiteFP16(); // 关闭fp16精度推理
-public void enableLiteInt8(); // 开启int8精度推理,针对量化模型
-public void disableLiteInt8(); // 关闭int8精度推理
-public void setCpuThreadNum(int threadNum); // 设置线程数
-public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
-public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+public void enableLiteFp16(); // Enable fp16 precision inference
+public void disableLiteFP16(); // Disable fp16 precision inference
+public void enableLiteInt8(); // Enable int8 precision inference, for quantized models
+public void disableLiteInt8(); // Disable int8 precision inference
+public void setCpuThreadNum(int threadNum); // Set the number of CPU threads
+public void setLitePowerMode(LitePowerMode mode); // Set the power mode
+public void setLitePowerMode(String modeStr); // Set the power mode via a string
```
-- 模型结果OCRResult说明
+- Model result OCRResult description
```java
public class OCRResult {
- public int[][] mBoxes; // 表示单张图片检测出来的所有目标框坐标,每个框以8个int数值依次表示框的4个坐标点,顺序为左下,右下,右上,左上
- public String[] mText; // 表示多个文本框内被识别出来的文本内容
- public float[] mRecScores; // 表示文本框内识别出来的文本的置信度
- public float[] mClsScores; // 表示文本框的分类结果的置信度
- public int[] mClsLabels; // 表示文本框的方向分类类别
- public boolean mInitialized = false; // 检测结果是否有效
+ public int[][] mBoxes; // The coordinates of all target boxes in a single image. 8 int values represent the 4 coordinate points of the box in the order of bottom left, bottom right, top right and top left
+ public String[] mText; // Recognized text in multiple text boxes
+ public float[] mRecScores; // Confidence of the recognized text in the box
+ public float[] mClsScores; // Confidence of the classification result of the text box
+ public int[] mClsLabels; // Orientation classification label of the text box
+ public boolean mInitialized = false; // Whether the result is valid or not
}
```
-其他参考:C++/Python对应的OCRResult说明: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
+Refer to [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md) for the C++/Python OCRResult description
-- 模型调用示例1:使用构造函数
+- Model Calling Example 1: Using Constructor
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
@@ -101,7 +102,7 @@ import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
-// 模型路径
+// Model path
String detModelFile = "ch_PP-OCRv2_det_infer/inference.pdmodel";
String detParamsFile = "ch_PP-OCRv2_det_infer/inference.pdiparams";
String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
@@ -109,7 +110,7 @@ String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
String recModelFile = "ch_PP-OCRv2_rec_infer/inference.pdmodel";
String recParamsFile = "ch_PP-OCRv2_rec_infer/inference.pdiparams";
String recLabelFilePath = "labels/ppocr_keys_v1.txt";
-// 设置RuntimeOption
+// Set the RuntimeOption
RuntimeOption detOption = new RuntimeOption();
RuntimeOption clsOption = new RuntimeOption();
RuntimeOption recOption = new RuntimeOption();
@@ -122,37 +123,37 @@ recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
detOption.enableLiteFp16();
clsOption.enableLiteFp16();
recOption.enableLiteFp16();
-// 初始化模型
+// Initialize the model
DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
PPOCRv2 model = new PPOCRv2(detModel, clsModel, recModel);
-// 读取图片: 以下仅为读取Bitmap的伪代码
+// Read the image: The following is merely the pseudo code to read the Bitmap
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
-// 模型推理
+// Model Inference
OCRResult result = model.predict(ARGB8888ImageBitmap);
-// 释放模型资源
+// Release model resources
model.release();
```
-- 模型调用示例2: 在合适的程序节点,手动调用init
+- Model calling example 2: Manually call init at the appropriate program node
```java
-// import 同上 ...
+// imports are the same as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.OCRResult;
import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
-// 新建空模型
+// Create an empty model
PPOCRv2 model = new PPOCRv2();
-// 模型路径
+// Model path
String detModelFile = "ch_PP-OCRv2_det_infer/inference.pdmodel";
String detParamsFile = "ch_PP-OCRv2_det_infer/inference.pdiparams";
String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
@@ -160,7 +161,7 @@ String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
String recModelFile = "ch_PP-OCRv2_rec_infer/inference.pdmodel";
String recParamsFile = "ch_PP-OCRv2_rec_infer/inference.pdiparams";
String recLabelFilePath = "labels/ppocr_keys_v1.txt";
-// 设置RuntimeOption
+// Set the RuntimeOption
RuntimeOption detOption = new RuntimeOption();
RuntimeOption clsOption = new RuntimeOption();
RuntimeOption recOption = new RuntimeOption();
@@ -173,30 +174,30 @@ recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
detOption.enableLiteFp16();
clsOption.enableLiteFp16();
recOption.enableLiteFp16();
-// 使用init函数初始化
+// Use init function for initialization
DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
model.init(detModel, clsModel, recModel);
-// Bitmap读取、模型预测、资源释放 同上 ...
+// Bitmap reading, model prediction, and resource release are as above
```
-更详细的用法请参考 [OcrMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java)中的用法
+Refer to [OcrMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java) for more details
-## 替换 FastDeploy SDK和模型
-替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models`。
-- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
- - [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+## Replace FastDeploy SDK and Models
+It is simple to replace the FastDeploy prediction library and models. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` is the version of the prediction library you are using. The models are located at `app/src/main/assets/models`.
+- Replace the FastDeploy Android SDK: download or compile the latest FastDeploy Android SDK, unzip it and place it in the `app/libs` directory; for detailed configuration, refer to
+ - [FastDeploy Java SDK in Android](../../../../../java/android/)
-- 替换OCR模型的步骤:
- - 将您的OCR模型放在 `app/src/main/assets/models` 目录下;
- - 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
+- Steps to replace OCR models:
+ - Put your OCR model in `app/src/main/assets/models`;
+ - Modify the default value of the model path in `app/src/main/res/values/strings.xml`. For example,
```xml
-
+
models
labels/ppocr_keys_v1.txt
```
-## 更多参考文档
-如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
-- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
-- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+## More Reference Documents
+For more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to:
+- [FastDeploy Java SDK in Android](../../../../../java/android/)
+- [FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/ocr/PP-OCRv2/android/README_CN.md b/examples/vision/ocr/PP-OCRv2/android/README_CN.md
new file mode 100644
index 000000000..4e501ae12
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv2/android/README_CN.md
@@ -0,0 +1,203 @@
+[English](README.md) | 简体中文
+# OCR文字识别 Android Demo 使用文档
+
+在 Android 上实现实时的OCR文字识别功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
+
+## 环境准备
+
+1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
+2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+
+## 部署步骤
+
+1. OCR文字识别 Demo 位于 `fastdeploy/examples/vision/ocr/PP-OCRv2/android` 目录
+2. 用 Android Studio 打开 PP-OCRv2/android 工程
+3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+
+
+
+
+
+> **注意:**
+>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
+
+4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
+ 成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
+
+| APP 图标 | APP 效果 | APP设置项
+ | --- | --- | --- |
+|  |  |  |
+
+### PP-OCRv2 Java API 说明
+
+- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。 PP-OCR初始化参数说明如下:
+ - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
+ - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
+ - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 ppocr_keys_v1.txt,每一行包含一个label
+ - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+ 与其他模型不同的是,PP-OCRv2 包含 DBDetector、Classifier和Recognizer等基础模型,以及pipeline类型。
+```java
+// 构造函数: constructor w/o label file
+public DBDetector(String modelFile, String paramsFile);
+public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
+public Classifier(String modelFile, String paramsFile);
+public Classifier(String modelFile, String paramsFile, RuntimeOption option);
+public Recognizer(String modelFile, String paramsFile, String labelPath);
+public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
+public PPOCRv2(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o classifier
+public PPOCRv2(DBDetector detModel, Recognizer recModel);
+public PPOCRv2(DBDetector detModel, Classifier clsModel, Recognizer recModel);
+```
+- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+```java
+// 直接预测:不保存图片以及不渲染结果到Bitmap上
+public OCRResult predict(Bitmap ARGB8888Bitmap);
+// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
+public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // 只渲染 不保存图片
+```
+- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+```java
+public boolean release(); // 释放native资源
+public boolean initialized(); // 检查是否初始化成功
+```
+
+- RuntimeOption设置说明
+
+```java
+public void enableLiteFp16(); // 开启fp16精度推理
+public void disableLiteFP16(); // 关闭fp16精度推理
+public void enableLiteInt8(); // 开启int8精度推理,针对量化模型
+public void disableLiteInt8(); // 关闭int8精度推理
+public void setCpuThreadNum(int threadNum); // 设置线程数
+public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
+public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+```
+
+- 模型结果OCRResult说明
+```java
+public class OCRResult {
+ public int[][] mBoxes; // 表示单张图片检测出来的所有目标框坐标,每个框以8个int数值依次表示框的4个坐标点,顺序为左下,右下,右上,左上
+ public String[] mText; // 表示多个文本框内被识别出来的文本内容
+ public float[] mRecScores; // 表示文本框内识别出来的文本的置信度
+ public float[] mClsScores; // 表示文本框的分类结果的置信度
+ public int[] mClsLabels; // 表示文本框的方向分类类别
+ public boolean mInitialized = false; // 检测结果是否有效
+}
+```
+其他参考:C++/Python对应的OCRResult说明: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
+
+
+- 模型调用示例1:使用构造函数
+```java
+import java.nio.ByteBuffer;
+import android.graphics.Bitmap;
+import android.opengl.GLES20;
+
+import com.baidu.paddle.fastdeploy.RuntimeOption;
+import com.baidu.paddle.fastdeploy.LitePowerMode;
+import com.baidu.paddle.fastdeploy.vision.OCRResult;
+import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
+import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
+import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
+
+// 模型路径
+String detModelFile = "ch_PP-OCRv2_det_infer/inference.pdmodel";
+String detParamsFile = "ch_PP-OCRv2_det_infer/inference.pdiparams";
+String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
+String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
+String recModelFile = "ch_PP-OCRv2_rec_infer/inference.pdmodel";
+String recParamsFile = "ch_PP-OCRv2_rec_infer/inference.pdiparams";
+String recLabelFilePath = "labels/ppocr_keys_v1.txt";
+// 设置RuntimeOption
+RuntimeOption detOption = new RuntimeOption();
+RuntimeOption clsOption = new RuntimeOption();
+RuntimeOption recOption = new RuntimeOption();
+detOption.setCpuThreadNum(2);
+clsOption.setCpuThreadNum(2);
+recOption.setCpuThreadNum(2);
+detOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+clsOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+detOption.enableLiteFp16();
+clsOption.enableLiteFp16();
+recOption.enableLiteFp16();
+// 初始化模型
+DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
+Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
+Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
+PPOCRv2 model = new PPOCRv2(detModel, clsModel, recModel);
+
+// 读取图片: 以下仅为读取Bitmap的伪代码
+ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
+GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
+Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
+ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
+
+// 模型推理
+OCRResult result = model.predict(ARGB8888ImageBitmap);
+
+// 释放模型资源
+model.release();
+```
+
+- 模型调用示例2: 在合适的程序节点,手动调用init
+```java
+// import 同上 ...
+import com.baidu.paddle.fastdeploy.RuntimeOption;
+import com.baidu.paddle.fastdeploy.LitePowerMode;
+import com.baidu.paddle.fastdeploy.vision.OCRResult;
+import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
+import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
+import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
+// 新建空模型
+PPOCRv2 model = new PPOCRv2();
+// 模型路径
+String detModelFile = "ch_PP-OCRv2_det_infer/inference.pdmodel";
+String detParamsFile = "ch_PP-OCRv2_det_infer/inference.pdiparams";
+String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
+String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
+String recModelFile = "ch_PP-OCRv2_rec_infer/inference.pdmodel";
+String recParamsFile = "ch_PP-OCRv2_rec_infer/inference.pdiparams";
+String recLabelFilePath = "labels/ppocr_keys_v1.txt";
+// 设置RuntimeOption
+RuntimeOption detOption = new RuntimeOption();
+RuntimeOption clsOption = new RuntimeOption();
+RuntimeOption recOption = new RuntimeOption();
+detOption.setCpuThreadNum(2);
+clsOption.setCpuThreadNum(2);
+recOption.setCpuThreadNum(2);
+detOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+clsOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+detOption.enableLiteFp16();
+clsOption.enableLiteFp16();
+recOption.enableLiteFp16();
+// 使用init函数初始化
+DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
+Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
+Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
+model.init(detModel, clsModel, recModel);
+// Bitmap读取、模型预测、资源释放 同上 ...
+```
+更详细的用法请参考 [OcrMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java)中的用法
+
+## 替换 FastDeploy SDK和模型
+替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models`。
+- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
+ - [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+
+- 替换OCR模型的步骤:
+ - 将您的OCR模型放在 `app/src/main/assets/models` 目录下;
+ - 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
+```xml
+
+models
+labels/ppocr_keys_v1.txt
+```
+
+## 更多参考文档
+如果您对更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
+- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/ocr/PP-OCRv2/cpp/README.md b/examples/vision/ocr/PP-OCRv2/cpp/README.md
index 9052dd80e..230b8cf3f 100755
--- a/examples/vision/ocr/PP-OCRv2/cpp/README.md
+++ b/examples/vision/ocr/PP-OCRv2/cpp/README.md
@@ -1,25 +1,26 @@
-# PPOCRv2 C++部署示例
+English | [简体中文](README_CN.md)
+# PPOCRv2 C++ Deployment Example
-本目录下提供`infer.cc`快速完成PPOCRv2在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in `infer.cc` that quickly deploys PPOCRv2 on CPU/GPU, as well as on GPU with TensorRT acceleration.
-在部署前,需确认以下两个步骤
+Two steps before deployment
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载模型,图片和字典文件
+# Download model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
tar -xvf ch_PP-OCRv2_det_infer.tar
@@ -33,34 +34,29 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
-# CPU推理
+# CPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
-# GPU推理
+# GPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 2
-# GPU上Paddle-TRT推理
+# Paddle-TRT inference on GPU
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
-# 华为昇腾推理, 需要使用静态shape的demo, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
-./infer_static_shape_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above command works for Linux or MacOS. For SDK in Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
-- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
-
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## PPOCRv2 C++接口
+## PPOCRv2 C++ Interface
-### PPOCRv2类
+### PPOCRv2 Class
```
fastdeploy::pipeline::PPOCRv2(fastdeploy::vision::ocr::DBDetector* det_model,
@@ -68,43 +64,43 @@ fastdeploy::pipeline::PPOCRv2(fastdeploy::vision::ocr::DBDetector* det_model,
fastdeploy::vision::ocr::Recognizer* rec_model);
```
-PPOCRv2 的初始化,由检测,分类和识别模型串联构成
+The initialization of PPOCRv2, consisting of detection, classification and recognition models
-**参数**
+**Parameter**
-> * **DBDetector**(model): OCR中的检测模型
-> * **Classifier**(model): OCR中的分类模型
-> * **Recognizer**(model): OCR中的识别模型
+> * **DBDetector**(model): Detection model in OCR
+> * **Classifier**(model): Classification model in OCR
+> * **Recognizer**(model): Recognition model in OCR
```
fastdeploy::pipeline::PPOCRv2(fastdeploy::vision::ocr::DBDetector* det_model,
fastdeploy::vision::ocr::Recognizer* rec_model);
```
-PPOCRv2 的初始化,由检测,识别模型串联构成(无分类器)
+The initialization of PPOCRv2, consisting of detection and recognition models (No classifier)
-**参数**
+**Parameter**
-> * **DBDetector**(model): OCR中的检测模型
-> * **Recognizer**(model): OCR中的识别模型
+> * **DBDetector**(model): Detection model in OCR
+> * **Recognizer**(model): Recognition model in OCR
-#### Predict函数
+#### Predict Function
> ```
> bool Predict(cv::Mat* img, fastdeploy::vision::OCRResult* result);
> bool Predict(const cv::Mat& img, fastdeploy::vision::OCRResult* result);
> ```
>
-> 模型预测接口,输入一张图片,返回OCR预测结果
+> Model prediction interface. Input an image and return the OCR prediction result
>
-> **参数**
+> **Parameter**
>
-> > * **img**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: OCR预测结果,包括由检测模型输出的检测框位置,分类模型输出的方向分类,以及识别模型输出的识别结果, OCRResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **img**: Input image, which must be in HWC layout and BGR format
+> > * **result**: OCR prediction results, including the position of the detection box from the detection model, the classification of the direction from the classification model, and the recognition result from the recognition model. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for OCRResult
-## DBDetector C++接口
+## DBDetector C++ Interface
-### DBDetector类
+### DBDetector Class
```
fastdeploy::vision::ocr::DBDetector(const std::string& model_file, const std::string& params_file = "",
@@ -112,18 +108,18 @@ fastdeploy::vision::ocr::DBDetector(const std::string& model_file, const std::st
const ModelFormat& model_format = ModelFormat::PADDLE);
```
-DBDetector模型加载和初始化,其中模型为paddle模型格式。
+DBDetector model loading and initialization. The model is in paddle format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### Classifier类与DBDetector类相同
+### The Classifier class is the same as the DBDetector class
-### Recognizer类
+### Recognizer Class
```
Recognizer(const std::string& model_file,
const std::string& params_file = "",
@@ -131,31 +127,31 @@ DBDetector模型加载和初始化,其中模型为paddle模型格式。
const RuntimeOption& custom_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::PADDLE);
```
-Recognizer类初始化时,需要在label_path参数中,输入识别模型所需的label文件,其他参数均与DBDetector类相同
+To initialize the Recognizer class, pass the label file required by the recognition model via the label_path parameter. The other parameters are the same as those of the DBDetector class
-**参数**
-> * **label_path**(str): 识别模型的label文件路径
+**Parameter**
+> * **label_path**(str): Path of the label file used by the recognition model
-### 类成员变量
-#### DBDetector预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Variables
+#### DBDetector Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results
-> > * **max_side_len**(int): 检测算法前向时图片长边的最大尺寸,当长边超出这个值时会将长边resize到这个大小,短边等比例缩放,默认为960
-> > * **det_db_thresh**(double): DB模型输出预测图的二值化阈值,默认为0.3
-> > * **det_db_box_thresh**(double): DB模型输出框的阈值,低于此值的预测框会被丢弃,默认为0.6
-> > * **det_db_unclip_ratio**(double): DB模型输出框扩大的比例,默认为1.5
-> > * **det_db_score_mode**(string):DB后处理中计算文本框平均得分的方式,默认为slow,即求polygon区域的平均分数的方式
-> > * **use_dilation**(bool):是否对检测输出的feature map做膨胀处理,默认为Fasle
+> > * **max_side_len**(int): The maximum size of the image's long side in the detection forward pass. When the long side exceeds this value, it is resized to this size and the short side is scaled proportionally. Default is 960
+> > * **det_db_thresh**(double): The binarization threshold of the prediction map output by the DB model. Default is 0.3
+> > * **det_db_box_thresh**(double): The threshold for the boxes output by the DB model; predicted boxes below this value are discarded. Default is 0.6
+> > * **det_db_unclip_ratio**(double): The expansion ratio of the boxes output by the DB model. Default is 1.5
+> > * **det_db_score_mode**(string): The way the average score of a text box is computed in DB post-processing. Default is slow, i.e. the average score of the polygon area is used
+> > * **use_dilation**(bool): Whether to dilate the feature map output by the detection model. Default is False
-#### Classifier预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+#### Classifier Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results
-> > * **cls_thresh**(double): 当分类模型输出的得分超过此阈值,输入的图片将被翻转,默认为0.9
+> > * **cls_thresh**(double): The input image will be flipped when the score output by the classification model exceeds this threshold. Default 0.9
-## 其它文档
+## Other Documents
-- [PPOCR 系列模型介绍](../../)
-- [PPOCRv2 Python部署](../python)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PPOCR Model Description](../../)
+- [PPOCRv2 Python Deployment](../python)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv2/cpp/README_CN.md b/examples/vision/ocr/PP-OCRv2/cpp/README_CN.md
new file mode 100644
index 000000000..ec8b0c16b
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv2/cpp/README_CN.md
@@ -0,0 +1,162 @@
+[English](README.md) | 简体中文
+# PPOCRv2 C++部署示例
+
+本目录下提供`infer.cc`快速完成PPOCRv2在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+
+# 下载模型,图片和字典文件
+wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
+tar -xvf ch_PP-OCRv2_det_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
+tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
+tar -xvf ch_PP-OCRv2_rec_infer.tar
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
+
+# CPU推理
+./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
+# GPU推理
+./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
+# GPU上TensorRT推理
+./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 2
+# GPU上Paddle-TRT推理
+./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
+# 昆仑芯XPU推理
+./infer_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
+# 华为昇腾推理, 需要使用静态shape的demo, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
+./infer_static_shape_demo ./ch_PP-OCRv2_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv2_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
+```
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
+- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
+
+运行完成可视化结果如下图所示
+
+
+
+
+## PPOCRv2 C++接口
+
+### PPOCRv2类
+
+```
+fastdeploy::pipeline::PPOCRv2(fastdeploy::vision::ocr::DBDetector* det_model,
+ fastdeploy::vision::ocr::Classifier* cls_model,
+ fastdeploy::vision::ocr::Recognizer* rec_model);
+```
+
+PPOCRv2 的初始化,由检测,分类和识别模型串联构成
+
+**参数**
+
+> * **DBDetector**(model): OCR中的检测模型
+> * **Classifier**(model): OCR中的分类模型
+> * **Recognizer**(model): OCR中的识别模型
+
+```
+fastdeploy::pipeline::PPOCRv2(fastdeploy::vision::ocr::DBDetector* det_model,
+ fastdeploy::vision::ocr::Recognizer* rec_model);
+```
+PPOCRv2 的初始化,由检测,识别模型串联构成(无分类器)
+
+**参数**
+
+> * **DBDetector**(model): OCR中的检测模型
+> * **Recognizer**(model): OCR中的识别模型
+
+#### Predict函数
+
+> ```
+> bool Predict(cv::Mat* img, fastdeploy::vision::OCRResult* result);
+> bool Predict(const cv::Mat& img, fastdeploy::vision::OCRResult* result);
+> ```
+>
+> 模型预测接口,输入一张图片,返回OCR预测结果
+>
+> **参数**
+>
+> > * **img**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: OCR预测结果,包括由检测模型输出的检测框位置,分类模型输出的方向分类,以及识别模型输出的识别结果, OCRResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+
+## DBDetector C++接口
+
+### DBDetector类
+
+```
+fastdeploy::vision::ocr::DBDetector(const std::string& model_file, const std::string& params_file = "",
+ const RuntimeOption& custom_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE);
+```
+
+DBDetector模型加载和初始化,其中模型为paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### Classifier类与DBDetector类相同
+
+### Recognizer类
+```
+ Recognizer(const std::string& model_file,
+ const std::string& params_file = "",
+ const std::string& label_path = "",
+ const RuntimeOption& custom_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE);
+```
+Recognizer类初始化时,需要在label_path参数中,输入识别模型所需的label文件,其他参数均与DBDetector类相同
+
+**参数**
+> * **label_path**(str): 识别模型的label文件路径
+
+
+### 类成员变量
+#### DBDetector预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+> > * **max_side_len**(int): 检测算法前向时图片长边的最大尺寸,当长边超出这个值时会将长边resize到这个大小,短边等比例缩放,默认为960
+> > * **det_db_thresh**(double): DB模型输出预测图的二值化阈值,默认为0.3
+> > * **det_db_box_thresh**(double): DB模型输出框的阈值,低于此值的预测框会被丢弃,默认为0.6
+> > * **det_db_unclip_ratio**(double): DB模型输出框扩大的比例,默认为1.5
+> > * **det_db_score_mode**(string):DB后处理中计算文本框平均得分的方式,默认为slow,即求polygon区域的平均分数的方式
+> > * **use_dilation**(bool):是否对检测输出的feature map做膨胀处理,默认为False
+
+#### Classifier预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+> > * **cls_thresh**(double): 当分类模型输出的得分超过此阈值,输入的图片将被翻转,默认为0.9
+
+## 其它文档
+
+- [PPOCR 系列模型介绍](../../)
+- [PPOCRv2 Python部署](../python)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv2/python/README.md b/examples/vision/ocr/PP-OCRv2/python/README.md
index 1ea95695f..53316be24 100755
--- a/examples/vision/ocr/PP-OCRv2/python/README.md
+++ b/examples/vision/ocr/PP-OCRv2/python/README.md
@@ -1,15 +1,16 @@
-# PPOCRv2 Python部署示例
+English | [简体中文](README_CN.md)
+# PPOCRv2 Python Deployment Example
-在部署前,需确认以下两个步骤
+Two steps before deployment
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成PPOCRv2在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides an example in `infer.py` that quickly deploys PPOCRv2 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete the deployment
```
-# 下载模型,图片和字典文件
+# Download model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
tar -xvf ch_PP-OCRv2_det_infer.tar
@@ -24,109 +25,107 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
-#下载部署示例代码
+# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vison/ocr/PP-OCRv2/python/
-# CPU推理
+# CPU inference
python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu
-# GPU推理
+# GPU inference
python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu
-# GPU上使用TensorRT推理
+# TensorRT inference on GPU
python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device kunlunxin
-# 华为昇腾推理,需要使用静态shape脚本, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
-python infer_static_shape.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device ascend
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## PPOCRv2 Python接口
+## PPOCRv2 Python Interface
```
fd.vision.ocr.PPOCRv2(det_model=det_model, cls_model=cls_model, rec_model=rec_model)
```
-PPOCRv2的初始化,输入的参数是检测模型,分类模型和识别模型,其中cls_model可选,如无需求,可设置为None
+PPOCRv2 is initialized with a detection model, a classification model, and a recognition model, where cls_model is optional and can be set to None if not needed (see the construction sketch below)
-**参数**
+**Parameter**
-> * **det_model**(model): OCR中的检测模型
-> * **cls_model**(model): OCR中的分类模型
-> * **rec_model**(model): OCR中的识别模型
+> * **det_model**(model): Detection model in OCR
+> * **cls_model**(model): Classification model in OCR
+> * **rec_model**(model): Recognition model in OCR
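+
+A minimal construction sketch based on the interfaces documented above; the model paths correspond to the directories downloaded at the top of this page:
+
+```python
+import fastdeploy as fd
+
+# Build the three sub-models; the paths follow the downloaded model directories
+det_model = fd.vision.ocr.DBDetector(
+    "ch_PP-OCRv2_det_infer/inference.pdmodel",
+    "ch_PP-OCRv2_det_infer/inference.pdiparams")
+cls_model = fd.vision.ocr.Classifier(
+    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
+    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams")
+rec_model = fd.vision.ocr.Recognizer(
+    "ch_PP-OCRv2_rec_infer/inference.pdmodel",
+    "ch_PP-OCRv2_rec_infer/inference.pdiparams",
+    "ppocr_keys_v1.txt")
+# cls_model is optional and may be None
+ppocr_v2 = fd.vision.ocr.PPOCRv2(det_model=det_model, cls_model=cls_model, rec_model=rec_model)
+```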
-### predict函数
+### predict function
> ```
> result = ppocr_v2.predict(im)
> ```
>
-> 模型预测接口,输入是一张图片
+> Model prediction interface. Input one image.
>
-> **参数**
+> **Parameter**
>
-> > * **im**(np.ndarray): 输入数据,每张图片注意需为HWC,BGR格式
+> > * **im**(np.ndarray): Input data, which must be in HWC layout and BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.OCRResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return the `fastdeploy.vision.OCRResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description.
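+
+A short usage sketch, assuming the `ppocr_v2` pipeline built above and the test image downloaded earlier:
+
+```python
+import cv2
+
+# OpenCV reads images in HWC layout with BGR channel order
+im = cv2.imread("12.jpg")
+result = ppocr_v2.predict(im)
+# OCRResult contains the detected boxes, recognized text and scores
+print(result)
+```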
-## DBDetector Python接口
+## DBDetector Python Interface
-### DBDetector类
+### DBDetector Class
```
fastdeploy.vision.ocr.DBDetector(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-DBDetector模型加载和初始化,其中模型为paddle模型格式。
+DBDetector model loading and initialization. The model is in paddle format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
+> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, i.e. the default configuration is used
+> * **model_format**(ModelFormat): Model format. PADDLE format by default
-### Classifier类与DBDetector类相同
+### The Classifier class is the same as the DBDetector class
-### Recognizer类
+### Recognizer Class
```
fastdeploy.vision.ocr.Recognizer(rec_model_file,rec_params_file,rec_label_file,
runtime_option=rec_runtime_option,model_format=ModelFormat.PADDLE)
```
-Recognizer类初始化时,需要在rec_label_file参数中,输入识别模型所需的label文件路径,其他参数均与DBDetector类相同
+To initialize the Recognizer class, pass the label file path required by the recognition model via the rec_label_file parameter. The other parameters are the same as those of the DBDetector class
-**参数**
-> * **label_path**(str): 识别模型的label文件路径
+**Parameter**
+> * **label_path**(str): Path of the label file used by the recognition model
-### 类成员变量
+### Class Member Variables
-#### DBDetector预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+#### DBDetector Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results (a tuning sketch follows the parameter lists below)
-> > * **max_side_len**(int): 检测算法前向时图片长边的最大尺寸,当长边超出这个值时会将长边resize到这个大小,短边等比例缩放,默认为960
-> > * **det_db_thresh**(double): DB模型输出预测图的二值化阈值,默认为0.3
-> > * **det_db_box_thresh**(double): DB模型输出框的阈值,低于此值的预测框会被丢弃,默认为0.6
-> > * **det_db_unclip_ratio**(double): DB模型输出框扩大的比例,默认为1.5
-> > * **det_db_score_mode**(string):DB后处理中计算文本框平均得分的方式,默认为slow,即求polygon区域的平均分数的方式
-> > * **use_dilation**(bool):是否对检测输出的feature map做膨胀处理,默认为Fasle
+> > * **max_side_len**(int): The maximum size of the image's long side in the detection forward pass. When the long side exceeds this value, it is resized to this size and the short side is scaled proportionally. Default is 960
+> > * **det_db_thresh**(double): The binarization threshold of the prediction map output by the DB model. Default is 0.3
+> > * **det_db_box_thresh**(double): The threshold for the boxes output by the DB model; predicted boxes below this value are discarded. Default is 0.6
+> > * **det_db_unclip_ratio**(double): The expansion ratio of the boxes output by the DB model. Default is 1.5
+> > * **det_db_score_mode**(string): The way the average score of a text box is computed in DB post-processing. Default is slow, i.e. the average score of the polygon area is used
+> > * **use_dilation**(bool): Whether to dilate the feature map output by the detection model. Default is False
-#### Classifier预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+#### Classifier Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results
-> > * **cls_thresh**(double): 当分类模型输出的得分超过此阈值,输入的图片将被翻转,默认为0.9
+> > * **cls_thresh**(double): The input image will be flipped when the score output by the classification model exceeds this threshold. Default 0.9
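+
+A small tuning sketch. This assumes the parameters above are exposed as writable attributes on the model objects, as the member-variable listing suggests; the values shown are the documented defaults:
+
+```python
+# Hypothetical tuning example: attribute names follow the member variables listed above
+det_model.max_side_len = 960
+det_model.det_db_thresh = 0.3
+det_model.det_db_box_thresh = 0.6
+det_model.det_db_unclip_ratio = 1.5
+det_model.det_db_score_mode = "slow"
+det_model.use_dilation = False
+cls_model.cls_thresh = 0.9
+```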
-## 其它文档
+## Other Documents
-- [PPOCR 系列模型介绍](../../)
-- [PPOCRv2 C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PPOCR Model Description](../../)
+- [PPOCRv2 C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv2/python/README_CN.md b/examples/vision/ocr/PP-OCRv2/python/README_CN.md
new file mode 100644
index 000000000..9eea8ba5c
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv2/python/README_CN.md
@@ -0,0 +1,133 @@
+[English](README.md) | 简体中文
+# PPOCRv2 Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成PPOCRv2在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```
+
+# 下载模型,图片和字典文件
+wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
+tar -xvf ch_PP-OCRv2_det_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
+tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar
+tar -xvf ch_PP-OCRv2_rec_infer.tar
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
+
+
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vison/ocr/PP-OCRv2/python/
+
+# CPU推理
+python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu
+# GPU推理
+python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu
+# GPU上使用TensorRT推理
+python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
+# 昆仑芯XPU推理
+python infer.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device kunlunxin
+# 华为昇腾推理,需要使用静态shape脚本, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
+python infer_static_shape.py --det_model ch_PP-OCRv2_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv2_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device ascend
+```
+
+运行完成可视化结果如下图所示
+
+
+## PPOCRv2 Python接口
+
+```
+fd.vision.ocr.PPOCRv2(det_model=det_model, cls_model=cls_model, rec_model=rec_model)
+```
+PPOCRv2的初始化,输入的参数是检测模型,分类模型和识别模型,其中cls_model可选,如无需求,可设置为None
+
+**参数**
+
+> * **det_model**(model): OCR中的检测模型
+> * **cls_model**(model): OCR中的分类模型
+> * **rec_model**(model): OCR中的识别模型
+
+### predict函数
+
+> ```
+> result = ppocr_v2.predict(im)
+> ```
+>
+> 模型预测接口,输入是一张图片
+>
+> **参数**
+>
+> > * **im**(np.ndarray): 输入数据,每张图片注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.OCRResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
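+
+以下是一个最小的调用示例(仅为示意,接口以上文文档为准;模型路径对应上文下载解压得到的模型目录,cls_model 可按需传入 None):
+
+```python
+import cv2
+import fastdeploy as fd
+
+# 构建检测、分类、识别三个子模型
+det_model = fd.vision.ocr.DBDetector(
+    "ch_PP-OCRv2_det_infer/inference.pdmodel",
+    "ch_PP-OCRv2_det_infer/inference.pdiparams")
+cls_model = fd.vision.ocr.Classifier(
+    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
+    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams")
+rec_model = fd.vision.ocr.Recognizer(
+    "ch_PP-OCRv2_rec_infer/inference.pdmodel",
+    "ch_PP-OCRv2_rec_infer/inference.pdiparams",
+    "ppocr_keys_v1.txt")
+ppocr_v2 = fd.vision.ocr.PPOCRv2(det_model=det_model, cls_model=cls_model, rec_model=rec_model)
+
+im = cv2.imread("12.jpg")  # 输入需为HWC、BGR格式
+result = ppocr_v2.predict(im)
+print(result)
+```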
+
+
+
+## DBDetector Python接口
+
+### DBDetector类
+
+```
+fastdeploy.vision.ocr.DBDetector(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+DBDetector模型加载和初始化,其中模型为paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX时,此参数传入空字符串即可
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为PADDLE格式
+
+### Classifier类与DBDetector类相同
+
+### Recognizer类
+```
+fastdeploy.vision.ocr.Recognizer(rec_model_file,rec_params_file,rec_label_file,
+ runtime_option=rec_runtime_option,model_format=ModelFormat.PADDLE)
+```
+Recognizer类初始化时,需要在rec_label_file参数中,输入识别模型所需的label文件路径,其他参数均与DBDetector类相同
+
+**参数**
+> * **label_path**(str): 识别模型的label文件路径
+
+
+
+### 类成员变量
+
+#### DBDetector预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+> > * **max_side_len**(int): 检测算法前向时图片长边的最大尺寸,当长边超出这个值时会将长边resize到这个大小,短边等比例缩放,默认为960
+> > * **det_db_thresh**(double): DB模型输出预测图的二值化阈值,默认为0.3
+> > * **det_db_box_thresh**(double): DB模型输出框的阈值,低于此值的预测框会被丢弃,默认为0.6
+> > * **det_db_unclip_ratio**(double): DB模型输出框扩大的比例,默认为1.5
+> > * **det_db_score_mode**(string):DB后处理中计算文本框平均得分的方式,默认为slow,即求polygon区域的平均分数的方式
+> > * **use_dilation**(bool):是否对检测输出的feature map做膨胀处理,默认为False
+
+#### Classifier预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+> > * **cls_thresh**(double): 当分类模型输出的得分超过此阈值,输入的图片将被翻转,默认为0.9
+
+
+
+## 其它文档
+
+- [PPOCR 系列模型介绍](../../)
+- [PPOCRv2 C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv2/serving/README.md b/examples/vision/ocr/PP-OCRv2/serving/README.md
index b7a636477..2049564c9 100644
--- a/examples/vision/ocr/PP-OCRv2/serving/README.md
+++ b/examples/vision/ocr/PP-OCRv2/serving/README.md
@@ -1,12 +1,13 @@
-# PP-OCRv2服务化部署示例
+English | [简体中文](README_CN.md)
+# PP-OCRv2 Serving Deployment Example
-除了`下载的模型`和`rec前处理的1个参数`以外PP-OCRv2的服务化部署与PP-OCRv3服务化部署全部一样,请参考[PP-OCRv3服务化部署](../../PP-OCRv3/serving)。
+The serving deployment of PP-OCRv2 is identical to that of PP-OCRv3 except for the `downloaded models` and `one rec pre-processing parameter`. Refer to [PP-OCRv3 serving deployment](../../PP-OCRv3/serving).
-## 下载模型
-将下载链接中的`v3`改为`v2`即可。
+## Download models
+Change `v3` into `v2` in the download link.
-## 修改rec前处理参数
-在[model.py](../../PP-OCRv3/serving/models/det_postprocess/1/model.py#L109)文件**109行添加以下代码**:
+## Modify the rec pre-processing parameter
+**Add the following code to line 109** in the file [model.py](../../PP-OCRv3/serving/models/det_postprocess/1/model.py#L109):
```
self.rec_preprocessor.cls_image_shape[1] = 32
```
diff --git a/examples/vision/ocr/PP-OCRv2/serving/README_CN.md b/examples/vision/ocr/PP-OCRv2/serving/README_CN.md
new file mode 100644
index 000000000..f83e8b0b4
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv2/serving/README_CN.md
@@ -0,0 +1,13 @@
+[English](README.md) | 简体中文
+# PP-OCRv2服务化部署示例
+
+除了`下载的模型`和`rec前处理的1个参数`以外PP-OCRv2的服务化部署与PP-OCRv3服务化部署全部一样,请参考[PP-OCRv3服务化部署](../../PP-OCRv3/serving)。
+
+## 下载模型
+将下载链接中的`v3`改为`v2`即可。
+
+## 修改rec前处理参数
+在[model.py](../../PP-OCRv3/serving/models/det_postprocess/1/model.py#L109)文件**109行添加以下代码**:
+```
+self.rec_preprocessor.cls_image_shape[1] = 32
+```
diff --git a/examples/vision/ocr/PP-OCRv3/android/README.md b/examples/vision/ocr/PP-OCRv3/android/README.md
index f4e23ab99..a92fb7224 100644
--- a/examples/vision/ocr/PP-OCRv3/android/README.md
+++ b/examples/vision/ocr/PP-OCRv3/android/README.md
@@ -1,94 +1,94 @@
-# OCR文字识别 Android Demo 使用文档
+English | [简体中文](README_CN.md)
+# OCR Text Recognition Android Demo Tutorial
-在 Android 上实现实时的OCR文字识别功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
+Real-time OCR text recognition on Android. This demo offers good usability and openness; for example, you can run your own trained model in it.
-## 环境准备
+## Prepare the Environment
-1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
-2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+1. Install Android Studio in your local environment. Refer to the [Android Studio official website](https://developer.android.com/studio) for a detailed installation guide.
+2. Prepare an Android phone and turn on USB debugging. To enable it: `Settings -> Find developer options -> Turn on developer options and USB debugging`
-## 部署步骤
+## Deployment steps
-1. OCR文字识别 Demo 位于 `fastdeploy/examples/vision/ocr/PP-OCRv3/android` 目录
-2. 用 Android Studio 打开 PP-OCRv3/android 工程
-3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+1. The OCR text recognition demo is located in the `fastdeploy/examples/vision/ocr/PP-OCRv3/android` directory
+2. Open the PP-OCRv3/android project with Android Studio
+3. Connect the phone to the computer, turn on USB debugging and file transfer mode, and connect your phone to Android Studio (the phone needs to allow installing software from USB)
-> **注意:**
->> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
+> **Attention:**
+>> If you encounter an NDK configuration error during import, compilation or running, open `File > Project Structure > SDK Location` and change `Android SDK location` to the SDK path configured on your machine.
-4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
- 成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
+4. Click the Run button to automatically compile the APP and install it to the phone. (The process will automatically download the pre-compiled FastDeploy Android library and model files. Internet is required).
+The final effect is as follows. Figure 1: the APP installed on the phone; Figure 2: the APP after opening, which automatically recognizes and marks the objects in the image; Figure 3: the APP settings, where you can click the settings icon in the upper right corner to try different options.
-| APP 图标 | APP 效果 | APP设置项
+| APP Icon | APP Effect | APP Settings
| --- | --- | --- |
|  |  |  |
-### PP-OCRv3 Java API 说明
+### PP-OCRv3 Java API Description
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。 PP-OCR初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 ppocr_keys_v1.txt,每一行包含一个label
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
- 与其他模型不同的是,PP-OCRv3 包含 DBDetector、Classifier和Recognizer等基础模型,以及pipeline类型。
+- Model initialization API: the model can be initialized in two ways. One is to initialize directly through the constructor; the other is to call the init function at an appropriate point in the program. The PP-OCR initialization parameters are as follows:
+ - modelFile: String. Model file path in paddle format, such as model.pdmodel
+ - paramFile: String. Parameter file path in paddle format, such as model.pdiparams
+ - labelFile: String. Optional parameter indicating the path of the label file, used for visualization, such as ppocr_keys_v1.txt, where each line contains one label
+ - option: RuntimeOption. Optional parameter for model initialization. The default runtime options are used if the parameter is not passed. Different from other models, PP-OCRv3 contains base models such as DBDetector, Classifier and Recognizer, as well as the pipeline type.
```java
-// 构造函数: constructor w/o label file
+// Constructor: constructor w/o label file
public DBDetector(String modelFile, String paramsFile);
public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
public Classifier(String modelFile, String paramsFile);
public Classifier(String modelFile, String paramsFile, RuntimeOption option);
public Recognizer(String modelFile, String paramsFile, String labelPath);
public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
-public PPOCRv3(); // 空构造函数,之后可以调用init初始化
+public PPOCRv3(); // An empty constructor, which can be initialized by calling init
// Constructor w/o classifier
public PPOCRv3(DBDetector detModel, Recognizer recModel);
public PPOCRv3(DBDetector detModel, Classifier clsModel, Recognizer recModel);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API includes an API for direct prediction and an API with visualization. Direct prediction means no image is saved and no result is rendered to a Bitmap; only the inference result is returned. Prediction with visualization means the result is predicted and visualized, the visualized image is saved to the specified path, and the visualized result is rendered into a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can then be displayed in the camera view.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Direct prediction: No image saving and no result rendering to Bitmap
public OCRResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Prediction and visualization: Predict and visualize the results, save the visualized image to the specified path, and render the visualized results on Bitmap
public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
-public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // 只渲染 不保存图片
+public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // Render without saving images
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: Call release() API to release model resources. Return true for successful release and false for failure; call initialized() to determine whether the model was initialized successfully, with true indicating successful initialization and false indicating failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources
+public boolean initialized(); // Check if initialization was successful
```
-- RuntimeOption设置说明
+- RuntimeOption settings
```java
-public void enableLiteFp16(); // 开启fp16精度推理
-public void disableLiteFP16(); // 关闭fp16精度推理
-public void enableLiteInt8(); // 开启int8精度推理,针对量化模型
-public void disableLiteInt8(); // 关闭int8精度推理
-public void setCpuThreadNum(int threadNum); // 设置线程数
-public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
-public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+public void enableLiteFp16(); // Enable fp16 precision inference
+public void disableLiteFP16(); // Disable fp16 precision inference
+public void enableLiteInt8(); // Enable int8 precision inference, for quantized models
+public void disableLiteInt8(); // Disable int8 precision inference
+public void setCpuThreadNum(int threadNum); // Set the number of threads
+public void setLitePowerMode(LitePowerMode mode); // Set the power mode
+public void setLitePowerMode(String modeStr); // Set the power mode via a string
```
-- 模型结果OCRResult说明
+- Description of the model result OCRResult
```java
public class OCRResult {
- public int[][] mBoxes; // 表示单张图片检测出来的所有目标框坐标,每个框以8个int数值依次表示框的4个坐标点,顺序为左下,右下,右上,左上
- public String[] mText; // 表示多个文本框内被识别出来的文本内容
- public float[] mRecScores; // 表示文本框内识别出来的文本的置信度
- public float[] mClsScores; // 表示文本框的分类结果的置信度
- public int[] mClsLabels; // 表示文本框的方向分类类别
- public boolean mInitialized = false; // 检测结果是否有效
+ public int[][] mBoxes; // The coordinates of all target boxes in a single image. 8 int values represent the 4 coordinate points of the box in the order of bottom left, bottom right, top right and top left
+ public String[] mText; // Recognized text in multiple text boxes
+ public float[] mRecScores; // Confidence of the recognized text in the box
+ public float[] mClsScores; // Confidence of the classification result of the text box
+ public int[] mClsLabels; // Directional classification of the text box
+ public boolean mInitialized = false; // Whether the result is valid or not
}
```
-其他参考:C++/Python对应的OCRResult说明: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
+Refer to [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md) for C++/Python OCRResult
-- 模型调用示例1:使用构造函数
+- Model Calling Example 1: Using Constructor
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
@@ -101,7 +101,7 @@ import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
-// 模型路径
+// Model path
String detModelFile = "ch_PP-OCRv3_det_infer/inference.pdmodel";
String detParamsFile = "ch_PP-OCRv3_det_infer/inference.pdiparams";
String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
@@ -109,7 +109,7 @@ String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
String recModelFile = "ch_PP-OCRv3_rec_infer/inference.pdmodel";
String recParamsFile = "ch_PP-OCRv3_rec_infer/inference.pdiparams";
String recLabelFilePath = "labels/ppocr_keys_v1.txt";
-// 设置RuntimeOption
+// Set the RuntimeOption
RuntimeOption detOption = new RuntimeOption();
RuntimeOption clsOption = new RuntimeOption();
RuntimeOption recOption = new RuntimeOption();
@@ -122,37 +122,37 @@ recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
detOption.enableLiteFp16();
clsOption.enableLiteFp16();
recOption.enableLiteFp16();
-// 初始化模型
+// Initialize the model
DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
PPOCRv3 model = new PPOCRv3(detModel,clsModel,recModel);
-// 读取图片: 以下仅为读取Bitmap的伪代码
+// Read the image: The following is merely the pseudo code to read the Bitmap
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
-// 模型推理
+// Model Inference
OCRResult result = model.predict(ARGB8888ImageBitmap);
-// 释放模型资源
+// Release model resources
model.release();
```
-- 模型调用示例2: 在合适的程序节点,手动调用init
+- Model calling example 2: Manually call init at the appropriate program node
```java
-// import 同上 ...
+// import is as above...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.OCRResult;
import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
-// 新建空模型
+// Create an empty model
PPOCRv3 model = new PPOCRv3();
-// 模型路径
+// Model path
String detModelFile = "ch_PP-OCRv3_det_infer/inference.pdmodel";
String detParamsFile = "ch_PP-OCRv3_det_infer/inference.pdiparams";
String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
@@ -160,7 +160,7 @@ String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
String recModelFile = "ch_PP-OCRv3_rec_infer/inference.pdmodel";
String recParamsFile = "ch_PP-OCRv3_rec_infer/inference.pdiparams";
String recLabelFilePath = "labels/ppocr_keys_v1.txt";
-// 设置RuntimeOption
+// Set the RuntimeOption
RuntimeOption detOption = new RuntimeOption();
RuntimeOption clsOption = new RuntimeOption();
RuntimeOption recOption = new RuntimeOption();
@@ -173,30 +173,30 @@ recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
detOption.enableLiteFp16();
clsOption.enableLiteFp16();
recOption.enableLiteFp16();
-// 使用init函数初始化
+// Use init function for initialization
DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
model.init(detModel, clsModel, recModel);
-// Bitmap读取、模型预测、资源释放 同上 ...
+// Bitmap reading, model prediction, and resource release are as above ...
```
-更详细的用法请参考 [OcrMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java)中的用法
+Refer to [OcrMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java) for more details
-## 替换 FastDeploy SDK和模型
-替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models`。
-- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
- - [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+## Replace FastDeploy SDK and Models
+It’s simple to replace the FastDeploy prediction library and models. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` represents the version of your prediction library. The models are located at `app/src/main/assets/models`.
+- Replace the FastDeploy Android SDK: Download or compile the latest FastDeploy Android SDK, unzip and place it in the `app/libs`; For detailed configuration, refer to
+ - [FastDeploy Java SDK in Android](../../../../../java/android/)
-- 替换OCR模型的步骤:
- - 将您的OCR模型放在 `app/src/main/assets/models` 目录下;
- - 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
+- Steps to replace OCR models:
+ - Put your OCR model in `app/src/main/assets/models`;
+ - Modify the default value of the model path in `app/src/main/res/values/strings.xml`. For example,
```xml
-
+
models
labels/ppocr_keys_v1.txt
```
-## 使用量化模型
-如果您使用的是量化格式的模型,只需要使用RuntimeOption的enableLiteInt8()接口设置Int8精度推理即可。
+## Use quantized models
+If you use a quantized model, simply enable Int8 precision inference via the enableLiteInt8() interface of RuntimeOption.
```java
String detModelFile = "ch_ppocrv3_plate_det_quant/inference.pdmodel";
String detParamsFile = "ch_ppocrv3_plate_det_quant/inference.pdiparams";
@@ -205,18 +205,18 @@ String recParamsFile = "ch_ppocrv3_plate_rec_distillation_quant/inference.pdipar
String recLabelFilePath = "ppocr_keys_v1.txt"; // ppocr_keys_v1.txt
RuntimeOption detOption = new RuntimeOption();
RuntimeOption recOption = new RuntimeOption();
-// 使用Int8精度进行推理
+// Use Int8 accuracy for inference
detOption.enableLiteInt8();
recOption.enableLiteInt8();
-// 初始化PP-OCRv3 Pipeline
+// Initialize PP-OCRv3 Pipeline
PPOCRv3 predictor = new PPOCRv3();
DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
predictor.init(detModel, recModel);
```
-在App中使用,可以参考 [OcrMainActivity.java](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java) 中的用法。
+Refer to [OcrMainActivity.java](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java) for its usage in the app.
-## 更多参考文档
-如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
-- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
-- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+## More Reference Documents
+For more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, refer to:
+- [FastDeploy Java SDK in Android](../../../../../java/android/)
+- [FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/ocr/PP-OCRv3/android/README_CN.md b/examples/vision/ocr/PP-OCRv3/android/README_CN.md
new file mode 100644
index 000000000..b355119e2
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv3/android/README_CN.md
@@ -0,0 +1,223 @@
+[English](README.md) | 简体中文
+# OCR文字识别 Android Demo 使用文档
+
+在 Android 上实现实时的OCR文字识别功能,此 Demo 有很好的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
+
+## 环境准备
+
+1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Studio 官网](https://developer.android.com/studio)。
+2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+
+## 部署步骤
+
+1. OCR文字识别 Demo 位于 `fastdeploy/examples/vision/ocr/PP-OCRv3/android` 目录
+2. 用 Android Studio 打开 PP-OCRv3/android 工程
+3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+
+
+
+
+
+> **注意:**
+>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 `File > Project Structure > SDK Location`,修改 `Android SDK location` 为您本机配置的 SDK 所在路径。
+
+4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
+ 成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的物体并标记;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
+
+| APP 图标 | APP 效果 | APP设置项
+ | --- | --- | --- |
+|  |  |  |
+
+### PP-OCRv3 Java API 说明
+
+- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。 PP-OCR初始化参数说明如下:
+ - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
+ - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
+ - labelFile: String, 可选参数,表示label标签文件所在路径,用于可视化,如 ppocr_keys_v1.txt,每一行包含一个label
+ - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+ 与其他模型不同的是,PP-OCRv3 包含 DBDetector、Classifier和Recognizer等基础模型,以及pipeline类型。
+```java
+// 构造函数: constructor w/o label file
+public DBDetector(String modelFile, String paramsFile);
+public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
+public Classifier(String modelFile, String paramsFile);
+public Classifier(String modelFile, String paramsFile, RuntimeOption option);
+public Recognizer(String modelFile, String paramsFile, String labelPath);
+public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
+public PPOCRv3(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o classifier
+public PPOCRv3(DBDetector detModel, Recognizer recModel);
+public PPOCRv3(DBDetector detModel, Classifier clsModel, Recognizer recModel);
+```
+- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+```java
+// 直接预测:不保存图片以及不渲染结果到Bitmap上
+public OCRResult predict(Bitmap ARGB8888Bitmap);
+// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
+public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // 只渲染 不保存图片
+```
+- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+```java
+public boolean release(); // 释放native资源
+public boolean initialized(); // 检查是否初始化成功
+```
+
+- RuntimeOption设置说明
+
+```java
+public void enableLiteFp16(); // 开启fp16精度推理
+public void disableLiteFP16(); // 关闭fp16精度推理
+public void enableLiteInt8(); // 开启int8精度推理,针对量化模型
+public void disableLiteInt8(); // 关闭int8精度推理
+public void setCpuThreadNum(int threadNum); // 设置线程数
+public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
+public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+```
+
+- 模型结果OCRResult说明
+```java
+public class OCRResult {
+ public int[][] mBoxes; // 表示单张图片检测出来的所有目标框坐标,每个框以8个int数值依次表示框的4个坐标点,顺序为左下,右下,右上,左上
+ public String[] mText; // 表示多个文本框内被识别出来的文本内容
+ public float[] mRecScores; // 表示文本框内识别出来的文本的置信度
+ public float[] mClsScores; // 表示文本框的分类结果的置信度
+ public int[] mClsLabels; // 表示文本框的方向分类类别
+ public boolean mInitialized = false; // 检测结果是否有效
+}
+```
+其他参考:C++/Python对应的OCRResult说明: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
+
+
+- 模型调用示例1:使用构造函数
+```java
+import java.nio.ByteBuffer;
+import android.graphics.Bitmap;
+import android.opengl.GLES20;
+
+import com.baidu.paddle.fastdeploy.RuntimeOption;
+import com.baidu.paddle.fastdeploy.LitePowerMode;
+import com.baidu.paddle.fastdeploy.vision.OCRResult;
+import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
+import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
+import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
+
+// 模型路径
+String detModelFile = "ch_PP-OCRv3_det_infer/inference.pdmodel";
+String detParamsFile = "ch_PP-OCRv3_det_infer/inference.pdiparams";
+String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
+String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
+String recModelFile = "ch_PP-OCRv3_rec_infer/inference.pdmodel";
+String recParamsFile = "ch_PP-OCRv3_rec_infer/inference.pdiparams";
+String recLabelFilePath = "labels/ppocr_keys_v1.txt";
+// 设置RuntimeOption
+RuntimeOption detOption = new RuntimeOption();
+RuntimeOption clsOption = new RuntimeOption();
+RuntimeOption recOption = new RuntimeOption();
+detOption.setCpuThreadNum(2);
+clsOption.setCpuThreadNum(2);
+recOption.setCpuThreadNum(2);
+detOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+clsOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+detOption.enableLiteFp16();
+clsOption.enableLiteFp16();
+recOption.enableLiteFp16();
+// 初始化模型
+DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
+Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
+Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
+PPOCRv3 model = new PPOCRv3(detModel,clsModel,recModel);
+
+// 读取图片: 以下仅为读取Bitmap的伪代码
+ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
+GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
+Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
+ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
+
+// 模型推理
+OCRResult result = model.predict(ARGB8888ImageBitmap);
+
+// 释放模型资源
+model.release();
+```
+
+- 模型调用示例2: 在合适的程序节点,手动调用init
+```java
+// import 同上 ...
+import com.baidu.paddle.fastdeploy.RuntimeOption;
+import com.baidu.paddle.fastdeploy.LitePowerMode;
+import com.baidu.paddle.fastdeploy.vision.OCRResult;
+import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
+import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
+import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
+// 新建空模型
+PPOCRv3 model = new PPOCRv3();
+// 模型路径
+String detModelFile = "ch_PP-OCRv3_det_infer/inference.pdmodel";
+String detParamsFile = "ch_PP-OCRv3_det_infer/inference.pdiparams";
+String clsModelFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel";
+String clsParamsFile = "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams";
+String recModelFile = "ch_PP-OCRv3_rec_infer/inference.pdmodel";
+String recParamsFile = "ch_PP-OCRv3_rec_infer/inference.pdiparams";
+String recLabelFilePath = "labels/ppocr_keys_v1.txt";
+// 设置RuntimeOption
+RuntimeOption detOption = new RuntimeOption();
+RuntimeOption clsOption = new RuntimeOption();
+RuntimeOption recOption = new RuntimeOption();
+detOption.setCpuThreadNum(2);
+clsOption.setCpuThreadNum(2);
+recOption.setCpuThreadNum(2);
+detOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+clsOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+recOption.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+detOption.enableLiteFp16();
+clsOption.enableLiteFp16();
+recOption.enableLiteFp16();
+// 使用init函数初始化
+DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
+Classifier clsModel = new Classifier(clsModelFile, clsParamsFile, clsOption);
+Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
+model.init(detModel, clsModel, recModel);
+// Bitmap读取、模型预测、资源释放 同上 ...
+```
+更详细的用法请参考 [OcrMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java)中的用法
+
+## 替换 FastDeploy SDK和模型
+替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models`。
+- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
+ - [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+
+- 替换OCR模型的步骤:
+ - 将您的OCR模型放在 `app/src/main/assets/models` 目录下;
+ - 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
+```xml
+
+models
+labels/ppocr_keys_v1.txt
+```
+## 使用量化模型
+如果您使用的是量化格式的模型,只需要使用RuntimeOption的enableLiteInt8()接口设置Int8精度推理即可。
+```java
+String detModelFile = "ch_ppocrv3_plate_det_quant/inference.pdmodel";
+String detParamsFile = "ch_ppocrv3_plate_det_quant/inference.pdiparams";
+String recModelFile = "ch_ppocrv3_plate_rec_distillation_quant/inference.pdmodel";
+String recParamsFile = "ch_ppocrv3_plate_rec_distillation_quant/inference.pdiparams";
+String recLabelFilePath = "ppocr_keys_v1.txt"; // ppocr_keys_v1.txt
+RuntimeOption detOption = new RuntimeOption();
+RuntimeOption recOption = new RuntimeOption();
+// 使用Int8精度进行推理
+detOption.enableLiteInt8();
+recOption.enableLiteInt8();
+// 初始化PP-OCRv3 Pipeline
+PPOCRv3 predictor = new PPOCRv3();
+DBDetector detModel = new DBDetector(detModelFile, detParamsFile, detOption);
+Recognizer recModel = new Recognizer(recModelFile, recParamsFile, recLabelFilePath, recOption);
+predictor.init(detModel, recModel);
+```
+在App中使用,可以参考 [OcrMainActivity.java](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/ocr/OcrMainActivity.java) 中的用法。
+
+## 更多参考文档
+如果您对更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
+- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/ocr/PP-OCRv3/cpp/README.md b/examples/vision/ocr/PP-OCRv3/cpp/README.md
index 7f557a213..752c6e184 100755
--- a/examples/vision/ocr/PP-OCRv3/cpp/README.md
+++ b/examples/vision/ocr/PP-OCRv3/cpp/README.md
@@ -1,25 +1,26 @@
-# PPOCRv3 C++部署示例
+English | [简体中文](README_CN.md)
+# PPOCRv3 C++ Deployment Example
-本目录下提供`infer.cc`快速完成PPOCRv3在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in which `infer.cc` quickly finishes the deployment of PPOCRv3 on CPU/GPU, as well as on GPU with TensorRT acceleration.
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载模型,图片和字典文件
+# Download model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar -xvf ch_PP-OCRv3_det_infer.tar
@@ -33,34 +34,29 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
-# CPU推理
+# CPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
-# GPU推理
+# GPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 2
-# GPU上Paddle-TRT推理
+# Paddle-TRT inference on GPU
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
-# 华为昇腾推理,需要使用静态shape的demo, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
-./infer_static_shape_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
```
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands work for Linux or macOS. For how to use the SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
-- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
-
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## 其它文档
+## Other Documents
-- [C++ API查阅](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/)
-- [PPOCR 系列模型介绍](../../)
-- [PPOCRv3 Python部署](../python)
-- [模型预测结果说明](../../../../../docs/cn/faq/how_to_change_backend.md)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [C++ API Reference](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/)
+- [PPOCR Model Description](../../)
+- [PPOCRv3 Python Deployment](../python)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv3/cpp/README_CN.md b/examples/vision/ocr/PP-OCRv3/cpp/README_CN.md
new file mode 100644
index 000000000..167d2d952
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv3/cpp/README_CN.md
@@ -0,0 +1,67 @@
+[English](README.md) | 简体中文
+# PPOCRv3 C++部署示例
+
+本目录下提供`infer.cc`快速完成PPOCRv3在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上CPU推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+
+# 下载模型,图片和字典文件
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
+tar -xvf ch_PP-OCRv3_det_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
+tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
+tar -xvf ch_PP-OCRv3_rec_infer.tar
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
+
+# CPU推理
+./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 0
+# GPU推理
+./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
+# GPU上TensorRT推理
+./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 2
+# GPU上Paddle-TRT推理
+./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 3
+# 昆仑芯XPU推理
+./infer_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4
+# 华为昇腾推理,需要使用静态shape的demo, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
+./infer_static_shape_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 1
+```
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
+- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
+
+运行完成可视化结果如下图所示
+
+
+
+## 其它文档
+
+- [C++ API查阅](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/)
+- [PPOCR 系列模型介绍](../../)
+- [PPOCRv3 Python部署](../python)
+- [模型预测结果说明](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv3/mini_program/README.md b/examples/vision/ocr/PP-OCRv3/mini_program/README.md
index 447a02e72..b80e6acb0 100644
--- a/examples/vision/ocr/PP-OCRv3/mini_program/README.md
+++ b/examples/vision/ocr/PP-OCRv3/mini_program/README.md
@@ -1,40 +1,40 @@
+English | [简体中文](README_CN.md)
+# PP-OCRv3 WeChat Mini Program Deployment Example
-# PP-OCRv3 微信小程序部署示例
-
-本节介绍部署PaddleOCR的PP-OCRv3模型在微信小程序中运行,以及@paddle-js-models/ocr npm包中的js接口。
+This document introduces how to run the PP-OCRv3 model from PaddleOCR in a WeChat Mini Program, and the js interface provided by the @paddle-js-models/ocr npm package.
-## 微信小程序部署PP-OCRv3模型
+## Deploy PP-OCRv3 Models in WeChat Mini Program
-PP-OCRv3模型部署到微信小程序[**参考文档**](../../../../application/js/mini_program)
+For the deployment of PP-OCRv3 models in a WeChat Mini Program, refer to the [**reference document**](../../../../application/js/mini_program)
-## PP-OCRv3 js接口
+## PP-OCRv3 js Interface
```
import * as ocr from "@paddle-js-models/ocr";
await ocr.init(detConfig, recConfig);
const res = await ocr.recognize(img, option, postConfig);
```
-ocr模型加载和初始化,其中模型为Paddle.js模型格式,js模型转换方式参考[文档](../../../../application/js/web_demo/README.md)
+OCR model loading and initialization, where the model is in the Paddle.js format. For how to convert js models, refer to [the document](../../../../application/js/web_demo/README.md)
-**init函数参数**
+**init function parameters**
-> * **detConfig**(dict): 文本检测模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
-> * **recConfig**(dict)): 文本识别模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
+> * **detConfig**(dict): The configuration parameter for text detection model. Default {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; Among them, modelPath is the path of the text detection model; fill is the padding value in the image pre-processing; mean and std are the mean and standard deviation in the pre-processing
+> * **recConfig**(dict): The configuration parameter for the text recognition model. Default {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}; Among them, modelPath is the path of the text recognition model, fill is the padding value in the image pre-processing, and mean/std are the mean and standard deviation in the pre-processing
-**recognize函数参数**
+**recognize function parameters**
-> * **img**(HTMLImageElement): 输入图像参数,类型为HTMLImageElement。
-> * **option**(dict): 可视化文本检测框的canvas参数,可不用设置。
-> * **postConfig**(dict): 文本检测后处理参数,默认值为:{shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio:1.5}; thresh是输出预测图的二值化阈值;box_thresh是输出框的阈值,低于此值的预测框会被丢弃,unclip_ratio是输出框扩大的比例。
+> * **img**(HTMLImageElement): Input image of type HTMLImageElement.
+> * **option**(dict): Canvas parameters for visualizing the detected text boxes. Optional.
+> * **postConfig**(dict): Text detection post-processing parameter. Default: {shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio:1.5}; thresh is the binarization threshold of the output prediction image; box_thresh is the threshold of the output box, below which the prediction box will be discarded; unclip_ratio is the expansion ratio of the output box.
-## 其它文档
+## Other Documents
-- [PP-OCR 系列模型介绍](../../)
-- [PP-OCRv3 C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
-- [PP-OCRv3模型web demo文档](../../../../application/js/web_demo/README.md)
+- [PP-OCR Model Description](../../)
+- [PP-OCRv3 C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Web demo document of PP-OCRv3 models](../../../../application/js/web_demo/README.md)
diff --git a/examples/vision/ocr/PP-OCRv3/mini_program/README_CN.md b/examples/vision/ocr/PP-OCRv3/mini_program/README_CN.md
new file mode 100644
index 000000000..e3a969100
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv3/mini_program/README_CN.md
@@ -0,0 +1,40 @@
+[English](README.md) | 简体中文
+# PP-OCRv3 微信小程序部署示例
+
+本节介绍部署PaddleOCR的PP-OCRv3模型在微信小程序中运行,以及@paddle-js-models/ocr npm包中的js接口。
+
+
+## 微信小程序部署PP-OCRv3模型
+
+PP-OCRv3模型部署到微信小程序[**参考文档**](../../../../application/js/mini_program)
+
+
+## PP-OCRv3 js接口
+
+```
+import * as ocr from "@paddle-js-models/ocr";
+await ocr.init(detConfig, recConfig);
+const res = await ocr.recognize(img, option, postConfig);
+```
+ocr模型加载和初始化,其中模型为Paddle.js模型格式,js模型转换方式参考[文档](../../../../application/js/web_demo/README.md)
+
+**init函数参数**
+
+> * **detConfig**(dict): 文本检测模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
+> * **recConfig**(dict)): 文本识别模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
+
+
+**recognize函数参数**
+
+> * **img**(HTMLImageElement): 输入图像参数,类型为HTMLImageElement。
+> * **option**(dict): 可视化文本检测框的canvas参数,可不用设置。
+> * **postConfig**(dict): 文本检测后处理参数,默认值为:{shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio:1.5}; thresh是输出预测图的二值化阈值;box_thresh是输出框的阈值,低于此值的预测框会被丢弃,unclip_ratio是输出框扩大的比例。
+
+
+## 其它文档
+
+- [PP-OCR 系列模型介绍](../../)
+- [PP-OCRv3 C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-OCRv3模型web demo文档](../../../../application/js/web_demo/README.md)
diff --git a/examples/vision/ocr/PP-OCRv3/python/README.md b/examples/vision/ocr/PP-OCRv3/python/README.md
index 3fcf372e0..99217ceea 100755
--- a/examples/vision/ocr/PP-OCRv3/python/README.md
+++ b/examples/vision/ocr/PP-OCRv3/python/README.md
@@ -1,15 +1,16 @@
-# PPOCRv3 Python部署示例
+English | [简体中文](README_CN.md)
+# PPOCRv3 Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成PPOCRv3在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides an example in which `infer.py` quickly finishes the deployment of PPOCRv3 on CPU/GPU, as well as on GPU with TensorRT acceleration. Run the following script to complete it
```
-# 下载模型,图片和字典文件
+# Download model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar xvf ch_PP-OCRv3_det_infer.tar
@@ -23,32 +24,32 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
-#下载部署示例代码
+# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/ocr/PP-OCRv3/python/
-# CPU推理
+# CPU inference
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu
-# GPU推理
+# GPU inference
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu
-# GPU上使用TensorRT推理
+# TensorRT inference on GPU
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device kunlunxin
-# 华为昇腾推理,需要使用静态shape脚本, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
+# Huawei Ascend inference requires the static-shape script. If you want to predict images continuously, keep all input images at the same size.
python infer_static_shape.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device ascend
```
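+
+`infer.py` is a thin wrapper around the FastDeploy Python API. For reference, a minimal sketch of the underlying calls is shown below, assuming the FastDeploy Python wheel is installed and the files downloaded above are in the current directory:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Build the three sub-models from the files downloaded above
+det_model = fd.vision.ocr.DBDetector(
+    "ch_PP-OCRv3_det_infer/inference.pdmodel",
+    "ch_PP-OCRv3_det_infer/inference.pdiparams")
+cls_model = fd.vision.ocr.Classifier(
+    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdmodel",
+    "ch_ppocr_mobile_v2.0_cls_infer/inference.pdiparams")
+rec_model = fd.vision.ocr.Recognizer(
+    "ch_PP-OCRv3_rec_infer/inference.pdmodel",
+    "ch_PP-OCRv3_rec_infer/inference.pdiparams",
+    "ppocr_keys_v1.txt")
+
+# Chain them into a PP-OCRv3 pipeline and predict on the test image
+ppocr_v3 = fd.vision.ocr.PPOCRv3(det_model=det_model, cls_model=cls_model, rec_model=rec_model)
+result = ppocr_v3.predict(cv2.imread("12.jpg"))
+print(result)
+```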
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## 其它文档
+## Other Documents
-- [Python API文档查阅](https://baidu-paddle.github.io/fastdeploy-api/python/html/)
-- [PPOCR 系列模型介绍](../../)
-- [PPOCRv3 C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Python API reference](https://baidu-paddle.github.io/fastdeploy-api/python/html/)
+- [PPOCR Model Description](../../)
+- [PPOCRv3 C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv3/python/README_CN.md b/examples/vision/ocr/PP-OCRv3/python/README_CN.md
new file mode 100644
index 000000000..845cc91ab
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv3/python/README_CN.md
@@ -0,0 +1,55 @@
+[English](README.md) | 简体中文
+# PPOCRv3 Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成PPOCRv3在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```
+
+# 下载模型,图片和字典文件
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
+tar xvf ch_PP-OCRv3_det_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
+tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
+tar xvf ch_PP-OCRv3_rec_infer.tar
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
+
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/ocr/PP-OCRv3/python/
+
+# CPU推理
+python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu
+# GPU推理
+python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu
+# GPU上使用TensorRT推理
+python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
+# 昆仑芯XPU推理
+python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device kunlunxin
+# 华为昇腾推理,需要使用静态shape脚本, 若用户需要连续地预测图片, 输入图片尺寸需要准备为统一尺寸
+python infer_static_shape.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device ascend
+```
+
+运行完成可视化结果如下图所示
+
+
+
+
+
+## 其它文档
+
+- [Python API文档查阅](https://baidu-paddle.github.io/fastdeploy-api/python/html/)
+- [PPOCR 系列模型介绍](../../)
+- [PPOCRv3 C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/ocr/PP-OCRv3/serving/README.md b/examples/vision/ocr/PP-OCRv3/serving/README.md
index a870b2d19..1ad4c7009 100755
--- a/examples/vision/ocr/PP-OCRv3/serving/README.md
+++ b/examples/vision/ocr/PP-OCRv3/serving/README.md
@@ -1,37 +1,37 @@
-# PP-OCR服务化部署示例
+English | [简体中文](README_CN.md)
+# PP-OCR Serving Deployment Example
-在服务化部署前,需确认
+Before serving deployment, please confirm the following
-- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
+- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
-## 介绍
-本文介绍了使用FastDeploy搭建OCR文字识别服务的方法.
+## Introduction
+This document describes how to build an OCR text recognition service with FastDeploy.
-服务端必须在docker内启动,而客户端不是必须在docker容器内.
+The server must be started in docker, while the client does not need to be in a docker container.
-**本文所在路径($PWD)下的models里包含模型的配置和代码(服务端会加载模型和代码以启动服务), 需要将其映射到docker中使用.**
+**The `models` directory under this path ($PWD) contains the model configuration and code (the server loads the models and code to start the service), and it needs to be mounted into the Docker container.**
-OCR由det(检测)、cls(分类)和rec(识别)三个模型组成.
+OCR consists of det (detection), cls (classification) and rec (recognition) models.
-服务化部署串联的示意图如下图所示,其中`pp_ocr`串联了`det_preprocess`、`det_runtime`和`det_postprocess`,`cls_pp`串联了`cls_runtime`和`cls_postprocess`,`rec_pp`串联了`rec_runtime`和`rec_postprocess`.
-
-特别的是,在`det_postprocess`中会多次调用`cls_pp`和`rec_pp`服务,来实现对检测结果(多个框)进行分类和识别,,最后返回给用户最终的识别结果。
+The diagram of the serving deployment is shown below, where `pp_ocr` chains `det_preprocess`, `det_runtime` and `det_postprocess`; `cls_pp` chains `cls_runtime` and `cls_postprocess`; `rec_pp` chains `rec_runtime` and `rec_postprocess`.
+In particular, the `cls_pp` and `rec_pp` services are called multiple times inside `det_postprocess` to classify and recognize the detection results (multiple boxes), and the final recognition results are then returned to users.
-## 使用
-### 1. 服务端
+## Usage
+### 1. Server
#### 1.1 Docker
```bash
-# 下载仓库代码
+# Download the repository code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/ocr/PP-OCRv3/serving/
-# 下载模型,图片和字典文件
+# Download model, image, and dictionary files
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar xvf ch_PP-OCRv3_det_infer.tar && mv ch_PP-OCRv3_det_infer 1
mv 1/inference.pdiparams 1/model.pdiparams && mv 1/inference.pdmodel 1/model.pdmodel
@@ -54,41 +54,41 @@ mv ppocr_keys_v1.txt models/rec_postprocess/1/
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
-# x.y.z为镜像版本号,需参照serving文档替换为数字
+# x.y.z represents the image version number. Refer to the serving document and replace it with the actual version numbers
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/ocr_serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash
docker exec -it -u root fastdeploy bash
```
-#### 1.2 安装(在docker内)
+#### 1.2 Installation (in docker)
```bash
ldconfig
apt-get install libgl1
```
-#### 1.3 启动服务端(在docker内)
+#### 1.3 Start the server (in docker)
```bash
fastdeployserver --model-repository=/ocr_serving/models
```
-参数:
- - `model-repository`(required): 整套模型streaming_pp_tts存放的路径.
- - `http-port`(optional): HTTP服务的端口号. 默认: `8000`. 本示例中未使用该端口.
- - `grpc-port`(optional): GRPC服务的端口号. 默认: `8001`.
- - `metrics-port`(optional): 服务端指标的端口号. 默认: `8002`. 本示例中未使用该端口.
+Parameters:
+ - `model-repository`(required): The path where the whole model repository is stored.
+ - `http-port`(optional): Port number for the HTTP service. Default: `8000`. This port is not used in this example.
+ - `grpc-port`(optional): Port number for the GRPC service. Default: `8001`.
+ - `metrics-port`(optional): Port number for the server metrics. Default: `8002`. This port is not used in this example.
-### 2. 客户端
-#### 2.1 安装
+### 2. Client
+#### 2.1 Installation
```bash
pip3 install tritonclient[all]
```
-#### 2.2 发送请求
+#### 2.2 Send Requests
```bash
python3 client.py
```
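+
+`client.py` sends a Triton gRPC request to the `pp_ocr` ensemble started above. The sketch below illustrates what such a request looks like with `tritonclient`; the tensor names `INPUT` and `RESULTS`, as well as the exact input layout, are placeholders only; use the names actually defined in `models/pp_ocr/config.pbtxt` and in `client.py`:
+
+```python
+import cv2
+import numpy as np
+import tritonclient.grpc as grpcclient
+
+# Connect to the gRPC endpoint opened by fastdeployserver (default port 8001)
+client = grpcclient.InferenceServerClient(url="localhost:8001")
+
+# Batch of one BGR image; the expected layout, dtype and tensor names are defined
+# in models/pp_ocr/config.pbtxt ("INPUT"/"RESULTS" below are placeholders)
+images = np.expand_dims(cv2.imread("12.jpg"), axis=0).astype(np.uint8)
+infer_input = grpcclient.InferInput("INPUT", list(images.shape), "UINT8")
+infer_input.set_data_from_numpy(images)
+
+response = client.infer(model_name="pp_ocr", inputs=[infer_input],
+                        outputs=[grpcclient.InferRequestedOutput("RESULTS")])
+print(response.as_numpy("RESULTS"))
+```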
-## 配置修改
+## Configuration Change
-当前默认配置在GPU上运行, 如果要在CPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
+The current default configuration runs on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for more information.
diff --git a/examples/vision/ocr/PP-OCRv3/serving/README_CN.md b/examples/vision/ocr/PP-OCRv3/serving/README_CN.md
new file mode 100644
index 000000000..3f68e69ff
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv3/serving/README_CN.md
@@ -0,0 +1,95 @@
+[English](README.md) | 简体中文
+# PP-OCR服务化部署示例
+
+在服务化部署前,需确认
+
+- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
+
+## 介绍
+本文介绍了使用FastDeploy搭建OCR文字识别服务的方法.
+
+服务端必须在docker内启动,而客户端不是必须在docker容器内.
+
+**本文所在路径($PWD)下的models里包含模型的配置和代码(服务端会加载模型和代码以启动服务), 需要将其映射到docker中使用.**
+
+OCR由det(检测)、cls(分类)和rec(识别)三个模型组成.
+
+服务化部署串联的示意图如下图所示,其中`pp_ocr`串联了`det_preprocess`、`det_runtime`和`det_postprocess`,`cls_pp`串联了`cls_runtime`和`cls_postprocess`,`rec_pp`串联了`rec_runtime`和`rec_postprocess`.
+
+特别的是,在`det_postprocess`中会多次调用`cls_pp`和`rec_pp`服务,来实现对检测结果(多个框)进行分类和识别,最后返回给用户最终的识别结果。
+
+
+
+
+
+
+
+## 使用
+### 1. 服务端
+#### 1.1 Docker
+```bash
+# 下载仓库代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/ocr/PP-OCRv3/serving/
+
+# 下载模型,图片和字典文件
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
+tar xvf ch_PP-OCRv3_det_infer.tar && mv ch_PP-OCRv3_det_infer 1
+mv 1/inference.pdiparams 1/model.pdiparams && mv 1/inference.pdmodel 1/model.pdmodel
+mv 1 models/det_runtime/ && rm -rf ch_PP-OCRv3_det_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
+tar xvf ch_ppocr_mobile_v2.0_cls_infer.tar && mv ch_ppocr_mobile_v2.0_cls_infer 1
+mv 1/inference.pdiparams 1/model.pdiparams && mv 1/inference.pdmodel 1/model.pdmodel
+mv 1 models/cls_runtime/ && rm -rf ch_ppocr_mobile_v2.0_cls_infer.tar
+
+wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
+tar xvf ch_PP-OCRv3_rec_infer.tar && mv ch_PP-OCRv3_rec_infer 1
+mv 1/inference.pdiparams 1/model.pdiparams && mv 1/inference.pdmodel 1/model.pdmodel
+mv 1 models/rec_runtime/ && rm -rf ch_PP-OCRv3_rec_infer.tar
+
+mkdir models/pp_ocr/1 && mkdir models/rec_pp/1 && mkdir models/cls_pp/1
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt
+mv ppocr_keys_v1.txt models/rec_postprocess/1/
+
+wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
+
+# x.y.z为镜像版本号,需参照serving文档替换为数字
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10
+docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/ocr_serving registry.baidubce.com/paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 bash
+docker exec -it -u root fastdeploy bash
+```
+
+#### 1.2 安装(在docker内)
+```bash
+ldconfig
+apt-get install libgl1
+```
+
+#### 1.3 启动服务端(在docker内)
+```bash
+fastdeployserver --model-repository=/ocr_serving/models
+```
+
+参数:
+ - `model-repository`(required): 整套模型存放的路径.
+ - `http-port`(optional): HTTP服务的端口号. 默认: `8000`. 本示例中未使用该端口.
+ - `grpc-port`(optional): GRPC服务的端口号. 默认: `8001`.
+ - `metrics-port`(optional): 服务端指标的端口号. 默认: `8002`. 本示例中未使用该端口.
+
+
+### 2. 客户端
+#### 2.1 安装
+```bash
+pip3 install tritonclient[all]
+```
+
+#### 2.2 发送请求
+```bash
+python3 client.py
+```
+
+## 配置修改
+
+当前默认配置在GPU上运行, 如果要在CPU或其他推理引擎上运行。 需要修改`models/runtime/config.pbtxt`中配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
diff --git a/examples/vision/ocr/PP-OCRv3/web/README.md b/examples/vision/ocr/PP-OCRv3/web/README.md
index b13e7547d..454afa158 100644
--- a/examples/vision/ocr/PP-OCRv3/web/README.md
+++ b/examples/vision/ocr/PP-OCRv3/web/README.md
@@ -1,40 +1,39 @@
+English | [简体中文](README_CN.md)
+# PP-OCRv3 Frontend Deployment Example
-# PP-OCRv3 前端部署示例
-
-本节介绍部署PaddleOCR的PP-OCRv3模型在浏览器中运行,以及@paddle-js-models/ocr npm包中的js接口。
+This document introduces how to run PaddleOCR's PP-OCRv3 model in the browser, and the js interface provided by the @paddle-js-models/ocr npm package.
-## 前端部署PP-OCRv3模型
+## Deploy the PP-OCRv3 Model in the Frontend
-PP-OCRv3模型web demo使用[**参考文档**](../../../../application/js/web_demo/)
+For the PP-OCRv3 web demo, refer to the [**reference document**](../../../../application/js/web_demo/)
-## PP-OCRv3 js接口
+## PP-OCRv3 js Interface
```
import * as ocr from "@paddle-js-models/ocr";
await ocr.init(detConfig, recConfig);
const res = await ocr.recognize(img, option, postConfig);
```
-ocr模型加载和初始化,其中模型为Paddle.js模型格式,js模型转换方式参考[文档](../../../../application/js/web_demo/README.md)
+OCR model loading and initialization, where the model is in the Paddle.js format. For how to convert js models, refer to [the document](../../../../application/js/web_demo/README.md)
-**init函数参数**
+**init function parameters**
-> * **detConfig**(dict): 文本检测模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
-> * **recConfig**(dict)): 文本识别模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
+> * **detConfig**(dict): The configuration parameter for the text detection model. Default {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; Among them, modelPath is the path of the text detection model, fill is the padding value in the image pre-processing, and mean/std are the mean and standard deviation in the pre-processing.
+> * **recConfig**(dict): The configuration parameter for the text recognition model. Default {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}; Among them, modelPath is the path of the text recognition model, fill is the padding value in the image pre-processing, and mean/std are the mean and standard deviation in the pre-processing.
-**recognize函数参数**
+**recognize function parameters**
-> * **img**(HTMLImageElement): 输入图像参数,类型为HTMLImageElement。
-> * **option**(dict): 可视化文本检测框的canvas参数,可不用设置。
-> * **postConfig**(dict): 文本检测后处理参数,默认值为:{shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio:1.5}; thresh是输出预测图的二值化阈值;box_thresh是输出框的阈值,低于此值的预测框会被丢弃,unclip_ratio是输出框扩大的比例。
+> * **img**(HTMLImageElement): Input image of type HTMLImageElement.
+> * **option**(dict): Canvas parameters for visualizing the detected text boxes. Optional.
+> * **postConfig**(dict): Text detection post-processing parameter. Default: {shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio:1.5}; thresh is the binarization threshold of the output prediction image. box_thresh is the threshold of the output box, below which the prediction box will be discarded. unclip_ratio is the expansion ratio of the output box.
+## Other Documents
-## 其它文档
-
-- [PP-OCR 系列模型介绍](../../)
-- [PP-OCRv3 C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
-- [PP-OCRv3 微信小程序部署文档](../mini_program/)
+- [PP-OCR Model Description](../../)
+- [PP-OCRv3 C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-OCRv3 WeChat Mini Program deployment document](../mini_program/)
diff --git a/examples/vision/ocr/PP-OCRv3/web/README_CN.md b/examples/vision/ocr/PP-OCRv3/web/README_CN.md
new file mode 100644
index 000000000..a383f8c52
--- /dev/null
+++ b/examples/vision/ocr/PP-OCRv3/web/README_CN.md
@@ -0,0 +1,40 @@
+[English](README.md) | 简体中文
+# PP-OCRv3 前端部署示例
+
+本节介绍部署PaddleOCR的PP-OCRv3模型在浏览器中运行,以及@paddle-js-models/ocr npm包中的js接口。
+
+
+## 前端部署PP-OCRv3模型
+
+PP-OCRv3模型web demo使用[**参考文档**](../../../../application/js/web_demo/)
+
+
+## PP-OCRv3 js接口
+
+```
+import * as ocr from "@paddle-js-models/ocr";
+await ocr.init(detConfig, recConfig);
+const res = await ocr.recognize(img, option, postConfig);
+```
+ocr模型加载和初始化,其中模型为Paddle.js模型格式,js模型转换方式参考[文档](../../../../application/js/web_demo/README.md)
+
+**init函数参数**
+
+> * **detConfig**(dict): 文本检测模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_det_infer_js_960/model.json', fill: '#fff', mean: [0.485, 0.456, 0.406],std: [0.229, 0.224, 0.225]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
+> * **recConfig**(dict)): 文本识别模型配置参数,默认值为 {modelPath: 'https://js-models.bj.bcebos.com/PaddleOCR/PP-OCRv3/ch_PP-OCRv3_rec_infer_js/model.json', fill: '#000', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5]}; 其中,modelPath为文本检测模型路径,fill 为图像预处理padding的值,mean和std分别为预处理的均值和标准差
+
+
+**recognize函数参数**
+
+> * **img**(HTMLImageElement): 输入图像参数,类型为HTMLImageElement。
+> * **option**(dict): 可视化文本检测框的canvas参数,可不用设置。
+> * **postConfig**(dict): 文本检测后处理参数,默认值为:{shape: 960, thresh: 0.3, box_thresh: 0.6, unclip_ratio:1.5}; thresh是输出预测图的二值化阈值;box_thresh是输出框的阈值,低于此值的预测框会被丢弃,unclip_ratio是输出框扩大的比例。
+
+
+## 其它文档
+
+- [PP-OCR 系列模型介绍](../../)
+- [PP-OCRv3 C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-OCRv3 微信小程序部署文档](../mini_program/)
diff --git a/examples/vision/ocr/README.md b/examples/vision/ocr/README.md
index 22a72a0bb..97f0d6146 100644
--- a/examples/vision/ocr/README.md
+++ b/examples/vision/ocr/README.md
@@ -1,19 +1,20 @@
-# PaddleOCR 模型部署
+English | [简体中文](README_CN.md)
+# PaddleOCR Model Deployment
-## PaddleOCR为多个模型组合串联任务,包含
-- 文本检测 `DBDetector`
-- [可选]方向分类 `Classifer` 用于调整进入文字识别前的图像方向
-- 文字识别 `Recognizer` 用于从图像中识别出文字
+## PaddleOCR is a pipeline task that chains multiple models, including
+- Text detection `DBDetector`
+- [Optional] Direction classification `Classifier`, used to correct the image orientation before text recognition
+- Character recognition `Recognizer`, used to recognize characters from images
-根据不同场景, FastDeploy汇总提供如下OCR任务部署, 用户需同时下载3个模型与字典文件(或2个,分类器可选), 完成OCR整个预测流程
+According to different scenarios, FastDeploy provides the following OCR deployment tasks. To complete the whole OCR prediction pipeline, users need to download three models plus the dictionary file (or only two models, since the classifier is optional); a minimal usage sketch is given after the table below
-### PP-OCR 中英文系列模型
-下表中的模型下载链接由PaddleOCR模型库提供, 详见[PP-OCR系列模型列表](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/models_list.md)
+### PP-OCR Chinese and English Series Models
+The model download links in the following table are provided by the PaddleOCR model library. Refer to [PP-OCR Model List](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/models_list.md) for details
-| OCR版本 | 文本框检测 | 方向分类模型 | 文字识别 |字典文件| 说明 |
+| OCR version | Text box detection | Direction classification model | Character recognition | Dictionary file | Note |
|:----|:----|:----|:----|:----|:--------|
-| ch_PP-OCRv3[推荐] |[ch_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv3系列原始超轻量模型,支持中英文、多语种文本检测 |
-| en_PP-OCRv3[推荐] |[en_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [en_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) | [en_dict.txt](https://bj.bcebos.com/paddlehub/fastdeploy/en_dict.txt) | OCRv3系列原始超轻量模型,支持英文与数字识别,除检测模型和识别模型的训练数据与中文模型不同以外,无其他区别 |
-| ch_PP-OCRv2 |[ch_PP-OCRv2_det](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv2_rec](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2系列原始超轻量模型,支持中英文、多语种文本检测 |
-| ch_PP-OCRv2_mobile |[ch_ppocr_mobile_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_mobile_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2系列原始超轻量模型,支持中英文、多语种文本检测,比PPOCRv2更加轻量 |
-| ch_PP-OCRv2_server |[ch_ppocr_server_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_server_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) |[ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2服务器系列模型, 支持中英文、多语种文本检测,比超轻量模型更大,但效果更好|
+| ch_PP-OCRv3[Recommended] |[ch_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv3 Original Ultra-Lightweight Model supports text detection in Chinese, English and multiple languages |
+| en_PP-OCRv3[Recommended] |[en_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [en_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) | [en_dict.txt](https://bj.bcebos.com/paddlehub/fastdeploy/en_dict.txt) | OCRv3 Original Ultra-Lightweight Model supports English and digit recognition. Apart from the training data of its detection and recognition models, it is no different from the Chinese models |
+| ch_PP-OCRv2 |[ch_PP-OCRv2_det](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv2_rec](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2 Original Ultra-Lightweight Model supports text detection in Chinese, English and multiple languages |
+| ch_PP-OCRv2_mobile |[ch_ppocr_mobile_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_mobile_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2 Original Ultra-Lightweight Model Supports text detection in Chinese, English and multiple languages with lighter weight than PPOCRv2 |
+| ch_PP-OCRv2_server |[ch_ppocr_server_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_server_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) |[ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2 Server Model supports text detection in Chinese, English and multiple languages. It has better effects though being larger than the ultra-lightweight model |
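+
+As noted above, the direction classifier is optional. The sketch below is for illustration only; it assumes the FastDeploy Python wheel is installed, the ch_PP-OCRv3 detection/recognition models and `ppocr_keys_v1.txt` from the table have been downloaded and extracted, and that `cls_model` can simply be omitted. It chains only the detector and the recognizer:
+
+```python
+import cv2
+import fastdeploy as fd
+
+det_model = fd.vision.ocr.DBDetector(
+    "ch_PP-OCRv3_det_infer/inference.pdmodel",
+    "ch_PP-OCRv3_det_infer/inference.pdiparams")
+rec_model = fd.vision.ocr.Recognizer(
+    "ch_PP-OCRv3_rec_infer/inference.pdmodel",
+    "ch_PP-OCRv3_rec_infer/inference.pdiparams",
+    "ppocr_keys_v1.txt")
+
+# No cls_model: the direction classifier is optional in the PP-OCR pipeline
+ppocr = fd.vision.ocr.PPOCRv3(det_model=det_model, rec_model=rec_model)
+print(ppocr.predict(cv2.imread("test.jpg")))  # test.jpg: any test image, BGR format
+```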
diff --git a/examples/vision/ocr/README_CN.md b/examples/vision/ocr/README_CN.md
new file mode 100644
index 000000000..9cf63c52d
--- /dev/null
+++ b/examples/vision/ocr/README_CN.md
@@ -0,0 +1,20 @@
+[English](README.md) | 简体中文
+# PaddleOCR 模型部署
+
+## PaddleOCR为多个模型组合串联任务,包含
+- 文本检测 `DBDetector`
+- [可选]方向分类 `Classifer` 用于调整进入文字识别前的图像方向
+- 文字识别 `Recognizer` 用于从图像中识别出文字
+
+根据不同场景, FastDeploy汇总提供如下OCR任务部署, 用户需同时下载3个模型与字典文件(或2个,分类器可选), 完成OCR整个预测流程
+
+### PP-OCR 中英文系列模型
+下表中的模型下载链接由PaddleOCR模型库提供, 详见[PP-OCR系列模型列表](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/models_list.md)
+
+| OCR版本 | 文本框检测 | 方向分类模型 | 文字识别 |字典文件| 说明 |
+|:----|:----|:----|:----|:----|:--------|
+| ch_PP-OCRv3[推荐] |[ch_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv3系列原始超轻量模型,支持中英文、多语种文本检测 |
+| en_PP-OCRv3[推荐] |[en_PP-OCRv3_det](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [en_PP-OCRv3_rec](https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_rec_infer.tar) | [en_dict.txt](https://bj.bcebos.com/paddlehub/fastdeploy/en_dict.txt) | OCRv3系列原始超轻量模型,支持英文与数字识别,除检测模型和识别模型的训练数据与中文模型不同以外,无其他区别 |
+| ch_PP-OCRv2 |[ch_PP-OCRv2_det](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_PP-OCRv2_rec](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2系列原始超轻量模型,支持中英文、多语种文本检测 |
+| ch_PP-OCRv2_mobile |[ch_ppocr_mobile_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_mobile_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar) | [ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2系列原始超轻量模型,支持中英文、多语种文本检测,比PPOCRv2更加轻量 |
+| ch_PP-OCRv2_server |[ch_ppocr_server_v2.0_det](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar) | [ch_ppocr_mobile_v2.0_cls](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar) | [ch_ppocr_server_v2.0_rec](https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar) |[ppocr_keys_v1.txt](https://bj.bcebos.com/paddlehub/fastdeploy/ppocr_keys_v1.txt) | OCRv2服务器系列模型, 支持中英文、多语种文本检测,比超轻量模型更大,但效果更好|
diff --git a/examples/vision/segmentation/paddleseg/README.md b/examples/vision/segmentation/paddleseg/README.md
index 0b0cda349..de578cb22 100644
--- a/examples/vision/segmentation/paddleseg/README.md
+++ b/examples/vision/segmentation/paddleseg/README.md
@@ -1,47 +1,49 @@
-# PaddleSeg 模型部署
+English | [简体中文](README_CN.md)
+# PaddleSeg Model Deployment
-## 模型版本说明
+## Model Version
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
-目前FastDeploy支持如下模型的部署
+FastDeploy currently supports the deployment of the following models
-- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/unet/README.md)
-- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
-- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/README.md)
-- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/fcn/README.md)
-- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/deeplabv3/README.md)
+- [U-Net models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/unet/README.md)
+- [PP-LiteSeg models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
+- [PP-HumanSeg models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/README.md)
+- [FCN models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/fcn/README.md)
+- [DeepLabV3 models](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/deeplabv3/README.md)
-【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../matting)
+[Attention] For the deployment of **PP-Matting**, **PP-HumanMatting** and **ModNet**, please refer to [Matting Model Deployment](../../matting)
-## 准备PaddleSeg部署模型
+## Prepare PaddleSeg Deployment Model
-PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
+For the export of the PaddleSeg model, refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md) for more information
-**注意**
-- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+**Attention**
+- The exported PaddleSeg model contains three files: `model.pdmodel`, `model.pdiparams` and `deploy.yaml`. FastDeploy reads the pre-processing information needed for inference from the yaml file.
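+
+Once exported, the three files are loaded together. The following is a minimal Python sketch, assuming the FastDeploy Python wheel is installed and using the pre-exported Unet model from the table below; the C++ interface follows the same pattern:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Directory produced by the PaddleSeg export step (or downloaded from the table below)
+model_dir = "Unet_cityscapes_without_argmax_infer"
+model = fd.vision.segmentation.PaddleSegModel(
+    model_dir + "/model.pdmodel",
+    model_dir + "/model.pdiparams",
+    model_dir + "/deploy.yaml")  # pre-processing info is read from deploy.yaml
+
+im = cv2.imread("cityscapes_demo.png")  # any test image in BGR format
+result = model.predict(im)              # SegmentationResult with per-pixel labels
+print(result)
+```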
-## 下载预训练模型
+## Download Pre-trained Model
-为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
-- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
-- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax`
+For developers' testing, some models exported from PaddleSeg are provided below.
+- without-argmax export: `--input_shape` is **not specified**, and `--output_op none` is **specified**
+- with-argmax export: `--input_shape` is **not specified**, and `--output_op argmax` is **specified**
-开发者可直接下载使用。
+Developers can download and use them directly.
-| 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
+
+| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
| [Unet-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_with_argmax_infer.tgz) \| [Unet-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-B(STDC2)-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz) \| [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 79.04% | 79.52% | 79.85% |
-|[PP-HumanSegV1-Lite-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
-|[PP-HumanSegV2-Lite-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
-| [PP-HumanSegV2-Mobile-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
-|[PP-HumanSegV1-Server-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
-| [Portait-PP-HumanSegV2-Lite-with-argmax(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portait-PP-HumanSegV2-Lite-without-argmax(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
-| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz)(暂时不支持ONNXRuntime的GPU推理) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
+|[PP-HumanSegV1-Lite-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
+|[PP-HumanSegV2-Lite-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
+| [PP-HumanSegV2-Mobile-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
+|[PP-HumanSegV1-Server-with-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax(General Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
+| [Portait-PP-HumanSegV2-Lite-with-argmax(Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portait-PP-HumanSegV2-Lite-without-argmax(Portrait Segmentation Model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
+| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz)(GPU inference for ONNXRuntime is not supported now) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/segmentation/paddleseg/README_CN.md b/examples/vision/segmentation/paddleseg/README_CN.md
new file mode 100644
index 000000000..7306a5f4f
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/README_CN.md
@@ -0,0 +1,34 @@
+[English](README.md) | 简体中文
+# 视觉模型部署
+
+本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
+
+| 任务类型 | 说明 | 预测结果结构体 |
+|:-------------- |:----------------------------------- |:-------------------------------------------------------------------------------- |
+| Detection | 目标检测,输入图像,检测图像中物体位置,并返回检测框坐标及类别和置信度 | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
+| Segmentation | 语义分割,输入图像,给出图像中每个像素的分类及置信度 | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
+| Classification | 图像分类,输入图像,给出图像的分类结果和置信度 | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
+| FaceDetection | 人脸检测,输入图像,检测图像中人脸位置,并返回检测框坐标及人脸关键点 | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
+| FaceAlignment | 人脸对齐(人脸关键点检测),输入图像,返回人脸关键点 | [FaceAlignmentResult](../../docs/api/vision_results/face_alignment_result.md) |
+| KeypointDetection | 关键点检测,输入图像,返回图像中人物行为的各个关键点坐标和置信度 | [KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md) |
+| FaceRecognition | 人脸识别,输入图像,返回可用于相似度计算的人脸特征的embedding | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
+| Matting | 抠图,输入图像,返回图片的前景每个像素点的Alpha值 | [MattingResult](../../docs/api/vision_results/matting_result.md) |
+| OCR | 文本框检测,分类,文本框内容识别,输入图像,返回文本框坐标,文本框的方向类别以及框内的文本内容 | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
+| MOT | 多目标跟踪,输入图像,检测图像中物体位置,并返回检测框坐标,对象id及类别置信度 | [MOTResult](../../docs/api/vision_results/mot_result.md) |
+| HeadPose | 头部姿态估计,返回头部欧拉角 | [HeadPoseResult](../../docs/api/vision_results/headpose_result.md) |
+
+## FastDeploy API设计
+
+视觉模型具有较为统一的任务范式,在设计API时(包括C++/Python),FastDeploy将视觉模型的部署拆分为四个步骤
+
+- 模型加载
+- 图像预处理
+- 模型推理
+- 推理结果后处理
+
+FastDeploy针对飞桨的视觉套件,以及外部热门模型,提供端到端的部署服务,用户只需准备模型,按以下步骤即可完成整个模型的部署
+
+- 加载模型
+- 调用`predict`接口
+
+FastDeploy在各视觉模型部署时,也支持一键切换后端推理引擎,详情参阅[如何切换模型推理引擎](../../docs/cn/faq/how_to_change_backend.md)。
diff --git a/examples/vision/segmentation/paddleseg/a311d/README.md b/examples/vision/segmentation/paddleseg/a311d/README.md
index 3fbf6ed5e..07870aa59 100755
--- a/examples/vision/segmentation/paddleseg/a311d/README.md
+++ b/examples/vision/segmentation/paddleseg/a311d/README.md
@@ -1,11 +1,12 @@
-# PP-LiteSeg 量化模型在 A311D 上的部署
-目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-LiteSeg 量化模型到 A311D 上。
+English | [简体中文](README_CN.md)
+# Deployment of PP-LiteSeg Quantized Model on A311D
+FastDeploy now supports deploying PP-LiteSeg quantized models to A311D based on Paddle Lite.
-模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
+For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md)
-## 详细部署文档
+## Detailed Deployment Tutorials
-在 A311D 上只支持 C++ 的部署。
+Only C++ deployment is supported on A311D.
-- [C++部署](cpp)
+- [C++ deployment](cpp)
diff --git a/examples/vision/segmentation/paddleseg/a311d/README_CN.md b/examples/vision/segmentation/paddleseg/a311d/README_CN.md
new file mode 100644
index 000000000..dad4f3924
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/a311d/README_CN.md
@@ -0,0 +1,12 @@
+[English](README.md) | 简体中文
+# PP-LiteSeg 量化模型在 A311D 上的部署
+目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-LiteSeg 量化模型到 A311D 上。
+
+模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
+
+
+## 详细部署文档
+
+在 A311D 上只支持 C++ 的部署。
+
+- [C++部署](cpp)
diff --git a/examples/vision/segmentation/paddleseg/cpp/README.md b/examples/vision/segmentation/paddleseg/cpp/README.md
index 07f9f4c62..4c5be9f6c 100755
--- a/examples/vision/segmentation/paddleseg/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/cpp/README.md
@@ -1,57 +1,53 @@
-# PaddleSeg C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg C++ Deployment Example
-本目录下提供`infer.cc`快速完成Unet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in which `infer.cc` quickly completes the deployment of Unet on CPU/GPU, as well as GPU deployment accelerated by TensorRT.
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)
+Note: For the deployment of **PP-Matting**, **PP-HumanMatting**, and **ModNet**, please refer to [Matting Model Deployment](../../../matting)
-以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.0以上(x.x.x>=1.0.0)
+Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.0 or above (x.x.x>=1.0.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载Unet模型文件和测试图片
+# Download Unet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-# CPU推理
+# CPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
-# GPU推理
+# GPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
-# GPU上TensorRT推理
+# TensorRT inference on GPU
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
-# 华为昇腾推理
-./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 4
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands work for Linux or MacOS. For how to use the SDK on Windows, please refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
-- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
+## PaddleSeg C++ Interface
-## PaddleSeg C++接口
-
-### PaddleSeg类
+### PaddleSeg Class
```c++
fastdeploy::vision::segmentation::PaddleSegModel(
@@ -62,39 +58,39 @@ fastdeploy::vision::segmentation::PaddleSegModel(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-PaddleSegModel模型加载和初始化,其中model_file为导出的Paddle模型格式。
+PaddleSegModel model loading and initialization, among which model_file is the exported Paddle model format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict Function
> ```c++
> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input an image and output the segmentation result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 分割结果,包括分割预测的标签以及标签对应的概率值, SegmentationResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **im**: Input image, which must be in HWC layout and BGR format
+> > * **result**: The segmentation result, including the predicted label of the segmentation and the corresponding probability of the label. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of SegmentationResult
-### 类成员属性
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Properties
+#### Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results
-> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
+> > * **is_vertical_screen**(bool): For PP-HumanSeg series models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e., its height is greater than its width
+
-#### 后处理参数
-> > * **apply_softmax**(bool): 当模型导出时,并未指定`apply_softmax`参数,可通过此设置此参数为`true`,将预测的输出分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
+#### Post-processing Parameters
+> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` option, set this parameter to `true` to apply softmax normalization to the probability result (score_map) corresponding to the predicted segmentation label (label_map)
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/segmentation/paddleseg/cpp/README_CN.md b/examples/vision/segmentation/paddleseg/cpp/README_CN.md
new file mode 100644
index 000000000..df99e324e
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/cpp/README_CN.md
@@ -0,0 +1,101 @@
+[English](README.md) | 简体中文
+# PaddleSeg C++部署示例
+
+本目录下提供`infer.cc`快速完成Unet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)
+
+以Linux上推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本1.0.0以上(x.x.x>=1.0.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载Unet模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
+tar -xvf Unet_cityscapes_without_argmax_infer.tgz
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+
+# CPU推理
+./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 0
+# GPU推理
+./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 1
+# GPU上TensorRT推理
+./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 2
+# 昆仑芯XPU推理
+./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 3
+# 华为昇腾推理
+./infer_demo Unet_cityscapes_without_argmax_infer cityscapes_demo.png 4
+```
+
+运行完成可视化结果如下图所示
+
+

+
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+如果用户使用华为昇腾NPU部署, 请参考以下方式在部署前初始化部署环境:
+- [如何使用华为昇腾NPU部署](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
+
+## PaddleSeg C++接口
+
+### PaddleSeg类
+
+```c++
+fastdeploy::vision::segmentation::PaddleSegModel(
+ const string& model_file,
+ const string& params_file = "",
+ const string& config_file,
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+PaddleSegModel模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> PaddleSegModel::Predict(cv::Mat* im, SegmentationResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 分割结果,包括分割预测的标签以及标签对应的概率值, SegmentationResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
+
+#### 后处理参数
+> > * **apply_softmax**(bool): 当模型导出时,并未指定`apply_softmax`参数,可通过此设置此参数为`true`,将预测的输出分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/segmentation/paddleseg/python/README.md b/examples/vision/segmentation/paddleseg/python/README.md
index 02b2e6ab5..95885a2f2 100755
--- a/examples/vision/segmentation/paddleseg/python/README.md
+++ b/examples/vision/segmentation/paddleseg/python/README.md
@@ -1,85 +1,82 @@
-# PaddleSeg Python部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)
-
-本目录下提供`infer.py`快速完成Unet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+Note: For the deployment of **PP-Matting**, **PP-HumanMatting**, and **ModNet**, please refer to [Matting Model Deployment](../../../matting)
+
+This directory provides an example in which `infer.py` quickly completes the deployment of Unet on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following script:
```bash
-#下载部署示例代码
+# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python
-# 下载Unet模型文件和测试图片
+# Download Unet model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
tar -xvf Unet_cityscapes_without_argmax_infer.tgz
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-# CPU推理
+# CPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
-# GPU推理
+# GPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Note: When running TensorRT inference for the first time, the model is serialized, which takes a while. Please be patient.)
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
-# 昆仑芯XPU推理
+# KunlunXin XPU inference
python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
-# 华为昇腾推理
-python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device ascend
```
-运行完成可视化结果如下图所示
+The visualized result after running is as follows
-## PaddleSegModel Python接口
+## PaddleSegModel Python Interface
```python
fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-PaddleSeg模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md)
+PaddleSeg model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### predict函数
+### predict function
> ```python
> PaddleSegModel.predict(input_image)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input an image and output the segmentation result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+> > * **input_image**(np.ndarray): Input data, which must be in HWC layout and BGR format
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.SegmentationResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.SegmentationResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure.
-### 类成员属性
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Properties
+#### Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their needs, which affects the final inference and deployment results
-> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
+> > * **is_vertical_screen**(bool): For PP-HumanSeg series models, setting this parameter to `true` indicates that the input image is in portrait orientation, i.e., its height is greater than its width
+
+#### Post-processing Parameters
+> > * **apply_softmax**(bool): When the model was exported without the `apply_softmax` option, set this parameter to `true` to apply softmax normalization to the probability result (score_map) corresponding to the predicted segmentation label (label_map)
-#### 后处理参数
-> > * **apply_softmax**(bool): 当模型导出时,并未指定`apply_softmax`参数,可通过此设置此参数为`true`,将预测的输出分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
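+
+For reference, below is a minimal Python sketch of the interface described above. It assumes the model directory and test image downloaded by the commands earlier in this document, and that the exported package follows the standard PaddleSeg layout (`model.pdmodel`, `model.pdiparams`, `deploy.yaml`); adjust the paths for your own model:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Load the model (file names assume the standard PaddleSeg export layout)
+model = fd.vision.segmentation.PaddleSegModel(
+    "Unet_cityscapes_without_argmax_infer/model.pdmodel",
+    "Unet_cityscapes_without_argmax_infer/model.pdiparams",
+    "Unet_cityscapes_without_argmax_infer/deploy.yaml")
+
+# Run prediction on an HWC, BGR image
+im = cv2.imread("cityscapes_demo.png")
+result = model.predict(im)   # fastdeploy.vision.SegmentationResult
+print(result)
+
+# Visualize and save the result (vis_segmentation is assumed to be available in your FastDeploy version)
+vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
+cv2.imwrite("vis_result.png", vis_im)
+```
+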
+## Other Documents
-## 其它文档
-
-- [PaddleSeg 模型介绍](..)
-- [PaddleSeg C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PaddleSeg Model Description](..)
+- [PaddleSeg C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/segmentation/paddleseg/python/README_CN.md b/examples/vision/segmentation/paddleseg/python/README_CN.md
new file mode 100644
index 000000000..61edc5b2b
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/python/README_CN.md
@@ -0,0 +1,86 @@
+[English](README.md) | 简体中文
+# PaddleSeg Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../matting)
+
+本目录下提供`infer.py`快速完成Unet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/python
+
+# 下载Unet模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz
+tar -xvf Unet_cityscapes_without_argmax_infer.tgz
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# CPU推理
+python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device cpu
+# GPU推理
+python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device gpu --use_trt True
+# 昆仑芯XPU推理
+python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device kunlunxin
+# 华为昇腾推理
+python infer.py --model Unet_cityscapes_without_argmax_infer --image cityscapes_demo.png --device ascend
+```
+
+运行完成可视化结果如下图所示
+
+

+
+
+## PaddleSegModel Python接口
+
+```python
+fd.vision.segmentation.PaddleSegModel(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+PaddleSeg模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/docs/model_export_cn.md)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> PaddleSegModel.predict(input_image)
+> ```
+>
+> 模型预测接口,输入图像直接输出分割结果。
+>
+> **参数**
+>
+> > * **input_image**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.SegmentationResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+> > * **is_vertical_screen**(bool): PP-HumanSeg系列模型通过设置此参数为`true`表明输入图片是竖屏,即height大于width的图片
+
+#### 后处理参数
+> > * **apply_softmax**(bool): 当模型导出时,并未指定`apply_softmax`参数,可通过此设置此参数为`true`,将预测的输出分割标签(label_map)对应的概率结果(score_map)做softmax归一化处理
+
+## 其它文档
+
+- [PaddleSeg 模型介绍](..)
+- [PaddleSeg C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/segmentation/paddleseg/rv1126/README.md b/examples/vision/segmentation/paddleseg/rv1126/README.md
index 61954943c..dc9755272 100755
--- a/examples/vision/segmentation/paddleseg/rv1126/README.md
+++ b/examples/vision/segmentation/paddleseg/rv1126/README.md
@@ -1,11 +1,12 @@
-# PP-LiteSeg 量化模型在 RV1126 上的部署
-目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-LiteSeg 量化模型到 RV1126 上。
+English | [简体中文](README_CN.md)
+# Deployment of Quantized PP-LiteSeg Model on RV1126
+FastDeploy now supports deploying quantized PP-LiteSeg models to RV1126 based on Paddle Lite.
-模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
+For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md)
-## 详细部署文档
+## Detailed Deployment Tutorials
-在 RV1126 上只支持 C++ 的部署。
+Only C++ deployment is supported on RV1126.
-- [C++部署](cpp)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/segmentation/paddleseg/rv1126/README_CN.md b/examples/vision/segmentation/paddleseg/rv1126/README_CN.md
new file mode 100644
index 000000000..ce4cbb816
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/rv1126/README_CN.md
@@ -0,0 +1,12 @@
+[English](README.md) | 简体中文
+# PP-LiteSeg 量化模型在 RV1126 上的部署
+目前 FastDeploy 已经支持基于 Paddle Lite 部署 PP-LiteSeg 量化模型到 RV1126 上。
+
+模型的量化和量化模型的下载请参考:[模型量化](../quantize/README.md)
+
+
+## 详细部署文档
+
+在 RV1126 上只支持 C++ 的部署。
+
+- [C++部署](cpp)
diff --git a/examples/vision/sr/README.md b/examples/vision/sr/README.md
index 88f9e7777..b37290f19 100644
--- a/examples/vision/sr/README.md
+++ b/examples/vision/sr/README.md
@@ -1,9 +1,10 @@
-# sr 模型部署
+English | [简体中文](README_CN.md)
+# SR Model Deployment
-FastDeploy目前支持如下超分模型部署
+Now FastDeploy supports the deployment of the following SR models
-| 模型 | 说明 | 模型格式 | 版本 |
+| Model | Description | Model Format | Version |
|:-----------------------------------------|:----------------------|:-------|:----------------------------------------------------------------------------------|
-| [PaddleGAN/BasicVSR](./basicvsr) | BasicVSR 系列模型 | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
-| [PaddleGAN/EDVR](./edvr) | EDVR 系列模型 | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
-| [PaddleGAN/PP-MSVSR](./ppmsvsr) | PP-MSVSR 系列模型 | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
+| [PaddleGAN/BasicVSR](./basicvsr) | BasicVSR models | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
+| [PaddleGAN/EDVR](./edvr) | EDVR models | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
+| [PaddleGAN/PP-MSVSR](./ppmsvsr) | PP-MSVSR models | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
diff --git a/examples/vision/sr/README_CN.md b/examples/vision/sr/README_CN.md
new file mode 100644
index 000000000..2091e5ed0
--- /dev/null
+++ b/examples/vision/sr/README_CN.md
@@ -0,0 +1,10 @@
+[English](README.md) | 简体中文
+# sr 模型部署
+
+FastDeploy目前支持如下超分模型部署
+
+| 模型 | 说明 | 模型格式 | 版本 |
+|:-----------------------------------------|:----------------------|:-------|:----------------------------------------------------------------------------------|
+| [PaddleGAN/BasicVSR](./basicvsr) | BasicVSR 系列模型 | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
+| [PaddleGAN/EDVR](./edvr) | EDVR 系列模型 | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
+| [PaddleGAN/PP-MSVSR](./ppmsvsr) | PP-MSVSR 系列模型 | paddle | [develop](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) |
diff --git a/examples/vision/sr/basicvsr/README.md b/examples/vision/sr/basicvsr/README.md
index fc92b8422..91bfc7b81 100644
--- a/examples/vision/sr/basicvsr/README.md
+++ b/examples/vision/sr/basicvsr/README.md
@@ -1,28 +1,29 @@
-# BasicVSR模型部署
+English | [简体中文](README_CN.md)
+# BasicVSR Model Deployment
-## 模型版本说明
+## Model Version
- [PaddleGAN develop](https://github.com/PaddlePaddle/PaddleGAN)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+Now FastDeploy supports the deployment of the following models
- [BasicVSR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
-## 导出部署模型
+## Export Deployment Model
-在部署前,需要先将训练好的BasicVSR导出成部署模型,导出BasicVSR导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+Before deployment, export the trained BasicVSR to the deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) for detailed steps.
-| 模型 | 参数大小 | 精度 | 备注 |
+| Model | Parameter Size | Accuracy | Note |
|:----------------------------------------------------------------------------|:-------|:----- | :------ |
| [BasicVSR](https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar) | 30.1MB | - |
-**注意**:非常不建议在没有独立显卡的设备上运行该模型
+**Note**: Running this model on a device without a discrete graphics card is strongly discouraged
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/sr/basicvsr/README_CN.md b/examples/vision/sr/basicvsr/README_CN.md
new file mode 100644
index 000000000..0862c6dd5
--- /dev/null
+++ b/examples/vision/sr/basicvsr/README_CN.md
@@ -0,0 +1,29 @@
+[English](README.md) | 简体中文
+# BasicVSR模型部署
+
+## 模型版本说明
+
+- [PaddleGAN develop](https://github.com/PaddlePaddle/PaddleGAN)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [BasicVSR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+
+
+## 导出部署模型
+
+在部署前,需要先将训练好的BasicVSR导出成部署模型,导出BasicVSR导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:----------------------------------------------------------------------------|:-------|:----- | :------ |
+| [BasicVSR](https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar) | 30.1MB | - |
+
+**注意**:非常不建议在没有独立显卡的设备上运行该模型
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/sr/basicvsr/cpp/README.md b/examples/vision/sr/basicvsr/cpp/README.md
index 802a7ac58..7cb500798 100644
--- a/examples/vision/sr/basicvsr/cpp/README.md
+++ b/examples/vision/sr/basicvsr/cpp/README.md
@@ -1,42 +1,42 @@
-# BasicVSR C++部署示例
+English | [简体中文](README_CN.md)
+# BasicVSR C++ Deployment Example
-本目录下提供`infer.cc`快速完成BasicVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in which `infer.cc` quickly completes the deployment of BasicVSR on CPU/GPU, as well as GPU deployment accelerated by TensorRT.
+Before deployment, two steps require confirmation
-在部署前,需确认以下两个步骤
-
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上BasicVSR推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+Taking the BasicVSR inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载BasicVSR模型文件和测试视频
+# Download BasicVSR model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar
tar -xvf BasicVSR_reds_x4.tar
wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
-# CPU推理
+# CPU inference
./infer_demo BasicVSR_reds_x4 vsr_src.mp4 0 2
-# GPU推理
+# GPU inference
./infer_demo BasicVSR_reds_x4 vsr_src.mp4 1 2
-# GPU上TensorRT推理
+# TensorRT Inference on GPU
./infer_demo BasicVSR_reds_x4 vsr_src.mp4 2 2
```
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands work for Linux or MacOS. For how to use the SDK on Windows, please refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## BasicVSR C++接口
+## BasicVSR C++ Interface
-### BasicVSR类
+### BasicVSR Class
```c++
fastdeploy::vision::sr::BasicVSR(
@@ -46,28 +46,28 @@ fastdeploy::vision::sr::BasicVSR(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-BasicVSR模型加载和初始化,其中model_file为导出的Paddle模型格式。
+BasicVSR model loading and initialization, among which model_file is the exported Paddle model format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict Function
> ```c++
> BasicVSR::Predict(std::vector& imgs, std::vector& results)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input a video frame sequence and output the super-resolution result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **imgs**: 输入视频帧序列,注意需为HWC,BGR格式
-> > * **results**: 视频超分结果,超分后的视频帧序列
+> > * **imgs**: Input video frame sequence, where each frame must be in HWC layout and BGR format
+> > * **results**: Video super-resolution result, i.e., the video frame sequence after SR
-- [模型介绍](../../)
-- [Python部署](../python)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/basicvsr/cpp/README_CN.md b/examples/vision/sr/basicvsr/cpp/README_CN.md
new file mode 100644
index 000000000..ed2076432
--- /dev/null
+++ b/examples/vision/sr/basicvsr/cpp/README_CN.md
@@ -0,0 +1,74 @@
+[English](README.md) | 简体中文
+# BasicVSR C++部署示例
+
+本目录下提供`infer.cc`快速完成BasicVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+以Linux上BasicVSR推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载BasicVSR模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar
+tar -xvf BasicVSR_reds_x4.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
+
+
+# CPU推理
+./infer_demo BasicVSR_reds_x4 vsr_src.mp4 0 2
+# GPU推理
+./infer_demo BasicVSR_reds_x4 vsr_src.mp4 1 2
+# GPU上TensorRT推理
+./infer_demo BasicVSR_reds_x4 vsr_src.mp4 2 2
+```
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## BasicVSR C++接口
+
+### BasicVSR类
+
+```c++
+fastdeploy::vision::sr::BasicVSR(
+ const string& model_file,
+ const string& params_file = "",
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+BasicVSR模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> BasicVSR::Predict(std::vector& imgs, std::vector& results)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **imgs**: 输入视频帧序列,注意需为HWC,BGR格式
+> > * **results**: 视频超分结果,超分后的视频帧序列
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/basicvsr/python/README.md b/examples/vision/sr/basicvsr/python/README.md
index ac5dd97c6..6597b5192 100644
--- a/examples/vision/sr/basicvsr/python/README.md
+++ b/examples/vision/sr/basicvsr/python/README.md
@@ -1,61 +1,61 @@
-# BasicVSR Python部署示例
+English | [简体中文](README_CN.md)
+# BasicVSR Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-
-本目录下提供`infer.py`快速完成BasicVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+This directory provides an example in which `infer.py` quickly completes the deployment of BasicVSR on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following script:
```bash
-#下载部署示例代码
+# Download deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/sr/basicvsr/python
-# 下载BasicVSR模型文件和测试视频
+# Download BasicVSR model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar
tar -xvf BasicVSR_reds_x4.tar
wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
-# CPU推理
+# CPU inference
python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device cpu
-# GPU推理
+# GPU inference
python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Note: When running TensorRT inference for the first time, the model is serialized, which takes a while. Please be patient.)
python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu --use_trt True
```
-## BasicVSR Python接口
+## BasicVSR Python Interface
```python
fd.vision.sr.BasicVSR(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-BasicVSR模型加载和初始化,其中model_file和params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)
+BasicVSR model loading and initialization, among which model_file and params_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### predict函数
+### predict function
> ```python
> BasicVSR.predict(frames)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input a video frame sequence and output the super-resolution result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **frames**(list[np.ndarray]): 输入数据,注意需为HWC,BGR格式, frames为视频帧序列
+> > * **frames**(list[np.ndarray]): Input data, a video frame sequence in which each frame must be in HWC layout and BGR format
-> **返回** list[np.ndarray] 为超分后的视频帧序列
+> **Return** list[np.ndarray] is the video frame sequence after SR
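+
+As a minimal sketch of the API above (the model and video paths assume the files downloaded by the commands earlier; the file names inside the extracted model directory are assumptions), the following reads a few frames from the test video and runs them through the model:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Load BasicVSR (file names inside the extracted directory are assumed)
+model = fd.vision.sr.BasicVSR("BasicVSR_reds_x4/model.pdmodel",
+                              "BasicVSR_reds_x4/model.pdiparams")
+
+# Read a short frame sequence (HWC, BGR) from the test video
+cap = cv2.VideoCapture("vsr_src.mp4")
+frames = []
+while len(frames) < 2:          # matches --frame_num 2 in the commands above
+    ok, frame = cap.read()
+    if not ok:
+        break
+    frames.append(frame)
+cap.release()
+
+sr_frames = model.predict(frames)   # list[np.ndarray] of super-resolved frames
+print(len(sr_frames), sr_frames[0].shape)
+```
+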
-## 其它文档
+## Other Documents
-- [BasicVSR 模型介绍](..)
-- [BasicVSR C++部署](../cpp)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [BasicVSR Model Description](..)
+- [BasicVSR C++ Deployment](../cpp)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/basicvsr/python/README_CN.md b/examples/vision/sr/basicvsr/python/README_CN.md
new file mode 100644
index 000000000..6c1cf34c2
--- /dev/null
+++ b/examples/vision/sr/basicvsr/python/README_CN.md
@@ -0,0 +1,62 @@
+[English](README.md) | 简体中文
+# BasicVSR Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成BasicVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/sr/basicvsr/python
+
+# 下载BasicVSR模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar
+tar -xvf BasicVSR_reds_x4.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
+# CPU推理
+python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device cpu
+# GPU推理
+python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu --use_trt True
+```
+
+## BasicVSR Python接口
+
+```python
+fd.vision.sr.BasicVSR(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+BasicVSR模型加载和初始化,其中model_file和params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> BasicVSR.predict(frames)
+> ```
+>
+> 模型预测接口,输入视频帧序列,直接输出超分结果。
+>
+> **参数**
+>
+> > * **frames**(list[np.ndarray]): 输入数据,注意需为HWC,BGR格式, frames为视频帧序列
+
+> **返回** list[np.ndarray] 为超分后的视频帧序列
+
+
+## 其它文档
+
+- [BasicVSR 模型介绍](..)
+- [BasicVSR C++部署](../cpp)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/edvr/README.md b/examples/vision/sr/edvr/README.md
index 670bad8a1..9e5910cd7 100644
--- a/examples/vision/sr/edvr/README.md
+++ b/examples/vision/sr/edvr/README.md
@@ -1,28 +1,29 @@
-# EDVR模型部署
+English | [简体中文](README_CN.md)
+# EDVR Model Deployment
-## 模型版本说明
+## Model Version
- [PaddleGAN develop](https://github.com/PaddlePaddle/PaddleGAN)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+Now FastDeploy supports the deployment of the following models
- [EDVR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
-## 导出部署模型
+## Export Deployment Model
-在部署前,需要先将训练好的EDVR导出成部署模型,导出EDVR导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+Before deployment, export the trained EDVR to the deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) for detailed steps.
-| 模型 | 参数大小 | 精度 | 备注 |
+| Model | Parameter Size | Accuracy | Note |
|:--------------------------------------------------------------------------------|:-------|:----- | :------ |
| [EDVR](https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar) | 14.9MB | - |
-**注意**:非常不建议在没有独立显卡的设备上运行该模型
+**Note**: Running this model on a device without a discrete graphics card is strongly discouraged
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/sr/edvr/README_CN.md b/examples/vision/sr/edvr/README_CN.md
new file mode 100644
index 000000000..88b36c968
--- /dev/null
+++ b/examples/vision/sr/edvr/README_CN.md
@@ -0,0 +1,29 @@
+[English](README.md) | 简体中文
+# EDVR模型部署
+
+## 模型版本说明
+
+- [PaddleGAN develop](https://github.com/PaddlePaddle/PaddleGAN)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [EDVR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+
+
+## 导出部署模型
+
+在部署前,需要先将训练好的EDVR导出成部署模型,导出EDVR导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:--------------------------------------------------------------------------------|:-------|:----- | :------ |
+| [EDVR](https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar) | 14.9MB | - |
+
+**注意**:非常不建议在没有独立显卡的设备上运行该模型
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/sr/edvr/cpp/README.md b/examples/vision/sr/edvr/cpp/README.md
index 54ad66ee1..406813353 100644
--- a/examples/vision/sr/edvr/cpp/README.md
+++ b/examples/vision/sr/edvr/cpp/README.md
@@ -1,43 +1,44 @@
-# EDVR C++部署示例
+English | [简体中文](README_CN.md)
+# EDVR C++ Deployment Example
-本目录下提供`infer.cc`快速完成EDVR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in which `infer.cc` quickly completes the deployment of EDVR on CPU/GPU, as well as GPU deployment accelerated by TensorRT.
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上EDVR推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the EDVR inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载EDVR模型文件和测试视频
+# Download EDVR model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar
tar -xvf EDVR_M_wo_tsa_SRx4.tar
wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
-# CPU推理
+# CPU inference
./infer_demo EDVR_M_wo_tsa_SRx4 vsr_src.mp4 0 5
-# GPU推理
+# GPU inference
./infer_demo EDVR_M_wo_tsa_SRx4 vsr_src.mp4 1 5
-# GPU上TensorRT推理
+# TensorRT Inference on GPU
./infer_demo EDVR_M_wo_tsa_SRx4 vsr_src.mp4 2 5
```
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands work for Linux or MacOS. For how to use the SDK on Windows, please refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## EDVR C++接口
+## EDVR C++ Interface
-### EDVR类
+### EDVR Class
```c++
fastdeploy::vision::sr::EDVR(
@@ -47,28 +48,28 @@ fastdeploy::vision::sr::EDVR(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-EDVR模型加载和初始化,其中model_file为导出的Paddle模型格式。
+EDVR model loading and initialization, among which model_file is the exported Paddle model format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict Function
> ```c++
> EDVR::Predict(std::vector& imgs, std::vector& results)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input a video frame sequence and output the super-resolution result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **imgs**: 输入视频帧序列,注意需为HWC,BGR格式
-> > * **results**: 视频超分结果,超分后的视频帧序列
+> > * **imgs**: Input video frame sequence, where each frame must be in HWC layout and BGR format
+> > * **results**: Video super-resolution result, i.e., the video frame sequence after SR
-- [模型介绍](../../)
-- [Python部署](../python)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/edvr/cpp/README_CN.md b/examples/vision/sr/edvr/cpp/README_CN.md
new file mode 100644
index 000000000..b4f173c5a
--- /dev/null
+++ b/examples/vision/sr/edvr/cpp/README_CN.md
@@ -0,0 +1,75 @@
+[English](README.md) | 简体中文
+# EDVR C++部署示例
+
+本目录下提供`infer.cc`快速完成EDVR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上EDVR推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载EDVR模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar
+tar -xvf EDVR_M_wo_tsa_SRx4.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
+
+
+# CPU推理
+./infer_demo EDVR_M_wo_tsa_SRx4 vsr_src.mp4 0 5
+# GPU推理
+./infer_demo EDVR_M_wo_tsa_SRx4 vsr_src.mp4 1 5
+# GPU上TensorRT推理
+./infer_demo EDVR_M_wo_tsa_SRx4 vsr_src.mp4 2 5
+```
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## EDVR C++接口
+
+### EDVR类
+
+```c++
+fastdeploy::vision::sr::EDVR(
+ const string& model_file,
+ const string& params_file = "",
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+EDVR模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> EDVR::Predict(std::vector& imgs, std::vector& results)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **imgs**: 输入视频帧序列,注意需为HWC,BGR格式
+> > * **results**: 视频超分结果,超分后的视频帧序列
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/edvr/python/README.md b/examples/vision/sr/edvr/python/README.md
index 8875045df..50ca71c89 100644
--- a/examples/vision/sr/edvr/python/README.md
+++ b/examples/vision/sr/edvr/python/README.md
@@ -1,61 +1,62 @@
-# EDVR Python部署示例
+English | [简体中文](README_CN.md)
+# EDVR Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, two steps require confirmation
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成EDVR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides an example in which `infer.py` quickly completes the deployment of EDVR on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following script:
```bash
-#下载部署示例代码
+# Download deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/sr/edvr/python
-# 下载VSR模型文件和测试视频
+# Download VSR model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar
tar -xvf EDVR_M_wo_tsa_SRx4.tar
wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
-# CPU推理
+# CPU inference
python infer.py --model EDVR_M_wo_tsa_SRx4 --video vsr_src.mp4 --frame_num 5 --device cpu
-# GPU推理
+# GPU inference
python infer.py --model EDVR_M_wo_tsa_SRx4 --video vsr_src.mp4 --frame_num 5 --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Note: When running TensorRT inference for the first time, the model is serialized, which takes a while. Please be patient.)
python infer.py --model EDVR_M_wo_tsa_SRx4 --video vsr_src.mp4 --frame_num 5 --device gpu --use_trt True
```
-## EDVR Python接口
+## EDVR Python Interface
```python
fd.vision.sr.EDVR(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-EDVR模型加载和初始化,其中model_file和params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)
+EDVR model loading and initialization, among which model_file and params_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### predict函数
+### predict function
> ```python
> EDVR.predict(frames)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input a video frame sequence and output the super-resolution result directly.
>
-> **参数**
+> **Parameter**
>
-> > * **frames**(list[np.ndarray]): 输入数据,注意需为HWC,BGR格式, frames为视频帧序列
+> > * **frames**(list[np.ndarray]): Input data, a video frame sequence in which each frame must be in HWC layout and BGR format
-> **返回** list[np.ndarray] 为超分后的视频帧序列
+> **Return** list[np.ndarray] is the video frame sequence after SR
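+
+Along the same lines, here is a minimal sketch (the paths and in-package file names are assumptions based on the commands above); in this version the super-resolved frames are also written back to a video file with OpenCV:
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Load EDVR (file names inside the extracted directory are assumed)
+model = fd.vision.sr.EDVR("EDVR_M_wo_tsa_SRx4/model.pdmodel",
+                          "EDVR_M_wo_tsa_SRx4/model.pdiparams")
+
+# Read a 5-frame window (HWC, BGR), matching --frame_num 5 above
+cap = cv2.VideoCapture("vsr_src.mp4")
+frames = []
+while len(frames) < 5:
+    ok, frame = cap.read()
+    if not ok:
+        break
+    frames.append(frame)
+cap.release()
+
+sr_frames = model.predict(frames)   # list[np.ndarray] after super resolution
+
+# Save the output frames as a video
+h, w = sr_frames[0].shape[:2]
+writer = cv2.VideoWriter("vsr_out.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25, (w, h))
+for f in sr_frames:
+    writer.write(f)
+writer.release()
+```
+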
-## 其它文档
+## Other Documents
-- [EDVR 模型介绍](..)
-- [EDVR C++部署](../cpp)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [EDVR Model Description](..)
+- [EDVR C++ Deployment](../cpp)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/edvr/python/README_CN.md b/examples/vision/sr/edvr/python/README_CN.md
new file mode 100644
index 000000000..605ab2fc2
--- /dev/null
+++ b/examples/vision/sr/edvr/python/README_CN.md
@@ -0,0 +1,62 @@
+[English](README.md) | 简体中文
+# EDVR Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成EDVR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/sr/edvr/python
+
+# 下载VSR模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar
+tar -xvf EDVR_M_wo_tsa_SRx4.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
+# CPU推理
+python infer.py --model EDVR_M_wo_tsa_SRx4 --video vsr_src.mp4 --frame_num 5 --device cpu
+# GPU推理
+python infer.py --model EDVR_M_wo_tsa_SRx4 --video vsr_src.mp4 --frame_num 5 --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model EDVR_M_wo_tsa_SRx4 --video vsr_src.mp4 --frame_num 5 --device gpu --use_trt True
+```
+
+## EDVR Python接口
+
+```python
+fd.vision.sr.EDVR(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+EDVR模型加载和初始化,其中model_file和params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> EDVR.predict(frames)
+> ```
+>
+> 模型预测接口,输入视频帧序列,直接输出超分结果。
+>
+> **参数**
+>
+> > * **frames**(list[np.ndarray]): 输入数据,注意需为HWC,BGR格式, frames为视频帧序列
+
+> **返回** list[np.ndarray] 为超分后的视频帧序列
+
+
+## 其它文档
+
+- [EDVR 模型介绍](..)
+- [EDVR C++部署](../cpp)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/ppmsvsr/README.md b/examples/vision/sr/ppmsvsr/README.md
index a11e8101e..3719a3744 100644
--- a/examples/vision/sr/ppmsvsr/README.md
+++ b/examples/vision/sr/ppmsvsr/README.md
@@ -1,27 +1,28 @@
-# PP-MSVSR模型部署
+English | [简体中文](README_CN.md)
+# PP-MSVSR Model Deployment
-## 模型版本说明
+## Model Version
- [PaddleGAN develop](https://github.com/PaddlePaddle/PaddleGAN)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+Now FastDeploy supports the deployment of the following models
- [PP-MSVSR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
-## 导出部署模型
+## Export Deployment Model
-在部署前,需要先将训练好的PP-MSVSR导出成部署模型,导出PP-MSVSR导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+Before deployment, export the trained PP-MSVSR to the deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) for detailed steps.
-| 模型 | 参数大小 | 精度 | 备注 |
+| Model | Parameter Size | Accuracy | Note |
|:----------------------------------------------------------------------------|:------|:----- | :------ |
| [PP-MSVSR](https://bj.bcebos.com/paddlehub/fastdeploy/PP-MSVSR_reds_x4.tar) | 8.8MB | - |
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/sr/ppmsvsr/README_CN.md b/examples/vision/sr/ppmsvsr/README_CN.md
new file mode 100644
index 000000000..a62411cf4
--- /dev/null
+++ b/examples/vision/sr/ppmsvsr/README_CN.md
@@ -0,0 +1,28 @@
+[English](README.md) | 简体中文
+# PP-MSVSR模型部署
+
+## 模型版本说明
+
+- [PaddleGAN develop](https://github.com/PaddlePaddle/PaddleGAN)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [PP-MSVSR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+
+
+## 导出部署模型
+
+在部署前,需要先将训练好的PP-MSVSR导出成部署模型,导出PP-MSVSR导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)。
+
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:----------------------------------------------------------------------------|:------|:----- | :------ |
+| [PP-MSVSR](https://bj.bcebos.com/paddlehub/fastdeploy/PP-MSVSR_reds_x4.tar) | 8.8MB | - |
+
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/sr/ppmsvsr/cpp/README.md b/examples/vision/sr/ppmsvsr/cpp/README.md
index 712264a36..7e1c66a3b 100644
--- a/examples/vision/sr/ppmsvsr/cpp/README.md
+++ b/examples/vision/sr/ppmsvsr/cpp/README.md
@@ -1,43 +1,44 @@
-# VSR C++部署示例
+English | [简体中文](README_CN.md)
+# VSR C++ Deployment Example
-本目录下提供`infer.cc`快速完成PP-MSVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in `infer.cc` that quickly finishes the deployment of PP-MSVSR on CPU/GPU, as well as on GPU with TensorRT acceleration.
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-以Linux上 PP-MSVSR 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the PP-MSVSR inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载PP-MSVSR模型文件和测试视频
+# Download PP-MSVSR model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-MSVSR_reds_x4.tar
tar -xvf PP-MSVSR_reds_x4.tar
wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
-# CPU推理
+# CPU inference
./infer_demo PP-MSVSR_reds_x4 vsr_src.mp4 0 2
-# GPU推理
+# GPU inference
./infer_demo PP-MSVSR_reds_x4 vsr_src.mp4 1 2
-# GPU上TensorRT推理
+# TensorRT Inference on GPU
./infer_demo PP-MSVSR_reds_x4 vsr_src.mp4 2 2
```
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands only work on Linux or MacOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## PP-MSVSR C++接口
+## PP-MSVSR C++ Interface
-### PPMSVSR类
+### PPMSVSR Class
```c++
fastdeploy::vision::sr::PPMSVSR(
@@ -47,28 +48,28 @@ fastdeploy::vision::sr::PPMSVSR(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-PP-MSVSR模型加载和初始化,其中model_file为导出的Paddle模型格式。
+PP-MSVSR model loading and initialization, among which model_file is the exported Paddle model format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict Function
> ```c++
> PPMSVSR::Predict(std::vector<cv::Mat>& imgs, std::vector<cv::Mat>& results)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input video frames and directly output the super-resolution results.
>
-> **参数**
+> **Parameter**
>
-> > * **imgs**: 输入视频帧序列,注意需为HWC,BGR格式
-> > * **results**: 视频超分结果,超分后的视频帧序列
+> > * **imgs**: Input video frame sequence; each frame must be in HWC, BGR format
+> > * **results**: Video super-resolution results: the video frame sequence after SR
-- [模型介绍](../../)
-- [Python部署](../python)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/ppmsvsr/cpp/README_CN.md b/examples/vision/sr/ppmsvsr/cpp/README_CN.md
new file mode 100644
index 000000000..4fcff3aea
--- /dev/null
+++ b/examples/vision/sr/ppmsvsr/cpp/README_CN.md
@@ -0,0 +1,75 @@
+[English](README.md) | 简体中文
+# VSR C++部署示例
+
+本目录下提供`infer.cc`快速完成PP-MSVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上 PP-MSVSR 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载PP-MSVSR模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-MSVSR_reds_x4.tar
+tar -xvf PP-MSVSR_reds_x4.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
+
+
+# CPU推理
+./infer_demo PP-MSVSR_reds_x4 vsr_src.mp4 0 2
+# GPU推理
+./infer_demo PP-MSVSR_reds_x4 vsr_src.mp4 1 2
+# GPU上TensorRT推理
+./infer_demo PP-MSVSR_reds_x4 vsr_src.mp4 2 2
+```
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## PP-MSVSR C++接口
+
+### PPMSVSR类
+
+```c++
+fastdeploy::vision::sr::PPMSVSR(
+ const string& model_file,
+ const string& params_file = "",
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+PP-MSVSR模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> PPMSVSR::Predict(std::vector<cv::Mat>& imgs, std::vector<cv::Mat>& results)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **imgs**: 输入视频帧序列,注意需为HWC,BGR格式
+> > * **results**: 视频超分结果,超分后的视频帧序列
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/ppmsvsr/python/README.md b/examples/vision/sr/ppmsvsr/python/README.md
index 66eea35f7..cf4ef5ed1 100644
--- a/examples/vision/sr/ppmsvsr/python/README.md
+++ b/examples/vision/sr/ppmsvsr/python/README.md
@@ -1,61 +1,61 @@
-# PP-MSVSR Python部署示例
+English | [简体中文](README_CN.md)
+# PP-MSVSR Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-
-本目录下提供`infer.py`快速完成PP-MSVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+This directory provides an example in `infer.py` that quickly finishes the deployment of PP-MSVSR on CPU/GPU, as well as on GPU with TensorRT acceleration. The script is as follows
```bash
-#下载部署示例代码
+# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/sr/ppmsvsr/python
-# 下载VSR模型文件和测试视频
+# Download VSR model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-MSVSR_reds_x4.tar
tar -xvf PP-MSVSR_reds_x4.tar
wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
-# CPU推理
+# CPU inference
python infer.py --model PP-MSVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device cpu
-# GPU推理
+# GPU inference
python infer.py --model PP-MSVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Attention: It is somewhat time-consuming for the operation of model serialization when running TensorRT inference for the first time. Please be patient.)
python infer.py --model PP-MSVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu --use_trt True
```
-## VSR Python接口
+## VSR Python Interface
```python
fd.vision.sr.PPMSVSR(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-PP-MSVSR模型加载和初始化,其中model_file和params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)
+PP-MSVSR model loading and initialization, among which model_file and params_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
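+
+As a quick illustration (not from the original README), a `RuntimeOption` can be passed to pick the device and backend; the option method names and the model file names inside `PP-MSVSR_reds_x4` below are assumptions based on the common FastDeploy Python API:
+
+```python
+import fastdeploy as fd
+
+option = fd.RuntimeOption()
+option.use_gpu()            # run on GPU; omit for CPU (assumed API)
+option.use_trt_backend()    # optional: TensorRT backend (assumed API)
+
+# Placeholder file names inside the downloaded PP-MSVSR_reds_x4 directory.
+model = fd.vision.sr.PPMSVSR("PP-MSVSR_reds_x4/model.pdmodel",
+                             "PP-MSVSR_reds_x4/model.pdiparams",
+                             runtime_option=option)
+```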
-### predict函数
+### predict function
> ```python
> PPMSVSR.predict(frames)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input video frames and directly output the super-resolution results.
>
-> **参数**
+> **Parameter**
>
-> > * **frames**(list[np.ndarray]): 输入数据,注意需为HWC,BGR格式, frames为视频帧序列
+> > * **frames**(list[np.ndarray]): Input data; each frame must be in HWC, BGR format. `frames` is the sequence of video frames
-> **返回** list[np.ndarray] 为超分后的视频帧序列
+> **Return** list[np.ndarray] is the video frame sequence after SR
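+
+The following sketch (an illustration, not from the original README) shows one way to feed video frames to `predict`; the model file names are placeholders for the files inside the downloaded `PP-MSVSR_reds_x4` directory:
+
+```python
+import cv2
+import fastdeploy as fd
+
+model = fd.vision.sr.PPMSVSR("PP-MSVSR_reds_x4/model.pdmodel",
+                             "PP-MSVSR_reds_x4/model.pdiparams")
+
+cap = cv2.VideoCapture("vsr_src.mp4")
+frames = []
+while True:
+    ok, frame = cap.read()          # HWC, BGR
+    if not ok:
+        break
+    frames.append(frame)
+    if len(frames) == 2:            # process a small window of frames at a time
+        sr_frames = model.predict(frames)
+        print(sr_frames[0].shape)   # super-resolved frame, HWC, BGR
+        frames.clear()
+cap.release()
+```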
-## 其它文档
+## Other Documents
-- [PP-MSVSR 模型介绍](..)
-- [PP-MSVSR C++部署](../cpp)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-MSVSR Model Description](..)
+- [PP-MSVSR C++ Deployment](../cpp)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/sr/ppmsvsr/python/README_CN.md b/examples/vision/sr/ppmsvsr/python/README_CN.md
new file mode 100644
index 000000000..ba793011d
--- /dev/null
+++ b/examples/vision/sr/ppmsvsr/python/README_CN.md
@@ -0,0 +1,62 @@
+[English](README.md) | 简体中文
+# PP-MSVSR Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成PP-MSVSR在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/sr/ppmsvsr/python
+
+# 下载VSR模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-MSVSR_reds_x4.tar
+tar -xvf PP-MSVSR_reds_x4.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
+# CPU推理
+python infer.py --model PP-MSVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device cpu
+# GPU推理
+python infer.py --model PP-MSVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model PP-MSVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu --use_trt True
+```
+
+## VSR Python接口
+
+```python
+fd.vision.sr.PPMSVSR(model_file, params_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+PP-MSVSR模型加载和初始化,其中model_file和params_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> PPMSVSR.predict(frames)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **frames**(list[np.ndarray]): 输入数据,注意需为HWC,BGR格式, frames为视频帧序列
+
+> **返回** list[np.ndarray] 为超分后的视频帧序列
+
+
+## 其它文档
+
+- [PP-MSVSR 模型介绍](..)
+- [PP-MSVSR C++部署](../cpp)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/tracking/pptracking/README.md b/examples/vision/tracking/pptracking/README.md
index 35b9d7173..e3e5ff949 100644
--- a/examples/vision/tracking/pptracking/README.md
+++ b/examples/vision/tracking/pptracking/README.md
@@ -1,35 +1,36 @@
-# PP-Tracking模型部署
+English | [简体中文](README_CN.md)
+# PP-Tracking Model Deployment
-## 模型版本说明
+## Model Description
- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)
-## 支持模型列表
+## List of Supported Models
-目前FastDeploy支持如下模型的部署
+Now FastDeploy supports the deployment of the following models
-- [PP-Tracking系列模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/mot)
+- [PP-Tracking models](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/mot)
-## 导出部署模型
+## Export Deployment Models
-在部署前,需要先将训练好的PP-Tracking导出成部署模型,导出PPTracking导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/cpp/README.md)。
+Before deployment, the trained PP-Tracking needs to be exported into the deployment model. Refer to [Export Model](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/cpp/README.md) for more details.
-## 下载预训练模型
+## Download Pre-trained Models
-为了方便开发者的测试,下面提供了PP-Tracking行人跟踪垂类模型,开发者可直接下载使用,更多模型参见[PPTracking](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/README_cn.md)。
+For developers' testing, a PP-Tracking pedestrian tracking model (a vertical-domain model) is provided below. Developers can download and use it directly. Refer to [PPTracking](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/README_cn.md) for more models.
-| 模型 | 参数大小 | 精度 | 备注 |
+| Model | Parameter Size | Accuracy | Note |
|:-----------------------------------------------------------------------------------------------------|:-------|:----- | :------ |
| [PP-Tracking](https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz) | 51.2MB | - |
-**说明**
-- 仅支持JDE模型(JDE,FairMOT,MCFairMOT);
-- 目前暂不支持SDE模型的部署,待PaddleDetection官方更新SED部署代码后,对SDE模型进行支持。
+**Notes**
+- Only JDE-paradigm models (JDE, FairMOT, MCFairMOT) are supported;
+- SDE model deployment is not supported at present. It will be supported after PaddleDetection officially updates the SDE deployment code.
-## 详细部署文档
+## Detailed Deployment Tutorials
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/tracking/pptracking/README_CN.md b/examples/vision/tracking/pptracking/README_CN.md
new file mode 100644
index 000000000..6549aee62
--- /dev/null
+++ b/examples/vision/tracking/pptracking/README_CN.md
@@ -0,0 +1,36 @@
+[English](README.md) | 简体中文
+# PP-Tracking模型部署
+
+## 模型版本说明
+
+- [PaddleDetection release/2.5](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5)
+
+## 支持模型列表
+
+目前FastDeploy支持如下模型的部署
+
+- [PP-Tracking系列模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/mot)
+
+
+## 导出部署模型
+
+在部署前,需要先将训练好的PP-Tracking导出成部署模型,导出PPTracking导出模型步骤,参考文档[导出模型](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/cpp/README.md)。
+
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了PP-Tracking行人跟踪垂类模型,开发者可直接下载使用,更多模型参见[PPTracking](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/README_cn.md)。
+
+| 模型 | 参数大小 | 精度 | 备注 |
+|:-----------------------------------------------------------------------------------------------------|:-------|:----- | :------ |
+| [PP-Tracking](https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz) | 51.2MB | - |
+
+**说明**
+- 仅支持JDE模型(JDE,FairMOT,MCFairMOT);
+- 目前暂不支持SDE模型的部署,待PaddleDetection官方更新SED部署代码后,对SDE模型进行支持。
+
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/tracking/pptracking/cpp/README.md b/examples/vision/tracking/pptracking/cpp/README.md
index af26e1fff..189639a4b 100644
--- a/examples/vision/tracking/pptracking/cpp/README.md
+++ b/examples/vision/tracking/pptracking/cpp/README.md
@@ -1,43 +1,43 @@
-# PP-Tracking C++部署示例
+English | [简体中文](README_CN.md)
+# PP-Tracking C++ Deployment Example
-本目录下提供`infer.cc`快速完成PP-Tracking在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+This directory provides an example in `infer.cc` that quickly finishes the deployment of PP-Tracking on CPU/GPU, as well as on GPU with TensorRT acceleration.
+Before deployment, confirm the following two steps
-在部署前,需确认以下两个步骤
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-
-以Linux上 PP-Tracking 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+Taking the PP-Tracking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-# 下载PP-Tracking模型文件和测试视频
+# Download PP-Tracking model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
tar -xvf fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/person.mp4
-# CPU推理
+# CPU inference
./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 0
-# GPU推理
+# GPU inference
./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 1
-# GPU上TensorRT推理
+# TensorRT Inference on GPU
./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 2
```
-以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+The above commands only work on Linux or MacOS. For how to use the FastDeploy C++ SDK on Windows, refer to:
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
-## PP-Tracking C++接口
+## PP-Tracking C++ Interface
-### PPTracking类
+### PPTracking Class
```c++
fastdeploy::vision::tracking::PPTracking(
@@ -48,31 +48,31 @@ fastdeploy::vision::tracking::PPTracking(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-PP-Tracking模型加载和初始化,其中model_file为导出的Paddle模型格式。
+PP-Tracking model loading and initialization, among which model_file is the exported Paddle model format.
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-#### Predict函数
+#### Predict Function
> ```c++
> PPTracking::Predict(cv::Mat* im, MOTResult* result)
> ```
>
-> 模型预测接口,输入图像直接输出检测结果。
+> Model prediction interface. Input an image and directly output the tracking results.
>
-> **参数**
+> **Parameter**
>
-> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 检测结果,包括检测框,跟踪id,各个框的置信度,对象类别id,MOTResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **im**: Input image, which must be in HWC, BGR format
+> > * **result**: Detection results, including detection box, tracking id, confidence of each box, and object class id. Refer to [visual model prediction results](../../../../../docs/api/vision_results/) for the description of MOTResult
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/tracking/pptracking/cpp/README_CN.md b/examples/vision/tracking/pptracking/cpp/README_CN.md
new file mode 100644
index 000000000..bd21ab13c
--- /dev/null
+++ b/examples/vision/tracking/pptracking/cpp/README_CN.md
@@ -0,0 +1,79 @@
+[English](README.md) | 简体中文
+# PP-Tracking C++部署示例
+
+本目录下提供`infer.cc`快速完成PP-Tracking在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+以Linux上 PP-Tracking 推理为例,在本目录执行如下命令即可完成编译测试,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载PP-Tracking模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
+tar -xvf fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/person.mp4
+
+
+# CPU推理
+./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 0
+# GPU推理
+./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 1
+# GPU上TensorRT推理
+./infer_demo fairmot_hrnetv2_w18_dlafpn_30e_576x320 person.mp4 2
+```
+
+以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## PP-Tracking C++接口
+
+### PPTracking类
+
+```c++
+fastdeploy::vision::tracking::PPTracking(
+ const string& model_file,
+        const string& params_file,
+ const string& config_file,
+ const RuntimeOption& runtime_option = RuntimeOption(),
+ const ModelFormat& model_format = ModelFormat::PADDLE)
+```
+
+PP-Tracking模型加载和初始化,其中model_file为导出的Paddle模型格式。
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+#### Predict函数
+
+> ```c++
+> PPTracking::Predict(cv::Mat* im, MOTResult* result)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **im**: 输入图像,注意需为HWC,BGR格式
+> > * **result**: 检测结果,包括检测框,跟踪id,各个框的置信度,对象类别id,MOTResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/tracking/pptracking/python/README.md b/examples/vision/tracking/pptracking/python/README.md
index 48f8300c9..318a75cbe 100644
--- a/examples/vision/tracking/pptracking/python/README.md
+++ b/examples/vision/tracking/pptracking/python/README.md
@@ -1,70 +1,71 @@
-# PP-Tracking Python部署示例
+English | [简体中文](README_CN.md)
+# PP-Tracking Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, confirm the following two steps
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`infer.py`快速完成PP-Tracking在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+This directory provides an example in `infer.py` that quickly finishes the deployment of PP-Tracking on CPU/GPU, as well as on GPU with TensorRT acceleration. The script is as follows
```bash
-#下载部署示例代码
+# Download deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/tracking/pptracking/python
-# 下载PP-Tracking模型文件和测试视频
+# Download PP-Tracking model files and test videos
wget https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
tar -xvf fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/person.mp4
-# CPU推理
+# CPU inference
python infer.py --model fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video person.mp4 --device cpu
-# GPU推理
+# GPU inference
python infer.py --model fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video person.mp4 --device gpu
-# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+# TensorRT inference on GPU (Attention: It is somewhat time-consuming for the operation of model serialization when running TensorRT inference for the first time. Please be patient.)
python infer.py --model fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video person.mp4 --device gpu --use_trt True
```
-## PP-Tracking Python接口
+## PP-Tracking Python Interface
```python
fd.vision.tracking.PPTracking(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-PP-Tracking模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/cpp/README.md)
+PP-Tracking model loading and initialization, among which model_file, params_file, and config_file are the Paddle inference files exported from the training model. Refer to [Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/cpp/README.md) for more information
-**参数**
+**Parameter**
-> * **model_file**(str): 模型文件路径
-> * **params_file**(str): 参数文件路径
-> * **config_file**(str): 推理部署配置文件
-> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
-> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+> * **model_file**(str): Model file path
+> * **params_file**(str): Parameter file path
+> * **config_file**(str): Inference deployment configuration file
+> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, which is the default configuration
+> * **model_format**(ModelFormat): Model format. Paddle format by default
-### predict函数
+### predict function
> ```python
> PPTracking.predict(frame)
> ```
>
-> 模型预测结口,输入图像直接输出检测结果。
+> Model prediction interface. Input an image and directly output the tracking results.
>
-> **参数**
+> **Parameter**
>
-> > * **frame**(np.ndarray): 输入数据,注意需为HWC,BGR格式,frame为视频帧如:_,frame=cap.read()得到
+> > * **frame**(np.ndarray): Input data, which must be in HWC, BGR format. The video frame is obtained through, for example, `_, frame = cap.read()`
-> **返回**
+> **Return**
>
-> > 返回`fastdeploy.vision.MOTResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > Return `fastdeploy.vision.MOTResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of the structure
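+
+A minimal usage sketch (an illustration, not from the original README); the file names inside the downloaded model directory and the config file name `infer_cfg.yml` are assumptions:
+
+```python
+import cv2
+import fastdeploy as fd
+
+model_dir = "fairmot_hrnetv2_w18_dlafpn_30e_576x320"
+model = fd.vision.tracking.PPTracking(model_dir + "/model.pdmodel",
+                                      model_dir + "/model.pdiparams",
+                                      model_dir + "/infer_cfg.yml")
+
+cap = cv2.VideoCapture("person.mp4")
+while True:
+    ok, frame = cap.read()              # HWC, BGR video frame
+    if not ok:
+        break
+    result = model.predict(frame)       # returns a fastdeploy.vision.MOTResult
+    print(result)
+cap.release()
+```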
-### 类成员属性
-#### 预处理参数
-用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+### Class Member Properties
+#### Pre-processing Parameters
+Users can modify the following pre-processing parameters according to their actual needs, which affect the final inference and deployment results
-## 其它文档
+## Other Documents
-- [PP-Tracking 模型介绍](..)
-- [PP-Tracking C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [PP-Tracking Model Description](..)
+- [PP-Tracking C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/tracking/pptracking/python/README_CN.md b/examples/vision/tracking/pptracking/python/README_CN.md
new file mode 100644
index 000000000..6e1d5e89c
--- /dev/null
+++ b/examples/vision/tracking/pptracking/python/README_CN.md
@@ -0,0 +1,71 @@
+[English](README.md) | 简体中文
+# PP-Tracking Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`infer.py`快速完成PP-Tracking在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
+
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/tracking/pptracking/python
+
+# 下载PP-Tracking模型文件和测试视频
+wget https://bj.bcebos.com/paddlehub/fastdeploy/fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
+tar -xvf fairmot_hrnetv2_w18_dlafpn_30e_576x320.tgz
+wget https://bj.bcebos.com/paddlehub/fastdeploy/person.mp4
+# CPU推理
+python infer.py --model fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video person.mp4 --device cpu
+# GPU推理
+python infer.py --model fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video person.mp4 --device gpu
+# GPU上使用TensorRT推理 (注意:TensorRT推理第一次运行,有序列化模型的操作,有一定耗时,需要耐心等待)
+python infer.py --model fairmot_hrnetv2_w18_dlafpn_30e_576x320 --video person.mp4 --device gpu --use_trt True
+```
+
+## PP-Tracking Python接口
+
+```python
+fd.vision.tracking.PPTracking(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+PP-Tracking模型加载和初始化,其中model_file, params_file以及config_file为训练模型导出的Paddle inference文件,具体请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pptracking/cpp/README.md)
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径
+> * **config_file**(str): 推理部署配置文件
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为Paddle格式
+
+### predict函数
+
+> ```python
+> PPTracking.predict(frame)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **frame**(np.ndarray): 输入数据,注意需为HWC,BGR格式,frame为视频帧如:_,frame=cap.read()得到
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.MOTResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+
+
+## 其它文档
+
+- [PP-Tracking 模型介绍](..)
+- [PP-Tracking C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
From 1135d33dd71b8767761098f8acf2058d32d54c92 Mon Sep 17 00:00:00 2001
From: charl-u <115439700+charl-u@users.noreply.github.com>
Date: Fri, 6 Jan 2023 09:35:12 +0800
Subject: [PATCH 2/2] [Doc]Add English version of documents in examples/
(#1042)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* First commit
* Add a missing translation
* deleted: docs/en/quantize.md
* Update one translation
* Update en version
* Update one translation in code
* Standardize one writing
* Standardize one writing
* Update some en version
* Fix a grammer problem
* Update en version for api/vision result
* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop
* Checkout the link in README in vision_results/ to the en documents
* Modify a title
* Add link to serving/docs/
* Finish translation of demo.md
* Update english version of serving/docs/
* Update title of readme
* Update some links
* Modify a title
* Update some links
* Update en version of java android README
* Modify some titles
* Modify some titles
* Modify some titles
* modify article to document
* update some english version of documents in examples
* Add english version of documents in examples/visions
* Sync to current branch
* Add english version of documents in examples
---
docs/cn/build_and_install/sophgo.md | 1 +
docs/en/build_and_install/sophgo.md | 2 +-
docs/en/quick_start/runtime/cpp.md | 2 +-
docs/en/quick_start/runtime/python.md | 4 +-
examples/application/js/converter/README.md | 3 +-
.../application/js/converter/README_CN.md | 30 +++
examples/application/js/converter/RNN.md | 3 +-
examples/application/js/converter/RNN_EN.md | 80 ++++++++
.../paddlejs-models/humanseg_gpu/README.md | 2 +-
examples/audio/pp-tts/README.md | 11 +-
examples/audio/pp-tts/README_CN.md | 10 +
.../multimodal/stable_diffusion/README.md | 59 +++---
.../multimodal/stable_diffusion/README_CN.md | 64 +++++++
.../multimodal/stable_diffusion/cpp/README.md | 15 +-
.../stable_diffusion/cpp/README_CN.md | 13 ++
.../multimodal/stable_diffusion/export.md | 1 +
.../multimodal/stable_diffusion/export_EN.md | 106 +++++++++++
examples/runtime/README.md | 13 +-
examples/runtime/README_CN.md | 35 ++++
examples/runtime/cpp/README.md | 45 ++---
examples/runtime/cpp/README_CN.md | 122 ++++++++++++
examples/runtime/python/README.md | 29 +--
examples/runtime/python/README_CN.md | 54 ++++++
examples/text/ernie-3.0/cpp/README.md | 2 +-
examples/text/ernie-3.0/serving/README.md | 30 +--
.../models/ernie_seqcls_model/1/README.md | 3 +-
.../models/ernie_seqcls_model/1/README_CN.md | 2 +
.../models/ernie_tokencls_model/1/README.md | 3 +-
.../ernie_tokencls_model/1/README_CN.md | 2 +
examples/text/uie/cpp/README.md | 32 ++--
examples/text/uie/python/README.md | 30 +--
examples/text/uie/serving/README.md | 4 +-
.../paddleclas/a311d/cpp/README_CN.md | 0
.../paddleclas/sophgo/README.md | 49 ++---
.../paddleclas/sophgo/README_CN.md | 85 +++++++++
.../paddleclas/sophgo/cpp/README.md | 49 ++---
.../paddleclas/sophgo/cpp/README_CN.md | 62 ++++++
.../paddleclas/sophgo/python/README.md | 23 +--
.../paddleclas/sophgo/python/README_CN.md | 30 +++
.../vision/detection/yolov5/sophgo/README.md | 47 ++---
.../detection/yolov5/sophgo/README_CN.md | 76 ++++++++
.../detection/yolov5/sophgo/cpp/README.md | 45 ++---
.../detection/yolov5/sophgo/cpp/README_CN.md | 57 ++++++
.../detection/yolov5/sophgo/python/README.md | 23 +--
.../yolov5/sophgo/python/README_CN.md | 47 +++++
.../paddleseg/a311d/cpp/README.md | 45 ++---
.../paddleseg/a311d/cpp/README_CN.md | 59 ++++++
.../segmentation/paddleseg/android/README.md | 157 ++++++++--------
.../paddleseg/android/README_CN.md | 177 ++++++++++++++++++
.../segmentation/paddleseg/quantize/README.md | 53 +++---
.../paddleseg/quantize/README_CN.md | 37 ++++
.../paddleseg/quantize/cpp/README.md | 29 +--
.../paddleseg/quantize/cpp/README_CN.md | 32 ++++
.../paddleseg/quantize/python/README.md | 27 +--
.../paddleseg/quantize/python/README_CN.md | 29 +++
.../segmentation/paddleseg/rknpu2/README.md | 39 ++--
.../paddleseg/rknpu2/README_CN.md | 34 ++++
.../paddleseg/rknpu2/cpp/README.md | 57 +++---
.../paddleseg/rknpu2/cpp/README_CN.md | 73 ++++++++
.../paddleseg/rknpu2/pp_humanseg.md | 1 +
.../paddleseg/rknpu2/pp_humanseg_EN.md | 81 ++++++++
.../paddleseg/rknpu2/python/README.md | 32 ++--
.../paddleseg/rknpu2/python/README_CN.md | 36 ++++
.../paddleseg/rv1126/cpp/README.md | 45 ++---
.../paddleseg/rv1126/cpp/README_CN.md | 57 ++++++
.../segmentation/paddleseg/sophgo/README.md | 51 ++---
.../paddleseg/sophgo/README_CN.md | 90 +++++++++
.../paddleseg/sophgo/cpp/README.md | 45 ++---
.../paddleseg/sophgo/cpp/README_CN.md | 57 ++++++
.../paddleseg/sophgo/python/README.md | 25 +--
.../paddleseg/sophgo/python/README_CN.md | 27 +++
.../segmentation/paddleseg/web/README.md | 41 ++--
.../segmentation/paddleseg/web/README_CN.md | 44 +++++
java/android/README.md | 2 +-
74 files changed, 2312 insertions(+), 575 deletions(-)
create mode 100644 examples/application/js/converter/README_CN.md
create mode 100644 examples/application/js/converter/RNN_EN.md
create mode 100644 examples/audio/pp-tts/README_CN.md
create mode 100644 examples/multimodal/stable_diffusion/README_CN.md
create mode 100644 examples/multimodal/stable_diffusion/cpp/README_CN.md
create mode 100644 examples/multimodal/stable_diffusion/export_EN.md
create mode 100644 examples/runtime/README_CN.md
create mode 100644 examples/runtime/cpp/README_CN.md
create mode 100644 examples/runtime/python/README_CN.md
create mode 100644 examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README_CN.md
create mode 100644 examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README_CN.md
create mode 100644 examples/vision/classification/paddleclas/a311d/cpp/README_CN.md
create mode 100644 examples/vision/classification/paddleclas/sophgo/README_CN.md
create mode 100644 examples/vision/classification/paddleclas/sophgo/cpp/README_CN.md
create mode 100644 examples/vision/classification/paddleclas/sophgo/python/README_CN.md
create mode 100644 examples/vision/detection/yolov5/sophgo/README_CN.md
create mode 100644 examples/vision/detection/yolov5/sophgo/cpp/README_CN.md
create mode 100644 examples/vision/detection/yolov5/sophgo/python/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/a311d/cpp/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/android/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/quantize/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/quantize/cpp/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/quantize/python/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/rknpu2/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/rknpu2/cpp/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg_EN.md
create mode 100644 examples/vision/segmentation/paddleseg/rknpu2/python/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/rv1126/cpp/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/sophgo/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/sophgo/cpp/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/sophgo/python/README_CN.md
create mode 100644 examples/vision/segmentation/paddleseg/web/README_CN.md
diff --git a/docs/cn/build_and_install/sophgo.md b/docs/cn/build_and_install/sophgo.md
index f27432e71..f9e70c629 100644
--- a/docs/cn/build_and_install/sophgo.md
+++ b/docs/cn/build_and_install/sophgo.md
@@ -1,3 +1,4 @@
+[English](../../en/build_and_install/sophgo.md) | 简体中文
# SOPHGO 部署库编译
## SOPHGO 环境准备
diff --git a/docs/en/build_and_install/sophgo.md b/docs/en/build_and_install/sophgo.md
index 08d18122c..5741680dc 100644
--- a/docs/en/build_and_install/sophgo.md
+++ b/docs/en/build_and_install/sophgo.md
@@ -1,4 +1,4 @@
-
+English | [中文](../../cn/build_and_install/sophgo.md)
# How to Build SOPHGO Deployment Environment
## SOPHGO Environment Preparation
diff --git a/docs/en/quick_start/runtime/cpp.md b/docs/en/quick_start/runtime/cpp.md
index 38dbcbc58..6de60b40f 100644
--- a/docs/en/quick_start/runtime/cpp.md
+++ b/docs/en/quick_start/runtime/cpp.md
@@ -5,7 +5,7 @@ Please check out the FastDeploy C++ deployment library is already in your enviro
This document shows an inference sample on the CPU using the PaddleClas classification model MobileNetV2 as an example.
-## 1. Obtaining the Module
+## 1. Obtaining the Model
```bash
wget https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz
diff --git a/docs/en/quick_start/runtime/python.md b/docs/en/quick_start/runtime/python.md
index d1d6b5bef..48878c2b5 100644
--- a/docs/en/quick_start/runtime/python.md
+++ b/docs/en/quick_start/runtime/python.md
@@ -5,7 +5,7 @@ Please check out the FastDeploy is already installed in your environment. You ca
This document shows an inference sample on the CPU using the PaddleClas classification model MobileNetV2 as an example.
-## 1. Obtaining the Module
+## 1. Obtaining the Model
``` python
import fastdeploy as fd
@@ -42,7 +42,7 @@ results = runtime.infer({
print(results[0].shape)
```
-When loading is complete, you can get the following output information indicating the initialized backend and the hardware devices.
+When loading is complete, you will get the following output information indicating the initialized backend and the hardware devices.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
diff --git a/examples/application/js/converter/README.md b/examples/application/js/converter/README.md
index b2f441533..c2cbfb292 100644
--- a/examples/application/js/converter/README.md
+++ b/examples/application/js/converter/README.md
@@ -1,3 +1,4 @@
+English | [简体中文](README_CN.md)
# PaddleJsConverter
## Installation
@@ -26,4 +27,4 @@ pip3 install paddlejsconverter
```shell
paddlejsconverter --modelPath=user_model_path --paramPath=user_model_params_path --outputDir=model_saved_path --useGPUOpt=True
```
-注意:useGPUOpt 选项默认不开启,如果模型用在 gpu backend(webgl/webgpu),则开启 useGPUOpt,如果模型运行在(wasm/plain js)则不要开启。
+Note: The useGPUOpt option is off by default. Turn on useGPUOpt if the model runs on a GPU backend (webgl/webgpu); do not turn it on if the model runs on wasm or plain js.
diff --git a/examples/application/js/converter/README_CN.md b/examples/application/js/converter/README_CN.md
new file mode 100644
index 000000000..bb14de94d
--- /dev/null
+++ b/examples/application/js/converter/README_CN.md
@@ -0,0 +1,30 @@
+简体中文 | [English](README.md)
+# PaddleJsConverter
+
+## Installation
+
+System Requirements:
+
+* paddlepaddle >= 2.0.0
+* paddlejslite >= 0.0.2
+* Python3: 3.5.1+ / 3.6 / 3.7
+* Python2: 2.7.15+
+
+#### Install PaddleJsConverter
+
+
+
+```shell
+pip install paddlejsconverter
+
+# or
+pip3 install paddlejsconverter
+```
+
+
+## Usage
+
+```shell
+paddlejsconverter --modelPath=user_model_path --paramPath=user_model_params_path --outputDir=model_saved_path --useGPUOpt=True
+```
+注意:useGPUOpt 选项默认不开启,如果模型用在 gpu backend(webgl/webgpu),则开启 useGPUOpt,如果模型运行在(wasm/plain js)则不要开启。
\ No newline at end of file
diff --git a/examples/application/js/converter/RNN.md b/examples/application/js/converter/RNN.md
index 294e93b19..811a4cf30 100644
--- a/examples/application/js/converter/RNN.md
+++ b/examples/application/js/converter/RNN.md
@@ -1,3 +1,4 @@
+简体中文 | [English](RNN_EN.md)
# RNN算子计算过程
## 一、RNN理解
@@ -73,7 +74,7 @@ paddle源码实现:https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/
计算方式:将rnn_matmul op输出结果分割成4份,每份执行不同激活函数计算,最后输出lstm_x_y.tmp_c[1, 1, 48]。x∈[0, 3],y∈[0, 24]。
详见算子实现:[rnn_cell](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_cell.ts)
-)
+
4)rnn_hidden
计算方式:将rnn_matmul op输出结果分割成4份,每份执行不同激活函数计算,最后输出lstm_x_y.tmp_h[1, 1, 48]。x∈[0, 3],y∈[0, 24]。
diff --git a/examples/application/js/converter/RNN_EN.md b/examples/application/js/converter/RNN_EN.md
new file mode 100644
index 000000000..ff60dbbe9
--- /dev/null
+++ b/examples/application/js/converter/RNN_EN.md
@@ -0,0 +1,80 @@
+English | [简体中文](RNN.md)
+# The computation process of RNN operator
+
+## 1. Understanding of RNN
+
+**RNN** is a recurrent neural network, including an input layer, a hidden layer and an output layer, which is specialized in processing sequential data.
+
+
+paddle official document: https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/RNN_cn.html#rnn
+
+paddle source code implementation: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/rnn_op.h#L812
+
+## 2. How to compute RNN
+
+At moment $t$, the input layer is $x_t$, the hidden layer is $s_t$, and the output layer is $o_t$. As the picture above shows, $s_t$ is not decided by $x_t$ alone; it is also related to $s_{t-1}$. The formula is as follows:
+
+
+
+## 3. RNN operator implementation in pdjs
+
+Because the gradient disappearance problem exists in RNN, and more contextual information cannot be obtained, **LSTM (Long Short Term Memory)** is used in CRNN, which is a special kind of RNN that can preserve long-term dependencies.
+
+Based on the image sequence, the two directions of context are mutually useful and complementary. Since the LSTM is unidirectional, two LSTMs, one forward and one backward, are combined into a **bidirectional LSTM**. In addition, multiple layers of bidirectional LSTMs can be stacked. ch_PP-OCRv2_rec_infer recognition model is using a two-layer bidirectional LSTM structure. The calculation process is shown as follows.
+
+#### Take ch_ppocr_mobile_v2.0_rec_infer model, rnn operator as an example
+```javascript
+{
+ Attr: {
+ mode: 'LSTM'
+ // Whether bidirectional, if true, it is necessary to traverse both forward and reverse.
+ is_bidirec: true
+ // Number of hidden layers, representing the number of loops.
+ num_layers: 2
+ }
+
+ Input: [
+ transpose_1.tmp_0[25, 1, 288]
+ ]
+
+ PreState: [
+ fill_constant_batch_size_like_0.tmp_0[4, 1, 48],
+ fill_constant_batch_size_like_1.tmp_0[4, 1, 48]
+ ]
+
+ WeightList: [
+ lstm_cell_0.w_0[192, 288], lstm_cell_0.w_1[192, 48],
+ lstm_cell_1.w_0[192, 288], lstm_cell_1.w_1[192, 48],
+ lstm_cell_2.w_0[192, 96], lstm_cell_2.w_1[192, 48],
+ lstm_cell_3.w_0[192, 96], lstm_cell_3.w_1[192, 48],
+ lstm_cell_0.b_0[192], lstm_cell_0.b_1[192],
+ lstm_cell_1.b_0[192], lstm_cell_1.b_1[192],
+ lstm_cell_2.b_0[192], lstm_cell_2.b_1[192],
+ lstm_cell_3.b_0[192], lstm_cell_3.b_1[192]
+ ]
+
+ Output: [
+ lstm_0.tmp_0[25, 1, 96]
+ ]
+}
+```
+
+#### Overall computation process
+
+#### Add op in rnn calculation
+1) rnn_origin
+Formula: blas.MatMul(Input, WeightList_ih, blas_ih) + blas.MatMul(PreState, WeightList_hh, blas_hh)
+
+2) rnn_matmul
+Formula: rnn_matmul = rnn_origin + Matmul( $ S_{t-1} $, WeightList_hh)
+
+3) rnn_cell
+Method: Split the rnn_matmul op output into 4 copies, each copy performs a different activation function calculation, and finally outputs lstm_x_y.tmp_c[1, 1, 48]. x∈[0, 3], y∈[0, 24].
+For details, please refer to [rnn_cell](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_cell.ts).
+
+
+4) rnn_hidden
+Method: Split the rnn_matmul op output into 4 copies, each copy performs a different activation function calculation, and finally outputs lstm_x_y.tmp_h[1, 1, 48]. x∈[0, 3], y∈[0, 24].
+For details, please refer to [rnn_hidden](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_hidden.ts).
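+
+As an illustration of the gate math behind rnn_cell and rnn_hidden (not part of the original document; the exact gate ordering used by Paddle's kernel is an assumption here), a small NumPy sketch:
+
+```python
+import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+def lstm_step(rnn_matmul_out, c_prev):
+    # Split the rnn_matmul output (..., 4 * hidden) into 4 gate slices.
+    i, f, g, o = np.split(rnn_matmul_out, 4, axis=-1)
+    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # rnn_cell output
+    h = sigmoid(o) * np.tanh(c)                         # rnn_hidden output
+    return c, h
+
+c, h = lstm_step(np.random.randn(1, 1, 4 * 48), np.zeros((1, 1, 48)))
+print(c.shape, h.shape)   # (1, 1, 48) (1, 1, 48)
+```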
+
+
diff --git a/examples/application/js/package/packages/paddlejs-models/humanseg_gpu/README.md b/examples/application/js/package/packages/paddlejs-models/humanseg_gpu/README.md
index 2c8bae817..28fc7cdbf 100644
--- a/examples/application/js/package/packages/paddlejs-models/humanseg_gpu/README.md
+++ b/examples/application/js/package/packages/paddlejs-models/humanseg_gpu/README.md
@@ -47,7 +47,7 @@ humanseg.drawMask(data, canvas3, back_canvas);
```js
-// 引入 humanseg sdk
+// import humanseg sdk
import * as humanseg from '@paddle-js-models/humanseg/lib/index_gpu';
// load humanseg model, use 398x224 shape model, and preheat
diff --git a/examples/audio/pp-tts/README.md b/examples/audio/pp-tts/README.md
index d12f2d0e5..a38637f15 100644
--- a/examples/audio/pp-tts/README.md
+++ b/examples/audio/pp-tts/README.md
@@ -1,9 +1,10 @@
-# PaddleSpeech 流式语音合成
+English | [简体中文](README_CN.md)
+# PaddleSpeech Streaming Text-to-Speech
-- 本文示例的实现来自[PaddleSpeech 流式语音合成](https://github.com/PaddlePaddle/PaddleSpeech/tree/r1.2).
+- The examples in this document are from [PaddleSpeech Streaming Text-to-Speech](https://github.com/PaddlePaddle/PaddleSpeech/tree/r1.2).
-## 详细部署文档
+## Detailed deployment document
-- [Python部署](python)
-- [Serving部署](serving)
+- [Python deployment](python)
+- [Serving deployment](serving)
diff --git a/examples/audio/pp-tts/README_CN.md b/examples/audio/pp-tts/README_CN.md
new file mode 100644
index 000000000..1f3a8b97b
--- /dev/null
+++ b/examples/audio/pp-tts/README_CN.md
@@ -0,0 +1,10 @@
+简体中文 | [English](README.md)
+# PaddleSpeech 流式语音合成
+
+
+- 本文示例的实现来自[PaddleSpeech 流式语音合成](https://github.com/PaddlePaddle/PaddleSpeech/tree/r1.2).
+
+## 详细部署文档
+
+- [Python部署](python)
+- [Serving部署](serving)
diff --git a/examples/multimodal/stable_diffusion/README.md b/examples/multimodal/stable_diffusion/README.md
index 0b5bbcd09..d745eb3a2 100755
--- a/examples/multimodal/stable_diffusion/README.md
+++ b/examples/multimodal/stable_diffusion/README.md
@@ -1,63 +1,64 @@
-# FastDeploy Diffusion模型高性能部署
+English | [简体中文](README_CN.md)
+# FastDeploy Diffusion Model High-Performance Deployment
-本部署示例使用⚡️`FastDeploy`在Huggingface团队[Diffusers](https://github.com/huggingface/diffusers)项目设计的`DiffusionPipeline`基础上,完成Diffusion模型的高性能部署。
+This deployment example uses ⚡️`FastDeploy` to complete the high-performance deployment of Diffusion models, built on the `DiffusionPipeline` designed by the Huggingface team in the [Diffusers](https://github.com/huggingface/diffusers) project.
-### 部署模型准备
+### Preparation for Deployment
-本示例需要使用训练模型导出后的部署模型。有两种部署模型的获取方式:
+This example needs the deployment model after exporting the training model. Here are two ways to obtain the deployment model:
-- 模型导出方式,可参考[模型导出文档](./export.md)导出部署模型。
-- 下载部署模型。为了方便开发者快速测试本示例,我们已经将部分`Diffusion`模型预先导出,开发者只要下载模型就可以快速测试:
+- Export the model yourself. Refer to [Model Export](./export_EN.md) for how to export the deployment model.
+- Download the deployment model. To facilitate developers to test the example, we have pre-exported some of the `Diffusion` models, so you can just download models and test them quickly:
-| 模型 | Scheduler |
+| Model | Scheduler |
|----------|--------------|
| [CompVis/stable-diffusion-v1-4](https://bj.bcebos.com/fastdeploy/models/stable-diffusion/CompVis/stable-diffusion-v1-4.tgz) | PNDM |
| [runwayml/stable-diffusion-v1-5](https://bj.bcebos.com/fastdeploy/models/stable-diffusion/runwayml/stable-diffusion-v1-5.tgz) | EulerAncestral |
-## 环境依赖
+## Environment Dependency
-在示例中使用了PaddleNLP的CLIP模型的分词器,所以需要执行以下命令安装依赖。
+The example uses the tokenizer of PaddleNLP's CLIP model, so run the following command to install the dependencies.
```shell
pip install paddlenlp paddlepaddle-gpu
```
-### 快速体验
+### Quick Experience
-我们经过部署模型准备,可以开始进行测试。下面将指定模型目录以及推理引擎后端,运行`infer.py`脚本,完成推理。
+After preparing the deployment model, we can start testing. Below we specify the model directory as well as the inference engine backend, and run the `infer.py` script to complete the inference.
```
python infer.py --model_dir stable-diffusion-v1-4/ --scheduler "pndm" --backend paddle
```
-得到的图像文件为fd_astronaut_rides_horse.png。生成的图片示例如下(每次生成的图片都不相同,示例仅作参考):
+The resulting image file is fd_astronaut_rides_horse.png. An example of the generated image is shown below (the generated image differs on every run; the example is for reference only):

-如果使用stable-diffusion-v1-5模型,则可执行以下命令完成推理:
+If the stable-diffusion-v1-5 model is used, you can run the following commands to complete the inference.
```
-# GPU上推理
+# Inference on GPU
python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle
-# 在昆仑芯XPU上推理
+# Inference on KunlunXin XPU
python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle-kunlunxin
```
-#### 参数说明
+#### Parameters
-`infer.py` 除了以上示例的命令行参数,还支持更多命令行参数的设置。以下为各命令行参数的说明。
+`infer.py` supports more command line parameters than the above example. The following is a description of each command line parameter.
-| 参数 |参数说明 |
+| Parameter |Description |
|----------|--------------|
-| --model_dir | 导出后模型的目录。 |
-| --model_format | 模型格式。默认为`'paddle'`,可选列表:`['paddle', 'onnx']`。 |
-| --backend | 推理引擎后端。默认为`paddle`,可选列表:`['onnx_runtime', 'paddle', 'paddle-kunlunxin']`,当模型格式为`onnx`时,可选列表为`['onnx_runtime']`。 |
-| --scheduler | StableDiffusion 模型的scheduler。默认为`'pndm'`。可选列表:`['pndm', 'euler_ancestral']`,StableDiffusio模型对应的scheduler可参考[ppdiffuser模型列表](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/textual_inversion)。|
-| --unet_model_prefix | UNet模型前缀。默认为`unet`。 |
-| --vae_model_prefix | VAE模型前缀。默认为`vae_decoder`。 |
-| --text_encoder_model_prefix | TextEncoder模型前缀。默认为`text_encoder`。 |
-| --inference_steps | UNet模型运行的次数,默认为100。 |
-| --image_path | 生成图片的路径。默认为`fd_astronaut_rides_horse.png`。 |
-| --device_id | gpu设备的id。若`device_id`为-1,视为使用cpu推理。 |
-| --use_fp16 | 是否使用fp16精度。默认为`False`。使用tensorrt或者paddle-tensorrt后端时可以设为`True`开启。 |
+| --model_dir | Directory of the exported model. |
+| --model_format | Model format. Default is `'paddle'`, optional list: `['paddle', 'onnx']`. |
+| --backend | Inference engine backend. Default is `paddle`, optional list: `['onnx_runtime', 'paddle', 'paddle-kunlunxin']`; when the model format is `onnx`, the optional list is `['onnx_runtime']`. |
+| --scheduler | Scheduler of the StableDiffusion model. Default is `'pndm'`, optional list: `['pndm', 'euler_ancestral']`. The scheduler corresponding to the StableDiffusion model can be found in the [ppdiffuser model list](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/textual_inversion).|
+| --unet_model_prefix | UNet model prefix, default is `unet`. |
+| --vae_model_prefix | VAE model prefix, default is `vae_decoder`. |
+| --text_encoder_model_prefix | TextEncoder model prefix, default is `text_encoder`. |
+| --inference_steps | Running times of UNet model, default is 100. |
+| --image_path | Path to the generated image, default is `fd_astronaut_rides_horse.png`. |
+| --device_id | gpu id. If `device_id` is -1, cpu is used for inference. |
+| --use_fp16 | Indicates if fp16 is used, default is `False`. Can be set to `True` when using tensorrt or paddle-tensorrt backend. |
diff --git a/examples/multimodal/stable_diffusion/README_CN.md b/examples/multimodal/stable_diffusion/README_CN.md
new file mode 100644
index 000000000..8dc5f6cf7
--- /dev/null
+++ b/examples/multimodal/stable_diffusion/README_CN.md
@@ -0,0 +1,64 @@
+简体中文 | [English](README.md)
+# FastDeploy Diffusion模型高性能部署
+
+本部署示例使用⚡️`FastDeploy`在Huggingface团队[Diffusers](https://github.com/huggingface/diffusers)项目设计的`DiffusionPipeline`基础上,完成Diffusion模型的高性能部署。
+
+### 部署模型准备
+
+本示例需要使用训练模型导出后的部署模型。有两种部署模型的获取方式:
+
+- 模型导出方式,可参考[模型导出文档](./export.md)导出部署模型。
+- 下载部署模型。为了方便开发者快速测试本示例,我们已经将部分`Diffusion`模型预先导出,开发者只要下载模型就可以快速测试:
+
+| 模型 | Scheduler |
+|----------|--------------|
+| [CompVis/stable-diffusion-v1-4](https://bj.bcebos.com/fastdeploy/models/stable-diffusion/CompVis/stable-diffusion-v1-4.tgz) | PNDM |
+| [runwayml/stable-diffusion-v1-5](https://bj.bcebos.com/fastdeploy/models/stable-diffusion/runwayml/stable-diffusion-v1-5.tgz) | EulerAncestral |
+
+## 环境依赖
+
+在示例中使用了PaddleNLP的CLIP模型的分词器,所以需要执行以下命令安装依赖。
+
+```shell
+pip install paddlenlp paddlepaddle-gpu
+```
+
+### 快速体验
+
+我们经过部署模型准备,可以开始进行测试。下面将指定模型目录以及推理引擎后端,运行`infer.py`脚本,完成推理。
+
+```
+python infer.py --model_dir stable-diffusion-v1-4/ --scheduler "pndm" --backend paddle
+```
+
+得到的图像文件为fd_astronaut_rides_horse.png。生成的图片示例如下(每次生成的图片都不相同,示例仅作参考):
+
+
+
+如果使用stable-diffusion-v1-5模型,则可执行以下命令完成推理:
+
+```
+# GPU上推理
+python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle
+
+# 在昆仑芯XPU上推理
+python infer.py --model_dir stable-diffusion-v1-5/ --scheduler "euler_ancestral" --backend paddle-kunlunxin
+```
+
+#### 参数说明
+
+`infer.py` 除了以上示例的命令行参数,还支持更多命令行参数的设置。以下为各命令行参数的说明。
+
+| 参数 |参数说明 |
+|----------|--------------|
+| --model_dir | 导出后模型的目录。 |
+| --model_format | 模型格式。默认为`'paddle'`,可选列表:`['paddle', 'onnx']`。 |
+| --backend | 推理引擎后端。默认为`paddle`,可选列表:`['onnx_runtime', 'paddle', 'paddle-kunlunxin']`,当模型格式为`onnx`时,可选列表为`['onnx_runtime']`。 |
+| --scheduler | StableDiffusion 模型的scheduler。默认为`'pndm'`。可选列表:`['pndm', 'euler_ancestral']`,StableDiffusio模型对应的scheduler可参考[ppdiffuser模型列表](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/textual_inversion)。|
+| --unet_model_prefix | UNet模型前缀。默认为`unet`。 |
+| --vae_model_prefix | VAE模型前缀。默认为`vae_decoder`。 |
+| --text_encoder_model_prefix | TextEncoder模型前缀。默认为`text_encoder`。 |
+| --inference_steps | UNet模型运行的次数,默认为100。 |
+| --image_path | 生成图片的路径。默认为`fd_astronaut_rides_horse.png`。 |
+| --device_id | gpu设备的id。若`device_id`为-1,视为使用cpu推理。 |
+| --use_fp16 | 是否使用fp16精度。默认为`False`。使用tensorrt或者paddle-tensorrt后端时可以设为`True`开启。 |
diff --git a/examples/multimodal/stable_diffusion/cpp/README.md b/examples/multimodal/stable_diffusion/cpp/README.md
index 06d085feb..b50f950f8 100644
--- a/examples/multimodal/stable_diffusion/cpp/README.md
+++ b/examples/multimodal/stable_diffusion/cpp/README.md
@@ -1,12 +1,13 @@
-# StableDiffusion C++部署示例
+English | [简体中文](README_CN.md)
+# StableDiffusion C++ Deployment
-在部署前,需确认以下两个步骤
+Before deployment, the following two steps need to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download pre-compiled libraries and samples according to the development environment. Please refer to [FastDeploy pre-compiled libraries](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-本目录下提供`*_infer.cc`快速完成StableDiffusion各任务的C++部署示例。
+This directory provides `*_infer.cc` to quickly complete C++ deployment examples for each task of StableDiffusion.
-## Inpaint任务
+## Inpaint Task
-StableDiffusion Inpaint任务是一个根据提示文本补全图片的任务,具体而言就是用户给定提示文本,原始图片以及原始图片的mask图片,该任务输出补全后的图片。
+The StableDiffusion Inpaint task completes an image according to a text prompt: the user provides the prompt text, the original image, and a mask image of the original image, and the task outputs the completed image.
diff --git a/examples/multimodal/stable_diffusion/cpp/README_CN.md b/examples/multimodal/stable_diffusion/cpp/README_CN.md
new file mode 100644
index 000000000..df2372151
--- /dev/null
+++ b/examples/multimodal/stable_diffusion/cpp/README_CN.md
@@ -0,0 +1,13 @@
+简体中文 | [English](README.md)
+# StableDiffusion C++部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本目录下提供`*_infer.cc`快速完成StableDiffusion各任务的C++部署示例。
+
+## Inpaint任务
+
+StableDiffusion Inpaint任务是一个根据提示文本补全图片的任务,具体而言就是用户给定提示文本,原始图片以及原始图片的mask图片,该任务输出补全后的图片。
diff --git a/examples/multimodal/stable_diffusion/export.md b/examples/multimodal/stable_diffusion/export.md
index ba2b4faf1..84badd703 100644
--- a/examples/multimodal/stable_diffusion/export.md
+++ b/examples/multimodal/stable_diffusion/export.md
@@ -1,3 +1,4 @@
+简体中文 | [English](export_EN.md)
# Diffusion模型导出教程
本项目支持两种模型导出方式:[PPDiffusers](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers)模型导出以及[Diffusers](https://github.com/huggingface/diffusers)模型导出。下面分别介绍这两种模型导出方式。
diff --git a/examples/multimodal/stable_diffusion/export_EN.md b/examples/multimodal/stable_diffusion/export_EN.md
new file mode 100644
index 000000000..fd4f6c421
--- /dev/null
+++ b/examples/multimodal/stable_diffusion/export_EN.md
@@ -0,0 +1,106 @@
+English | [简体中文](export.md)
+# Diffusion Model Export
+
+The project supports two methods of model export, [PPDiffusers](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers) model export and [Diffusers](https://github.com/huggingface/diffusers) model export. Here we introduce each of these two methods.
+
+## PPDiffusers Model Export
+
+[PPDiffusers](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers) is a Diffusion Model toolkit that supports cross-modal (e.g., image and speech) training and inference. It builds on the design of [Diffusers](https://github.com/huggingface/diffusers) by the 🤗 Huggingface team, and relies on [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) framework and the [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) natural language processing library. The following describes how to use FastDeploy to deploy the Diffusion model provided by PPDiffusers for high performance.
+
+### Dependency Installation
+
+The model export depends on `paddlepaddle`, `paddlenlp` and `ppdiffusers`, which can be installed quickly by running the following command using `pip`.
+
+```shell
+pip install -r requirements_paddle.txt
+```
+
+### Model Export
+
+___Note: The StableDiffusion model needs to be downloaded during the model export process. To use the model and weights, you must accept the required License. Please visit HuggingFace's [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5), read the License carefully, and then sign the agreement.___
+
+___Tips: Stable Diffusion is based on these Licenses: The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which this license is based.___
+
+You can run the following command to export the model.
+
+```shell
+python export_model.py --pretrained_model_name_or_path CompVis/stable-diffusion-v1-4 --output_path stable-diffusion-v1-4
+```
+
+The output model directory is as follows:
+```shell
+stable-diffusion-v1-4/
+├── text_encoder
+│ ├── inference.pdiparams
+│ ├── inference.pdiparams.info
+│ └── inference.pdmodel
+├── unet
+│ ├── inference.pdiparams
+│ ├── inference.pdiparams.info
+│ └── inference.pdmodel
+└── vae_decoder
+ ├── inference.pdiparams
+ ├── inference.pdiparams.info
+ └── inference.pdmodel
+```
+
+#### Parameters
+
+Here is a description of each command-line parameter in `export_model.py`.
+
+| Parameter |Description |
+|----------|--------------|
+|--pretrained_model_name_or_path | The diffusion pretrained model provided by ppdiffusers. Default is "CompVis/stable-diffusion-v1-4". For more diffusion pretrained models, please refer to [ppdiffuser model list](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/ppdiffusers/examples/textual_inversion). |
+|--output_path | Exported directory |
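+
+The same script and flags can be pointed at other checkpoints, for example the runwayml/stable-diffusion-v1-5 model referenced in the license note above (a sketch; whether a given checkpoint is available through ppdiffusers should be confirmed against the model list linked above):
+
+```shell
+python export_model.py --pretrained_model_name_or_path runwayml/stable-diffusion-v1-5 --output_path stable-diffusion-v1-5
+```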
+
+
+## Diffusers Model Export
+
+[Diffusers](https://github.com/huggingface/diffusers) is a Diffusion Model toolkit built by HuggingFace to support cross-modal (e.g. image and speech) training and inference. The underlying model code is available in both a PyTorch implementation and a Flax implementation. This example shows how to use FastDeploy to deploy a PyTorch implementation of Diffusion Model for high performance.
+
+### Dependency Installation
+
+The model export depends on `onnx`, `torch`, `diffusers` and `transformers`, which can be installed quickly by running the following command using `pip`.
+
+```shell
+pip install -r requirements_torch.txt
+```
+
+### Model Export
+
+___Note: The StableDiffusion model needs to be downloaded during the model export process. To use the model and weights, you must accept the required License and obtain an access Token granted by the HF Hub. Please visit HuggingFace's [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5), read the License carefully, and then sign the agreement.___
+
+___Tips: Stable Diffusion is based on these Licenses: The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which this license is based.___
+
+If you are exporting a model for the first time, you need to log in to the HuggingFace client first. Run the following command to log in:
+
+```shell
+huggingface-cli login
+```
+
+After logging in, you can run the following command to export the model.
+
+```shell
+python export_torch_to_onnx_model.py --pretrained_model_name_or_path CompVis/stable-diffusion-v1-4 --output_path torch_diffusion_model
+```
+
+The output model directory is as follows:
+
+```shell
+torch_diffusion_model/
+├── text_encoder
+│ └── inference.onnx
+├── unet
+│ └── inference.onnx
+└── vae_decoder
+ └── inference.onnx
+```
+
+#### Parameters
+
+Here is a description of each command-line parameter in `export_torch_to_onnx_model.py`.
+
+| Parameter |Description |
+|----------|--------------|
+|--pretrained_model_name_or_path |The diffusion pretrained model provided by ppdiffusers, default is "CompVis/stable-diffusion-v1-4". For more diffusion pretrained models, please refer to [HuggingFace model list](https://huggingface.co/CompVis/stable-diffusion-v1-4).|
+|--output_path |Exported directory |
\ No newline at end of file
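+
+As a rough sketch of how this ONNX export is consumed, the deployment example's `infer.py` (in the parent directory) accepts the exported directory together with the ONNX-specific options documented in the deployment README:
+
+```shell
+python infer.py --model_dir torch_diffusion_model/ --model_format onnx --backend onnx_runtime --scheduler "pndm"
+```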
diff --git a/examples/runtime/README.md b/examples/runtime/README.md
index a4fb921c7..80a035257 100755
--- a/examples/runtime/README.md
+++ b/examples/runtime/README.md
@@ -1,8 +1,9 @@
+English | [简体中文](README_CN.md)
# FastDeploy Runtime examples
-FastDeploy Runtime 推理示例如下
+FastDeploy Runtime examples are as follows:
-## Python 示例
+## Python Example
| Example Code | Program Language | Description |
| :------- | :------- | :---- |
@@ -15,7 +16,7 @@ FastDeploy Runtime 推理示例如下
| python/infer_onnx_onnxruntime.py | Python | Deploy ONNX model with ONNX Runtime(CPU/GPU) |
| python/infer_torchscript_poros.py | Python | Deploy TorchScript model with Poros Runtime(CPU/GPU) |
-## C++ 示例
+## C++ Example
| Example Code | Program Language | Description |
| :------- | :------- | :---- |
@@ -28,7 +29,7 @@ FastDeploy Runtime 推理示例如下
| cpp/infer_onnx_onnxruntime.cc | C++ | Deploy ONNX model with ONNX Runtime(CPU/GPU) |
| cpp/infer_torchscript_poros.cc | C++ | Deploy TorchScript model with Poros Runtime(CPU/GPU) |
-## 详细部署文档
+## Detailed deployment documents
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python deployment](python)
+- [C++ deployment](cpp)
diff --git a/examples/runtime/README_CN.md b/examples/runtime/README_CN.md
new file mode 100644
index 000000000..73ce02232
--- /dev/null
+++ b/examples/runtime/README_CN.md
@@ -0,0 +1,35 @@
+简体中文 | [English](README.md)
+# FastDeploy Runtime examples
+
+FastDeploy Runtime 推理示例如下
+
+## Python 示例
+
+| Example Code | Program Language | Description |
+| :------- | :------- | :---- |
+| python/infer_paddle_paddle_inference.py | Python | Deploy Paddle model with Paddle Inference(CPU/GPU) |
+| python/infer_paddle_tensorrt.py | Python | Deploy Paddle model with TensorRT(GPU) |
+| python/infer_paddle_openvino.py | Python | Deploy Paddle model with OpenVINO(CPU) |
+| python/infer_paddle_onnxruntime.py | Python | Deploy Paddle model with ONNX Runtime(CPU/GPU) |
+| python/infer_onnx_openvino.py | Python | Deploy ONNX model with OpenVINO(CPU) |
+| python/infer_onnx_tensorrt.py | Python | Deploy ONNX model with TensorRT(GPU) |
+| python/infer_onnx_onnxruntime.py | Python | Deploy ONNX model with ONNX Runtime(CPU/GPU) |
+| python/infer_torchscript_poros.py | Python | Deploy TorchScript model with Poros Runtime(CPU/GPU) |
+
+## C++ 示例
+
+| Example Code | Program Language | Description |
+| :------- | :------- | :---- |
+| cpp/infer_paddle_paddle_inference.cc | C++ | Deploy Paddle model with Paddle Inference(CPU/GPU) |
+| cpp/infer_paddle_tensorrt.cc | C++ | Deploy Paddle model with TensorRT(GPU) |
+| cpp/infer_paddle_openvino.cc | C++ | Deploy Paddle model with OpenVINO(CPU) |
+| cpp/infer_paddle_onnxruntime.cc | C++ | Deploy Paddle model with ONNX Runtime(CPU/GPU) |
+| cpp/infer_onnx_openvino.cc | C++ | Deploy ONNX model with OpenVINO(CPU) |
+| cpp/infer_onnx_tensorrt.cc | C++ | Deploy ONNX model with TensorRT(GPU) |
+| cpp/infer_onnx_onnxruntime.cc | C++ | Deploy ONNX model with ONNX Runtime(CPU/GPU) |
+| cpp/infer_torchscript_poros.cc | C++ | Deploy TorchScript model with Poros Runtime(CPU/GPU) |
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/runtime/cpp/README.md b/examples/runtime/cpp/README.md
index 38d25041d..3fefa1b1c 100644
--- a/examples/runtime/cpp/README.md
+++ b/examples/runtime/cpp/README.md
@@ -1,22 +1,23 @@
+English | [简体中文](README_CN.md)
# C++推理
-在运行demo前,需确认以下两个步骤
+Before running the demo, the following two steps need to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Download pre-compiled libraries and samples according to the development environment. Please refer to [FastDeploy pre-compiled libraries](../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-本文档以 PaddleClas 分类模型 MobileNetV2 为例展示CPU上的推理示例
+This document demonstrates CPU inference, using the PaddleClas classification model MobileNetV2 as an example.
-## 1. 获取模型
+## 1. Obtaining the Model
```bash
wget https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz
tar xvf mobilenetv2.tgz
```
-## 2. 配置后端
+## 2. Backend Configuration
-如下C++代码保存为`infer_paddle_onnxruntime.cc`
+The following C++ code is saved as `infer_paddle_onnxruntime.cc`.
``` c++
#include "fastdeploy/runtime.h"
@@ -66,35 +67,35 @@ int main(int argc, char* argv[]) {
return 0;
}
```
-加载完成,会输出提示如下,说明初始化的后端,以及运行的硬件设备
+When loading is complete, the following message will be printed, indicating the initialized backend and the hardware device in use.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
-## 3. 准备CMakeLists.txt
+## 3. Prepare the CMakeLists.txt
-FastDeploy中包含多个依赖库,直接采用`g++`或编译器编译较为繁杂,推荐使用cmake进行编译配置。示例配置如下,
+FastDeploy contains several dependent libraries, so compiling directly with `g++` or another compiler is cumbersome. We recommend using cmake to configure the build. A sample configuration is as follows:
```cmake
PROJECT(runtime_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
-# 指定下载解压后的fastdeploy库路径
+# Specify the path to the fastdeploy library after downloading and unpacking.
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
-# 添加FastDeploy依赖头文件
+# Add FastDeploy dependency headers.
include_directories(${FASTDEPLOY_INCS})
add_executable(runtime_demo ${PROJECT_SOURCE_DIR}/infer_onnx_openvino.cc)
-# 添加FastDeploy库依赖
+# Adding FastDeploy library dependencies.
target_link_libraries(runtime_demo ${FASTDEPLOY_LIBS})
```
-## 4. 编译可执行程序
+## 4. Compile the Executable Program
-打开命令行终端,进入`infer_paddle_onnxruntime.cc`和`CMakeLists.txt`所在的目录,执行如下命令
+Open the terminal, go to the directory where `infer_paddle_onnxruntime.cc` and `CMakeLists.txt` are located, and run the following command:
```bash
mkdir build & cd build
@@ -102,20 +103,20 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=$fastdeploy_cpp_sdk
make -j
```
-```fastdeploy_cpp_sdk``` 为FastDeploy C++部署库路径
+```fastdeploy_cpp_sdk``` is the path to the FastDeploy C++ deployment library.
-编译完成后,使用如下命令执行可得到预测结果
+After compiling, run the following command to get the prediction results.
```bash
./runtime_demo
```
-执行时如提示`error while loading shared libraries: libxxx.so: cannot open shared object file: No such file...`,说明程序执行时没有找到FastDeploy的库路径,可通过执行如下命令,将FastDeploy的库路径添加到环境变量之后,重新执行二进制程序。
+If you are prompted with `error while loading shared libraries: libxxx.so: cannot open shared object file: No such file...` at runtime, it means the FastDeploy library path was not found. Add the FastDeploy library path to the environment variable by executing the following command, then run the binary again.
```bash
source /Path/to/fastdeploy_cpp_sdk/fastdeploy_init.sh
```
-本示例代码在各平台(Windows/Linux/Mac)上通用,但编译过程仅支持(Linux/Mac),Windows上使用msbuild进行编译,具体使用方式参考[Windows平台使用FastDeploy C++ SDK](../../../docs/cn/faq/use_sdk_on_windows.md)
+This sample code works on all platforms (Windows/Linux/Mac), but the compilation steps above only apply to Linux/Mac; on Windows, use msbuild to compile. Please refer to [FastDeploy C++ SDK on Windows](../../../docs/en/faq/use_sdk_on_windows.md).
-## 其它文档
+## Other Documents
-- [Runtime Python 示例](../python)
-- [切换模型推理的硬件和后端](../../../docs/cn/faq/how_to_change_backend.md)
+- [A Python example for Runtime](../python)
+- [Switching hardware and backend for model inference](../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/runtime/cpp/README_CN.md b/examples/runtime/cpp/README_CN.md
new file mode 100644
index 000000000..592c23dab
--- /dev/null
+++ b/examples/runtime/cpp/README_CN.md
@@ -0,0 +1,122 @@
+简体中文 | [English](README.md)
+# C++推理
+
+在运行demo前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本文档以 PaddleClas 分类模型 MobileNetV2 为例展示CPU上的推理示例
+
+## 1. 获取模型
+
+```bash
+wget https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz
+tar xvf mobilenetv2.tgz
+```
+
+## 2. 配置后端
+
+如下C++代码保存为`infer_paddle_onnxruntime.cc`
+
+``` c++
+#include "fastdeploy/runtime.h"
+
+namespace fd = fastdeploy;
+
+int main(int argc, char* argv[]) {
+ std::string model_file = "mobilenetv2/inference.pdmodel";
+ std::string params_file = "mobilenetv2/inference.pdiparams";
+
+ // setup option
+ fd::RuntimeOption runtime_option;
+ runtime_option.SetModelPath(model_file, params_file, fd::ModelFormat::PADDLE);
+ runtime_option.UseOrtBackend();
+ runtime_option.SetCpuThreadNum(12);
+ // init runtime
+  std::unique_ptr<fd::Runtime> runtime =
+      std::unique_ptr<fd::Runtime>(new fd::Runtime());
+ if (!runtime->Init(runtime_option)) {
+ std::cerr << "--- Init FastDeploy Runitme Failed! "
+ << "\n--- Model: " << model_file << std::endl;
+ return -1;
+ } else {
+ std::cout << "--- Init FastDeploy Runitme Done! "
+ << "\n--- Model: " << model_file << std::endl;
+ }
+ // init input tensor shape
+ fd::TensorInfo info = runtime->GetInputInfo(0);
+ info.shape = {1, 3, 224, 224};
+
+  std::vector<fd::FDTensor> input_tensors(1);
+  std::vector<fd::FDTensor> output_tensors(1);
+
+  std::vector<float> inputs_data;
+ inputs_data.resize(1 * 3 * 224 * 224);
+ for (size_t i = 0; i < inputs_data.size(); ++i) {
+ inputs_data[i] = std::rand() % 1000 / 1000.0f;
+ }
+ input_tensors[0].SetExternalData({1, 3, 224, 224}, fd::FDDataType::FP32, inputs_data.data());
+
+ //get input name
+ input_tensors[0].name = info.name;
+
+ runtime->Infer(input_tensors, &output_tensors);
+
+ output_tensors[0].PrintInfo();
+ return 0;
+}
+```
+加载完成,会输出提示如下,说明初始化的后端,以及运行的硬件设备
+```
+[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
+```
+
+## 3. 准备CMakeLists.txt
+
+FastDeploy中包含多个依赖库,直接采用`g++`或编译器编译较为繁杂,推荐使用cmake进行编译配置。示例配置如下,
+
+```cmake
+PROJECT(runtime_demo C CXX)
+CMAKE_MINIMUM_REQUIRED (VERSION 3.12)
+
+# 指定下载解压后的fastdeploy库路径
+option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
+
+include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)
+
+# 添加FastDeploy依赖头文件
+include_directories(${FASTDEPLOY_INCS})
+
+add_executable(runtime_demo ${PROJECT_SOURCE_DIR}/infer_onnx_openvino.cc)
+# 添加FastDeploy库依赖
+target_link_libraries(runtime_demo ${FASTDEPLOY_LIBS})
+```
+
+## 4. 编译可执行程序
+
+打开命令行终端,进入`infer_paddle_onnxruntime.cc`和`CMakeLists.txt`所在的目录,执行如下命令
+
+```bash
+mkdir build & cd build
+cmake .. -DFASTDEPLOY_INSTALL_DIR=$fastdeploy_cpp_sdk
+make -j
+```
+
+```fastdeploy_cpp_sdk``` 为FastDeploy C++部署库路径
+
+编译完成后,使用如下命令执行可得到预测结果
+```bash
+./runtime_demo
+```
+执行时如提示`error while loading shared libraries: libxxx.so: cannot open shared object file: No such file...`,说明程序执行时没有找到FastDeploy的库路径,可通过执行如下命令,将FastDeploy的库路径添加到环境变量之后,重新执行二进制程序。
+```bash
+source /Path/to/fastdeploy_cpp_sdk/fastdeploy_init.sh
+```
+
+本示例代码在各平台(Windows/Linux/Mac)上通用,但编译过程仅支持(Linux/Mac),Windows上使用msbuild进行编译,具体使用方式参考[Windows平台使用FastDeploy C++ SDK](../../../docs/cn/faq/use_sdk_on_windows.md)
+
+## 其它文档
+
+- [Runtime Python 示例](../python)
+- [切换模型推理的硬件和后端](../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/runtime/python/README.md b/examples/runtime/python/README.md
index 42f007051..cdd69b2c1 100644
--- a/examples/runtime/python/README.md
+++ b/examples/runtime/python/README.md
@@ -1,13 +1,14 @@
+English | [简体中文](README_CN.md)
# Python推理
-在运行demo前,需确认以下两个步骤
+Before running the demo, the following two steps need to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Install the FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-本文档以 PaddleClas 分类模型 MobileNetV2 为例展示 CPU 上的推理示例
+This document demonstrates CPU inference, using the PaddleClas classification model MobileNetV2 as an example.
-## 1. 获取模型
+## 1. Obtaining the model
``` python
import fastdeploy as fd
@@ -16,7 +17,7 @@ model_url = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
fd.download_and_decompress(model_url, path=".")
```
-## 2. 配置后端
+## 2. Backend Configuration
``` python
option = fd.RuntimeOption()
@@ -24,30 +25,30 @@ option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
"mobilenetv2/inference.pdiparams")
-# **** CPU 配置 ****
+# **** CPU Configuration ****
option.use_cpu()
option.use_ort_backend()
option.set_cpu_thread_num(12)
-# 初始化构造runtime
+# Initialise runtime
runtime = fd.Runtime(option)
-# 获取模型输入名
+# Get model input name
input_name = runtime.get_input_info(0).name
-# 构造随机数据进行推理
+# Constructing random data for inference
results = runtime.infer({
input_name: np.random.rand(1, 3, 224, 224).astype("float32")
})
print(results[0].shape)
```
-加载完成,会输出提示如下,说明初始化的后端,以及运行的硬件设备
+When loading is complete, the following message will be printed, indicating the initialized backend and the hardware device in use.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
-## 其它文档
+## Other Documents
-- [Runtime C++ 示例](../cpp)
-- [切换模型推理的硬件和后端](../../../docs/cn/faq/how_to_change_backend.md)
+- [A C++ example for Runtime](../cpp)
+- [Switching hardware and backend for model inference](../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/runtime/python/README_CN.md b/examples/runtime/python/README_CN.md
new file mode 100644
index 000000000..1fa0235a7
--- /dev/null
+++ b/examples/runtime/python/README_CN.md
@@ -0,0 +1,54 @@
+简体中文 | [English](README.md)
+# Python推理
+
+在运行demo前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+本文档以 PaddleClas 分类模型 MobileNetV2 为例展示 CPU 上的推理示例
+
+## 1. 获取模型
+
+``` python
+import fastdeploy as fd
+
+model_url = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
+fd.download_and_decompress(model_url, path=".")
+```
+
+## 2. 配置后端
+
+``` python
+option = fd.RuntimeOption()
+
+option.set_model_path("mobilenetv2/inference.pdmodel",
+ "mobilenetv2/inference.pdiparams")
+
+# **** CPU 配置 ****
+option.use_cpu()
+option.use_ort_backend()
+option.set_cpu_thread_num(12)
+
+# 初始化构造runtime
+runtime = fd.Runtime(option)
+
+# 获取模型输入名
+input_name = runtime.get_input_info(0).name
+
+# 构造随机数据进行推理
+results = runtime.infer({
+ input_name: np.random.rand(1, 3, 224, 224).astype("float32")
+})
+
+print(results[0].shape)
+```
+加载完成,会输出提示如下,说明初始化的后端,以及运行的硬件设备
+```
+[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
+```
+
+## 其它文档
+
+- [Runtime C++ 示例](../cpp)
+- [切换模型推理的硬件和后端](../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/text/ernie-3.0/cpp/README.md b/examples/text/ernie-3.0/cpp/README.md
index c5527907c..65ca4100e 100755
--- a/examples/text/ernie-3.0/cpp/README.md
+++ b/examples/text/ernie-3.0/cpp/README.md
@@ -35,7 +35,7 @@ tar xvfz ernie-3.0-medium-zh-afqmc.tgz
# GPU Inference
./seq_cls_infer_demo --device gpu --model_dir ernie-3.0-medium-zh-afqmc
-# KunlunXin XPU 推理
+# KunlunXin XPU Inference
./seq_cls_infer_demo --device kunlunxin --model_dir ernie-3.0-medium-zh-afqmc
```
The result returned after running is as follows:
diff --git a/examples/text/ernie-3.0/serving/README.md b/examples/text/ernie-3.0/serving/README.md
index 15fa1ba64..9fc94dc45 100644
--- a/examples/text/ernie-3.0/serving/README.md
+++ b/examples/text/ernie-3.0/serving/README.md
@@ -30,18 +30,18 @@ mv msra_ner_pruned_infer_model/float32.pdiparams models/ernie_tokencls_model/1/m
After download and move, the models directory of the classification tasks is as follows:
```
models
-├── ernie_seqcls # 分类任务的pipeline
+├── ernie_seqcls # Pipeline for classification task
│ ├── 1
-│ └── config.pbtxt # 通过这个文件组合前后处理和模型推理
-├── ernie_seqcls_model # 分类任务的模型推理
+│ └── config.pbtxt # Combine pre and post processing and model inference
+├── ernie_seqcls_model # Model inference for classification task
│ ├── 1
│ │ └── model.onnx
│ └── config.pbtxt
-├── ernie_seqcls_postprocess # 分类任务后处理
+├── ernie_seqcls_postprocess # Post-processing of classification task
│ ├── 1
│ │ └── model.py
│ └── config.pbtxt
-└── ernie_tokenizer # 预处理分词
+└── ernie_tokenizer # Pre-processing tokenization
├── 1
│ └── model.py
└── config.pbtxt
@@ -63,9 +63,9 @@ docker run -it --net=host --name fastdeploy_server --shm-size="1g" -v /path/ser
The serving directory contains the configuration to start the pipeline service and the code to send the prediction request, including
```
-models # 服务化启动需要的模型仓库,包含模型和服务配置文件
-seq_cls_rpc_client.py # 新闻分类任务发送pipeline预测请求的脚本
-token_cls_rpc_client.py # 序列标注任务发送pipeline预测请求的脚本
+models # Model repository needed for serving startup, containing model and service configuration files
+seq_cls_rpc_client.py # Script for sending pipeline prediction requests for news classification task
+token_cls_rpc_client.py # Script for sending pipeline prediction requests for the sequence labelling task
```
*Attention*: When starting the service, each python backend process of the Server requests 64M of memory by default, and the docker started by default cannot start more than one python backend node. There are two solutions:
@@ -76,13 +76,13 @@ token_cls_rpc_client.py # 序列标注任务发送pipeline预测请求的脚
### Classification Task
Execute the following command in the container to start the service:
```
-# 默认启动models下所有模型
+# By default, all models under the models directory are started
fastdeployserver --model-repository=/models
-# 可通过参数只启动分类任务
+# You can start only the classification task via parameters
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=ernie_seqcls
```
-输出打印如下:
+The output is:
```
I1019 09:41:15.375496 2823 model_repository_manager.cc:1183] successfully loaded 'ernie_tokenizer' version 1
I1019 09:41:15.375987 2823 model_repository_manager.cc:1022] loading: ernie_seqcls:1
@@ -109,7 +109,7 @@ Execute the following command in the container to start the sequence labelling s
```
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=ernie_tokencls --backend-config=python,shm-default-byte-size=10485760
```
-输出打印如下:
+The output is:
```
I1019 09:41:15.375496 2823 model_repository_manager.cc:1183] successfully loaded 'ernie_tokenizer' version 1
I1019 09:41:15.375987 2823 model_repository_manager.cc:1022] loading: ernie_seqcls:1
@@ -148,7 +148,7 @@ Attention: The proxy need turning off when executing client requests. The ip add
```
python seq_cls_grpc_client.py
```
-输出打印如下:
+The output is:
```
{'label': array([5, 9]), 'confidence': array([0.6425664 , 0.66534853], dtype=float32)}
{'label': array([4]), 'confidence': array([0.53198355], dtype=float32)}
@@ -160,7 +160,7 @@ Attention: The proxy need turning off when executing client requests. The ip add
```
python token_cls_grpc_client.py
```
-输出打印如下:
+The output is:
```
input data: 北京的涮肉,重庆的火锅,成都的小吃都是极具特色的美食。
The model detects all entities:
@@ -173,5 +173,5 @@ entity: 玛雅 label: LOC pos: [2, 3]
entity: 华夏 label: LOC pos: [14, 15]
```
-## 配置修改
+## Configuration Modification
The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or with other inference engines, you need to modify the configuration. Please refer to the [configuration document](../../../../serving/docs/zh_CN/model_configuration.md).
diff --git a/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README.md b/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README.md
index b3ce2c1ae..a89a7baf8 100644
--- a/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README.md
+++ b/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README.md
@@ -1 +1,2 @@
-本目录存放ERNIE 3.0模型
+English | [简体中文](README_CN.md)
+This directory contains ERNIE 3.0 models.
\ No newline at end of file
diff --git a/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README_CN.md b/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README_CN.md
new file mode 100644
index 000000000..4d38de503
--- /dev/null
+++ b/examples/text/ernie-3.0/serving/models/ernie_seqcls_model/1/README_CN.md
@@ -0,0 +1,2 @@
+[English](README.md) | 简体中文
+本目录存放ERNIE 3.0模型
diff --git a/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README.md b/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README.md
index b3ce2c1ae..59ec0fbde 100644
--- a/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README.md
+++ b/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README.md
@@ -1 +1,2 @@
-本目录存放ERNIE 3.0模型
+English | [简体中文](README_CN.md)
+This directory contains ERNIE 3.0 models
diff --git a/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README_CN.md b/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README_CN.md
new file mode 100644
index 000000000..fd0cf7466
--- /dev/null
+++ b/examples/text/ernie-3.0/serving/models/ernie_tokencls_model/1/README_CN.md
@@ -0,0 +1,2 @@
+[English](README.md) | 简体中文
+本目录存放ERNIE 3.0模型
\ No newline at end of file
diff --git a/examples/text/uie/cpp/README.md b/examples/text/uie/cpp/README.md
index cb69716c6..ab8831815 100644
--- a/examples/text/uie/cpp/README.md
+++ b/examples/text/uie/cpp/README.md
@@ -6,8 +6,8 @@ This directory provides `infer.cc` quickly complete the example on CPU/GPU by [U
Before deployment, two steps need to be confirmed.
-- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy环境要求](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download precompiled deployment library and samples code based on the develop environment. Please refer to [FastDeploy预编译库](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Download precompiled deployment library and samples code based on the develop environment. Please refer to [FastDeploy pre-compiled libraries](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
## A Quick Start
Take uie-base model inference on Linux as an example, execute the following command in this directory to complete the compilation test. FastDeploy version 0.7.0 or above is required to support this model (x.x.x>=0.7.0).
@@ -15,7 +15,7 @@ Take uie-base model inference on Linux as an example, execute the following comm
```
mkdir build
cd build
-# Download FastDeploy precompiled library. Users can choose proper versions in the `FastDeploy预编译库` mentioned above.
+# Download FastDeploy precompiled library. Users can choose proper versions in the `FastDeploy pre-compiled libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
@@ -73,10 +73,10 @@ std::string param_path = model_dir + sep + "inference.pdiparams";
std::string vocab_path = model_dir + sep + "vocab.txt";
using fastdeploy::text::SchemaNode;
using fastdeploy::text::UIEResult;
-// 定义uie result对象
+// Define the uie result object
+std::vector<std::unordered_map<std::string, std::vector<UIEResult>>> results;
-// 初始化UIE模型
+// Initialize UIE model
auto predictor =
fastdeploy::text::UIEModel(model_path, param_path, vocab_path, 0.5, 128,
{"时间", "选手", "赛事名称"}, option);
@@ -94,7 +94,7 @@ predictor.Predict({"2月8日上午北京冬奥会自由式滑雪女子大跳台
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 赛事名称:
// text: 北京冬奥会自由式滑雪女子大跳台决赛
@@ -128,7 +128,7 @@ predictor.Predict({"(右肝肿瘤)肝细胞性肝癌(II-"
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 脉管内癌栓分级:
// text: M0级
@@ -174,7 +174,7 @@ predictor.Predict(
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 竞赛名称:
// text: 2022语言与智能技术竞赛
@@ -233,7 +233,7 @@ predictor.Predict(
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 地震触发词:
// text: 地震
@@ -287,7 +287,7 @@ predictor.Predict(
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 评价维度:
// text: 店面
@@ -332,7 +332,7 @@ predictor.Predict({"这个产品用起来真的很流畅,我非常喜欢"}, &r
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 情感倾向[正向,负向]:
// text: 正向
@@ -355,7 +355,7 @@ predictor.Predict({"北京市海淀区人民法院\n民事判决书\n(199x)"
&results);
std::cout << results << std::endl;
results.clear();
-// 示例输出
+// An output example
// The result:
// 被告:
// text: B公司
@@ -433,7 +433,7 @@ UIEModel(
SchemaLanguage schema_language = SchemaLanguage::ZH);
```
-UIEModel loading and initialization. Among them, model_file, params_file are Paddle inference documents exported by trained models. Please refer to [模型导出](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)。
+UIEModel loading and initialization. Among them, model_file, params_file are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).
**Parameter**
@@ -472,8 +472,8 @@ void Predict(
## Related Documents
-[UIE模型详细介绍](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md)
+[Details for UIE model](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md)
-[UIE模型导出方法](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
+[How to export a UIE model](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
-[UIE Python部署方法](../python/README.md)
+[UIE Python deployment](../python/README.md)
diff --git a/examples/text/uie/python/README.md b/examples/text/uie/python/README.md
index 6e6ccc0ab..54c2da2a3 100644
--- a/examples/text/uie/python/README.md
+++ b/examples/text/uie/python/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps need to be confirmed.
-- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy环境要求](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl pacakage needs installation. Please refer to [FastDeploy Python安装](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl package needs installation. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides an example that `infer.py` quickly complete CPU deployment conducted by the UIE model with OpenVINO acceleration on CPU/GPU and CPU.
@@ -67,7 +67,7 @@ The extraction schema: ['肿瘤的大小', '肿瘤的个数', '肝癌级别', '
### Description of command line arguments
-`infer.py` 除了以上示例的命令行参数,还支持更多命令行参数的设置。以下为各命令行参数的说明。
+`infer.py` supports more command line parameters than the above example. The following is a description of each command line parameter.
| Argument | Description |
|----------|--------------|
@@ -95,7 +95,7 @@ vocab_path = os.path.join(model_dir, "vocab.txt")
runtime_option = fastdeploy.RuntimeOption()
schema = ["时间", "选手", "赛事名称"]
-# 初始化UIE模型
+# Initialise UIE model
uie = UIEModel(
model_path,
param_path,
@@ -116,7 +116,7 @@ The initialization stage sets the schema```["time", "player", "event name"]``` t
["2月8日上午北京冬奥会自由式滑雪女子大跳台决赛中中国选手谷爱凌以188.25分获得金牌!"], return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'时间': {'end': 6,
# 'probability': 0.9857379794120789,
# 'start': 0,
@@ -145,7 +145,7 @@ For example, if the target entity types are "肿瘤的大小", "肿瘤的个数"
return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'肝癌级别': {'end': 20,
# 'probability': 0.9243271350860596,
# 'start': 13,
@@ -181,7 +181,7 @@ For example, if we take "contest name" as the extracted entity, and the relation
return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'竞赛名称': {'end': 13,
# 'probability': 0.7825401425361633,
# 'relation': {'主办方': [{'end': 22,
@@ -229,7 +229,7 @@ For example, if the targets are"地震强度", "时间", "震中位置" and "引
return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'地震触发词': {'end': 58,
# 'probability': 0.9977425932884216,
# 'relation': {'地震强度': [{'end': 56,
@@ -265,7 +265,7 @@ For example, if the extraction target is the evaluation dimensions and their cor
["店面干净,很清静,服务员服务热情,性价比很高,发现收银台有排队"], return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'评价维度': {'end': 20,
# 'probability': 0.9817039966583252,
# 'relation': {'情感倾向[正向,负向]': [{'end': 0,
@@ -290,7 +290,7 @@ Sentence-level sentiment classification, i.e., determining a sentence has a "pos
>>> results = uie.predict(["这个产品用起来真的很流畅,我非常喜欢"], return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'情感倾向[正向,负向]': {'end': 0,
# 'probability': 0.9990023970603943,
# 'start': 0,
@@ -311,7 +311,7 @@ For example, in a legal scenario where both entity extraction and relation extra
],
return_dict=True)
>>> pprint(results)
-# 示例输出
+# An output example
# [{'原告': {'end': 37,
# 'probability': 0.9949813485145569,
# 'relation': {'委托代理人': [{'end': 46,
@@ -348,7 +348,7 @@ fd.text.uie.UIEModel(model_file,
schema_language=SchemaLanguage.ZH)
```
-UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [模型导出](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).`vocab_file`refers to the vocabulary file. The vocabulary of the UIE model UIE can be downloaded in [UIE配置文件](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)
+UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2). `vocab_file` refers to the vocabulary file. The vocabulary of the UIE model can be downloaded from the [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)
**Parameter**
@@ -393,8 +393,8 @@ UIEModel loading and initialization. Among them, `model_file`, `params_file` are
## Related Documents
-[UIE模型详细介绍](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md)
+[Details for UIE model](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md)
-[UIE模型导出方法](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
+[How to export a UIE model](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
-[UIE C++部署方法](../cpp/README.md)
+[UIE C++ deployment](../cpp/README.md)
diff --git a/examples/text/uie/serving/README.md b/examples/text/uie/serving/README.md
index a05399ccd..2aa3fbbb8 100644
--- a/examples/text/uie/serving/README.md
+++ b/examples/text/uie/serving/README.md
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
Before serving deployment, you need to confirm:
-- 1. You can refer to [FastDeploy服务化部署](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands for serving images.
+- 1. You can refer to [FastDeploy serving deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands for serving images.
## Prepare models
@@ -143,4 +143,4 @@ results:
## Configuration Modification
-The current configuration is by default to run the paddle engine on CPU. If you want to run on CPU/GPU or other inference engines, modifying the configuration is needed.Please refer to [配置文档](../../../../serving/docs/zh_CN/model_configuration.md)
+The current configuration is by default set to run the Paddle engine on CPU. If you want to run on CPU/GPU or with other inference engines, you need to modify the configuration. Please refer to the [Configuration Document](../../../../serving/docs/zh_CN/model_configuration.md).
diff --git a/examples/vision/classification/paddleclas/a311d/cpp/README_CN.md b/examples/vision/classification/paddleclas/a311d/cpp/README_CN.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/examples/vision/classification/paddleclas/sophgo/README.md b/examples/vision/classification/paddleclas/sophgo/README.md
index 32bb3bfbf..7d887ae7d 100644
--- a/examples/vision/classification/paddleclas/sophgo/README.md
+++ b/examples/vision/classification/paddleclas/sophgo/README.md
@@ -1,28 +1,29 @@
-# PaddleDetection SOPHGO部署示例
+English | [简体中文](README_CN.md)
+# PaddleClas SOPHGO Deployment Example
-## 支持模型列表
+## Supported Model List
-目前FastDeploy支持的如下模型的部署[ResNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/ResNet_and_vd.md)
+Currently FastDeploy supports the deployment of the following models: [ResNet series models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/en/models/ResNet_and_vd_en.md).
-## 准备ResNet部署模型以及转换模型
+## Preparing ResNet Model Deployment and Conversion
-SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下:
-- Paddle动态图模型转换为ONNX模型,请参考[Paddle2ONNX模型转换](https://github.com/PaddlePaddle/Paddle2ONNX/tree/develop/model_zoo/classification)
-- ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)。
+Before deploying SOPHGO-TPU model, you need to first convert Paddle model to bmodel. Specific steps are as follows:
+- Convert the Paddle dynamic graph model to an ONNX model. Please refer to [Paddle2ONNX model conversion](https://github.com/PaddlePaddle/Paddle2ONNX/tree/develop/model_zoo/classification).
+- For the process of converting ONNX model to bmodel, please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
-## 模型转换example
+## Model Conversion Example
-下面以[ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz)为例子,教大家如何转换Paddle模型到SOPHGO-TPU模型。
+Here we take [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz) as an example to show you how to convert Paddle model to SOPHGO-TPU model.
-## 导出ONNX模型
+## Export ONNX Model
-### 下载Paddle ResNet50_vd静态图模型并解压
+### Download and Unzip the Paddle ResNet50_vd Static Graph Model
```shell
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
tar xvf ResNet50_vd_infer.tgz
```
-### 静态图转ONNX模型,注意,这里的save_file请和压缩包名对齐
+### Convert the Static Graph Model to an ONNX Model (note that save_file should match the archive name)
```shell
paddle2onnx --model_dir ResNet50_vd_infer \
--model_filename inference.pdmodel \
@@ -30,32 +31,32 @@ paddle2onnx --model_dir ResNet50_vd_infer \
--save_file ResNet50_vd_infer.onnx \
--enable_dev_version True
```
-### 导出bmodel模型
+### Export bmodel
-以转化BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
-### 1. 安装
+Take converting to a BM1684x bmodel as an example. You need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project. For the installation process, please refer to the [TPU-MLIR Document](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
+### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
-# myname1234是一个示例,也可以设置其他名字
+# myname1234 is just an example, you can customize your own name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
-### 2. ONNX模型转换为bmodel模型
+### 2. Convert ONNX model to bmodel
``` shell
mkdir ResNet50_vd_infer && cd ResNet50_vd_infer
-# 在该文件中放入测试图片,同时将上一步转换好的ResNet50_vd_infer.onnx放入该文件夹中
+# Put the test images in this folder, and also put the ResNet50_vd_infer.onnx converted in the previous step into it.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
-# 放入onnx模型文件ResNet50_vd_infer.onnx
+# Put in the onnx model file ResNet50_vd_infer.onnx.
mkdir workspace && cd workspace
-# 将ONNX模型转换为mlir模型,其中参数--output_names可以通过NETRON查看
+# Convert the ONNX model to an mlir model; the parameter --output_names can be viewed via NETRON.
model_transform.py \
--model_name ResNet50_vd_infer \
--model_def ../ResNet50_vd_infer.onnx \
@@ -69,7 +70,7 @@ model_transform.py \
--test_result ResNet50_vd_infer_top_outputs.npz \
--mlir ResNet50_vd_infer.mlir
-# 将mlir模型转换为BM1684x的F32 bmodel模型
+# Convert mlir model to BM1684x F32 bmodel.
model_deploy.py \
--mlir ResNet50_vd_infer.mlir \
--quantize F32 \
@@ -78,7 +79,7 @@ model_deploy.py \
--test_reference ResNet50_vd_infer_top_outputs.npz \
--model ResNet50_vd_infer_1684x_f32.bmodel
```
-最终获得可以在BM1684x上能够运行的bmodel模型ResNet50_vd_infer_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+The final bmodel, ResNet50_vd_infer_1684x_f32.bmodel, can run on BM1684x. If you want to further accelerate the model, you can convert ONNX model to INT8 bmodel. For details, please refer to [TPU-MLIR Document](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
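+
+As a rough sketch of that INT8 flow (based on the TPU-MLIR quick start; the calibration dataset, sample count, and file names below are illustrative), a calibration table is generated first and then passed to `model_deploy.py`:
+
+``` shell
+# Generate a calibration table from sample images (illustrative dataset and count).
+run_calibration.py ResNet50_vd_infer.mlir \
+    --dataset ../COCO2017 \
+    --input_num 100 \
+    -o ResNet50_vd_infer_cali_table
+
+# Convert the mlir model to an INT8 bmodel for BM1684x using the calibration table.
+model_deploy.py \
+    --mlir ResNet50_vd_infer.mlir \
+    --quantize INT8 \
+    --calibration_table ResNet50_vd_infer_cali_table \
+    --chip bm1684x \
+    --model ResNet50_vd_infer_1684x_int8.bmodel
+```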
-## 其他链接
-- [Cpp部署](./cpp)
+## Other Documents
+- [Cpp Deployment](./cpp)
diff --git a/examples/vision/classification/paddleclas/sophgo/README_CN.md b/examples/vision/classification/paddleclas/sophgo/README_CN.md
new file mode 100644
index 000000000..5e86aa26e
--- /dev/null
+++ b/examples/vision/classification/paddleclas/sophgo/README_CN.md
@@ -0,0 +1,85 @@
+[English](README.md) | 简体中文
+# PaddleClas SOPHGO部署示例
+
+## 支持模型列表
+
+目前FastDeploy支持的如下模型的部署[ResNet系列模型](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/ResNet_and_vd.md)
+
+## 准备ResNet部署模型以及转换模型
+
+SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下:
+- Paddle动态图模型转换为ONNX模型,请参考[Paddle2ONNX模型转换](https://github.com/PaddlePaddle/Paddle2ONNX/tree/develop/model_zoo/classification)
+- ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)。
+
+## 模型转换example
+
+下面以[ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz)为例子,教大家如何转换Paddle模型到SOPHGO-TPU模型。
+
+## 导出ONNX模型
+
+### 下载Paddle ResNet50_vd静态图模型并解压
+```shell
+wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
+tar xvf ResNet50_vd_infer.tgz
+```
+
+### 静态图转ONNX模型,注意,这里的save_file请和压缩包名对齐
+```shell
+paddle2onnx --model_dir ResNet50_vd_infer \
+ --model_filename inference.pdmodel \
+ --params_filename inference.pdiparams \
+ --save_file ResNet50_vd_infer.onnx \
+ --enable_dev_version True
+```
+### 导出bmodel模型
+
+以转化BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+### 1. 安装
+``` shell
+docker pull sophgo/tpuc_dev:latest
+
+# myname1234是一个示例,也可以设置其他名字
+docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
+
+source ./envsetup.sh
+./build.sh
+```
+
+### 2. ONNX模型转换为bmodel模型
+``` shell
+mkdir ResNet50_vd_infer && cd ResNet50_vd_infer
+
+# 在该文件中放入测试图片,同时将上一步转换好的ResNet50_vd_infer.onnx放入该文件夹中
+cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
+cp -rf ${REGRESSION_PATH}/image .
+# 放入onnx模型文件ResNet50_vd_infer.onnx
+
+mkdir workspace && cd workspace
+
+# 将ONNX模型转换为mlir模型,其中参数--output_names可以通过NETRON查看
+model_transform.py \
+ --model_name ResNet50_vd_infer \
+ --model_def ../ResNet50_vd_infer.onnx \
+ --input_shapes [[1,3,224,224]] \
+ --mean 0.0,0.0,0.0 \
+ --scale 0.0039216,0.0039216,0.0039216 \
+ --keep_aspect_ratio \
+ --pixel_format rgb \
+ --output_names save_infer_model/scale_0.tmp_1 \
+ --test_input ../image/dog.jpg \
+ --test_result ResNet50_vd_infer_top_outputs.npz \
+ --mlir ResNet50_vd_infer.mlir
+
+# 将mlir模型转换为BM1684x的F32 bmodel模型
+model_deploy.py \
+ --mlir ResNet50_vd_infer.mlir \
+ --quantize F32 \
+ --chip bm1684x \
+ --test_input ResNet50_vd_infer_in_f32.npz \
+ --test_reference ResNet50_vd_infer_top_outputs.npz \
+ --model ResNet50_vd_infer_1684x_f32.bmodel
+```
+最终获得可以在BM1684x上能够运行的bmodel模型ResNet50_vd_infer_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+
+## 其他链接
+- [Cpp部署](./cpp)
diff --git a/examples/vision/classification/paddleclas/sophgo/cpp/README.md b/examples/vision/classification/paddleclas/sophgo/cpp/README.md
index 7edfd2c94..0a8d95232 100644
--- a/examples/vision/classification/paddleclas/sophgo/cpp/README.md
+++ b/examples/vision/classification/paddleclas/sophgo/cpp/README.md
@@ -1,48 +1,49 @@
-# PaddleClas C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleClas C++ Deployment Example
-本目录下提供`infer.cc`快速完成ResNet50_vd模型在SOPHGO BM1684x板子上加速部署的示例。
+`infer.cc` in this directory provides a quick example of accelerated deployment of the ResNet50_vd model on SOPHGO BM1684x.
-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:
-1. 软硬件环境满足要求
-2. 根据开发环境,从头编译FastDeploy仓库
+1. The hardware and software environment meets the requirements.
+2. Compile the FastDeploy repository from scratch according to the development environment.
-以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
+For the above steps, please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md).
-## 生成基本目录文件
+## Generate Basic Directory Files
-该例程由以下几个部分组成
+This example consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
-├── image # 存放图片的文件夹
+├── build # Compile Folder
+├── image # Folder for images
├── infer.cc
-├── preprocess_config.yaml #示例前处理配置文件
-└── model # 存放模型文件的文件夹
+├── preprocess_config.yaml # Preprocessing configuration sample file.
+└── model # Folder for models
```
-## 编译
+## Compile
-### 编译并拷贝SDK到thirdpartys文件夹
+### Compile and Copy the SDK to the thirdpartys Folder
-请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录.
+Please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be generated in the build directory.
-### 拷贝模型文件,以及配置文件至model文件夹
-将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md)
-将转换后的SOPHGO bmodel模型文件拷贝至model中
-将前处理配置文件也拷贝到model中
+### Copy the Model and Configuration Files to the model Folder
+Convert the Paddle model to a SOPHGO bmodel. For the conversion steps, please refer to this [document](../README.md).
+Copy the converted SOPHGO bmodel file to the model folder.
+Copy the preprocessing configuration file to the model folder as well.
```bash
cp preprocess_config.yaml ./model
```
-### 准备测试图片至image文件夹
+### Prepare Test Images in the image Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
cp ILSVRC2012_val_00000010.jpeg ./images
```
-### 编译example
+### Compile example
```bash
cd build
@@ -50,12 +51,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
-## 运行例程
+## Run the Example
```bash
./infer_demo model images/ILSVRC2012_val_00000010.jpeg
```
-- [模型介绍](../../)
-- [模型转换](../)
+- [Model Description](../../)
+- [Model Conversion](../)
diff --git a/examples/vision/classification/paddleclas/sophgo/cpp/README_CN.md b/examples/vision/classification/paddleclas/sophgo/cpp/README_CN.md
new file mode 100644
index 000000000..8352e63d1
--- /dev/null
+++ b/examples/vision/classification/paddleclas/sophgo/cpp/README_CN.md
@@ -0,0 +1,62 @@
+[English](README.md) | 简体中文
+# PaddleClas C++部署示例
+
+本目录下提供`infer.cc`快速完成ResNet50_vd模型在SOPHGO BM1684x板子上加速部署的示例。
+
+在部署前,需确认以下两个步骤:
+
+1. 软硬件环境满足要求
+2. 根据开发环境,从头编译FastDeploy仓库
+
+以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
+
+## 生成基本目录文件
+
+该例程由以下几个部分组成
+```text
+.
+├── CMakeLists.txt
+├── build # 编译文件夹
+├── image # 存放图片的文件夹
+├── infer.cc
+├── preprocess_config.yaml #示例前处理配置文件
+└── model # 存放模型文件的文件夹
+```
+
+## 编译
+
+### 编译并拷贝SDK到thirdpartys文件夹
+
+请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录.
+
+### 拷贝模型文件,以及配置文件至model文件夹
+将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md)
+将转换后的SOPHGO bmodel模型文件拷贝至model中
+将前处理配置文件也拷贝到model中
+```bash
+cp preprocess_config.yaml ./model
+```
+
+### 准备测试图片至image文件夹
+```bash
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+cp ILSVRC2012_val_00000010.jpeg ./images
+```
+
+### 编译example
+
+```bash
+cd build
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
+make
+```
+
+## 运行例程
+
+```bash
+./infer_demo model images/ILSVRC2012_val_00000010.jpeg
+```
+
+
+- [模型介绍](../../)
+- [模型转换](../)
diff --git a/examples/vision/classification/paddleclas/sophgo/python/README.md b/examples/vision/classification/paddleclas/sophgo/python/README.md
index f495e5830..cc0c6f570 100644
--- a/examples/vision/classification/paddleclas/sophgo/python/README.md
+++ b/examples/vision/classification/paddleclas/sophgo/python/README.md
@@ -1,29 +1,30 @@
-# PaddleClas Python部署示例
+English | [简体中文](README_CN.md)
+# PaddleClas Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/sophgo.md)
-本目录下提供`infer.py`快速完成 ResNet50_vd 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
+`infer.py` in this directory provides a quick example of deployment of the ResNet50_vd model on SOPHGO TPU. Please run the following script:
```bash
-# 下载部署示例代码
+# Download the sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/classification/paddleclas/sophgo/python
-# 下载图片
+# Download images.
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
-# 推理
+# Inference.
python3 infer.py --model_file ./bmodel/resnet50_1684x_f32.bmodel --config_file ResNet50_vd_infer/inference_cls.yaml --image ILSVRC2012_val_00000010.jpeg
-# 运行完成后返回结果如下所示
+# The returned result.
ClassifyResult(
label_ids: 153,
scores: 0.684570,
)
```
-## 其它文档
-- [ResNet50_vd C++部署](../cpp)
-- [转换ResNet50_vd SOPHGO模型文档](../README.md)
+## Other Documents
+- [ResNet50_vd C++ Deployment](../cpp)
+- [Converting ResNet50_vd SOPHGO model](../README.md)
diff --git a/examples/vision/classification/paddleclas/sophgo/python/README_CN.md b/examples/vision/classification/paddleclas/sophgo/python/README_CN.md
new file mode 100644
index 000000000..2cc9e4596
--- /dev/null
+++ b/examples/vision/classification/paddleclas/sophgo/python/README_CN.md
@@ -0,0 +1,30 @@
+[English](README.md) | 简体中文
+# PaddleClas Python部署示例
+
+在部署前,需确认以下步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
+
+本目录下提供`infer.py`快速完成 ResNet50_vd 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/classification/paddleclas/sophgo/python
+
+# 下载图片
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+
+# 推理
+python3 infer.py --model_file ./bmodel/resnet50_1684x_f32.bmodel --config_file ResNet50_vd_infer/inference_cls.yaml --image ILSVRC2012_val_00000010.jpeg
+
+# 运行完成后返回结果如下所示
+ClassifyResult(
+label_ids: 153,
+scores: 0.684570,
+)
+```
+
+## 其它文档
+- [ResNet50_vd C++部署](../cpp)
+- [转换ResNet50_vd SOPHGO模型文档](../README.md)
diff --git a/examples/vision/detection/yolov5/sophgo/README.md b/examples/vision/detection/yolov5/sophgo/README.md
index d4fa4f7a8..cddca0c75 100644
--- a/examples/vision/detection/yolov5/sophgo/README.md
+++ b/examples/vision/detection/yolov5/sophgo/README.md
@@ -1,52 +1,53 @@
-# YOLOv5 SOPHGO部署示例
+English | [简体中文](README_CN.md)
+# YOLOv5 SOPHGO Deployment Example
-## 支持模型列表
+## Supported Model List
-YOLOv5 v6.0部署模型实现来自[YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0),和[基于COCO的预训练模型](https://github.com/ultralytics/yolov5/releases/tag/v6.0)
+The YOLOv5 v6.0 deployment is based on [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0) and its [COCO pretrained models](https://github.com/ultralytics/yolov5/releases/tag/v6.0).
-## 准备YOLOv5部署模型以及转换模型
+## Prepare and Convert the YOLOv5 Deployment Model
-SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下:
-- 下载预训练ONNX模型,请参考[YOLOv5准备部署模型](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/detection/yolov5)
-- ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)
+Before deploying the SOPHGO-TPU model, you need to convert the Paddle model to a bmodel first. The specific steps are as follows:
+- Download the pre-trained ONNX model. Please refer to [YOLOv5 Ready-to-deploy Model](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/detection/yolov5).
+- Convert ONNX model to bmodel. Please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
-## 模型转换example
+## Model Conversion Example
-下面以YOLOv5s为例子,教大家如何转换ONNX模型到SOPHGO-TPU模型
+Here we take YOLOv5s as an example to show how to convert an ONNX model to a SOPHGO-TPU model.
-## 下载YOLOv5s模型
+## Download YOLOv5s Model
-### 下载ONNX YOLOv5s静态图模型
+### Download the ONNX YOLOv5s Static Graph Model
```shell
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
```
-### 导出bmodel模型
+### Export the bmodel
-以转化BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
-### 1. 安装
+Here we take the BM1684x bmodel as an example. You need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project. For the installation process, please refer to the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
+### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
-# myname1234是一个示例,也可以设置其他名字
+# myname1234 is just an example, you can customize your own name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
-### 2. ONNX模型转换为bmodel模型
+### 2. Convert ONNX model to bmodel
``` shell
mkdir YOLOv5s && cd YOLOv5s
-# 在该文件中放入测试图片,同时将上一步下载的yolov5s.onnx放入该文件夹中
+# Put the test images in this folder, and also put the yolov5s.onnx downloaded in the previous step here.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
-# 放入onnx模型文件yolov5s.onnx
+# Put in the onnx model file yolov5s.onnx
mkdir workspace && cd workspace
-# 将ONNX模型转换为mlir模型,其中参数--output_names可以通过NETRON查看
+# Convert the ONNX model to an mlir model; the --output_names values can be checked with NETRON.
model_transform.py \
--model_name yolov5s \
--model_def ../yolov5s.onnx \
@@ -60,7 +61,7 @@ model_transform.py \
--test_result yolov5s_top_outputs.npz \
--mlir yolov5s.mlir
-# 将mlir模型转换为BM1684x的F32 bmodel模型
+# Convert mlir model to BM1684x F32 bmodel.
model_deploy.py \
--mlir yolov5s.mlir \
--quantize F32 \
@@ -69,7 +70,7 @@ model_deploy.py \
--test_reference yolov5s_top_outputs.npz \
--model yolov5s_1684x_f32.bmodel
```
-最终获得可以在BM1684x上能够运行的bmodel模型yolov5s_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+The final bmodel, yolov5s_1684x_f32.bmodel, can run on BM1684x. If you want to further accelerate the model, you can convert the ONNX model to an INT8 bmodel. For details, please refer to the [TPU-MLIR documentation](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
-## 其他链接
-- [Cpp部署](./cpp)
+## Other Documents
+- [C++ Deployment](./cpp)
diff --git a/examples/vision/detection/yolov5/sophgo/README_CN.md b/examples/vision/detection/yolov5/sophgo/README_CN.md
new file mode 100644
index 000000000..68be52280
--- /dev/null
+++ b/examples/vision/detection/yolov5/sophgo/README_CN.md
@@ -0,0 +1,76 @@
+[English](README.md) | 简体中文
+# YOLOv5 SOPHGO部署示例
+
+## 支持模型列表
+
+YOLOv5 v6.0部署模型实现来自[YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0),和[基于COCO的预训练模型](https://github.com/ultralytics/yolov5/releases/tag/v6.0)
+
+## 准备YOLOv5部署模型以及转换模型
+
+SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下:
+- 下载预训练ONNX模型,请参考[YOLOv5准备部署模型](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/detection/yolov5)
+- ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)
+
+## 模型转换example
+
+下面以YOLOv5s为例子,教大家如何转换ONNX模型到SOPHGO-TPU模型
+
+## 下载YOLOv5s模型
+
+### 下载ONNX YOLOv5s静态图模型
+```shell
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
+
+```
+### 导出bmodel模型
+
+以转化BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+### 1. 安装
+``` shell
+docker pull sophgo/tpuc_dev:latest
+
+# myname1234是一个示例,也可以设置其他名字
+docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
+
+source ./envsetup.sh
+./build.sh
+```
+
+### 2. ONNX模型转换为bmodel模型
+``` shell
+mkdir YOLOv5s && cd YOLOv5s
+
+# 在该文件中放入测试图片,同时将上一步下载的yolov5s.onnx放入该文件夹中
+cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
+cp -rf ${REGRESSION_PATH}/image .
+# 放入onnx模型文件yolov5s.onnx
+
+mkdir workspace && cd workspace
+
+# 将ONNX模型转换为mlir模型,其中参数--output_names可以通过NETRON查看
+model_transform.py \
+ --model_name yolov5s \
+ --model_def ../yolov5s.onnx \
+ --input_shapes [[1,3,640,640]] \
+ --mean 0.0,0.0,0.0 \
+ --scale 0.0039216,0.0039216,0.0039216 \
+ --keep_aspect_ratio \
+ --pixel_format rgb \
+ --output_names output,350,498,646 \
+ --test_input ../image/dog.jpg \
+ --test_result yolov5s_top_outputs.npz \
+ --mlir yolov5s.mlir
+
+# 将mlir模型转换为BM1684x的F32 bmodel模型
+model_deploy.py \
+ --mlir yolov5s.mlir \
+ --quantize F32 \
+ --chip bm1684x \
+ --test_input yolov5s_in_f32.npz \
+ --test_reference yolov5s_top_outputs.npz \
+ --model yolov5s_1684x_f32.bmodel
+```
+最终获得可以在BM1684x上能够运行的bmodel模型yolov5s_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+
+## 其他链接
+- [Cpp部署](./cpp)
diff --git a/examples/vision/detection/yolov5/sophgo/cpp/README.md b/examples/vision/detection/yolov5/sophgo/cpp/README.md
index e313da855..469a4d02b 100644
--- a/examples/vision/detection/yolov5/sophgo/cpp/README.md
+++ b/examples/vision/detection/yolov5/sophgo/cpp/README.md
@@ -1,43 +1,44 @@
-# YOLOv5 C++部署示例
+English | [简体中文](README_CN.md)
+# YOLOv5 C++ Deployment Example
-本目录下提供`infer.cc`快速完成yolov5s模型在SOPHGO BM1684x板子上加速部署的示例。
+`infer.cc` in this directory provides a quick example of accelerated deployment of the yolov5s model on SOPHGO BM1684x.
-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:
-1. 软硬件环境满足要求
-2. 根据开发环境,从头编译FastDeploy仓库
+1. Hardware and software environment meets the requirements.
+2. Compile the FastDeploy repository from scratch according to the development environment.
-以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
+For the above steps, please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md).
-## 生成基本目录文件
+## Generate Basic Directory Files
-该例程由以下几个部分组成
+This example consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
-├── image # 存放图片的文件夹
+├── build # Compile Folder
+├── image # Folder for images
├── infer.cc
-└── model # 存放模型文件的文件夹
+└── model # Folder for models
```
-## 编译
+## Compile
-### 编译并拷贝SDK到thirdpartys文件夹
+### Compile and Copy the SDK to the thirdpartys Folder
-请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录.
+Please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md) to compile the SDK. After compilation, the fastdeploy-0.0.3 directory will be generated in the build directory.
-### 拷贝模型文件,以及配置文件至model文件夹
-将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md)
-将转换后的SOPHGO bmodel模型文件拷贝至model中
+### Copy the Model and Configuration Files to the model Folder
+Convert the Paddle model to a SOPHGO bmodel. For the conversion steps, please refer to [this document](../README.md).
+Copy the converted SOPHGO bmodel file into the model folder.
-### 准备测试图片至image文件夹
+### Prepare Test Images in the image Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp 000000014439.jpg ./images
```
-### 编译example
+### Compile example
```bash
cd build
@@ -45,12 +46,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
-## 运行例程
+## Run the Example
```bash
./infer_demo model images/000000014439.jpg
```
-- [模型介绍](../../)
-- [模型转换](../)
+- [Model Description](../../)
+- [Model Conversion](../)
diff --git a/examples/vision/detection/yolov5/sophgo/cpp/README_CN.md b/examples/vision/detection/yolov5/sophgo/cpp/README_CN.md
new file mode 100644
index 000000000..17f1526cf
--- /dev/null
+++ b/examples/vision/detection/yolov5/sophgo/cpp/README_CN.md
@@ -0,0 +1,57 @@
+[English](README.md) | 简体中文
+# YOLOv5 C++部署示例
+
+本目录下提供`infer.cc`快速完成yolov5s模型在SOPHGO BM1684x板子上加速部署的示例。
+
+在部署前,需确认以下两个步骤:
+
+1. 软硬件环境满足要求
+2. 根据开发环境,从头编译FastDeploy仓库
+
+以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
+
+## 生成基本目录文件
+
+该例程由以下几个部分组成
+```text
+.
+├── CMakeLists.txt
+├── build # 编译文件夹
+├── image # 存放图片的文件夹
+├── infer.cc
+└── model # 存放模型文件的文件夹
+```
+
+## 编译
+
+### 编译并拷贝SDK到thirdpartys文件夹
+
+请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录.
+
+### 拷贝模型文件,以及配置文件至model文件夹
+将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md)
+将转换后的SOPHGO bmodel模型文件拷贝至model中
+
+### 准备测试图片至image文件夹
+```bash
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+cp 000000014439.jpg ./images
+```
+
+### 编译example
+
+```bash
+cd build
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
+make
+```
+
+## 运行例程
+
+```bash
+./infer_demo model images/000000014439.jpg
+```
+
+
+- [模型介绍](../../)
+- [模型转换](../)
diff --git a/examples/vision/detection/yolov5/sophgo/python/README.md b/examples/vision/detection/yolov5/sophgo/python/README.md
index ccf8ed7e8..3f876ccca 100644
--- a/examples/vision/detection/yolov5/sophgo/python/README.md
+++ b/examples/vision/detection/yolov5/sophgo/python/README.md
@@ -1,23 +1,24 @@
-# YOLOv5 Python部署示例
+English | [简体中文](README_CN.md)
+# YOLOv5 Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/sophgo.md)
-本目录下提供`infer.py`快速完成 YOLOv5 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
+`infer.py` in this directory provides a quick example of deployment of the YOLOv5 model on SOPHGO TPU. Please run the following script:
```bash
-# 下载部署示例代码
+# Download the sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov5/sophgo/python
-# 下载图片
+# Download images.
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-# 推理
+# Inference.
python3 infer.py --model_file ./bmodel/yolov5s_1684x_f32.bmodel --image 000000014439.jpg
-# 运行完成后返回结果如下所示
+# The returned result.
DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
268.480255,81.053055, 298.694794, 169.439026, 0.896569, 0
104.731163,45.661972, 127.583824, 93.449387, 0.869531, 0
@@ -41,6 +42,6 @@ DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
101.406250,152.562500, 118.890625, 169.140625, 0.253891, 24
```
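+
+For reference, a minimal sketch of the corresponding Python calls is shown below. It is only an illustration: the `use_sophgo()` runtime switch and the `ModelFormat.SOPHGO` argument are assumptions; `infer.py` in this directory is the authoritative version.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Assumed SOPHGO runtime configuration; verify against infer.py.
+option = fd.RuntimeOption()
+option.use_sophgo()
+
+# The bmodel already contains the weights, so no separate params file is passed.
+model = fd.vision.detection.YOLOv5(
+    "./bmodel/yolov5s_1684x_f32.bmodel",
+    "",
+    runtime_option=option,
+    model_format=fd.ModelFormat.SOPHGO)
+
+im = cv2.imread("000000014439.jpg")
+result = model.predict(im)
+print(result)  # DetectionResult with boxes, scores and label ids as above
+```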
-## 其它文档
-- [YOLOv5 C++部署](../cpp)
-- [转换YOLOv5 SOPHGO模型文档](../README.md)
+## Other Documents
+- [YOLOv5 C++ Deployment](../cpp)
+- [Converting YOLOv5 SOPHGO model](../README.md)
diff --git a/examples/vision/detection/yolov5/sophgo/python/README_CN.md b/examples/vision/detection/yolov5/sophgo/python/README_CN.md
new file mode 100644
index 000000000..69a2ed4af
--- /dev/null
+++ b/examples/vision/detection/yolov5/sophgo/python/README_CN.md
@@ -0,0 +1,47 @@
+[English](README.md) | 简体中文
+# YOLOv5 Python部署示例
+
+在部署前,需确认以下步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
+
+本目录下提供`infer.py`快速完成 YOLOv5 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/detection/yolov5/sophgo/python
+
+# 下载图片
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# 推理
+python3 infer.py --model_file ./bmodel/yolov5s_1684x_f32.bmodel --image 000000014439.jpg
+
+# 运行完成后返回结果如下所示
+DetectionResult: [xmin, ymin, xmax, ymax, score, label_id]
+268.480255,81.053055, 298.694794, 169.439026, 0.896569, 0
+104.731163,45.661972, 127.583824, 93.449387, 0.869531, 0
+378.909363,39.750137, 395.608643, 84.243454, 0.868430, 0
+158.552979,80.361511, 199.185760, 168.181915, 0.842988, 0
+414.375305,90.948090, 506.321899, 280.405182, 0.835842, 0
+364.003448,56.608932, 381.978607, 115.968216, 0.815136, 0
+351.725128,42.635330, 366.910309, 98.048386, 0.808936, 0
+505.888306,114.366791, 593.124878, 275.995270, 0.801361, 0
+327.708618,38.363693, 346.849915, 80.893021, 0.794725, 0
+583.493408,114.532883, 612.354614, 175.873535, 0.760649, 0
+186.470657,44.941360, 199.664505, 61.037643, 0.632591, 0
+169.615891,48.014603, 178.141556, 60.888596, 0.613938, 0
+25.810200,117.199692, 59.888783, 152.850128, 0.590614, 0
+352.145294,46.712723, 381.946075, 106.752151, 0.505329, 0
+1.875000,150.734375, 37.968750, 173.781250, 0.404573, 24
+464.657288,15.901413, 472.512939, 34.116409, 0.346033, 0
+64.625000,135.171875, 84.500000, 154.406250, 0.332831, 24
+57.812500,151.234375, 103.000000, 174.156250, 0.332566, 24
+165.906250,88.609375, 527.906250, 339.953125, 0.259424, 33
+101.406250,152.562500, 118.890625, 169.140625, 0.253891, 24
+```
+
+## 其它文档
+- [YOLOv5 C++部署](../cpp)
+- [转换YOLOv5 SOPHGO模型文档](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/a311d/cpp/README.md b/examples/vision/segmentation/paddleseg/a311d/cpp/README.md
index 8bd94e646..538808e83 100755
--- a/examples/vision/segmentation/paddleseg/a311d/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/a311d/cpp/README.md
@@ -1,28 +1,29 @@
-# PP-LiteSeg 量化模型 C++ 部署示例
+English | [简体中文](README_CN.md)
+# PP-LiteSeg Quantized Model C++ Deployment Example
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 A311D 上的部署推理加速。
+`infer.cc` in this directory helps you quickly deploy the quantized PP-LiteSeg model on A311D with inference acceleration.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For the software and hardware environment, and the cross-compile environment, please refer to [FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction)
-### 模型准备
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
-2. 用户可以使用 FastDeploy 提供的一键模型自动化压缩工具,自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的 deploy.yaml 文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
-3. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+### Model Preparations
+1. You can directly use the quantized model provided by FastDeploy for deployment.
+2. You can use the one-click automatic model compression tool provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized model still needs the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
+3. The model requires heterogeneous computation. Please refer to [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the model, you can first test with the heterogeneous computing file we provide to verify whether the accuracy meets your requirements.
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information on quantization, please refer to [Model Quantization](../../quantize/README.md).
-## 在 A311D 上部署量化后的 PP-LiteSeg 分割模型
-请按照以下步骤完成在 A311D 上部署 PP-LiteSeg 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized PP-LiteSeg Segmentation model on A311D
+Please follow these steps to complete the deployment of the PP-LiteSeg quantization model on A311D.
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run this line:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
mkdir models && mkdir images
@@ -33,26 +34,26 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following lines:
```bash
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After a successful build, an install folder is generated, containing a runnable demo and the libraries required for deployment.
```
-5. 基于 adb 工具部署 PP-LiteSeg 分割模型到晶晨 A311D,可使用如下命令:
+5. Deploy the PP-LiteSeg segmentation model to A311D based on adb. You can run the following lines:
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# Command format: bash run_with_adb.sh <demo to run> <model path> <image path> <device DEVICE_ID>
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
-部署成功后运行结果如下:
+After successful deployment, the result is as follows:
-需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on A311D needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md).
diff --git a/examples/vision/segmentation/paddleseg/a311d/cpp/README_CN.md b/examples/vision/segmentation/paddleseg/a311d/cpp/README_CN.md
new file mode 100644
index 000000000..a9528e940
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/a311d/cpp/README_CN.md
@@ -0,0 +1,59 @@
+[English](README.md) | 简体中文
+# PP-LiteSeg 量化模型 C++ 部署示例
+
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 A311D 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+
+### 模型准备
+1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+2. 用户可以使用 FastDeploy 提供的一键模型自动化压缩工具,自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的 deploy.yaml 文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+3. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 A311D 上部署量化后的 PP-LiteSeg 分割模型
+请按照以下步骤完成在 A311D 上部署 PP-LiteSeg 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddle-lite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
+mkdir models && mkdir images
+wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
+tar -xvf ppliteseg.tar.gz
+cp -r ppliteseg models
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+cp -r cityscapes_demo.png images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 PP-LiteSeg 分割模型到晶晨 A311D,可使用如下命令:
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/segmentation/paddleseg/a311d/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
+```
+
+部署成功后运行结果如下:
+
+
+
+需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/segmentation/paddleseg/android/README.md b/examples/vision/segmentation/paddleseg/android/README.md
index f5fc5cfa7..0d845f2a0 100644
--- a/examples/vision/segmentation/paddleseg/android/README.md
+++ b/examples/vision/segmentation/paddleseg/android/README.md
@@ -1,97 +1,98 @@
-# 目标检测 PaddleSeg Android Demo 使用文档
+English | [简体中文](README_CN.md)
+# PaddleSeg Android Demo
-在 Android 上实现实时的人像分割功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
+This demo implements real-time portrait segmentation on Android. It is easy to use and open; for example, you can run your own trained model in the demo.
-## 环境准备
+## Environment Preparations
-1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
-2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+1. Install Android Studio locally. For details, see the [Android Studio official website](https://developer.android.com/studio).
+2. Get an Android phone and turn on USB debugging mode. How to turn it on: `Phone Settings -> Find Developer Options -> Turn on Developer Options and USB Debug Mode`.
-## 部署步骤
+## Deployment Steps
-1. 目标检测 PaddleSeg Demo 位于 `fastdeploy/examples/vision/segmentation/paddleseg/android` 目录
-2. 用 Android Studio 打开 paddleseg/android 工程
-3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+1. The PaddleSeg demo is located in the `fastdeploy/examples/vision/segmentation/paddleseg/android` directory.
+2. Open the paddleseg/android project with Android Studio.
+3. Connect your phone to your computer, turn on USB debugging and file transfer mode, and connect your own mobile device in Android Studio (your phone needs to allow software installation from USB).
-> **注意:**
->> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
+> **Notes:**
+>> If you encounter an NDK configuration error while importing, compiling or running the project, please open `File > Project Structure > SDK Location` and change `Android SDK location` to the path of the SDK configured on your machine.
-4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
-成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的人物并绘制mask;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
+4. Click the Run button to automatically compile the APP and install it to your phone. (The process will automatically download the pre-compiled FastDeploy Android library and model files, internet connection required.)
+After success, the result is as follows. Figure 1: the APP installed on the phone; Figure 2: the APP after opening, which automatically recognizes the person in the picture and draws a mask; Figure 3: the APP settings page, where you can click the settings icon in the upper right corner and try different options.
-| APP 图标 | APP 效果 | APP设置项
+| APP icon | APP effect | APP setting options |
| --- | --- | --- |
| (APP icon screenshot) | (APP demo screenshot) | (APP settings screenshot) |
-## PaddleSegModel Java API 说明
-- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下:
- - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
- - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
- - configFile: String, 模型推理的预处理配置文件,如 deploy.yml
- - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+## PaddleSegModel Java API Introduction
+- Model initialization API: the model can be initialized in two ways, either directly through the constructor or by calling the init function at the appropriate point in the program. The PaddleSegModel initialization parameters are described as follows:
+ - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel.
+ - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams.
+ - configFile: String, preprocessing configuration file of model inference, e.g. deploy.yml.
+ - option: RuntimeOption, optional, model initialization option. If this parameter is not passed, the default runtime option will be used.
```java
-// 构造函数: constructor w/o label file
-public PaddleSegModel(); // 空构造函数,之后可以调用init初始化
+// Constructor w/o label file
+public PaddleSegModel(); // Empty constructor; call init later to initialize.
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
-// 手动调用init初始化: call init manually w/o label file
+// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```
-- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+- Model prediction API: the prediction API includes a direct prediction API and an API with visualization. Direct prediction means that only the inference result is computed; no image is saved and no result is rendered to a Bitmap. Prediction with visualization means that the result is predicted and visualized, the visualized image is saved to the specified path, and the result is rendered to a Bitmap (currently Bitmaps in ARGB8888 format are supported), which can later be displayed in the camera view.
```java
-// 直接预测:不保存图片以及不渲染结果到Bitmap上
+// Directly predict: do not save images or render result to Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap);
-// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+// Predict and visualize: predict the result and visualize it, and save the visualized image to the specified path, and render the result to Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
-public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // 只渲染 不保存图片
-// 修改result,而非返回result,关注性能的用户可以将以下接口与SegmentationResult的CxxBuffer一起使用
+public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Only rendering images without saving.
+// Modify the result in place instead of returning it. For better performance, use the following interfaces together with the CxxBuffer in SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```
-- 设置竖屏或横屏模式: 对于 PP-HumanSeg系列模型,必须要调用该方法设置竖屏模式为true.
+- Set vertical or horizontal mode: for the PP-HumanSeg series models, this method must be called to set the vertical screen mode to true.
```java
public void setVerticalScreenFlag(boolean flag);
```
-- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+- Model resource release API: call release() to release model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
-public boolean release(); // 释放native资源
-public boolean initialized(); // 检查是否初始化成功
+public boolean release(); // Release native resources.
+public boolean initialized(); // Check if initialization is successful.
```
-- RuntimeOption设置说明
+- Runtime Option Setting
```java
-public void enableLiteFp16(); // 开启fp16精度推理
-public void disableLiteFP16(); // 关闭fp16精度推理
-public void setCpuThreadNum(int threadNum); // 设置线程数
-public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
-public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+public void enableLiteFp16(); // Enable fp16 precision inference
+public void disableLiteFP16(); // Disable fp16 precision inference
+public void setCpuThreadNum(int threadNum); // Set number of threads.
+public void setLitePowerMode(LitePowerMode mode); // Set power mode.
+public void setLitePowerMode(String modeStr); // Set power mode by string.
```
-- 模型结果SegmentationResult说明
+- Segmentation Result
```java
public class SegmentationResult {
- public int[] mLabelMap; // 预测到的label map 每个像素位置对应一个label HxW
- public float[] mScoreMap; // 预测到的得分 map 每个像素位置对应一个score HxW
- public long[] mShape; // label map实际的shape (H,W)
- public boolean mContainScoreMap = false; // 是否包含 score map
- // 用户可以选择直接使用CxxBuffer,而非通过JNI拷贝到Java层,
- // 该方式可以一定程度上提升性能
- public void setCxxBufferFlag(boolean flag); // 设置是否为CxxBuffer模式
- public boolean releaseCxxBuffer(); // 手动释放CxxBuffer!!!
- public boolean initialized(); // 检测结果是否有效
+ public int[] mLabelMap; // The predicted label map, each pixel position corresponds to a label HxW.
+ public float[] mScoreMap; // The predicted score map, each pixel position corresponds to a score HxW.
+ public long[] mShape; // The real shape(H,W) of label map.
+ public boolean mContainScoreMap = false; // Whether score map is included.
+ // You can choose to use CxxBuffer directly instead of copying it to JAVA layer through JNI.
+ // This method can improve performance to some extent.
+ public void setCxxBufferFlag(boolean flag); // Set whether the mode is CxxBuffer.
+ public boolean releaseCxxBuffer(); // Release CxxBuffer manually!!!
+ public boolean initialized(); // Check if the result is valid.
}
```
-其他参考:C++/Python对应的SegmentationResult说明: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
+See also the corresponding C++/Python SegmentationResult description: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md).
-- 模型调用示例1:使用构造函数以及默认的RuntimeOption
+- Model calling example 1: using the constructor and the default RuntimeOption:
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
@@ -100,77 +101,77 @@ import android.opengl.GLES20;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
-// 初始化模型
+// Initialise model.
PaddleSegModel model = new PaddleSegModel(
"portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel",
"portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams",
"portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml");
-// 如果摄像头为竖屏模式,PP-HumanSeg系列需要设置改标记
+// If the camera is in portrait mode, this flag must be set for the PP-HumanSeg series.
model.setVerticalScreenFlag(true);
-// 读取图片: 以下仅为读取Bitmap的伪代码
+// Read Bitmaps: The following is the pseudo code of reading the Bitmap.
ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
-// 模型推理
+// Model inference.
SegmentationResult result = new SegmentationResult();
result.setCxxBufferFlag(true);
model.predict(ARGB8888ImageBitmap, result);
-// 释放CxxBuffer
+// Release CxxBuffer.
result.releaseCxxBuffer();
-// 或直接预测返回 SegmentationResult
+// Or return SegmentationResult directly.
SegmentationResult result = model.predict(ARGB8888ImageBitmap);
-// 释放模型资源
+// Release model resources.
model.release();
```
-- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
+- Model calling example 2: call the init function manually at the appropriate point in the program and customize the RuntimeOption.
```java
-// import 同上 ...
+// Imports are the same as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
-// 新建空模型
+// Create empty model.
PaddleSegModel model = new PaddleSegModel();
-// 模型路径
+// Model path.
String modelFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel";
String paramFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams";
String configFile = "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml";
-// 指定RuntimeOption
+// Specify RuntimeOption.
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
-// 如果摄像头为竖屏模式,PP-HumanSeg系列需要设置改标记
+// If the camera is in portrait mode, this flag must be set for the PP-HumanSeg series.
model.setVerticalScreenFlag(true);
-// 使用init函数初始化
+// Initialise with the init function.
model.init(modelFile, paramFile, configFile, option);
-// Bitmap读取、模型预测、资源释放 同上 ...
+// Bitmap reading, model prediction and resource release are the same as above ...
```
-更详细的用法请参考 [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java) 中的用法
+For details, please refer to [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java).
-## 替换 FastDeploy SDK和模型
-替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`。
-- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
- - [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+## Replace the FastDeploy SDK and Model
+Replacing the FastDeploy prediction library and model is very simple. The prediction library is located at `app/libs/fastdeploy-android-sdk-xxx.aar`, where `xxx` is the version of the prediction library you are currently using. The model is located at `app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`.
+- Replace FastDeploy Android SDK: Download or compile the latest FastDeploy Android SDK, unzip it and put it in the `app/libs` directory. For details please refer to:
+ - [Use FastDeploy Java SDK on Android](../../../../../java/android/)
-- 替换PaddleSeg模型的步骤:
- - 将您的PaddleSeg模型放在 `app/src/main/assets/models` 目录下;
- - 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
+- Steps to replace the PaddleSeg model:
+ - Put your PaddleSeg model in `app/src/main/assets/models`;
+ - Modify the model path in `app/src/main/res/values/strings.xml`, such as:
```xml
-
+
models/human_pp_humansegv1_lite_192x192_inference_model
```
-## 更多参考文档
-如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
-- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
-- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+## Other Documents
+If you are interested in more FastDeploy Java API documents and how to access the FastDeploy C++ API via JNI, you can refer to the following:
+- [Use FastDeploy Java SDK on Android](../../../../../java/android/)
+- [Use FastDeploy C++ SDK on Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/segmentation/paddleseg/android/README_CN.md b/examples/vision/segmentation/paddleseg/android/README_CN.md
new file mode 100644
index 000000000..eb683bdfa
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/android/README_CN.md
@@ -0,0 +1,177 @@
+[English](README.md) | 简体中文
+# 目标检测 PaddleSeg Android Demo 使用文档
+
+在 Android 上实现实时的人像分割功能,此 Demo 有很好的的易用性和开放性,如在 Demo 中跑自己训练好的模型等。
+
+## 环境准备
+
+1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)。
+2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
+
+## 部署步骤
+
+1. 目标检测 PaddleSeg Demo 位于 `fastdeploy/examples/vision/segmentation/paddleseg/android` 目录
+2. 用 Android Studio 打开 paddleseg/android 工程
+3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
+
+
+
+
+
+> **注意:**
+>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod SDK location` 为您本机配置的 SDK 所在路径。
+
+4. 点击 Run 按钮,自动编译 APP 并安装到手机。(该过程会自动下载预编译的 FastDeploy Android 库 以及 模型文件,需要联网)
+成功后效果如下,图一:APP 安装到手机;图二: APP 打开后的效果,会自动识别图片中的人物并绘制mask;图三:APP设置选项,点击右上角的设置图片,可以设置不同选项进行体验。
+
+| APP 图标 | APP 效果 | APP设置项
+ | --- | --- | --- |
+ |
|
|
|
+
+
+## PaddleSegModel Java API 说明
+- 模型初始化 API: 模型初始化API包含两种方式,方式一是通过构造函数直接初始化;方式二是,通过调用init函数,在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下:
+ - modelFile: String, paddle格式的模型文件路径,如 model.pdmodel
+ - paramFile: String, paddle格式的参数文件路径,如 model.pdiparams
+ - configFile: String, 模型推理的预处理配置文件,如 deploy.yml
+ - option: RuntimeOption,可选参数,模型初始化option。如果不传入该参数则会使用默认的运行时选项。
+
+```java
+// 构造函数: constructor w/o label file
+public PaddleSegModel(); // 空构造函数,之后可以调用init初始化
+public PaddleSegModel(String modelFile, String paramsFile, String configFile);
+public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
+// 手动调用init初始化: call init manually w/o label file
+public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
+```
+- 模型预测 API:模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指,不保存图片以及不渲染结果到Bitmap上,仅预测推理结果。预测并且可视化是指,预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
+```java
+// 直接预测:不保存图片以及不渲染结果到Bitmap上
+public SegmentationResult predict(Bitmap ARGB8888Bitmap);
+// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap上
+public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
+public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // 只渲染 不保存图片
+// 修改result,而非返回result,关注性能的用户可以将以下接口与SegmentationResult的CxxBuffer一起使用
+public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result);
+public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
+public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
+```
+- 设置竖屏或横屏模式: 对于 PP-HumanSeg系列模型,必须要调用该方法设置竖屏模式为true.
+```java
+public void setVerticalScreenFlag(boolean flag);
+```
+- 模型资源释放 API:调用 release() API 可以释放模型资源,返回true表示释放成功,false表示失败;调用 initialized() 可以判断模型是否初始化成功,true表示初始化成功,false表示失败。
+```java
+public boolean release(); // 释放native资源
+public boolean initialized(); // 检查是否初始化成功
+```
+
+- RuntimeOption设置说明
+```java
+public void enableLiteFp16(); // 开启fp16精度推理
+public void disableLiteFP16(); // 关闭fp16精度推理
+public void setCpuThreadNum(int threadNum); // 设置线程数
+public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
+public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
+```
+
+- 模型结果SegmentationResult说明
+```java
+public class SegmentationResult {
+ public int[] mLabelMap; // 预测到的label map 每个像素位置对应一个label HxW
+ public float[] mScoreMap; // 预测到的得分 map 每个像素位置对应一个score HxW
+ public long[] mShape; // label map实际的shape (H,W)
+ public boolean mContainScoreMap = false; // 是否包含 score map
+ // 用户可以选择直接使用CxxBuffer,而非通过JNI拷贝到Java层,
+ // 该方式可以一定程度上提升性能
+ public void setCxxBufferFlag(boolean flag); // 设置是否为CxxBuffer模式
+ public boolean releaseCxxBuffer(); // 手动释放CxxBuffer!!!
+ public boolean initialized(); // 检测结果是否有效
+}
+```
+其他参考:C++/Python对应的SegmentationResult说明: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
+
+
+- 模型调用示例1:使用构造函数以及默认的RuntimeOption
+```java
+import java.nio.ByteBuffer;
+import android.graphics.Bitmap;
+import android.opengl.GLES20;
+
+import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
+import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
+
+// 初始化模型
+PaddleSegModel model = new PaddleSegModel(
+ "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel",
+ "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams",
+ "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml");
+
+// 如果摄像头为竖屏模式,PP-HumanSeg系列需要设置改标记
+model.setVerticalScreenFlag(true);
+
+// 读取图片: 以下仅为读取Bitmap的伪代码
+ByteBuffer pixelBuffer = ByteBuffer.allocate(width * height * 4);
+GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
+Bitmap ARGB8888ImageBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
+ARGB8888ImageBitmap.copyPixelsFromBuffer(pixelBuffer);
+
+// 模型推理
+SegmentationResult result = new SegmentationResult();
+result.setCxxBufferFlag(true);
+
+model.predict(ARGB8888ImageBitmap, result);
+
+// 释放CxxBuffer
+result.releaseCxxBuffer();
+
+// 或直接预测返回 SegmentationResult
+SegmentationResult result = model.predict(ARGB8888ImageBitmap);
+
+// 释放模型资源
+model.release();
+```
+
+- 模型调用示例2: 在合适的程序节点,手动调用init,并自定义RuntimeOption
+```java
+// import 同上 ...
+import com.baidu.paddle.fastdeploy.RuntimeOption;
+import com.baidu.paddle.fastdeploy.LitePowerMode;
+import com.baidu.paddle.fastdeploy.vision.SegmentationResult;
+import com.baidu.paddle.fastdeploy.vision.segmentation.PaddleSegModel;
+// 新建空模型
+PaddleSegModel model = new PaddleSegModel();
+// 模型路径
+String modelFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdmodel";
+String paramFile = "portrait_pp_humansegv2_lite_256x144_inference_model/model.pdiparams";
+String configFile = "portrait_pp_humansegv2_lite_256x144_inference_model/deploy.yml";
+// 指定RuntimeOption
+RuntimeOption option = new RuntimeOption();
+option.setCpuThreadNum(2);
+option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
+option.enableLiteFp16();
+// 如果摄像头为竖屏模式,PP-HumanSeg系列需要设置改标记
+model.setVerticalScreenFlag(true);
+// 使用init函数初始化
+model.init(modelFile, paramFile, configFile, option);
+// Bitmap读取、模型预测、资源释放 同上 ...
+```
+更详细的用法请参考 [SegmentationMainActivity](./app/src/main/java/com/baidu/paddle/fastdeploy/app/examples/segmentation/SegmentationMainActivity.java) 中的用法
+
+## 替换 FastDeploy SDK和模型
+替换FastDeploy预测库和模型的步骤非常简单。预测库所在的位置为 `app/libs/fastdeploy-android-sdk-xxx.aar`,其中 `xxx` 表示当前您使用的预测库版本号。模型所在的位置为,`app/src/main/assets/models/portrait_pp_humansegv2_lite_256x144_inference_model`。
+- 替换FastDeploy Android SDK: 下载或编译最新的FastDeploy Android SDK,解压缩后放在 `app/libs` 目录下;详细配置文档可参考:
+ - [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+
+- 替换PaddleSeg模型的步骤:
+ - 将您的PaddleSeg模型放在 `app/src/main/assets/models` 目录下;
+ - 修改 `app/src/main/res/values/strings.xml` 中模型路径的默认值,如:
+```xml
+
+models/human_pp_humansegv1_lite_192x192_inference_model
+```
+
+## 更多参考文档
+如果您想知道更多的FastDeploy Java API文档以及如何通过JNI来接入FastDeploy C++ API感兴趣,可以参考以下内容:
+- [在 Android 中使用 FastDeploy Java SDK](../../../../../java/android/)
+- [在 Android 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/segmentation/paddleseg/quantize/README.md b/examples/vision/segmentation/paddleseg/quantize/README.md
index 83a76e384..ab0fa77fc 100755
--- a/examples/vision/segmentation/paddleseg/quantize/README.md
+++ b/examples/vision/segmentation/paddleseg/quantize/README.md
@@ -1,36 +1,37 @@
-# PaddleSeg 量化模型部署
-FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
-用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
+English | [简体中文](README_CN.md)
+# PaddleSeg Quantized Model Deployment
+FastDeploy already supports the deployment of quantized models and provides a tool to automatically compress models with one click.
+You can use this one-click automatic model compression tool to quantize and deploy models yourself, or directly download the quantized models provided by FastDeploy for deployment.
-## FastDeploy一键模型自动化压缩工具
-FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
-详细教程请见: [一键模型自动化压缩工具](../../../../../tools/common_tools/auto_compression/)
-注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
+## FastDeploy One-Click Automatic Model Compression Tool
+FastDeploy provides a one-click automatic model compression tool that can quantize a model simply from a configuration file.
+For details, please refer to the [one-click automatic compression tool](../../../../../tools/common_tools/auto_compression/).
+Note: the quantized model still needs the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.
-## 下载量化完成的PaddleSeg模型
-用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)
+## Download the Quantized PaddleSeg Model
+You can also directly download the quantized models in the following table for deployment (click model name to download).
-Benchmark表格说明:
-- Runtime时延为模型在各种Runtime上的推理时延,包含CPU->GPU数据拷贝,GPU推理,GPU->CPU数据拷贝时间. 不包含模型各自的前后处理时间.
-- 端到端时延为模型在实际推理场景中的时延, 包含模型的前后处理.
-- 所测时延均为推理1000次后求得的平均值, 单位是毫秒.
-- INT8 + FP16 为在推理INT8量化模型的同时, 给Runtime 开启FP16推理选项
-- INT8 + FP16 + PM, 为在推理INT8量化模型和开启FP16的同时, 开启使用Pinned Memory的选项,可加速GPU->CPU数据拷贝的速度
-- 最大加速比, 为FP32时延除以INT8推理的最快时延,得到最大加速比.
-- 策略为量化蒸馏训练时, 采用少量无标签数据集训练得到量化模型, 并在全量验证集上验证精度, INT8精度并不代表最高的INT8精度.
-- CPU为Intel(R) Xeon(R) Gold 6271C, 所有测试中固定CPU线程数为1. GPU为Tesla T4, TensorRT版本8.4.15.
+Note:
+- Runtime latency is the inference latency of the model on various runtimes, including CPU->GPU data copy, GPU inference, and GPU->CPU data copy time. It does not include the models' own pre- and post-processing time.
+- End-to-end latency is the latency of the model in an actual inference scenario, including the model's pre- and post-processing.
+- The measured latencies are averaged over 1000 inferences, in milliseconds.
+- INT8 + FP16 means the FP16 inference option is enabled for the runtime while running the INT8 quantized model.
+- INT8 + FP16 + PM means Pinned Memory is enabled in addition to INT8 + FP16, which speeds up GPU->CPU data copies.
+- The maximum speedup ratio is obtained by dividing the FP32 latency by the fastest INT8 inference latency.
+- The strategy is quantization-aware distillation training: a small amount of unlabeled data is used to train the quantized model, and accuracy is verified on the full validation set. The reported INT8 accuracy is not necessarily the highest achievable INT8 accuracy.
+- The CPU is Intel(R) Xeon(R) Gold 6271C with a fixed CPU thread count of 1 in all tests. The GPU is Tesla T4, TensorRT version 8.4.15.
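+For example, in the Runtime benchmark below, the maximum speedup for PP-LiteSeg-T(STDC1)-cityscapes is 1138.04 ms / 602.62 ms ≈ 1.89, which is the value reported in the Max Speedup column.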
#### Runtime Benchmark
-| 模型 |推理后端 |部署硬件 | FP32 Runtime时延 | INT8 Runtime时延 | INT8 + FP16 Runtime时延 | INT8+FP16+PM Runtime时延 | 最大加速比 | FP32 mIoU | INT8 mIoU | 量化方式 |
+| Model |Inference Backends | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8+FP16+PM Runtime Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
-| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar)) | Paddle Inference | CPU | 1138.04| 602.62 |None|None | 1.89 |77.37 | 71.62 |量化蒸馏训练 |
+| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 1138.04| 602.62 |None|None | 1.89 |77.37 | 71.62 |Quantization-aware Distillation Training |
-#### 端到端 Benchmark
-| 模型 |推理后端 |部署硬件 | FP32 End2End时延 | INT8 End2End时延 | INT8 + FP16 End2End时延 | INT8+FP16+PM End2End时延 | 最大加速比 | FP32 mIoU | INT8 mIoU | 量化方式 |
+#### End to End Benchmark
+| Model |Inference Backends | Hardware | FP32 End2End Latency | INT8 End2End Latency | INT8 + FP16 End2End Latency | INT8+FP16+PM End2End Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Method |
| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
-| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar)) | Paddle Inference | CPU | 4726.65| 4134.91|None|None | 1.14 |77.37 | 71.62 |量化蒸馏训练 |
+| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 4726.65| 4134.91|None|None | 1.14 |77.37 | 71.62 |Quantization-aware Distillation Training|
-## 详细部署文档
+## Detailed Deployment Documents
-- [Python部署](python)
-- [C++部署](cpp)
+- [Python Deployment](python)
+- [C++ Deployment](cpp)
diff --git a/examples/vision/segmentation/paddleseg/quantize/README_CN.md b/examples/vision/segmentation/paddleseg/quantize/README_CN.md
new file mode 100644
index 000000000..a35f1d99d
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/quantize/README_CN.md
@@ -0,0 +1,37 @@
+[English](README.md) | 简体中文
+# PaddleSeg 量化模型部署
+FastDeploy已支持部署量化模型,并提供一键模型自动化压缩的工具.
+用户可以使用一键模型自动化压缩工具,自行对模型量化后部署, 也可以直接下载FastDeploy提供的量化模型进行部署.
+
+## FastDeploy一键模型自动化压缩工具
+FastDeploy 提供了一键模型自动化压缩工具, 能够简单地通过输入一个配置文件, 对模型进行量化.
+详细教程请见: [一键模型自动化压缩工具](../../../../../tools/common_tools/auto_compression/)
+注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。
+
+## 下载量化完成的PaddleSeg模型
+用户也可以直接下载下表中的量化模型进行部署.(点击模型名字即可下载)
+
+Benchmark表格说明:
+- Runtime时延为模型在各种Runtime上的推理时延,包含CPU->GPU数据拷贝,GPU推理,GPU->CPU数据拷贝时间. 不包含模型各自的前后处理时间.
+- 端到端时延为模型在实际推理场景中的时延, 包含模型的前后处理.
+- 所测时延均为推理1000次后求得的平均值, 单位是毫秒.
+- INT8 + FP16 为在推理INT8量化模型的同时, 给Runtime 开启FP16推理选项
+- INT8 + FP16 + PM, 为在推理INT8量化模型和开启FP16的同时, 开启使用Pinned Memory的选项,可加速GPU->CPU数据拷贝的速度
+- 最大加速比, 为FP32时延除以INT8推理的最快时延,得到最大加速比.
+- 策略为量化蒸馏训练时, 采用少量无标签数据集训练得到量化模型, 并在全量验证集上验证精度, INT8精度并不代表最高的INT8精度.
+- CPU为Intel(R) Xeon(R) Gold 6271C, 所有测试中固定CPU线程数为1. GPU为Tesla T4, TensorRT版本8.4.15.
+
+#### Runtime Benchmark
+| 模型 |推理后端 |部署硬件 | FP32 Runtime时延 | INT8 Runtime时延 | INT8 + FP16 Runtime时延 | INT8+FP16+PM Runtime时延 | 最大加速比 | FP32 mIoU | INT8 mIoU | 量化方式 |
+| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
+| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 1138.04| 602.62 |None|None | 1.89 |77.37 | 71.62 |量化蒸馏训练 |
+
+#### 端到端 Benchmark
+| 模型 |推理后端 |部署硬件 | FP32 End2End时延 | INT8 End2End时延 | INT8 + FP16 End2End时延 | INT8+FP16+PM End2End时延 | 最大加速比 | FP32 mIoU | INT8 mIoU | 量化方式 |
+| ------------------- | -----------------|-----------| -------- |-------- |-------- | --------- |-------- |----- |----- |----- |
+| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 4726.65| 4134.91|None|None | 1.14 |77.37 | 71.62 |量化蒸馏训练 |
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/examples/vision/segmentation/paddleseg/quantize/cpp/README.md b/examples/vision/segmentation/paddleseg/quantize/cpp/README.md
index bd17ec634..9eb7c9146 100755
--- a/examples/vision/segmentation/paddleseg/quantize/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/quantize/cpp/README.md
@@ -1,31 +1,32 @@
-# PaddleSeg 量化模型 C++部署示例
-本目录下提供的`infer.cc`,可以帮助用户快速完成PaddleSeg量化模型在CPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PaddleSeg Quantized Model C++ Deployment Example
+`infer.cc` in this directory helps you quickly deploy the quantized PaddleSeg model on CPU with inference acceleration.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized model still needs the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
-## 以量化后的PP_LiteSeg_T_STDC1_cityscapes模型为例, 进行部署
-在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+## Take the Quantized PP_LiteSeg_T_STDC1_cityscapes Model as an example for Deployment
+Run the following commands in this directory to compile and deploy the quantized model. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0).
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-#下载FastDeloy提供的PP_LiteSeg_T_STDC1_cityscapes量化模型文件和测试图片
+# Download the PP_LiteSeg_T_STDC1_cityscapes quantized model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-# 在CPU上使用Paddle-Inference推理量化模型
+# Run the quantized model with Paddle Inference on CPU.
./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
```
diff --git a/examples/vision/segmentation/paddleseg/quantize/cpp/README_CN.md b/examples/vision/segmentation/paddleseg/quantize/cpp/README_CN.md
new file mode 100644
index 000000000..c4cde0b1f
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/quantize/cpp/README_CN.md
@@ -0,0 +1,32 @@
+[English](README.md) | 简体中文
+# PaddleSeg 量化模型 C++部署示例
+本目录下提供的`infer.cc`,可以帮助用户快速完成PaddleSeg量化模型在CPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+
+## 以量化后的PP_LiteSeg_T_STDC1_cityscapes模型为例, 进行部署
+在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+# 下载FastDeploy提供的PP_LiteSeg_T_STDC1_cityscapes量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
+tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# 在CPU上使用Paddle-Inference推理量化模型
+./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
+```
diff --git a/examples/vision/segmentation/paddleseg/quantize/python/README.md b/examples/vision/segmentation/paddleseg/quantize/python/README.md
index b9cc1c6c4..5607e1a80 100755
--- a/examples/vision/segmentation/paddleseg/quantize/python/README.md
+++ b/examples/vision/segmentation/paddleseg/quantize/python/README.md
@@ -1,28 +1,29 @@
-# PaddleSeg 量化模型 Python部署示例
-本目录下提供的`infer.py`,可以帮助用户快速完成PaddleSeg量化模型在CPU/GPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PaddleSeg Quantized Model Python Deployment Example
+`infer.py` in this directory helps you quickly complete accelerated inference deployment of a quantized PaddleSeg model on CPU/GPU.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and deploy the quantized model it produces. (Note: the quantized model still needs the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
-## 以量化后的PP_LiteSeg_T_STDC1_cityscapes模型为例, 进行部署
+## Take the Quantized PP_LiteSeg_T_STDC1_cityscapes Model as an Example for Deployment
```bash
-#下载部署示例代码
+# Download sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/segmentation/paddleseg/quantize/python
-#下载FastDeloy提供的PP_LiteSeg_T_STDC1_cityscapes量化模型文件和测试图片
+# Download the PP_LiteSeg_T_STDC1_cityscapes quantized model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-# 在CPU上使用Paddle-Inference推理量化模型
+# Run inference on the quantized model with Paddle Inference on CPU.
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT --image cityscapes_demo.png --device cpu --backend paddle
```
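+
+For reference, the core of `infer.py` can be sketched with FastDeploy's Python API roughly as follows. This is a minimal sketch for the CPU + Paddle Inference case shown above, assuming the standard exported file names (model.pdmodel, model.pdiparams, deploy.yaml); the exact backend-selection method name may differ between FastDeploy versions, and the real script additionally parses command-line arguments.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Select CPU and the Paddle Inference backend (the --device cpu --backend paddle case).
+option = fd.RuntimeOption()
+option.use_cpu()
+option.use_paddle_infer_backend()  # assumption: may be named use_paddle_backend() in older releases
+
+model_dir = "PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ"
+model = fd.vision.segmentation.PaddleSegModel(
+    model_dir + "/model.pdmodel",
+    model_dir + "/model.pdiparams",
+    model_dir + "/deploy.yaml",  # the deploy.yaml copied from the FP32 model folder
+    runtime_option=option)
+
+im = cv2.imread("cityscapes_demo.png")
+result = model.predict(im)
+
+# Overlay the segmentation result on the input image and save it.
+vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
+cv2.imwrite("visualized_result.png", vis_im)
+```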
diff --git a/examples/vision/segmentation/paddleseg/quantize/python/README_CN.md b/examples/vision/segmentation/paddleseg/quantize/python/README_CN.md
new file mode 100644
index 000000000..1975a84fe
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/quantize/python/README_CN.md
@@ -0,0 +1,29 @@
+[English](README.md) | 简体中文
+# PaddleSeg 量化模型 Python部署示例
+本目录下提供的`infer.py`,可以帮助用户快速完成PaddleSeg量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的deploy.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+
+
+## 以量化后的PP_LiteSeg_T_STDC1_cityscapes模型为例, 进行部署
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/segmentation/paddleseg/quantize/python
+
+# 下载FastDeploy提供的PP_LiteSeg_T_STDC1_cityscapes量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
+tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# 在CPU上使用Paddle-Inference推理量化模型
+python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT --image cityscapes_demo.png --device cpu --backend paddle
+
+```
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/README.md b/examples/vision/segmentation/paddleseg/rknpu2/README.md
index d083fa049..40606fee0 100644
--- a/examples/vision/segmentation/paddleseg/rknpu2/README.md
+++ b/examples/vision/segmentation/paddleseg/rknpu2/README.md
@@ -1,33 +1,34 @@
-# PaddleSeg 模型部署
+English | [简体中文](README_CN.md)
+# PaddleSeg Model Deployment
-## 模型版本说明
+## Model Version
- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
-目前FastDeploy使用RKNPU2推理PPSeg支持如下模型的部署:
+FastDeploy currently supports deploying the following PPSeg models with RKNPU2 inference:
-| 模型 | 参数文件大小 | 输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
+| Model | Parameter File Size | Input Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------|:---------|:-------|:------------|:---------------|
| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
-| [PP-HumanSegV1-Lite(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
-| [PP-HumanSegV2-Lite(通用人像分割模型)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
-| [PP-HumanSegV2-Mobile(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
-| [PP-HumanSegV1-Server(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
-| [Portait-PP-HumanSegV2_Lite(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
+| [PP-HumanSegV1-Lite(Universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
+| [PP-HumanSegV2-Lite(Universal portrait segmentation model)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
+| [PP-HumanSegV2-Mobile(Universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
+| [PP-HumanSegV1-Server(Universal portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
+| [Portrait-PP-HumanSegV2_Lite(Portrait segmentation model)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz)      | 3.6M  | 256x144  | 96.63% | -           | -              |
| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
-## 准备PaddleSeg部署模型以及转换模型
-RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下:
-* Paddle动态图模型转换为ONNX模型,请参考[PaddleSeg模型导出说明](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg)
-* ONNX模型转换RKNN模型的过程,请参考[转换文档](../../../../../docs/cn/faq/rknpu2/export.md)进行转换。
+## Prepare the PaddleSeg Deployment Model and Convert It
+Before deploying on RKNPU, the Paddle model needs to be converted to an RKNN model. The steps are as follows:
+* For converting the Paddle dynamic graph model to an ONNX model, please refer to [PaddleSeg Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg).
+* For converting the ONNX model to an RKNN model, please refer to the [Conversion document](../../../../../docs/en/faq/rknpu2/export.md).
-## 模型转换example
+## Model Conversion Example
-* [PPHumanSeg](./pp_humanseg.md)
+* [PPHumanSeg](./pp_humanseg_EN.md)
-## 详细部署文档
-- [RKNN总体部署教程](../../../../../docs/cn/faq/rknpu2/rknpu2.md)
-- [C++部署](cpp)
-- [Python部署](python)
+## Detailed Deployment Documents
+- [Overall RKNN Deployment Guidance](../../../../../docs/en/faq/rknpu2/rknpu2.md)
+- [Deploy with C++](cpp)
+- [Deploy with Python](python)
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/README_CN.md b/examples/vision/segmentation/paddleseg/rknpu2/README_CN.md
new file mode 100644
index 000000000..7d10f82f2
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/rknpu2/README_CN.md
@@ -0,0 +1,34 @@
+[English](README.md) | 简体中文
+# PaddleSeg 模型部署
+
+## 模型版本说明
+
+- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
+
+目前FastDeploy使用RKNPU2推理PPSeg支持如下模型的部署:
+
+| 模型 | 参数文件大小 | 输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
+|:---------------------------------------------------------------------------------------------------------------------------------------------|:-------|:---------|:-------|:------------|:---------------|
+| [Unet-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
+| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 77.04% | 77.73% | 77.46% |
+| [PP-HumanSegV1-Lite(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
+| [PP-HumanSegV2-Lite(通用人像分割模型)](https://bj.bcebos.com/paddle2onnx/libs/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
+| [PP-HumanSegV2-Mobile(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
+| [PP-HumanSegV1-Server(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
+| [Portait-PP-HumanSegV2_Lite(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
+| [FCN-HRNet-W18-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
+| [Deeplabv3-ResNet101-OS8-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
+
+## 准备PaddleSeg部署模型以及转换模型
+RKNPU部署模型前需要将Paddle模型转换成RKNN模型,具体步骤如下:
+* Paddle动态图模型转换为ONNX模型,请参考[PaddleSeg模型导出说明](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/contrib/PP-HumanSeg)
+* ONNX模型转换RKNN模型的过程,请参考[转换文档](../../../../../docs/cn/faq/rknpu2/export.md)进行转换。
+
+## 模型转换example
+
+* [PPHumanSeg](./pp_humanseg.md)
+
+## 详细部署文档
+- [RKNN总体部署教程](../../../../../docs/cn/faq/rknpu2/rknpu2.md)
+- [C++部署](cpp)
+- [Python部署](python)
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/cpp/README.md b/examples/vision/segmentation/paddleseg/rknpu2/cpp/README.md
index 5a9bf2dae..48c4646e2 100644
--- a/examples/vision/segmentation/paddleseg/rknpu2/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/rknpu2/cpp/README.md
@@ -1,30 +1,31 @@
-# PaddleSeg C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg Deployment Examples for C++
-本目录下用于展示PaddleSeg系列模型在RKNPU2上的部署,以下的部署过程以PPHumanSeg为例子。
+This directory demonstrates the deployment of PaddleSeg series models on RKNPU2. The following deployment process takes PPHumanSeg as an example.
-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:
-1. 软硬件环境满足要求
-2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+1. Hardware and software environment meets the requirements.
+2. Download the pre-compiled deployment repository or compile the FastDeploy repository from scratch according to the development environment.
-以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+For the above steps, please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).
-## 生成基本目录文件
+## Generate Basic Directory Files
-该例程由以下几个部分组成
+This example consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
-├── image # 存放图片的文件夹
+├── build # Compile Folder
+├── image # Folder for images
├── infer_cpu_npu.cc
├── infer_cpu_npu.h
├── main.cc
-├── model # 存放模型文件的文件夹
-└── thirdpartys # 存放sdk的文件夹
+├── model # Folder for models
+└── thirdpartys # Folder for sdk
```
-首先需要先生成目录结构
+First, create the directory structure:
```bash
mkdir build
mkdir images
@@ -32,24 +33,23 @@ mkdir model
mkdir thirdpartys
```
-## 编译
+## Compile
-### 编译并拷贝SDK到thirdpartys文件夹
+### Compile and Copy the SDK to the thirdpartys Folder
-请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
-fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
+Please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be created in the build directory; please move it to the thirdpartys directory.
-### 拷贝模型文件,以及配置文件至model文件夹
-在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
-转换为RKNN后的模型文件也需要拷贝至model,输入以下命令下载使用(模型文件为RK3588,RK3568需要重新[转换PPSeg RKNN模型](../README.md))。
+### Copy the Model and Configuration Files to the model Folder
+In the Paddle dynamic graph model -> Paddle static graph model -> ONNX model conversion process, an ONNX file and the corresponding yaml configuration file are generated. Please put the configuration file in the model folder.
+The model file converted to RKNN also needs to be copied into model. Run the following command to download it (the model file is for RK3588; for RK3568 you need to [re-convert the PPSeg RKNN model](../README.md)).
-### 准备测试图片至image文件夹
+### Prepare Test Images in the image Folder
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip -qo images.zip
```
-### 编译example
+### Compile example
```bash
cd build
@@ -58,17 +58,16 @@ make -j8
make install
```
-## 运行例程
+## Run the Example
```bash
cd ./build/install
./rknpu_test model/Portrait_PP_HumanSegV2_Lite_256x144_infer/ images/portrait_heng.jpg
```
-## 注意事项
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-需要先调用DisableNormalizeAndPermute(C++)或`disable_normalize_and_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
+## Notes
+On RKNPU the model input must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call DisableNormalizeAndPermute (C++) or disable_normalize_and_permute (Python) first to disable normalization and data layout conversion in the preprocessing stage.
-- [模型介绍](../../)
-- [Python部署](../python)
-- [转换PPSeg RKNN模型文档](../README.md)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [PPSeg RKNN Model Conversion](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/cpp/README_CN.md b/examples/vision/segmentation/paddleseg/rknpu2/cpp/README_CN.md
new file mode 100644
index 000000000..309d5f26c
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/rknpu2/cpp/README_CN.md
@@ -0,0 +1,73 @@
+[English](README.md) | 简体中文
+# PaddleSeg C++部署示例
+
+本目录下用于展示PaddleSeg系列模型在RKNPU2上的部署,以下的部署过程以PPHumanSeg为例子。
+
+在部署前,需确认以下两个步骤:
+
+1. 软硬件环境满足要求
+2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+
+以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+
+## 生成基本目录文件
+
+该例程由以下几个部分组成
+```text
+.
+├── CMakeLists.txt
+├── build # 编译文件夹
+├── image # 存放图片的文件夹
+├── infer_cpu_npu.cc
+├── infer_cpu_npu.h
+├── main.cc
+├── model # 存放模型文件的文件夹
+└── thirdpartys # 存放sdk的文件夹
+```
+
+首先需要先生成目录结构
+```bash
+mkdir build
+mkdir images
+mkdir model
+mkdir thirdpartys
+```
+
+## 编译
+
+### 编译并拷贝SDK到thirdpartys文件夹
+
+请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
+
+### 拷贝模型文件,以及配置文件至model文件夹
+在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
+转换为RKNN后的模型文件也需要拷贝至model,输入以下命令下载使用(模型文件为RK3588,RK3568需要重新[转换PPSeg RKNN模型](../README.md))。
+
+### 准备测试图片至image文件夹
+```bash
+wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
+unzip -qo images.zip
+```
+
+### 编译example
+
+```bash
+cd build
+cmake ..
+make -j8
+make install
+```
+
+## 运行例程
+
+```bash
+cd ./build/install
+./rknpu_test model/Portrait_PP_HumanSegV2_Lite_256x144_infer/ images/portrait_heng.jpg
+```
+
+## 注意事项
+RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisableNormalizeAndPermute(C++)或`disable_normalize_and_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [转换PPSeg RKNN模型文档](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg.md b/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg.md
index d012bffb6..2b14f6b9d 100644
--- a/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg.md
+++ b/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg.md
@@ -1,3 +1,4 @@
+[English](pp_humanseg_EN.md) | 简体中文
# PPHumanSeg模型部署
## 转换模型
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg_EN.md b/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg_EN.md
new file mode 100644
index 000000000..6870d32c7
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/rknpu2/pp_humanseg_EN.md
@@ -0,0 +1,81 @@
+English | [简体中文](pp_humanseg.md)
+# PPHumanSeg Model Deployment
+
+## Convert the Model
+The following takes Portrait-PP-HumanSegV2_Lite (a portrait segmentation model) as an example to show how to convert a PPSeg model to an RKNN model.
+
+```bash
+# Download Paddle2ONNX repository.
+git clone https://github.com/PaddlePaddle/Paddle2ONNX
+
+# Download the Paddle static graph model and fix its input shape.
+## Go to the directory of the tool that fixes the input shape of the Paddle static graph model.
+cd Paddle2ONNX/tools/paddle
+## Download and unzip the Paddle static graph model.
+wget https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz
+tar xvf Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz
+python paddle_infer_shape.py --model_dir Portrait_PP_HumanSegV2_Lite_256x144_infer/ \
+ --model_filename model.pdmodel \
+ --params_filename model.pdiparams \
+ --save_dir Portrait_PP_HumanSegV2_Lite_256x144_infer \
+ --input_shape_dict="{'x':[1,3,144,256]}"
+
+# Convert the static graph model to an ONNX model. Note that save_file here must match the archive name.
+paddle2onnx --model_dir Portrait_PP_HumanSegV2_Lite_256x144_infer \
+ --model_filename model.pdmodel \
+ --params_filename model.pdiparams \
+ --save_file Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer.onnx \
+ --enable_dev_version True
+
+# Convert ONNX model to RKNN model.
+# Copy the ONNX model directory to the FastDeploy root directory.
+cp -r ./Portrait_PP_HumanSegV2_Lite_256x144_infer /path/to/Fastdeploy
+# Convert the model; the RKNN model will be generated in the Portrait_PP_HumanSegV2_Lite_256x144_infer directory.
+python tools/rknpu2/export.py \
+ --config_path tools/rknpu2/config/Portrait_PP_HumanSegV2_Lite_256x144_infer.yaml \
+ --target_platform rk3588
+```
+
+## Modify the yaml Configuration File
+
+In the **Model Conversion Example** section, we fixed the input shape of the model, so the corresponding yaml file needs to be modified as follows:
+
+**The original yaml file**
+```yaml
+Deploy:
+ input_shape:
+ - -1
+ - 3
+ - -1
+ - -1
+ model: model.pdmodel
+ output_dtype: float32
+ output_op: none
+ params: model.pdiparams
+ transforms:
+ - target_size:
+ - 256
+ - 144
+ type: Resize
+ - type: Normalize
+```
+
+**The modified yaml file**
+```yaml
+Deploy:
+ input_shape:
+ - 1
+ - 3
+ - 144
+ - 256
+ model: model.pdmodel
+ output_dtype: float32
+ output_op: none
+ params: model.pdiparams
+ transforms:
+ - target_size:
+ - 256
+ - 144
+ type: Resize
+ - type: Normalize
+```
\ No newline at end of file
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/python/README.md b/examples/vision/segmentation/paddleseg/rknpu2/python/README.md
index 522744b1d..f5b99400f 100644
--- a/examples/vision/segmentation/paddleseg/rknpu2/python/README.md
+++ b/examples/vision/segmentation/paddleseg/rknpu2/python/README.md
@@ -1,36 +1,36 @@
-# PaddleSeg Python部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg Deployment Examples for Python
-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+- 1. Hardware and software environment meets the requirements, please refer to [Environment Requirements for FastDeploy](../../../../../../docs/en/build_and_install/rknpu2.md).
-【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../../matting/)
+Note: if you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, please refer to [Matting Model Deployment](../../../../matting/).
-本目录下提供`infer.py`快速完成PPHumanseg在RKNPU上部署的示例。执行如下脚本即可完成
+This directory provides `infer.py` as a quick example of deploying PPHumanseg on RKNPU. Run the following script to complete the deployment.
```bash
-# 下载部署示例代码
+# Download the deploying demo code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/python
-# 下载图片
+# Download images.
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
unzip images.zip
-# 推理
+# Inference.
python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
--config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
--image images/portrait_heng.jpg
```
-## 注意事项
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-需要先调用DisableNormalizeAndPermute(C++)或`disable_normalize_and_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
+## Notes
+On RKNPU the model input must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call DisableNormalizeAndPermute (C++) or disable_normalize_and_permute (Python) first to disable normalization and data layout conversion in the preprocessing stage. A hedged sketch of this call order is shown below.
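+
+As a rough reference, the usage described above can be sketched with FastDeploy's Python API as follows. This is a minimal sketch assuming the model and config paths used in the command above; the exact place where `disable_normalize_and_permute` lives (on the model itself or on its preprocessor) and the `use_rknpu2()` defaults may differ between FastDeploy versions, so treat these names as assumptions and check `infer.py` for the authoritative usage.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Run on the RKNPU2 backend.
+option = fd.RuntimeOption()
+option.use_rknpu2()
+
+model_dir = "./Portrait_PP_HumanSegV2_Lite_256x144_infer"
+model = fd.vision.segmentation.PaddleSegModel(
+    model_dir + "/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn",
+    "",  # RKNN models need no separate params file
+    model_dir + "/deploy.yaml",
+    runtime_option=option,
+    model_format=fd.ModelFormat.RKNN)
+
+# Normalization and the HWC->CHW permute are already baked into the RKNN model,
+# so disable them in the preprocessing stage before predicting.
+model.disable_normalize_and_permute()
+
+im = cv2.imread("images/portrait_heng.jpg")
+result = model.predict(im)
+print(result)
+```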
-## 其它文档
+## Other Documents
-- [PaddleSeg 模型介绍](..)
-- [PaddleSeg C++部署](../cpp)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
-- [转换PPSeg RKNN模型文档](../README.md)
+- [PaddleSeg Model Description](..)
+- [PaddleSeg C++ Deployment](../cpp)
+- [Description of Prediction Results](../../../../../../docs/api/vision_results/)
+- [PPSeg RKNN Model Conversion](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/rknpu2/python/README_CN.md b/examples/vision/segmentation/paddleseg/rknpu2/python/README_CN.md
new file mode 100644
index 000000000..b897dc369
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/rknpu2/python/README_CN.md
@@ -0,0 +1,36 @@
+[English](README.md) | 简体中文
+# PaddleSeg Python部署示例
+
+在部署前,需确认以下步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+
+【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../../../matting/)
+
+本目录下提供`infer.py`快速完成PPHumanseg在RKNPU上部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/python
+
+# 下载图片
+wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/images.zip
+unzip images.zip
+
+# 推理
+python3 infer.py --model_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/Portrait_PP_HumanSegV2_Lite_256x144_infer_rk3588.rknn \
+ --config_file ./Portrait_PP_HumanSegV2_Lite_256x144_infer/deploy.yaml \
+ --image images/portrait_heng.jpg
+```
+
+
+## 注意事项
+RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisableNormalizeAndPermute(C++)或`disable_normalize_and_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
+
+## 其它文档
+
+- [PaddleSeg 模型介绍](..)
+- [PaddleSeg C++部署](../cpp)
+- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [转换PPSeg RKNN模型文档](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/rv1126/cpp/README.md b/examples/vision/segmentation/paddleseg/rv1126/cpp/README.md
index a15dd0a64..146cf5457 100755
--- a/examples/vision/segmentation/paddleseg/rv1126/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/rv1126/cpp/README.md
@@ -1,28 +1,29 @@
-# PP-LiteSeg 量化模型 C++ 部署示例
+English | [简体中文](README_CN.md)
+# PP-LiteSeg Quantized Model C++ Deployment Example
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 RV1126 上的部署推理加速。
+`infer.cc` in this directory helps you quickly complete accelerated inference deployment of a quantized PP-LiteSeg model on RV1126.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For the software and hardware environment, and the cross-compile environment, please refer to [Preparations for FastDeploy Cross-compile environment](../../../../../../docs/en/build_and_install/rv1126.md#Cross-compilation-environment-construction).
-### 模型准备
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
-2. 用户可以使用 FastDeploy 提供的一键模型自动化压缩工具,自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的 deploy.yaml 文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
-3. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+### Model Preparations
+1. You can directly use the quantized model provided by FastDeploy for deployment.
+2. You can use the one-click automatic compression tool provided by FastDeploy to quantize the model yourself, and deploy the quantized model it produces. (Note: the quantized model still needs the deploy.yaml file from the FP32 model folder. A self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+3. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md).
-## 在 RV1126 上部署量化后的 PP-LiteSeg 分割模型
-请按照以下步骤完成在 RV1126 上部署 PP-LiteSeg 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized PP-LiteSeg Segmentation model on RV1126
+Please follow these steps to complete the deployment of the PP-LiteSeg quantization model on RV1126.
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/rv1126.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run this line:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment to the current path.
```bash
mkdir models && mkdir images
wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
@@ -32,25 +33,25 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp -r cityscapes_demo.png images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following lines:
```bash
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After a successful build, an install folder will be created containing the demo executable and the libraries required for deployment.
```
-5. 基于 adb 工具部署 PP-LiteSeg 分割模型到 Rockchip RV1126,可使用如下命令:
+5. Deploy the PP-LiteSeg segmentation model to Rockchip RV1126 based on adb. You can run the following lines:
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# Usage: bash run_with_adb.sh <demo to run> <model path> <image path> <device DEVICE_ID>
bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
```
-部署成功后运行结果如下:
+The result after successful deployment is as follows:
-需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on RV1126 needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md).
diff --git a/examples/vision/segmentation/paddleseg/rv1126/cpp/README_CN.md b/examples/vision/segmentation/paddleseg/rv1126/cpp/README_CN.md
new file mode 100644
index 000000000..15c1f273e
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/rv1126/cpp/README_CN.md
@@ -0,0 +1,57 @@
+[English](README.md) | 简体中文
+# PP-LiteSeg 量化模型 C++ 部署示例
+
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-LiteSeg 量化模型在 RV1126 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+
+### 模型准备
+1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+2. 用户可以使用 FastDeploy 提供的一键模型自动化压缩工具,自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的 deploy.yaml 文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+3. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 RV1126 上部署量化后的 PP-LiteSeg 分割模型
+请按照以下步骤完成在 RV1126 上部署 PP-LiteSeg 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+mkdir models && mkdir images
+wget https://bj.bcebos.com/fastdeploy/models/rk1/ppliteseg.tar.gz
+tar -xvf ppliteseg.tar.gz
+cp -r ppliteseg models
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+cp -r cityscapes_demo.png images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 PP-LiteSeg 分割模型到 Rockchip RV1126,可使用如下命令:
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/segmentation/paddleseg/rv1126/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo ppliteseg cityscapes_demo.png $DEVICE_ID
+```
+
+部署成功后运行结果如下:
+
+
+
+需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/segmentation/paddleseg/sophgo/README.md b/examples/vision/segmentation/paddleseg/sophgo/README.md
index afebe3451..85a4360fa 100644
--- a/examples/vision/segmentation/paddleseg/sophgo/README.md
+++ b/examples/vision/segmentation/paddleseg/sophgo/README.md
@@ -1,33 +1,34 @@
-# PaddleSeg C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg C++ Deployment Example
-## 支持模型列表
+## Supported Model List
-- PP-LiteSeg部署模型实现来自[PaddleSeg PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
+- PP-LiteSeg deployment models are from [PaddleSeg PP-LiteSeg series model](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md).
-## 准备PP-LiteSeg部署模型以及转换模型
+## Prepare the PP-LiteSeg Deployment Model and Convert It
-SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下:
-- 下载Paddle模型[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)
-- Pddle模型转换为ONNX模型,请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
-- ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)
+Before SOPHGO-TPU model deployment, you should first convert the Paddle model to a bmodel. The specific steps are as follows:
+- Download Paddle model: [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz).
+- Convert Paddle model to ONNX model. Please refer to [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
+- For the process of converting ONNX model to bmodel, please refer to [TPU-MLIR](https://github.com/sophgo/tpu-mlir).
-## 模型转换example
+## Model Conversion Example
-下面以[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)为例子,教大家如何转换Paddle模型到SOPHGO-TPU模型
+Here we take [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) as an example to show how to convert a Paddle model to a SOPHGO-TPU model.
-### 下载PP-LiteSeg-B(STDC2)-cityscapes-without-argmax模型,并转换为ONNX模型
+### Download PP-LiteSeg-B(STDC2)-cityscapes-without-argmax, and convert it to ONNX
```shell
https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
tar xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
-# 修改PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer模型的输入shape,由动态输入变成固定输入
+# Change the input shape of the PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer model from dynamic to fixed.
python paddle_infer_shape.py --model_dir PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
--save_dir pp_liteseg_fix \
--input_shape_dict="{'x':[1,3,512,512]}"
-#将固定输入的Paddle模型转换成ONNX模型
+# Convert the fixed-shape Paddle model to an ONNX model.
paddle2onnx --model_dir pp_liteseg_fix \
--model_filename model.pdmodel \
--params_filename model.pdiparams \
@@ -35,32 +36,32 @@ paddle2onnx --model_dir pp_liteseg_fix \
--enable_dev_version True
```
-### 导出bmodel模型
+### Export bmodel
-以转换BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
-### 1. 安装
+Take converting to a BM1684x bmodel as an example: you need to download the [TPU-MLIR](https://github.com/sophgo/tpu-mlir) project. For the installation process, please refer to the [TPU-MLIR document](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
+### 1. Installation
``` shell
docker pull sophgo/tpuc_dev:latest
-# myname1234是一个示例,也可以设置其他名字
+# myname1234 is just an example; you can use another name.
docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
source ./envsetup.sh
./build.sh
```
-### 2. ONNX模型转换为bmodel模型
+### 2. Convert ONNX model to bmodel
``` shell
mkdir pp_liteseg && cd pp_liteseg
-#在该文件中放入测试图片,同时将上一步转换的pp_liteseg.onnx放入该文件夹中
+# Put the test images in this folder, and put the pp_liteseg.onnx converted in the previous step into this folder as well.
cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
cp -rf ${REGRESSION_PATH}/image .
-#放入onnx模型文件pp_liteseg.onnx
+# Place the ONNX model file pp_liteseg.onnx here.
mkdir workspace && cd workspace
-#将ONNX模型转换为mlir模型,其中参数--output_names可以通过NETRON查看
+# Convert the ONNX model to an MLIR model; the --output_names parameter can be found with NETRON.
model_transform.py \
--model_name pp_liteseg \
--model_def ../pp_liteseg.onnx \
@@ -74,7 +75,7 @@ model_transform.py \
--test_result pp_liteseg_top_outputs.npz \
--mlir pp_liteseg.mlir
-#将mlir模型转换为BM1684x的F32 bmodel模型
+# Convert the MLIR model to a BM1684x F32 bmodel.
model_deploy.py \
--mlir pp_liteseg.mlir \
--quantize F32 \
@@ -83,7 +84,7 @@ model_deploy.py \
--test_reference pp_liteseg_top_outputs.npz \
--model pp_liteseg_1684x_f32.bmodel
```
-最终获得可以在BM1684x上能够运行的bmodel模型pp_liteseg_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+The final bmodel, pp_liteseg_1684x_f32.bmodel, can run on the BM1684x. If you want to accelerate the model further, you can convert the ONNX model to an INT8 bmodel; for the detailed steps, please refer to the [TPU-MLIR document](https://github.com/sophgo/tpu-mlir/blob/master/README.md).
-## 其他链接
-- [Cpp部署](./cpp)
+## Other Documents
+- [C++ Deployment](./cpp)
diff --git a/examples/vision/segmentation/paddleseg/sophgo/README_CN.md b/examples/vision/segmentation/paddleseg/sophgo/README_CN.md
new file mode 100644
index 000000000..5961d2e94
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/sophgo/README_CN.md
@@ -0,0 +1,90 @@
+[English](README.md) | 简体中文
+# PaddleSeg C++部署示例
+
+## 支持模型列表
+
+- PP-LiteSeg部署模型实现来自[PaddleSeg PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
+
+## 准备PP-LiteSeg部署模型以及转换模型
+
+SOPHGO-TPU部署模型前需要将Paddle模型转换成bmodel模型,具体步骤如下:
+- 下载Paddle模型[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)
+- Paddle模型转换为ONNX模型,请参考[Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX)
+- ONNX模型转换bmodel模型的过程,请参考[TPU-MLIR](https://github.com/sophgo/tpu-mlir)
+
+## 模型转换example
+
+下面以[PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz)为例子,教大家如何转换Paddle模型到SOPHGO-TPU模型
+
+### 下载PP-LiteSeg-B(STDC2)-cityscapes-without-argmax模型,并转换为ONNX模型
+```shell
+https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+tar xvf PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz
+
+# 修改PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer模型的输入shape,由动态输入变成固定输入
+python paddle_infer_shape.py --model_dir PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer \
+ --model_filename model.pdmodel \
+ --params_filename model.pdiparams \
+ --save_dir pp_liteseg_fix \
+ --input_shape_dict="{'x':[1,3,512,512]}"
+
+#将固定输入的Paddle模型转换成ONNX模型
+paddle2onnx --model_dir pp_liteseg_fix \
+ --model_filename model.pdmodel \
+ --params_filename model.pdiparams \
+ --save_file pp_liteseg.onnx \
+ --enable_dev_version True
+```
+
+### 导出bmodel模型
+
+以转换BM1684x的bmodel模型为例子,我们需要下载[TPU-MLIR](https://github.com/sophgo/tpu-mlir)工程,安装过程具体参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+### 1. 安装
+``` shell
+docker pull sophgo/tpuc_dev:latest
+
+# myname1234是一个示例,也可以设置其他名字
+docker run --privileged --name myname1234 -v $PWD:/workspace -it sophgo/tpuc_dev:latest
+
+source ./envsetup.sh
+./build.sh
+```
+
+### 2. ONNX模型转换为bmodel模型
+``` shell
+mkdir pp_liteseg && cd pp_liteseg
+
+#在该文件中放入测试图片,同时将上一步转换的pp_liteseg.onnx放入该文件夹中
+cp -rf ${REGRESSION_PATH}/dataset/COCO2017 .
+cp -rf ${REGRESSION_PATH}/image .
+#放入onnx模型文件pp_liteseg.onnx
+
+mkdir workspace && cd workspace
+
+#将ONNX模型转换为mlir模型,其中参数--output_names可以通过NETRON查看
+model_transform.py \
+ --model_name pp_liteseg \
+ --model_def ../pp_liteseg.onnx \
+ --input_shapes [[1,3,512,512]] \
+ --mean 0.0,0.0,0.0 \
+ --scale 0.0039216,0.0039216,0.0039216 \
+ --keep_aspect_ratio \
+ --pixel_format rgb \
+ --output_names bilinear_interp_v2_6.tmp_0 \
+ --test_input ../image/dog.jpg \
+ --test_result pp_liteseg_top_outputs.npz \
+ --mlir pp_liteseg.mlir
+
+#将mlir模型转换为BM1684x的F32 bmodel模型
+model_deploy.py \
+ --mlir pp_liteseg.mlir \
+ --quantize F32 \
+ --chip bm1684x \
+ --test_input pp_liteseg_in_f32.npz \
+ --test_reference pp_liteseg_top_outputs.npz \
+ --model pp_liteseg_1684x_f32.bmodel
+```
+最终获得可以在BM1684x上能够运行的bmodel模型pp_liteseg_1684x_f32.bmodel。如果需要进一步对模型进行加速,可以将ONNX模型转换为INT8 bmodel,具体步骤参见[TPU-MLIR文档](https://github.com/sophgo/tpu-mlir/blob/master/README.md)。
+
+## 其他链接
+- [Cpp部署](./cpp)
diff --git a/examples/vision/segmentation/paddleseg/sophgo/cpp/README.md b/examples/vision/segmentation/paddleseg/sophgo/cpp/README.md
index dac3ed565..eae65d559 100644
--- a/examples/vision/segmentation/paddleseg/sophgo/cpp/README.md
+++ b/examples/vision/segmentation/paddleseg/sophgo/cpp/README.md
@@ -1,43 +1,44 @@
-# PaddleSeg C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg C++ Deployment Example
-本目录下提供`infer.cc`快速完成pp_liteseg模型在SOPHGO BM1684x板子上加速部署的示例。
+`infer.cc` in this directory provides a quick example of accelerated deployment of the pp_liteseg model on SOPHGO BM1684x.
-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:
-1. 软硬件环境满足要求
-2. 根据开发环境,从头编译FastDeploy仓库
+1. Hardware and software environment meets the requirements.
+2. Compile the FastDeploy repository from scratch according to the development environment.
-以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
+For the above steps, please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md).
-## 生成基本目录文件
+## Generate Basic Directory Files
-该例程由以下几个部分组成
+This example consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
-├── image # 存放图片的文件夹
+├── build # Compile Folder
+├── image # Folder for images
├── infer.cc
-└── model # 存放模型文件的文件夹
+└── model # Folder for models
```
-## 编译
+## Compile
-### 编译并拷贝SDK到thirdpartys文件夹
+### Compile and Copy the SDK to the thirdpartys Folder
-请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录.
+Please refer to [How to Build SOPHGO Deployment Environment](../../../../../../docs/en/build_and_install/sophgo.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be created in the build directory.
-### 拷贝模型文件,以及配置文件至model文件夹
-将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md)
-将转换后的SOPHGO bmodel模型文件拷贝至model中
+### Copy the Model and Configuration Files to the model Folder
+Convert the Paddle model to a SOPHGO bmodel. For the conversion steps, please refer to the [document](../README.md).
+Copy the converted SOPHGO bmodel file into the model folder.
-### 准备测试图片至image文件夹
+### Prepare Test Images in the image Folder
```bash
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
cp cityscapes_demo.png ./images
```
-### 编译example
+### Compile example
```bash
cd build
@@ -45,12 +46,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
make
```
-## 运行例程
+## Run the Example
```bash
./infer_demo model images/cityscapes_demo.png
```
-- [模型介绍](../../)
-- [模型转换](../)
+- [Model Description](../../)
+- [Model Conversion](../)
diff --git a/examples/vision/segmentation/paddleseg/sophgo/cpp/README_CN.md b/examples/vision/segmentation/paddleseg/sophgo/cpp/README_CN.md
new file mode 100644
index 000000000..6360a2907
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/sophgo/cpp/README_CN.md
@@ -0,0 +1,57 @@
+[English](README.md) | 简体中文
+# PaddleSeg C++部署示例
+
+本目录下提供`infer.cc`快速完成pp_liteseg模型在SOPHGO BM1684x板子上加速部署的示例。
+
+在部署前,需确认以下两个步骤:
+
+1. 软硬件环境满足要求
+2. 根据开发环境,从头编译FastDeploy仓库
+
+以上步骤请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)实现
+
+## 生成基本目录文件
+
+该例程由以下几个部分组成
+```text
+.
+├── CMakeLists.txt
+├── build # 编译文件夹
+├── image # 存放图片的文件夹
+├── infer.cc
+└── model # 存放模型文件的文件夹
+```
+
+## 编译
+
+### 编译并拷贝SDK到thirdpartys文件夹
+
+请参考[SOPHGO部署库编译](../../../../../../docs/cn/build_and_install/sophgo.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录.
+
+### 拷贝模型文件,以及配置文件至model文件夹
+将Paddle模型转换为SOPHGO bmodel模型,转换步骤参考[文档](../README.md)
+将转换后的SOPHGO bmodel模型文件拷贝至model中
+
+### 准备测试图片至image文件夹
+```bash
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+cp cityscapes_demo.png ./images
+```
+
+### 编译example
+
+```bash
+cd build
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-0.0.3
+make
+```
+
+## 运行例程
+
+```bash
+./infer_demo model images/cityscapes_demo.png
+```
+
+
+- [模型介绍](../../)
+- [模型转换](../)
diff --git a/examples/vision/segmentation/paddleseg/sophgo/python/README.md b/examples/vision/segmentation/paddleseg/sophgo/python/README.md
index e04ad28c4..5aba6590f 100644
--- a/examples/vision/segmentation/paddleseg/sophgo/python/README.md
+++ b/examples/vision/segmentation/paddleseg/sophgo/python/README.md
@@ -1,26 +1,27 @@
-# PaddleSeg Python部署示例
+English | [简体中文](README_CN.md)
+# PaddleSeg Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
+- 1. Hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirement](../../../../../../docs/en/build_and_install/sophgo.md).
-本目录下提供`infer.py`快速完成 pp_liteseg 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
+`infer.py` in this directory provides a quick example of deployment of the pp_liteseg model on SOPHGO TPU. Please run the following script:
```bash
-# 下载部署示例代码
+# Download the sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/sophgo/python
-# 下载图片
+# Download images.
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
-# 推理
+# Inference.
python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file ./bmodel/deploy.yaml --image cityscapes_demo.png
-# 运行完成后返回结果如下所示
-运行结果保存在sophgo_img.png中
+# The following result is returned after running:
+# The result is saved in sophgo_img.png
```
-## 其它文档
-- [pp_liteseg C++部署](../cpp)
-- [转换 pp_liteseg SOPHGO模型文档](../README.md)
+## Other Documents
+- [pp_liteseg C++ Deployment](../cpp)
+- [Converting pp_liteseg SOPHGO model](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/sophgo/python/README_CN.md b/examples/vision/segmentation/paddleseg/sophgo/python/README_CN.md
new file mode 100644
index 000000000..9cafb1dc9
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/sophgo/python/README_CN.md
@@ -0,0 +1,27 @@
+[English](README.md) | 简体中文
+# PaddleSeg Python部署示例
+
+在部署前,需确认以下步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/sophgo.md)
+
+本目录下提供`infer.py`快速完成 pp_liteseg 在SOPHGO TPU上部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/segmentation/paddleseg/sophgo/python
+
+# 下载图片
+wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
+
+# 推理
+python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file ./bmodel/deploy.yaml --image cityscapes_demo.png
+
+# 运行完成后返回结果如下所示
+运行结果保存在sophgo_img.png中
+```
+
+## 其它文档
+- [pp_liteseg C++部署](../cpp)
+- [转换 pp_liteseg SOPHGO模型文档](../README.md)
diff --git a/examples/vision/segmentation/paddleseg/web/README.md b/examples/vision/segmentation/paddleseg/web/README.md
index 6c214347c..b4f216b61 100644
--- a/examples/vision/segmentation/paddleseg/web/README.md
+++ b/examples/vision/segmentation/paddleseg/web/README.md
@@ -1,43 +1,44 @@
-# PP-Humanseg v1模型前端部署
+English | [简体中文](README_CN.md)
+# PP-Humanseg v1 Model Frontend Deployment
-## 模型版本说明
+## Model Version
- [PP-HumanSeg Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/)
-## 前端部署PP-Humanseg v1模型
+## Deploy PP-Humanseg v1 Model on Frontend
-PP-Humanseg v1模型web demo部署及使用参考[文档](../../../../application/js/web_demo/README.md)
+For deploying and using the PP-Humanseg v1 web demo, please refer to the [document](../../../../application/js/web_demo/README.md).
-## PP-Humanseg v1 js接口
+## PP-Humanseg v1 js interface
```
import * as humanSeg from "@paddle-js-models/humanseg";
-# 模型加载与初始化
+// Load and initialize the model
await humanSeg.load(Config);
-# 人像分割
+// Portrait segmentation
const res = humanSeg.getGrayValue(input)
-# 提取人像与背景的二值图
+// Extract the binary mask of portrait and background
humanSeg.drawMask(res)
-# 用于替换背景的可视化函数
+// Visualization function for replacing the background
humanSeg.drawHumanSeg(res)
-# 背景虚化
+// Blur the background
humanSeg.blurBackground(res)
```
-**load()函数参数**
-> * **Config**(dict): PP-Humanseg模型配置参数,默认为{modelpath : 'https://paddlejs.bj.bcebos.com/models/fuse/humanseg/humanseg_398x224_fuse_activation/model.json', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5], enableLightModel: false};modelPath为默认的PP-Humanseg js模型,mean,std分别为预处理的均值和标准差,enableLightModel为是否使用更轻量的模型。
+**Parameters in function load()**
+> * **Config**(dict): Configuration parameters for the PP-Humanseg model. Default: {modelpath : 'https://paddlejs.bj.bcebos.com/models/fuse/humanseg/humanseg_398x224_fuse_activation/model.json', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5], enableLightModel: false}. modelPath is the default PP-Humanseg js model, mean and std are the preprocessing mean and standard deviation, and enableLightModel indicates whether to use a lighter model.
-**getGrayValue()函数参数**
-> * **input**(HTMLImageElement | HTMLVideoElement | HTMLCanvasElement): 输入图像参数。
+**Parameters in function getGrayValue()**
+> * **input**(HTMLImageElement | HTMLVideoElement | HTMLCanvasElement): Input image parameter.
-**drawMask()函数参数**
-> * **seg_values**(number[]): 输入参数,一般是getGrayValue函数计算的结果作为输入
+**Parameters in function drawMask()**
+> * **seg_values**(number[]): Input parameter, generally the result of function getGrayValue is used as input.
-**blurBackground()函数参数**
-> * **seg_values**(number[]): 输入参数,一般是getGrayValue函数计算的结果作为输入
+**Parameters in function blurBackground()**
+> * **seg_values**(number[]): Input parameter, generally the result of function getGrayValue is used as input.
-**drawHumanSeg()函数参数**
-> * **seg_values**(number[]): 输入参数,一般是getGrayValue函数计算的结果作为输入
+**Parameters in function drawHumanSeg()**
+> * **seg_values**(number[]): Input parameter, generally the result of function getGrayValue is used as input.
diff --git a/examples/vision/segmentation/paddleseg/web/README_CN.md b/examples/vision/segmentation/paddleseg/web/README_CN.md
new file mode 100644
index 000000000..81664eee3
--- /dev/null
+++ b/examples/vision/segmentation/paddleseg/web/README_CN.md
@@ -0,0 +1,44 @@
+[English](README.md) | 简体中文
+# PP-Humanseg v1模型前端部署
+
+## 模型版本说明
+
+- [PP-HumanSeg Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/)
+
+
+## 前端部署PP-Humanseg v1模型
+
+PP-Humanseg v1模型web demo部署及使用参考[文档](../../../../application/js/web_demo/README.md)
+
+
+## PP-Humanseg v1 js接口
+
+```
+import * as humanSeg from "@paddle-js-models/humanseg";
+# 模型加载与初始化
+await humanSeg.load(Config);
+# 人像分割
+const res = humanSeg.getGrayValue(input)
+# 提取人像与背景的二值图
+humanSeg.drawMask(res)
+# 用于替换背景的可视化函数
+humanSeg.drawHumanSeg(res)
+# 背景虚化
+humanSeg.blurBackground(res)
+```
+
+**load()函数参数**
+> * **Config**(dict): PP-Humanseg模型配置参数,默认为{modelpath : 'https://paddlejs.bj.bcebos.com/models/fuse/humanseg/humanseg_398x224_fuse_activation/model.json', mean: [0.5, 0.5, 0.5], std: [0.5, 0.5, 0.5], enableLightModel: false};modelPath为默认的PP-Humanseg js模型,mean,std分别为预处理的均值和标准差,enableLightModel为是否使用更轻量的模型。
+
+
+**getGrayValue()函数参数**
+> * **input**(HTMLImageElement | HTMLVideoElement | HTMLCanvasElement): 输入图像参数。
+
+**drawMask()函数参数**
+> * **seg_values**(number[]): 输入参数,一般是getGrayValue函数计算的结果作为输入
+
+**blurBackground()函数参数**
+> * **seg_values**(number[]): 输入参数,一般是getGrayValue函数计算的结果作为输入
+
+**drawHumanSeg()函数参数**
+> * **seg_values**(number[]): 输入参数,一般是getGrayValue函数计算的结果作为输入
diff --git a/java/android/README.md b/java/android/README.md
index 1c557fca3..00834195b 100644
--- a/java/android/README.md
+++ b/java/android/README.md
@@ -328,7 +328,7 @@ public class SegmentationResult {
public boolean initialized(); // Check if the result is valid.
}
```
-Other reference:C++/Python corresponding SegmentationResult description: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
+Other reference: C++/Python corresponding SegmentationResult description: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
- Face detection result description
```java