[Doc] Add English version of some documents (#1221)

* Update README_CN.md

* Create README.md

* Update README.md

* Create README_CN.md

* Update README.md

* Update README_CN.md

* Update README_CN.md

* Create README.md

* Update README.md

* Update README_CN.md

* Create README.md

* Update README.md

* Update README_CN.md

* Rename examples/vision/faceid/insightface/rknpu2/cpp/README.md to examples/vision/faceid/insightface/rknpu2/README_EN.md

* Rename README_CN.md to README.md

* Rename README.md to README_EN.md

* Rename README.md to README_CN.md

* Rename README_EN.md to README.md

* Create build.md

* Create environment.md

* Create issues.md

* Update build.md

* Update environment.md

* Update issues.md

* Update build.md

* Update environment.md

* Update issues.md
Hu Chuqi
2023-02-06 11:11:00 +08:00
committed by GitHub
parent cfc7af2d45
commit e2de3f36d3
14 changed files with 685 additions and 53 deletions

View File

@@ -1,3 +1,4 @@
[English](../../../en/faq/rknpu2/build.md) | 中文
# FastDeploy RKNPU2 Engine Compilation
## FastDeploy supported backends

View File

@@ -1,3 +1,4 @@
[English](../../../en/faq/rknpu2/environment.md) | 中文
# FastDeploy RKNPU2 Inference Environment Setup
## Introduction

View File

@@ -1,3 +1,4 @@
[English](../../../en/faq/rknpu2/issues.md) | 中文
# RKNPU2 FAQs
You may run into many problems when using FastDeploy. This document records the common problems that have already been solved, so you can look them up easily.

View File

@@ -0,0 +1,78 @@
English | [中文](../../../cn/faq/rknpu2/build.md)
# FastDeploy RKNPU2 Engine Compilation
## FastDeploy supported backends
FastDeploy currently supports the following backends on the RK platform:
| Backend | Platform | Supported model formats | Notes |
|:------------------|:---------------------|:-------|:-------------------------------------------|
| ONNX&nbsp;Runtime | RK356X <br> RK3588 | ONNX | Controlled by the compile switch `ENABLE_ORT_BACKEND` (ON or OFF). Default OFF |
| RKNPU2 | RK356X <br> RK3588 | RKNN | Controlled by the compile switch `ENABLE_RKNPU2_BACKEND` (ON or OFF). Default OFF |
## Compile FastDeploy SDK
### Compile the FastDeploy C++ SDK on the board
Currently, RKNPU2 is only available on Linux. The following tutorial was completed on RK3568 (Debian 10) and RK3588 (Debian 11).
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
# If you are using the develop branch, type the following command
git checkout develop
mkdir build && cd build
cmake .. -DENABLE_ORT_BACKEND=ON \
-DENABLE_RKNPU2_BACKEND=ON \
-DENABLE_VISION=ON \
-DRKNN2_TARGET_SOC=RK3588 \
-DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.0
make -j8
make install
```
### Cross-compile FastDeploy C++ SDK
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
# If you are using the develop branch, type the following command
git checkout develop
mkdir build && cd build
cmake .. -DCMAKE_C_COMPILER=/home/zbc/opt/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc \
-DCMAKE_CXX_COMPILER=/home/zbc/opt/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++ \
-DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
-DTARGET_ABI=arm64 \
-DENABLE_ORT_BACKEND=OFF \
-DENABLE_RKNPU2_BACKEND=ON \
-DENABLE_VISION=ON \
-DRKNN2_TARGET_SOC=RK3588 \
-DENABLE_FLYCV=ON \
-DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.0
make -j8
make install
```
### Compile the Python SDK on the board
Currently, RKNPU2 is only available on Linux. The following tutorial was completed on RK3568 (Debian 10) and RK3588 (Debian 11). Packaging the Python wheel depends on `wheel`, so run `pip install wheel` before compiling.
```bash
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
# If you are using the develop branch, type the following command
git checkout develop
cd python
export ENABLE_ORT_BACKEND=ON
export ENABLE_RKNPU2_BACKEND=ON
export ENABLE_VISION=ON
export RKNN2_TARGET_SOC=RK3588
python3 setup.py build
python3 setup.py bdist_wheel
cd dist
pip3 install fastdeploy_python-0.0.0-cp39-cp39-linux_aarch64.whl
```
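If the wheel installed correctly, a quick import check should pass. The following is a minimal sketch; the printed version string depends on the wheel you built (`0.0.0` for the local build above):
```python
# Sanity check for the freshly installed FastDeploy wheel.
import fastdeploy as fd

# Prints the wheel version, e.g. 0.0.0 for the local build above.
print(fd.__version__)
# Enum access confirms the native core loaded correctly.
print(fd.ModelFormat.ONNX)
```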

View File

@@ -0,0 +1,92 @@
English | [中文](../../../cn/faq/rknpu2/environment.md)
# FastDeploy RKNPU2 Inference Environment Setup
## Introduction
We need to set up the development environment before deploying models with FastDeploy. The FastDeploy environment setup consists of two parts: the board-side inference environment and the PC-side model conversion environment.
## Board-side inference environment setup
Based on feedback from developers, we provide two ways to set up the inference environment on the board: one-click installation via a script, and installing the development board driver from the command line.
### Install via script
Most developers don't like complex command-line installation, so FastDeploy provides a one-click way to install a stable version of the RKNN environment. Refer to the following commands to set up the board-side environment:
```bash
# Download and unzip rknpu2_device_install_1.4.0
wget https://bj.bcebos.com/fastdeploy/third_libs/rknpu2_device_install_1.4.0.zip
unzip rknpu2_device_install_1.4.0.zip
cd rknpu2_device_install_1.4.0
# For RK3588, run the following command
sudo bash rknn_install_rk3588.sh
# For RK356X, run the following command
sudo bash rknn_install_rk356X.sh
```
### Install via the command line
For developers who want to try the latest RK drivers, we provide a way to install them from scratch using the command line.
```bash
# Install the required packages
sudo apt update -y
sudo apt install -y python3
sudo apt install -y python3-dev
sudo apt install -y python3-pip
sudo apt install -y gcc
sudo apt install -y python3-opencv
sudo apt install -y python3-numpy
sudo apt install -y cmake
# Download rknpu2
# For RK3588, run the following commands
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git
sudo cp ./rknpu2/runtime/RK3588/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK3588/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/
# For RK356X, run the following commands
git clone https://gitee.com/mirrors_rockchip-linux/rknpu2.git
sudo cp ./rknpu2/runtime/RK356X/Linux/librknn_api/aarch64/* /usr/lib
sudo cp ./rknpu2/runtime/RK356X/Linux/rknn_server/aarch64/usr/bin/* /usr/bin/
```
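To verify that the driver files landed in place, check for the runtime library and the server binary copied above. This is a minimal sketch; the file names follow the rknpu2 repository layout and are an assumption, so adjust them if your checkout differs:
```python
# Check that the RKNPU2 runtime files copied above are in place.
# File names follow the rknpu2 repository layout (assumption).
import os

for path in ("/usr/lib/librknnrt.so", "/usr/bin/rknn_server"):
    print(path, "->", "found" if os.path.exists(path) else "MISSING")
```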
## Install rknn_toolkit2
There are dependency issues when installing rknn_toolkit2, so here is an installation tutorial.
rknn_toolkit2 depends on a few specific package versions, so it is recommended to create a virtual environment with conda. Installing conda is out of scope here; we focus on installing rknn_toolkit2.
### Download rknn_toolkit2
rknn_toolkit2 can usually be downloaded from git
```bash
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
```
### Download and install the required packages
```bash
sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 \
libsm6 libgl1-mesa-glx libprotobuf-dev gcc g++
```
### Install rknn_toolkit2 environment
```bash
# Create the virtual environment. The wheel below targets Python 3.8 (cp38);
# if you stay on another Python version, install the matching wheel instead.
conda create -n rknn2 python=3.8
conda activate rknn2
# Install numpy first because rknn_toolkit2 pins a specific numpy version
pip install numpy==1.16.6
# Install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
cd ~/Download/rknn-toolkit2-master/packages
pip install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
```
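With the environment ready, rknn_toolkit2 can convert an ONNX model into the RKNN format consumed by RKNPU2. The following is a minimal conversion sketch; the model file names, normalization values, and target platform are placeholder assumptions, and the conversion document linked below has the authoritative workflow:
```python
# Minimal ONNX -> RKNN conversion sketch with rknn_toolkit2.
# model.onnx, the mean/std values and target_platform are placeholders.
from rknn.api import RKNN

rknn = RKNN()
# Normalization must match the preprocessing the model expects.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform='rk3588')
rknn.load_onnx(model='model.onnx')
# Quantization needs a calibration dataset, so it is disabled here.
rknn.build(do_quantization=False)
rknn.export_rknn('model.rknn')
rknn.release()
```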
## Resource links
* [RKNPU2 and rknn_toolkit2 downloads for development boards (extraction code: rknn)](https://eyun.baidu.com/s/3eTDMk6Y)
## Other documents
- [RKNN model conversion document](./export.md)

View File

@@ -0,0 +1,47 @@
English | [中文](../../../cn/faq/rknpu2/issues.md)
# RKNPU2 FAQs
This document collects solutions to common problems encountered when using FastDeploy, for easy reference.
## Navigation
- [Link issues in dynamic link library](#link-issues-in-dynamic-link-library)
## Link issues in dynamic link library
### Association issue
- [Issue 870](https://github.com/PaddlePaddle/FastDeploy/issues/870)
### Problem Description
The program compiles without problems, but the following error is reported when running it:
```text
error while loading shared libraries: libfastdeploy.so.0.0.0: cannot open shared object file: No such file or directory
```
### Analysis
The linker `ld` cannot find the library file. The default search directories of `ld` are /lib and /usr/lib.
Libraries in other directories also work, but you need to let `ld` know where they are located.
### Solutions
**Temporary solution**
This solution has no side effects on the system, but it only works in the current terminal session and stops working once the terminal is closed.
```bash
source PathToFastDeploySDK/fastdeploy_init.sh
```
**Permanent solution**
The temporary solution is inconvenient: the command must be retyped every time a new terminal is opened to run the program. To make the setting persistent, run the following commands instead:
```bash
source PathToFastDeploySDK/fastdeploy_init.sh
sudo cp PathToFastDeploySDK/fastdeploy_libs.conf /etc/ld.so.conf.d/
sudo ldconfig
```
After execution, the configuration file is written to the system; `ldconfig` then refreshes the linker cache so the system can locate the library.

View File

@@ -0,0 +1,90 @@
English | [简体中文](README_CN.md)
# YOLOv8 C++ Deployment Example
This directory provides an example in which `infer.cc` quickly finishes the deployment of YOLOv8 on CPU/GPU, as well as on GPU accelerated by TensorRT.
Two steps before deployment:
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code based on your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, FastDeploy version 1.0.3 or above (x.x.x>=1.0.3) is required to support this model.
```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# 1. Download the official converted YOLOv8 ONNX model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# CPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 0
# GPU inference
./infer_demo yolov8s.onnx 000000014439.jpg 1
# TensorRT inference on GPU
./infer_demo yolov8s.onnx 000000014439.jpg 2
```
The visualized result is as follows
<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">
The above command works for Linux or MacOS. For the SDK in Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
For deployment with Huawei Ascend NPU, refer to the following document to initialize the deployment environment:
- [How to use Huawei Ascend NPU deployment](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
## YOLOv8 C++ Interface
### YOLOv8
```c++
fastdeploy::vision::detection::YOLOv8(
const string& model_file,
const string& params_file = "",
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
YOLOv8 model loading and initialization, where model_file is the exported model in ONNX format.
**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
#### Predict function
> ```c++
> YOLOv8::Predict(cv::Mat* im, DetectionResult* result)
> ```
>
> Model prediction interface. Input an image and get the detection result directly.
>
> **Parameter**
>
> > * **im**: Input image in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the description of DetectionResult.
### Class Member Variable
#### Pre-processing Parameter
Users can modify the following preprocessing parameters based on actual needs to change the final inference and deployment results.
> > * **size**(vector&lt;int&gt;): This parameter changes the resize used during preprocessing, containing two integer elements for [width, height] with default value [640, 640]
> > * **padding_value**(vector&lt;float&gt;): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [114, 114, 114]
> > * **is_no_pad**(bool): Specify whether to resize the image with padding. `is_no_pad=true` means resizing without padding. Default `is_no_pad=false`
> > * **is_mini_pad**(bool): This parameter sets the width and height of the image after resize to the value nearest to the `size` member variable and to the point where the padded pixel size is divisible by the `stride` member variable. Default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -81,7 +81,7 @@ YOLOv8 model loading and initialization, where model_file is the exported model in ONNX format.
> > * **size**(vector&lt;int&gt;): This parameter changes the resize size used during preprocessing, containing two integer elements for [width, height]. Default value [640, 640]
> > * **padding_value**(vector&lt;float&gt;): This parameter changes the padding value used when resizing, containing three floating-point elements representing the values of the three channels. Default value [114, 114, 114]
> > * **is_no_pad**(bool): Specify whether to resize the image with padding. `is_no_pad=true` means resizing without padding. Default `is_no_pad=false`
> > * **is_mini_pad**(bool): This parameter sets the width and height after resize to the values nearest to `size` such that the number of padded pixels is divisible by the `stride` member variable. Default `is_mini_pad=false`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
- [Model Description](../../)

View File

@@ -0,0 +1,78 @@
English | [简体中文](README_CN.md)
# YOLOv8 Python Deployment Example
Two steps before deployment:
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install FastDeploy Python whl. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
This directory provides an example in which `infer.py` quickly finishes the deployment of YOLOv8 on CPU/GPU, as well as on GPU accelerated by TensorRT. The steps are as follows
```bash
# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov8/python/
# Download yolov8 model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov8.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# CPU inference
python infer.py --model yolov8.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov8.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov8.onnx --image 000000014439.jpg --device gpu --use_trt True
```
The visualized result is as follows
<img width="640" src="https://user-images.githubusercontent.com/67993288/184309358-d803347a-8981-44b6-b589-4608021ad0f4.jpg">
## YOLOv8 Python Interface
```python
fastdeploy.vision.detection.YOLOv8(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
YOLOv8 model loading and initialization, where model_file is the exported model in ONNX format.
**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set it when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
### predict function
> ```python
> YOLOv8.predict(image_data)
> ```
>
> Model prediction interface. Input an image and get the detection result directly.
>
> **Parameter**
>
> > * **image_data**(np.ndarray): Input image in HWC, BGR format
> **Return**
>
> > Returns the `fastdeploy.vision.DetectionResult` structure. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description
### Class Member Property
#### Pre-processing Parameter
Users can modify the following preprocessing parameters based on actual needs to change the final inference and deployment results; a usage sketch follows this list.
> > * **size**(list[int]): This parameter changes the resize used during preprocessing, containing two integer elements for [width, height] with default value [640, 640]
> > * **padding_value**(list[float]): This parameter is used to change the padding value of images during resize, containing three floating-point elements that represent the value of three channels. Default value [114, 114, 114]
> > * **is_no_pad**(bool): Specify whether to resize the image with padding. `is_no_pad=True` means resizing without padding. Default `is_no_pad=False`
> > * **is_mini_pad**(bool): This parameter sets the width and height of the image after resize to the value nearest to the `size` member variable and to the point where the padded pixel size is divisible by the `stride` member variable. Default `is_mini_pad=False`
> > * **stride**(int): Used together with the `is_mini_pad` member variable. Default `stride=32`
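Putting the interface together, here is a minimal end-to-end sketch using the model and image downloaded earlier. The visualization call `fd.vision.vis_detection` is assumed from FastDeploy's vision utilities:
```python
# End-to-end sketch of the Python interface documented above.
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()  # remove this line for CPU inference

model = fd.vision.detection.YOLOv8("yolov8.onnx", runtime_option=option)

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)

# Draw the detection boxes and save the visualization.
vis = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis)
```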
## Other Documents
- [YOLOv8 Model Description](..)
- [YOLOv8 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -50,7 +50,7 @@ YOLOv8 model loading and initialization, where model_file is the exported model in ONNX format
> YOLOv8.predict(image_data)
> ```
>
> Model prediction interface. Input an image and get the detection result directly.
>
> **Parameter**
>

View File

@@ -1,19 +1,19 @@
English | [简体中文](README_CN.md)
# InsightFace C++ Deployment Example
FastDeploy supports the deployment of InsightFace models, including ArcFace, CosFace, VPL, and Partial_FC, on RKNPU.
This directory provides an example in which `infer_arcface.cc` quickly finishes the deployment of InsightFace models such as ArcFace on CPU/RKNPU.
Two steps before deployment:
1. Software and hardware should meet the requirements.
2. Download the precompiled deployment library or compile the FastDeploy repository from scratch according to your development environment.
Refer to [RKNPU2 deployment library compilation](../../../../../../docs/cn/build_and_install/rknpu2.md) for the above steps.
Run the following commands in this directory to complete the compilation test.
```bash
mkdir build
@@ -24,18 +24,18 @@ tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# Download the official converted ArcFace model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r18.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/rknpu2/face_demo.zip
unzip face_demo.zip
# CPU inference
./infer_arcface_demo ms1mv3_arcface_r100.onnx face_0.jpg face_1.jpg face_2.jpg 0
# RKNPU inference
./infer_arcface_demo ms1mv3_arcface_r100.onnx face_0.jpg face_1.jpg face_2.jpg 1
```
The visualized result is as follows
<div width="700">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321537-860bf857-0101-4e92-a74c-48e8658d838c.JPG">
@@ -43,12 +43,12 @@ unzip face_demo.zip
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321622-d9a494c3-72f3-47f1-97c5-8a2372de491f.JPG">
</div>
The above command works for Linux or MacOS. For the SDK in Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
## InsightFace C++ Interface
### ArcFace
```c++
fastdeploy::vision::faceid::ArcFace(
@@ -58,9 +58,9 @@ fastdeploy::vision::faceid::ArcFace(
const ModelFormat& model_format = ModelFormat::ONNX)
```
ArcFace model loading and initialization, where model_file is the exported model in ONNX format.
### CosFace
```c++
fastdeploy::vision::faceid::CosFace(
@@ -70,9 +70,9 @@ fastdeploy::vision::faceid::CosFace(
const ModelFormat& model_format = ModelFormat::ONNX)
```
CosFace model loading and initialization, where model_file is the exported model in ONNX format.
### PartialFC
```c++
fastdeploy::vision::faceid::PartialFC(
@@ -82,9 +82,9 @@ fastdeploy::vision::faceid::PartialFC(
const ModelFormat& model_format = ModelFormat::ONNX)
```
PartialFC model loading and initialization, where model_file is the exported model in ONNX format.
### VPL
```c++
fastdeploy::vision::faceid::VPL(
@@ -94,43 +94,43 @@ fastdeploy::vision::faceid::VPL(
const ModelFormat& model_format = ModelFormat::ONNX)
```
VPL model loading and initialization, where model_file is the exported model in ONNX format.
**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
#### Predict function
> ```c++
> ArcFace::Predict(const cv::Mat& im, FaceRecognitionResult* result)
> ```
>
> Model prediction interface. Input an image and get the detection result directly.
>
> **Parameter**
>
> > * **im**: Input image in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for the description of FaceRecognitionResult
### Change pre-processing and post-processing parameters
Pre-processing and post-processing parameters are changed by modifying the member variables of InsightFaceRecognitionPostprocessor and InsightFaceRecognitionPreprocessor.
#### Member variables of InsightFaceRecognitionPreprocessor (pre-processing parameters)
> > * **size**(vector&lt;int&gt;): This parameter changes the resize size used during preprocessing, containing two integer elements for [width, height]. Default value [112, 112].
Revise through InsightFaceRecognitionPreprocessor::SetSize(std::vector<int>& size)
> > * **alpha**(vector&lt;float&gt;): Normalization alpha used in preprocessing, computed as `x'=x*alpha+beta`. alpha defaults to [1. / 127.5, 1.f / 127.5, 1. / 127.5].
Revise through InsightFaceRecognitionPreprocessor::SetAlpha(std::vector<float>& alpha)
> > * **beta**(vector&lt;float&gt;): Normalization beta used in preprocessing, computed as `x'=x*alpha+beta`. beta defaults to [-1.f, -1.f, -1.f].
Revise through InsightFaceRecognitionPreprocessor::SetBeta(std::vector<float>& beta)
#### Member variables of InsightFaceRecognitionPostprocessor (post-processing parameters)
> > * **l2_normalize**(bool): Whether to perform l2 normalization before outputting the face vector. Default false.
Revise through InsightFaceRecognitionPostprocessor::SetL2Normalize(bool& l2_normalize)
- [Model Description](../../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/README.md)
- [How to switch the backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -0,0 +1,136 @@
[English](README.md) | 简体中文
# InsightFace C++ Deployment Example
FastDeploy supports the deployment of InsightFace models, including ArcFace, CosFace, VPL, and Partial_FC, on RKNPU.
This directory provides an example in which `infer_arcface.cc` quickly finishes the accelerated deployment of InsightFace models such as ArcFace on CPU/RKNPU.
Two steps before deployment:
1. Software and hardware should meet the requirements.
2. Download the precompiled deployment library or compile the FastDeploy repository from scratch according to your development environment.
Refer to [RKNPU2 deployment library compilation](../../../../../../docs/cn/build_and_install/rknpu2.md) for the above steps.
Run the following commands in this directory to complete the compilation test.
```bash
mkdir build
cd build
# FastDeploy version must be >=1.0.3
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
# Download the official converted ArcFace model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r18.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/rknpu2/face_demo.zip
unzip face_demo.zip
# CPU inference
./infer_arcface_demo ms1mv3_arcface_r100.onnx face_0.jpg face_1.jpg face_2.jpg 0
# RKNPU inference
./infer_arcface_demo ms1mv3_arcface_r100.onnx face_0.jpg face_1.jpg face_2.jpg 1
```
The visualized result is as follows
<div width="700">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321537-860bf857-0101-4e92-a74c-48e8658d838c.JPG">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184322004-a551e6e4-6f47-454e-95d6-f8ba2f47b516.JPG">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321622-d9a494c3-72f3-47f1-97c5-8a2372de491f.JPG">
</div>
The above command works for Linux or MacOS. For the SDK in Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
## InsightFace C++ Interface
### ArcFace class
```c++
fastdeploy::vision::faceid::ArcFace(
const string& model_file,
const string& params_file = "",
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
ArcFace model loading and initialization, where model_file is the exported model in ONNX format.
### CosFace class
```c++
fastdeploy::vision::faceid::CosFace(
const string& model_file,
const string& params_file = "",
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
CosFace model loading and initialization, where model_file is the exported model in ONNX format.
### PartialFC class
```c++
fastdeploy::vision::faceid::PartialFC(
const string& model_file,
const string& params_file = "",
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
PartialFC model loading and initialization, where model_file is the exported model in ONNX format.
### VPL class
```c++
fastdeploy::vision::faceid::VPL(
const string& model_file,
const string& params_file = "",
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
VPL model loading and initialization, where model_file is the exported model in ONNX format.
**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
#### Predict function
> ```c++
> ArcFace::Predict(const cv::Mat& im, FaceRecognitionResult* result)
> ```
>
> Model prediction interface. Input an image and get the detection result directly.
>
> **Parameter**
>
> > * **im**: Input image in HWC, BGR format
> > * **result**: Detection result, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for the description of FaceRecognitionResult
### Change pre-processing and post-processing parameters
Pre-processing and post-processing parameters are changed by modifying the member variables of InsightFaceRecognitionPostprocessor and InsightFaceRecognitionPreprocessor.
#### Member variables of InsightFaceRecognitionPreprocessor (pre-processing parameters)
> > * **size**(vector&lt;int&gt;): This parameter changes the resize size used during preprocessing, containing two integer elements for [width, height]. Default value [112, 112].
Revise through InsightFaceRecognitionPreprocessor::SetSize(std::vector<int>& size)
> > * **alpha**(vector&lt;float&gt;): Normalization alpha used in preprocessing, computed as `x'=x*alpha+beta`. alpha defaults to [1. / 127.5, 1.f / 127.5, 1. / 127.5].
Revise through InsightFaceRecognitionPreprocessor::SetAlpha(std::vector<float>& alpha)
> > * **beta**(vector&lt;float&gt;): Normalization beta used in preprocessing, computed as `x'=x*alpha+beta`. beta defaults to [-1.f, -1.f, -1.f].
Revise through InsightFaceRecognitionPreprocessor::SetBeta(std::vector<float>& beta)
#### Member variables of InsightFaceRecognitionPostprocessor (post-processing parameters)
> > * **l2_normalize**(bool): Whether to perform l2 normalization before outputting the face vector. Default false.
Revise through InsightFaceRecognitionPostprocessor::SetL2Normalize(bool& l2_normalize)
- [Model Description](../../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/README.md)
- [How to switch the backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -0,0 +1,108 @@
English | [简体中文](README_CN.md)
# InsightFace Python Deployment Example
FastDeploy supports the deployment of InsightFace models, including ArcFace, CosFace, VPL, and Partial_FC, on RKNPU.
This directory provides an example in which `infer_arcface.py` quickly finishes the deployment of InsightFace models such as ArcFace on CPU/RKNPU.
Two steps before deployment:
- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/rknpu2.md)
```bash
# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/faceid/insightface/python/
# Download ArcFace model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r100.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/rknpu2/face_demo.zip
unzip face_demo.zip
# CPU inference
python infer_arcface.py --model ms1mv3_arcface_r100.onnx \
--face face_0.jpg \
--face_positive face_1.jpg \
--face_negative face_2.jpg \
--device cpu
# GPU inference
python infer_arcface.py --model ms1mv3_arcface_r100.onnx \
--face face_0.jpg \
--face_positive face_1.jpg \
--face_negative face_2.jpg \
--device gpu
```
The visualized result is as follows
<div width="700">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321537-860bf857-0101-4e92-a74c-48e8658d838c.JPG">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184322004-a551e6e4-6f47-454e-95d6-f8ba2f47b516.JPG">
<img width="220" float="left" src="https://user-images.githubusercontent.com/67993288/184321622-d9a494c3-72f3-47f1-97c5-8a2372de491f.JPG">
</div>
```bash
Prediction Done!
--- [Face 0]:FaceRecognitionResult: [Dim(512), Min(-2.309220), Max(2.372197), Mean(0.016987)]
--- [Face 1]:FaceRecognitionResult: [Dim(512), Min(-2.288258), Max(1.995104), Mean(-0.003400)]
--- [Face 2]:FaceRecognitionResult: [Dim(512), Min(-3.243411), Max(3.875866), Mean(-0.030682)]
Detect Done! Cosine 01: 0.814385, Cosine 02:-0.059388
```
## InsightFace Python interface
```python
fastdeploy.vision.faceid.ArcFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
fastdeploy.vision.faceid.CosFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
fastdeploy.vision.faceid.PartialFC(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
fastdeploy.vision.faceid.VPL(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
ArcFace model loading and initialization, where model_file is the exported model in ONNX format.
**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set it when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. None by default, i.e. the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default
### predict function
> ```python
> ArcFace.predict(image_data)
> ```
>
> Model prediction interface. Input an image and get the prediction result directly.
>
> **Parameter**
>
> > * **image_data**(np.ndarray): Input image in HWC, BGR format
> **Return**
>
> > Return the `fastdeploy.vision.FaceRecognitionResult` structure. Refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for its description
### Class Member Property
#### Pre-processing Parameter
Users can modify the following preprocessing parameters based on actual needs to change the final inference and deployment results; a usage sketch follows this list.
#### Member Variables of AdaFacePreprocessor
The following are the member variables of AdaFacePreprocessor
> > * **size**(list[int]): This parameter changes the resize used during preprocessing, containing two integer elements for [width, height] with default value [112, 112]
> > * **alpha**(list[float]): Preprocess normalized alpha, and calculated as `x'=x*alpha+beta`. Alpha defaults to [1. / 127.5, 1.f / 127.5, 1. / 127.5]
> > * **beta**(list[float]): Preprocess normalized beta, and calculated as `x'=x*alpha+beta`. beta defaults to [-1.f, -1.f, -1.f]
#### Member Variables of AdaFacePostprocessor
The following are the member variables of AdaFacePostprocessor
> > * **l2_normalize**(bool): Whether to perform l2 normalization before outputting the face vector. Default false.
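Putting the interface together, the sketch below extracts embeddings for the three test images used earlier and compares them with cosine similarity. Accessing the embedding via `result.embedding` is an assumption based on the documented fields of FaceRecognitionResult:
```python
# Sketch: compare faces with ArcFace embeddings and cosine similarity.
# Assumes the model/images downloaded earlier; `result.embedding` is the
# embedding field of fastdeploy.vision.FaceRecognitionResult (assumption).
import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.faceid.ArcFace("ms1mv3_arcface_r100.onnx")

def embed(path):
    result = model.predict(cv2.imread(path))
    return np.asarray(result.embedding)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

e0, e1, e2 = embed("face_0.jpg"), embed("face_1.jpg"), embed("face_2.jpg")
print("Cosine 01:", cosine(e0, e1))  # same person -> high similarity
print("Cosine 02:", cosine(e0, e2))  # different person -> low similarity
```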
## Other Documents
- [InsightFace Model Description](..)
- [InsightFace C++ Deployment](../cpp)
- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/)
- [How to switch the backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -75,7 +75,7 @@ ArcFace model loading and initialization, where model_file is the exported model in ONNX format
> ArcFace.predict(image_data)
> ```
>
> Model prediction interface. Input an image and get the detection result directly.
>
> **Parameter**
>