[Doc] Add English version of serving/ and java/android/. (#963)

* First commit

* Add a missed translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one writing

* Standardize one writing

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Checkout the link in README in vision_results/ to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles
This commit is contained in:
charl-u
2022-12-24 14:39:06 +08:00
committed by GitHub
parent 1e36856b84
commit b7d2c0da2c
18 changed files with 1612 additions and 364 deletions

View File

@@ -1 +0,0 @@
README_EN.md

47
docs/README.md Executable file
View File

@@ -0,0 +1,47 @@
[简体中文](README_CN.md) | English
# Tutorials
## Install
- [Install FastDeploy Prebuilt Libraries](en/build_and_install/download_prebuilt_libraries.md)
- [Build and Install FastDeploy Library on GPU Platform](en/build_and_install/gpu.md)
- [Build and Install FastDeploy Library on CPU Platform](en/build_and_install/cpu.md)
- [Build and Install FastDeploy Library on IPU Platform](en/build_and_install/ipu.md)
- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/xpu.md)
- [Build and Install on RV1126 Platform](en/build_and_install/rv1126.md)
- [Build and Install on RK3588 Platform](en/build_and_install/rknpu2.md)
- [Build and Install on A311D Platform](en/build_and_install/a311d.md)
- [Build and Install FastDeploy Library on Nvidia Jetson Platform](en/build_and_install/jetson.md)
- [Build and Install FastDeploy Library on Android Platform](en/build_and_install/android.md)
- [Build and Install FastDeploy Serving Deployment Image](../serving/docs/EN/compile-en.md)
## A Quick Start - Demos
- [Python Deployment Demo](en/quick_start/models/python.md)
- [C++ Deployment Demo](en/quick_start/models/cpp.md)
- [A Quick Start on Runtime Python](en/quick_start/runtime/python.md)
- [A Quick Start on Runtime C++](en/quick_start/runtime/cpp.md)
## API
- [Python API](https://baidu-paddle.github.io/fastdeploy-api/python/html/)
- [C++ API](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/)
- [Android Java API](../java/android)
## Performance Optimization
- [Quantization Acceleration](en/quantize.md)
## Frequently Asked Questions
- [1. How to Change Inference Backends](en/faq/how_to_change_backend.md)
- [2. How to Use FastDeploy C++ SDK on Windows Platform](en/faq/use_sdk_on_windows.md)
- [3. How to Use FastDeploy C++ SDK on Android Platform](en/faq/use_cpp_sdk_on_android.md)
- [4. Tricks of TensorRT](en/faq/tensorrt_tricks.md)
- [5. How to Develop a New Model](en/faq/develop_a_new_model.md)
## More FastDeploy Deployment Modules
- [Deploy AI Models as a Service](../serving)
- [Benchmark Testing](../benchmark)

View File

@@ -1,4 +1,4 @@
[English](README_EN.md) | 简体中文
[English](README.md) | 简体中文
# 使用文档

View File

@@ -1,47 +0,0 @@
[简体中文](README_CN.md)| English
# Tutorials
## Install
- [Install FastDeploy Prebuilt Libraries](en/build_and_install/download_prebuilt_libraries.md)
- [Build and Install FastDeploy Library on GPU Platform](en/build_and_install/gpu.md)
- [Build and Install FastDeploy Library on CPU Platform](en/build_and_install/cpu.md)
- [Build and Install FastDeploy Library on IPU Platform](en/build_and_install/ipu.md)
- [Build and Install FastDeploy Library on KunlunXin XPU Platform](en/build_and_install/xpu.md)
- [Build and Install on RV1126 Platform](en/build_and_install/rv1126.md)
- [Build and Install on RK3588 Platform](en/build_and_install/rknpu2.md)
- [Build and Install on A311D Platform](en/build_and_install/a311d.md)
- [Build and Install FastDeploy Library on Nvidia Jetson Platform](en/build_and_install/jetson.md)
- [Build and Install FastDeploy Library on Android Platform](en/build_and_install/android.md)
- [Build and Install FastDeploy Serving Deployment Image](../serving/docs/EN/compile-en.md)
## A Quick Start - Demos
- [Python Deployment Demo](en/quick_start/models/python.md)
- [C++ Deployment Demo](en/quick_start/models/cpp.md)
- [A Quick Start on Runtime Python](en/quick_start/runtime/python.md)
- [A Quick Start on Runtime C++](en/quick_start/runtime/cpp.md)
## API
- [Python API](https://baidu-paddle.github.io/fastdeploy-api/python/html/)
- [C++ API](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/)
- [Android Java API](../java/android)
## Performance Optimization
- [Quantization Acceleration](en/quantize.md)
## Frequent Q&As
- [1. How to Change Inference Backends](en/faq/how_to_change_backend.md)
- [2. How to Use FastDeploy C++ SDK on Windows Platform](en/faq/use_sdk_on_windows.md)
- [3. How to Use FastDeploy C++ SDK on Android Platform](en/faq/use_cpp_sdk_on_android.md)
- [4. Tricks of TensorRT](en/faq/tensorrt_tricks.md)
- [5. How to Develop a New Model](en/faq/develop_a_new_model.md)
## More FastDeploy Deployment Module
- [Deployment AI Model as a Service](../serving)
- [Benchmark Testing](../benchmark)

View File

@@ -1,4 +1,4 @@
[English](README_EN.md)| 简体中文
[English](README.md)| 简体中文
# 视觉模型预测结果说明
FastDeploy根据视觉模型的任务类型定义了不同的结构体(`fastdeploy/vision/common/result.h`)来表达模型预测结果,具体如下表所示

View File

@@ -1,6 +1,6 @@
English | [中文](matting_result.md)
# MattingResult keying results
# Matting Result
The MattingResult code is defined in `fastdeploy/vision/common/result.h`, and is used to store the predicted alpha-transparency values, the predicted foreground, etc.

View File

@@ -5,17 +5,18 @@ English | [中文](../../cn/faq/develop_a_new_model.md)
| Step | Description | Create or modify the files |
|:-----------:|:--------------------------------------------------------------------------------:|:-----------------------------------------:|
| [1](#step2) | Add a model implementation to the corresponding task module in FastDeploy/vision | resnet.hresnet.ccvision.h |
| [2](#step4) | Python interface binding via pybind | resnet_pybind.ccclassification_pybind.cc |
| [3](#step5) | Use Python to call Interface | resnet.py\_\_init\_\_.py |
| [1](#step2) | Add a model implementation to the corresponding task module in FastDeploy/vision | resnet.h, resnet.cc, vision.h |
| [2](#step4) | Python interface binding via pybind | resnet_pybind.cc, classification_pybind.cc |
| [3](#step5) | Use Python to call Interface | resnet.py, \_\_init\_\_.py |
After completing the above 3 steps, an external model is integrated.
If you want to contribute your code to FastDeploy, it would be great if you also add test code, documentation (README), and code comments for the added model; see [Test](#test).
## Model Integration
## Model Integration <span id="modelsupport"></span>
### Prepare the models <span id="step1"></span>
### Prepare the models
Before integrating external models, it is important to convert the trained models (.pt, .pdparams, etc.) to the model formats (.onnx, .pdmodel) that FastDeploy supports for deployment. Most open source repositories provide model conversion scripts for developers. As torchvision does not provide conversion scripts, developers can write them manually. In this demo, we convert `torchvision.models.resnet50` to `resnet50.onnx` with the following code for your reference.
@@ -40,7 +41,7 @@ torch.onnx.export(model,
Running the above script will generate a `resnet50.onnx` file.
### C++
### C++ <span id="step2"></span>
* Create `resnet.h` file
* Create a path
@@ -93,7 +94,7 @@ bool ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk) {
return true;
}
```
<span id="step3"></span>
* Add the new model file to `vision.h`
* Modify location
* FastDeploy/fastdeploy/vision.h
@@ -105,7 +106,7 @@ bool ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk) {
#endif
```
### Pybind
### Pybind <span id="step4"></span>
* Create Pybind file
@@ -146,7 +147,7 @@ bool ResNet::Predict(cv::Mat* im, ClassifyResult* result, int topk) {
}
```
### Python
### Python <span id="step5"></span>
* Create `resnet.py` file
* Create path
@@ -167,7 +168,7 @@ class ResNet(FastDeployModel):
def size(self, wh):
...
```
<span id="step6"></span>
* Import ResNet classes
* Modify path
* FastDeploy/python/fastdeploy/vision/classification/\_\_init\_\_.py (FastDeploy/Python code/fastdeploy/vision model/task name/\_\_init\_\_.py)
@@ -177,7 +178,7 @@ class ResNet(FastDeployModel):
from .contrib.resnet import ResNet
```
## Test
## Test <span id="test"></span>
### Compile

View File

@@ -92,7 +92,7 @@ In particular, for the configuration method of the dependency library required b
### 3.2 SDK usage method 2: Visual Studio 2019 creates sln project using C++ SDK
This section is for non-CMake users and describes how to create a sln project in Visual Studio 2019 to use FastDeploy C++ SDK. CMake users please read the next section directly. In addition, this section is a special thanks to "Awake to the Southern Sky" for his tutorial on FastDeploy: [How to deploy PaddleDetection target detection model on Windows using FastDeploy C++].(https://www.bilibili.com/read/cv18807232)
This section is for non-CMake users and describes how to create an sln project in Visual Studio 2019 to use the FastDeploy C++ SDK. CMake users can skip to the next section. Special thanks to "Awake to the Southern Sky" for his FastDeploy tutorial: [How to deploy PaddleDetection object detection models on Windows using FastDeploy C++](https://www.bilibili.com/read/cv18807232).
<div id="VisualStudio2019Sln"></div>
@@ -192,7 +192,7 @@ Compile successfully, you can see the exe saved in
D:\qiuyanjun\fastdeploy_test\infer_ppyoloe\x64\Release\infer_ppyoloe.exe
```
2Execute the executable file and get the inference result. First you need to copy all the dlls to the directory where the exe is located. At the same time, you also need to download and extract the pyoloe model files and test images, and then copy them to the directory where the exe is located. Special note, the exe needs to run when the dependency library configuration method, please refer to the section: [various methods to configure the exe to run the required dependency library](#CommandLineDeps)
(2) Run the executable file to get the inference result. First, copy all the DLLs to the directory containing the exe. You also need to download and extract the ppyoloe model files and test images, then copy them to the same directory. Note: for how to configure the dependency libraries required to run the exe, please refer to the section: [various methods to configure the exe to run the required dependency library](#CommandLineDeps).
![image](https://user-images.githubusercontent.com/31974251/192829545-3ea36bfc-9a54-492b-984b-2d5d39094d47.png)
@@ -331,7 +331,7 @@ Open the saved image to view the visualization results at
<img src="https://user-images.githubusercontent.com/19339784/184326520-7075e907-10ed-4fad-93f8-52d0e35d4964.jpg", width=480px, height=320px />
</div>
Special note, the exe needs to run when the dependency library configuration method, please refer to the section: [a variety of methods to configure the exe to run the required dependency library](#CommandLineDeps)
Note: for how to configure the dependency libraries required to run the exe, please refer to the section: [a variety of methods to configure the exe to run the required dependency library](#CommandLineDeps).
## 4. Multiple methods to Configure the Required Dependencies for the Exe Runtime
<div id="CommandLineDeps"></div>

View File

@@ -1,42 +1,44 @@
# FastDeploy Android AAR 包使用文档
FastDeploy Android SDK 目前支持图像分类、目标检测、OCR文字识别、语义分割和人脸检测等任务对更多的AI任务支持将会陆续添加进来。以下为各个任务对应的API文档在Android下使用FastDeploy中集成的模型只需以下几个步骤
- 模型初始化
- 调用`predict`接口
- 可视化验证(可选)
English | [简体中文](README_CN.md)
|图像分类|目标检测|OCR文字识别|人像分割|人脸检测|
# FastDeploy Android AAR Package
The FastDeploy Android SDK currently supports image classification, object detection, OCR text recognition, semantic segmentation and face detection; support for more AI tasks will be added over time. The following are the API documents for each task. To use the models integrated in FastDeploy on Android, you only need the following steps:
- Model initialization
- Calling the `predict` interface
- Visualization validation (optional)
|Image Classification|Object Detection|OCR Text Recognition|Portrait Segmentation|Face Detection|
|:---:|:---:|:---:|:---:|:---:|
|![classify](https://user-images.githubusercontent.com/31974251/203261658-600bcb09-282b-4cd3-a2f2-2c733a223b03.gif)|![detection](https://user-images.githubusercontent.com/31974251/203261763-a7513df7-e0ab-42e5-ad50-79ed7e8c8cd2.gif)|![ocr](https://user-images.githubusercontent.com/31974251/203261817-92cc4fcd-463e-4052-910c-040d586ff4e7.gif)|![seg](https://user-images.githubusercontent.com/31974251/203267867-7c51b695-65e6-402e-9826-5d6d5864da87.gif)|![face](https://user-images.githubusercontent.com/31974251/203261714-c74631dd-ec5b-4738-81a3-8dfc496f7547.gif)|
## 内容目录
## Contents
- [下载及配置SDK](#SDK)
- [图像分类API](#Classification)
- [目标检测API](#Detection)
- [语义分割API](#Segmentation)
- [OCR文字识别API](#OCR)
- [人脸检测API](#FaceDetection)
- [识别结果说明](#VisionResults)
- [RuntimeOption说明](#RuntimeOption)
- [可视化接口API](#Visualize)
- [模型使用示例](#Demo)
- [App示例工程使用方式](#App)
- [Download and Configure SDK](#SDK)
- [Image Classification API](#Classification)
- [Object Detection API](#Detection)
- [Semantic Segmentation API](#Segmentation)
- [OCR Text Recognition API](#OCR)
- [Face Detection API](#FaceDetection)
- [Recognition Result Description](#VisionResults)
- [Runtime Option Description](#RuntimeOption)
- [Visualization Interface](#Visualize)
- [Model Usage Examples](#Demo)
- [How to Use the App Sample Project](#App)
## 下载及配置SDK
## Download and Configure SDK
<div id="SDK"></div>
### 下载 FastDeploy Android SDK
Release版本Java SDK 目前仅支持Android当前版本为 1.0.0
### Download FastDeploy Android SDK
The released SDKs are listed below (the Java SDK currently supports Android only; the current version is 1.0.0):
| 平台 | 文件 | 说明 |
| Platform | File | Description |
| :--- | :--- | :---- |
| Android Java SDK | [fastdeploy-android-sdk-1.0.0.aar](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-sdk-1.0.0.aar) | NDK 20 编译产出, minSdkVersion 15,targetSdkVersion 28 |
| Android Java SDK | [fastdeploy-android-sdk-1.0.0.aar](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-sdk-1.0.0.aar) | Built with NDK 20; minSdkVersion 15, targetSdkVersion 28 |
更多预编译库信息,请参考: [download_prebuilt_libraries.md](../../docs/cn/build_and_install/download_prebuilt_libraries.md)
For more information about the prebuilt libraries, please refer to [download_prebuilt_libraries.md](../../docs/cn/build_and_install/download_prebuilt_libraries.md).
### 配置 FastDeploy Android SDK
### Configure FastDeploy Android SDK
首先,将fastdeploy-android-sdk-xxx.aar拷贝到您Android工程的libs目录下其中`xxx`表示您所下载的SDK的版本号。
First, copy fastdeploy-android-sdk-xxx.aar to the libs directory of your Android project, where `xxx` is the version number of the SDK you downloaded.
```shell
├── build.gradle
├── libs
@@ -45,7 +47,7 @@ Release版本Java SDK 目前仅支持Android当前版本为 1.0.0
└── src
```
然后在您的Android工程中的build.gradble引入FastDeploy SDK如下
Then, add the FastDeploy SDK to build.gradle in your Android project:
```java
dependencies {
implementation fileTree(include: ['*.aar'], dir: 'libs')
@@ -54,349 +56,349 @@ dependencies {
}
```
## 图像分类API
## Image Classification API
<div id="Classification"></div>
### PaddleClasModel Java API 说明
- 模型初始化 API: 模型初始化API包含两种方式方式一是通过构造函数直接初始化方式二是通过调用init函数在合适的程序节点进行初始化。PaddleClasModel初始化参数说明如下
- modelFile: String, paddle格式的模型文件路径 model.pdmodel
- paramFile: String, paddle格式的参数文件路径 model.pdiparams
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
- labelFile: String, 可选参数表示label标签文件所在路径用于可视化如 imagenet1k_label_list.txt每一行包含一个label
- option: RuntimeOption可选参数模型初始化option。如果不传入该参数则会使用默认的运行时选项。
### PaddleClasModel Java API Introduction
- Model initialization API: there are two ways to initialize a model: directly through the constructor, or by calling the init function at an appropriate point in your program. The PaddleClasModel initialization parameters are described as follows:
- modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
- paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
- configFile: String, preprocessing configuration file for model inference, e.g. infer_cfg.yml.
- labelFile: String, optional, path to the label file used for visualization, e.g. imagenet1k_label_list.txt, in which each line contains one label.
- option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// 构造函数: constructor w/o label file
public PaddleClasModel(); // 空构造函数之后可以调用init初始化
// Constructor w/o label file
public PaddleClasModel(); // Empty constructor; call init later to initialize.
public PaddleClasModel(String modelFile, String paramsFile, String configFile);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
// 手动调用init初始化: call init manually w/o label file
// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
```
- 模型预测 API模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指不保存图片以及不渲染结果到Bitmap上仅预测推理结果。预测并且可视化是指预测结果以及可视化并将可视化后的图片保存到指定的途径以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
- Model prediction API: the prediction APIs include a direct-prediction variant and a variant with visualization. Direct prediction runs inference only, without saving any image or rendering the result to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
// 直接预测:不保存图片以及不渲染结果到Bitmap
// Directly predict: do not save images or render result to Bitmap.
public ClassifyResult predict(Bitmap ARGB8888Bitmap)
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap
// Predict and visualize: save the visualized image to the specified path and render the result onto the Bitmap.
public ClassifyResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
public ClassifyResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // 只渲染 不保存图片
public ClassifyResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Render only; do not save the image.
```
- 模型资源释放 API调用 release() API 可以释放模型资源返回true表示释放成功false表示失败调用 initialized() 可以判断模型是否初始化成功true表示初始化成功false表示失败。
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
public boolean release(); // 释放native资源
public boolean initialized(); // 检查是否初始化成功
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization is successful.
```
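For reference, a minimal end-to-end sketch of the classification API above. The file paths are placeholders, `ARGB8888Bitmap` is an input Bitmap you provide, and the import paths are inferred by analogy with the PicoDet example later in this document:
```java
import com.baidu.paddle.fastdeploy.vision.ClassifyResult;
import com.baidu.paddle.fastdeploy.vision.classification.PaddleClasModel;

// Placeholder paths; point them at your unpacked model files.
PaddleClasModel model = new PaddleClasModel("model.pdmodel", "model.pdiparams", "infer_cfg.yml");
ClassifyResult result = model.predict(ARGB8888Bitmap);
if (result.initialized()) {
    for (int i = 0; i < result.mLabelIds.length; i++) {
        android.util.Log.d("FastDeploy", "label=" + result.mLabelIds[i] + ", score=" + result.mScores[i]);
    }
}
model.release(); // Free native resources when done.
```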
## 目标检测API
## Object Detection API
<div id="Detection"></div>
### PicoDet Java API 说明
- 模型初始化 API: 模型初始化API包含两种方式方式一是通过构造函数直接初始化方式二是通过调用init函数在合适的程序节点进行初始化。PicoDet初始化参数说明如下
- modelFile: String, paddle格式的模型文件路径 model.pdmodel
- paramFile: String, paddle格式的参数文件路径 model.pdiparams
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
- labelFile: String, 可选参数表示label标签文件所在路径用于可视化如 coco_label_list.txt每一行包含一个label
- option: RuntimeOption可选参数模型初始化option。如果不传入该参数则会使用默认的运行时选项。
### PicoDet Java API Introduction
- Model initialization API: there are two ways to initialize a model: directly through the constructor, or by calling the init function at an appropriate point in your program. The PicoDet initialization parameters are described as follows:
- modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
- paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
- configFile: String, preprocessing configuration file for model inference, e.g. infer_cfg.yml.
- labelFile: String, optional, path to the label file used for visualization, e.g. coco_label_list.txt, in which each line contains one label.
- option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// 构造函数: constructor w/o label file
public PicoDet(); // 空构造函数之后可以调用init初始化
// Constructor w/o label file.
public PicoDet(); // Empty constructor; call init later to initialize.
public PicoDet(String modelFile, String paramsFile, String configFile);
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile);
public PicoDet(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
// 手动调用init初始化: call init manually w/o label file
// Call init manually w/o label file.
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
```
- 模型预测 API模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指不保存图片以及不渲染结果到Bitmap上仅预测推理结果。预测并且可视化是指预测结果以及可视化并将可视化后的图片保存到指定的途径以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
- Model prediction API: the prediction APIs include a direct-prediction variant and a variant with visualization. Direct prediction runs inference only, without saving any image or rendering the result to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
// 直接预测:不保存图片以及不渲染结果到Bitmap
// Directly predict: do not save images or render result to Bitmap.
public DetectionResult predict(Bitmap ARGB8888Bitmap)
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap
// Predict and visualize: save the visualized image to the specified path and render the result onto the Bitmap.
public DetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // 只渲染 不保存图片
public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Render only; do not save the image.
```
- 模型资源释放 API调用 release() API 可以释放模型资源返回true表示释放成功false表示失败调用 initialized() 可以判断模型是否初始化成功true表示初始化成功false表示失败。
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
public boolean release(); // 释放native资源
public boolean initialized(); // 检查是否初始化成功
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization is successful.
```
## OCR文字识别API
## OCR Text Recognition API
<div id="OCR"></div>
### PP-OCRv2 & PP-OCRv3 Java API 说明
- 模型初始化 API: 模型初始化API包含两种方式方式一是通过构造函数直接初始化方式二是通过调用init函数在合适的程序节点进行初始化。 PP-OCR初始化参数说明如下
- modelFile: String, paddle格式的模型文件路径 model.pdmodel
- paramFile: String, paddle格式的参数文件路径 model.pdiparams
- labelFile: String, 可选参数表示label标签文件所在路径用于可视化如 ppocr_keys_v1.txt每一行包含一个label
- option: RuntimeOption可选参数模型初始化option。如果不传入该参数则会使用默认的运行时选项。
与其他模型不同的是,PP-OCRv2 PP-OCRv3 包含 DBDetectorClassifierRecognizer等基础模型,以及PPOCRv2PPOCRv3等pipeline类型。
### PP-OCRv2 & PP-OCRv3 Java API Introduction
- Model initialization API: there are two ways to initialize a model: directly through the constructor, or by calling the init function at an appropriate point in your program. The PP-OCR initialization parameters are described as follows:
- modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
- paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
- labelFile: String, optional, path to the label file used for visualization, e.g. ppocr_keys_v1.txt, in which each line contains one label.
- option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
Unlike other models, PP-OCRv2 and PP-OCRv3 contain base models such as DBDetector, Classifier and Recognizer, and pipeline types such as PPOCRv2 and PPOCRv3.
```java
// 构造函数: constructor w/o label file
// Constructor w/o label file
public DBDetector(String modelFile, String paramsFile);
public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
public Classifier(String modelFile, String paramsFile);
public Classifier(String modelFile, String paramsFile, RuntimeOption option);
public Recognizer(String modelFile, String paramsFile, String labelPath);
public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
public PPOCRv2(); // 空构造函数之后可以调用init初始化
public PPOCRv2(); // Empty constructor; call init later to initialize.
// Constructor w/o classifier
public PPOCRv2(DBDetector detModel, Recognizer recModel);
public PPOCRv2(DBDetector detModel, Classifier clsModel, Recognizer recModel);
public PPOCRv3(); // 空构造函数之后可以调用init初始化
public PPOCRv3(); // Empty constructor; call init later to initialize.
// Constructor w/o classifier
public PPOCRv3(DBDetector detModel, Recognizer recModel);
public PPOCRv3(DBDetector detModel, Classifier clsModel, Recognizer recModel);
```
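For reference, a sketch that assembles the PP-OCRv3 pipeline from its base models using the constructors above; the model directory names are placeholders for your downloaded PP-OCRv3 files:
```java
// Placeholder paths for the detection, classification and recognition models.
DBDetector detModel = new DBDetector("det/inference.pdmodel", "det/inference.pdiparams");
Classifier clsModel = new Classifier("cls/inference.pdmodel", "cls/inference.pdiparams");
Recognizer recModel = new Recognizer("rec/inference.pdmodel", "rec/inference.pdiparams", "ppocr_keys_v1.txt");
// Compose the pipeline; the classifier can be omitted (see the constructors above).
PPOCRv3 ocr = new PPOCRv3(detModel, clsModel, recModel);
```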
- 模型预测 API模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指不保存图片以及不渲染结果到Bitmap上仅预测推理结果。预测并且可视化是指预测结果以及可视化并将可视化后的图片保存到指定的途径以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
- Model prediction API: the prediction APIs include a direct-prediction variant and a variant with visualization. Direct prediction runs inference only, without saving any image or rendering the result to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
// 直接预测:不保存图片以及不渲染结果到Bitmap
// Directly predict: do not save images or render result to Bitmap.
public OCRResult predict(Bitmap ARGB8888Bitmap)
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap
// Predict and visualize: save the visualized image to the specified path and render the result onto the Bitmap.
public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // 只渲染 不保存图片
public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // Render only; do not save the image.
```
- 模型资源释放 API调用 release() API 可以释放模型资源返回true表示释放成功false表示失败调用 initialized() 可以判断模型是否初始化成功true表示初始化成功false表示失败。
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
public boolean release(); // 释放native资源
public boolean initialized(); // 检查是否初始化成功
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization is successful.
```
## 语义分割API
## Semantic Segmentation API
<div id="Segmentation"></div>
### PaddleSegModel Java API 说明
- 模型初始化 API: 模型初始化API包含两种方式方式一是通过构造函数直接初始化方式二是通过调用init函数在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下
- modelFile: String, paddle格式的模型文件路径 model.pdmodel
- paramFile: String, paddle格式的参数文件路径 model.pdiparams
- configFile: String, 模型推理的预处理配置文件,如 infer_cfg.yml
- option: RuntimeOption可选参数模型初始化option。如果不传入该参数则会使用默认的运行时选项。
### PaddleSegModel Java API Introduction
- Model initialization API: there are two ways to initialize a model: directly through the constructor, or by calling the init function at an appropriate point in your program. The PaddleSegModel initialization parameters are described as follows:
- modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
- paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
- configFile: String, preprocessing configuration file for model inference, e.g. infer_cfg.yml.
- option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// 构造函数: constructor w/o label file
public PaddleSegModel(); // 空构造函数之后可以调用init初始化
// Constructor w/o label file
public PaddleSegModel(); // Empty constructor; call init later to initialize.
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
// 手动调用init初始化: call init manually w/o label file
// Call init manually w/o label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```
- 模型预测 API模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指不保存图片以及不渲染结果到Bitmap上仅预测推理结果。预测并且可视化是指预测结果以及可视化并将可视化后的图片保存到指定的途径以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
- Model prediction API: the prediction APIs include a direct-prediction variant and a variant with visualization. Direct prediction runs inference only, without saving any image or rendering the result to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
// 直接预测:不保存图片以及不渲染结果到Bitmap
// Directly predict: do not save images or render result to Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap)
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap
// Predict and visualize: save the visualized image to the specified path and render the result onto the Bitmap.
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // 只渲染 不保存图片
// 修改result而非返回result关注性能的用户可以将以下接口与SegmentationResult的CxxBuffer一起使用
public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Render only; do not save the image.
// These variants fill the given result instead of returning one; performance-sensitive users can combine them with the CxxBuffer mode of SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result)
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```
- 设置竖屏或横屏模式: 对于 PP-HumanSeg系列模型必须要调用该方法设置竖屏模式为true.
- Set portrait or landscape mode: for PP-HumanSeg series models, you must call this method to set the portrait (vertical screen) mode to true.
```java
public void setVerticalScreenFlag(boolean flag);
```
- 模型资源释放 API调用 release() API 可以释放模型资源返回true表示释放成功false表示失败调用 initialized() 可以判断模型是否初始化成功true表示初始化成功false表示失败。
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
public boolean release(); // 释放native资源
public boolean initialized(); // 检查是否初始化成功
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization is successful.
```
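A sketch of the CxxBuffer pattern mentioned above, assuming a PP-HumanSeg style model; the file paths are placeholders:
```java
PaddleSegModel model = new PaddleSegModel("model.pdmodel", "model.pdiparams", "infer_cfg.yml");
model.setVerticalScreenFlag(true); // Required for PP-HumanSeg series models.
SegmentationResult result = new SegmentationResult();
result.setCxxBufferFlag(true); // Keep the data in the C++ buffer and skip the JNI copy.
model.predict(ARGB8888Bitmap, result); // Fills `result` instead of returning a new one.
// ... use the result here ...
result.releaseCxxBuffer(); // CxxBuffer mode requires a manual release.
model.release();
```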
## 人脸检测API
## Face Detection API
<div id="FaceDetection"></div>
### SCRFD Java API 说明
- 模型初始化 API: 模型初始化API包含两种方式方式一是通过构造函数直接初始化方式二是通过调用init函数在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下
- modelFile: String, paddle格式的模型文件路径 model.pdmodel
- paramFile: String, paddle格式的参数文件路径 model.pdiparams
- option: RuntimeOption可选参数模型初始化option。如果不传入该参数则会使用默认的运行时选项。
### SCRFD Java API Introduction
- Model initialization API: there are two ways to initialize a model: directly through the constructor, or by calling the init function at an appropriate point in your program. The SCRFD initialization parameters are described as follows:
- modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
- paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
- option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// 构造函数: constructor w/o label file
public SCRFD(); // 空构造函数之后可以调用init初始化
// Constructor w/o label file.
public SCRFD(); // Empty constructor; call init later to initialize.
public SCRFD(String modelFile, String paramsFile);
public SCRFD(String modelFile, String paramsFile, RuntimeOption option);
// 手动调用init初始化: call init manually w/o label file
// Call init manually w/o label file.
public boolean init(String modelFile, String paramsFile, RuntimeOption option);
```
- 模型预测 API模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指不保存图片以及不渲染结果到Bitmap上仅预测推理结果。预测并且可视化是指预测结果以及可视化并将可视化后的图片保存到指定的途径以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
- Model prediction API: the prediction APIs include a direct-prediction variant and a variant with visualization. Direct prediction runs inference only, without saving any image or rendering the result to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
// 直接预测:不保存图片以及不渲染结果到Bitmap
// Directly predict: do not save images or render result to Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap)
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold) // 设置置信度阈值和NMS阈值
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold) // Set the confidence threshold and the NMS IoU threshold.
// Predict and visualize: save the visualized image to the specified path and render the result onto the Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float confThreshold, float nmsIouThreshold);
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // 只渲染 不保存图片
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // Render only; do not save the image.
```
- 模型资源释放 API调用 release() API 可以释放模型资源返回true表示释放成功false表示失败调用 initialized() 可以判断模型是否初始化成功true表示初始化成功false表示失败。
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
public boolean release(); // 释放native资源
public boolean initialized(); // 检查是否初始化成功
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization is successful.
```
### YOLOv5Face Java API 说明
- 模型初始化 API: 模型初始化API包含两种方式方式一是通过构造函数直接初始化方式二是通过调用init函数在合适的程序节点进行初始化。PaddleSegModel初始化参数说明如下
- modelFile: String, paddle格式的模型文件路径 model.pdmodel
- paramFile: String, paddle格式的参数文件路径 model.pdiparams
- option: RuntimeOption可选参数模型初始化option。如果不传入该参数则会使用默认的运行时选项。
### YOLOv5Face Java API Introduction
- Model initialization API: there are two ways to initialize a model: directly through the constructor, or by calling the init function at an appropriate point in your program. The YOLOv5Face initialization parameters are described as follows:
- modelFile: String, path to the model file in Paddle format, e.g. model.pdmodel.
- paramFile: String, path to the parameter file in Paddle format, e.g. model.pdiparams.
- option: RuntimeOption, optional, the model initialization option. If this parameter is not passed, the default runtime option is used.
```java
// 构造函数: constructor w/o label file
public YOLOv5Face(); // 空构造函数之后可以调用init初始化
// Constructor w/o label file.
public YOLOv5Face(); // Empty constructor; call init later to initialize.
public YOLOv5Face(String modelFile, String paramsFile);
public YOLOv5Face(String modelFile, String paramsFile, RuntimeOption option);
// 手动调用init初始化: call init manually w/o label file
// Call init manually w/o label file.
public boolean init(String modelFile, String paramsFile, RuntimeOption option);
```
- 模型预测 API模型预测API包含直接预测的API以及带可视化功能的API。直接预测是指不保存图片以及不渲染结果到Bitmap上仅预测推理结果。预测并且可视化是指预测结果以及可视化并将可视化后的图片保存到指定的途径以及将可视化结果渲染在Bitmap(目前支持ARGB8888格式的Bitmap), 后续可将该Bitmap在camera中进行显示。
- Model prediction API: the prediction APIs include a direct-prediction variant and a variant with visualization. Direct prediction runs inference only, without saving any image or rendering the result to a Bitmap. Prediction with visualization runs inference, saves the visualized image to the specified path, and renders the result onto a Bitmap (currently ARGB8888 Bitmaps are supported), which can later be displayed in the camera preview.
```java
// 直接预测:不保存图片以及不渲染结果到Bitmap
// Directly predict: do not save images or render result to Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap)
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold) // 设置置信度阈值和NMS阈值
// 预测并且可视化:预测结果以及可视化,并将可视化后的图片保存到指定的途径,以及将可视化结果渲染在Bitmap
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold) // Set the confidence threshold and the NMS IoU threshold.
// Predict and visualize: save the visualized image to the specified path and render the result onto the Bitmap.
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float confThreshold, float nmsIouThreshold);
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // 只渲染 不保存图片
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // Render only; do not save the image.
```
- 模型资源释放 API调用 release() API 可以释放模型资源返回true表示释放成功false表示失败调用 initialized() 可以判断模型是否初始化成功true表示初始化成功false表示失败。
- Model resource release API: call release() to free model resources; it returns true on success and false on failure. Call initialized() to check whether the model was initialized successfully; it returns true on success and false on failure.
```java
public boolean release(); // 释放native资源
public boolean initialized(); // 检查是否初始化成功
public boolean release(); // Release native resources.
public boolean initialized(); // Check if initialization is successful.
```
## 识别结果说明
## Recognition Result Description
<div id="VisionResults"></div>
- 图像分类ClassifyResult说明
- Image classification result (ClassifyResult) description
```java
public class ClassifyResult {
public float[] mScores; // [n] 每个类别的得分(概率)
public int[] mLabelIds; // [n] 分类ID 具体的类别类型
public boolean initialized(); // 检测结果是否有效
public float[] mScores; // [n] Score (probability) of each class.
public int[] mLabelIds; // [n] Class IDs.
public boolean initialized(); // Whether the result is valid.
}
```
其他参考C++/Python对应的ClassifyResult说明: [api/vision_results/classification_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/classification_result.md)
See also the corresponding C++/Python ClassifyResult description: [api/vision_results/classification_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/classification_result.md)
- 目标检测DetectionResult说明
- Object detection result (DetectionResult) description
```java
public class DetectionResult {
public float[][] mBoxes; // [n,4] 检测框 (x1,y1,x2,y2)
public float[] mScores; // [n] 每个检测框得分(置信度,概率值)
public int[] mLabelIds; // [n] 分类ID
public boolean initialized(); // 检测结果是否有效
public float[][] mBoxes; // [n,4] Detection boxes (x1,y1,x2,y2).
public float[] mScores; // [n] Score (confidence) of each detection box.
public int[] mLabelIds; // [n] Class IDs.
public boolean initialized(); // Whether the result is valid.
}
```
其他参考C++/Python对应的DetectionResult说明: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
See also the corresponding C++/Python DetectionResult description: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
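As an illustration (the threshold value is ours, not part of the API), filtering a DetectionResult by score before drawing or logging:
```java
DetectionResult result = model.predict(ARGB8888Bitmap); // e.g. from a PicoDet model
if (result.initialized()) {
    for (int i = 0; i < result.mBoxes.length; i++) {
        if (result.mScores[i] < 0.5f) continue; // Skip low-confidence boxes.
        float[] box = result.mBoxes[i]; // (x1, y1, x2, y2)
        int labelId = result.mLabelIds[i];
        // ... draw or log the box here ...
    }
}
```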
- OCR文字识别OCRResult说明
- OCR text recognition result (OCRResult) description
```java
public class OCRResult {
public int[][] mBoxes; // [n,8] 表示单张图片检测出来的所有目标框坐标 每个框以8个int数值依次表示框的4个坐标点顺序为左下右下右上左上
public String[] mText; // [n] 表示多个文本框内被识别出来的文本内容
public float[] mRecScores; // [n] 表示文本框内识别出来的文本的置信度
public float[] mClsScores; // [n] 表示文本框的分类结果的置信度
public int[] mClsLabels; // [n] 表示文本框的方向分类类别
public boolean initialized(); // 检测结果是否有效
public int[][] mBoxes; // [n,8] Coordinates of all detected boxes in a single image; each box is 8 int values representing its 4 corner points, in the order lower-left, lower-right, upper-right, upper-left.
public String[] mText; // [n] Text content recognized in each text box.
public float[] mRecScores; // [n] Confidence of the text recognized in each box.
public float[] mClsScores; // [n] Confidence of each box's classification result.
public int[] mClsLabels; // [n] Orientation class of each text box.
public boolean initialized(); // Whether the result is valid.
}
```
其他参考C++/Python对应的OCRResult说明: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
See also the corresponding C++/Python OCRResult description: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
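A small sketch of reading an OCRResult, assuming `ocr` is a PPOCRv2/PPOCRv3 pipeline as described above:
```java
OCRResult result = ocr.predict(ARGB8888Bitmap);
if (result.initialized()) {
    for (int i = 0; i < result.mText.length; i++) {
        // Each recognized text together with its recognition confidence.
        android.util.Log.d("FastDeploy", result.mText[i] + " (score " + result.mRecScores[i] + ")");
    }
}
```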
- 语义分割SegmentationResult结果说明
- Semantic segmentation result (SegmentationResult) description
```java
public class SegmentationResult {
public int[] mLabelMap; // 预测到的label map 每个像素位置对应一个label HxW
public float[] mScoreMap; // 预测到的得分 map 每个像素位置对应一个score HxW
public long[] mShape; // label map实际的shape (H,W)
public boolean mContainScoreMap = false; // 是否包含 score map
// 用户可以选择直接使用CxxBuffer而非通过JNI拷贝到Java层
// 该方式可以一定程度上提升性能
public void setCxxBufferFlag(boolean flag); // 设置是否为CxxBuffer模式
public boolean releaseCxxBuffer(); // 手动释放CxxBuffer!!!
public boolean initialized(); // 检测结果是否有效
public int[] mLabelMap; // The predicted label map; each pixel corresponds to one label (HxW).
public float[] mScoreMap; // The predicted score map; each pixel corresponds to one score (HxW).
public long[] mShape; // The actual shape (H,W) of the label map.
public boolean mContainScoreMap = false; // Whether a score map is included.
// You can choose to use the CxxBuffer directly instead of copying it to the Java layer through JNI.
// This can improve performance to some extent.
public void setCxxBufferFlag(boolean flag); // Enable or disable CxxBuffer mode.
public boolean releaseCxxBuffer(); // Release the CxxBuffer manually!!!
public boolean initialized(); // Whether the result is valid.
}
```
其他参考C++/Python对应的SegmentationResult说明: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
See also the corresponding C++/Python SegmentationResult description: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
- 人脸检测FaceDetectionResult结果说明
- Face detection result (FaceDetectionResult) description
```java
public class FaceDetectionResult {
public float[][] mBoxes; // [n,4] 检测框 (x1,y1,x2,y2)
public float[] mScores; // [n] 每个检测框得分(置信度,概率值)
public float[][] mLandmarks; // [nx?,2] 每个检测到的人脸对应关键点
int mLandmarksPerFace = 0; // 每个人脸对应的关键点个数
public boolean initialized(); // 检测结果是否有效
public float[][] mBoxes; // [n,4] Detection boxes (x1,y1,x2,y2).
public float[] mScores; // [n] Score (confidence) of each detection box.
public float[][] mLandmarks; // [nx?,2] Keypoints of each detected face.
int mLandmarksPerFace = 0; // Number of keypoints per face.
public boolean initialized(); // Whether the result is valid.
}
```
其他参考C++/Python对应的FaceDetectionResult说明: [api/vision_results/face_detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/face_detection_result.md)
See also the corresponding C++/Python FaceDetectionResult description: [api/vision_results/face_detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/face_detection_result.md)
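A sketch of walking the detected faces and their keypoints; since mLandmarksPerFace is not public, the per-face keypoint count is derived from the array sizes here:
```java
FaceDetectionResult result = model.predict(ARGB8888Bitmap, 0.4f, 0.45f); // Illustrative thresholds.
if (result.initialized() && result.mBoxes.length > 0) {
    int perFace = result.mLandmarks.length / result.mBoxes.length; // Keypoints per face.
    for (int i = 0; i < result.mBoxes.length; i++) {
        float[] box = result.mBoxes[i]; // (x1, y1, x2, y2)
        for (int j = 0; j < perFace; j++) {
            float[] pt = result.mLandmarks[i * perFace + j]; // (x, y) of keypoint j of face i.
        }
    }
}
```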
## RuntimeOption说明
## Runtime Option Description
<div id="RuntimeOption"></div>
- RuntimeOption设置说明
- RuntimeOption setting description
```java
public class RuntimeOption {
public void enableLiteFp16(); // 开启fp16精度推理
public void disableLiteFP16(); // 关闭fp16精度推理
public void enableLiteInt8(); // 开启int8精度推理针对量化模型
public void disableLiteInt8(); // 关闭int8精度推理
public void setCpuThreadNum(int threadNum); // 设置线程数
public void setLitePowerMode(LitePowerMode mode); // 设置能耗模式
public void setLitePowerMode(String modeStr); // 通过字符串形式设置能耗模式
public void enableLiteFp16(); // Enable fp16 precision inference.
public void disableLiteFP16(); // Disable fp16 precision inference.
public void enableLiteInt8(); // Enable int8 precision inference (for quantized models).
public void disableLiteInt8(); // Disable int8 precision inference.
public void setCpuThreadNum(int threadNum); // Set the number of threads.
public void setLitePowerMode(LitePowerMode mode); // Set the power mode.
public void setLitePowerMode(String modeStr); // Set the power mode by string.
}
```
## 可视化接口
## Visualization Interface
<div id="Visualize"></div>
FastDeploy Android SDK同时提供一些可视化接口可用于快速验证推理结果。以下接口均把结果result渲染在输入的Bitmap上。具体的可视化API接口如下
The FastDeploy Android SDK also provides visualization APIs that can be used to quickly verify inference results. The following interfaces all render the result onto the input Bitmap. The specific APIs are:
```java
public class Visualize {
// 默认参数接口
// Default-parameter interfaces.
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result);
public static boolean visFaceDetection(Bitmap ARGB8888Bitmap, FaceDetectionResult result);
public static boolean visOcr(Bitmap ARGB8888Bitmap, OCRResult result);
public static boolean visSegmentation(Bitmap ARGB8888Bitmap, SegmentationResult result);
// 有可设置参数的可视化接口
// visDetection: 可设置阈值大于该阈值的框进行绘制、框线大小、字体大小、类别labels等
// Visualization interfaces with configurable parameters.
// visDetection: you can set the score threshold (only boxes above it are drawn), line width, font size, class labels, etc.
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, float scoreThreshold);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, float scoreThreshold, int lineSize, float fontSize);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, String[] labels);
public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, String[] labels, float scoreThreshold, int lineSize, float fontSize);
// visClassification: 可设置阈值大于该阈值的框进行绘制、字体大小、类别labels等
// visClassification: you can set the score threshold (only results above it are drawn), font size, labels, etc.
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, float scoreThreshold,float fontSize);
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, String[] labels);
public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, String[] labels, float scoreThreshold,float fontSize);
// visSegmentation: weight背景权重
// visSegmentation: weight is the background blending weight.
public static boolean visSegmentation(Bitmap ARGB8888Bitmap, SegmentationResult result, float weight);
// visFaceDetection: 线大小、字体大小等
// visFaceDetection: line width, font size, etc.
public static boolean visFaceDetection(Bitmap ARGB8888Bitmap, FaceDetectionResult result, int lineSize, float fontSize);
}
```
对应的可视化类型为:
The corresponding visualization class is imported as follows:
```java
import com.baidu.paddle.fastdeploy.vision.Visualize;
```
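For example, one way to combine a plain prediction with manual visualization (instead of the predict overloads that render internally); the threshold value is illustrative:
```java
DetectionResult result = model.predict(ARGB8888Bitmap); // Plain prediction, nothing rendered yet.
Visualize.visDetection(ARGB8888Bitmap, result, 0.5f); // Draw boxes scoring above 0.5.
// ARGB8888Bitmap now contains the rendered result and can be shown in an ImageView.
```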
## 模型使用示例
## Model Usage Examples
<div id="Demo"></div>
- 模型调用示例1使用构造函数以及默认的RuntimeOption
- Example 1: using the constructor and the default RuntimeOption.
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
@@ -405,90 +407,92 @@ import android.opengl.GLES20;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
// 初始化模型
// Initialize model.
PicoDet model = new PicoDet("picodet_s_320_coco_lcnet/model.pdmodel",
"picodet_s_320_coco_lcnet/model.pdiparams",
"picodet_s_320_coco_lcnet/infer_cfg.yml");
// 模型推理
// Model inference.
DetectionResult result = model.predict(ARGB8888ImageBitmap);
// 释放模型资源
// Release model resources.
model.release();
```
- 模型调用示例2: 在合适的程序节点手动调用init并自定义RuntimeOption
- Example 2: manually calling init at an appropriate point in the program, with a custom RuntimeOption.
```java
// import 同上 ...
// Imports as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
// 新建空模型
// Create a new empty model.
PicoDet model = new PicoDet();
// 模型路径
// Model path.
String modelFile = "picodet_s_320_coco_lcnet/model.pdmodel";
String paramFile = "picodet_s_320_coco_lcnet/model.pdiparams";
String configFile = "picodet_s_320_coco_lcnet/infer_cfg.yml";
// 指定RuntimeOption
// Set RuntimeOption.
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
// 使用init函数初始化
// Initialize with the init function.
model.init(modelFile, paramFile, configFile, option);
// Bitmap读取、模型预测、资源释放 同上 ...
// Bitmap reading, model prediction and resource release are the same as above ...
```
## App示例工程使用方式
## How to Use the App Sample Project
<div id="App"></div>
FastDeploy在java/android/app目录下提供了一些示例工程以下将介绍示例工程的使用方式。由于java/android目录下同时还包含JNI工程因此想要使用示例工程的用户还需要配置NDK如果您只关心Java API的使用并且不想配置NDK可以直接跳转到以下详细的案例链接。
FastDeploy provides several sample projects in the java/android/app directory; this section describes how to use them. Since the java/android directory also contains JNI projects, users of the sample projects also need to configure the NDK. If you only care about the Java API and do not want to configure the NDK, you can jump directly to the detailed examples linked below.
- [图像分类App示例工程](../../examples/vision/classification/paddleclas/android)
- [目标检测App示例工程](../../examples/vision/detection/paddledetection/android)
- [OCR文字识别App示例工程](../../examples/vision/ocr/PP-OCRv2/android)
- [人像分割App示例工程](../../examples/vision/segmentation/paddleseg/android)
- [人脸检测App示例工程](../../examples/vision/facedet/scrfd/android)
- [App sample project of image classification](../../examples/vision/classification/paddleclas/android)
- [App sample project of object detection](../../examples/vision/detection/paddledetection/android)
- [App sample project of OCR text recognition](../../examples/vision/ocr/PP-OCRv2/android)
- [App sample project of portrait segmentation](../../examples/vision/segmentation/paddleseg/android)
- [App sample project of face detection](../../examples/vision/facedet/scrfd/android)
### 环境准备
### Environment Preparation
1. 在本地环境安装好 Android Studio 工具,详细安装方法请见[Android Stuido 官网](https://developer.android.com/studio)
2. 准备一部 Android 手机,并开启 USB 调试模式。开启方法: `手机设置 -> 查找开发者选项 -> 打开开发者选项和 USB 调试模式`
1. Install Android Studio in your local environment; see the [Android Studio official website](https://developer.android.com/studio) for detailed installation instructions.
2. Get an Android phone and turn on USB debugging mode. To turn it on: `Phone Settings -> Find Developer Options -> Turn on Developer Options and USB Debug Mode`.
**注意**:如果您的 Android Studio 尚未配置 NDK ,请根据 Android Studio 用户指南中的[安装及配置 NDK 和 CMake ](https://developer.android.com/studio/projects/install-ndk)内容,预先配置好 NDK 。您可以选择最新的 NDK 版本,或者使用 FastDeploy Android 预测库版本一样的 NDK
### 部署步骤
**Notes**: If your Android Studio has no NDK configured yet, please configure one according to [Install and configure the NDK and CMake](https://developer.android.com/studio/projects/install-ndk) in the Android Studio User Guide. You can either choose the latest NDK version or use the same version as the FastDeploy Android prediction library.
1. App示例工程位于 `fastdeploy/java/android/app` 目录
2. 用 Android Studio 打开 `fastdeploy/java/android` 工程,注意是`java/android`目录
3. 手机连接电脑,打开 USB 调试和文件传输模式,并在 Android Studio 上连接自己的手机设备(手机需要开启允许从 USB 安装软件权限)
### Deployment Steps
1. The App sample project is located in the `fastdeploy/java/android/app` directory.
2. Open the `fastdeploy/java/android` project with Android Studio; note that the directory is `java/android`.
3. Connect your phone to the computer, turn on USB debugging and file transfer mode, and connect your device in Android Studio (the phone must allow app installation over USB).
<p align="center">
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
</p>
> **注意:**
>> 如果您在导入项目、编译或者运行过程中遇到 NDK 配置错误的提示,请打开 ` File > Project Structure > SDK Location`,修改 `Andriod NDK location` 为您本机配置的 NDK 所在路径。本工程默认使用的NDK版本为20.
>> 如果您是通过 Andriod Studio 的 SDK Tools 下载的 NDK (见本章节"环境准备"),可以直接点击下拉框选择默认路径。
>> 还有一种 NDK 配置方法,你可以在 `java/android/local.properties` 文件中手动完成 NDK 路径配置,如下图所示
>> 如果以上步骤仍旧无法解决 NDK 配置错误,请尝试根据 Andriod Studio 官方文档中的[更新 Android Gradle 插件](https://developer.android.com/studio/releases/gradle-plugin?hl=zh-cn#updating-plugin)章节尝试更新Android Gradle plugin版本。
> **Notes:**
>> If you encounter an NDK configuration error while importing, compiling or running the project, please open `File > Project Structure > SDK Location` and change `Android NDK location` to the NDK path configured on your machine. The NDK version used by this project defaults to 20.
>> If you downloaded the NDK through the SDK Tools in Android Studio (see "Environment Preparation" in this section), you can select the default path directly from the drop-down box.
>> There is another way to configure the NDK: set the path manually in the file `java/android/local.properties`, as shown in the figure below.
>> If the above steps still do not resolve the NDK configuration error, please try updating the Android Gradle plugin version according to [Update the Android Gradle plugin](https://developer.android.com/studio/releases/gradle-plugin?hl=zh-cn#updating-plugin) in the official Android Studio documentation.
4. Click the Run button to automatically compile the APP and install it on your phone. (This process automatically downloads the pre-compiled FastDeploy Android library and the model files; an internet connection is required.)
The success screens are shown below. Figure 1: the APP installed on the phone; Figure 2: the APP after opening, which automatically recognizes and marks the objects in the picture; Figure 3: the APP setting options; click Settings in the upper right corner to try different options.
| APP icon | APP effect | APP setting options |
| --- | --- | --- |
| ![app_pic](https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg) | ![app_res](https://user-images.githubusercontent.com/31974251/197169609-bb214af3-d6e7-4433-bb96-1225cddd441c.jpg) | ![app_setup](https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg) |
### Switching Between Different Scenarios
To try Apps for different scenarios, the App sample project only needs to switch between different Activities in AndroidManifest.xml.
<p align="center">
<img width="788" alt="image" src="https://user-images.githubusercontent.com/31974251/203258255-b422d3e2-6004-465f-86b6-9fa61a27c6c2.png">
</p>
- Image classification scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".classification.ClassificationMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".classification.ClassificationSettingsActivity">
    </activity>
    </application>
</manifest>
```
- Object detection scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".detection.DetectionMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".detection.DetectionSettingsActivity">
    </activity>
    </application>
</manifest>
```
- OCR text recognition scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".ocr.OcrMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".ocr.OcrSettingsActivity">
    </activity>
    </application>
</manifest>
```
- Portrait segmentation scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".segmentation.SegmentationMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".segmentation.SegmentationSettingsActivity">
    </activity>
    </application>
</manifest>
```
- Face detection scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".facedet.FaceDetMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".facedet.FaceDetSettingsActivity">
    </activity>
    </application>
</manifest>
```

562
java/android/README_CN.md Normal file
View File

@@ -0,0 +1,562 @@
Simplified Chinese | [English](README.md)
# FastDeploy Android AAR Package Usage Documentation
The FastDeploy Android SDK currently supports tasks such as image classification, object detection, OCR text recognition, semantic segmentation and face detection; support for more AI tasks will be added over time. The API documentation for each task is given below. Using a model integrated into FastDeploy on Android only takes the following steps:
- Model initialization
- Calling the `predict` interface
- Visualization for verification (optional)
|Image Classification|Object Detection|OCR Text Recognition|Portrait Segmentation|Face Detection|
|:---:|:---:|:---:|:---:|:---:|
|![classify](https://user-images.githubusercontent.com/31974251/203261658-600bcb09-282b-4cd3-a2f2-2c733a223b03.gif)|![detection](https://user-images.githubusercontent.com/31974251/203261763-a7513df7-e0ab-42e5-ad50-79ed7e8c8cd2.gif)|![ocr](https://user-images.githubusercontent.com/31974251/203261817-92cc4fcd-463e-4052-910c-040d586ff4e7.gif)|![seg](https://user-images.githubusercontent.com/31974251/203267867-7c51b695-65e6-402e-9826-5d6d5864da87.gif)|![face](https://user-images.githubusercontent.com/31974251/203261714-c74631dd-ec5b-4738-81a3-8dfc496f7547.gif)|
## Table of Contents
- [Download and Configure the SDK](#SDK)
- [Image Classification API](#Classification)
- [Object Detection API](#Detection)
- [Semantic Segmentation API](#Segmentation)
- [OCR Text Recognition API](#OCR)
- [Face Detection API](#FaceDetection)
- [Recognition Result Description](#VisionResults)
- [RuntimeOption Description](#RuntimeOption)
- [Visualization API](#Visualize)
- [Model Usage Examples](#Demo)
- [How to Use the App Sample Projects](#App)
## Download and Configure the SDK
<div id="SDK"></div>
### Download the FastDeploy Android SDK
The release version of the Java SDK (currently Android only) is 1.0.0.
| Platform | File | Description |
| :--- | :--- | :---- |
| Android Java SDK | [fastdeploy-android-sdk-1.0.0.aar](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-sdk-1.0.0.aar) | Built with NDK 20, minSdkVersion 15, targetSdkVersion 28 |
For more information about the prebuilt libraries, please refer to: [download_prebuilt_libraries.md](../../docs/cn/build_and_install/download_prebuilt_libraries.md)
### Configure the FastDeploy Android SDK
First, copy fastdeploy-android-sdk-xxx.aar into the libs directory of your Android project, where `xxx` is the version number of the SDK you downloaded.
```shell
├── build.gradle
├── libs
│   └── fastdeploy-android-sdk-xxx.aar
├── proguard-rules.pro
└── src
```
Then include the FastDeploy SDK in the build.gradle of your Android project as follows:
```java
dependencies {
implementation fileTree(include: ['*.aar'], dir: 'libs')
implementation 'com.android.support:appcompat-v7:28.0.0'
// ...
}
```
## Image Classification API
<div id="Classification"></div>
### PaddleClasModel Java API Description
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The initialization parameters of PaddleClasModel are described as follows:
  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel
  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams
  - configFile: String, preprocessing configuration file for model inference, e.g. infer_cfg.yml
  - labelFile: String, optional, path to the label file, used for visualization, e.g. imagenet1k_label_list.txt, in which each line contains one label
  - option: RuntimeOption, optional, the option for model initialization. If this parameter is not passed, the default runtime option is used.
```java
// Constructors: with or without the label file
public PaddleClasModel(); // An empty constructor; init can be called later
public PaddleClasModel(String modelFile, String paramsFile, String configFile);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public PaddleClasModel(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
// Call init manually: with or without the label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes an API for direct prediction and APIs with visualization. Direct prediction means running inference only, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means running inference, visualizing the result, saving the visualized image to a specified path, and rendering the result onto a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can later be displayed in the camera view.
```java
// Direct prediction: no image saving and no rendering of the result onto a Bitmap
public ClassifyResult predict(Bitmap ARGB8888Bitmap)
// Prediction with visualization: visualize the result, save the visualized image to the specified path, and render the result onto a Bitmap
public ClassifyResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
public ClassifyResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Render only, without saving the image
```
- Model resource release API: call release() to release the model resources; it returns true if the release succeeds and false otherwise. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
public boolean release(); // Release native resources
public boolean initialized(); // Check whether initialization succeeded
```
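To make the signatures above concrete, here is a minimal sketch using the label-file constructor and the render-only `predict` overload. The model paths are hypothetical placeholders, and the `classification` package path is assumed by analogy with the detection example later in this document.
```java
import android.graphics.Bitmap;
import com.baidu.paddle.fastdeploy.vision.ClassifyResult;
import com.baidu.paddle.fastdeploy.vision.classification.PaddleClasModel; // assumed package path

// Hypothetical paths; replace with the actual location of your unpacked model files.
PaddleClasModel model = new PaddleClasModel(
        "mobilenetv1/model.pdmodel",
        "mobilenetv1/model.pdiparams",
        "mobilenetv1/infer_cfg.yml",
        "mobilenetv1/imagenet1k_label_list.txt");
// Render results with score > 0.4 onto the Bitmap without saving an image file.
ClassifyResult result = model.predict(ARGB8888ImageBitmap, true, 0.4f);
model.release();
```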
## Object Detection API
<div id="Detection"></div>
### PicoDet Java API Description
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The initialization parameters of PicoDet are described as follows:
  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel
  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams
  - configFile: String, preprocessing configuration file for model inference, e.g. infer_cfg.yml
  - labelFile: String, optional, path to the label file, used for visualization, e.g. coco_label_list.txt, in which each line contains one label
  - option: RuntimeOption, optional, the option for model initialization. If this parameter is not passed, the default runtime option is used.
```java
// Constructors: with or without the label file
public PicoDet(); // An empty constructor; init can be called later
public PicoDet(String modelFile, String paramsFile, String configFile);
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile);
public PicoDet(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public PicoDet(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
// Call init manually: with or without the label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
public boolean init(String modelFile, String paramsFile, String configFile, String labelFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes an API for direct prediction and APIs with visualization. Direct prediction means running inference only, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means running inference, visualizing the result, saving the visualized image to a specified path, and rendering the result onto a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can later be displayed in the camera view.
```java
// Direct prediction: no image saving and no rendering of the result onto a Bitmap
public DetectionResult predict(Bitmap ARGB8888Bitmap)
// Prediction with visualization: visualize the result, save the visualized image to the specified path, and render the result onto a Bitmap
public DetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float scoreThreshold);
public DetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float scoreThreshold); // Render only, without saving the image
```
- Model resource release API: call release() to release the model resources; it returns true if the release succeeds and false otherwise. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
public boolean release(); // Release native resources
public boolean initialized(); // Check whether initialization succeeded
```
## OCR Text Recognition API
<div id="OCR"></div>
### PP-OCRv2 & PP-OCRv3 Java API Description
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The initialization parameters of PP-OCR are described as follows:
  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel
  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams
  - labelFile: String, optional, path to the label file, used for visualization, e.g. ppocr_keys_v1.txt, in which each line contains one label
  - option: RuntimeOption, optional, the option for model initialization. If this parameter is not passed, the default runtime option is used.
Unlike other models, PP-OCRv2 and PP-OCRv3 consist of base models such as DBDetector, Classifier and Recognizer, as well as pipeline types such as PPOCRv2 and PPOCRv3.
```java
// Constructors: with or without the label file
public DBDetector(String modelFile, String paramsFile);
public DBDetector(String modelFile, String paramsFile, RuntimeOption option);
public Classifier(String modelFile, String paramsFile);
public Classifier(String modelFile, String paramsFile, RuntimeOption option);
public Recognizer(String modelFile, String paramsFile, String labelPath);
public Recognizer(String modelFile, String paramsFile, String labelPath, RuntimeOption option);
public PPOCRv2(); // An empty constructor; init can be called later
// Constructor w/o classifier
public PPOCRv2(DBDetector detModel, Recognizer recModel);
public PPOCRv2(DBDetector detModel, Classifier clsModel, Recognizer recModel);
public PPOCRv3(); // An empty constructor; init can be called later
// Constructor w/o classifier
public PPOCRv3(DBDetector detModel, Recognizer recModel);
public PPOCRv3(DBDetector detModel, Classifier clsModel, Recognizer recModel);
```
- Model prediction API: the prediction API includes an API for direct prediction and APIs with visualization. Direct prediction means running inference only, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means running inference, visualizing the result, saving the visualized image to a specified path, and rendering the result onto a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can later be displayed in the camera view.
```java
// Direct prediction: no image saving and no rendering of the result onto a Bitmap
public OCRResult predict(Bitmap ARGB8888Bitmap)
// Prediction with visualization: visualize the result, save the visualized image to the specified path, and render the result onto a Bitmap
public OCRResult predict(Bitmap ARGB8888Bitmap, String savedImagePath);
public OCRResult predict(Bitmap ARGB8888Bitmap, boolean rendering); // Render only, without saving the image
```
- Model resource release API: call release() to release the model resources; it returns true if the release succeeds and false otherwise. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
public boolean release(); // Release native resources
public boolean initialized(); // Check whether initialization succeeded
```
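As a sketch of how the base models are assembled into a pipeline, the snippet below builds a PPOCRv3 pipeline from a detector, an optional direction classifier and a recognizer, then runs prediction. The model file names are hypothetical placeholders, and the import paths for the OCR classes are assumptions based on the SDK package layout used elsewhere in this document; adjust them to the actual packages.
```java
import android.graphics.Bitmap;
import com.baidu.paddle.fastdeploy.vision.OCRResult;
// Assumed import paths; verify against the actual SDK packages.
import com.baidu.paddle.fastdeploy.vision.ocr.DBDetector;
import com.baidu.paddle.fastdeploy.vision.ocr.Classifier;
import com.baidu.paddle.fastdeploy.vision.ocr.Recognizer;
import com.baidu.paddle.fastdeploy.pipeline.PPOCRv3;

// Hypothetical model paths; replace with your actual unpacked model files.
DBDetector det = new DBDetector("ocr_det/model.pdmodel", "ocr_det/model.pdiparams");
Classifier cls = new Classifier("ocr_cls/model.pdmodel", "ocr_cls/model.pdiparams");
Recognizer rec = new Recognizer("ocr_rec/model.pdmodel", "ocr_rec/model.pdiparams",
        "ppocr_keys_v1.txt");
PPOCRv3 ocr = new PPOCRv3(det, cls, rec); // or new PPOCRv3(det, rec) without the classifier
OCRResult result = ocr.predict(ARGB8888ImageBitmap);
ocr.release();
```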
## Semantic Segmentation API
<div id="Segmentation"></div>
### PaddleSegModel Java API Description
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The initialization parameters of PaddleSegModel are described as follows:
  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel
  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams
  - configFile: String, preprocessing configuration file for model inference, e.g. infer_cfg.yml
  - option: RuntimeOption, optional, the option for model initialization. If this parameter is not passed, the default runtime option is used.
```java
// Constructors: with or without the label file
public PaddleSegModel(); // An empty constructor; init can be called later
public PaddleSegModel(String modelFile, String paramsFile, String configFile);
public PaddleSegModel(String modelFile, String paramsFile, String configFile, RuntimeOption option);
// Call init manually: with or without the label file
public boolean init(String modelFile, String paramsFile, String configFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes an API for direct prediction and APIs with visualization. Direct prediction means running inference only, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means running inference, visualizing the result, saving the visualized image to a specified path, and rendering the result onto a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can later be displayed in the camera view.
```java
// Direct prediction: no image saving and no rendering of the result onto a Bitmap
public SegmentationResult predict(Bitmap ARGB8888Bitmap)
// Prediction with visualization: visualize the result, save the visualized image to the specified path, and render the result onto a Bitmap
public SegmentationResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float weight);
public SegmentationResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float weight); // Render only, without saving the image
// Modify the result instead of returning it. Users who care about performance can use the following interfaces together with the CxxBuffer of SegmentationResult.
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result)
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, String savedImagePath, float weight);
public boolean predict(Bitmap ARGB8888Bitmap, SegmentationResult result, boolean rendering, float weight);
```
- Set portrait or landscape mode: for the PP-HumanSeg series models, this method must be called to set portrait mode to true.
```java
public void setVerticalScreenFlag(boolean flag);
```
- Model resource release API: call release() to release the model resources; it returns true if the release succeeds and false otherwise. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
public boolean release(); // Release native resources
public boolean initialized(); // Check whether initialization succeeded
```
## Face Detection API
<div id="FaceDetection"></div>
### SCRFD Java API Description
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The initialization parameters of SCRFD are described as follows:
  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel
  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams
  - option: RuntimeOption, optional, the option for model initialization. If this parameter is not passed, the default runtime option is used.
```java
// Constructors: with or without the label file
public SCRFD(); // An empty constructor; init can be called later
public SCRFD(String modelFile, String paramsFile);
public SCRFD(String modelFile, String paramsFile, RuntimeOption option);
// Call init manually: with or without the label file
public boolean init(String modelFile, String paramsFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes an API for direct prediction and APIs with visualization. Direct prediction means running inference only, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means running inference, visualizing the result, saving the visualized image to a specified path, and rendering the result onto a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can later be displayed in the camera view.
```java
// Direct prediction: no image saving and no rendering of the result onto a Bitmap
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap)
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold) // Set the confidence threshold and the NMS threshold
// Prediction with visualization: visualize the result, save the visualized image to the specified path, and render the result onto a Bitmap
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float confThreshold, float nmsIouThreshold);
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // Render only, without saving the image
```
- Model resource release API: call release() to release the model resources; it returns true if the release succeeds and false otherwise. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
public boolean release(); // Release native resources
public boolean initialized(); // Check whether initialization succeeded
```
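A minimal sketch of the threshold-taking overload; the model paths are hypothetical placeholders and the `facedet` import path is an assumption to verify against the actual SDK packages.
```java
import android.graphics.Bitmap;
import com.baidu.paddle.fastdeploy.vision.FaceDetectionResult;
import com.baidu.paddle.fastdeploy.vision.facedet.SCRFD; // assumed package path

SCRFD model = new SCRFD("scrfd/model.pdmodel", "scrfd/model.pdiparams"); // hypothetical paths
// Keep faces with confidence > 0.3 and suppress overlaps with an NMS IoU threshold of 0.45.
FaceDetectionResult result = model.predict(ARGB8888ImageBitmap, 0.3f, 0.45f);
model.release();
```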
### YOLOv5Face Java API Description
- Model initialization API: the model can be initialized in two ways. One is to initialize it directly through the constructor; the other is to call the init function at an appropriate point in your program. The initialization parameters of YOLOv5Face are described as follows:
  - modelFile: String, path to the model file in paddle format, e.g. model.pdmodel
  - paramFile: String, path to the parameter file in paddle format, e.g. model.pdiparams
  - option: RuntimeOption, optional, the option for model initialization. If this parameter is not passed, the default runtime option is used.
```java
// Constructors: with or without the label file
public YOLOv5Face(); // An empty constructor; init can be called later
public YOLOv5Face(String modelFile, String paramsFile);
public YOLOv5Face(String modelFile, String paramsFile, RuntimeOption option);
// Call init manually: with or without the label file
public boolean init(String modelFile, String paramsFile, RuntimeOption option);
```
- Model prediction API: the prediction API includes an API for direct prediction and APIs with visualization. Direct prediction means running inference only, without saving an image or rendering the result onto a Bitmap. Prediction with visualization means running inference, visualizing the result, saving the visualized image to a specified path, and rendering the result onto a Bitmap (currently Bitmaps in ARGB8888 format are supported); the Bitmap can later be displayed in the camera view.
```java
// Direct prediction: no image saving and no rendering of the result onto a Bitmap
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap)
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, float confThreshold, float nmsIouThreshold) // Set the confidence threshold and the NMS threshold
// Prediction with visualization: visualize the result, save the visualized image to the specified path, and render the result onto a Bitmap
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, String savedImagePath, float confThreshold, float nmsIouThreshold);
public FaceDetectionResult predict(Bitmap ARGB8888Bitmap, boolean rendering, float confThreshold, float nmsIouThreshold); // Render only, without saving the image
```
- Model resource release API: call release() to release the model resources; it returns true if the release succeeds and false otherwise. Call initialized() to check whether the model was initialized successfully; true means success and false means failure.
```java
public boolean release(); // Release native resources
public boolean initialized(); // Check whether initialization succeeded
```
## Recognition Result Description
<div id="VisionResults"></div>
- Image classification ClassifyResult description
```java
public class ClassifyResult {
  public float[] mScores;  // [n] the score (probability) of each class
  public int[] mLabelIds;  // [n] the class IDs, i.e. the concrete class labels
  public boolean initialized(); // Whether the result is valid
}
```
For the corresponding C++/Python ClassifyResult description, refer to: [api/vision_results/classification_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/classification_result.md)
- Object detection DetectionResult description
```java
public class DetectionResult {
  public float[][] mBoxes; // [n,4] detection boxes (x1,y1,x2,y2)
  public float[] mScores;  // [n] the score (confidence, probability) of each detection box
  public int[] mLabelIds;  // [n] the class IDs
  public boolean initialized(); // Whether the result is valid
}
```
For the corresponding C++/Python DetectionResult description, refer to: [api/vision_results/detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/detection_result.md)
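The fields above can be consumed as parallel arrays. For example, the following sketch logs every box whose score exceeds a chosen threshold (0.5 is an arbitrary example value):
```java
import android.util.Log;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;

void logBoxes(DetectionResult result) {
    for (int i = 0; i < result.mScores.length; i++) {
        if (result.mScores[i] <= 0.5f) continue; // arbitrary example threshold
        float[] box = result.mBoxes[i]; // (x1, y1, x2, y2)
        Log.d("FastDeploy", "label=" + result.mLabelIds[i]
                + " score=" + result.mScores[i]
                + " box=(" + box[0] + "," + box[1] + "," + box[2] + "," + box[3] + ")");
    }
}
```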
- OCR text recognition OCRResult description
```java
public class OCRResult {
  public int[][] mBoxes;  // [n,8] the coordinates of all target boxes detected in a single image; each box is represented by 8 int values giving its 4 corner points, in the order bottom-left, bottom-right, top-right, top-left
  public String[] mText;  // [n] the text content recognized in each text box
  public float[] mRecScores; // [n] the confidence of the text recognized in each text box
  public float[] mClsScores; // [n] the confidence of the classification result of each text box
  public int[] mClsLabels;   // [n] the direction classification category of each text box
  public boolean initialized(); // Whether the result is valid
}
```
For the corresponding C++/Python OCRResult description, refer to: [api/vision_results/ocr_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/ocr_result.md)
- Semantic segmentation SegmentationResult description
```java
public class SegmentationResult {
  public int[] mLabelMap;   // The predicted label map; each pixel position corresponds to one label, HxW
  public float[] mScoreMap; // The predicted score map; each pixel position corresponds to one score, HxW
  public long[] mShape;     // The actual shape (H,W) of the label map
  public boolean mContainScoreMap = false; // Whether a score map is included
  // Users can choose to use the CxxBuffer directly instead of copying the data to the Java layer via JNI.
  // This can improve performance to some extent.
  public void setCxxBufferFlag(boolean flag); // Enable or disable CxxBuffer mode
  public boolean releaseCxxBuffer(); // Release the CxxBuffer manually!!!
  public boolean initialized(); // Whether the result is valid
}
```
For the corresponding C++/Python SegmentationResult description, refer to: [api/vision_results/segmentation_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/segmentation_result.md)
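A sketch of the CxxBuffer usage pattern described above, combined with the result-modifying `predict` overload of PaddleSegModel; whether SegmentationResult has a public no-argument constructor is an assumption here.
```java
import android.graphics.Bitmap;
import com.baidu.paddle.fastdeploy.vision.SegmentationResult;

SegmentationResult result = new SegmentationResult(); // assumed public no-arg constructor
result.setCxxBufferFlag(true); // keep data in the native buffer instead of copying via JNI
model.predict(ARGB8888ImageBitmap, result, true, 0.4f); // model is an initialized PaddleSegModel
// ... consume the result ...
result.releaseCxxBuffer(); // must be released manually in CxxBuffer mode
```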
- Face detection FaceDetectionResult description
```java
public class FaceDetectionResult {
  public float[][] mBoxes; // [n,4] detection boxes (x1,y1,x2,y2)
  public float[] mScores;  // [n] the score (confidence, probability) of each detection box
  public float[][] mLandmarks; // [n*?,2] the landmarks of each detected face
  int mLandmarksPerFace = 0;   // The number of landmarks per face
  public boolean initialized(); // Whether the result is valid
}
```
For the corresponding C++/Python FaceDetectionResult description, refer to: [api/vision_results/face_detection_result.md](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/api/vision_results/face_detection_result.md)
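Since `mLandmarks` stores the landmarks of all faces in one flattened `[n*k, 2]` array, the number of landmarks per face can be recovered from the array sizes, as in this sketch:
```java
import com.baidu.paddle.fastdeploy.vision.FaceDetectionResult;

void readLandmarks(FaceDetectionResult result) {
    int n = result.mBoxes.length; // number of detected faces
    int k = (n > 0) ? result.mLandmarks.length / n : 0; // landmarks per face
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < k; j++) {
            float[] p = result.mLandmarks[i * k + j];
            // p[0], p[1]: x and y of the j-th landmark of the i-th face
        }
    }
}
```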
## RuntimeOption Description
<div id="RuntimeOption"></div>
- RuntimeOption settings
```java
public class RuntimeOption {
  public void enableLiteFp16(); // Enable fp16 inference
  public void disableLiteFP16(); // Disable fp16 inference
  public void enableLiteInt8(); // Enable int8 inference, for quantized models
  public void disableLiteInt8(); // Disable int8 inference
  public void setCpuThreadNum(int threadNum); // Set the number of threads
  public void setLitePowerMode(LitePowerMode mode); // Set the power mode
  public void setLitePowerMode(String modeStr); // Set the power mode by a string
}
```
## Visualization API
<div id="Visualize"></div>
The FastDeploy Android SDK also provides some visualization interfaces, which can be used to quickly verify the inference results. The following interfaces all render the result onto the input Bitmap. The specific visualization APIs are as follows:
```java
public class Visualize {
  // Interfaces with default parameters
  public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result);
  public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result);
  public static boolean visFaceDetection(Bitmap ARGB8888Bitmap, FaceDetectionResult result);
  public static boolean visOcr(Bitmap ARGB8888Bitmap, OCRResult result);
  public static boolean visSegmentation(Bitmap ARGB8888Bitmap, SegmentationResult result);
  // Visualization interfaces with configurable parameters
  // visDetection: can set the score threshold (only boxes above the threshold are drawn), the line size, the font size, the class labels, etc.
  public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, float scoreThreshold);
  public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, float scoreThreshold, int lineSize, float fontSize);
  public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, String[] labels);
  public static boolean visDetection(Bitmap ARGB8888Bitmap, DetectionResult result, String[] labels, float scoreThreshold, int lineSize, float fontSize);
  // visClassification: can set the score threshold (only results above the threshold are drawn), the font size, the class labels, etc.
  public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, float scoreThreshold,float fontSize);
  public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, String[] labels);
  public static boolean visClassification(Bitmap ARGB8888Bitmap, ClassifyResult result, String[] labels, float scoreThreshold,float fontSize);
  // visSegmentation: weight is the background weight
  public static boolean visSegmentation(Bitmap ARGB8888Bitmap, SegmentationResult result, float weight);
  // visFaceDetection: line size, font size, etc.
  public static boolean visFaceDetection(Bitmap ARGB8888Bitmap, FaceDetectionResult result, int lineSize, float fontSize);
}
```
The corresponding visualization type is:
```java
import com.baidu.paddle.fastdeploy.vision.Visualize;
```
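For instance, a detection result can be predicted without rendering and then drawn explicitly; `model` is assumed to be an initialized PicoDet and `imageView` is a hypothetical UI element:
```java
import android.graphics.Bitmap;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.Visualize;

DetectionResult result = model.predict(ARGB8888ImageBitmap); // direct prediction, no rendering
Visualize.visDetection(ARGB8888ImageBitmap, result, 0.5f);   // draw boxes with score > 0.5 onto the Bitmap
imageView.setImageBitmap(ARGB8888ImageBitmap);               // display it in a hypothetical ImageView
```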
## Model Usage Examples
<div id="Demo"></div>
- Example 1: use the constructor and the default RuntimeOption
```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.opengl.GLES20;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
// Initialize the model
PicoDet model = new PicoDet("picodet_s_320_coco_lcnet/model.pdmodel",
                            "picodet_s_320_coco_lcnet/model.pdiparams",
                            "picodet_s_320_coco_lcnet/infer_cfg.yml");
// Run model inference
DetectionResult result = model.predict(ARGB8888ImageBitmap);
// Release model resources
model.release();
```
- Example 2: manually call init at an appropriate point in your program and customize the RuntimeOption
```java
// Imports as above ...
import com.baidu.paddle.fastdeploy.RuntimeOption;
import com.baidu.paddle.fastdeploy.LitePowerMode;
import com.baidu.paddle.fastdeploy.vision.DetectionResult;
import com.baidu.paddle.fastdeploy.vision.detection.PicoDet;
// Create an empty model
PicoDet model = new PicoDet();
// Model paths
String modelFile = "picodet_s_320_coco_lcnet/model.pdmodel";
String paramFile = "picodet_s_320_coco_lcnet/model.pdiparams";
String configFile = "picodet_s_320_coco_lcnet/infer_cfg.yml";
// Specify the RuntimeOption
RuntimeOption option = new RuntimeOption();
option.setCpuThreadNum(2);
option.setLitePowerMode(LitePowerMode.LITE_POWER_HIGH);
option.enableLiteFp16();
// Initialize with the init function
model.init(modelFile, paramFile, configFile, option);
// Bitmap reading, model prediction and resource release are the same as above ...
```
## How to Use the App Sample Projects
<div id="App"></div>
FastDeploy provides several sample projects in the java/android/app directory. The following describes how to use them. Since the java/android directory also contains a JNI project, users who want to use the sample projects need to configure the NDK as well. If you only care about the usage of the Java API and do not want to configure the NDK, you can jump directly to the detailed examples linked below.
- [App sample project of image classification](../../examples/vision/classification/paddleclas/android)
- [App sample project of object detection](../../examples/vision/detection/paddledetection/android)
- [App sample project of OCR text recognition](../../examples/vision/ocr/PP-OCRv2/android)
- [App sample project of portrait segmentation](../../examples/vision/segmentation/paddleseg/android)
- [App sample project of face detection](../../examples/vision/facedet/scrfd/android)
### Environment Preparation
1. Install Android Studio in your local environment; see the [Android Studio official website](https://developer.android.com/studio) for detailed installation instructions.
2. Prepare an Android phone and turn on USB debugging mode. How to turn it on: `Phone Settings -> Find Developer Options -> Turn on Developer Options and USB Debug Mode`.
**Notes**: If your Android Studio has not been configured with an NDK yet, please configure it in advance according to [Install and configure the NDK and CMake](https://developer.android.com/studio/projects/install-ndk) in the Android Studio User Guide. You can choose either the latest NDK version or the same NDK version as the FastDeploy Android prediction library.
### Deployment Steps
1. The App sample project is located in the directory `fastdeploy/java/android/app`.
2. Open the `fastdeploy/java/android` project in Android Studio; note that the directory is `java/android`.
3. Connect your phone to your computer, turn on USB debugging and file transfer mode, and connect your own mobile device in Android Studio (your phone needs to allow software installation from USB).
<p align="center">
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/31974251/203257262-71b908ab-bb2b-47d3-9efb-67631687b774.png">
</p>
> **Notes:**
>> If you get an NDK configuration error while importing, compiling or running the project, please open `File > Project Structure > SDK Location` and change `Android NDK location` to the NDK path configured on your machine. The default NDK version used by this project is 20.
>> If you downloaded the NDK through the SDK Tools in Android Studio (see "Environment Preparation" in this section), you can simply select the default path from the drop-down box.
>> Alternatively, you can configure the NDK path manually in the file `java/android/local.properties`.
>> If the above steps still do not resolve the NDK configuration error, please try to update the Android Gradle plugin version according to the section [Update the Android Gradle plugin](https://developer.android.com/studio/releases/gradle-plugin?hl=zh-cn#updating-plugin) in the official Android Studio documentation.
4. Click the Run button to automatically compile the APP and install it on your phone. (This process automatically downloads the pre-compiled FastDeploy Android library and the model files; an internet connection is required.)
The success screens are shown below. Figure 1: the APP installed on the phone; Figure 2: the APP after opening, which automatically recognizes and marks the objects in the picture; Figure 3: the APP setting options; click Settings in the upper right corner to try different options.
| APP icon | APP effect | APP setting options |
| --- | --- | --- |
| ![app_pic](https://user-images.githubusercontent.com/31974251/203268599-c94018d8-3683-490a-a5c7-a8136a4fa284.jpg) | ![app_res](https://user-images.githubusercontent.com/31974251/197169609-bb214af3-d6e7-4433-bb96-1225cddd441c.jpg) | ![app_setup](https://user-images.githubusercontent.com/31974251/197332983-afbfa6d5-4a3b-4c54-a528-4a3e58441be1.jpg) |
### Switching Between Different Scenarios
To try Apps for different scenarios, the App sample project only needs to switch between different Activities in AndroidManifest.xml.
<p align="center">
<img width="788" alt="image" src="https://user-images.githubusercontent.com/31974251/203258255-b422d3e2-6004-465f-86b6-9fa61a27c6c2.png">
</p>
- Image classification scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".classification.ClassificationMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".classification.ClassificationSettingsActivity">
    </activity>
    </application>
</manifest>
```
- Object detection scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".detection.DetectionMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".detection.DetectionSettingsActivity">
    </activity>
    </application>
</manifest>
```
- OCR text recognition scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".ocr.OcrMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".ocr.OcrSettingsActivity">
    </activity>
    </application>
</manifest>
```
- Portrait segmentation scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".segmentation.SegmentationMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".segmentation.SegmentationSettingsActivity">
    </activity>
    </application>
</manifest>
```
- Face detection scenario
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.baidu.paddle.fastdeploy.app.examples">
    <!-- ... -->
    <activity android:name=".facedet.FaceDetMainActivity">
    <!-- -->
    </activity>
    <activity
        android:name=".facedet.FaceDetSettingsActivity">
    </activity>
    </application>
</manifest>
```

View File

@@ -1 +0,0 @@
- TODO

View File

@@ -1 +0,0 @@
README_CN.md

55
serving/README.md Normal file
View File

@@ -0,0 +1,55 @@
[简体中文](README_CN.md) | English
# FastDeploy Serving Deployment
## Introduction
FastDeploy builds an end-to-end serving deployment based on [Triton Inference Server](https://github.com/triton-inference-server/server). The underlying backend uses the FastDeploy high-performance Runtime module and integrates the FastDeploy pre- and post-processing modules to achieve end-to-end serving deployment, delivering fast deployment with an easy-to-use workflow and excellent performance.
## Prepare the environment
### Environment requirements
- Linux
- If using a GPU image, NVIDIA Driver >= 470 is required (for older Tesla architecture GPUs, such as T4, the NVIDIA Driver can be 418.40+, 440.33+, 450.51+, 460.27+)
### Obtain Image
#### CPU Image
CPU images only support Paddle/ONNX models for serving deployment on CPUs; the supported inference backends include OpenVINO, Paddle Inference, and ONNX Runtime.
```shell
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.1-cpu-only-21.10
```
#### GPU Image
GPU images support Paddle/ONNX models for serving deployment on GPU and CPU; the supported inference backends include OpenVINO, TensorRT, Paddle Inference, and ONNX Runtime.
```shell
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.1-gpu-cuda11.4-trt8.4-21.10
```
Users can also build the image themselves according to their own needs by referring to the following document:
- [FastDeploy Serving Deployment Image Compilation](docs/zh_CN/compile.md)
## Other Tutorials
- [How to Prepare Serving Model Repository](docs/zh_CN/model_repository.md)
- [Serving Deployment Configuration for Runtime](docs/zh_CN/model_configuration.md)
- [Demo of Serving Deployment](docs/zh_CN/demo.md)
### Serving Deployment Demo
| Task | Model |
|---|---|
| Classification | [PaddleClas](../examples/vision/classification/paddleclas/serving/README.md) |
| Detection | [PaddleDetection](../examples/vision/detection/paddledetection/serving/README.md) |
| Detection | [ultralytics/YOLOv5](../examples/vision/detection/yolov5/serving/README.md) |
| NLP | [PaddleNLP/ERNIE-3.0](../examples/text/ernie-3.0/serving/README.md)|
| NLP | [PaddleNLP/UIE](../examples/text/uie/serving/README.md)|
| Speech | [PaddleSpeech/PP-TTS](../examples/audio/pp-tts/serving/README.md)|
| OCR | [PaddleOCR/PP-OCRv3](../examples/vision/ocr/PP-OCRv3/serving/README.md)|

View File

@@ -1,4 +1,4 @@
Simplified Chinese | [English](README_EN.md)
Simplified Chinese | [English](README.md)
# FastDeploy Serving Deployment


View File

@@ -1 +1,480 @@
English | [中文](../zh_CN/client.md)
# Client Access Instruction
Let us take a YOLOv5 model deployed with fastdeployserver as an example and describe how a client requests inference services from the server. For how to deploy a YOLOv5 model with fastdeployserver, please refer to [YOLOv5 serving deployment](../../../examples/vision/detection/yolov5/serving).
## Fundamental Introduction
fastdeployserver implements the [Predict Protocol](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.md) proposed by [kserve](https://github.com/kserve/kserve), an API designed for machine-learning model inference services. It is easy to use while supporting high-performance deployment scenarios. Currently the API provides access over both HTTP and GRPC.
When fastdeployserver starts, it uses port 8000 to respond to HTTP requests and port 8001 to respond to GRPC requests by default. There are usually two types of resources that users request.
### **Model Metadata**
**HTTP**
Access method: GET `v2/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]`
Send a GET request to this URL path to obtain the metadata of a model in service, where `${MODEL_NAME}` is the name of the model and `${MODEL_VERSION}` is the version of the model. The server returns the metadata in json format as a dictionary. With `$metadata_model_response` denoting the returned object, the content is as follows:
```json
$metadata_model_response =
{
"name" : $string,
"versions" : [ $string, ... ] #optional,
"platform" : $string,
"inputs" : [ $metadata_tensor, ... ],
"outputs" : [ $metadata_tensor, ... ]
}
$metadata_tensor =
{
"name" : $string,
"datatype" : $string,
"shape" : [ $number, ... ]
}
```
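Any HTTP client can read this resource. As a sketch outside of Python, the following uses Java's built-in HttpClient (Java 11+); the server address localhost:8000 and the model name yolov5 are assumptions matching the examples later in this document.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetadataExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/v2/models/yolov5/versions/1"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // $metadata_model_response as json
    }
}
```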
**GRPC**
The GRPC service of the model server is defined as:
```text
service GRPCInferenceService
{
// Check liveness of the inference server.
rpc ServerLive(ServerLiveRequest) returns (ServerLiveResponse) {}
// Check readiness of the inference server.
rpc ServerReady(ServerReadyRequest) returns (ServerReadyResponse) {}
// Check readiness of a model in the inference server.
rpc ModelReady(ModelReadyRequest) returns (ModelReadyResponse) {}
// Get server metadata.
rpc ServerMetadata(ServerMetadataRequest) returns (ServerMetadataResponse) {}
// Get model metadata.
rpc ModelMetadata(ModelMetadataRequest) returns (ModelMetadataResponse) {}
// Perform inference using a specific model.
rpc ModelInfer(ModelInferRequest) returns (ModelInferResponse) {}
}
```
Access method: call the ModelMetadata method defined in the model service GRPC interface using a GRPC client.
The structures of the ModelMetadataRequest message in the request and of the returned ModelMetadataResponse message are as follows; they are basically the same as the json structures above for HTTP.
```text
message ModelMetadataRequest
{
// The name of the model.
string name = 1;
// The version of the model to check for readiness. If not given the
// server will choose a version based on the model and internal policy.
string version = 2;
}
message ModelMetadataResponse
{
// Metadata for a tensor.
message TensorMetadata
{
// The tensor name.
string name = 1;
// The tensor data type.
string datatype = 2;
// The tensor shape. A variable-size dimension is represented
// by a -1 value.
repeated int64 shape = 3;
}
// The model name.
string name = 1;
// The versions of the model available on the server.
repeated string versions = 2;
// The model's platform. See Platforms.
string platform = 3;
// The model's inputs.
repeated TensorMetadata inputs = 4;
// The model's outputs.
repeated TensorMetadata outputs = 5;
}
```
### **Inference Service**
**HTTP**
Access method: POST `v2/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]/infer`
Send a POST request to this URL path to request the inference service of the model and obtain the inference result. The data in the POST request is also uploaded in json format. With `$inference_request` denoting the uploaded object, the content is as follows:
```json
$inference_request =
{
"id" : $string #optional,
"parameters" : $parameters #optional,
"inputs" : [ $request_input, ... ],
"outputs" : [ $request_output, ... ] #optional
}
$request_input =
{
"name" : $string,
"shape" : [ $number, ... ],
"datatype" : $string,
"parameters" : $parameters #optional,
"data" : $tensor_data
}
$request_output =
{
"name" : $string,
"parameters" : $parameters #optional,
}
$parameters =
{
$parameter, ...
}
$parameter = $string : $string | $number | $boolean
```
where `$tensor_data` represents a one-dimensional or multi-dimensional array; in the one-dimensional case, the elements must be arranged in row-major order.
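Row-major order means the last dimension varies fastest when the tensor is flattened. A tiny sketch of the expected layout:
```java
// A 2x3 tensor and its row-major flattening, as expected in $tensor_data.
int[][] tensor = {{1, 2, 3}, {4, 5, 6}};
int[] flat = new int[2 * 3];
for (int i = 0; i < 2; i++) {
    for (int j = 0; j < 3; j++) {
        flat[i * 3 + j] = tensor[i][j]; // flat == {1, 2, 3, 4, 5, 6}
    }
}
```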
After the server completes inference, it returns the result. With `$inference_response` denoting the returned object, the content is as follows:
```json
$inference_response =
{
"model_name" : $string,
"model_version" : $string #optional,
"id" : $string,
"parameters" : $parameters #optional,
"outputs" : [ $response_output, ... ]
}
$response_output =
{
"name" : $string,
"shape" : [ $number, ... ],
"datatype" : $string,
"parameters" : $parameters #optional,
"data" : $tensor_data
}
```
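Putting the two schemas together, the sketch below sends a `$inference_request` with Java's built-in HttpClient (Java 11+). The tiny 1x2x2x3 tensor stands in for a real image, and the server address, model name and output name are assumptions matching the YOLOv5 example used later in this document.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InferExample {
    public static void main(String[] args) throws Exception {
        // A toy 1x2x2x3 UINT8 tensor in row-major order; a real request would send image pixels.
        String payload = "{"
                + "\"inputs\":[{\"name\":\"INPUT\",\"shape\":[1,2,2,3],"
                + "\"datatype\":\"UINT8\",\"data\":[0,1,2,3,4,5,6,7,8,9,10,11]}],"
                + "\"outputs\":[{\"name\":\"detction_result\"}]}";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/v2/models/yolov5/versions/1/infer"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // $inference_response as json
    }
}
```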
**GRPC**
Access method: call the ModelInfer method defined in the model service GRPC interface using a GRPC client.
The structures of the ModelInferRequest message in the request and of the returned ModelInferResponse message are as follows; for the full definition, please refer to [the GRPC part](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.md#grpc) of the kserve Predict Protocol.
```text
message ModelInferRequest
{
// An input tensor for an inference request.
message InferInputTensor
{
// The tensor name.
string name = 1;
// The tensor data type.
string datatype = 2;
// The tensor shape.
repeated int64 shape = 3;
// Optional inference input tensor parameters.
map<string, InferParameter> parameters = 4;
// The tensor contents using a data-type format. This field must
// not be specified if "raw" tensor contents are being used for
// the inference request.
InferTensorContents contents = 5;
}
// An output tensor requested for an inference request.
message InferRequestedOutputTensor
{
// The tensor name.
string name = 1;
// Optional requested output tensor parameters.
map<string, InferParameter> parameters = 2;
}
// The name of the model to use for inferencing.
string model_name = 1;
// The version of the model to use for inference. If not given the
// server will choose a version based on the model and internal policy.
string model_version = 2;
// Optional identifier for the request. If specified will be
// returned in the response.
string id = 3;
// Optional inference parameters.
map<string, InferParameter> parameters = 4;
// The input tensors for the inference.
repeated InferInputTensor inputs = 5;
// The requested output tensors for the inference. Optional, if not
// specified all outputs produced by the model will be returned.
repeated InferRequestedOutputTensor outputs = 6;
// The data contained in an input tensor can be represented in "raw"
// bytes form or in the repeated type that matches the tensor's data
// type. To use the raw representation 'raw_input_contents' must be
// initialized with data for each tensor in the same order as
// 'inputs'. For each tensor, the size of this content must match
// what is expected by the tensor's shape and data type. The raw
// data must be the flattened, one-dimensional, row-major order of
// the tensor elements without any stride or padding between the
// elements. Note that the FP16 and BF16 data types must be represented as
// raw content as there is no specific data type for a 16-bit float type.
//
// If this field is specified then InferInputTensor::contents must
// not be specified for any input tensor.
repeated bytes raw_input_contents = 7;
}
message ModelInferResponse
{
// An output tensor returned for an inference request.
message InferOutputTensor
{
// The tensor name.
string name = 1;
// The tensor data type.
string datatype = 2;
// The tensor shape.
repeated int64 shape = 3;
// Optional output tensor parameters.
map<string, InferParameter> parameters = 4;
// The tensor contents using a data-type format. This field must
// not be specified if "raw" tensor contents are being used for
// the inference response.
InferTensorContents contents = 5;
}
// The name of the model used for inference.
string model_name = 1;
// The version of the model used for inference.
string model_version = 2;
// The id of the inference request if one was specified.
string id = 3;
// Optional inference response parameters.
map<string, InferParameter> parameters = 4;
// The output tensors holding inference results.
repeated InferOutputTensor outputs = 5;
// The data contained in an output tensor can be represented in
// "raw" bytes form or in the repeated type that matches the
// tensor's data type. To use the raw representation 'raw_output_contents'
// must be initialized with data for each tensor in the same order as
// 'outputs'. For each tensor, the size of this content must match
// what is expected by the tensor's shape and data type. The raw
// data must be the flattened, one-dimensional, row-major order of
// the tensor elements without any stride or padding between the
// elements. Note that the FP16 and BF16 data types must be represented as
// raw content as there is no specific data type for a 16-bit float type.
//
// If this field is specified then InferOutputTensor::contents must
// not be specified for any output tensor.
repeated bytes raw_output_contents = 6;
}
```
## Client Tools
Once you know the interfaces provided by fastdeployserver, you can use an HTTP client tool to request the HTTP server, or a GRPC client tool to request the GRPC server. By default, fastdeployserver uses port 8000 to respond to HTTP requests and port 8001 to respond to GRPC requests.
### Using an HTTP client
Here is how to use tritonclient and the requests library to access the fastdeployserver HTTP service. The first tool is a client made specifically for model services, which encapsulates the request and response; the second is a general-purpose HTTP client, and accessing the service with it can help you better understand the data structures described above.
1. Using tritonclient to access the service
Install tritonclient\[http\]:
```bash
pip install tritonclient[http]
```
- Get the metadata of the YOLOv5 model
```python
import tritonclient.http as httpclient # Importing httpclient.
server_addr = 'localhost:8000' # Please change this to the real address of the fastdeployserver server.
client = httpclient.InferenceServerClient(server_addr) # Create clients.
model_metadata = client.get_model_metadata(
model_name='yolov5', model_version='1') # Request metadata in YOLOv5 model.
```
You can print the model's inputs and outputs.
```python
print(model_metadata.inputs)
```
```text
[{'name': 'INPUT', 'datatype': 'UINT8', 'shape': [-1, -1, -1, 3]}]
```
```python
print(model_metadata.outputs)
```
```text
[{'name': 'detction_result', 'datatype': 'BYTES', 'shape': [-1, -1]}]
```
- Request the inference service
You can create data according to the inputs and outputs of the model, and then request inference.
```python
# Assume that the file name of image data is 000000014439.jpg.
import cv2
image = cv2.imread('000000014439.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)[None]
inputs = []
infer_input = httpclient.InferInput('INPUT', image.shape, 'UINT8') # Create inputs.
infer_input.set_data_from_numpy(image) # Load input data.
inputs.append(infer_input)
outputs = []
infer_output = httpclient.InferRequestedOutput('detction_result') # Create outputs.
outputs.append(infer_output)
response = client.infer(
'yolov5', inputs, model_version='1', outputs=outputs) # Request inference.
response_outputs = response.as_numpy('detction_result') # Get results based on output variable name.
```
2. Using requests to access the service
Install requests:
```bash
pip install requests
```
- Get the metadata of the YOLOv5 model
```python
import requests
url = 'http://localhost:8000/v2/models/yolov5/versions/1' # Construct the url based on "Model Metadata" in the above section.
response = requests.get(url)
response = response.json() # Return data as json, and parse in json format.
```
Print the metadata returned.
```python
print(response)
```
```text
{'name': 'yolov5', 'versions': ['1'], 'platform': 'ensemble', 'inputs': [{'name': 'INPUT', 'datatype': 'UINT8', 'shape': [-1, -1, -1, 3]}], 'outputs': [{'name': 'detction_result', 'datatype': 'BYTES', 'shape': [-1, -1]}]}
```
- Request the inference service
You can create data according to the inputs and outputs of the model, and then request inference.
```python
url = 'http://localhost:8000/v2/models/yolov5/versions/1/infer' # Construct the url based on "Inference Service" in the above section.
# Assume that the file name of image data is 000000014439.jpg.
import json
import cv2
image = cv2.imread('000000014439.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)[None]
payload = {
"inputs" : [
{
"name" : "INPUT",
"shape" : image.shape,
"datatype" : "UINT8",
"data" : image.tolist()
}
],
"outputs" : [
{
"name" : "detction_result"
}
]
}
response = requests.post(url, data=json.dumps(payload))
response = response.json() # Return data as json, parse in json format, and you get your inference result.
```
### Using a GRPC client
Install tritonclient\[grpc\]:
```bash
pip install tritonclient[grpc]
```
Tritonclient\[grpc\] provides a GRPC-based client that encapsulates the GRPC interaction, so you do not need to establish a connection with the server manually or call the server interface directly through the grpc stub; instead, you use the same interface as the tritonclient HTTP client.
- Get the metadata of the YOLOv5 model
```python
import tritonclient.grpc as grpcclient # Import grpc client.
server_addr = 'localhost:8001' # Please change this to the real address of the fastdeployserver grpc server.
client = grpcclient.InferenceServerClient(server_addr) # Create clients
model_metadata = client.get_model_metadata(
model_name='yolov5', model_version='1') # Request metadata in YOLOv5 model.
```
- Request the inference service
Create request data according to the returned model_metadata. Let us first print the inputs and outputs.
```python
print(model_metadata.inputs)
```
```text
[name: "INPUT"
datatype: "UINT8"
shape: -1
shape: -1
shape: -1
shape: 3
]
```
```python
print(model_metadata.outputs)
```
```text
[name: "detction_result"
datatype: "BYTES"
shape: -1
shape: -1
]
```
Create data according to inputs and outputs, and then request inference.
```python
# Assume that the file name of image data is 000000014439.jpg.
import cv2
image = cv2.imread('000000014439.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)[None]
inputs = []
infer_input = grpcclient.InferInput('INPUT', image.shape, 'UINT8') # Create inputs.
infer_input.set_data_from_numpy(image) # Load input data.
inputs.append(infer_input)
outputs = []
infer_output = grpcclient.InferRequestedOutput('detction_result') # Create outputs.
outputs.append(infer_output)
response = client.infer(
'yolov5', inputs, model_version='1', outputs=outputs) # Request inference
response_outputs = response.as_numpy('detction_result') # Get results based on output variable name.
```

View File

@@ -1,7 +1,7 @@
English | [中文](../zh_CN/compile.md)
# FastDeploy Serving Deployment Image Compilation
How to create a FastDeploy image
This article describes how to create a FastDeploy image.
## GPU Image

View File

@@ -1 +1,206 @@
English | [中文](../zh_CN/model_configuration.md)
# Model Configuration
Each model in the model repository must contain a configuration that provides required and optional information about the model. The configuration is generally written in the file *config.pbtxt* in [ModelConfig protobuf](https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto) format.
## Minimum Model General Configuration
Please see the official documentation for the detailed general configuration: [model_configuration](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md). The minimum model configuration of Triton must include the attribute *platform* or *backend*, the attribute *max_batch_size*, and the input and output of the model.
For example, the minimum configuration of a Paddle model should be as follows (with two inputs *input0* and *input1* and one output *output0*, where both inputs and outputs are float32 tensors and the maximum batch size is 8):
```
backend: "fastdeploy"
max_batch_size: 8
input [
{
name: "input0"
data_type: TYPE_FP32
dims: [ 16 ]
},
{
name: "input1"
data_type: TYPE_FP32
dims: [ 16 ]
}
]
output [
{
name: "output0"
data_type: TYPE_FP32
dims: [ 16 ]
}
]
```
## Configuring CPU, GPU and the Number of Instances
The attribute *instance_group* allows you to configure the hardware resources and the number of model inference instances.
Here's an example of CPU deployment:
```
instance_group [
{
# Create two CPU instances
count: 2
# Use CPU for deployment
kind: KIND_CPU
}
]
```
Another example that deploys two instances on *GPU 0*, and one instance each on *GPU 1* and *GPU 2*:
```
instance_group [
{
# Create two GPU instances
count: 2
# Use GPU for inference
kind: KIND_GPU
# Deploy on GPU 0
gpus: [ 0 ]
},
{
count: 1
kind: KIND_GPU
# Deploy on GPU 1,2
gpus: [ 1, 2 ]
}
]
```
### Name, Platform and Backend
The attribute *name* is optional. If the model name is not specified in the configuration, it defaults to the directory name of the model; when the name is specified, it must match the directory name.
Use the *fastdeploy* backend: do not configure the attribute *platform*; instead, configure the attribute *backend* to *fastdeploy*.
```
backend: "fastdeploy"
```
### FastDeploy Backend Configuration
Currently the FastDeploy backend supports inference on *cpu* and *gpu*, with the *paddle*, *onnxruntime* and *openvino* inference engines supported on *cpu*, and the *paddle*, *onnxruntime* and *tensorrt* engines supported on *gpu*.
#### Paddle Engine Configuration
In addition to configuring *Instance Groups* to decide whether the model runs on CPU or GPU, the Paddle engine can be configured as follows. You can see more specific examples in [a PP-OCRv3 example of Runtime configuration](../../../examples/vision/ocr/PP-OCRv3/serving/models/cls_runtime/config.pbtxt).
```
optimization {
execution_accelerators {
# CPU inference configuration, used with KIND_CPU.
cpu_execution_accelerator : [
{
name : "paddle"
# Set parallel inference computing threads number to 4.
parameters { key: "cpu_threads" value: "4" }
# Set mkldnn acceleration on, or off when set to 0.
parameters { key: "use_mkldnn" value: "1" }
}
],
# GPU inference configuration, used with KIND_GPU.
gpu_execution_accelerator : [
{
name : "paddle"
# Set parallel inference computing threads number to 4.
parameters { key: "cpu_threads" value: "4" }
# Set mkldnn acceleration on, or off when set to 0.
parameters { key: "use_mkldnn" value: "1" }
}
]
}
}
```
#### ONNXRuntime Engine Configuration
In addition to configuring *Instance Groups* to decide whether the model runs on CPU or GPU, the ONNXRuntime engine can be configured as follows. You can see more specific examples in [a YOLOv5 example of Runtime configuration](../../../examples/vision/detection/yolov5/serving/models/runtime/config.pbtxt).
```
optimization {
execution_accelerators {
cpu_execution_accelerator : [
{
name : "onnxruntime"
# Set parallel inference computing threads number to 4.
parameters { key: "cpu_threads" value: "4" }
}
],
gpu_execution_accelerator : [
{
name : "onnxruntime"
}
]
}
}
```
#### OpenVINO Engine Configuration
The OpenVINO engine only supports inference on CPU, which can be configured as follows:
```
optimization {
execution_accelerators {
cpu_execution_accelerator : [
{
name : "openvino"
# Set parallel inference computing threads number to 4 (total number of threads for all instances).
parameters { key: "cpu_threads" value: "4" }
# Set num_streams in OpenVINO (usually the same as instances number)
parameters { key: "num_streams" value: "1" }
}
]
}
}
```
#### TensorRT Engine Configuration
The TensorRT engine only supports inference on GPU, which can be configured as follows:
```
optimization {
execution_accelerators {
gpu_execution_accelerator : [
{
name : "tensorrt"
# Use FP16 inference in TensorRT. You can also choose: trt_fp32, trt_int8
parameters { key: "precision" value: "trt_fp16" }
}
]
}
}
```
You can configure TensorRT dynamic shapes in the following format; for a concrete example, refer to [a PaddleClas example of Runtime configuration](../../../examples/vision/classification/paddleclas/serving/models/runtime/config.pbtxt):
```
optimization {
execution_accelerators {
gpu_execution_accelerator : [ {
# use TRT engine
name: "tensorrt",
# use fp16 on TRT engine
parameters { key: "precision" value: "trt_fp16" }
},
{
# Configure the minimum shape of dynamic shape
name: "min_shape"
# All input name and minimum shape
parameters { key: "input1" value: "1 3 224 224" }
parameters { key: "input2" value: "1 10" }
},
{
# Configure the optimal shape of dynamic shape
name: "opt_shape"
# All input name and optimal shape
parameters { key: "input1" value: "2 3 224 224" }
parameters { key: "input2" value: "2 20" }
},
{
# Configure the maximum shape of dynamic shape
name: "max_shape"
# All input name and maximum shape
parameters { key: "input1" value: "8 3 224 224" }
parameters { key: "input2" value: "8 30" }
}
]
  }
}
```

View File

@@ -14,7 +14,7 @@ fastdeployserver implements the protocol proposed by [kserve](https://github.com/kserve/kserve)
Access method: GET `v2/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]`
Send a GET request to this URL path to obtain the metadata of a model in service, where `${MODEL_NAME}` is the name of the model and ${MODEL_VERSION} is its version. The server returns the metadata in json format as a dictionary; with $metadata_model_response denoting the returned object, the fields and content are as follows:
Send a GET request to this URL path to obtain the metadata of a model in service, where `${MODEL_NAME}` is the name of the model and `${MODEL_VERSION}` is its version. The server returns the metadata in json format as a dictionary; with `$metadata_model_response` denoting the returned object, the fields and content are as follows:
```json
$metadata_model_response =