Merge branch 'develop' into develop
@@ -56,13 +56,13 @@

## 🌠 Recent Updates

- ✨✨✨ **2023.01.17** Released [**YOLOv8**](./examples/vision/detection/paddledetection/) deployment support for the FastDeploy hardware lineup, covering [**Paddle YOLOv8**](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8) and the [**community ultralytics YOLOv8**](https://github.com/ultralytics/ultralytics)
  - Hardware on which [**Paddle YOLOv8**](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8) can be deployed: [**Intel CPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**NVIDIA GPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Jetson**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Phytium**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Kunlunxin**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Ascend**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**ARM CPU**](./examples/vision/detection/paddledetection/cpp/infer_yolov8.cc), [**RK3588**](./examples/vision/detection/paddledetection/rknpu2/yolov8.md), with both **Python** and **C++** deployments; **Sophgo TPU** support is being updated
  - Hardware on which [**Paddle YOLOv8**](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8) can be deployed: [**Intel CPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**NVIDIA GPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Jetson**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Phytium**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Kunlunxin**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Ascend**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**ARM CPU**](./examples/vision/detection/paddledetection/cpp/infer_yolov8.cc), [**RK3588**](./examples/vision/detection/paddledetection/rknpu2/yolov8.md) and [**Sophgo TPU**](./examples/vision/detection/paddledetection/sophgo); some hardware includes both **Python** and **C++** deployments
  - Hardware on which the [**community ultralytics YOLOv8**](https://github.com/ultralytics/ultralytics) can be deployed: [**Intel CPU**](./examples/vision/detection/yolov8), [**NVIDIA GPU**](./examples/vision/detection/yolov8), [**Jetson**](./examples/vision/detection/yolov8), each with both **Python** and **C++** deployments
  - With FastDeploy's one-line model API switch, you can compare the performance of **YOLOv8**, **PP-YOLOE+**, **YOLOv5**, and other models
  - With FastDeploy's one-line model API switch, you can compare the performance of **YOLOv8**, **PP-YOLOE+**, **YOLOv5**, and other models (a sketch follows below).
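The switch in practice is just changing which model class you construct. A minimal C++ sketch in the style of the FastDeploy detection examples linked above (the model, config, and image paths are placeholders, not files shipped with this commit):

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();  // or option.UseCpu()

  // Switching the detector under test is a one-line change of the model class:
  auto model = fastdeploy::vision::detection::PPYOLOE(
      "model.pdmodel", "model.pdiparams", "infer_cfg.yml", option);
  // auto model = fastdeploy::vision::detection::YOLOv8("yolov8s.onnx", "", option);

  auto im = cv::imread("test.jpg");  // placeholder test image
  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;  // boxes, scores, and label ids
  return 0;
}
```

Because every detection model returns the same `DetectionResult`, the comparison code around `Predict` stays unchanged when the model class is swapped.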

- **✨👥✨ Community**

  - **Slack**: Join our [Slack community](https://join.slack.com/t/fastdeployworkspace/shared_invite/zt-1m88mytoi-mBdMYcnTF~9LCKSOKXd6Tg) and chat with other community members about ideas
  - **Slack**: Join our [Slack community](https://join.slack.com/t/fastdeployworkspace/shared_invite/zt-1o50e4voz-zbiIneCNRf_eH99eS2NVLg) and chat with other community members about ideas
  - **WeChat**: Scan the QR code and fill in the questionnaire to join the technical community, where you can discuss deployment pain points in real industry scenarios with community developers
README_EN.md (7 changes; Executable file → Normal file)
@@ -53,9 +53,10 @@ Including [image classification](examples/vision/classification), [object detect

## 🌠 Recent updates

- ✨✨✨ On **2023.01.17** we released [**YOLOv8**](./examples/vision/detection/paddledetection/) for deployment on FastDeploy series hardware, covering [**Paddle YOLOv8**](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8) and [**ultralytics YOLOv8**](https://github.com/ultralytics/ultralytics)
  - Deployable hardware for [**Paddle YOLOv8**](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8): [**Intel CPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**NVIDIA GPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Jetson**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Phytium**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Kunlunxin**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**HUAWEI Ascend**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**ARM CPU**](./examples/vision/detection/paddledetection/cpp/infer_yolov8.cc); both **Python** and **C++** deployments are included. [**Sophgo TPU**]() and [**RK3588**]() are being updated
  - Deployable hardware for [**ultralytics YOLOv8**](https://github.com/ultralytics/ultralytics): [**Intel CPU**](./examples/vision/detection/yolov8), [**NVIDIA GPU**](./examples/vision/detection/yolov8), [**Jetson**](./examples/vision/detection/yolov8); both **Python** and **C++** deployments are included
  - FastDeploy's one-line model API switch enables performance comparison across **YOLOv8**, **PP-YOLOE+**, **YOLOv5**, and other models
  - You can deploy [**Paddle YOLOv8**](https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.5/configs/yolov8) on [**Intel CPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**NVIDIA GPU**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Jetson**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Phytium**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**Kunlunxin**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**HUAWEI Ascend**](./examples/vision/detection/paddledetection/python/infer_yolov8.py), [**ARM CPU**](./examples/vision/detection/paddledetection/cpp/infer_yolov8.cc), [**RK3588**](./examples/vision/detection/paddledetection/rknpu2) and [**Sophgo TPU**](./examples/vision/detection/paddledetection/sophgo). Both **Python** and **C++** deployments are included.
  - You can deploy [**ultralytics YOLOv8**](https://github.com/ultralytics/ultralytics) on [**Intel CPU**](./examples/vision/detection/yolov8), [**NVIDIA GPU**](./examples/vision/detection/yolov8), [**Jetson**](./examples/vision/detection/yolov8). Both **Python** and **C++** deployments are included.
  - FastDeploy supports quick deployment of multiple models, including **YOLOv8**, **PP-YOLOE+**, **YOLOv5**, and others

- **✨👥✨ Community**
@@ -21,12 +21,12 @@ if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "aarch64")
  if (NOT BUILD_FASTDEPLOY_PYTHON)
    message(STATUS "Build FastDeploy Ascend C++ library on aarch64 platform.")
    if(NOT PADDLELITE_URL)
      set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux_arm64_huawei_ascend_npu_1121.tgz")
      set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux_arm64_huawei_ascend_npu_0118.tgz")
    endif()
  else ()
    message(STATUS "Build FastDeploy Ascend Python library on aarch64 platform.")
    if(NOT PADDLELITE_URL)
      set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux_arm64_huawei_ascend_npu_python_1207.tgz")
      set(PADDLELITE_URL "https://bj.bcebos.com/fastdeploy/third_libs/lite-linux_arm64_huawei_ascend_npu_python_0118.tgz")
    endif()
  endif()
endif()
endif()
endif()
@@ -101,38 +101,38 @@
|
||||
- [✴️ Python SDK クイックスタート](#fastdeploy-quick-start-python)
|
||||
- [✴️ C++ SDK クイックスタート](#fastdeploy-quick-start-cpp)
|
||||
- **インストールドキュメント**
|
||||
- [プリコンパイルされたライブラリのダウンロードとインストール](docs/cn/build_and_install/download_prebuilt_libraries.md)
|
||||
- [GPU デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/gpu.md)
|
||||
- [CPU デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/cpu.md)
|
||||
- [IPU デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/ipu.md)
|
||||
- [KunlunXin XPUデプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/kunlunxin.md)
|
||||
- [Rockchip RV1126 デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/rv1126.md)
|
||||
- [Rockchip RK3588 デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/rknpu2.md)
|
||||
- [Amlogic A311D デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/a311d.md)
|
||||
- [Huawei Ascend デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/huawei_ascend.md)
|
||||
- [Jetson デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/jetson.md)
|
||||
- [Android デプロイメント環境のコンパイルとインストール](docs/cn/build_and_install/android.md)
|
||||
- [プリコンパイルされたライブラリのダウンロードとインストール](../en/build_and_install/download_prebuilt_libraries.md)
|
||||
- [GPU デプロイメント環境のコンパイルとインストール](../en/build_and_install/gpu.md)
|
||||
- [CPU デプロイメント環境のコンパイルとインストール](../en/build_and_install/cpu.md)
|
||||
- [IPU デプロイメント環境のコンパイルとインストール](../en/build_and_install/ipu.md)
|
||||
- [KunlunXin XPUデプロイメント環境のコンパイルとインストール](../en/build_and_install/kunlunxin.md)
|
||||
- [Rockchip RV1126 デプロイメント環境のコンパイルとインストール](../en/build_and_install/rv1126.md)
|
||||
- [Rockchip RK3588 デプロイメント環境のコンパイルとインストール](../en/build_and_install/rknpu2.md)
|
||||
- [Amlogic A311D デプロイメント環境のコンパイルとインストール](../en/build_and_install/a311d.md)
|
||||
- [Huawei Ascend デプロイメント環境のコンパイルとインストール](../en/build_and_install/huawei_ascend.md)
|
||||
- [Jetson デプロイメント環境のコンパイルとインストール](../en/build_and_install/jetson.md)
|
||||
- [Android デプロイメント環境のコンパイルとインストール](../en/build_and_install/android.md)
|
||||
- **クイックユース**
|
||||
- [PP-YOLOE Python 展開例](docs/cn/quick_start/models/python.md)
|
||||
- [PP-YOLOE C++ 展開例](docs/cn/quick_start/models/cpp.md)
|
||||
- [PP-YOLOE Python 展開例](../en/quick_start/models/python.md)
|
||||
- [PP-YOLOE C++ 展開例](../en/quick_start/models/cpp.md)
|
||||
- **バックエンドの利用**
|
||||
- [Runtime Python 使用例](docs/cn/quick_start/runtime/python.md)
|
||||
- [Runtime C++ 使用例](docs/cn/quick_start/runtime/cpp.md)
|
||||
- [モデルデプロイメントのための推論バックエンドの設定方法](docs/cn/faq/how_to_change_backend.md)
|
||||
- [Runtime Python 使用例](../en/quick_start/runtime/python.md)
|
||||
- [Runtime C++ 使用例](../en/quick_start/runtime/cpp.md)
|
||||
- [モデルデプロイメントのための推論バックエンドの設定方法](../en/faq/how_to_change_backend.md)
|
||||
- **サービス・デプロイメント**
|
||||
- [サービス展開イメージのコンパイルとインストール](serving/docs/zh_CN/compile.md)
|
||||
- [サービス展開イメージのコンパイルとインストール](../../serving/docs/zh_CN/compile.md)
|
||||
- [サービス・デプロイメント](serving)
|
||||
- **API ドキュメンテーション**
|
||||
- [Python API ドキュメンテーション](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/python/html/)
|
||||
- [C++ API ドキュメンテーション](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/)
|
||||
- [Android Java API ドキュメンテーション](java/android)
|
||||
- [Android Java API ドキュメンテーション](../../java/android)
|
||||
- **パフォーマンスチューニング**
|
||||
- [量的加速](docs/cn/quantize.md)
|
||||
- [マルチスレッド・マルチプロセスの使用](/tutorials/multi_thread)
|
||||
- [マルチスレッド・マルチプロセスの使用](../../tutorials/multi_thread)
|
||||
- **よくある質問**
|
||||
- [1. Windows上C++ SDK の場合使用方法](docs/cn/faq/use_sdk_on_windows.md)
|
||||
- [2. FastDeploy C++ SDKをAndroidで使用する方法](docs/cn/faq/use_cpp_sdk_on_android.md)
|
||||
- [3. TensorRT 使い方のコツ](docs/cn/faq/tensorrt_tricks.md)
|
||||
- [1. Windows上C++ SDK の場合使用方法](../en/faq/use_sdk_on_windows.md)
|
||||
- [2. FastDeploy C++ SDKをAndroidで使用する方法](../en/faq/use_cpp_sdk_on_android.md)
|
||||
- [3. TensorRT 使い方のコツ](../en/faq/tensorrt_tricks.md)
|
||||
- **続きを読むFastDeployモジュールのデプロイメント**
|
||||
- [Benchmark テスト](benchmark)
|
||||
- **モデル対応表**
|
||||
@@ -140,7 +140,7 @@
|
||||
- [📳 モバイル・エンド側モデル対応表](#fastdeploy-edge-models)
|
||||
- [⚛️ アプレットモデル対応表](#fastdeploy-web-models)
|
||||
- **💕 開発者拠出金**
|
||||
- [新規モデルの追加](docs/cn/faq/develop_a_new_model.md)
|
||||
- [新規モデルの追加](../en/faq/develop_a_new_model.md)
|
||||
|
||||
|
||||
|
||||
|
@@ -17,7 +17,7 @@ English | [中文](../../cn/build_and_install/README.md)
- [Build and Install on A311D Platform](a311d.md)
- [Build and Install on KunlunXin XPU Platform](kunlunxin.md)
- [Build and Install on Huawei Ascend Platform](huawei_ascend.md)
- [Build and Install on SOPHGO Platform](sophgo.md)

## Build options
@@ -14,61 +14,22 @@ The following tests are at end-to-end speed, and the test environment is as follows:
* with single-core NPU

| Mission Scenario | Model             | Model Version (tested version) | ARM CPU/RKNN speed (ms) |
|------------------|-------------------|--------------------------------|-------------------------|
| Detection        | Picodet           | Picodet-s                      | 162/112                 |
| Detection        | RKYOLOV5          | YOLOV5-S-Relu (int8)           | -/57                    |
| Detection        | RKYOLOX           | -                              | -/-                     |
| Detection        | RKYOLOV7          | -                              | -/-                     |
| Segmentation     | Unet              | Unet-cityscapes                | -/-                     |
| Segmentation     | PP-HumanSegV2Lite | portrait                       | 133/43                  |
| Segmentation     | PP-HumanSegV2Lite | human                          | 133/43                  |
| Face Detection   | SCRFD             | SCRFD-2.5G-kps-640             | 108/42                  |

| Mission Scenario | Model | Model Version (tested version) | ARM CPU/RKNN speed (ms) |
|------------------|-------|--------------------------------|-------------------------|
| Detection        | [Picodet](../../../../examples/vision/detection/paddledetection/rknpu2/README.md) | Picodet-s | 162/112 |
| Detection        | [RKYOLOV5](../../../../examples/vision/detection/rkyolo/README.md) | YOLOV5-S-Relu (int8) | -/57 |
| Detection        | [RKYOLOX](../../../../examples/vision/detection/rkyolo/README.md) | - | -/- |
| Detection        | [RKYOLOV7](../../../../examples/vision/detection/rkyolo/README.md) | - | -/- |
| Segmentation     | [Unet](../../../../examples/vision/segmentation/paddleseg/rknpu2/README.md) | Unet-cityscapes | -/- |
| Segmentation     | [PP-HumanSegV2Lite](../../../../examples/vision/segmentation/paddleseg/rknpu2/README.md) | portrait (int8) | 133/43 |
| Segmentation     | [PP-HumanSegV2Lite](../../../../examples/vision/segmentation/paddleseg/rknpu2/README.md) | human (int8) | 133/43 |
| Face Detection   | [SCRFD](../../../../examples/vision/facedet/scrfd/rknpu2/README.md) | SCRFD-2.5G-kps-640 (int8) | 108/42 |
| Face Recognition | [InsightFace](../../../../examples/vision/faceid/insightface/rknpu2/README_CN.md) | ms1mv3_arcface_r18 (int8) | 81/12 |
| Classification   | [ResNet](../../../../examples/vision/classification/paddleclas/rknpu2/README.md) | ResNet50_vd | -/33 |
## Download Pre-trained Library

## How to use RKNPU2 Backend to Infer Models

For convenience, we provide the 1.0.2 release of FastDeploy here. We use the SCRFD model as an example to show how to run inference with the RKNPU2 backend. The modifications called out in the comments below are relative to the ONNX CPU deployment.
```c++
int infer_scrfd_npu() {
  char model_path[] = "./model/scrfd_2.5g_bnkps_shape640x640.rknn";
  char image_file[] = "./image/test_lite_face_detector_3.jpg";
  auto option = fastdeploy::RuntimeOption();
  // Modification 1: call option.UseRKNPU2()
  option.UseRKNPU2();

  // Modification 2: pass fastdeploy::ModelFormat::RKNN when loading the model
  auto *model = new fastdeploy::vision::facedet::SCRFD(model_path, "", option, fastdeploy::ModelFormat::RKNN);
  if (!model->Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return 0;
  }

  // Modification 3 (optional): RKNPU2 can run normalization on the NPU and expects NHWC input.
  // DisableNormalizeAndPermute skips the normalization and the HWC-to-CHW conversion during preprocessing.
  // For models on the supported list, call this method before Predict.
  model->DisableNormalizeAndPermute();
  auto im = cv::imread(image_file);
  auto im_bak = im.clone();
  fastdeploy::vision::FaceDetectionResult res;
  clock_t start = clock();
  if (!model->Predict(&im, &res, 0.8, 0.8)) {
    std::cerr << "Failed to predict." << std::endl;
    return 0;
  }
  clock_t end = clock();
  double dur = (double)(end - start);
  printf("infer_scrfd_npu use time: %f\n", (dur / CLOCKS_PER_SEC));
  auto vis_im = fastdeploy::vision::Visualize::VisFaceDetection(im_bak, res);
  cv::imwrite("scrfd_rknn_vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./scrfd_rknn_vis_result.jpg" << std::endl;
  return 0;
}
```
## Other Related Documents
- [How to Build RKNPU2 Deployment Environment](../../build_and_install/rknpu2.md)
- [RKNN-Toolkit2 Installation Document](./install_rknn_toolkit2.md)
- [How to convert ONNX to RKNN](./export.md)
- [FastDeploy RK356X C++ SDK](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-rk356X-1.0.2.tgz)
- [FastDeploy RK3588 C++ SDK](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-rk3588-1.0.2.tgz)
@@ -15,7 +15,6 @@
#include <vector>
#include "fastdeploy/fastdeploy_model.h"
#include "fastdeploy/runtime.h"
#include "fastdeploy/vision/utils/utils.h"
#include "./wav.h"

class Vad : public fastdeploy::FastDeployModel {
@@ -7,6 +7,11 @@ English | [简体中文](README_CN.md)

- (1) The *.onnx files provided by the [Official Repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can be deployed directly;
- (2) Models trained by developers themselves should be exported to ONNX. Refer to the [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.

## Download Pre-trained ONNX Model
@@ -5,6 +5,11 @@ The YOLOv7End2EndTRT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7)

- (1) The *.pt files provided by the [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should go through [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. Deployment of *.trt and *.pose models is not supported.
- (2) YOLOv7 models trained on personal data should go through [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to the [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.

## Export the ONNX Model
@@ -101,7 +101,7 @@ VPL model loading and initialization, among which model_file is the exported ONNX model

#### Predict function

> ```c++
> ArcFace::Predict(cv::Mat* im, FaceRecognitionResult* result)
> ArcFace::Predict(const cv::Mat& im, FaceRecognitionResult* result)
> ```
>
> Model prediction interface. Takes an input image and produces the recognition result.
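A brief usage sketch of the new const-reference overload (assuming an already-initialized model in the `fastdeploy::vision::faceid` namespace; the image path is a placeholder):

```c++
#include <iostream>
#include "fastdeploy/vision.h"

// Model construction is elided; see the loading and initialization section above.
void RunArcFace(fastdeploy::vision::faceid::ArcFace& model) {
  cv::Mat im = cv::imread("face_0.jpg");  // placeholder image path
  fastdeploy::vision::FaceRecognitionResult result;
  // The image is now passed by const reference rather than a mutable pointer.
  if (!model.Predict(im, &result)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << result.Str() << std::endl;  // summary of the face embedding
}
```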
@@ -121,8 +121,6 @@ Pre-processing and post-processing parameters can be changed by modifying the members

Revise through InsightFaceRecognitionPreprocessor::SetAlpha(std::vector<float>& alpha)
> > * **beta**(vector<float>): Preprocessing normalization beta, computed as `x' = x * alpha + beta`. beta defaults to [-1.f, -1.f, -1.f].
Revise through InsightFaceRecognitionPreprocessor::SetBeta(std::vector<float>& beta)
> > * **permute**(bool): Whether to convert BGR to RGB during preprocessing. Default true.
Revise through InsightFaceRecognitionPreprocessor::SetPermute(bool permute)
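As a sketch only, overriding these defaults could look like the following, assuming the model exposes its preprocessor via a `GetPreprocessor()` accessor as other FastDeploy vision models do; the values shown are just the documented defaults:

```c++
// Assumes `model` is an initialized InsightFace recognition model (e.g. ArcFace).
std::vector<float> alpha = {1.f / 127.5f, 1.f / 127.5f, 1.f / 127.5f};
std::vector<float> beta = {-1.f, -1.f, -1.f};
auto& preprocessor = model.GetPreprocessor();
preprocessor.SetAlpha(alpha);   // normalization: x' = x * alpha + beta
preprocessor.SetBeta(beta);
preprocessor.SetPermute(true);  // convert BGR to RGB during preprocessing
```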

#### InsightFaceRecognitionPostprocessor member variables (post-processing parameters)
> > * **l2_normalize**(bool): Whether to apply L2 normalization before outputting the face vector. Default false.
@@ -99,7 +99,6 @@ Member variables of AdaFacePreprocessor are as follows:

> > * **size**(list[int]): This parameter changes the resize dimensions used during preprocessing, containing two integer elements for [width, height]. Default [112, 112]
> > * **alpha**(list[float]): Preprocessing normalization alpha, computed as `x' = x * alpha + beta`. alpha defaults to [1. / 127.5, 1. / 127.5, 1. / 127.5]
> > * **beta**(list[float]): Preprocessing normalization beta, computed as `x' = x * alpha + beta`. beta defaults to [-1.f, -1.f, -1.f]
> > * **swap_rb**(bool): Whether to convert BGR to RGB during preprocessing. Default True

#### Member variables of AdaFacePostprocessor
Member variables of AdaFacePostprocessor are as follows:
@@ -1,7 +1,7 @@

[English](README.md) | 简体中文
# InsightFace RKNPU Deployment Models
English | [简体中文](README_CN.md)
# InsightFace RKNPU Deployment Example

This tutorial covers deploying InsightFace models in the RKNPU2 environment. For a detailed model introduction and ONNX model downloads, see the [Model Introduction Document](../README.md).
This document describes the deployment of the InsightFace model in the RKNPU2 environment. For a detailed model introduction and ONNX model downloads, see the [Model Introduction Document](../README.md).

## List of Supported Models
FastDeploy currently supports the deployment of the following models:
examples/vision/faceid/insightface/rknpu2/README_CN.md (54 lines; new file)
@@ -0,0 +1,54 @@

[English](README.md) | 简体中文
# InsightFace RKNPU Deployment Models

This tutorial covers deploying InsightFace models in the RKNPU2 environment. For a detailed model introduction and ONNX model downloads, see the [Model Introduction Document](../README.md).

## List of Supported Models
FastDeploy currently supports the deployment of the following models:
- ArcFace
- CosFace
- PartialFC
- VPL

## Download Pre-trained ONNX Models

For developers' convenience, the exported models of each InsightFace series are provided below and can be downloaded and used directly. The accuracy figures in the table come from the model descriptions in the official InsightFace repository; see the InsightFace documentation for details.

| Model | Size | Accuracy (AgeDB_30) |
|:-------------------------------------------------------------------------------------------|:------|:--------------|
| [CosFace-r18](https://bj.bcebos.com/paddlehub/fastdeploy/glint360k_cosface_r18.onnx) | 92MB | 97.7 |
| [CosFace-r34](https://bj.bcebos.com/paddlehub/fastdeploy/glint360k_cosface_r34.onnx) | 131MB | 98.3 |
| [CosFace-r50](https://bj.bcebos.com/paddlehub/fastdeploy/glint360k_cosface_r50.onnx) | 167MB | 98.3 |
| [CosFace-r100](https://bj.bcebos.com/paddlehub/fastdeploy/glint360k_cosface_r100.onnx) | 249MB | 98.4 |
| [ArcFace-r18](https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r18.onnx) | 92MB | 97.7 |
| [ArcFace-r34](https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r34.onnx) | 131MB | 98.1 |
| [ArcFace-r50](https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r50.onnx) | 167MB | - |
| [ArcFace-r100](https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r100.onnx) | 249MB | 98.4 |
| [ArcFace-r100_lr0.1](https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_r100_lr01.onnx) | 249MB | 98.4 |
| [PartialFC-r34](https://bj.bcebos.com/paddlehub/fastdeploy/partial_fc_glint360k_r50.onnx) | 167MB | - |
| [PartialFC-r50](https://bj.bcebos.com/paddlehub/fastdeploy/partial_fc_glint360k_r100.onnx) | 249MB | - |

## Convert to an RKNPU Model

```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ms1mv3_arcface_r18.onnx

python -m paddle2onnx.optimize --input_model ./ms1mv3_arcface_r18.onnx \
                               --output_model ./ms1mv3_arcface_r18.onnx \
                               --input_shape_dict "{'data':[1,3,112,112]}"

python /Path/To/FastDeploy/tools/rknpu2/export.py \
       --config_path tools/rknpu2/config/arcface_unquantized.yaml \
       --target_platform rk3588
```
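Loading the converted model in C++ should then follow the same pattern as the SCRFD RKNPU2 example in this repository. The class name `fastdeploy::vision::faceid::ArcFace` and the output file name produced by `export.py` are assumptions here; double-check both against the C++ deployment document linked below:

```c++
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  auto option = fastdeploy::RuntimeOption();
  option.UseRKNPU2();  // run on the RK3588 NPU

  // The params file is empty for RKNN, and the model format must be set to RKNN.
  auto model = fastdeploy::vision::faceid::ArcFace(
      "./ms1mv3_arcface_r18_rk3588.rknn",  // assumed output name from export.py
      "", option, fastdeploy::ModelFormat::RKNN);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  // Depending on the conversion config, normalization may already run on the
  // NPU; see the RKNPU2 documents for whether to disable it in preprocessing.
  cv::Mat face = cv::imread("face_0.jpg");  // placeholder image path
  fastdeploy::vision::FaceRecognitionResult result;
  if (!model.Predict(face, &result)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  std::cout << result.Str() << std::endl;
  return 0;
}
```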

## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Notes

- The documentation and code in this version are based on [InsightFace CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5)
@@ -26,7 +26,7 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# Download the example code for deployment
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vison/ocr/PP-OCRv3/python/
cd examples/vision/ocr/PP-OCRv3/python/

# CPU inference
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu
@@ -26,7 +26,7 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vison/ocr/PP-OCRv3/python/
cd examples/vision/ocr/PP-OCRv3/python/

# CPU inference
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu
@@ -40,8 +40,6 @@ if(ENABLE_DEEPSTREAM)
  link_directories(/opt/nvidia/deepstream/deepstream/lib/)
  list(APPEND ALL_STREAMER_SRCS ${DEEPSTREAM_SRCS})
  list(APPEND DEPEND_LIBS nvdsgst_meta nvds_meta)
else()
  message(FATAL_ERROR "Currently, DeepStream is required, we will make it optional later.")
endif()

# Link the yaml-cpp in system path, because deepstream also depends on yaml-cpp,
@@ -30,6 +30,9 @@ FastDeploy Streamer (FDStreamer) is an AI multimedia stream processing framework that works in a pipeline fashion
docker pull nvcr.io/nvidia/deepstream:6.1.1-devel
```

### CPU
- GStreamer 1.14+

## Build and Run

1. [Build FastDeploy](../docs/cn/build_and_install), or directly download the [FastDeploy prebuilt libraries](../docs/cn/build_and_install/download_prebuilt_libraries.md)
@@ -46,9 +49,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
```

Build options:
- ENABLE_DEEPSTREAM: whether to use NVIDIA DeepStream; this option must be turned off on environments without an NVIDIA GPU. Default ON

3. Build and run the examples

| Example | Brief |
|:--|:--|
| [PP-YOLOE](./examples/ppyoloe) | Multiple video inputs, PP-YOLOE object detection, NvTracker tracking, hardware codec, writing to an mp4 file |
| [Video Decoder](./examples/video_decoder) | Hardware video decoding |
| [Video Decoder](./examples/video_decoder) | Hardware and software video decoding |
@@ -29,6 +29,9 @@ Install DeepStream 6.1.1 and dependencies manually, or use the docker image below:
docker pull nvcr.io/nvidia/deepstream:6.1.1-devel
```

### CPU
- GStreamer 1.14+

## Build

1. [Build FastDeploy](../docs/en/build_and_install), or download the [FastDeploy prebuilt libraries](../docs/en/build_and_install/download_prebuilt_libraries.md)
@@ -45,9 +48,12 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
```

CMake options:
- ENABLE_DEEPSTREAM, whether to enable NVIDIA DeepStream, ON by default.

3. Build and Run Example

| Example | Brief |
|:--|:--|
| [PP-YOLOE](./examples/ppyoloe) | Multiple input videos, PP-YOLOE object detection, NvTracker, hardware codec, writing to an mp4 file |
| [Video Decoder](./examples/video_decoder) | Video decoding using hardware |
| [Video Decoder](./examples/video_decoder) | Video decoding using GPU or CPU |
@@ -4,7 +4,7 @@

## Build and Run

1. FastDeploy Streamer needs to be built first; see the [README](../../../README_CN.md)
1. This example depends on DeepStream: prepare the DeepStream environment and build FastDeploy Streamer; see the [README](../../../README_CN.md)

2. Build the example
```
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)

## Build and Run

1. Build FastDeploy Streamer first; see the [README](../../../README.md)
1. This example requires DeepStream: please prepare the DeepStream environment and build FastDeploy Streamer; refer to the [README](../../../README.md)

2. Build Example
```
@@ -15,6 +15,7 @@ make -j

3. Run
```
cp ../streamer_cfg.yml .
# GPU decoding (gpu.yml) or CPU decoding (cpu.yml)
cp ../gpu.yml streamer_cfg.yml
./video_decoder
```
@@ -15,6 +15,7 @@ make -j

3. Run
```
cp ../streamer_cfg.yml .
# GPU decoding (gpu.yml) or CPU decoding (cpu.yml)
cp ../gpu.yml streamer_cfg.yml
./video_decoder
```
streamer/examples/video_decoder/cpp/cpu.yml (17 lines; new file)
@@ -0,0 +1,17 @@
app:
  type: video_decoder
  enable-perf-measurement: true
  perf-measurement-interval-sec: 5

uridecodebin:
  uri: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov

videoconvert:

capsfilter:
  caps: video/x-raw,format=(string)BGR

appsink:
  sync: true
  max-buffers: 60
  drop: false
@@ -14,6 +14,7 @@
#pragma once

#include "fastdeploy/core/fd_type.h"
#include "fastdeploy/runtime/enum_variables.h"
#include "fastdeploy/utils/perf.h"
#include <gst/gst.h>