diff --git a/docs/cn/faq/rknpu2/export.md b/docs/cn/faq/rknpu2/export.md
index 1d6bbb296..dc740c7fc 100644
--- a/docs/cn/faq/rknpu2/export.md
+++ b/docs/cn/faq/rknpu2/export.md
@@ -8,7 +8,7 @@ Fastdeploy已经简单的集成了onnx->rknn的转换过程。
本教程使用tools/rknpu2/export.py文件导出模型,在导出之前需要编写yaml配置文件。

## 环境要求
-在进行转换前请根据[rknn_toolkit2安装文档](./install_rknn_toolkit2.md)检查环境是否已经安装成功。
+在进行转换前请根据[rknn_toolkit2安装文档](./environment.md)检查环境是否已经安装成功。

## export.py 配置参数介绍
diff --git a/docs/cn/faq/use_cpp_sdk_on_android.md b/docs/cn/faq/use_cpp_sdk_on_android.md
index 4653d568c..b11db3a3b 100644
--- a/docs/cn/faq/use_cpp_sdk_on_android.md
+++ b/docs/cn/faq/use_cpp_sdk_on_android.md
@@ -73,7 +73,7 @@ Android Studio 生成 JNI 函数定义: 鼠标停留在Java中定义的native函数上:

## 在C++层实现JNI函数

-以下为PicoDet JNI层实现的示例,相关的辅助函数不在此处赘述,完整的C++代码请参考 [android/app/src/main/cpp](../../../examples/vision/detection/paddledetection/android/app/src/main/cpp/).
+以下为PicoDet JNI层实现的示例,相关的辅助函数不在此处赘述,完整的C++代码请参考 [android/app/src/main/cpp](../../examples/vision/detection/paddledetection/android/app/src/main/cpp/).
```C++
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
diff --git a/docs/docs_i18n/README_Pу́сский_язы́к.md b/docs/docs_i18n/README_Pу́сский_язы́к.md
index c192c81d1..829b4d0e6 100644
--- a/docs/docs_i18n/README_Pу́сский_язы́к.md
+++ b/docs/docs_i18n/README_Pу́сский_язы́к.md
@@ -29,17 +29,17 @@
-[](examples/vision/classification)
-[](examples/vision/detection)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/matting)
-[](examples/vision/matting)
-[](examples/vision/ocr)
-[](examples/vision/facealign)
-[](examples/vision/keypointdetection)
-[](https://user-images.githubusercontent.com/54695910/200162475-f5d85d70-18fb-4930-8e7e-9ca065c1d618.gif)
-[](examples/text)
+[](../../examples/vision/classification)
+[](../../examples/vision/detection)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/matting)
+[](../../examples/vision/matting)
+[](../../examples/vision/ocr)
+[](../../examples/vision/facealign)
+[](../../examples/vision/keypointdetection)
+[](https://github.com/PaddlePaddle/FastDeploy/issues/6)
+[](../../examples/text)
[](https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/parakeet_espnet_fs2_pwg_demo/tn_g2p/parakeet/001.wav)
@@ -270,7 +270,7 @@ int main(int argc, char* argv[]) {
| Сценарии миссий | Модели | Linux | Linux | Win | Win | Mac | Mac | Linux | Linux | Linux | Linux | Linux |
|:----------------------:|:--------------------------------------------------------------------------------------------:|:------------------------------------------------:|:----------:|:-------:|:----------:|:-------:|:-------:|:-----------:|:---------------:|:-------------:|:-------------:|:-------:|
| --- | --- | X86 CPU | NVIDIA GPU | X86 CPU | NVIDIA GPU | X86 CPU | Arm CPU | AArch64 CPU | Phytium D2000CPU | NVIDIA Jetson | Graphcore IPU | Serving |
-| Classification | [PaddleClas/ResNet50](./../../examples/vision/classification/paddleclas) | [✅](./examples/vision/classification/paddleclas) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Classification | [PaddleClas/ResNet50](./../../examples/vision/classification/paddleclas) | [✅](./../../examples/vision/classification/paddleclas) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Classification | [TorchVison/ResNet](./../../examples/vision/classification/resnet) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Classification | [ultralytics/YOLOv5Cls](./../../examples/vision/classification/yolov5cls) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
| Classification | [PaddleClas/PP-LCNet](./../../examples/vision/classification/paddleclas) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
diff --git a/docs/docs_i18n/README_हिन्दी.md b/docs/docs_i18n/README_हिन्दी.md
index 3fbd2f190..db64641c4 100644
--- a/docs/docs_i18n/README_हिन्दी.md
+++ b/docs/docs_i18n/README_हिन्दी.md
@@ -29,17 +29,17 @@
-[](examples/vision/classification)
-[](examples/vision/detection)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/matting)
-[](examples/vision/matting)
-[](examples/vision/ocr)
-[](examples/vision/facealign)
-[](examples/vision/keypointdetection)
-[](https://user-images.githubusercontent.com/54695910/200162475-f5d85d70-18fb-4930-8e7e-9ca065c1d618.gif)
-[](examples/text)
+[](../../examples/vision/classification)
+[](../../examples/vision/detection)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/matting)
+[](../../examples/vision/matting)
+[](../../examples/vision/ocr)
+[](../../examples/vision/facealign)
+[](../../examples/vision/keypointdetection)
+[](https://github.com/PaddlePaddle/FastDeploy/issues/6)
+[](../../examples/text)
[](https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/parakeet_espnet_fs2_pwg_demo/tn_g2p/parakeet/001.wav)
diff --git a/docs/docs_i18n/README_日本語.md b/docs/docs_i18n/README_日本語.md
index 6aaa97854..65ae8c009 100644
--- a/docs/docs_i18n/README_日本語.md
+++ b/docs/docs_i18n/README_日本語.md
@@ -29,17 +29,17 @@
-[](examples/vision/classification)
-[](examples/vision/detection)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/matting)
-[](examples/vision/matting)
-[](examples/vision/ocr)
-[](examples/vision/facealign)
-[](examples/vision/keypointdetection)
-[](https://user-images.githubusercontent.com/54695910/200162475-f5d85d70-18fb-4930-8e7e-9ca065c1d618.gif)
-[](examples/text)
+[](../../examples/vision/classification)
+[](../../examples/vision/detection)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/matting)
+[](../../examples/vision/matting)
+[](../../examples/vision/ocr)
+[](../../examples/vision/facealign)
+[](../../examples/vision/keypointdetection)
+[](https://github.com/PaddlePaddle/FastDeploy/issues/6)
+[](../../examples/text)
[](https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/parakeet_espnet_fs2_pwg_demo/tn_g2p/parakeet/001.wav)
@@ -132,7 +132,7 @@
 - **よくある質問**
   - [1. Windows上C++ SDK の場合使用方法](./../../docs/en/faq/use_sdk_on_windows.md)
   - [2. FastDeploy C++ SDKをAndroidで使用する方法](./../../docs/en/faq/use_cpp_sdk_on_android.md)
-  - [3. TensorRT 使い方のコツ](./../../doc/en/faq/tensorrt_tricks.md)
+  - [3. TensorRT 使い方のコツ](./../../docs/en/faq/tensorrt_tricks.md)
 - **続きを読むFastDeployモジュールのデプロイメント**
   - [Benchmark テスト](./../../benchmark)
 - **モデル対応表**
diff --git a/docs/docs_i18n/README_한국인.md b/docs/docs_i18n/README_한국인.md
index e035c1f39..77d346c6c 100644
--- a/docs/docs_i18n/README_한국인.md
+++ b/docs/docs_i18n/README_한국인.md
@@ -29,17 +29,17 @@
-[](examples/vision/classification)
-[](examples/vision/detection)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/segmentation/paddleseg)
-[](examples/vision/matting)
-[](examples/vision/matting)
-[](examples/vision/ocr)
-[](examples/vision/facealign)
-[](examples/vision/keypointdetection)
-[](https://user-images.githubusercontent.com/54695910/200162475-f5d85d70-18fb-4930-8e7e-9ca065c1d618.gif)
-[](examples/text)
+[](../../examples/vision/classification)
+[](../../examples/vision/detection)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/segmentation/paddleseg)
+[](../../examples/vision/matting)
+[](../../examples/vision/matting)
+[](../../examples/vision/ocr)
+[](../../examples/vision/facealign)
+[](../../examples/vision/keypointdetection)
+[](https://github.com/PaddlePaddle/FastDeploy/issues/6)
+[](../../examples/text)
[](https://paddlespeech.bj.bcebos.com/Parakeet/docs/demos/parakeet_espnet_fs2_pwg_demo/tn_g2p/parakeet/001.wav)
@@ -131,7 +131,7 @@
 - **늘 보는 질문**
   - [1. Windows C++ SDK 어떻게 사용하는가](./../../docs/cn/faq/use_sdk_on_windows.md)
   - [2. Android 어떻게 사용하는가 FastDeploy C++ SDK](./../../docs/cn/faq/use_cpp_sdk_on_android.md)
-  - [3. TensorRT 몇 가지 기술들이 있습니다](./../../docs/cn/faq/tensorrt_tricks.md)
+  - [3. TensorRT 몇 가지 기술들이 있습니다](./../../docs/en/faq/tensorrt_tricks.md)
 - **더 많은FastDeploy 배포 모듈**
   - [Benchmark 테스트](./../../benchmark)
 - **모델 지원 목록**
diff --git a/docs/en/build_and_install/README.md b/docs/en/build_and_install/README.md
index 29712a883..3dd273400 100755
--- a/docs/en/build_and_install/README.md
+++ b/docs/en/build_and_install/README.md
@@ -17,7 +17,7 @@ English | [中文](../../cn/build_and_install/README.md)
 - [Build and Install on A311D Platform](a311d.md)
 - [Build and Install on KunlunXin XPU Platform](kunlunxin.md)
 - [Build and Install on Huawei Ascend Platform](huawei_ascend.md)
-- [Build and Install on SOPHGO Platform](sophgo.md.md)
+- [Build and Install on SOPHGO Platform](sophgo.md)

## Build options
diff --git a/docs/en/faq/rknpu2/install_rknn_toolkit2.md b/docs/en/faq/rknpu2/install_rknn_toolkit2.md
index 13f7c5a7a..a7cc106dd 100644
--- a/docs/en/faq/rknpu2/install_rknn_toolkit2.md
+++ b/docs/en/faq/rknpu2/install_rknn_toolkit2.md
@@ -1,4 +1,4 @@
-English | [中文](../../../cn/faq/rknpu2/install_rknn_toolkit2.md)
+English | [中文](../../cn/faq/rknpu2/install_rknn_toolkit2.md)

# RKNN-Toolkit2 Installation
## Download
@@ -46,4 +46,4 @@ pip install rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
```

## Other Documents
-- [How to convert ONNX to RKNN](./export.md)
\ No newline at end of file
+- [How to convert ONNX to RKNN](./export.md)
diff --git a/examples/application/js/WebDemo.md b/examples/application/js/WebDemo.md
index def6b4284..b9d2fa18d 100644
--- a/examples/application/js/WebDemo.md
+++ b/examples/application/js/WebDemo.md
@@ -149,7 +149,7 @@ const postConfig = {thresh: 0.5};
await model.predict(Config);
````

-Take the OCR text detection demo as an example, modify the parameters of the text detection post-processing to achieve the effect of expanding the text detection frame, and modify the OCR web demo to execute the [model prediction code](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/application/web_demo/src/pages/cv/ocr/TextRecognition/TextRecognition.vue#L99), ie:
+Take the OCR text detection demo as an example, modify the parameters of the text detection post-processing to achieve the effect of expanding the text detection frame, and modify the OCR web demo to execute the [model prediction code](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/application/js/web_demo/src/pages/cv/ocr/TextRecognition/TextRecognition.vue#L99), ie:

````
const res = await ocr.recognize(img, { canvas: canvas.value });
diff --git a/examples/application/js/WebDemo_CN.md b/examples/application/js/WebDemo_CN.md
index 3eeb89f73..a6cd2c996 100644
--- a/examples/application/js/WebDemo_CN.md
+++ b/examples/application/js/WebDemo_CN.md
@@ -148,7 +148,7 @@ const postConfig = {thresh: 0.5};
await model.predict(Config);
```

-以OCR文本检测 demo为例,修改文本检测后处理的参数实现扩大文本检测框的效果,修改OCR web demo中执行[模型预测代码](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/application/web_demo/src/pages/cv/ocr/TextRecognition/TextRecognition.vue#L99),即:
+以OCR文本检测 demo为例,修改文本检测后处理的参数实现扩大文本检测框的效果,修改OCR web demo中执行[模型预测代码](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/application/js/web_demo/src/pages/cv/ocr/TextRecognition/TextRecognition.vue#L99),即:

```
const res = await ocr.recognize(img, { canvas: canvas.value });
diff --git a/examples/application/js/converter/RNN.md b/examples/application/js/converter/RNN.md
index 2ee83d7df..19b8ad66a 100644
--- a/examples/application/js/converter/RNN.md
+++ b/examples/application/js/converter/RNN.md
@@ -70,11 +70,11 @@ Formula: rnn_matmul = rnn_origin + Matmul( $ S_{t-1} $, WeightList_hh)

3) rnn_cell

Method: Split the rnn_matmul op output into 4 copies, each copy performs a different activation function calculation, and finally outputs lstm_x_y.tmp_c[1, 1, 48]. x∈[0, 3], y∈[0, 24].
-For details, please refer to [rnn_cell](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_cell.ts).
+For details, please refer to [rnn_cell](https://github.com/PaddlePaddle/Paddle.js/blob/release/v2.2.5/packages/paddlejs-backend-webgl/src/ops/shader/rnn/rnn_cell.ts).

4) rnn_hidden

Split the rnn_matmul op output into 4 copies, each copy performs a different activation function calculation, and finally outputs lstm_x_y.tmp_h[1, 1, 48]. x∈[0, 3], y∈[0, 24].
-For details, please refer to [rnn_hidden](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_hidden.ts).
+For details, please refer to [rnn_hidden](https://github.com/PaddlePaddle/Paddle.js/blob/release/v2.2.5/packages/paddlejs-backend-webgl/src/ops/shader/rnn/rnn_hidden.ts).
diff --git a/examples/application/js/converter/RNN_CN.md b/examples/application/js/converter/RNN_CN.md
index b4fe8ccd9..46f0acd37 100644
--- a/examples/application/js/converter/RNN_CN.md
+++ b/examples/application/js/converter/RNN_CN.md
@@ -73,11 +73,11 @@ paddle源码实现:https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/

3)rnn_cell

计算方式:将rnn_matmul op输出结果分割成4份,每份执行不同激活函数计算,最后输出lstm_x_y.tmp_c[1, 1, 48]。x∈[0, 3],y∈[0, 24]。
-详见算子实现:[rnn_cell](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_cell.ts)
+详见算子实现:[rnn_cell](https://github.com/PaddlePaddle/Paddle.js/blob/release/v2.2.5/packages/paddlejs-backend-webgl/src/ops/shader/rnn/rnn_cell.ts)

4)rnn_hidden

计算方式:将rnn_matmul op输出结果分割成4份,每份执行不同激活函数计算,最后输出lstm_x_y.tmp_h[1, 1, 48]。x∈[0, 3],y∈[0, 24]。
-详见算子实现:[rnn_hidden](../paddlejs-backend-webgl/src/ops/shader/rnn/rnn_hidden.ts)
+详见算子实现:[rnn_hidden](https://github.com/PaddlePaddle/Paddle.js/blob/release/v2.2.5/packages/paddlejs-backend-webgl/src/ops/shader/rnn/rnn_hidden.ts)
diff --git a/examples/vision/classification/paddleclas/serving/README.md b/examples/vision/classification/paddleclas/serving/README.md
index faca10d0f..cb4515ef4 100644
--- a/examples/vision/classification/paddleclas/serving/README.md
+++ b/examples/vision/classification/paddleclas/serving/README.md
@@ -80,7 +80,7 @@ The current default configuration runs the TensorRT engine on GPU. If you want t

## Use VisualDL for serving deployment visualization

-You can use VisualDL for [serving deployment visualization](../../../../serving/docs/EN/vdl_management-en.md) , the above model preparation, deployment, configuration modification and client request operations can all be performed based on VisualDL.
+You can use VisualDL for [serving deployment visualization](../../../../../serving/docs/EN/vdl_management-en.md), the above model preparation, deployment, configuration modification and client request operations can all be performed based on VisualDL.
The serving deployment of PaddleClas by VisualDL only needs the following three steps:
```text
diff --git a/examples/vision/detection/fastestdet/README_CN.md b/examples/vision/detection/fastestdet/README_CN.md
index c099ee77b..25a0f933e 100644
--- a/examples/vision/detection/fastestdet/README_CN.md
+++ b/examples/vision/detection/fastestdet/README_CN.md
@@ -11,7 +11,7 @@
为了方便开发者的测试,下面提供了FastestDet导出的模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)
| 模型 | 大小 | 精度 | 备注 |
|:---------------------------------------------------------------- |:----- |:----- |:---- |
-| [FastestDet](https://bj.bcebos.com/paddlehub/fastdeploy/FastestDetn.onnx) | 969KB | 25.3% | 此模型文件来源于[FastestDet](https://github.com/dog-qiuqiu/FastestDet.git),BSD-3-Clause license |
+| [FastestDet](https://bj.bcebos.com/paddlehub/fastdeploy/FastestDet.onnx) | 969KB | 25.3% | 此模型文件来源于[FastestDet](https://github.com/dog-qiuqiu/FastestDet.git),BSD-3-Clause license |

## 详细部署文档
@@ -21,4 +21,4 @@

## 版本说明

-- 本版本文档和代码基于[FastestDet](https://github.com/dog-qiuqiu/FastestDet.git) 编写
\ No newline at end of file
+- 本版本文档和代码基于[FastestDet](https://github.com/dog-qiuqiu/FastestDet.git) 编写
diff --git a/examples/vision/detection/paddledetection/sophgo/README.md b/examples/vision/detection/paddledetection/sophgo/README.md
index 20d30d386..0d201662f 100644
--- a/examples/vision/detection/paddledetection/sophgo/README.md
+++ b/examples/vision/detection/paddledetection/sophgo/README.md
@@ -5,7 +5,7 @@
目前SOPHGO支持如下模型的部署
 - [PP-YOLOE系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
 - [PicoDet系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet)
-- [YOLOV8系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/)
+- [YOLOV8系列模型](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4)

## 准备PP-YOLOE YOLOV8或者PicoDet部署模型以及转换模型
diff --git a/examples/vision/detection/rkyolo/cpp/README.md b/examples/vision/detection/rkyolo/cpp/README.md
index 68c6ea83f..1984ae7a4 100644
--- a/examples/vision/detection/rkyolo/cpp/README.md
+++ b/examples/vision/detection/rkyolo/cpp/README.md
@@ -35,7 +35,7 @@ mkdir thirdpartys

### Compile and copy SDK to the thirdpartys folder

-Refer to [RK2 generation NPU deployment repository compilation](../../../../../../docs/cn/build_and_install/rknpu2.md). It will generate fastdeploy-0.0.3 directory in the build directory after compilation. Move it to the thirdpartys directory.
+Refer to [RK2 generation NPU deployment repository compilation](../../../../../docs/cn/build_and_install/rknpu2.md). It will generate fastdeploy-0.0.3 directory in the build directory after compilation. Move it to the thirdpartys directory.

### Copy model files and configuration files to the model folder
In the process of Paddle dynamic graph model -> Paddle static graph model -> ONNX model, the ONNX file and the corresponding yaml configuration file will be generated. Please save the configuration file in the model folder.
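The rkyolo READMEs patched here describe the same flow in prose: select the RKNPU2 backend, load the converted RKNN model, then run prediction. A minimal Python sketch of that flow follows. It is an illustration only, not code from this patch: the class and helper names (`RKYOLOV5`, `use_rknpu2`, `vis_detection`) and the file names are assumptions drawn from the rkyolo example tree, so verify them against the installed FastDeploy wheel.

```python
# Sketch: RKNPU2 inference with FastDeploy's Python API (assumed names).
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_rknpu2()  # assumed helper that selects the RKNPU2 backend

# Hypothetical .rknn file produced by tools/rknpu2/export.py and its yaml config
model = fd.vision.detection.RKYOLOV5(
    "yolov5s_rk3588.rknn",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

im = cv2.imread("test.jpg")
result = model.predict(im)  # DetectionResult: boxes, scores, label_ids
print(result)

vis = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis)
```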
@@ -66,4 +66,4 @@ cd ./build/install

 - [Model Description](../../)
 - [Python Deployment](../python)
-- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/)
+- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/rkyolo/cpp/README_CN.md b/examples/vision/detection/rkyolo/cpp/README_CN.md
index 014e48825..924e34984 100644
--- a/examples/vision/detection/rkyolo/cpp/README_CN.md
+++ b/examples/vision/detection/rkyolo/cpp/README_CN.md
@@ -35,7 +35,7 @@ mkdir thirdpartys

### 编译并拷贝SDK到thirdpartys文件夹

-请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
+请参考[RK2代NPU部署库编译](../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.

### 拷贝模型文件,以及配置文件至model文件夹
@@ -67,4 +67,4 @@ cd ./build/install

 - [模型介绍](../../)
 - [Python部署](../python)
-- [视觉模型预测结果](../../../../../../docs/api/vision_results/)
+- [视觉模型预测结果](../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/rkyolo/python/README.md b/examples/vision/detection/rkyolo/python/README.md
index 3aa0d4f42..4cb2a444d 100644
--- a/examples/vision/detection/rkyolo/python/README.md
+++ b/examples/vision/detection/rkyolo/python/README.md
@@ -3,7 +3,7 @@ English | [简体中文](README_CN.md)

Two steps before deployment

-- 1. Software and hardware should meet the requirements. Refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/rknpu2.md)
+- 1. Software and hardware should meet the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/rknpu2.md)

This directory provides examples that `infer.py` fast finishes the deployment of Picodet on RKNPU. The script is as follows
@@ -31,5 +31,5 @@ The model needs to be in NHWC format on RKNPU. The normalized image will be embe

 - [PaddleDetection Model Description](..)
 - [PaddleDetection C++ Deployment](../cpp)
-- [model prediction Results](../../../../../../docs/api/vision_results/)
+- [model prediction Results](../../../../../docs/api/vision_results/)
 - [Convert PaddleDetection RKNN Model Files](../README.md)
diff --git a/examples/vision/detection/rkyolo/python/README_CN.md b/examples/vision/detection/rkyolo/python/README_CN.md
index 22ff8d5fd..09a0e14aa 100644
--- a/examples/vision/detection/rkyolo/python/README_CN.md
+++ b/examples/vision/detection/rkyolo/python/README_CN.md
@@ -3,7 +3,7 @@

在部署前,需确认以下两个步骤

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/rknpu2.md)

本目录下提供`infer.py`快速完成Picodet在RKNPU上部署的示例。执行如下脚本即可完成
@@ -31,5 +31,5 @@ RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作

 - [PaddleDetection 模型介绍](..)
 - [PaddleDetection C++部署](../cpp)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
 - [转换PaddleDetection RKNN模型文档](../README.md)
diff --git a/examples/vision/faceid/adaface/python/README_CN.md b/examples/vision/faceid/adaface/python/README_CN.md
index 5421bd612..6475b1a32 100644
--- a/examples/vision/faceid/adaface/python/README_CN.md
+++ b/examples/vision/faceid/adaface/python/README_CN.md
@@ -1,33 +1,125 @@
-[English](README.md) | 简体中文
-# AdaFace准备部署模型
+# AdaFace Python部署示例
+本目录下提供infer_xxx.py快速完成AdaFace模型在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。
-- [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/)
-  - [官方库](https://github.com/PaddlePaddle/PaddleClas/)中训练过后的Paddle模型导出Paddle静态图模型操作后,可进行部署;
+在部署前,需确认以下两个步骤
-
-## 简介
-一直以来,低质量图像的人脸识别都具有挑战性,因为低质量图像的人脸属性是模糊和退化的。将这样的图片输入模型时,将不能很好的实现分类。
-而在人脸识别任务中,我们经常会利用opencv的仿射变换来矫正人脸数据,这时数据会出现低质量退化的现象。如何解决低质量图片的分类问题成为了模型落地时的痛点问题。
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-在AdaFace这项工作中,作者在损失函数中引入了另一个因素,即图像质量。作者认为,强调错误分类样本的策略应根据其图像质量进行调整。
-具体来说,简单或困难样本的相对重要性应该基于样本的图像质量来给定。据此作者提出了一种新的损失函数来通过图像质量强调不同的困难样本的重要性。
+以AdaFace为例, 提供`infer.py`快速完成AdaFace在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成
-由上,AdaFace缓解了低质量图片在输入网络后输出结果精度变低的情况,更加适合在人脸识别任务落地中使用。
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/faceid/adaface/python/
+
+# 下载AdaFace模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/rknpu2/face_demo.zip
+unzip face_demo.zip
+
+# 如果为Paddle模型,运行以下代码
+wget https://bj.bcebos.com/paddlehub/fastdeploy/mobilefacenet_adaface.tgz
+tar zxvf mobilefacenet_adaface.tgz -C ./
+
+# CPU推理
+python infer.py --model mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
+                --params_file mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
+                --face face_0.jpg \
+                --face_positive face_1.jpg \
+                --face_negative face_2.jpg \
+                --device cpu
+# GPU推理
+python infer.py --model mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
+                --params_file mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
+                --face face_0.jpg \
+                --face_positive face_1.jpg \
+                --face_negative face_2.jpg \
+                --device gpu
+# GPU上使用TensorRT推理
+python infer.py --model mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
+                --params_file mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
+                --face face_0.jpg \
+                --face_positive face_1.jpg \
+                --face_negative face_2.jpg \
+                --device gpu \
+                --use_trt True
+
+# 昆仑芯XPU推理
+python infer.py --model mobilefacenet_adaface/mobilefacenet_adaface.pdmodel \
+                --params_file mobilefacenet_adaface/mobilefacenet_adaface.pdiparams \
+                --face face_0.jpg \
+                --face_positive face_1.jpg \
+                --face_negative face_2.jpg \
+                --device kunlunxin
+```
+
+运行完成可视化结果如下图所示
+
+
+
+
+
+```bash
+FaceRecognitionResult: [Dim(512), Min(-0.133213), Max(0.148838), Mean(0.000293)]
+FaceRecognitionResult: [Dim(512), Min(-0.102777), Max(0.120130), Mean(0.000615)]
+FaceRecognitionResult: [Dim(512), Min(-0.116685), Max(0.142919), Mean(0.001595)]
+Cosine 01: 0.7483505506964364
+Cosine 02: -0.09605773855893639
+```
+
+## AdaFace Python接口
+
+```python
+fastdeploy.vision.faceid.AdaFace(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.PADDLE)
+```
+
+AdaFace模型加载和初始化,其中model_file为导出的ONNX模型格式或PADDLE静态图格式
+
+**参数**
+
+> * **model_file**(str): 模型文件路径
+> * **params_file**(str): 参数文件路径,当模型格式为ONNX格式时,此参数无需设定
+> * **runtime_option**(RuntimeOption): 后端推理配置,默认为None,即采用默认配置
+> * **model_format**(ModelFormat): 模型格式,默认为PADDLE
+
+### predict函数
+
+> ```python
+> AdaFace.predict(image_data)
+> ```
+>
+> 模型预测接口,输入图像直接输出检测结果。
+>
+> **参数**
+>
+> > * **image_data**(np.ndarray): 输入数据,注意需为HWC,BGR格式
+
+> **返回**
+>
+> > 返回`fastdeploy.vision.FaceRecognitionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+
+### 类成员属性
+#### 预处理参数
+用户可按照自己的实际需求,修改下列预处理参数,从而影响最终的推理和部署效果
+
+#### AdaFacePreprocessor的成员变量
+以下变量为AdaFacePreprocessor的成员变量
+> > * **size**(list[int]): 通过此参数修改预处理过程中resize的大小,包含两个整型元素,表示[width, height], 默认值为[112, 112]
+> > * **alpha**(list[float]): 预处理归一化的alpha值,计算公式为`x'=x*alpha+beta`,alpha默认为[1.0 / 127.5, 1.0 / 127.5, 1.0 / 127.5]
+> > * **beta**(list[float]): 预处理归一化的beta值,计算公式为`x'=x*alpha+beta`,beta默认为[-1.0, -1.0, -1.0]
+> > * **swap_rb**(bool): 预处理是否将BGR转换成RGB,默认True
+
+#### AdaFacePostprocessor的成员变量
+以下变量为AdaFacePostprocessor的成员变量
+> > * **l2_normalize**(bool): 输出人脸向量之前是否执行l2归一化,默认False

-## 导出Paddle静态图模型
-以AdaFace为例:
-训练和导出代码,请参考[AIStudio](https://aistudio.baidu.com/aistudio/projectdetail/4479879?contributionType=1)
+## 其它文档
-
-## 下载预训练Paddle静态图模型
-
-为了方便开发者的测试,下面提供了我转换过的各系列模型,开发者可直接下载使用。(下表中模型的精度来源于源官方库)其中精度指标来源于AIStudio中对各模型的介绍。
-
-| 模型 | 大小 | 精度 (AgeDB_30) |
-|:----------------------------------------------------------------------------------------------|:------|:--------------|
-| [AdaFace-MobileFacenet](https://bj.bcebos.com/paddlehub/fastdeploy/mobilefacenet_adaface.tgz) | 3.2MB | 95.5 |
-
-## 详细部署文档
-
-- [Python部署](python)
-- [C++部署](cpp)
+- [AdaFace 模型介绍](..)
+- [AdaFace C++部署](../cpp)
+- [模型预测结果说明](../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/faceid/insightface/rknpu2/cpp/README.md b/examples/vision/faceid/insightface/rknpu2/cpp/README.md
index bb88804cd..0c09d4fbe 100644
--- a/examples/vision/faceid/insightface/rknpu2/cpp/README.md
+++ b/examples/vision/faceid/insightface/rknpu2/cpp/README.md
@@ -44,7 +44,7 @@ unzip face_demo.zip

以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../../docs/cn/faq/use_sdk_on_windows.md)

## InsightFace C++接口
@@ -113,7 +113,7 @@ VPL模型加载和初始化,其中model_file为导出的ONNX模型格式。

> **参数**
>
> > * **im**: 输入图像,注意需为HWC,BGR格式
-> > * **result**: 检测结果,包括检测框,各个框的置信度, FaceRecognitionResult说明参考[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > * **result**: 检测结果,包括检测框,各个框的置信度, FaceRecognitionResult说明参考[视觉模型预测结果](../../../../../../docs/api/vision_results/)

### 修改预处理以及后处理的参数
预处理和后处理的参数的需要通过修改InsightFaceRecognitionPostprocessor,InsightFaceRecognitionPreprocessor的成员变量来进行修改。
diff --git a/examples/vision/faceid/insightface/rknpu2/python/README_CN.md b/examples/vision/faceid/insightface/rknpu2/python/README_CN.md
index fd539f708..c45f28cfc 100644
--- a/examples/vision/faceid/insightface/rknpu2/python/README_CN.md
+++ b/examples/vision/faceid/insightface/rknpu2/python/README_CN.md
@@ -83,7 +83,7 @@ ArcFace模型加载和初始化,其中model_file为导出的ONNX模型格式

> **返回**
>
-> > 返回`fastdeploy.vision.FaceRecognitionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../docs/api/vision_results/)
+> > 返回`fastdeploy.vision.FaceRecognitionResult`结构体,结构体说明参考文档[视觉模型预测结果](../../../../../../docs/api/vision_results/)

### 类成员属性
#### 预处理参数
@@ -104,5 +104,5 @@

 - [InsightFace 模型介绍](..)
 - [InsightFace C++部署](../cpp)
-- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [如何切换模型推理后端引擎](../../../../../../docs/cn/faq/how_to_change_backend.md)
diff --git a/examples/vision/segmentation/paddleseg/README_CN.md b/examples/vision/segmentation/paddleseg/README_CN.md
index 7306a5f4f..0b0cda349 100644
--- a/examples/vision/segmentation/paddleseg/README_CN.md
+++ b/examples/vision/segmentation/paddleseg/README_CN.md
@@ -1,34 +1,47 @@
-[English](README.md) | 简体中文
-# 视觉模型部署
+# PaddleSeg 模型部署
-本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
+## 模型版本说明
-| 任务类型 | 说明 | 预测结果结构体 |
-|:-------------- |:----------------------------------- |:-------------------------------------------------------------------------------- |
-| Detection | 目标检测,输入图像,检测图像中物体位置,并返回检测框坐标及类别和置信度 | [DetectionResult](../../docs/api/vision_results/detection_result.md) |
-| Segmentation | 语义分割,输入图像,给出图像中每个像素的分类及置信度 | [SegmentationResult](../../docs/api/vision_results/segmentation_result.md) |
-| Classification | 图像分类,输入图像,给出图像的分类结果和置信度 | [ClassifyResult](../../docs/api/vision_results/classification_result.md) |
-| FaceDetection | 人脸检测,输入图像,检测图像中人脸位置,并返回检测框坐标及人脸关键点 | [FaceDetectionResult](../../docs/api/vision_results/face_detection_result.md) |
-| FaceAlignment | 人脸对齐(人脸关键点检测),输入图像,返回人脸关键点 | [FaceAlignmentResult](../../docs/api/vision_results/face_alignment_result.md) |
-| KeypointDetection | 关键点检测,输入图像,返回图像中人物行为的各个关键点坐标和置信度 | [KeyPointDetectionResult](../../docs/api/vision_results/keypointdetection_result.md) |
-| FaceRecognition | 人脸识别,输入图像,返回可用于相似度计算的人脸特征的embedding | [FaceRecognitionResult](../../docs/api/vision_results/face_recognition_result.md) |
-| Matting | 抠图,输入图像,返回图片的前景每个像素点的Alpha值 | [MattingResult](../../docs/api/vision_results/matting_result.md) |
-| OCR | 文本框检测,分类,文本框内容识别,输入图像,返回文本框坐标,文本框的方向类别以及框内的文本内容 | [OCRResult](../../docs/api/vision_results/ocr_result.md) |
-| MOT | 多目标跟踪,输入图像,检测图像中物体位置,并返回检测框坐标,对象id及类别置信度 | [MOTResult](../../docs/api/vision_results/mot_result.md) |
-| HeadPose | 头部姿态估计,返回头部欧拉角 | [HeadPoseResult](../../docs/api/vision_results/headpose_result.md) |
+- [PaddleSeg develop](https://github.com/PaddlePaddle/PaddleSeg/tree/develop)
-## FastDeploy API设计
+目前FastDeploy支持如下模型的部署
-视觉模型具有较有统一任务范式,在设计API时(包括C++/Python),FastDeploy将视觉模型的部署拆分为四个步骤
+- [U-Net系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/unet/README.md)
+- [PP-LiteSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/pp_liteseg/README.md)
+- [PP-HumanSeg系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/contrib/PP-HumanSeg/README.md)
+- [FCN系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/fcn/README.md)
+- [DeepLabV3系列模型](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.6/configs/deeplabv3/README.md)
-- 模型加载
-- 图像预处理
-- 模型推理
-- 推理结果后处理
+【注意】如你部署的为**PP-Matting**、**PP-HumanMatting**以及**ModNet**请参考[Matting模型部署](../../matting)
-FastDeploy针对飞桨的视觉套件,以及外部热门模型,提供端到端的部署服务,用户只需准备模型,按以下步骤即可完成整个模型的部署
+## 准备PaddleSeg部署模型
-- 加载模型
-- 调用`predict`接口
+PaddleSeg模型导出,请参考其文档说明[模型导出](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)
-FastDeploy在各视觉模型部署时,也支持一键切换后端推理引擎,详情参阅[如何切换模型推理引擎](../../docs/cn/faq/how_to_change_backend.md)。
+**注意**
+- PaddleSeg导出的模型包含`model.pdmodel`、`model.pdiparams`和`deploy.yaml`三个文件,FastDeploy会从yaml文件中获取模型在推理时需要的预处理信息
+
+## 下载预训练模型
+
+为了方便开发者的测试,下面提供了PaddleSeg导出的部分模型
+- without-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op none`
+- with-argmax导出方式为:**不指定**`--input_shape`,**指定**`--output_op argmax`
+
+开发者可直接下载使用。
+
+| 模型 | 参数文件大小 |输入Shape | mIoU | mIoU (flip) | mIoU (ms+flip) |
+|:---------------------------------------------------------------- |:----- |:----- | :----- | :----- | :----- |
+| [Unet-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_with_argmax_infer.tgz) \| [Unet-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Unet_cityscapes_without_argmax_infer.tgz) | 52MB | 1024x512 | 65.00% | 66.02% | 66.89% |
+| [PP-LiteSeg-B(STDC2)-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_with_argmax_infer.tgz) \| [PP-LiteSeg-B(STDC2)-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_B_STDC2_cityscapes_without_argmax_infer.tgz) | 31MB | 1024x512 | 79.04% | 79.52% | 79.85% |
+|[PP-HumanSegV1-Lite-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV1_Lite_with_argmax_infer.tgz) \| [PP-HumanSegV1-Lite-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Lite_infer.tgz) | 543KB | 192x192 | 86.2% | - | - |
+|[PP-HumanSegV2-Lite-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Lite-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Lite_192x192_infer.tgz) | 12MB | 192x192 | 92.52% | - | - |
+| [PP-HumanSegV2-Mobile-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_with_argmax_infer.tgz) \| [PP-HumanSegV2-Mobile-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV2_Mobile_192x192_infer.tgz) | 29MB | 192x192 | 93.13% | - | - |
+|[PP-HumanSegV1-Server-with-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_with_argmax_infer.tgz) \| [PP-HumanSegV1-Server-without-argmax(通用人像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/PP_HumanSegV1_Server_infer.tgz) | 103MB | 512x512 | 96.47% | - | - |
+| [Portrait-PP-HumanSegV2-Lite-with-argmax(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_with_argmax_infer.tgz) \| [Portrait-PP-HumanSegV2-Lite-without-argmax(肖像分割模型)](https://bj.bcebos.com/paddlehub/fastdeploy/Portrait_PP_HumanSegV2_Lite_256x144_infer.tgz) | 3.6M | 256x144 | 96.63% | - | - |
+| [FCN-HRNet-W18-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_with_argmax_infer.tgz) \| [FCN-HRNet-W18-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/FCN_HRNet_W18_cityscapes_without_argmax_infer.tgz)(暂时不支持ONNXRuntime的GPU推理) | 37MB | 1024x512 | 78.97% | 79.49% | 79.74% |
+| [Deeplabv3-ResNet101-OS8-cityscapes-with-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_with_argmax_infer.tgz) \| [Deeplabv3-ResNet101-OS8-cityscapes-without-argmax](https://bj.bcebos.com/paddlehub/fastdeploy/Deeplabv3_ResNet101_OS8_cityscapes_without_argmax_infer.tgz) | 150MB | 1024x512 | 79.90% | 80.22% | 80.47% |
+
+## 详细部署文档
+
+- [Python部署](python)
+- [C++部署](cpp)
diff --git a/tutorials/intel_gpu/cpp/README_CN.md b/tutorials/intel_gpu/cpp/README_CN.md
index e8e5de523..1546e16b9 100644
--- a/tutorials/intel_gpu/cpp/README_CN.md
+++ b/tutorials/intel_gpu/cpp/README_CN.md
@@ -1,11 +1,11 @@
-English | [中文](README_CN.md)
+[English](README.md) | 中文

# PaddleClas Python Example

在部署前,需确认以下两个步骤

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

**注意** 本文档依赖FastDeploy>=1.0.2版本,或nightly built版本。
diff --git a/tutorials/intel_gpu/python/README_CN.md b/tutorials/intel_gpu/python/README_CN.md
index 178125031..5f65d03ed 100644
--- a/tutorials/intel_gpu/python/README_CN.md
+++ b/tutorials/intel_gpu/python/README_CN.md
@@ -1,11 +1,11 @@
-English | [中文](README_CN.md)
+[English](README.md) | 中文

# PaddleClas Python Example

在部署前,需确认以下两个步骤

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

```bash
# Get FastDeploy codes
diff --git a/tutorials/multi_thread/cpp/pipeline/README.md b/tutorials/multi_thread/cpp/pipeline/README.md
index 9e092cc79..24792a3d2 100644
--- a/tutorials/multi_thread/cpp/pipeline/README.md
+++ b/tutorials/multi_thread/cpp/pipeline/README.md
@@ -45,9 +45,9 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_
# KunlunXin XPU multi-thread inference
./multi_thread_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4 1
>> **Notice**: the last number in above command is thread number
-
+```
The above command works for Linux or MacOS.
For SDK in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../docs/cn/faq/use_sdk_on_windows.md)

The result returned after running is as follows
```
diff --git a/tutorials/multi_thread/cpp/pipeline/README_CN.md b/tutorials/multi_thread/cpp/pipeline/README_CN.md
index de18d1f1c..fc614edd3 100644
--- a/tutorials/multi_thread/cpp/pipeline/README_CN.md
+++ b/tutorials/multi_thread/cpp/pipeline/README_CN.md
@@ -45,9 +45,9 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_
# 昆仑芯XPU推理
./multi_thread_demo ./ch_PP-OCRv3_det_infer ./ch_ppocr_mobile_v2.0_cls_infer ./ch_PP-OCRv3_rec_infer ./ppocr_keys_v1.txt ./12.jpg 4 1
>> **注意**: 最后一位数字表示线程数
-
+```
以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../docs/cn/faq/use_sdk_on_windows.md)
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../docs/cn/faq/use_sdk_on_windows.md)

运行完成后返回结果如下所示
```
diff --git a/tutorials/multi_thread/cpp/single_model/README.md b/tutorials/multi_thread/cpp/single_model/README.md
index 482f89318..eb56c22d7 100644
--- a/tutorials/multi_thread/cpp/single_model/README.md
+++ b/tutorials/multi_thread/cpp/single_model/README.md
@@ -1,4 +1,4 @@
-English | [中文]((README_CN.md))
+English | [中文](README_CN.md)

# Example of PaddleClas models Python Deployment
@@ -36,7 +36,7 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
>> **Notice**: the last number in above command is thread number

The above command works for Linux or MacOS. For SDK in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows ](../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows ](../../../../docs/cn/faq/use_sdk_on_windows.md)

The result returned after running is as follows
```
diff --git a/tutorials/multi_thread/cpp/single_model/README_CN.md b/tutorials/multi_thread/cpp/single_model/README_CN.md
index 179709d97..d97222494 100644
--- a/tutorials/multi_thread/cpp/single_model/README_CN.md
+++ b/tutorials/multi_thread/cpp/single_model/README_CN.md
@@ -36,7 +36,7 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
>> **注意**: 最后一位数字表示线程数

以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../docs/cn/faq/use_sdk_on_windows.md)
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../docs/cn/faq/use_sdk_on_windows.md)

运行完成后返回结果如下所示
```
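The AdaFace README rewritten earlier in this patch documents `fastdeploy.vision.faceid.AdaFace` and its `predict` interface, and the demo prints two cosine scores. A minimal sketch of that similarity check is shown below. It assumes the demo files downloaded in the README and that `FaceRecognitionResult` exposes the feature vector as an `embedding` attribute; both are assumptions to verify against the installed FastDeploy version, not guarantees of this patch.

```python
# Sketch of the AdaFace cosine-similarity check described in the patched README.
# Assumes face_demo.zip and mobilefacenet_adaface.tgz were unpacked as in the
# README's download steps.
import cv2
import numpy as np
import fastdeploy as fd

model = fd.vision.faceid.AdaFace(
    "mobilefacenet_adaface/mobilefacenet_adaface.pdmodel",
    "mobilefacenet_adaface/mobilefacenet_adaface.pdiparams")

def embedding(path):
    # predict takes an HWC, BGR ndarray and returns a FaceRecognitionResult
    result = model.predict(cv2.imread(path))
    return np.asarray(result.embedding)  # assumed attribute name

anchor = embedding("face_0.jpg")
positive = embedding("face_1.jpg")
negative = embedding("face_2.jpg")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("Cosine 01:", cosine(anchor, positive))  # same identity, high score
print("Cosine 02:", cosine(anchor, negative))  # different identity, low score
```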