[DOC] fix dead url (#1598)

fix dead url
This commit is contained in:
CoolCola
2023-03-14 10:22:52 +08:00
committed by GitHub
parent 4ae772c2c7
commit 745d0018fa
24 changed files with 57 additions and 57 deletions


@@ -118,4 +118,4 @@ tar -xf PaddleLite-generic-demo.tar.gz
3. For deploying the YOLOv5 detection model on A311D, refer to: [C++ Deployment Example of YOLOv5 Detection Model on A311D](../../../examples/vision/detection/yolov5/a311d/README.md)
-4. For deploying the PP-LiteSeg segmentation model on A311D, refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on A311D](../../../examples/vision/segmentation/paddleseg/a311d/README.md)
+4. For deploying the PP-LiteSeg segmentation model on A311D, refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on A311D](../../../examples/vision/segmentation/paddleseg/amlogic/a311d/README.md)


@@ -118,4 +118,4 @@ tar -xf PaddleLite-generic-demo.tar.gz
3. For deploying the YOLOv5 detection model on RV1126, refer to: [C++ Deployment Example of YOLOv5 Detection Model on RV1126](../../../examples/vision/detection/yolov5/rv1126/README.md)
-4. For deploying the PP-LiteSeg segmentation model on RV1126, refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on RV1126](../../../examples/vision/segmentation/paddleseg/rv1126/README.md)
+4. For deploying the PP-LiteSeg segmentation model on RV1126, refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on RV1126](../../../examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md)
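The hunks in this commit all repoint relative links at files that moved into vendor subdirectories. A hypothetical helper like the following (names are illustrative, not part of this commit) can flag Markdown links whose on-disk targets no longer exist:

```python
import os
import re

# Matches plain [text](target) links; an optional #anchor is stripped from the target.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)(?:#[^)]*)?\)")

def broken_relative_links(md_path, text):
    """Return relative link targets in `text` that do not resolve from `md_path`."""
    base = os.path.dirname(md_path)
    broken = []
    for target in LINK_RE.findall(text):
        if target.startswith(("http://", "https://")):
            continue  # external URLs are not checked here
        if not os.path.exists(os.path.normpath(os.path.join(base, target))):
            broken.append(target)
    return broken
```

Run over a docs tree, this kind of check would have caught the dead `paddleseg/a311d` and `paddleseg/rv1126` paths before the directories were renamed out from under them.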


@@ -25,9 +25,9 @@ FastDeploy was tested on RK3588s; the test environment is as follows:
| Detection | [RKYOLOV5](../../../../examples/vision/detection/rkyolo/README.md) | YOLOV5-S-Relu(int8) | Yes | 57 |
| Detection | [RKYOLOX](../../../../examples/vision/detection/rkyolo/README.md) | yolox-s | Yes | 130 |
| Detection | [RKYOLOV7](../../../../examples/vision/detection/rkyolo/README.md) | yolov7-tiny | Yes | 58 |
-| Segmentation | [Unet](../../../../examples/vision/segmentation/paddleseg/rknpu2/README.md) | Unet-cityscapes | No | - |
+| Segmentation | [Unet](../../../../examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md) | Unet-cityscapes | No | - |
-| Segmentation | [PP-HumanSegV2Lite](../../../../examples/vision/segmentation/paddleseg/rknpu2/README.md) | portrait(int8) | Yes | 43 |
+| Segmentation | [PP-HumanSegV2Lite](../../../../examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md) | portrait(int8) | Yes | 43 |
-| Segmentation | [PP-HumanSegV2Lite](../../../../examples/vision/segmentation/paddleseg/rknpu2/README.md) | human(int8) | Yes | 43 |
+| Segmentation | [PP-HumanSegV2Lite](../../../../examples/vision/segmentation/paddleseg/rockchip/rknpu2/README.md) | human(int8) | Yes | 43 |
| Face Detection | [SCRFD](../../../../examples/vision/facedet/scrfd/rknpu2/README.md) | SCRFD-2.5G-kps-640(int8) | Yes | 42 |
| Face Recognition | [InsightFace](../../../../examples/vision/faceid/insightface/rknpu2/README_CN.md) | ms1mv3_arcface_r18(int8) | Yes | 12 |


@@ -105,4 +105,4 @@ For more details, please refer to: [Paddle Lite prepares the device environment]
3. For deploying YOLOv5 detection model on A311D, please refer to: [C++ Deployment Example of YOLOv5 Detection Model on A311D](../../../examples/vision/detection/yolov5/a311d/README.md)
-4. For deploying PP-LiteSeg segmentation model on A311D, please refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on A311D](../../../examples/vision/segmentation/paddleseg/a311d/README.md)
+4. For deploying PP-LiteSeg segmentation model on A311D, please refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on A311D](../../../examples/vision/segmentation/paddleseg/amlogic/a311d/README.md)


@@ -105,4 +105,4 @@ For more details, please refer to: [Paddle Lite prepares the device environment]
3. For deploying YOLOv5 detection model on RV1126, please refer to: [C++ Deployment Example of YOLOv5 Detection Model on RV1126](../../../examples/vision/detection/yolov5/rv1126/README.md)
-4. For deploying PP-LiteSeg segmentation model on RV1126, please refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on RV1126](../../../examples/vision/segmentation/paddleseg/rv1126/README.md)
+4. For deploying PP-LiteSeg segmentation model on RV1126, please refer to: [C++ Deployment Example of PP-LiteSeg Segmentation Model on RV1126](../../../examples/vision/segmentation/paddleseg/rockchip/rv1126/README.md)


@@ -35,7 +35,7 @@ sudo ./infer_tinypose_demo ./PP_TinyPose_256x192_infer ./hrnet_demo.jpg
</div>
The above command works only on Linux or MacOS. For how to use the SDK on Windows, refer to:
-- [How to Use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to Use the FastDeploy C++ SDK on Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
## PP-TinyPose C++ Interface
@@ -79,5 +79,5 @@ PP-TinyPose model loading and initialization, where model_file is the exported Paddle model format
- [Model Introduction](../../../)
- [Python Deployment](../../python)
-- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/)
+- [Vision Model Prediction Results](../../../../../../../docs/api/vision_results/)
- [How to Switch the Model Inference Backend](../../../../../../docs/cn/faq/how_to_change_backend.md)
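Most of the remaining hunks apply the same mechanical fix: the source file moved one directory deeper, so every relative link needs one more `../`. A hedged sketch of automating that rewrite (assuming only plain `[text](target)` links, no reference-style links; the function name is illustrative):

```python
import re

# Matches plain [text](target) Markdown links.
LINK = re.compile(r"(\[[^\]]*\])\(([^)]+)\)")

def deepen_relative_links(text, levels=1):
    """Prefix each relative Markdown link target with `levels` extra '../'."""
    def bump(match):
        label, target = match.group(1), match.group(2)
        if target.startswith(("http://", "https://", "/")):
            return match.group(0)  # absolute links are left untouched
        return "%s(%s%s)" % (label, "../" * levels, target)
    return LINK.sub(bump, text)
```

Applied to a file that moved from `paddleseg/cpp/` to `paddleseg/cpu-gpu/cpp/`, this turns `../../../../../docs/...` into the `../../../../../../docs/...` form seen throughout the diff.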


@@ -53,7 +53,7 @@ PP-TinyPose model loading and initialization, where model_file, params_file and config_
> **Returns**
>
-> > Returns the `fastdeploy.vision.KeyPointDetectionResult` struct. For details, see [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+> > Returns the `fastdeploy.vision.KeyPointDetectionResult` struct. For details, see [Vision Model Prediction Results](../../../../../../docs/api/vision_results/)
### Class Member Properties
#### Post-processing Parameters
@@ -66,5 +66,5 @@ PP-TinyPose model loading and initialization, where model_file, params_file and config_
- [PP-TinyPose Model Introduction](..)
- [PP-TinyPose C++ Deployment](../cpp)
-- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
+- [Model Prediction Result Description](../../../../../../docs/api/vision_results/)
-- [How to Switch the Model Inference Backend](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to Switch the Model Inference Backend](../../../../../../docs/cn/faq/how_to_change_backend.md)


@@ -1,3 +1,3 @@
-PaddleSeg Matting deployment examples, please refer to [document](../../segmentation/ppmatting/README_CN.md).
+PaddleSeg Matting deployment examples, please refer to [document](../../segmentation/ppmatting/README.md).
-For PaddleSeg Matting deployment examples, refer to the [document](../../segmentation/ppmatting/README_CN.md).
+For PaddleSeg Matting deployment examples, refer to the [document](../../segmentation/ppmatting/README.md).


@@ -40,7 +40,7 @@ wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_
```
The above command works for Linux or MacOS. For SDK in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
The visualized result after running is as follows


@@ -8,8 +8,8 @@
For software and hardware requirements and cross-compilation environment preparation, refer to: [FastDeploy](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
### Model Preparation
-1. Users can directly deploy the [quantized models provided by FastDeploy](../README_CN.md#晶晨a311d支持的paddleseg模型).
+1. Users can directly deploy the [quantized models provided by FastDeploy](../README.md#晶晨a311d支持的paddleseg模型).
-2. If FastDeploy does not provide a quantized model that meets the requirements, users can export or train a quantized model themselves by referring to [Exporting PaddleSeg dynamic-graph models as A311D-supported INT8 models](../README_CN.md#paddleseg动态图模型导出为a311d支持的int8模型).
+2. If FastDeploy does not provide a quantized model that meets the requirements, users can export or train a quantized model themselves by referring to [Exporting PaddleSeg dynamic-graph models as A311D-supported INT8 models](../README.md#paddleseg动态图模型导出为a311d支持的int8模型).
3. If the exported or trained model loses accuracy or reports errors, heterogeneous computing is needed to run some of the model's operators on the A311D's ARM CPU for debugging and accuracy verification; the file required for heterogeneous computing is subgraph.txt. For details, see [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md).
## Deploying the Quantized PP-LiteSeg Segmentation Model on A311D


@@ -5,8 +5,8 @@ This directory provides `infer.c` to finish the deployment of PaddleSeg on CPU/G
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
@@ -32,7 +32,7 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
```
The above command works for Linux or MacOS. For SDK in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../../docs/en/faq/use_sdk_on_windows.md)
The visualized result after running is as follows
@@ -154,7 +154,7 @@ FD_C_Bool FD_C_PaddleSegWrapperPredict(
> **Params**
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): Pointer to manipulate PaddleSeg object.
> * **img**(FD_C_Mat): pointer to cv::Mat object, which can be obtained by the FD_C_Imread interface
-> * **result**(FD_C_SegmentationResult*): Segmentation prediction results. Refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult
+> * **result**(FD_C_SegmentationResult*): Segmentation prediction results. Refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for SegmentationResult
#### Result
@@ -180,5 +180,5 @@ FD_C_Mat FD_C_VisSegmentation(FD_C_Mat im,
- [PPSegmentation Model Description](../../)
- [PaddleSeg Python Deployment](../python)
-- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [Model Prediction Results](../../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md)


@@ -5,8 +5,8 @@
Before deployment, confirm the following two steps
-- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code for your development environment; refer to [FastDeploy Precompiled Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code for your development environment; refer to [FastDeploy Precompiled Libraries](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, executing the following commands in this directory completes the compilation test. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
@@ -32,10 +32,10 @@ wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png
```
The above command works only on Linux or MacOS. For how to use the SDK on Windows, refer to:
-- [How to Use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to Use the FastDeploy C++ SDK on Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
If deploying with Huawei Ascend NPU, initialize the deployment environment as follows before deployment:
-- [How to Deploy with Huawei Ascend NPU](../../../../../docs/cn/faq/use_sdk_on_ascend.md)
+- [How to Deploy with Huawei Ascend NPU](../../../../../../docs/cn/faq/use_sdk_on_ascend.md)
The visualized result after running is shown below
@@ -155,7 +155,7 @@ FD_C_Bool FD_C_PaddleSegWrapperPredict(
> **Params**
> * **fd_c_ppseg_wrapper**(FD_C_PaddleSegWrapper*): pointer to the PaddleSeg model
> * **img**(FD_C_Mat): pointer to the input image (a cv::Mat object), which can be obtained via FD_C_Imread
-> * **result**(FD_C_SegmentationResult*): segmentation result; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult
+> * **result**(FD_C_SegmentationResult*): segmentation result; see [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for SegmentationResult
#### Predict Result
@@ -181,5 +181,5 @@ FD_C_Mat FD_C_VisSegmentation(FD_C_Mat im,
- [PPSegmentation Series Model Introduction](../../)
- [PaddleSeg Python Deployment](../python)
-- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
+- [Model Prediction Result Description](../../../../../../docs/api/vision_results/)
-- [How to Switch the Model Inference Backend](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to Switch the Model Inference Backend](../../../../../../docs/cn/faq/how_to_change_backend.md)


@@ -5,8 +5,8 @@ This directory provides `infer.cs` to finish the deployment of PaddleSeg on CPU/
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Please follow the instructions below to compile and test in Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
@@ -35,7 +35,7 @@ msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```
For more information about how to use the FastDeploy SDK to compile a project with Visual Studio 2019, please refer to:
-- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../docs/en/faq/use_sdk_on_windows.md)
+- [Using the FastDeploy C++ SDK on Windows Platform](../../../../../../docs/en/faq/use_sdk_on_windows.md)
## 4. Execute compiled program
@@ -93,12 +93,12 @@ fastdeploy.SegmentationResult Predict(OpenCvSharp.Mat im)
>>
> **Return**
>
->> * **result**: Segmentation prediction results, refer to [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for SegmentationResult
+>> * **result**: Segmentation prediction results, refer to [Vision Model Prediction Results](../../../../../../docs/api/vision_results/) for SegmentationResult
## Other Documents
- [PPSegmentation Model Description](../../)
- [PaddleSeg Python Deployment](../python)
-- [Model Prediction Results](../../../../../docs/api/vision_results/)
+- [Model Prediction Results](../../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../../docs/cn/faq/how_to_change_backend.md)


@@ -5,8 +5,8 @@
Before deployment, confirm the following two steps
-- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements; refer to [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code for your development environment; refer to [FastDeploy Precompiled Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code for your development environment; refer to [FastDeploy Precompiled Libraries](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
Executing the following commands in this directory completes the compilation test on Windows. FastDeploy version 1.0.4 or above (x.x.x>=1.0.4) is required to support this model.
@@ -35,8 +35,8 @@ msbuild infer_demo.sln /m:4 /p:Configuration=Release /p:Platform=x64
```
For more details on compiling with Visual Studio 2019 (sln projects, CMake projects, etc.), refer to the following documents:
-- [Using the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [Using the FastDeploy C++ SDK on Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
-- [Multiple Ways to Use the FastDeploy C++ Library on Windows](../../../../../docs/cn/faq/use_sdk_on_windows_build.md)
+- [Multiple Ways to Use the FastDeploy C++ Library on Windows](../../../../../../docs/cn/faq/use_sdk_on_windows_build.md)
## 4. Run the executable
@@ -98,5 +98,5 @@ fastdeploy.SegmentationResult Predict(OpenCvSharp.Mat im)
- [Model Introduction](../../)
- [Python Deployment](../python)
-- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
+- [Vision Model Prediction Results](../../../../../../docs/api/vision_results/)
-- [How to Switch the Model Inference Backend](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to Switch the Model Inference Backend](../../../../../../docs/cn/faq/how_to_change_backend.md)


@@ -28,7 +28,7 @@ PaddleSeg supports deploying Segmentation models on KunlunXin chips with FastDeploy
- [DeepLabV3 series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/deeplabv3/README.md)
- [SegFormer series models](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/configs/segformer/README.md)
->>**Note**: To deploy **PP-Matting** or **PP-HumanMatting** on Huawei Ascend, download the corresponding models from [Matting model deployment](../../ppmating/); the deployment process is the same as in this document
+>>**Note**: To deploy **PP-Matting** or **PP-HumanMatting** on Huawei Ascend, download the corresponding models from [Matting model deployment](../../../ppmating/); the deployment process is the same as in this document
## Prepare PaddleSeg Deployment Models
For exporting PaddleSeg models, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleSeg/blob/develop/docs/model_export_cn.md)


@@ -6,7 +6,7 @@
## Environment Preparation for Compiling FastDeploy for KunlunXin XPU
Before deployment, compile the KunlunXin XPU inference library yourself; refer to: [Compiling and Installing the KunlunXin XPU Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
->>**Note**: For the **PP-Matting** and **PP-HumanMatting** models, download them from [Matting model deployment](../../../matting)
+>>**Note**: For the **PP-Matting** and **PP-HumanMatting** models, download them from [Matting model deployment](../../../../matting)
```bash
# Download the deployment example code


@@ -22,6 +22,6 @@ The FastDeploy quantized-model deployment process is largely similar to that of FP32 models, except that the model
| Supported Hardware | | | |
|:----- | :-- | :-- | :-- |
-| [NVIDIA GPU](cpu-gpu) | [X86 CPU](cpu-gpu) | [Phytium CPU](cpu-gpu) | [ARM CPU](cpu-gpu) |
+| [NVIDIA GPU](../cpu-gpu) | [X86 CPU](../cpu-gpu) | [Phytium CPU](../cpu-gpu) | [ARM CPU](../cpu-gpu) |
-| [Intel GPU (discrete/integrated)](cpu-gpu) | [KunlunXin](kunlun) | [Ascend](ascend) | [Rockchip](rockchip) |
+| [Intel GPU (discrete/integrated)](../cpu-gpu) | [KunlunXin](../kunlun) | [Ascend](../ascend) | [Rockchip](../rockchip) |
-| [Amlogic](amlogic) | [Sophgo](sophgo) | | |
+| [Amlogic](../amlogic) | [Sophgo](../sophgo) | | |


@@ -12,11 +12,11 @@
## Convert the Model
-For the model conversion code, refer to the [model conversion document](../README_CN.md)
+For the model conversion code, refer to the [model conversion document](../README.md)
## Compile the SDK
-Refer to [Compiling the RKNPU2 Deployment Library](../../../../../../docs/cn/faq/rknpu2/build.md) to compile the SDK.
+Refer to [Compiling the RKNPU2 Deployment Library](../../../../../../../docs/cn/faq/rknpu2/build.md) to compile the SDK.
### Compile the example


@@ -32,7 +32,7 @@ On RKNPU, model input must use the NHWC format, and the image normalization operation
- [Overview of Deploying PaddleSeg Models with FastDeploy](..)
- [PaddleSeg C++ Deployment](../cpp)
-- [Document on Converting PaddleSeg Models to RKNN Models](../README_CN.md#准备paddleseg部署模型以及转换模型)
+- [Document on Converting PaddleSeg Models to RKNN Models](../README.md#准备paddleseg部署模型以及转换模型)
## FAQ
- [How to convert the prediction result (SegmentationResult) to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)


@@ -8,8 +8,8 @@
For software and hardware requirements and cross-compilation environment preparation, refer to: [Rockchip RV1126 Deployment Environment](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install#自行编译安装)
### Model Preparation
-1. Users can directly deploy the [quantized models provided by FastDeploy](../README_CN.md#瑞芯微-rv1126-支持的paddleseg模型).
+1. Users can directly deploy the [quantized models provided by FastDeploy](../README.md#瑞芯微-rv1126-支持的paddleseg模型).
-2. If FastDeploy does not provide a quantized model that meets the requirements, users can export or train a quantized model themselves by referring to [Exporting PaddleSeg dynamic-graph models as RV1126-supported INT8 models](../README_CN.md#paddleseg动态图模型导出为rv1126支持的int8模型).
+2. If FastDeploy does not provide a quantized model that meets the requirements, users can export or train a quantized model themselves by referring to [Exporting PaddleSeg dynamic-graph models as RV1126-supported INT8 models](../README.md#paddleseg动态图模型导出为rv1126支持的int8模型).
3. If the exported or trained model loses accuracy or reports errors, heterogeneous computing is needed to run some of the model's operators on the RV1126's ARM CPU for debugging and accuracy verification; the file required for heterogeneous computing is subgraph.txt. For details, see [Heterogeneous Computing](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/heterogeneous_computing_on_timvx_npu.md).
## Deploying the Quantized PP-LiteSeg Segmentation Model on RV1126


@@ -65,4 +65,4 @@ When the request is sent successfully, the results are returned in json format a
-The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.
+The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../../serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.


@@ -25,7 +25,7 @@
Refer to [Compiling the SOPHGO Deployment Library](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/sophgo.md) to compile the SDK; after compilation, a fastdeploy-sophgo directory is generated under the build directory. Copy fastdeploy-sophgo to the current directory
### Copy the model and config files to the model folder
-Convert the Paddle model to a SOPHGO bmodel; for the conversion steps, refer to the [document](../README_CN.md#将paddleseg推理模型转换为bmodel模型步骤)
+Convert the Paddle model to a SOPHGO bmodel; for the conversion steps, refer to the [document](../README.md#将paddleseg推理模型转换为bmodel模型步骤)
Copy the converted SOPHGO bmodel file into model
@@ -53,4 +53,4 @@ make
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)
- [Overview of Deploying PaddleSeg Models with FastDeploy](../../)
- [Python Deployment](../python)
-- [Model Conversion](../README_CN.md#将paddleseg推理模型转换为bmodel模型步骤)
+- [Model Conversion](../README.md#将paddleseg推理模型转换为bmodel模型步骤)


@@ -27,7 +27,7 @@ python3 infer.py --model_file ./bmodel/pp_liteseg_1684x_f32.bmodel --config_file
## Quick Links
- [pp_liteseg C++ Deployment](../cpp)
-- [Document on Converting the pp_liteseg SOPHGO Model](../README_CN.md#导出bmodel模型)
+- [Document on Converting the pp_liteseg SOPHGO Model](../README.md#导出bmodel模型)
## FAQ
- [How to convert the prediction result (SegmentationResult) to numpy format](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/faq/vision_result_related_problems.md)


@@ -45,7 +45,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
</div>
The above command works only on Linux or MacOS. For how to use the SDK on Windows, refer to:
-- [How to Use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to Use the FastDeploy C++ SDK on Windows](../../../../../../docs/cn/faq/use_sdk_on_windows.md)
## Quick Links
- [PaddleSeg C++ API documentation](https://www.paddlepaddle.org.cn/fastdeploy-api-doc/cpp/html/namespacefastdeploy_1_1vision_1_1segmentation.html)