[Doc] Fix dead links (#517)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* modified yolov7.cc

* README.md

* modify python files

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* file path modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* YOLOv7

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BGR to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug when the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix the wrong name of variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)

* Yolor (#16)

* Develop (#11) (#12)

* Develop (#13)

* documents

* Develop (#14)

* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* first commit for dead links

* fix docs dead links

* fix examples dead links

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>

ziqi-jin
2022-11-07 20:49:41 +08:00
committed by GitHub
parent 0bb40ab100
commit 3c0f4c19f9
32 changed files with 50 additions and 56 deletions

View File

@@ -2,7 +2,7 @@
 This directory helps to generate the Python API documents for FastDeploy.
-1. First, to generate the latest API documents, you need to install the latest FastDeploy; refer to [build and install](en/build_and_install) to build the FastDeploy Python wheel package with the latest code.
+1. First, to generate the latest API documents, you need to install the latest FastDeploy; refer to [build and install](../../cn/build_and_install) to build the FastDeploy Python wheel package with the latest code.
 2. After installing FastDeploy in your Python environment, a few dependencies still need to be installed; execute `pip install -r requirements.txt` in this directory.
 3. Execute `make html` to generate the API documents.
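Taken together, the steps this README lists amount to the following; a minimal sketch, assuming a FastDeploy wheel built from the latest code is already installed:

```bash
# Sketch of the doc-build steps described above, run from the API docs directory.
pip install -r requirements.txt  # install the documentation dependencies
make html                        # generate the HTML API documents
```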

View File

@@ -102,4 +102,4 @@ make install
 For how to use the FastDeploy Android C++ SDK, please refer to the usage examples:
 - [Image classification on Android](../../../examples/vision/classification/paddleclas/android/README.md)
 - [Object detection on Android](../../../examples/vision/detection/paddledetection/android/README.md)
-- [Using the FastDeploy C++ SDK via JNI on Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [Using the FastDeploy C++ SDK via JNI on Android](../../cn/faq/use_cpp_sdk_on_android.md)

View File

@@ -218,7 +218,7 @@ D:\qiuyanjun\fastdeploy_test\infer_ppyoloe\x64\Release\infer_ppyoloe.exe
 ![image](https://user-images.githubusercontent.com/31974251/192144782-79bccf8f-65d0-4f22-9f41-81751c530319.png)
 (2) The code of infer_ppyoloe.cpp can be copied directly from the code in examples:
-- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)
+- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)
 (3) CMakeLists.txt mainly configures the path of the FastDeploy C++ SDK; if it is the GPU version of the SDK, CUDA_DIRECTORY also needs to be set to the CUDA installation path. The CMakeLists.txt configuration is as follows:

View File

@@ -221,7 +221,7 @@ This section is for CMake users and describes how to create CMake projects in Vi
 ![image](https://user-images.githubusercontent.com/31974251/192144782-79bccf8f-65d0-4f22-9f41-81751c530319.png)
 (2) The code of infer_ppyoloe.cpp can be copied directly from the code in examples
-- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)
+- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)
 (3) CMakeLists.txt mainly includes the configuration of the FastDeploy C++ SDK path; if it is the GPU version of the SDK, you also need to configure CUDA_DIRECTORY as the CUDA installation path. The configuration of CMakeLists.txt is as follows

View File

@@ -27,7 +27,7 @@ FastDeploy基于PaddleSlim, 集成了一键模型量化的工具, 同时, FastDe
 ### Quantizing models with FastDeploy's one-click quantization tool
 Based on PaddleSlim, FastDeploy provides a one-click model quantization tool; please refer to the following document to quantize models.
-- [FastDeploy one-click model quantization](../../tools/quantization/)
+- [FastDeploy one-click model quantization](../../tools/auto_compression/)
 Once the quantized model is produced, it can be deployed with FastDeploy.

View File

@@ -168,4 +168,4 @@ entity: 华夏 label: LOC pos: [14, 15]
 ## Configuration changes
-By default, the classification task (ernie_seqcls_model/config.pbtxt) runs the OpenVINO engine on CPU, and the sequence labeling task runs the Paddle engine on GPU. To run on CPU/GPU or with other inference engines, modify the configuration; for details see the [configuration document](../../../../../serving/docs/zh_CN/model_configuration.md)
+By default, the classification task (ernie_seqcls_model/config.pbtxt) runs the OpenVINO engine on CPU, and the sequence labeling task runs the Paddle engine on GPU. To run on CPU/GPU or with other inference engines, modify the configuration; for details see the [configuration document](../../../../serving/docs/zh_CN/model_configuration.md)

View File

@@ -30,4 +30,4 @@ FastDeploy针对飞桨的视觉套件以及外部热门模型提供端到
 - Load the model
 - Call the `predict` interface
-When deploying vision models, FastDeploy also supports switching the backend inference engine with one click; for details see [How to switch the model inference engine](../../docs/runtime/how_to_change_backend.md).
+When deploying vision models, FastDeploy also supports switching the backend inference engine with one click; for details see [How to switch the model inference engine](../../docs/cn/faq/how_to_change_backend.md).

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the inference_cls.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the inference_cls.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
 ## Deployment example with the quantized ResNet50_Vd model
 Run the following commands in this directory to complete compilation and quantized model deployment.

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the inference_cls.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the inference_cls.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
 ## Deployment example with the quantized ResNet50_Vd model

View File

@@ -6,7 +6,7 @@
 ## Deploying image classification models in the frontend
-For the usage of the image classification model web demo, see the [**reference document**](../../../../examples/application/js/web_demo)
+For the usage of the image classification model web demo, see the [**reference document**](../../../../application/js/web_demo/)
 ## MobileNet js interface
@@ -34,4 +34,3 @@ console.log(res);
 - [PaddleClas model Python deployment](../../paddleclas/python/)
 - [PaddleClas model C++ deployment](../cpp/)

View File

@@ -4,8 +4,8 @@
 Before deployment, confirm the following two steps:
-- 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/environment.md)
-- 2. Download the prebuilt deployment library and samples code according to your development environment; refer to [FastDeploy prebuilt libraries](../../../../../docs/quick_start)
+- 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the prebuilt deployment library and samples code according to your development environment; refer to [FastDeploy prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 Taking ResNet50 inference on Linux as an example, run the following commands in this directory to complete compilation and testing.
@@ -33,7 +33,7 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
 ```
 The above commands only apply to Linux or macOS. For how to use the SDK on Windows, refer to:
-- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
 ## ResNet C++ interface
@@ -74,4 +74,4 @@ fastdeploy::vision::classification::ResNet(
 - [Model introduction](../../)
 - [Python deployment](../python)
 - [Vision model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend](../../../../../docs/runtime/how_to_change_backend.md)
+- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)
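For context, the compile-and-test flow this README walks through generally follows the FastDeploy C++ example pattern below; a hedged sketch, where the SDK archive version and URL are placeholders rather than values taken from the diff:

```bash
# Hypothetical build flow for the ResNet50 C++ demo on Linux.
# The SDK URL/version below are placeholders; see the prebuilt-libraries
# document linked above for the real download.
mkdir build && cd build
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.y.z.tgz  # placeholder version
tar xvf fastdeploy-linux-x64-x.y.z.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.y.z
make -j
```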

View File

@@ -2,8 +2,8 @@
 Before deployment, confirm the following two steps:
-- 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/environment.md)
-- 2. Install the FastDeploy Python whl package; refer to [FastDeploy Python installation](../../../../../docs/quick_start)
+- 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install the FastDeploy Python whl package; refer to [FastDeploy Python installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 This directory provides `infer.py` to quickly complete ResNet50_vd deployment on CPU/GPU, including GPU deployment accelerated by TensorRT. Run the following script:
@@ -69,4 +69,4 @@ fd.vision.classification.ResNet(model_file, params_file, runtime_option=None, mo
 - [ResNet model introduction](..)
 - [ResNet C++ deployment](../cpp)
 - [Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend](../../../../../docs/runtime/how_to_change_backend.md)
+- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -9,7 +9,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the infer_cfg.yml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the infer_cfg.yml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
 ## Deployment example with the quantized PP-YOLOE-l model
 Run the following commands in this directory to complete compilation and quantized model deployment.

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the infer_cfg.yml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the infer_cfg.yml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
 ## Deployment example with the quantized PP-YOLOE-l model

View File

@@ -9,7 +9,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
 ## Deployment example with the quantized YOLOv5s model
 Run the following commands in this directory to complete compilation and quantized model deployment.

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
 ## Deployment example with the quantized YOLOv5s model

View File

@@ -9,7 +9,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
 ## Deployment example with the quantized YOLOv6s model
 Run the following commands in this directory to complete compilation and quantized model deployment.

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
 ## Deployment example with the quantized YOLOv6s model
 ```bash

View File

@@ -4,8 +4,8 @@ English | [简体中文](README.md)
 Two steps before deployment:
-- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/docs_en/environment.md)
-- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/docs_en/quick_start)
+- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 This doc provides a quick `infer.py` demo of YOLOv7 deployment on CPU/GPU, and accelerated GPU deployment by TensorRT. Run the following command:
@@ -62,7 +62,7 @@ YOLOv7 model loading and initialisation, with model_file being the exported ONNX
 > **Return**
 >
-> > Return to `fastdeploy.vision.DetectionResult` Struct. For more details, please refer to [Vision Model Results](../../../../../docs/docs_en/api/vision_results/)
+> > Return to `fastdeploy.vision.DetectionResult` Struct. For more details, please refer to [Vision Model Results](../../../../../docs/api/vision_results/)
 ### Class Member Variables
@@ -80,5 +80,5 @@ Users can modify the following pre-processing parameters for their needs. This w
 - [YOLOv7 Model Introduction](..)
 - [YOLOv7 C++ Deployment](../cpp)
-- [Vision Model Results](../../../../../docs/docs_en/api/vision_results/)
-- [how to change inference backend](../../../../../docs/docs_en/runtime/how_to_change_inference_backend.md)
+- [Vision Model Results](../../../../../docs/api/vision_results/)
+- [how to change inference backend](../../../../../docs/en/faq/how_to_change_backend.md)
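As a usage note, the `infer.py` demo referenced in this README is typically invoked along these lines; the flags and file names are illustrative assumptions, not confirmed by the diff:

```bash
# Hypothetical invocation of the YOLOv7 Python demo on GPU with TensorRT enabled;
# model/image paths and flag names are illustrative placeholders.
python infer.py --model yolov7.onnx --image test.jpg --device gpu --use_trt True
```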

View File

@@ -9,7 +9,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
 ## Deployment example with the quantized YOLOv7 model
 Run the following commands in this directory to complete compilation and quantized model deployment.

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model.
 ## Deployment example with the quantized YOLOv7 model
 ```bash

View File

@@ -71,4 +71,4 @@ PPTinyPosePipeline模型加载和初始化其中det_model是使用`fd.vision.
 - [Pipeline model introduction](..)
 - [Pipeline C++ deployment](../cpp)
 - [Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend](../../../../../docs/runtime/how_to_change_backend.md)
+- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -76,4 +76,4 @@ PP-TinyPose模型加载和初始化其中model_file, params_file以及config_
 - [PP-TinyPose model introduction](..)
 - [PP-TinyPose C++ deployment](../cpp)
 - [Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend](../../../../../docs/runtime/how_to_change_backend.md)
+- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -7,7 +7,7 @@
 - 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 - 2. Download the prebuilt deployment library and samples code according to your development environment; refer to [FastDeploy prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-Taking PP-Matting inference on Linux as an example, run the following commands in this directory to complete compilation and testing. If you only need CPU deployment, download the CPU inference library from the [FastDeploy C++ prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md/CPP_prebuilt_libraries.md).
+Taking PP-Matting inference on Linux as an example, run the following commands in this directory to complete compilation and testing. If you only need CPU deployment, download the CPU inference library from the [FastDeploy C++ prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
 ```bash
 # Download the SDK and compile the model examples code (the SDK already contains the examples code)

View File

@@ -5,7 +5,7 @@
 - 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 - 2. Download the prebuilt deployment library and samples code according to your development environment; refer to [FastDeploy prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-Taking RobustVideoMatting inference on Linux as an example, run the following commands in this directory to complete compilation and testing. If you only need CPU deployment, download the CPU inference library from the [FastDeploy C++ prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md/CPP_prebuilt_libraries.md).
+Taking RobustVideoMatting inference on Linux as an example, run the following commands in this directory to complete compilation and testing. If you only need CPU deployment, download the CPU inference library from the [FastDeploy C++ prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
 This directory provides `infer.cc` to quickly complete RobustVideoMatting deployment on CPU/GPU, including GPU deployment accelerated by TensorRT. Run the following script:

View File

@@ -16,7 +16,7 @@ import * as ocr from "@paddle-js-models/ocr";
 await ocr.init(detConfig, recConfig);
 const res = await ocr.recognize(img, option, postConfig);
 ```
-OCR model loading and initialization, where the model is in Paddle.js format; for how to convert js models, see the [document](../../../../application/web_demo/README.md)
+OCR model loading and initialization, where the model is in Paddle.js format; for how to convert js models, see the [document](../../../../application/js/web_demo/README.md)
 **init function parameters**
@@ -37,5 +37,4 @@ ocr模型加载和初始化其中模型为Paddle.js模型格式js模型转
 - [PP-OCRv3 C++ deployment](../cpp)
 - [Model prediction results](../../../../../docs/api/vision_results/)
 - [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)
-- [PP-OCRv3 web demo document](../../../../application/web_demo/README.md)
+- [PP-OCRv3 web demo document](../../../../application/js/web_demo/README.md)

View File

@@ -16,7 +16,7 @@ import * as ocr from "@paddle-js-models/ocr";
 await ocr.init(detConfig, recConfig);
 const res = await ocr.recognize(img, option, postConfig);
 ```
-OCR model loading and initialization, where the model is in Paddle.js format; for how to convert js models, see the [document](../../../../application/web_demo/README.md)
+OCR model loading and initialization, where the model is in Paddle.js format; for how to convert js models, see the [document](../../../../application/js/web_demo/README.md)
 **init function parameters**

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the deploy.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the deploy.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
 ## Deployment example with the quantized PP_LiteSeg_T_STDC1_cityscapes model
 Run the following commands in this directory to complete compilation and quantized model deployment.

View File

@@ -8,7 +8,7 @@
 ### Quantized model preparation
 - 1. You can directly deploy the quantized models provided by FastDeploy.
-- 2. You can use FastDeploy's [one-click auto compression tool](../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the deploy.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+- 2. You can use FastDeploy's [one-click auto compression tool](../../../../../../tools/auto_compression/) to quantize a model yourself and deploy the resulting quantized model. (Note: inference with the quantized model still requires the deploy.yaml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
 ## Deployment example with the quantized PP_LiteSeg_T_STDC1_cityscapes model

View File

@@ -4,7 +4,7 @@
 - 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../../docs/cn/build_and_install/rknpu2.md)
-[Note] If you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, please refer to [Matting model deployment](../../../matting)
+[Note] If you are deploying **PP-Matting**, **PP-HumanMatting** or **ModNet**, please refer to [Matting model deployment](../../../../matting/)
 This directory provides `infer.py` to quickly complete PPHumanseg deployment on RKNPU. Run the following script:

View File

@@ -7,7 +7,7 @@
 ## Deploying the PP-Humanseg v1 model in the frontend
-For the deployment and usage of the PP-Humanseg v1 model web demo, see the [document](../../../../application/web_demo/README.md)
+For the deployment and usage of the PP-Humanseg v1 model web demo, see the [document](../../../../application/js/web_demo/README.md)
 ## PP-Humanseg v1 js interface
@@ -41,7 +41,3 @@ humanSeg.blurBackground(res)
 **drawHumanSeg() function parameters**
 > * **seg_values** (number[]): the input parameter, usually the result computed by the getGrayValue function

View File

@@ -7,7 +7,7 @@
 - 1. The hardware and software environment meets the requirements; refer to [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 - 2. Download the prebuilt deployment library and samples code according to your development environment; refer to [FastDeploy prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-Taking PP-Tracking inference on Linux as an example, run the following commands in this directory to complete compilation and testing. If you only need CPU deployment, download the CPU inference library from the [FastDeploy C++ prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md/CPP_prebuilt_libraries.md).
+Taking PP-Tracking inference on Linux as an example, run the following commands in this directory to complete compilation and testing. If you only need CPU deployment, download the CPU inference library from the [FastDeploy C++ prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md).
 ```bash
 # Download the SDK and compile the model examples code (the SDK already contains the examples code)