[Doc] Fix dead links (#584)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of the class

* Transfer some functions to private members of the class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug when the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix variable option.trt_max_shape wrong name

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)

* Yolor (#16)

* Develop (#11) (#12)

* Develop (#13)

* documents

* Develop (#14)

* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* deadlink check

* deadlink check

* deadlink check

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
Author: ziqi-jin
Date: 2022-11-14 18:44:33 +08:00 (committed by GitHub)
Parent: a36f5d3396
Commit: 57e5841d2e
15 changed files with 53 additions and 54 deletions

View File

@@ -39,7 +39,7 @@
 </div>
 - 🔥 **2022.11.8Release FastDeploy [release v0.6.0](https://github.com/PaddlePaddle/FastDeploy/tree/release/0.6.0)**
 - **🖥️ 服务端部署:支持推理速度更快的后端,支持更多的模型**
 - 优化 YOLO系列、PaddleClas、PaddleDetection 前后处理内存创建逻辑;
 - 融合视觉预处理操作优化PaddleClas、PaddleDetection预处理性能提升端到端推理性能
 - 服务化部署新增Clone接口支持降低Paddle Inference/TensorRT/OpenVINO后端在多实例下内存/显存的使用;
@@ -47,13 +47,13 @@
 - **📲 移动端和端侧部署移动端后端能力升级支持更多的CV模型**
 - 集成 RKNPU2 后端,并提供与 Paddle Inference、Paddle Inference TensorRT、TensorRT、OpenVINO、ONNX Runtime、Paddle Lite 等推理后端一致的开发体验;
 - 支持 [PP-HumanSeg](./examples/vision/segmentation/paddleseg/rknpu2)、[Unet](./examples/vision/segmentation/paddleseg/rknpu2)、[PicoDet](examples/vision/detection/paddledetection/rknpu2)、[SCRFD](./examples/vision/facedet/scrfd/rknpu2) 等在NPU高需求的特色模型。
 - [**more releases information**](./releases)
 ## 目录
 * <details open> <summary><b>📖 文档教程(点击可收缩)</b></summary><div>
 - 安装文档
 - [预编译库下载安装](docs/cn/build_and_install/download_prebuilt_libraries.md)
 - [GPU部署环境编译安装](docs/cn/build_and_install/gpu.md)
@@ -112,7 +112,7 @@
 ```bash
 pip install numpy opencv-python fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
 ```
-##### [Conda安装(推荐)](docs/quick_start/Python_prebuilt_wheels.md)
+##### [Conda安装(推荐)](docs/cn/build_and_install/download_prebuilt_libraries.md)
 ```bash
 conda config --add channels conda-forge && conda install cudatoolkit=11.2 cudnn=8.2
 ```
@@ -155,7 +155,7 @@ cv2.imwrite("vis_image.jpg", vis_im)
 <details>
 <summary><b>C++ SDK快速开始点开查看详情</b></summary><div>
 #### 安装
@@ -211,7 +211,7 @@ int main(int argc, char* argv[]) {
 |:----------------------:|:--------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------:|:-------:|:----------:|:-------:|:----------:|:-------:|:-------:|:-----------:|:-------------:|:-------------:|:-------:|
 | --- | --- | --- | X86 CPU | NVIDIA GPU | X86 CPU | NVIDIA GPU | X86 CPU | Arm CPU | AArch64 CPU | NVIDIA Jetson | Graphcore IPU | Serving |
 | Classification | [PaddleClas/ResNet50](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
-| Classification | [TorchVison/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| Classification | [TorchVison/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Classification | [ltralytics/YOLOv5Cls](examples/vision/classification/yolov5cls) | [Python](./examples/vision/classification/yolov5cls/python)/[C++](./examples/vision/classification/yolov5cls/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Classification | [PaddleClas/PP-LCNet](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
 | Classification | [PaddleClas/PP-LCNetv2](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
@@ -243,9 +243,9 @@ int main(int argc, char* argv[]) {
 | Detection | [WongKinYiu/ScaledYOLOv4](./examples/vision/detection/scaledyolov4) | [Python](./examples/vision/detection/scaledyolov4/python)/[C++](./examples/vision/detection/scaledyolov4/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Detection | [ppogg/YOLOv5Lite](./examples/vision/detection/yolov5lite) | [Python](./examples/vision/detection/yolov5lite/python)/[C++](./examples/vision/detection/yolov5lite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Detection | [RangiLyu/NanoDetPlus](./examples/vision/detection/nanodet_plus) | [Python](./examples/vision/detection/nanodet_plus/python)/[C++](./examples/vision/detection/nanodet_plus/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | KeyPoint | [PaddleDetection/PicoDet + TinyPose](./examples/vision/keypointdetection/det_keypoint_unite) | [Python](./examples/vision/keypointdetection/det_keypoint_unite/python)/[C++](./examples/vision/keypointdetection/det_keypoint_unite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./xamples/vision/headpose/fsanet/python)/[C++](./xamples/vision/headpose/fsanet/cpp/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./examples/vision/headpose/fsanet/python)/[C++](./examples/vision/headpose/fsanet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Tracking | [PaddleDetection/PP-Tracking](examples/vision/tracking/pptracking) | [Python](examples/vision/tracking/pptracking/python)/[C++](examples/vision/tracking/pptracking/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | OCR | [PaddleOCR/PP-OCRv2](./examples/vision/ocr) | [Python](./examples/vision/detection/nanodet_plus/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | OCR | [PaddleOCR/PP-OCRv3](./examples/vision/ocr) | [Python](./examples/vision/ocr/PP-OCRv3/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
@@ -331,12 +331,12 @@ int main(int argc, char* argv[]) {
 | OCR | [PaddleOCR/PP-OCRv2](examples/vision/ocr/PP-OCRv2) | 2.3+4.4 | ✅ | ❔ | ❔ | ❔ | -- | -- | -- | -- |
 | OCR | [PaddleOCR/PP-OCRv3](examples/vision/ocr/PP-OCRv3) | 2.4+10.6 | ✅ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | -- |
 | OCR | PaddleOCR/PP-OCRv3-tiny | 2.4+10.7 | ❔ | ❔ | ❔ | ❔ | -- | -- | -- | -- |
 ## 🌐 Web和小程序部署
 <div id="fastdeploy-web-models"></div>
 | 任务场景 | 模型 | [web_demo](examples/application/js/web_demo) |
 |:------------------:|:-------------------------------------------------------------------------------------------:|:--------------------------------------------:|
 | --- | --- | [Paddle.js](examples/application/js) |
@@ -346,7 +346,7 @@ int main(int argc, char* argv[]) {
 | Object Recognition | [GestureRecognition](examples/application/js/web_demo/src/pages/cv/recognition) | ✅ |
 | Object Recognition | [ItemIdentification](examples/application/js/web_demo/src/pages/cv/recognition) | ✅ |
 | OCR | [PaddleOCR/PP-OCRv3](./examples/application/js/web_demo/src/pages/cv/ocr) | ✅ |
 <div id="fastdeploy-community"></div>
 ## 社区交流
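
The quick-start section touched by the hunks above is a Python detection demo; its last two lines (`vis_detection` and `cv2.imwrite`) are visible in the hunk context. As background, a minimal sketch of that flow, assuming a PP-YOLOE model exported as `model.pdmodel`/`model.pdiparams` with an `infer_cfg.yml` (the file paths here are illustrative, not from this diff):

```python
import cv2
import fastdeploy.vision as vision

# Placeholder paths; the README's example downloads a PP-YOLOE detection model.
model = vision.detection.PPYOLOE("model.pdmodel", "model.pdiparams",
                                 "infer_cfg.yml")

im = cv2.imread("test.jpg")
result = model.predict(im)

# These two lines appear verbatim in the diff context above.
vis_im = vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("vis_image.jpg", vis_im)
```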

View File

@@ -38,7 +38,7 @@ Including image classification, object detection, image segmentation, face detec
 <div align="center">
 <img src="https://user-images.githubusercontent.com/54695910/200145290-d5565d18-6707-4a0b-a9af-85fd36d35d13.jpg" width = "120" height = "120" />
 </div>
 - 🔥 **2022.11.8Release FastDeploy [release v0.6.0](https://github.com/PaddlePaddle/FastDeploy/tree/release/0.6.0)** <br>
 - **🖥️ Server-side and Cloud Deployment: Support more backend, Support more CV models**
 - Optimize preprocessing and postprocessing memory creation logic on YOLO series, PaddleClas, PaddleDetection;
@@ -54,7 +54,7 @@ Including image classification, object detection, image segmentation, face detec
 ## Contents
 * <details open><summary><b>📖 Tutorialsclick to fold</b></summary><div>
 - Install
 - [How to Install FastDeploy Prebuilt Libraries](docs/en/build_and_install/download_prebuilt_libraries.md)
 - [How to Build and Install FastDeploy Library on GPU Platform](docs/en/build_and_install/gpu.md)
@@ -158,7 +158,7 @@ vis_im = vision.vis_detection(im, result, score_threshold=0.5)
 cv2.imwrite("vis_image.jpg", vis_im)
 ```
 </div></details>
 <div id="fastdeploy-quick-start-cpp"></div>
 <details>
@@ -213,13 +213,13 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
 <div align="center">
 <img src="https://user-images.githubusercontent.com/54695910/198620704-741523c1-dec7-44e5-9f2b-29ddd9997344.png" />
 </div>
 | Task | Model | API | Linux | Linux | Win | Win | Mac | Mac | Linux | Linux | Linux | Linux |
 |:-----------------------------:|:---------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------:|:---------------------:|:------------------------:|:------------------------:|:------------------------:|:-----------------------:|:---------------------:|:--------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|
 | --- | --- | --- | <font size=2> X86 CPU | <font size=2> NVIDIA GPU | <font size=2> Intel CPU | <font size=2> NVIDIA GPU | <font size=2> Intel CPU | <font size=2> Arm CPU | <font size=2> AArch64 CPU | <font size=2> NVIDIA Jetson | <font size=2> Graphcore IPU | Serving|
 | Classification | [PaddleClas/ResNet50](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
-| Classification | [TorchVison/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| Classification | [TorchVison/ResNet](examples/vision/classification/resnet) | [Python](./examples/vision/classification/resnet/python)/[C++](./examples/vision/classification/resnet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Classification | [ltralytics/YOLOv5Cls](examples/vision/classification/yolov5cls) | [Python](./examples/vision/classification/yolov5cls/python)/[C++](./examples/vision/classification/yolov5cls/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Classification | [PaddleClas/PP-LCNet](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
 | Classification | [PaddleClas/PP-LCNetv2](./examples/vision/classification/paddleclas) | [Python](./examples/vision/classification/paddleclas/python)/[C++](./examples/vision/classification/paddleclas/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ |
@@ -251,9 +251,9 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
 | Detection | [WongKinYiu/ScaledYOLOv4](./examples/vision/detection/scaledyolov4) | [Python](./examples/vision/detection/scaledyolov4/python)/[C++](./examples/vision/detection/scaledyolov4/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Detection | [ppogg/YOLOv5Lite](./examples/vision/detection/yolov5lite) | [Python](./examples/vision/detection/yolov5lite/python)/[C++](./examples/vision/detection/yolov5lite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Detection | [RangiLyu/NanoDetPlus](./examples/vision/detection/nanodet_plus) | [Python](./examples/vision/detection/nanodet_plus/python)/[C++](./examples/vision/detection/nanodet_plus/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/python/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| KeyPoint | [PaddleDetection/TinyPose](./examples/vision/keypointdetection/tiny_pose) | [Python](./examples/vision/keypointdetection/tiny_pose/python)/[C++](./examples/vision/keypointdetection/tiny_pose/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | KeyPoint | [PaddleDetection/PicoDet + TinyPose](./examples/vision/keypointdetection/det_keypoint_unite) | [Python](./examples/vision/keypointdetection/det_keypoint_unite/python)/[C++](./examples/vision/keypointdetection/det_keypoint_unite/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
-| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./xamples/vision/headpose/fsanet/python)/[C++](./xamples/vision/headpose/fsanet/cpp/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
+| HeadPose | [omasaht/headpose](examples/vision/headpose) | [Python](./examples/vision/headpose/fsanet/python)/[C++](./examples/vision/headpose/fsanet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | Tracking | [PaddleDetection/PP-Tracking](examples/vision/tracking/pptracking) | [Python](examples/vision/tracking/pptracking/python)/[C++](examples/vision/tracking/pptracking/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | OCR | [PaddleOCR/PP-OCRv2](./examples/vision/ocr) | [Python](./examples/vision/detection/nanodet_plus/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | OCR | [PaddleOCR/PP-OCRv3](./examples/vision/ocr) | [Python](./examples/vision/ocr/PP-OCRv3/python)/[C++](./examples/vision/ocr/PP-OCRv3/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
@@ -280,21 +280,21 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
 | Information Extraction | [PaddleNLP/UIE](./examples/text/uie) | [Python](./examples/text/uie/python)/[C++](./examples/text/uie/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❔ | ❔ |
 | NLP | [PaddleNLP/ERNIE-3.0](./examples/text/ernie-3.0) | Python/C++ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ✅ |
 | Speech | [PaddleSpeech/PP-TTS](./examples/text/uie) | [Python](examples/audio/pp-tts/python)/C++ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | -- | ✅ |
 <div id="fastdeploy-edge-doc"></div>
 ## 📱 Mobile and Edge Device Deployment
 <div id="fastdeploy-edge-sdk-npu"></div>
 ### Paddle Lite NPU Deployment
 - [Rockchip-NPU / Amlogic-NPU / NXP-NPU](./examples/vision/detection/paddledetection/rk1126)
 <div id="fastdeploy-edge-models"></div>
 ### Mobile and Edge Model List 🔥🔥🔥🔥
 <div align="center">
@@ -340,11 +340,11 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
 | OCR | [PaddleOCR/PP-OCRv3](examples/vision/ocr/PP-OCRv3) | 2.4+10.6 | ✅ | ❔ | ❔ | ❔ | ❔ | ❔ | ❔ | -- |
 | OCR | PaddleOCR/PP-OCRv3-tiny | 2.4+10.7 | ❔ | ❔ | ❔ | ❔ | -- | -- | -- | -- |
 ## 🌐 Browser-based Model List
 <div id="fastdeploy-web-models"></div>
 | Task | Model | [web_demo](examples/application/js/web_demo) |
 |:------------------:|:-------------------------------------------------------------------------------------------:|:--------------------------------------------:|
 | --- | --- | [Paddle.js](examples/application/js) |
@@ -355,7 +355,7 @@ Notes: ✅: already supported; ❔: to be supported in the future; N/A: Not Ava
 | Object Recognition | [ItemIdentification](examples/application/js/web_demo/src/pages/cv/recognition) | ✅ |
 | OCR | [PaddleOCR/PP-OCRv3](./examples/application/js/web_demo/src/pages/cv/ocr) | ✅ |
 ## Community
 <div id="fastdeploy-community"></div>

View File

@@ -18,14 +18,14 @@ Currently, FastDeploy supported backends listed as below,
 - [C++ examples](./)
 ### Related APIs
-- [RuntimeOption](./structfastdeploy_1_1RuntimeOption.html)
-- [Runtime](./structfastdeploy_1_1Runtime.html)
+- [RuntimeOption](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/structfastdeploy_1_1RuntimeOption.html)
+- [Runtime](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/structfastdeploy_1_1Runtime.html)
 ## Vision Models
 | Task | Model | API | Example |
 | :---- | :---- | :---- | :----- |
-| object detection | PaddleDetection/PPYOLOE | [fastdeploy::vision::detection::PPYOLOE](./classfastdeploy_1_1vision_1_1detection_1_1PPYOLOE.html) | [C++](./)/[Python](./) |
-| keypoint detection | PaddleDetection/PPTinyPose | [fastdeploy::vision::keypointdetection::PPTinyPose](./classfastdeploy_1_1vision_1_1keypointdetection_1_1PPTinyPose.html) | [C++](./)/[Python](./) |
-| image classification | PaddleClassification serials | [fastdeploy::vision::classification::PaddleClasModel](./classfastdeploy_1_1vision_1_1classification_1_1PaddleClasModel.html) | [C++](./)/[Python](./) |
-| semantic segmentation | PaddleSegmentation serials | [fastdeploy::vision::classification::PaddleSegModel](./classfastdeploy_1_1vision_1_1segmentation_1_1PaddleSegModel.html) | [C++](./)/[Python](./) |
+| object detection | PaddleDetection/PPYOLOE | [fastdeploy::vision::detection::PPYOLOE](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1vision_1_1detection_1_1PPYOLOE.html) | [C++](./)/[Python](./) |
+| keypoint detection | PaddleDetection/PPTinyPose | [fastdeploy::vision::keypointdetection::PPTinyPose](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1pipeline_1_1PPTinyPose.html) | [C++](./)/[Python](./) |
+| image classification | PaddleClassification serials | [fastdeploy::vision::classification::PaddleClasModel](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1vision_1_1classification_1_1PaddleClasModel.html) | [C++](./)/[Python](./) |
+| semantic segmentation | PaddleSegmentation serials | [fastdeploy::vision::classification::PaddleSegModel](https://baidu-paddle.github.io/fastdeploy-api/cpp/html/classfastdeploy_1_1vision_1_1segmentation_1_1PaddleSegModel.html) | [C++](./)/[Python](./) |

View File

@@ -2,8 +2,8 @@
 在运行demo前需确认以下两个步骤
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. 根据开发环境下载预编译部署库和samples代码参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境下载预编译部署库和samples代码参考[FastDeploy预编译库](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 本文档以 PaddleClas 分类模型 MobileNetV2 为例展示CPU上的推理示例
@@ -113,9 +113,9 @@
 source /Path/to/fastdeploy_cpp_sdk/fastdeploy_init.sh
 ```
-本示例代码在各平台(Windows/Linux/Mac)上通用,但编译过程仅支持(Linux/Mac)Windows上使用msbuild进行编译具体使用方式参考[Windows平台使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+本示例代码在各平台(Windows/Linux/Mac)上通用,但编译过程仅支持(Linux/Mac)Windows上使用msbuild进行编译具体使用方式参考[Windows平台使用FastDeploy C++ SDK](../../../docs/cn/faq/use_sdk_on_windows.md)
 ## 其它文档
 - [Runtime Python 示例](../python)
-- [切换模型推理的硬件和后端](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [切换模型推理的硬件和后端](../../../docs/cn/faq/how_to_change_backend.md)

View File

@@ -2,8 +2,8 @@
 在运行demo前需确认以下两个步骤
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装参考[FastDeploy Python安装](../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
 本文档以 PaddleClas 分类模型 MobileNetV2 为例展示 CPU 上的推理示例
@@ -50,4 +50,4 @@ print(results[0].shape)
 ## 其它文档
 - [Runtime C++ 示例](../cpp)
-- [切换模型推理的硬件和后端](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [切换模型推理的硬件和后端](../../../docs/cn/faq/how_to_change_backend.md)
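
The two hunks above only shorten relative doc links in the Runtime Python example; the example itself ends with the `print(results[0].shape)` line shown in the second hunk header. A minimal sketch of what that example does, assuming the `Runtime`/`RuntimeOption` Python API referenced in the C++ README above (model paths and the dummy input shape are illustrative):

```python
import numpy as np
import fastdeploy as fd

# Placeholder paths; the doc uses a PaddleClas MobileNetV2 inference model.
option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")
option.use_cpu()  # the doc demonstrates CPU inference

runtime = fd.Runtime(option)

# Feed a dummy NCHW float32 tensor; the input name comes from the model itself.
input_name = runtime.get_input_info(0).name
results = runtime.infer(
    {input_name: np.random.rand(1, 3, 224, 224).astype("float32")})
print(results[0].shape)
```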

View File

@@ -466,8 +466,7 @@ void Predict(
 **参数**
 > * **texts**(list(str)): 文本列表
-> * **results**(list(dict())): UIE模型抽取结果。UIEResult结构详细可见[UIEResult说明](../../../../docs/api/text_results/uie_result.md)。
+> * **results**(list(dict())): UIE模型抽取结果。
 ## 相关文档
 [UIE模型详细介绍](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md)

View File

@@ -375,7 +375,7 @@ UIEModel模型加载和初始化其中`model_file`, `params_file`为训练模
 > > * **return_dict**(bool): 是否以字典形式输出UIE结果默认为False。
 > **返回**
 > >
-> > 返回`dict(str, list(fastdeploy.text.C.UIEResult))`, 详细可见[UIEResult说明](../../../../docs/api/text_results/uie_result.md)
+> > 返回`dict(str, list(fastdeploy.text.C.UIEResult))`。
 ## 相关文档
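
Both UIE hunks trim a dead link to the UIEResult notes without changing the documented API, so the call shape stays as described: `predict` with `return_dict=True` returns `dict(str, list(fastdeploy.text.C.UIEResult))`. A hedged usage sketch, assuming the `UIEModel` constructor documented around this hunk (file paths and the schema are illustrative):

```python
from fastdeploy.text import UIEModel

# Placeholder model files and extraction schema.
model = UIEModel("uie-base/inference.pdmodel",
                 "uie-base/inference.pdiparams",
                 "uie-base/vocab.txt",
                 position_prob=0.5,
                 max_length=128,
                 schema=["时间", "选手"])

# return_dict=True yields dict(str, list(fastdeploy.text.C.UIEResult)),
# as documented above.
results = model.predict(["2月8日上午北京冬奥会自由式滑雪女子大跳台决赛中中国选手谷爱凌夺冠"],
                        return_dict=True)
print(results)
```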

View File

@@ -118,7 +118,7 @@ Paddle-Lite-Demo/object_detection/linux/picodet_detection/run.sh
 ## 代码讲解 (使用 Paddle Lite `C++ API` 执行预测)
-ARMLinux 示例基于 C++ API 开发,调用 Paddle Lite `C++s API` 包括以下五步。更详细的 `API` 描述参考:[Paddle Lite C++ API ](https://paddle-lite.readthedocs.io/zh/latest/api_reference/c++_api_doc.html)。
+ARMLinux 示例基于 C++ API 开发,调用 Paddle Lite `C++s API` 包括以下五步。更详细的 `API` 描述参考:[Paddle Lite C++ API ](https://paddle-lite.readthedocs.io/zh/latest/api_reference/cxx_api_doc.html)。
 ```c++
 #include <iostream>
@@ -198,7 +198,7 @@ export LD_LIBRARY_PATH=../Paddle-Lite/libs/$TARGET_ABI/
 export GLOG_v=0 # Paddle-Lite 日志等级
 export VSI_NN_LOG_LEVEL=0 # TIM-VX 日志等级
 export VIV_VX_ENABLE_GRAPH_TRANSFORM=-pcq:1 # NPU 开启 perchannel 量化模型
 export VIV_VX_SET_PER_CHANNEL_ENTROPY=100 # 同上
 build/object_detection_demo models/picodetv2_relu6_coco_no_fuse ../../assets/labels/coco_label_list.txt models/picodetv2_relu6_coco_no_fuse/subgraph.txt models/picodetv2_relu6_coco_no_fuse/picodet.yml # 执行 Demo 程序4个 arg 分别为:模型、 label 文件、 自定义异构配置、 yaml
 ```
@@ -206,7 +206,7 @@ build/object_detection_demo models/picodetv2_relu6_coco_no_fuse ../../assets/lab
 ```shell
 # 代码文件 `object_detection_demo/rush.sh`
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${PADDLE_LITE_DIR}/libs/${TARGET_ARCH_ABI}
 build/object_detection_demo {模型} {label} {自定义异构配置文件} {yaml}
 ```

View File

@@ -6,7 +6,7 @@ Users can use the one-click model quantization tool to quantize and deploy the m
 ## FastDeploy One-Click Model Quantization Tool
 FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
-For a detailed tutorial, please refer to: [One-Click Model Quantization Tool](... /... /... /... /... /... /... /tools/quantization/)
+For a detailed tutorial, please refer to: [One-Click Model Quantization Tool](../../../../../tools/auto_compression/)
 ## Download Quantized YOLOv5s Model

View File

@@ -6,7 +6,7 @@ Users can use the one-click model quantization tool to quantize and deploy the m
 ## FastDeploy One-Click Model Quantization Tool
 FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
-For detailed tutorial, please refer to : [One-Click Model Quantization Tool](... /... /... /... /... /... /... /tools/quantization/)
+For detailed tutorial, please refer to : [One-Click Model Quantization Tool](../../../../../tools/auto_compression/)
 ## Download Quantized YOLOv6s Model

View File

@@ -6,7 +6,7 @@ Users can use the one-click model quantization tool to quantize and deploy the m
 ## FastDeploy One-Click Model Quantization Tool
 FastDeploy provides a one-click quantization tool that allows users to quantize a model simply with a configuration file.
-For detailed tutorial, please refer to : [One-Click Model Quantization Tool](... /... /... /... /... /... /... /tools/quantization/)
+For detailed tutorial, please refer to : [One-Click Model Quantization Tool](../../../../../tools/auto_compression/)
 ## Download Quantized YOLOv7 Model

View File

@@ -37,4 +37,4 @@ ocr模型加载和初始化其中模型为Paddle.js模型格式js模型转
 - [PP-OCRv3 C++部署](../cpp)
 - [模型预测结果说明](../../../../../docs/api/vision_results/)
 - [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
-- [PP-OCRv3 微信小程序部署文档](../../../../application/web_demo/examples/ocrXcx/)
+- [PP-OCRv3 微信小程序部署文档](../mini_program/)

View File

@@ -1,6 +1,6 @@
 # Model Repository
-FastDeploy starts the serving by specifying one or more models in the model repository to deploy the service. When the serving is running, the models in the service can be modified following [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/model_management.md), and obtain serving from one or more model repositories specified at the serving initiation.
+FastDeploy starts the serving by specifying one or more models in the model repository to deploy the service. When the serving is running, the models in the service can be modified following [Model Management](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_management.md), and obtain serving from one or more model repositories specified at the serving initiation.
 ## Repository Architecture
@@ -39,7 +39,7 @@ Paddle models are saved in the version number subdirectory, which must be `model
 ## Model Version
-Each model can have one or more versions available in the repository. The subdirectory named with a number in the model directory implies the version number. Subdirectories that are not named with a number, or that start with *0* will be ignored. A [version policy](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#version-policy) can be specified in the model configuration file to control which version of the model in model directory is launched by Triton.
+Each model can have one or more versions available in the repository. The subdirectory named with a number in the model directory implies the version number. Subdirectories that are not named with a number, or that start with *0* will be ignored. A [version policy](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#version-policy) can be specified in the model configuration file to control which version of the model in model directory is launched by Triton.
 ## Repository Demo

View File

@@ -2,7 +2,7 @@
 模型存储库中的每个模型都必须包含一个模型配置,该配置提供了关于模型的必要和可选信息。这些配置信息一般写在 *config.pbtxt* 文件中,[ModelConfig protobuf](https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto)格式。
 ## 模型通用最小配置
-详细的模型通用配置请看官网文档: [model_configuration](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md).Triton的最小模型配置必须包括: *platform**backend* 属性、*max_batch_size* 属性和模型的输入输出.
+详细的模型通用配置请看官网文档: [model_configuration](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md).Triton的最小模型配置必须包括: *platform**backend* 属性、*max_batch_size* 属性和模型的输入输出.
 例如一个Paddle模型有两个输入*input0* 和 *input1*,一个输出*output0*输入输出都是float32类型的tensor最大batch为8.则最小的配置如下:
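
The minimal configuration that the last context line introduces is cut off by the hunk boundary. For reference, a sketch of such a *config.pbtxt* for the described model (two float32 inputs `input0`/`input1`, one output `output0`, max batch 8); the `backend` name and the `dims` values are assumptions, since the hunk shows neither:

```
backend: "fastdeploy"   # assumed; the doc only requires a platform or backend attribute
max_batch_size: 8
input [
  {
    name: "input0"
    data_type: TYPE_FP32
    dims: [ 16 ]        # placeholder shape
  },
  {
    name: "input1"
    data_type: TYPE_FP32
    dims: [ 16 ]        # placeholder shape
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ 16 ]        # placeholder shape
  }
]
```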

View File

@@ -1,6 +1,6 @@
 # 模型仓库(Model Repository)
-FastDeploy启动服务时指定模型仓库中一个或多个模型部署服务。当服务运行时可以用[Model Management](https://github.com/triton-inference-server/server/blob/main/docs/model_management.md)中描述的方式修改服务中的模型。
+FastDeploy启动服务时指定模型仓库中一个或多个模型部署服务。当服务运行时可以用[Model Management](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_management.md)中描述的方式修改服务中的模型。
 从服务器启动时指定的一个或多个模型存储库中为模型提供服务
 ## 仓库结构
@@ -36,7 +36,7 @@ $ fastdeploy --model-repository=<model-repository-path>
 Paddle模型存在版本号子目录中必须为`model.pdmodel`文件和`model.pdiparams`文件。
 ## 模型版本
-每个模型在仓库中可以有一个或多个可用的版本,模型目录中以数字命名的子目录就是对应的版本,数字即版本号。没有以数字命名的子目录,或以*0*开头的子目录都会被忽略。模型配置文件中可以指定[版本策略](https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md#version-policy)控制Triton启动模型目录中的哪个版本。
+每个模型在仓库中可以有一个或多个可用的版本,模型目录中以数字命名的子目录就是对应的版本,数字即版本号。没有以数字命名的子目录,或以*0*开头的子目录都会被忽略。模型配置文件中可以指定[版本策略](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md#version-policy)控制Triton启动模型目录中的哪个版本。
 ## 模型仓库示例
 部署Paddle模型时需要的模型必须是2.0版本以上导出的推理模型,模型包含`model.pdmodel``model.pdiparams`两个文件放在版本目录中。
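
To make the layout described above concrete, a sketch of a repository holding one Paddle model, following the rules in this doc (numeric version subdirectory containing `model.pdmodel` and `model.pdiparams`; the model name is illustrative):

```
models/                      # 启动时通过 --model-repository=models 指定
└── ppyoloe/                 # 模型名(示例)
    ├── config.pbtxt         # 模型配置
    └── 1/                   # 数字命名的版本子目录
        ├── model.pdmodel
        └── model.pdiparams
```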