Modify docs for PP-Matting, change VisMattingAlpha to VisMatting, fix Squeeze function. (#354)

* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private members of the class

* Transfer some functions to private members of the class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix the wrong name of the variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix the wrong name of the variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* Yolor (#16)

* Develop (#11) (#12)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix the wrong name of the variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* Develop (#13)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix the wrong name of the variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (#14)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix the wrong name of the variable option.trt_max_shape

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>

* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* modify PPMatting to PP-Matting, fix Squeeze, modify VisMattingAlpha to VisMatting

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
Author: ziqi-jin
Date: 2022-10-12 20:36:57 +08:00
Committed by: GitHub
Parent: e6f8bd90cf
Commit: c5446670c9
10 changed files with 45 additions and 43 deletions

View File

@@ -204,8 +204,8 @@ int main(int argc, char* argv[]) {
| <font size=2> Face Recognition | <font size=2> [insightface/PartialFC](./examples/vision/faceid/insightface) | <font size=2> [Python](./examples/vision/faceid/insightface/python)/[C++](./examples/vision/faceid/insightface/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| <font size=2> Face Recognition | <font size=2> [insightface/VPL](./examples/vision/faceid/insightface) | <font size=2> [Python](./examples/vision/faceid/insightface/python)/[C++](./examples/vision/faceid/insightface/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| <font size=2> Matting | <font size=2> [ZHKKKe/MODNet](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/modnet/python)/[C++](./examples/vision/matting/modnet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
-| <font size=2> Matting | <font size=2> [PaddleSeg/PPMatting](./examples/vision/matting/ppmatting) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
-| <font size=2> Matting | <font size=2> [PaddleSeg/PPHumanMatting](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
+| <font size=2> Matting | <font size=2> [PaddleSeg/PP-Matting](./examples/vision/matting/ppmatting) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
+| <font size=2> Matting | <font size=2> [PaddleSeg/PP-HumanMatting](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
| <font size=2> Matting | <font size=2> [PaddleSeg/ModNet](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
| <font size=2> Information Extraction | <font size=2> [PaddleNLP/UIE](./examples/text/uie) | <font size=2> [Python](./examples/text/uie/python)/[C++](./examples/text/uie/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|

View File

@@ -218,8 +218,8 @@ Notes:
| <font size=2> Face Recognition | <font size=2> [insightface/PartialFC](./examples/vision/faceid/insightface) | <font size=2> [Python](./examples/vision/faceid/insightface/python)/[C++](./examples/vision/faceid/insightface/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| <font size=2> Face Recognition | <font size=2> [insightface/VPL](./examples/vision/faceid/insightface) | <font size=2> [Python](./examples/vision/faceid/insightface/python)/[C++](./examples/vision/faceid/insightface/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| <font size=2> Matting | <font size=2> [ZHKKKe/MODNet](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/modnet/python)/[C++](./examples/vision/matting/modnet/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
-| <font size=2> Matting | <font size=2> [PaddleSeg/PPMatting](./examples/vision/matting/ppmatting) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
-| <font size=2> Matting | <font size=2> [PaddleSeg/PPHumanMatting](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
+| <font size=2> Matting | <font size=2> [PaddleSeg/PP-Matting](./examples/vision/matting/ppmatting) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
+| <font size=2> Matting | <font size=2> [PaddleSeg/PP-HumanMatting](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
| <font size=2> Matting | <font size=2> [PaddleSeg/ModNet](./examples/vision/matting/modnet) | <font size=2> [Python](./examples/vision/matting/ppmatting/python)/[C++](./examples/vision/matting/ppmatting/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|
| <font size=2> Information Extraction | <font size=2> [PaddleNLP/UIE](./examples/text/uie) | <font size=2> [Python](./examples/text/uie/python)/[C++](./examples/text/uie/cpp) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅|

View File

@@ -5,6 +5,6 @@ FastDeploy currently supports deployment of the following matting models
| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [ZHKKKe/MODNet](./modnet) | MODNet series models | ONNX | [CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) |
-| [PaddleSeg/PPMatting](./ppmatting) | PPMatting series models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
-| [PaddleSeg/PPHumanMatting](./ppmatting) | PPHumanMatting series models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [PaddleSeg/PP-Matting](./ppmatting) | PP-Matting series models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
+| [PaddleSeg/PP-HumanMatting](./ppmatting) | PP-HumanMatting series models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |
| [PaddleSeg/ModNet](./ppmatting) | ModNet series models | Paddle | [Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) |

View File

@@ -32,7 +32,7 @@ void CpuInfer(const std::string& model_file, const std::string& image_file,
    return;
  }
-  auto vis_im = fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res);
+  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
@@ -63,7 +63,7 @@ void GpuInfer(const std::string& model_file, const std::string& image_file,
    return;
  }
-  auto vis_im = fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res);
+  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
@@ -95,7 +95,7 @@ void TrtInfer(const std::string& model_file, const std::string& image_file,
    return;
  }
-  auto vis_im = fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res);
+  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
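The three hunks above make the same one-line change in CpuInfer, GpuInfer, and TrtInfer: the visualization call moves from the class method fastdeploy::vision::Visualize::VisMattingAlpha to the free function fastdeploy::vision::VisMatting. As a quick reference, here is a minimal hedged sketch of the new call pattern; the function and output file names are illustrative, and a MattingResult already filled by PPMatting::Predict is assumed, as in the example above.

```cpp
#include "fastdeploy/vision.h"

// Sketch only: visualize an existing matting result with the renamed helper.
// im_bak is an unmodified copy of the input image; res was filled by Predict.
void SaveMattingVis(const cv::Mat& im_bak, const cv::Mat& bg,
                    const fastdeploy::vision::MattingResult& res) {
  // Old call: fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res)
  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result_fg.jpg", vis_im);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
}
```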

View File

@@ -1,38 +1,38 @@
-# PPMatting Model Deployment
+# PP-Matting Model Deployment
## Model Version Notes
-- [PPMatting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [PP-Matting Release/2.6](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
## Supported Model List
FastDeploy currently supports deployment of the following models:
-- [PPMatting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
-- [PPHumanMatting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [PP-Matting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
+- [PP-HumanMatting series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
- [ModNet series models](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting)
## Export the Deployment Model
-Before deployment, PPMatting needs to be exported as a deployment model. For the export steps, refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) (Tip: exporting PPMatting and PPHumanMatting series models requires setting the `--input_shape` parameter of the export script).
+Before deployment, PP-Matting needs to be exported as a deployment model. For the export steps, refer to [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) (Tip: exporting PP-Matting and PP-HumanMatting series models requires setting the `--input_shape` parameter of the export script).
## Download Pre-trained Models
-For developers' convenience, the exported PPMatting model series are provided below and can be downloaded and used directly.
-The accuracy metrics come from the model descriptions in PPMatting (no accuracy data is provided); for details, refer to the notes in PPMatting.
+For developers' convenience, the exported PP-Matting model series are provided below and can be downloaded and used directly.
+The accuracy metrics come from the model descriptions in PP-Matting (no accuracy data is provided); for details, refer to the notes in PP-Matting.
| Model | Parameter Size | Accuracy | Notes |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
-| [PPMatting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
-| [PPMatting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
-| [PPHumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - |
-| [Modnet_ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - |
-| [Modnet_MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
-| [Modnet_HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |
+| [PP-Matting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 106MB | - |
+| [PP-Matting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 106MB | - |
+| [PP-HumanMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PPHumanMatting.tgz) | 247MB | - |
+| [Modnet-ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_ResNet50_vd.tgz) | 355MB | - |
+| [Modnet-MobileNetV2](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_MobileNetV2.tgz) | 28MB | - |
+| [Modnet-HRNet_w18](https://bj.bcebos.com/paddlehub/fastdeploy/PPModnet_HRNet_w18.tgz) | 51MB | - |

View File

@@ -1,13 +1,13 @@
-# PPMatting C++ Deployment Example
-This directory provides `infer.cc` to quickly complete an example of deploying PPMatting on CPU/GPU, and on GPU with TensorRT acceleration.
+# PP-Matting C++ Deployment Example
+This directory provides `infer.cc` to quickly complete an example of deploying PP-Matting on CPU/GPU, and on GPU with TensorRT acceleration.
Before deployment, confirm the following two steps:
- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/environment.md)
- 2. Download the pre-built deployment library and samples code according to your development environment; see [FastDeploy Pre-built Libraries](../../../../../docs/quick_start)
-Taking PPMatting inference on Linux as an example, run the following commands in this directory to complete the compile-and-test. If you only need CPU deployment, download the CPU inference library from [FastDeploy C++ Pre-built Libraries](../../../../../docs/quick_start/CPP_prebuilt_libraries.md).
+Taking PP-Matting inference on Linux as an example, run the following commands in this directory to complete the compile-and-test. If you only need CPU deployment, download the CPU inference library from [FastDeploy C++ Pre-built Libraries](../../../../../docs/quick_start/CPP_prebuilt_libraries.md).
```bash
# Download the SDK and compile the model examples code (the SDK already contains the examples code)
@@ -18,7 +18,7 @@ mkdir build && cd build
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/../../../../../../../fastdeploy-linux-x64-gpu-0.2.1
make -j
-# Download the PPMatting model files and test images
+# Download the PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
@@ -44,7 +44,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg
The above commands only apply to Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
-## PPMatting C++ Interface
+## PP-Matting C++ Interface
### PPMatting Class
@@ -57,7 +57,7 @@ fastdeploy::vision::matting::PPMatting(
    const ModelFormat& model_format = ModelFormat::PADDLE)
```
-Loads and initializes the PPMatting model, where model_file is in the exported Paddle model format.
+Loads and initializes the PP-Matting model, where model_file is in the exported Paddle model format.
**Parameters**
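For orientation beyond the renames in this README, here is a minimal hedged sketch of loading and running the model through the constructor documented above. The file names follow the PP-Matting-512 archive referenced earlier and may differ for your own export; the Predict and Initialized members are assumed from the infer.cc example this README describes.

```cpp
#include <iostream>

#include "fastdeploy/vision.h"

int main() {
  // The last two constructor arguments keep their defaults:
  // RuntimeOption() and ModelFormat::PADDLE.
  auto model = fastdeploy::vision::matting::PPMatting(
      "PP-Matting-512/model.pdmodel", "PP-Matting-512/model.pdiparams",
      "PP-Matting-512/deploy.yaml");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize PP-Matting." << std::endl;
    return -1;
  }
  cv::Mat im = cv::imread("matting_input.jpg");
  auto im_bak = im.clone();  // keep an untouched copy; Predict may modify im
  fastdeploy::vision::MattingResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  cv::imwrite("visualized_result_fg.jpg",
              fastdeploy::vision::VisMatting(im_bak, res));
  return 0;
}
```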

View File

@@ -41,7 +41,7 @@ void CpuInfer(const std::string& model_dir, const std::string& image_file,
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
-  auto vis_im = fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res);
+  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
@@ -75,7 +75,7 @@ void GpuInfer(const std::string& model_dir, const std::string& image_file,
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
-  auto vis_im = fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res);
+  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
@@ -110,7 +110,7 @@ void TrtInfer(const std::string& model_dir, const std::string& image_file,
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
-  auto vis_im = fastdeploy::vision::Visualize::VisMattingAlpha(im_bak, res);
+  auto vis_im = fastdeploy::vision::VisMatting(im_bak, res);
  auto vis_im_with_bg =
      fastdeploy::vision::Visualize::SwapBackgroundMatting(im_bak, bg, res);
  cv::imwrite("visualized_result.jpg", vis_im_with_bg);
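The CpuInfer, GpuInfer, and TrtInfer variants above differ only in how the runtime is configured before the model is created. The hedged sketch below shows that switch using FastDeploy's RuntimeOption; the option methods are taken from the runtime_option docs referenced in these commits, and the TensorRT input tensor name and shape are placeholders for whatever your exported PP-Matting model actually uses.

```cpp
#include "fastdeploy/vision.h"

// Assumption: RuntimeOption::UseGpu / UseTrtBackend / SetTrtInputShape as in
// FastDeploy's runtime_option documentation; verify against your installed version.
fastdeploy::RuntimeOption BuildOption(bool use_gpu, bool use_trt) {
  fastdeploy::RuntimeOption option;
  if (use_gpu) {
    option.UseGpu();  // device 0 by default
    if (use_trt) {
      option.UseTrtBackend();
      // PP-Matting is exported with a fixed --input_shape, so one shape suffices.
      option.SetTrtInputShape("img", {1, 3, 512, 512});  // placeholder tensor name
    }
  }
  return option;
}

// The option is then passed as the fourth constructor argument:
//   fastdeploy::vision::matting::PPMatting(model_file, params_file, config_file, option);
```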

View File

@@ -1,18 +1,18 @@
-# PPMatting Python Deployment Example
+# PP-Matting Python Deployment Example
Before deployment, confirm the following two steps:
- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/environment.md)
- 2. Install the FastDeploy Python wheel package; see [FastDeploy Python Installation](../../../../../docs/quick_start)
-This directory provides `infer.py` to quickly complete an example of deploying PPMatting on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete it:
+This directory provides `infer.py` to quickly complete an example of deploying PP-Matting on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete it:
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/matting/ppmatting/python
-# Download the PPmatting model files and test images
+# Download the PP-Matting model files and test images
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz
tar -xvf PP-Matting-512.tgz
wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_input.jpg
@@ -32,13 +32,13 @@ python infer.py --model PP-Matting-512 --image matting_input.jpg --bg matting_bg
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
<img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852554-6960659f-4fd7-4506-b33b-54e1a9dd89bf.jpg">
</div>
-## PPMatting Python Interface
+## PP-Matting Python Interface
```python
fd.vision.matting.PPMatting(model_file, params_file, config_file, runtime_option=None, model_format=ModelFormat.PADDLE)
```
-Loads and initializes the PPMatting model, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting).
+Loads and initializes the PP-Matting model, where model_file, params_file, and config_file are the Paddle inference files exported from the trained model; for details, refer to [Model Export](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting).
**Parameters**
@@ -72,7 +72,7 @@ PPMatting model loading and initialization, where model_file, params_file and config_fi
## Other Documents
-- [PPMatting Model Introduction](..)
-- [PPMatting C++ Deployment](../cpp)
+- [PP-Matting Model Introduction](..)
+- [PP-Matting C++ Deployment](../cpp)
- [Description of Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to Switch the Model Inference Backend](../../../../../docs/runtime/how_to_change_backend.md)

View File

@@ -59,7 +59,7 @@ bg = cv2.imread(args.bg)
result = model.predict(im.copy())
print(result)
# Visualize the results
-vis_im = fd.vision.vis_matting_alpha(im, result)
+vis_im = fd.vision.vis_matting(im, result)
vis_im_with_bg = fd.vision.swap_background_matting(im, bg, result)
cv2.imwrite("visualized_result_fg.jpg", vis_im)
cv2.imwrite("visualized_result_replaced_bg.jpg", vis_im_with_bg)

View File

@@ -89,6 +89,8 @@ void FDTensor::Squeeze(int64_t axis) {
  size_t ndim = shape.size();
  FDASSERT(axis >= 0 && axis < ndim,
           "The allowed 'axis' must be in range of (0, %lu)!", ndim);
+  FDASSERT(shape[axis] == 1,
+           "The No.%ld dimension of shape should be 1, but it is %ld!", (long)axis, (long)shape[axis]);
  shape.erase(shape.begin() + axis);
}
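The added FDASSERT makes Squeeze reject any axis whose extent is not 1 instead of silently dropping it. Below is a standalone, simplified sketch of the same guard on a plain shape vector; plain assert stands in for FastDeploy's FDASSERT macro, and this is not the FDTensor implementation itself.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Remove one axis from a shape, but only if that axis has extent 1.
void SqueezeShape(std::vector<int64_t>& shape, int64_t axis) {
  assert(axis >= 0 && axis < static_cast<int64_t>(shape.size()) &&
         "axis out of range");
  assert(shape[axis] == 1 && "only a dimension of size 1 can be squeezed");
  shape.erase(shape.begin() + axis);
}

// Example: {1, 3, 512, 512} squeezed at axis 0 becomes {3, 512, 512};
// squeezing axis 1 now trips the size-1 check instead of corrupting the shape.
```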