DefTruth
49c033a828
[XPU] Support XPU via Paddle Inference backend ( #1987 )
* [backend] Support XPU via Paddle Inference backend
* [backend] Support XPU via Paddle Inference backend
* [backend] Support XPU via Paddle Inference backend
* [XPU] support XPU benchmark via paddle inference
* [XPU] support XPU benchmark via paddle inference
* [benchmark] add xpu paddle h2d config files
2023-05-25 14:13:40 +08:00
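For reference, a minimal sketch of how XPU inference through the Paddle Inference backend might be selected from FastDeploy's Python API; the method names use_kunlunxin and use_paddle_infer_backend are assumptions based on the public RuntimeOption interface and are not confirmed by this log.

    import fastdeploy as fd

    # Hypothetical sketch: run on a KunlunXin XPU through the Paddle Inference backend.
    option = fd.RuntimeOption()
    option.use_kunlunxin(0)              # assumed: select XPU device 0
    option.use_paddle_infer_backend()    # assumed: pick Paddle Inference as the backend
    option.set_model_path("model.pdmodel", "model.pdiparams")
    runtime = fd.Runtime(option)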
heliqi
3e7cb88049
[Serving] Support 22.12 ( #1974 )
support 22.12
2023-05-22 22:27:13 +08:00
DefTruth
652024d2f6
Revert "Remove Paddle Reader" ( #1860 )
Revert "Remove Paddle Reader (#1813)"
This reverts commit f3d44785c4.
2023-04-23 23:16:31 +08:00
Jason
f3d44785c4
Remove Paddle Reader ( #1813 )
* Remove Paddle Reader
* support pp-infer c++14
* disable trt cache
---------
Co-authored-by: wang-xinyu <wangxinyu_es@163.com>
2023-04-20 21:12:43 +08:00
yeliang2258
a509dd8ec1
[Model] Add Paddle3D smoke model ( #1766 )
* add smoke model
* add 3d vis
* update code
* update doc
* mv paddle3d from detection to perception
* update result for velocity
* update code for CI
* add set input data for TRT backend
* add serving support for smoke model
* update code
* update code
* update code
---------
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
2023-04-14 16:30:56 +08:00
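As a usage reference for the smoke model added above, a hedged sketch of loading it through the perception module; the class path fd.vision.perception.Smoke, the config file name, and the constructor order are assumptions.

    import cv2
    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_gpu(0)
    # Assumed class path after the move from detection to perception.
    model = fd.vision.perception.Smoke(
        "smoke.pdmodel", "smoke.pdiparams", "infer_cfg.yml",
        runtime_option=option)
    im = cv2.imread("test.png")
    result = model.predict(im)   # 3D boxes plus the velocity field updated in this PR
    print(result)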
WJJ1995
5c70db176f
[Backend] Add switch_ir_debug for Paddle Backend ( #1700 )
* avoid mem copy for cpp benchmark
* set CMAKE_BUILD_TYPE to Release
* Add SegmentationDiff
* change pointer to reference
* fixed bug
* cast uint8 to int32
* Add diff compare for OCR
* Add diff compare for OCR
* rm ppocr pipeline
* Add yolov5 diff compare
* Add yolov5 diff compare
* deal with comments
* deal with comments
* fixed bug
* fixed bug
* fixed thread nums
* Add Failed log
* optimize x86 pipeline
* Add switch_ir_debug for paddle backend
* fixed for ci
2023-03-24 17:29:31 +08:00
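switch_ir_debug corresponds to Paddle Inference's SwitchIrDebug, which dumps the model graph after each IR optimization pass. A rough sketch of toggling it, assuming the flag is exposed on the Paddle backend option in Python (the exact attribute path is not confirmed by this log):

    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_paddle_infer_backend()
    # Assumed attribute path; the PR only states that a switch_ir_debug
    # flag was added to the Paddle backend option.
    option.paddle_infer_option.switch_ir_debug = True
    option.set_model_path("model.pdmodel", "model.pdiparams")
    runtime = fd.Runtime(option)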
Jack Zhou
012c7771c1
[Serving] Add collect shape and fix serving infer ( #1658 )
Add collect shape and fix serving infer
2023-03-20 19:55:30 +08:00
Jason
3b1343c726
[Bug] Fix big model loading problem ( #1636 )
Fix big model loading problem
2023-03-17 10:25:26 +08:00
Jack Zhou
f4736e7931
Merge pull request #1552 from joey12300/fix_delete_pass
[Backend] Fix delete pass of paddle inference
2023-03-08 19:57:08 +08:00
Jason
6be2c0367b
[Example] Update runtime examples ( #1542 )
* Add notes for tensors
* Optimize some apis
* move some warnings
2023-03-08 16:56:04 +08:00
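Since this entry reworks the runtime examples, a minimal hedged sketch of the fd.Runtime flow they demonstrate; the input-name lookup and the 1x3x224x224 shape are placeholders, not taken from the examples themselves.

    import numpy as np
    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.set_model_path("model.pdmodel", "model.pdiparams")
    option.use_paddle_infer_backend()
    runtime = fd.Runtime(option)

    # Assumed helper: query the model's first input name instead of hard-coding it.
    input_name = runtime.get_input_info(0).name
    data = np.random.rand(1, 3, 224, 224).astype("float32")
    outputs = runtime.infer({input_name: data})
    print(outputs[0].shape)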
zhoushunjie
384eca14fd
Fix delete pass
2023-03-08 06:32:27 +00:00
Jack Zhou
524c85745b
[Backend] Add fixed size optimization for transformer model ( #1430 )
Add enable_fixed_size_opt flag
2023-02-24 09:45:04 +08:00
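The enable_fixed_size_opt flag targets transformer models whose input shapes are fully fixed, letting the Paddle backend apply fixed-size optimizations. A hedged sketch, assuming the flag is surfaced on the Paddle backend option (the Python attribute name is an assumption):

    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_paddle_infer_backend()
    # Assumed attribute; the commit only documents an enable_fixed_size_opt flag.
    option.paddle_infer_option.enable_fixed_size_opt = True
    option.set_model_path("ernie.pdmodel", "ernie.pdiparams")
    runtime = fd.Runtime(option)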
Jason
18e33bae5c
[Other] Optimize runtime module ( #1356 )
* Optimize runtime
* fix error
* [Backend] Add option to print tensorrt conversion log (#1386 )
Add option to print tensorrt conversion log
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
---------
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
2023-02-21 17:01:32 +08:00
WJJ1995
c25d1cc1bc
[Backend] Fixed enable_paddle_to_trt() bug ( #1320 )
* add GPL license
* add GPL-3.0 license
* add GPL-3.0 license
* add GPL-3.0 license
* support yolov8
* add pybind for yolov8
* add yolov8 readme
* add cpp benchmark
* add cpu and gpu mem
* public part split
* add runtime mode
* fixed bugs
* add cpu_thread_nums
* deal with comments
* deal with comments
* deal with comments
* rm useless code
* add FASTDEPLOY_DECL
* add FASTDEPLOY_DECL
* fixed for windows
* mv rss to pss
* mv rss to pss
* Update utils.cc
* use thread to collect mem
* Add ResourceUsageMonitor
* rm useless code
* fixed bug
* fixed typo
* update ResourceUsageMonitor
* fixed bug
* fixed bug
* add note for ResourceUsageMonitor
* deal with comments
* add macros
* deal with comments
* deal with comments
* deal with comments
* re-lint
* rm pmap and use mem api
* rm pmap and use mem api
* add mem api
* Add PrintBenchmarkInfo func
* Add PrintBenchmarkInfo func
* Add PrintBenchmarkInfo func
* deal with comments
* fixed enable_paddle_to_trt
* add log for paddle_trt
---------
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
2023-02-14 17:51:39 +08:00
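enable_paddle_to_trt() redirects a TensorRT request to Paddle Inference's built-in Paddle-TRT engine rather than FastDeploy's standalone TensorRT backend. A hedged sketch of the intended call order, assuming the Python method mirrors the name in the title:

    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_gpu(0)
    option.use_trt_backend()         # request TensorRT first ...
    option.enable_paddle_to_trt()    # ... then switch to Paddle Inference + Paddle-TRT
    option.set_model_path("model.pdmodel", "model.pdiparams")
    runtime = fd.Runtime(option)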
wwbitejotunn
f1ab47a4ef
code refine
2023-02-13 18:16:45 +00:00
wwbitejotunn
289d353d99
Merge branch 'develop' of https://github.com/paddlepaddle/fastdeploy into set_stream_infer-shareExData
2023-02-13 03:14:18 +00:00
wwbitejotunn
abfa9fd850
prebind output by shareExternalData
2023-02-13 03:11:31 +00:00
wwbitejotunn
898b063216
get cache dir
2023-02-09 20:56:55 +08:00
wwbitejotunn
c2e5f6317e
fix paddle backend
2023-02-09 20:56:55 +08:00
wwbitejotunn
4b293a89de
fix paddle backend
2023-02-09 05:51:30 +00:00
Jason
a4b0565b9a
[Other] Optimize paddle backend ( #1265 )
* Optimize paddle backend
* optimize paddle backend
* add version support
2023-02-08 19:12:03 +08:00
DefTruth
f73a538f61
[Backend] Support benchmark mode for runtime and backend ( #1201 )
* [backend] support benchmark mode for runtime and backend
* [backend] support benchmark mode for runtime and backend
* [pybind11] add benchmark methods pybind
* [pybind11] add benchmark methods pybind
* [Other] Update build scripts
* [Other] Update cmake/summary.cmake
* [Other] update build scripts
* [Other] add ENABLE_BENCHMARK option -> setup.py
* optimize backend time recording
* optimize backend time recording
* optimize trt backend time record
* [backend] optimize backend_time recording for trt
* [benchmark] remove redundant logs
* fixed ov_backend conflict
* [benchmark] fixed paddle_backend conflicts
* [benchmark] fixed paddle_backend conflicts
* [benchmark] fixed paddle_backend conflicts
* [benchmark] remove use_gpu option from ort backend option
* [benchmark] update benchmark_ppdet.py
* [benchmark] update benchmark_ppcls.py
* fixed lite backend conflicts
* [Lite] fixed lite xpu
* add benchmark macro
* add RUNTIME_PROFILE_LOOP macros
* add comments for RUNTIME_PROFILE macros
* add comments for new apis
* add comments for new apis
* update benchmark_ppdet.py
* fixed bugs
* remove unused codes
* optimize RUNTIME_PROFILE_LOOP macros
* optimize RUNTIME_PROFILE_LOOP macros
* add comments for benchmark option and result
* add docs for benchmark namespace
2023-02-06 14:29:35 +08:00
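The benchmark mode described above is compiled in behind the ENABLE_BENCHMARK build option and drives the RUNTIME_PROFILE_LOOP macros that time each backend. A hedged sketch of enabling it from Python, assuming a profiling switch on RuntimeOption (the method name and defaults are assumptions):

    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_paddle_infer_backend()
    option.set_model_path("model.pdmodel", "model.pdiparams")
    # Assumed API; requires a build configured with -DENABLE_BENCHMARK=ON.
    # Warmup/repeat counts and H2D/D2H inclusion are configurable in the benchmark option.
    option.enable_profiling()
    runtime = fd.Runtime(option)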
huangjianhui
ba6d75f526
Delete redundant code ( #1222 )
Update paddle_backend.cc
Delete redundant code
Co-authored-by: Jason <jiangjiajun@baidu.com>
2023-02-02 15:44:52 +08:00
Jason
b4e322af63
[Other] Optimize load model from memory function ( #1205 )
Optimize option for runtime
2023-02-01 15:50:38 +08:00
huangjianhui
76df90afc3
[Other] FastDeploy TensorRT && ONNX backend support to load model from memory ( #1130 )
* Update all backends load model from buffer
* Delete redundant code
* Format code style
* Format code style
* Delete redundant code
* Delete redundant code
* Add some FDASSERTs
* Update load model from memory when cloning engine
* Update clone engine code
* Update set_model_buffer api parameters with char pointer
* Release memory buffer variables after finish init backends
* Fix conflict
* Fix bug
2023-02-01 11:36:09 +08:00
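For the load-from-memory path, a hedged sketch of handing serialized model and parameter buffers to the runtime instead of file paths; set_model_buffer is named in the commits above, but its exact Python signature is an assumption.

    import fastdeploy as fd

    with open("model.pdmodel", "rb") as f:
        model_buffer = f.read()
    with open("model.pdiparams", "rb") as f:
        params_buffer = f.read()

    option = fd.RuntimeOption()
    # Assumed signature: pass raw buffers so no files are read at load time;
    # per the commits, the backends release these buffers after initialization.
    option.set_model_buffer(model_buffer, params_buffer)
    runtime = fd.Runtime(option)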
Jason
4aa4ebd7c3
[Other] [Part2] Upgrade runtime module ( #1080 )
[Other] Upgrade runtime module
2023-01-09 13:22:51 +08:00