[benchmark] add max_workspace_size flags for tensorrt/pptrt backend (#2058)

* [benchmark] fixed paddlex benchmark for picodet 320

* [Bug Fix] fixed paddlex ppseg pp-trt infer error

* [Bug Fix] fixed paddlex dino benchmark trt shapes

* [benchmark] support paddlex ppyoloe pptrt benchmark

* [benchmark] adjust paddlex dino trt shapes

* [benchmark] add max_workspace_size flags for tensorrt/pptrt backend

* [benchmark] add max_workspace_size flags for tensorrt/pptrt backend

* [benchmark] add max_workspace_size flags for tensorrt/pptrt backend

---------

Co-authored-by: qiuyanjun <qiuyanjun@baidu.com>
DefTruth committed on 2023-06-22 16:43:39 +08:00 (committed by GitHub)
parent 269d65a9bb
commit ff835690a2
3 changed files with 15 additions and 2 deletions


@@ -63,7 +63,10 @@ DEFINE_int32(device_id, -1,
DEFINE_bool(enable_log_info, false,
            "Optional, whether to enable log info for paddle backend, "
            "default false.");
DEFINE_int32(max_workspace_size, -1,
             "Optional, set the max workspace size for the TensorRT / Paddle-TRT "
             "backend, default -1. If set, this overrides the value in the "
             "config file, e.g. 2147483647 (2GB).");
static void PrintUsage() {
static void PrintUsage() {
std::cout << "Usage: infer_demo --model model_path --image img_path "
"--config_path config.txt[Path of benchmark config.] "