Commit Graph

7 Commits

Author SHA1 Message Date
DefTruth
12bb44e0de [Bug Fix] fix build xpu encrypt & auth image scripts (#2133)
* [patchelf] fix patchelf error for inference xpu

* [serving] add xpu dockerfile and support fd server

* [Serving] support XPU + Triton

* [Dockerfile] update xpu triton docker file -> paddle 0.0.0

* [Dockerfile] add comments for xpu triton dockerfile

* [Doruntime] fix xpu infer error

* [XPU] update xpu dockerfile

* add xpu triton server docs

* update xpu triton server docs

* [XPU] Update XPU L3 Cache setting docs

* [XPU] Add Encryption and AUTH support for XPU Server

* [Bug Fix] fix paddle reader error

* [Serving] Support XPU encrypt & auth server

* [Triton] switch TAG 22.12 -> TAG 21.10

* update xpu auth server script

* [Bug Fix] fix build xpu encrypt & auth image scripts
2023-07-24 21:00:05 +08:00
DefTruth
434b48dda5 [Serving] Support FastDeploy XPU Triton Server (#1994)
* [patchelf] fix patchelf error for inference xpu

* [serving] add xpu dockerfile and support fd server

* [Serving] support XPU + Triton

* [Dockerfile] update xpu triton docker file -> paddle 0.0.0

* [Dockerfile] add comments for xpu triton dockerfile

* [Doruntime] fix xpu infer error

* [XPU] update xpu dockerfile

* add xpu triton server docs

* update xpu triton server docs
2023-05-29 14:38:25 +08:00
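Once the XPU Triton image from the PR above is running, a served FastDeploy model can be queried with the standard Triton HTTP client. The sketch below is a minimal illustration only; the URL, model name (ppcls_model), and tensor names (INPUT/OUTPUT) are assumptions that depend on the actual model repository config.

    # Minimal sketch: query a FastDeploy XPU Triton server over HTTP.
    # Assumed: server at localhost:8000, a model "ppcls_model" whose config
    # declares an FP32 input "INPUT" and an output "OUTPUT".
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder image batch
    infer_input = httpclient.InferInput("INPUT", list(dummy.shape), "FP32")
    infer_input.set_data_from_numpy(dummy)

    response = client.infer(
        model_name="ppcls_model",
        inputs=[infer_input],
        outputs=[httpclient.InferRequestedOutput("OUTPUT")],
    )
    print(response.as_numpy("OUTPUT").shape)
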
Wang Xinyu
62e051e21d [CVCUDA] CMake integration, vision processor CV-CUDA integration, PaddleClas support CV-CUDA (#1074)
* cvcuda resize

* cvcuda center crop

* cvcuda resize

* add a fdtensor in fdmat

* get cv mat and get tensor support gpu

* paddleclas cvcuda preprocessor

* fix compile err

* fix windows compile error

* rename reused to cached

* address comment

* remove debug code

* add comment

* add manager run

* use cuda and cuda used

* use cv cuda doc

* address comment

---------

Co-authored-by: Jason <jiangjiajun@baidu.com>
2023-01-30 09:33:49 +08:00
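The CV-CUDA path above lives inside the vision processor used by PaddleClas; from Python, a GPU PaddleClas run looks roughly like the sketch below. The file paths are placeholders, and the commented-out preprocessor toggle is an assumption about how the CV-CUDA processor gets enabled, not a confirmed API.

    # Rough sketch of PaddleClas inference with FastDeploy on GPU; the file
    # paths ("model.pdmodel", "inference_cls.yaml", "test.jpg") are placeholders.
    import cv2
    import fastdeploy as fd

    option = fd.RuntimeOption()
    option.use_gpu(0)  # run the inference backend on GPU 0

    model = fd.vision.classification.PaddleClasModel(
        "model.pdmodel", "model.pdiparams", "inference_cls.yaml",
        runtime_option=option)

    # Hypothetical: this PR routes preprocessing through CV-CUDA inside the
    # processor manager; the exact Python-side switch (if any) is not verified
    # here, e.g. something like model.preprocessor.use_cuda(...).

    im = cv2.imread("test.jpg")
    result = model.predict(im)
    print(result)
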
heliqi
6310ddc8d6 [Serving] update np.object to np.object_ (#1021)
np.object to np.object_
2022-12-30 16:43:47 +08:00
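The change above tracks NumPy's removal of the np.object alias (deprecated in 1.20, removed in 1.24); np.object_ or the builtin object is the supported spelling. A tiny self-contained illustration:

    # np.object was deprecated in NumPy 1.20 and removed in 1.24;
    # use np.object_ (or the builtin `object`) for object-dtype arrays.
    import numpy as np

    arr = np.array(["a string", 42, None], dtype=np.object_)
    print(arr.dtype)  # -> object

    # Equivalent using the builtin type:
    arr2 = np.array(["a string", 42, None], dtype=object)
    assert arr.dtype == arr2.dtype
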
WJJ1995
de72162af9 [Serving] Fixed preprocess && postprocess in YOLOv5 Serving (#874)
* add onnx_ort_runtime demo

* rm in requirements

* support batch eval

* fixed MattingResults bug

* move assignment for DetectionResult

* integrated x2paddle

* add model convert readme

* update readme

* re-lint

* add processor api

* Add MattingResult Free

* change valid_cpu_backends order

* add ppocr benchmark

* mv bs from 64 to 32

* fixed quantize.md

* fixed quantize bugs

* Add Monitor for benchmark

* update mem monitor

* Set trt_max_batch_size default 1

* fixed ocr benchmark bug

* support yolov5 in serving

* Fixed yolov5 serving

* Fixed postprocess

Co-authored-by: Jason <jiangjiajun@baidu.com>
2022-12-14 10:14:29 +08:00
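As a rough picture of the postprocess step fixed above: a YOLOv5 head emits rows of (cx, cy, w, h, objectness, class scores) that the serving backend must filter and convert to corner-format boxes. The numpy sketch below is a generic reconstruction of that idea, not the PR's actual serving code.

    # Generic sketch of YOLOv5-style postprocessing: confidence filtering and
    # (cx, cy, w, h) -> (x1, y1, x2, y2) conversion. Not the PR's exact code.
    import numpy as np

    def postprocess(pred, conf_thres=0.25):
        """pred: (N, 5 + num_classes) raw detections for one image."""
        obj = pred[:, 4]
        cls_scores = pred[:, 5:]
        cls_ids = cls_scores.argmax(axis=1)
        scores = obj * cls_scores[np.arange(len(pred)), cls_ids]

        keep = scores > conf_thres
        boxes = pred[keep, :4].copy()
        boxes[:, 0] -= boxes[:, 2] / 2  # x1 = cx - w/2
        boxes[:, 1] -= boxes[:, 3] / 2  # y1 = cy - h/2
        boxes[:, 2] += boxes[:, 0]      # x2 = x1 + w
        boxes[:, 3] += boxes[:, 1]      # y2 = y1 + h
        return boxes, scores[keep], cls_ids[keep]

    # Example with random data shaped like an 80-class YOLOv5 output:
    raw = np.random.rand(100, 85).astype(np.float32)
    boxes, scores, labels = postprocess(raw)
    print(boxes.shape, scores.shape, labels.shape)
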
heliqi
6ebe612377 [Serving] ppcls preprocessor support gpu (#615)
* serving ppcls support gpu

* serving ppcls preprocessor use cpu
2022-11-17 17:16:32 +08:00
heliqi
b0a30a7b10 [Serving] Add PPCls serving examples (#555)
* add ppcls serving examples

* fix ppcls/serving docs

* fix code style
2022-11-11 13:32:46 +08:00