* [backend] Support XPU via Paddle Inference backend
* [XPU] Support XPU benchmark via Paddle Inference
* [benchmark] Add XPU Paddle host-to-device (H2D) config files
* Add SMOKE model
* Add 3D visualization
* Update code
* Update docs
* Move paddle3d from detection to perception
* Update results for velocity
* Update code for CI
* Add support for setting input data in the TRT backend
* Add serving support for the SMOKE model
* Fix links in README
* Update PPOCRv2/v3 examples
* Update auto compression configs
* Add new quantization support for PaddleClas models
* Update quantized YOLOv6s model download link
* Improve PPOCR comments
* Add English doc for quantization
* Fix PPOCR recognition model bug
* Add new PaddleSeg quantization support
* Add Ascend model list
* Support DirectML in the ONNX Runtime backend (see the sketch after this list)
* Remove DirectML vision model example
* Improve ONNX Runtime DirectML support
* Fix OpenCV CMake configuration on Windows
* Recheck code style
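
The DirectML entries above refer to enabling ONNX Runtime's DirectML execution provider. As a minimal illustrative sketch, the Python snippet below runs an ONNX model through the plain `onnxruntime` API with DirectML and a CPU fallback; it is not the project's own runtime wrapper (whose API these notes do not show), and the model path and zero-filled dummy input are placeholders.

```python
# Minimal sketch: run an ONNX model on the DirectML execution provider.
# Assumes the onnxruntime-directml package is installed (Windows);
# "model.onnx" and the dummy input below are placeholders for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",                        # placeholder model path
    providers=["DmlExecutionProvider",   # DirectML
               "CPUExecutionProvider"],  # CPU fallback if DirectML is unavailable
)

# Build a dummy input matching the model's first input, treating any
# dynamic dimension as 1.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)  # assumes a float32 input

outputs = session.run(None, {inp.name: dummy})
print("Active providers:", session.get_providers())
print("First output shape:", outputs[0].shape)
```

If the DirectML provider is not available, ONNX Runtime falls back to the next provider in the list, so checking `get_providers()` confirms which provider actually ran.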