YOLOv5 Quantized Model Deployment

FastDeploy supports the deployment of quantized models and provides a one-click model quantization tool. Users can either quantize and deploy models themselves with this tool, or directly download and deploy the quantized models provided by FastDeploy.

FastDeploy One-Click Model Quantization Tool

FastDeploy provides a one-click quantization tool that lets users quantize a model simply by providing a configuration file. For a detailed tutorial, please refer to: One-Click Model Quantization Tool

Download Quantized YOLOv5s Model

Users can also directly download the quantized models in the table below for deployment.

| Model   | Inference Backend | Hardware | FP32 Latency (ms) | INT8 Latency (ms) | Speedup | FP32 mAP | INT8 mAP | Quantization Method             |
| ------- | ----------------- | -------- | ----------------- | ----------------- | ------- | -------- | -------- | ------------------------------- |
| YOLOv5s | TensorRT          | GPU      | 8.79              | 5.17              | 1.70    | 37.6     | 36.6     | Quantized distillation training |
| YOLOv5s | Paddle Inference  | CPU      | 217.05            | 133.31            | 1.63    | 37.6     | 36.8     | Quantized distillation training |

The table above shows the end-to-end inference performance of FastDeploy deployment before and after model quantization.

  • The test images are from the COCO val2017 dataset.
  • The inference latency is measured end to end on the corresponding runtime and reported in milliseconds.
  • The CPU is an Intel(R) Xeon(R) Gold 6271C, the GPU is an NVIDIA Tesla T4, the TensorRT version is 8.4.15, and the number of CPU threads is fixed to 1 for all tests.
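For reference, below is a minimal Python sketch of how a downloaded quantized YOLOv5s Paddle model could be loaded and run with FastDeploy. The model directory, file names, and image path are placeholder assumptions (not from this document), and the backend choices simply mirror the rows of the table above; see the detailed tutorials below for the exact, supported usage.

```python
import cv2
import fastdeploy as fd

# Placeholder paths (assumptions): point these at the downloaded quantized
# model directory and a local test image.
model_file = "yolov5s_quant/model.pdmodel"
params_file = "yolov5s_quant/model.pdiparams"
image_file = "test.jpg"

# Configure the runtime: GPU + TensorRT, matching the first row of the table.
# For the CPU + Paddle Inference row, use option.use_cpu() and the Paddle
# Inference backend instead.
option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()

# Load the quantized YOLOv5s model (Paddle format) and run detection on one image.
model = fd.vision.detection.YOLOv5(
    model_file, params_file, runtime_option=option, model_format=fd.ModelFormat.PADDLE
)
im = cv2.imread(image_file)
result = model.predict(im)
print(result)

# Optionally visualize and save the detection result.
vis = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis)
```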

More Detailed Tutorials