mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-11-03 11:02:01 +08:00
[Quantization] Update quantized model deployment examples and update readme. (#377)
* Add PaddleOCR Support
* Add PaddleOCR Support
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Fix Rec diff
* Remove useless functions
* Remove useless comments
* Add PaddleOCRv2 Support
* Add PaddleOCRv3 & PaddleOCRv2 Support
* Remove useless parameters
* Add utils of sorting det boxes
* Fix code naming convention
* Fix code naming convention
* Fix code naming convention
* Fix bug in the Classify process
* Improve OCR Readme
* Fix diff in Cls model
* Update Model Download Link in Readme
* Fix diff in PPOCRv2
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Fix conflict
* Add readme for OCRResult
* Improve OCR readme
* Add OCRResult readme
* Improve OCR readme
* Improve OCR readme
* Add Model Quantization Demo
* Fix Model Quantization Readme
* Fix Model Quantization Readme
* Add the function to do PTQ quantization
* Improve quant tools readme
* Improve quant tool readme
* Improve quant tool readme
* Add PaddleInference-GPU for OCR Rec model
* Add QAT method to fastdeploy-quantization tool
* Remove examples/slim for now
* Move configs folder
* Add Quantization Support for Classification Model
* Improve ways of importing preprocess
* Upload YOLO Benchmark on readme
* Upload YOLO Benchmark on readme
* Upload YOLO Benchmark on readme
* Improve Quantization configs and readme
* Add support for multi-input models
* Add backends and params file for YOLOv7
* Add quantized model deployment support for YOLO series
* Fix YOLOv5 quantize readme
* Fix YOLO quantize readme
* Fix YOLO quantize readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Fix bug, change Frontend to ModelFormat
* Change Frontend to ModelFormat
* Add examples to deploy quantized PaddleClas models
* Fix readme
* Add quantize Readme
* Add quantize Readme
* Add quantize Readme
* Modify readme of quantization tools
* Modify readme of quantization tools
* Improve quantization tools readme
* Improve quantization readme
* Improve PaddleClas quantized model deployment readme
* Add PPYOLOE-l quantized deployment examples
* Improve quantization tools readme
* Improve Quantize Readme
* Fix conflicts
* Fix conflicts
* Improve readme
* Improve quantization tools and readme
* Improve quantization tools and readme
* Add quantized deployment examples for PaddleSeg model
* Fix cpp readme
* Fix memory leak of reader_wrapper function
* Fix model file name in PaddleClas quantization examples
* Update Runtime and E2E benchmark
* Update Runtime and E2E benchmark
* Rename quantization tools to auto compression tools
* Remove PPYOLOE data when deployed on MKLDNN
* Fix readme
* Support PPYOLOE with or without NMS and update readme
* Update Readme
* Update configs and readme
* Update configs and readme
* Add Paddle-TensorRT backend in quantized model deploy examples
* Support PPYOLOE+ series
@@ -1,23 +1,42 @@
# Deploying Quantized YOLOv6 Models

-FastDeploy already supports deploying quantized models and provides a one-click model quantization tool.
-Users can quantize a model themselves with the one-click quantization tool and deploy the result, or directly download and deploy the quantized models provided by FastDeploy.
-
-## FastDeploy One-Click Model Quantization Tool
-
-FastDeploy provides a one-click quantization tool that can quantize a model given nothing more than a configuration file.
-For a detailed tutorial, see: [One-Click Model Quantization Tool](../../../../../tools/quantization/)
+FastDeploy already supports deploying quantized models and provides a one-click model auto compression tool.
+Users can quantize a model themselves with the one-click auto compression tool and deploy the result, or directly download and deploy the quantized models provided by FastDeploy.
+
+## FastDeploy One-Click Model Auto Compression Tool
+
+FastDeploy provides a one-click model auto compression tool that can quantize a model given nothing more than a configuration file.
+For a detailed tutorial, see: [One-Click Model Auto Compression Tool](../../../../../tools/auto_compression/)

## Download Quantized YOLOv6s Models

-Users can also directly download and deploy the quantized models in the table below.
+Users can also directly download and deploy the quantized models in the table below. (Click a model name to download it.)
+
+Notes on the benchmark tables:
+- Runtime latency is the model's inference latency on each runtime, including the CPU->GPU data copy, GPU inference, and the GPU->CPU data copy. It does not include each model's pre- and post-processing time.
+- End-to-end latency is the model's latency in a real inference scenario, including pre- and post-processing.
+- All reported latencies are averages over 1000 inference runs, in milliseconds.
+- INT8 + FP16 means the runtime's FP16 inference option is enabled while running the INT8 quantized model.
+- INT8 + FP16 + PM means pinned memory is enabled on top of INT8 + FP16, which speeds up the GPU->CPU data copy.
+- Max speedup is the FP32 latency divided by the fastest INT8 latency.
+- When the strategy is quantized distillation training, the quantized model is trained on a small amount of unlabeled data and its accuracy is validated on the full validation set, so the reported INT8 accuracy is not necessarily the best achievable INT8 accuracy.
+- The CPU is an Intel(R) Xeon(R) Gold 6271C with the thread count fixed to 1 in all tests; the GPU is a Tesla T4 with TensorRT 8.4.15.
+
+#### Runtime Benchmark
+| Model | Inference Backend | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8 + FP16 + PM Runtime Latency | Max Speedup | FP32 mAP | INT8 mAP | Quantization Method |
+| ------------------- | ----------------- | --------- | -------- | -------- | -------- | --------- | -------- | ----- | ----- | ----- |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | TensorRT | GPU | 9.47 | 3.23 | 4.09 | 2.81 | 3.37 | 42.5 | 40.7 | Quantized distillation training |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | Paddle-TensorRT | GPU | 9.31 | None | 4.17 | 2.95 | 3.16 | 42.5 | 40.7 | Quantized distillation training |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | ONNX Runtime | CPU | 334.65 | 126.38 | None | None | 2.65 | 42.5 | 36.8 | Quantized distillation training |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | Paddle Inference | CPU | 352.87 | 123.12 | None | None | 2.87 | 42.5 | 40.8 | Quantized distillation training |
+
+#### End-to-End Benchmark
+| Model | Inference Backend | Hardware | FP32 End-to-End Latency | INT8 End-to-End Latency | INT8 + FP16 End-to-End Latency | INT8 + FP16 + PM End-to-End Latency | Max Speedup | FP32 mAP | INT8 mAP | Quantization Method |
+| ------------------- | ----------------- | --------- | -------- | -------- | -------- | --------- | -------- | ----- | ----- | ----- |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | TensorRT | GPU | 15.66 | 11.30 | 10.25 | 9.59 | 1.63 | 42.5 | 40.7 | Quantized distillation training |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | Paddle-TensorRT | GPU | 15.03 | None | 11.36 | 9.32 | 1.61 | 42.5 | 40.7 | Quantized distillation training |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | ONNX Runtime | CPU | 348.21 | 126.38 | None | None | 2.82 | 42.5 | 36.8 | Quantized distillation training |
+| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_ptq_model.tar) | Paddle Inference | CPU | 352.87 | 121.64 | None | None | 3.04 | 42.5 | 40.8 | Quantized distillation training |
-
-| Model | Inference Backend | Hardware | FP32 Inference Latency | INT8 Inference Latency | Speedup | FP32 mAP | INT8 mAP | Quantization Method |
-| ------------------- | ----------------- | --------- | -------- | -------- | -------- | --------- | -------- | ------ |
-| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_quant.tar) | TensorRT | GPU | 12.89 | 8.92 | 1.45 | 42.5 | 40.6 | Quantized distillation training |
-| [YOLOv6s](https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_quant.tar) | Paddle Inference | CPU | 366.41 | 131.70 | 2.78 | 42.5 | 41.2 | Quantized distillation training |
-
-The data in the table above are the end-to-end inference performance of FastDeploy deployments before and after model quantization.
-- The test image is from COCO val2017.
-- Inference latency is the average end-to-end latency (including pre- and post-processing), in milliseconds.
-- The CPU is an Intel(R) Xeon(R) Gold 6271C and the GPU is a Tesla T4 with TensorRT 8.4.15; the thread count is fixed to 1 in all tests.

## Detailed Deployment Documentation
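To make the table arithmetic concrete, here is a small Python sketch of the methodology the notes describe: average latency over 1000 runs and max speedup as the FP32 latency over the fastest INT8 variant. The `run_inference` callable is a hypothetical stand-in for a single predict call on an already-loaded model; the numbers are taken from the YOLOv6s TensorRT row above.

```python
# Sketch of the benchmark arithmetic described in the notes above.
# `run_inference` is a hypothetical stand-in for one predict() call
# on an already-loaded FastDeploy model.
import time

def average_latency_ms(run_inference, repeats=1000):
    """Average wall-clock latency over `repeats` runs, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        run_inference()
    return (time.perf_counter() - start) / repeats * 1000.0

# Max speedup for the YOLOv6s TensorRT row of the Runtime Benchmark table:
fp32_ms = 9.47
int8_variants_ms = [3.23, 4.09, 2.81]  # INT8, INT8+FP16, INT8+FP16+PM
print(f"max speedup: {fp32_ms / min(int8_variants_ms):.2f}x")  # -> 3.37x
```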
@@ -9,7 +9,7 @@
### Preparing the Quantized Model
- 1. Users can directly deploy the quantized models provided by FastDeploy.
-- 2. Users can use FastDeploy's [One-Click Model Quantization Tool](../../../../../../tools/quantization/) to quantize a model themselves and deploy the resulting quantized model.
+- 2. Users can use FastDeploy's [One-Click Model Auto Compression Tool](../../tools/auto_compression/) to quantize a model themselves and deploy the resulting quantized model.

## Deployment Example with the Quantized YOLOv6s Model
Run the following commands in this directory to build the demo and deploy the quantized model.
@@ -22,13 +22,15 @@ cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.4.0
make -j

# Download the yolov6s quantized model files and the test image provided by FastDeploy
-wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_quant.tar
-tar -xvf yolov6s_quant.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_qat_model.tar
+tar -xvf yolov6s_qat_model.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg


# Run the quantized model on CPU with Paddle Inference
-./infer_demo yolov6s_quant 000000014439.jpg 0
+./infer_demo yolov6s_qat_model 000000014439.jpg 0
# Run the quantized model on GPU with TensorRT
-./infer_demo yolov6s_quant 000000014439.jpg 1
+./infer_demo yolov6s_qat_model 000000014439.jpg 1
+# Run the quantized model on GPU with Paddle-TensorRT
+./infer_demo yolov6s_qat_model 000000014439.jpg 2
```

The third argument to `infer_demo` selects the backend: 0 runs Paddle Inference on the CPU, 1 runs TensorRT on the GPU, and 2 runs Paddle-TensorRT (TensorRT through Paddle Inference) on the GPU, matching the flag handling in the C++ excerpt below.
@@ -68,7 +68,11 @@ int main(int argc, char* argv[]) {
  } else if (flag == 1) {
    option.UseGpu();
    option.UseTrtBackend();
-  }
+  } else if (flag == 2) {
+    option.UseGpu();
+    option.UseTrtBackend();
+    option.EnablePaddleToTrt();
+  }

  std::string model_dir = argv[1];
  std::string test_image = argv[2];
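For readers who prefer the Python API, the same backend selection can be sketched with `fastdeploy.RuntimeOption`. The method names below are taken from the infer.py excerpt later in this diff (`use_trt_backend`, `enable_paddle_to_trt`, `use_ort_backend`); the flag-0 CPU branch is an assumption, since the visible C++ excerpt starts at `flag == 1`.

```python
# Hedged Python mirror of the C++ flag handling above; not the shipped demo.
import fastdeploy as fd

def build_runtime_option(flag):
    option = fd.RuntimeOption()
    if flag == 0:
        # CPU with Paddle Inference (assumed: this branch is outside the
        # visible excerpt, but matches the "CPU ... Paddle Inference" comment).
        option.use_paddle_backend()
    elif flag == 1:
        # GPU with TensorRT.
        option.use_gpu()
        option.use_trt_backend()
    elif flag == 2:
        # GPU with Paddle-TensorRT: TensorRT driven through Paddle Inference.
        option.use_gpu()
        option.use_trt_backend()
        option.enable_paddle_to_trt()
    return option
```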
@@ -8,7 +8,7 @@
### Preparing the Quantized Model
- 1. Users can directly deploy the quantized models provided by FastDeploy.
-- 2. Users can use FastDeploy's [One-Click Model Quantization Tool](../../../../../../tools/quantization/) to quantize a model themselves and deploy the resulting quantized model.
+- 2. Users can use FastDeploy's [One-Click Model Auto Compression Tool](../../tools/auto_compression/) to quantize a model themselves and deploy the resulting quantized model.

## Deployment Example with the Quantized YOLOv6s Model
```bash
@@ -17,12 +17,14 @@ git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/slim/yolov6/python

# Download the yolov6s quantized model files and the test image provided by FastDeploy
-wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_quant.tar
-tar -xvf yolov6s_quant.tar
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_qat_model.tar
+tar -xvf yolov6s_qat_model.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# Run the quantized model on CPU with Paddle Inference
-python infer.py --model yolov6s_quant --image 000000014439.jpg --device cpu --backend paddle
+python infer.py --model yolov6s_qat_model --image 000000014439.jpg --device cpu --backend paddle
# Run the quantized model on GPU with TensorRT
-python infer.py --model yolov6s_quant --image 000000014439.jpg --device gpu --backend trt
+python infer.py --model yolov6s_qat_model --image 000000014439.jpg --device gpu --backend trt
+# Run the quantized model on GPU with Paddle-TensorRT
+python infer.py --model yolov6s_qat_model --image 000000014439.jpg --device gpu --backend pptrt
```
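The command-line flags above imply an argparse front end feeding the `build_option(args)` helper shown next. A minimal sketch follows; the flag names are taken from the invocations above, while the defaults and help strings are assumptions rather than the demo's actual code.

```python
# Minimal argparse wiring implied by the commands above; defaults and help
# text are assumptions, not copied from the demo script.
import argparse

def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", required=True,
                        help="directory of the quantized Paddle model, e.g. yolov6s_qat_model")
    parser.add_argument("--image", required=True, help="path to the test image")
    parser.add_argument("--device", type=str, default="cpu", help="'cpu' or 'gpu'")
    parser.add_argument("--backend", type=str, default="paddle",
                        help="'paddle', 'ort', 'trt' or 'pptrt'")
    return parser.parse_args()
```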
@@ -47,6 +47,11 @@ def build_option(args):
        assert args.device.lower(
        ) == "gpu", "TensorRT backend require inference on device GPU."
        option.use_trt_backend()
+    elif args.backend.lower() == "pptrt":
+        assert args.device.lower(
+        ) == "gpu", "TensorRT backend require inference on device GPU."
+        option.use_trt_backend()
+        option.enable_paddle_to_trt()
    elif args.backend.lower() == "ort":
        option.use_ort_backend()
    elif args.backend.lower() == "paddle":
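Putting the pieces together, here is a minimal, hedged sketch of loading the quantized model and running one image with the FastDeploy Python API. It assumes the quantized directory uses the default model.pdmodel/model.pdiparams filenames and that `fd.vision.detection.YOLOv6` accepts a Paddle-format model via `model_format`; treat it as illustrative rather than a verbatim copy of the shipped infer.py.

```python
# Hedged sketch: load the quantized YOLOv6s Paddle model and run one image.
# Filenames and the YOLOv6 constructor arguments are assumptions based on
# FastDeploy conventions, not a verbatim copy of the demo script.
import os
import cv2
import fastdeploy as fd

model_dir = "yolov6s_qat_model"
option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()  # TensorRT backend, as with --backend trt

model = fd.vision.detection.YOLOv6(
    os.path.join(model_dir, "model.pdmodel"),    # assumed filename
    os.path.join(model_dir, "model.pdiparams"),  # assumed filename
    runtime_option=option,
    model_format=fd.ModelFormat.PADDLE)          # quantized model is Paddle format

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)

# Visualize and save the detections.
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```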