
# FastDeploy One-Click Automated Model Compression

Based on PaddleSlim's Auto Compression Toolkit (ACT), FastDeploy provides a one-click automated model compression tool. This document uses YOLOv5s as an example to show how to install the tool and run the compression.
## 1. Installation

### Environment Dependencies

1. Install the develop version of PaddlePaddle, following the official installation guide:
https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/develop/install/pip/linux-pip.html
2. Install the develop version of PaddleSlim:

```shell
git clone https://github.com/PaddlePaddle/PaddleSlim.git && cd PaddleSlim
python setup.py install
```
### Installing the fastdeploy-auto-compression tool

Run the following command in the current directory:

```shell
python setup.py install
```
## 2. Usage

### One-Click Compression Examples

FastDeploy's one-click automated compression supports multiple strategies; currently it mainly uses offline (post-training) quantization and quantization-aware distillation training. The two sections below show how to use each strategy.
### Offline Quantization (PTQ)

#### 1. Prepare the model and the calibration dataset

Prepare the model to be quantized and a calibration dataset. For this example, run the following commands to download the yolov5s.onnx model and the sample calibration dataset we provide.
```shell
# Download yolov5s.onnx
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
# Download the calibration dataset: the first 320 images of COCO val2017
wget https://bj.bcebos.com/paddlehub/fastdeploy/COCO_val_320.tar.gz
tar -xvf COCO_val_320.tar.gz
```
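If you prefer to calibrate on your own deployment-scenario images, a subset of the first few hundred images is usually enough. The sketch below shows one way to build such a subset with the Python standard library; the directory names and the image-extension list are illustrative, not part of the tool.

```python
from pathlib import Path
import shutil

def build_calibration_set(src_dir: str, dst_dir: str, n: int = 320) -> int:
    """Copy the first n images (sorted by name) from src_dir into dst_dir.

    Returns the number of images actually copied.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    exts = {".jpg", ".jpeg", ".png", ".bmp"}
    images = sorted(p for p in src.iterdir() if p.suffix.lower() in exts)
    for p in images[:n]:
        shutil.copy2(p, dst / p.name)
    return min(n, len(images))
```

Point `src_dir` at a folder of real deployment images and use the resulting directory as your calibration dataset.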
#### 2. Run the fastdeploy_auto_compress command to compress the model

The following command quantizes the yolov5s model. To quantize a different model, point `--config_path` at the corresponding config file in the `configs` folder.

```shell
fastdeploy_auto_compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model/'
```

Note: offline quantization is also known as post-training quantization, abbreviated PTQ.
#### 3. Parameters

To run quantization, you only need to provide a model config file and specify the quantization method and the output directory for the quantized model.

| Parameter | Description |
|---|---|
| --config_path | Quantization config file required for one-click compression (see the config file reference below) |
| --method | Compression method: `PTQ` for offline quantization, `QAT` for quantization-aware distillation training |
| --save_dir | Output directory for the quantized model, which can be deployed directly with FastDeploy |
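The authoritative schema for `--config_path` is defined by the YAML files shipped in the `configs` folder; the fragment below only illustrates the general shape of such a file, and every field name in it is an example rather than a guaranteed part of the schema.

```yaml
# Illustrative only -- consult configs/detection/yolov5s_quant.yaml for the real fields.
Global:
  model_dir: ./yolov5s.onnx         # model to be compressed
  qat_image_path: ./COCO_train_320  # unlabeled images for distillation training
  ptq_image_path: ./COCO_val_320    # calibration images for offline quantization
```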
### Quantization-Aware Distillation Training (QAT)

#### 1. Prepare the model and the training dataset

FastDeploy's quantization-aware distillation training currently only supports training on unlabeled images, and model accuracy cannot be evaluated during training. Use images from the real deployment scenario; the number of images depends on the dataset, but try to cover all deployment scenes. For this example, we provide the first 320 images of the COCO2017 training set. Note: to obtain a more accurate quantized model via distillation training, prepare more data and train for more epochs.
```shell
# Download yolov5s.onnx
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
# Download the dataset: the first 320 images of the COCO2017 training set
wget https://bj.bcebos.com/paddlehub/fastdeploy/COCO_train_320.tar
tar -xvf COCO_train_320.tar
```
#### 2. Run the fastdeploy_auto_compress command to compress the model

The following command quantizes the yolov5s model. To quantize a different model, point `--config_path` at the corresponding config file in the `configs` folder.

```shell
# Training defaults to a single GPU; set one visible GPU before training, otherwise training may hang.
export CUDA_VISIBLE_DEVICES=0
fastdeploy_auto_compress --config_path=./configs/detection/yolov5s_quant.yaml --method='QAT' --save_dir='./yolov5s_qat_model/'
```
#### 3. Parameters

To run quantization, you only need to provide a model config file and specify the quantization method and the output directory for the quantized model.

| Parameter | Description |
|---|---|
| --config_path | Quantization config file required for one-click compression (see the config file reference below) |
| --method | Compression method: `PTQ` for offline quantization, `QAT` for quantization-aware distillation training |
| --save_dir | Output directory for the quantized model, which can be deployed directly with FastDeploy |
## 3. Config File Reference

FastDeploy provides compression config files for several models, along with the corresponding FP32 models, ready to download and try.

| Config file | FP32 model to compress | Notes |
|---|---|---|
| mobilenetv1_ssld_quant | mobilenetv1_ssld | |
| resnet50_vd_quant | resnet50_vd | |
| yolov5s_quant | yolov5s | |
| yolov6s_quant | yolov6s | |
| yolov7_quant | yolov7 | |
| ppyoloe_withNMS_quant | ppyoloe_l | Supports the s/m/l/x PPYOLOE series. Export the model from PaddleDetection normally; do not remove the NMS |
| ppyoloe_plus_withNMS_quant | ppyoloe_plus_s | Supports the s/m/l/x PPYOLOE+ series. Export the model from PaddleDetection normally; do not remove the NMS |
| pp_liteseg_quant | pp_liteseg | |
## 4. Deploying Quantized Models with FastDeploy

After obtaining the quantized model, you can deploy it with FastDeploy; please refer to the deployment example documentation.
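Before deployment, it can help to verify that the compression step actually produced the files FastDeploy expects. Assuming the tool writes a Paddle-format model, i.e. a `.pdmodel` graph file plus a `.pdiparams` weights file (the usual ACT output; this naming is an assumption, not confirmed by this document), a small check might look like:

```python
from pathlib import Path

def find_paddle_model(save_dir: str):
    """Return (model_file, params_file) found in save_dir.

    Raises FileNotFoundError if either file is missing.
    """
    d = Path(save_dir)
    models = sorted(d.glob("*.pdmodel"))
    params = sorted(d.glob("*.pdiparams"))
    if not models or not params:
        raise FileNotFoundError(f"no Paddle model/params pair found in {d}")
    return str(models[0]), str(params[0])
```

The returned pair is what you would normally pass as the model and params paths when constructing a FastDeploy model object.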