Quantization Config Files in FastDeploy
A FastDeploy quantization configuration file contains the global configuration, the quantization distillation training configuration, the post-training quantization configuration, and the training configuration. In addition to directly using the configuration files provided by FastDeploy in this directory, users can modify the relevant configuration files according to their needs.
Demo
# Global config
Global:
  model_dir: ./ppyoloe_plus_crn_s_80e_coco    # Path to the input model
  format: paddle                              # Input model format; use 'paddle' for Paddle models
  model_filename: model.pdmodel               # Model file name of the Paddle model to be quantized
  params_filename: model.pdiparams            # Parameter file name of the Paddle model to be quantized
  qat_image_path: ./COCO_train_320            # Dataset path for quantization distillation training
  ptq_image_path: ./COCO_val_320              # Dataset path for post-training quantization (PTQ)
  input_list: ['image', 'scale_factor']       # Input names of the model to be quantized
  qat_preprocess: ppyoloe_plus_withNMS_image_preprocess  # Preprocessing function for quantization distillation training
  ptq_preprocess: ppyoloe_plus_withNMS_image_preprocess  # Preprocessing function for PTQ
  qat_batch_size: 4                           # Batch size for quantization distillation training

# Quantization distillation training configuration
Distillation:
  alpha: 1.0                                  # Distillation loss weight
  loss: soft_label                            # Distillation loss algorithm

Quantization:
  onnx_format: true                           # Whether to use the ONNX quantization standard format; must be true to deploy with FastDeploy
  use_pact: true                              # Whether to use the PACT method during training
  activation_quantize_type: 'moving_average_abs_max'  # Activation quantization method
  quantize_op_types:                          # OPs to be quantized
  - conv2d
  - depthwise_conv2d

# Post-training quantization configuration
PTQ:
  calibration_method: 'avg'                   # Activation calibration algorithm for PTQ; options: avg, abs_max, hist, KL, mse, emd
  skip_tensor_list: None                      # Conv layers whose quantization should be skipped

# Training configuration
TrainConfig:
  train_iter: 3000
  learning_rate: 0.00001
  optimizer_builder:
    optimizer:
      type: SGD
    weight_decay: 4.0e-05
  target_metric: 0.365
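As a rough illustration of the section layout above, the sketch below mirrors the demo config as a parsed dictionary (as a YAML loader such as PyYAML would produce) and checks that the expected sections and keys are present. The `validate_config` helper is a hypothetical illustration, not part of FastDeploy or PaddleSlim.

```python
# The dict below mirrors the demo config in this document, as it would
# look after YAML parsing. `validate_config` is an illustrative helper.

config = {
    "Global": {
        "model_dir": "./ppyoloe_plus_crn_s_80e_coco",
        "format": "paddle",
        "model_filename": "model.pdmodel",
        "params_filename": "model.pdiparams",
        "qat_image_path": "./COCO_train_320",
        "ptq_image_path": "./COCO_val_320",
        "input_list": ["image", "scale_factor"],
        "qat_preprocess": "ppyoloe_plus_withNMS_image_preprocess",
        "ptq_preprocess": "ppyoloe_plus_withNMS_image_preprocess",
        "qat_batch_size": 4,
    },
    "Distillation": {"alpha": 1.0, "loss": "soft_label"},
    "Quantization": {
        "onnx_format": True,
        "use_pact": True,
        "activation_quantize_type": "moving_average_abs_max",
        "quantize_op_types": ["conv2d", "depthwise_conv2d"],
    },
    "PTQ": {"calibration_method": "avg", "skip_tensor_list": None},
    "TrainConfig": {
        "train_iter": 3000,
        "learning_rate": 1e-05,
        "optimizer_builder": {"optimizer": {"type": "SGD"}, "weight_decay": 4.0e-05},
        "target_metric": 0.365,
    },
}

def validate_config(cfg):
    """Check that the required top-level sections and Global keys exist."""
    required_sections = ["Global", "Distillation", "Quantization", "PTQ", "TrainConfig"]
    missing = [s for s in required_sections if s not in cfg]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    for key in ("model_dir", "format", "input_list"):
        if key not in cfg["Global"]:
            raise ValueError(f"Global is missing required key: {key}")
    # Per the comment in the demo: onnx_format must be true for FastDeploy.
    if cfg["Quantization"].get("onnx_format") is not True:
        raise ValueError("onnx_format must be true to deploy with FastDeploy")
    return True

print(validate_config(config))  # True
```

A check like this can catch a mistyped section name before a long quantization run starts.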
More details
The FastDeploy one-click quantization tool is powered by PaddleSlim. Please refer to the Auto Compression Hyperparameter Tutorial for more details.