Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2025-10-05 16:48:03 +08:00)
[Quantization] Change the usage of FastDeploy auto compression tool. (#576)
* Add PaddleOCR Support
* Add PaddleOCR Support
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Fix Rec diff
* Remove useless functions
* Remove useless comments
* Add PaddleOCRv2 Support
* Add PaddleOCRv3 & PaddleOCRv2 Support
* Remove useless parameters
* Add utils of sorting det boxes
* Fix code naming convention
* Fix code naming convention
* Fix code naming convention
* Fix bug in the Classify process
* Improve OCR Readme
* Fix diff in Cls model
* Update Model Download Link in Readme
* Fix diff in PPOCRv2
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Fix conflict
* Add readme for OCRResult
* Improve OCR readme
* Add OCRResult readme
* Improve OCR readme
* Improve OCR readme
* Add Model Quantization Demo
* Fix Model Quantization Readme
* Fix Model Quantization Readme
* Add the function to do PTQ quantization
* Improve quant tools readme
* Improve quant tool readme
* Improve quant tool readme
* Add PaddleInference-GPU for OCR Rec model
* Add QAT method to fastdeploy-quantization tool
* Remove examples/slim for now
* Move configs folder
* Add Quantization Support for Classification Model
* Improve ways of importing preprocess
* Upload YOLO Benchmark on readme
* Upload YOLO Benchmark on readme
* Upload YOLO Benchmark on readme
* Improve Quantization configs and readme
* Add support for multi-inputs model
* Add backends and params file for YOLOv7
* Add quantized model deployment support for YOLO series
* Fix YOLOv5 quantize readme
* Fix YOLO quantize readme
* Fix YOLO quantize readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Fix bug, change Frontend to ModelFormat
* Change Frontend to ModelFormat
* Add examples to deploy quantized PaddleClas models
* Fix readme
* Add quantize Readme
* Add quantize Readme
* Add quantize Readme
* Modify readme of quantization tools
* Modify readme of quantization tools
* Improve quantization tools readme
* Improve quantization readme
* Improve PaddleClas quantized model deployment readme
* Add PPYOLOE-l quantized deployment examples
* Improve quantization tools readme
* Improve Quantize Readme
* Fix conflicts
* Fix conflicts
* Improve readme
* Improve quantization tools and readme
* Improve quantization tools and readme
* Add quantized deployment examples for PaddleSeg model
* Fix cpp readme
* Fix memory leak of reader_wrapper function
* Fix model file name in PaddleClas quantization examples
* Update Runtime and E2E benchmark
* Update Runtime and E2E benchmark
* Rename quantization tools to auto compression tools
* Remove PPYOLOE data when deployed on MKLDNN
* Fix readme
* Support PPYOLOE with or without NMS and update readme
* Update Readme
* Update configs and readme
* Update configs and readme
* Add Paddle-TensorRT backend in quantized model deploy examples
* Support PPYOLOE+ series
* Add reused_input_tensors for PPYOLOE
* Improve fastdeploy tools usage
* Improve fastdeploy tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Remove modification
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
* Improve fastdeploy auto compression tool
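For reference, the diff below adds a single unified entry point, tools/common_tools/common_tools.py. Under this change the auto compression tool would be launched roughly like the following sketch; the config file name is a hypothetical placeholder, and any packaged console command that wraps this script is not shown in this commit:

    python tools/common_tools/common_tools.py --auto_compress --config_path ./configs/model_quant.yaml --method PTQ --save_dir ./quant_output --devices gpu

The flags --auto_compress, --config_path, --method, --save_dir, and --devices are taken directly from the argparse definitions in the new file.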
tools/common_tools/common_tools.py (new file, 51 lines added)
@@ -0,0 +1,51 @@
import argparse


def argsparser():
    parser = argparse.ArgumentParser(description=__doc__)
    ## arguments for auto compression
    parser.add_argument('--auto_compress', default=False, action='store_true')
    parser.add_argument(
        '--config_path',
        type=str,
        default=None,
        help="Path of the compression strategy config file.",
        required=True)
    parser.add_argument(
        '--method',
        type=str,
        default=None,
        help="Choose PTQ or QAT as the quantization method.",
        required=True)
    parser.add_argument(
        '--save_dir',
        type=str,
        default='./output',
        help="Directory to save the compressed model.")
    parser.add_argument(
        '--devices',
        type=str,
        default='gpu',
        help="Device used to run the compression.")

    ## arguments for other tools
    return parser


def main():
    args = argsparser().parse_args()
    if args.auto_compress:
        # Import lazily so the common tool still starts even when the
        # auto compression package is not installed.
        try:
            from fd_auto_compress.fd_auto_compress import auto_compress
            print("Welcome to use FastDeploy Auto Compression Toolkit!")
            auto_compress(args)
        except ImportError:
            print(
                "Cannot start auto compression! Please check that the auto compression package is installed."
            )


if __name__ == '__main__':
    main()
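Below is a minimal sketch (not part of the commit) of the CLI contract that argsparser() defines. It assumes tools/common_tools is on PYTHONPATH so the module can be imported directly, and the config file name is a hypothetical placeholder:

    from common_tools import argsparser

    # Parse an explicit argument list instead of sys.argv to illustrate the accepted flags.
    args = argsparser().parse_args([
        '--auto_compress',                     # selects the auto compression branch in main()
        '--config_path', 'quant_config.yaml',  # hypothetical compression strategy config
        '--method', 'PTQ',                     # 'QAT' is the other accepted value
        '--save_dir', './quant_output',
        '--devices', 'gpu',
    ])
    print(args.auto_compress, args.method, args.save_dir)  # -> True PTQ ./quant_output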