[Quantization] Improve the usage of FastDeploy tools. (#660)

* Add PaddleOCR Support

* Add PaddleOCR Support

* Add PaddleOCRv3 Support

* Add PaddleOCRv3 Support

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add PaddleOCRv3 Support

* Add PaddleOCRv3 Support

* Add PaddleOCRv3 Support

* Fix Rec diff

* Remove useless functions

* Remove useless comments

* Add PaddleOCRv2 Support

* Add PaddleOCRv3 & PaddleOCRv2 Support

* remove useless parameters

* Add utils of sorting det boxes

* Fix code naming convention

* Fix code naming convention

* Fix code naming convention

* Fix bug in the Classify process

* Improve OCR Readme

* Fix diff in Cls model

* Update Model Download Link in Readme

* Fix diff in PPOCRv2

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Fix conflict

* Add readme for OCRResult

* Improve OCR readme

* Add OCRResult readme

* Improve OCR readme

* Improve OCR readme

* Add Model Quantization Demo

* Fix Model Quantization Readme

* Fix Model Quantization Readme

* Add the function to do PTQ quantization

* Improve quant tools readme

* Improve quant tool readme

* Improve quant tool readme

* Add PaddleInference-GPU for OCR Rec model

* Add QAT method to fastdeploy-quantization tool

* Remove examples/slim for now

* Move configs folder

* Add Quantization Support for Classification Model

* Improve ways of importing preprocess

* Upload YOLO Benchmark on readme

* Upload YOLO Benchmark on readme

* Upload YOLO Benchmark on readme

* Improve Quantization configs and readme

* Add support for multi-inputs model

* Add backends and params file for YOLOv7

* Add quantized model deployment support for YOLO series

* Fix YOLOv5 quantize readme

* Fix YOLO quantize readme

* Fix YOLO quantize readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Fix bug, change Frontend to ModelFormat

* Change Frontend to ModelFormat

* Add examples to deploy quantized paddleclas models

* Fix readme

* Add quantize Readme

* Add quantize Readme

* Add quantize Readme

* Modify readme of quantization tools

* Modify readme of quantization tools

* Improve quantization tools readme

* Improve quantization readme

* Improve PaddleClas quantized model deployment readme

* Add PPYOLOE-l quantized deployment examples

* Improve quantization tools readme

* Improve Quantize Readme

* Fix conflicts

* Fix conflicts

* improve readme

* Improve quantization tools and readme

* Improve quantization tools and readme

* Add quantized deployment examples for PaddleSeg model

* Fix cpp readme

* Fix memory leak of reader_wrapper function

* Fix model file name in PaddleClas quantization examples

* Update Runtime and E2E benchmark

* Update Runtime and E2E benchmark

* Rename quantization tools to auto compression tools

* Remove PPYOLOE data when deployed on MKLDNN

* Fix readme

* Support PPYOLOE with or without NMS and update readme

* Update Readme

* Update configs and readme

* Update configs and readme

* Add Paddle-TensorRT backend in quantized model deploy examples

* Support PPYOLOE+ series

* Add reused_input_tensors for PPYOLOE

* Improve fastdeploy tools usage

* improve fastdeploy tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* remove modify

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Remove extra requirements for fd-auto-compress package

* Improve fastdeploy-tools package

* Install fastdeploy-tools package when build fastdeploy-python

* Improve quantization readme
This commit is contained in:
yunyaoXYY
2022-11-23 10:13:50 +08:00
committed by GitHub
parent 521ec87cf5
commit 712d7fd71b
20 changed files with 33 additions and 69 deletions


@@ -3,5 +3,5 @@ requests
 tqdm
 numpy
 opencv-python
-fd-auto-compress>=0.0.1
+fastdeploy-tools
 pyyaml
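The diff above renames the requirement from `fd-auto-compress` to `fastdeploy-tools`. A small sketch of how downstream code could detect which of the two package names is actually installed during the transition; `resolve_tool_package` is a hypothetical helper, not part of FastDeploy:

```python
from importlib import metadata

# Hypothetical helper (not part of FastDeploy): report which of the old and
# new tool packages is installed, since the requirement was renamed from
# fd-auto-compress to fastdeploy-tools in this commit.
def resolve_tool_package(candidates=("fastdeploy-tools", "fd-auto-compress")):
    for name in candidates:
        try:
            # metadata.version raises PackageNotFoundError if absent
            return name, metadata.version(name)
        except metadata.PackageNotFoundError:
            continue
    return None, None

name, version = resolve_tool_package()
print(name, version)
```

Checking by distribution name rather than by import avoids the old-package/new-package import path mismatch seen later in this commit.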


@@ -22,14 +22,11 @@ git clone https://github.com/PaddlePaddle/PaddleSlim.git & cd PaddleSlim
 python setup.py install
 ```
-3. Install the fd-auto-compress one-click model auto compression tool
+3. Install the fastdeploy-tools package
 ```bash
-# Install fd-auto-compress via pip.
+# Install fastdeploy-tools via pip. This package currently supports one-click model auto compression and model conversion.
 # The FastDeploy python package already includes this tool; there is no need to install it again.
-pip install fd-auto-compress==0.0.1
-# Run the following command in the current directory
-python setup.py install
+pip install fastdeploy-tools==0.0.0
 ```
 ### Usage of the One-Click Model Auto Compression Tool
@@ -38,7 +35,7 @@ python setup.py install
 ```bash
 fastdeploy --auto_compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model/'
 ```
-For detailed usage, please refer to [FastDeploy One-Click Model Auto Compression Tool](./auto_compression/README.md)
+For detailed usage, please refer to [FastDeploy One-Click Model Auto Compression Tool](./common_tools/auto_compression/README.md)
 <p id="2"></p>


@@ -22,14 +22,12 @@ git clone https://github.com/PaddlePaddle/PaddleSlim.git & cd PaddleSlim
 python setup.py install
 ```
-3. Install the fd-auto-compress package
+3. Install the fastdeploy-tools package
 ```bash
-# Installing fd-auto-compress via pip
+# Installing fastdeploy-tools via pip
 # This tool is included in the python installer of FastDeploy, so you don't need to install it again.
-pip install fd-auto-compress==0.0.1
-# Execute in the current directory
-python setup.py install
+pip install fastdeploy-tools==0.0.0
 ```
 ### The Usage of One-Click Model Auto Compression Tool
@@ -37,7 +35,7 @@ After the above steps are successfully installed, you can use FastDeploy one-cli
 ```bash
 fastdeploy --auto_compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model/'
 ```
-For detailed documentation, please refer to [FastDeploy One-Click Model Auto Compression Tool](./auto_compression/README.md)
+For detailed documentation, please refer to [FastDeploy One-Click Model Auto Compression Tool](./common_tools/auto_compression/README_EN.md)
 <p id="2"></p>


@@ -1,22 +0,0 @@
-import setuptools
-import fd_auto_compress
-
-long_description = "fd_auto_compress is a toolkit for model auto compression of FastDeploy.\n\n"
-long_description += "Usage: fastdeploy --auto_compress --config_path=./yolov7_tiny_qat_dis.yaml --method='QAT' --save_dir='../v7_qat_outmodel/' \n"
-
-setuptools.setup(
-    name="fd_auto_compress",
-    version="0.0.1",
-    description="A toolkit for model auto compression of FastDeploy.",
-    long_description=long_description,
-    long_description_content_type="text/plain",
-    packages=setuptools.find_packages(),
-    author='fastdeploy',
-    author_email='fastdeploy@baidu.com',
-    url='https://github.com/PaddlePaddle/FastDeploy.git',
-    classifiers=[
-        "Programming Language :: Python :: 3",
-        "License :: OSI Approved :: Apache Software License",
-        "Operating System :: OS Independent",
-    ],
-    license='Apache 2.0', )


@@ -17,14 +17,8 @@ git clone https://github.com/PaddlePaddle/PaddleSlim.git & cd PaddleSlim
 python setup.py install
 ```
-### How to install the fastdeploy-auto-compression one-click model auto compression tool
-```
-# Install the fd-auto-compress package via pip
-pip install fd-auto-compress
-# Then run the following command in the parent directory (not this directory)
-python setup.py install
-```
+### How to install the one-click model auto compression tool
+FastDeploy one-click model auto compression does not require a separate installation; users only need to correctly install the [FastDeploy toolkit](../../README.md).
 ## 2. Usage


@@ -21,16 +21,8 @@ python setup.py install
 ```
 ### Install Fastdeploy Auto Compression Toolkit
-Run the following command to install
-```
-# Install fd-auto-compress package using pip
-pip install fd-auto-compress
-# Execute the following command in the previous directory (not in the current directory)
-python setup.py install
-```
+FastDeploy One-Click Model Auto Compression does not require a separate installation; users only need to properly install the [FastDeploy Toolkit](../../README.md)
 ## 2. How to Use


@@ -5,37 +5,41 @@ In addition to using the configuration files provided by FastDeploy directly in
 ## Demo
 ```
 # Global config
 Global:
-  model_dir: ./yolov5s.onnx                  #Path to input model
-  format: 'onnx'                             #Input model format, please select 'paddle' for paddle model
+  model_dir: ./ppyoloe_plus_crn_s_80e_coco   #Path to input model
+  format: paddle                             #Input model format, please select 'paddle' for paddle model
   model_filename: model.pdmodel              #Quantized model name in Paddle format
-  params_filename: model.pdiparams           #Parameter name for quantized model name in Paddle format
-  image_path: ./COCO_val_320                 #Data set paths for post-training quantization or quantized distillation
-  arch: YOLOv5                               #Model Architecture
-  input_list: ['x2paddle_images']            #Input name of the model to be quantified
-  preprocess: yolo_image_preprocess          #The preprocessing functions for the data when quantizing the model. Developers can modify or write a new one in ./fdquant/dataset.py
+  params_filename: model.pdiparams           #Parameter name for quantized paddle model
+  qat_image_path: ./COCO_train_320           #Data set path for quantization distillation training
+  ptq_image_path: ./COCO_val_320             #Data set path for PTQ
+  input_list: ['image','scale_factor']       #Input names of the model to be quantized
+  qat_preprocess: ppyoloe_plus_withNMS_image_preprocess   #The preprocessing function for quantization distillation training
+  ptq_preprocess: ppyoloe_plus_withNMS_image_preprocess   #The preprocessing function for PTQ
+  qat_batch_size: 4                          #Batch size
-#uantization distillation training configuration
+# Quantization distillation training configuration
 Distillation:
-  alpha: 1.0        # Distillation loss weight
+  alpha: 1.0        #Distillation loss weight
   loss: soft_label  #Distillation loss algorithm
 Quantization:
-  onnx_format: true #Whether to use ONNX quantization standard format or not, must be true to deploy on FastDeploye
+  onnx_format: true #Whether to use ONNX quantization standard format or not, must be true to deploy on FastDeploy
   use_pact: true    #Whether to use the PACT method for training
-  activation_quantize_type: 'moving_average_abs_max' #Activate quantization methods
+  activation_quantize_type: 'moving_average_abs_max' #Activation quantization method
   quantize_op_types: #OPs that need to be quantized
   - conv2d
   - depthwise_conv2d
-#Post-Training Quantization
+# Post-Training Quantization
 PTQ:
-  calibration_method: 'avg' #Activate calibration algorithm of post-training quantization, Options: avg, abs_max, hist, KL, mse, emd
+  calibration_method: 'avg' #Activation calibration algorithm of post-training quantization, Options: avg, abs_max, hist, KL, mse, emd
   skip_tensor_list: None    #Developers can skip quantization of some conv layers
-#Traning
+# Training Config
 TrainConfig:
   train_iter: 3000
   learning_rate: 0.00001
@@ -44,8 +48,9 @@ TrainConfig:
   type: SGD
   weight_decay: 4.0e-05
   target_metric: 0.365
 ```
 ## More details
-FastDeploy one-click quantization tool is powered by PaddeSlim, please refer to [Automated Compression of Hyperparameter Tutorial](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/example/auto_compression/hyperparameter_tutorial.md) for more details.
+FastDeploy one-click quantization tool is powered by PaddleSlim, please refer to [Auto Compression Hyperparameter Tutorial](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/example/auto_compression/hyperparameter_tutorial.md) for more details.
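The config diff above splits the single `image_path`/`preprocess` pair into separate `qat_*` and `ptq_*` settings. A minimal sketch (not FastDeploy's actual validation code) of checking that a parsed config carries the sections the new demo uses; the config would normally come from `yaml.safe_load()`, but a dict literal stands in here so the sketch needs only the standard library:

```python
# Required keys under Global, per the demo config in this commit.
REQUIRED_GLOBAL_KEYS = {
    "model_dir", "format", "model_filename", "params_filename",
    "qat_image_path", "ptq_image_path", "input_list",
    "qat_preprocess", "ptq_preprocess", "qat_batch_size",
}

def validate_config(cfg: dict) -> list:
    """Return a list of problems; an empty list means the config looks usable."""
    problems = []
    missing = REQUIRED_GLOBAL_KEYS - set(cfg.get("Global", {}))
    if missing:
        problems.append(f"Global is missing keys: {sorted(missing)}")
    if "Quantization" not in cfg:
        problems.append("missing Quantization section")
    # The README notes onnx_format must be true to deploy on FastDeploy.
    if not cfg.get("Quantization", {}).get("onnx_format", False):
        problems.append("onnx_format must be true to deploy on FastDeploy")
    return problems

# Stand-in for yaml.safe_load(open("ppyoloe_plus_quant.yaml"))
cfg = {
    "Global": {k: "..." for k in REQUIRED_GLOBAL_KEYS},
    "Quantization": {"onnx_format": True},
}
print(validate_config(cfg))
```

A check like this fails fast before a multi-hour distillation run instead of partway through it.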


@@ -78,7 +78,7 @@ def main():
     args = argsparser().parse_args()
     if args.auto_compress == True:
         try:
-            from fd_auto_compress.fd_auto_compress import auto_compress
+            from .auto_compression.fd_auto_compress.fd_auto_compress import auto_compress
            print("Welcome to use FastDeploy Auto Compression Toolkit!")
            auto_compress(args)
        except ImportError: