[Quantization] Remove extra requirements for fd-auto-compress package (#582)

* Add PaddleOCR Support

* Add PaddleOCR Support

* Add PaddleOCRv3 Support

* Add PaddleOCRv3 Support

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add PaddleOCRv3 Support

* Add PaddleOCRv3 Support

* Add PaddleOCRv3 Support

* Fix Rec diff

* Remove useless functions

* Remove useless comments

* Add PaddleOCRv2 Support

* Add PaddleOCRv3 & PaddleOCRv2 Support

* remove useless parameters

* Add utils of sorting det boxes

* Fix code naming convention

* Fix code naming convention

* Fix code naming convention

* Fix bug in the Classify process

* Improve OCR Readme

* Fix diff in Cls model

* Update Model Download Link in Readme

* Fix diff in PPOCRv2

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Improve OCR readme

* Fix conflict

* Add readme for OCRResult

* Improve OCR readme

* Add OCRResult readme

* Improve OCR readme

* Improve OCR readme

* Add Model Quantization Demo

* Fix Model Quantization Readme

* Fix Model Quantization Readme

* Add the function to do PTQ quantization

* Improve quant tools readme

* Improve quant tool readme

* Improve quant tool readme

* Add PaddleInference-GPU for OCR Rec model

* Add QAT method to fastdeploy-quantization tool

* Remove examples/slim for now

* Move configs folder

* Add Quantization Support for Classification Model

* Improve ways of importing preprocess

* Upload YOLO Benchmark on readme

* Upload YOLO Benchmark on readme

* Upload YOLO Benchmark on readme

* Improve Quantization configs and readme

* Add support for multi-inputs model

* Add backends and params file for YOLOv7

* Add quantized model deployment support for YOLO series

* Fix YOLOv5 quantize readme

* Fix YOLO quantize readme

* Fix YOLO quantize readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Improve quantize YOLO readme

* Fix bug, change Frontend to ModelFormat

* Change Frontend to ModelFormat

* Add examples to deploy quantized paddleclas models

* Fix readme

* Add quantize Readme

* Add quantize Readme

* Add quantize Readme

* Modify readme of quantization tools

* Modify readme of quantization tools

* Improve quantization tools readme

* Improve quantization readme

* Improve PaddleClas quantized model deployment readme

* Add PPYOLOE-l quantized deployment examples

* Improve quantization tools readme

* Improve Quantize Readme

* Fix conflicts

* Fix conflicts

* improve readme

* Improve quantization tools and readme

* Improve quantization tools and readme

* Add quantized deployment examples for PaddleSeg model

* Fix cpp readme

* Fix memory leak of reader_wrapper function

* Fix model file name in PaddleClas quantization examples

* Update Runtime and E2E benchmark

* Update Runtime and E2E benchmark

* Rename quantization tools to auto compression tools

* Remove PPYOLOE data when deployed on MKLDNN

* Fix readme

* Support PPYOLOE with or without NMS and update readme

* Update Readme

* Update configs and readme

* Update configs and readme

* Add Paddle-TensorRT backend in quantized model deploy examples

* Support PPYOLOE+ series

* Add reused_input_tensors for PPYOLOE

* Improve fastdeploy tools usage

* improve fastdeploy tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Remove modifications

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Improve fastdeploy auto compression tool

* Remove extra requirements for fd-auto-compress package
Authored by yunyaoXYY on 2022-11-14 16:39:31 +08:00, committed by GitHub
parent f42218a88e
commit 8dec2115d5
5 changed files with 5 additions and 9 deletions

View File

@@ -3,4 +3,4 @@ requests
 tqdm
 numpy
 opencv-python
-fd-auto-compress>=0.0.0
+fd-auto-compress>=0.0.1

View File

@@ -20,7 +20,7 @@ python setup.py install
 ```bash
 # Install fd-auto-compress via pip.
 # The FastDeploy Python package already includes this tool, so there is no need to install it again.
-pip install fd-auto-compress
+pip install fd-auto-compress==0.0.1
 # Run the following command in the current directory
 python setup.py install

View File

@@ -21,7 +21,7 @@ python setup.py install
 ```bash
 # Installing fd-auto-compress via pip
 # This tool is included in the python installer of FastDeploy, so you don't need to install it again.
-pip install fd-auto-compress
+pip install fd-auto-compress==0.0.1
 # Execute in the current directory
 python setup.py install
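
Both README hunks above switch from an unpinned install to an explicit `fd-auto-compress==0.0.1` pin. As a quick sanity check (nothing here beyond standard pip commands), you can confirm after installation that pip resolved the pinned release:

```bash
# Install the pinned auto-compression package, as the updated README instructs
pip install fd-auto-compress==0.0.1

# Verify that the installed version matches the 0.0.1 pin
pip show fd-auto-compress
```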

View File

@@ -1 +0,0 @@
-paddleslim

View File

@@ -4,11 +4,9 @@ import fd_auto_compress
 long_description = "fd_auto_compress is a toolkit for model auto compression of FastDeploy.\n\n"
 long_description += "Usage: fastdeploy --auto_compress --config_path=./yolov7_tiny_qat_dis.yaml --method='QAT' --save_dir='../v7_qat_outmodel/' \n"
-with open("requirements.txt") as fin:
-    REQUIRED_PACKAGES = fin.read()
 setuptools.setup(
-    name="fd_auto_compress",  # name of package
+    name="fd_auto_compress",
     version="0.0.1",
     description="A toolkit for model auto compression of FastDeploy.",
     long_description=long_description,
     long_description_content_type="text/plain",
@@ -16,7 +14,6 @@ setuptools.setup(
     author='fastdeploy',
     author_email='fastdeploy@baidu.com',
     url='https://github.com/PaddlePaddle/FastDeploy.git',
-    install_requires=REQUIRED_PACKAGES,
     classifiers=[
         "Programming Language :: Python :: 3",
         "License :: OSI Approved :: Apache Software License",