[Quantization] Update quantized model deployment examples and update readme. (#377)
* Add PaddleOCR Support
* Add PaddleOCR Support
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Add PaddleOCRv3 Support
* Fix Rec diff
* Remove useless functions
* Remove useless comments
* Add PaddleOCRv2 Support
* Add PaddleOCRv3 & PaddleOCRv2 Support
* Remove useless parameters
* Add utils for sorting det boxes
* Fix code naming convention
* Fix code naming convention
* Fix code naming convention
* Fix bug in the Classify process
* Improve OCR Readme
* Fix diff in Cls model
* Update Model Download Link in Readme
* Fix diff in PPOCRv2
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Improve OCR readme
* Fix conflict
* Add readme for OCRResult
* Improve OCR readme
* Add OCRResult readme
* Improve OCR readme
* Improve OCR readme
* Add Model Quantization Demo
* Fix Model Quantization Readme
* Fix Model Quantization Readme
* Add the function to do PTQ quantization
* Improve quant tools readme
* Improve quant tool readme
* Improve quant tool readme
* Add PaddleInference-GPU for OCR Rec model
* Add QAT method to fastdeploy-quantization tool
* Remove examples/slim for now
* Move configs folder
* Add Quantization Support for Classification Model
* Improve ways of importing preprocess
* Upload YOLO Benchmark to readme
* Upload YOLO Benchmark to readme
* Upload YOLO Benchmark to readme
* Improve Quantization configs and readme
* Add support for multi-input models
* Add backends and params file for YOLOv7
* Add quantized model deployment support for YOLO series
* Fix YOLOv5 quantize readme
* Fix YOLO quantize readme
* Fix YOLO quantize readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Improve quantize YOLO readme
* Fix bug, change Frontend to ModelFormat
* Change Frontend to ModelFormat
* Add examples to deploy quantized PaddleClas models
* Fix readme
* Add quantize Readme
* Add quantize Readme
* Add quantize Readme
* Modify readme of quantization tools
* Modify readme of quantization tools
* Improve quantization tools readme
* Improve quantization readme
* Improve PaddleClas quantized model deployment readme
* Add PPYOLOE-l quantized deployment examples
* Improve quantization tools readme
* Improve Quantize Readme
* Fix conflicts
* Fix conflicts
* Improve readme
* Improve quantization tools and readme
* Improve quantization tools and readme
* Add quantized deployment examples for PaddleSeg model
* Fix cpp readme
* Fix memory leak of reader_wrapper function
* Fix model file name in PaddleClas quantization examples
* Update Runtime and E2E benchmark
* Update Runtime and E2E benchmark
* Rename quantization tools to auto compression tools
* Remove PPYOLOE data when deployed on MKLDNN
* Fix readme
* Support PPYOLOE with or without NMS and update readme
* Update Readme
* Update configs and readme
* Update configs and readme
* Add Paddle-TensorRT backend in quantized model deploy examples
* Support PPYOLOE+ series
examples/vision/segmentation/paddleseg/quantize/README.md (new file)
@@ -0,0 +1,36 @@
# Deploying Quantized PaddleSeg Models

FastDeploy supports deploying quantized models and provides a one-click auto-compression tool.
Users can quantize a model themselves with this tool and deploy the result, or directly download and deploy the quantized models provided by FastDeploy.

## FastDeploy One-Click Auto-Compression Tool

FastDeploy provides a one-click auto-compression tool that quantizes a model given nothing more than a configuration file.
For a detailed tutorial, see: [One-Click Auto-Compression Tool](../../../../../tools/auto_compression/)

Note: running inference with a quantized segmentation model still requires the deploy.yaml file from the FP32 model folder. A model folder you quantize yourself does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.
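A minimal sketch of that copy step in Python (the folder names below are placeholders for your own FP32 and quantized model folders):

```python
import shutil
from pathlib import Path

# Placeholder paths: substitute your own FP32 and quantized model folders.
fp32_dir = Path("PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer")
quant_dir = Path("my_quantized_model_dir")

# The self-quantized folder lacks deploy.yaml; copy it over from the FP32 folder.
shutil.copy(fp32_dir / "deploy.yaml", quant_dir / "deploy.yaml")
```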

## Download Quantized PaddleSeg Models

Users can also directly download and deploy the quantized models in the tables below (click a model name to download).

Notes on the benchmark tables:
- Runtime latency is the model's inference latency on each runtime, including the CPU->GPU data copy, GPU inference, and the GPU->CPU data copy, but excluding the model's pre- and post-processing.
- End-to-end latency is the latency in a real inference scenario, including the model's pre- and post-processing.
- All reported latencies are averages over 1000 inference runs, in milliseconds.
- INT8 + FP16 means the runtime's FP16 inference option is enabled while running the INT8 quantized model.
- INT8 + FP16 + PM means pinned memory is enabled on top of INT8 + FP16, which speeds up GPU->CPU data copies (see the sketch after this list).
- Max speedup is the FP32 latency divided by the fastest INT8 latency.
- For quantized distillation training, the quantized model is trained on a small unlabeled dataset and its accuracy is verified on the full validation set, so the reported INT8 accuracy is not necessarily the best achievable.
- The CPU is an Intel(R) Xeon(R) Gold 6271C with the thread count fixed to 1 in all tests; the GPU is a Tesla T4 with TensorRT version 8.4.15.
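For reference, a hedged sketch of how these switches map onto FastDeploy's Python `RuntimeOption` (this assumes the `enable_trt_fp16` and `enable_pinned_memory` methods exist in your FastDeploy release; verify against your installed version):

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu(0)              # run on GPU 0
option.use_trt_backend()       # TensorRT backend for the INT8 model
option.enable_trt_fp16()       # "INT8 + FP16": enable the FP16 inference option
option.enable_pinned_memory()  # "+ PM": pinned memory to speed up GPU->CPU copies
```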

#### Runtime Benchmark

| Model | Inference Backend | Hardware | FP32 Runtime Latency | INT8 Runtime Latency | INT8 + FP16 Runtime Latency | INT8 + FP16 + PM Runtime Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Quantization Method |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 1138.04 | 602.62 | None | None | 1.89 | 77.37 | 71.62 | Quantized distillation training |

#### End-to-End Benchmark

| Model | Inference Backend | Hardware | FP32 End-to-End Latency | INT8 End-to-End Latency | INT8 + FP16 End-to-End Latency | INT8 + FP16 + PM End-to-End Latency | Max Speedup | FP32 mIoU | INT8 mIoU | Quantization Method |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [PP-LiteSeg-T(STDC1)-cityscapes](https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_QAT_new.tar) | Paddle Inference | CPU | 4726.65 | 4134.91 | None | None | 1.14 | 77.37 | 71.62 | Quantized distillation training |
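As a quick sanity check of the Max Speedup column, dividing the FP32 latency by the fastest INT8 latency reproduces the table values:

```python
print(round(1138.04 / 602.62, 2))   # Runtime benchmark: 1.89
print(round(4726.65 / 4134.91, 2))  # End-to-end benchmark: 1.14
```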

## Detailed Deployment Documentation

- [Python deployment](python)
- [C++ deployment](cpp)
examples/vision/segmentation/paddleseg/quantize/cpp/CMakeLists.txt (new file)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED(VERSION 3.12)

# Path to the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy include directories
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy libraries to the demo target
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
examples/vision/segmentation/paddleseg/quantize/cpp/README.md (new file)
@@ -0,0 +1,30 @@
# C++ Deployment Example for Quantized PaddleSeg Models

`infer.cc` in this directory helps users quickly deploy a quantized PaddleSeg model on CPU/GPU with accelerated inference.

## Deployment Preparation
### FastDeploy Environment Preparation
1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
2. The FastDeploy C++ SDK is installed; see [FastDeploy Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

### Quantized Model Preparation
1. Users can directly deploy the quantized models provided by FastDeploy.
2. Users can quantize a model themselves with FastDeploy's [One-Click Auto-Compression Tool](../../tools/auto_compression/) and deploy the resulting quantized model. (Note: running inference with a quantized segmentation model still requires the deploy.yaml file from the FP32 model folder; a model folder you quantize yourself does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
Run the following commands in this directory to build the demo and deploy the quantized model.
```bash
mkdir build
cd build
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-0.3.0.tgz
tar xvf fastdeploy-linux-x64-0.3.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.3.0
make -j

# Download the quantized PP_LiteSeg_T_STDC1_cityscapes model and a test image provided by FastDeploy
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Run the quantized model on CPU with Paddle Inference (run_option 1)
./infer_demo PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ cityscapes_demo.png 1
```
examples/vision/segmentation/paddleseg/quantize/cpp/infer.cc (new file)
@@ -0,0 +1,100 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <cassert>

#include "fastdeploy/vision.h"

#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif

void InitAndInfer(const std::string& model_dir, const std::string& image_file,
                  const fastdeploy::RuntimeOption& option) {
  auto model_file = model_dir + sep + "model.pdmodel";
  auto params_file = model_dir + sep + "model.pdiparams";
  auto config_file = model_dir + sep + "deploy.yaml";

  auto model = fastdeploy::vision::segmentation::PaddleSegModel(
      model_file, params_file, config_file, option);

  assert(model.Initialized());

  auto im = cv::imread(image_file);

  fastdeploy::vision::SegmentationResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  std::cout << res.Str() << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/quant_model "
                 "path/to/image "
                 "run_option, "
                 "e.g ./infer_demo "
                 "./PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ "
                 "./cityscapes_demo.png 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run on CPU with "
                 "ONNX Runtime backend; 1: run on CPU with Paddle Inference "
                 "backend."
              << std::endl;
    return -1;
  }

  fastdeploy::RuntimeOption option;
  int flag = std::atoi(argv[3]);

  if (flag == 0) {
    // Run the quantized model on CPU with the ONNX Runtime backend.
    option.UseCpu();
    option.UseOrtBackend();
    std::cout << "Use ORT!" << std::endl;
  } else if (flag == 1) {
    // Run the quantized model on CPU with the Paddle Inference backend.
    option.UseCpu();
    option.UsePaddleBackend();
    std::cout << "Use Paddle Inference!" << std::endl;
  }

  std::string model_dir = argv[1];
  std::string test_image = argv[2];
  InitAndInfer(model_dir, test_image, option);
  return 0;
}
examples/vision/segmentation/paddleseg/quantize/python/README.md (new file)
@@ -0,0 +1,28 @@
# Python Deployment Example for Quantized PaddleSeg Models

`infer.py` in this directory helps users quickly deploy a quantized PaddleSeg model on CPU/GPU with accelerated inference.

## Deployment Preparation
### FastDeploy Environment Preparation
1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
2. The FastDeploy Python wheel is installed; see [FastDeploy Python Installation](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

### Quantized Model Preparation
1. Users can directly deploy the quantized models provided by FastDeploy.
2. Users can quantize a model themselves with FastDeploy's [One-Click Auto-Compression Tool](../../tools/auto_compression/) and deploy the resulting quantized model. (Note: running inference with a quantized segmentation model still requires the deploy.yaml file from the FP32 model folder; a model folder you quantize yourself does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)

## Deployment Example: the Quantized PP_LiteSeg_T_STDC1_cityscapes Model
```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/segmentation/paddleseg/quantize/python

# Download the quantized PP_LiteSeg_T_STDC1_cityscapes model and a test image provided by FastDeploy
wget https://bj.bcebos.com/paddlehub/fastdeploy/PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
tar -xvf PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ.tar
wget https://paddleseg.bj.bcebos.com/dygraph/demo/cityscapes_demo.png

# Run the quantized model on CPU with Paddle Inference
python infer.py --model PP_LiteSeg_T_STDC1_cityscapes_without_argmax_infer_PTQ --image cityscapes_demo.png --device cpu --backend paddle
```
examples/vision/segmentation/paddleseg/quantize/python/infer.py (new file)
@@ -0,0 +1,76 @@
import os

import cv2
import fastdeploy as fd


def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of PaddleSeg model.")
    parser.add_argument(
        "--image", required=True, help="Path of test image file.")
    parser.add_argument(
        "--device",
        type=str,
        default='cpu',
        help="Type of inference device, support 'cpu' or 'gpu'.")
    parser.add_argument(
        "--backend",
        type=str,
        default="default",
        help="Type of inference backend, support ort/trt/paddle/openvino, "
        "default 'openvino' for cpu, 'tensorrt' for gpu")
    parser.add_argument(
        "--device_id",
        type=int,
        default=0,
        help="Define which GPU card is used to run the model.")
    parser.add_argument(
        "--cpu_thread_num",
        type=int,
        default=9,
        help="Number of threads while inferencing on CPU.")
    return parser.parse_args()


def build_option(args):
    option = fd.RuntimeOption()
    if args.device.lower() == "gpu":
        option.use_gpu(args.device_id)

    option.set_cpu_thread_num(args.cpu_thread_num)

    if args.backend.lower() == "trt":
        assert args.device.lower(
        ) == "gpu", "TensorRT backend requires inference on GPU."
        option.use_trt_backend()
        # Cache the built TensorRT engine next to the model files.
        option.set_trt_cache_file(os.path.join(args.model, "model.trt"))
        # Dynamic shape range (min, opt, max) for the input tensor "x".
        option.set_trt_input_shape("x", [1, 3, 256, 256], [1, 3, 1024, 1024],
                                   [1, 3, 2048, 2048])
    elif args.backend.lower() == "ort":
        option.use_ort_backend()
    elif args.backend.lower() == "paddle":
        option.use_paddle_backend()
    elif args.backend.lower() == "openvino":
        assert args.device.lower(
        ) == "cpu", "OpenVINO backend requires inference on CPU."
        option.use_openvino_backend()
    return option


args = parse_arguments()

# Configure the runtime and load the model
runtime_option = build_option(args)
model_file = os.path.join(args.model, "model.pdmodel")
params_file = os.path.join(args.model, "model.pdiparams")
config_file = os.path.join(args.model, "deploy.yaml")
model = fd.vision.segmentation.PaddleSegModel(
    model_file, params_file, config_file, runtime_option=runtime_option)

# Predict the segmentation result for the image
im = cv2.imread(args.image)
result = model.predict(im.copy())
print(result)
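As an optional follow-up to the script above, a short sketch for saving a visualized result (this assumes `fd.vision.vis_segmentation` is available in your FastDeploy version, and reuses `im` and `result` from `infer.py`):

```python
# Blend the predicted mask with the original image and save it to disk.
vis_im = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_result.jpg", vis_im)
print("Visualized result saved to vis_result.jpg")
```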