English | 简体中文

# YOLOv6 Quantized Model Python Deployment Example

This directory provides an example of using `infer.py` to quickly deploy quantized YOLOv6 models on CPU/GPU.

## Deployment Preparation

### FastDeploy Environment Preparation

### Quantized Model Preparation

1. You can directly deploy the quantized models provided by FastDeploy.
2. Or you can quantize a model yourself with FastDeploy's one-click auto-compression tool and deploy the resulting quantized model.

## Example: Deploying a Quantized YOLOv6 Model

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov6/quantize/python

# Download the quantized YOLOv6 model files and a test image provided by FastDeploy
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov6s_qat_model_new.tar
tar -xvf yolov6s_qat_model_new.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# Run the quantized model with ONNX Runtime on CPU
python infer.py --model yolov6s_qat_model --image 000000014439.jpg --device cpu --backend ort
# Run the quantized model with TensorRT on GPU
python infer.py --model yolov6s_qat_model --image 000000014439.jpg --device gpu --backend trt
# Run the quantized model with Paddle-TensorRT on GPU
python infer.py --model yolov6s_qat_model --image 000000014439.jpg --device gpu --backend pptrt
```
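
These commands only pass flags to `infer.py`, which assembles the deployment through the FastDeploy Python API. The snippet below is a minimal sketch of the equivalent calls for the GPU + TensorRT case; the model file names (`model.pdmodel`/`model.pdiparams`) and the omission of backend-specific tuning (e.g. TensorRT input shapes) are assumptions, so treat `infer.py` in this directory as the authoritative reference.

```python
# Minimal sketch of what infer.py roughly does (simplified; see infer.py for
# the full argument parsing and backend-specific settings).
import os

import cv2
import fastdeploy as fd

model_dir = "yolov6s_qat_model"
# Assumed file names for the quantized Paddle-format model inside the tarball.
model_file = os.path.join(model_dir, "model.pdmodel")
params_file = os.path.join(model_dir, "model.pdiparams")

# Map --device/--backend onto a RuntimeOption:
#   --device cpu -> option.use_cpu(),         --device gpu -> option.use_gpu()
#   --backend ort -> option.use_ort_backend(), --backend trt -> option.use_trt_backend()
option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()

# The quantized model is in Paddle format, hence ModelFormat.PADDLE.
model = fd.vision.detection.YOLOv6(
    model_file, params_file,
    runtime_option=option,
    model_format=fd.ModelFormat.PADDLE)

# Run inference on the test image and visualize the detections.
im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)

vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```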