English | 简体中文
# YOLOv7 Python Deployment Demo
Two steps before deployment:

1. The hardware and software environment meets the requirements. Please refer to FastDeploy Environment Requirements.
2. Install the FastDeploy Python whl package. Please refer to FastDeploy Python Installation.
This document provides a quick `infer.py` demo of YOLOv7 deployment on CPU/GPU, as well as GPU deployment accelerated by TensorRT. Run the following commands:
```bash
# Download the sample deployment code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/yolov7/python/

# Download the Paddle model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7_infer.tar
tar -xf yolov7_infer.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
python infer_paddle_model.py --model yolov7_infer --image 000000014439.jpg --device cpu
# GPU inference
python infer_paddle_model.py --model yolov7_infer --image 000000014439.jpg --device gpu
# KunlunXin XPU inference
python infer_paddle_model.py --model yolov7_infer --image 000000014439.jpg --device kunlunxin
# Huawei Ascend inference
python infer_paddle_model.py --model yolov7_infer --image 000000014439.jpg --device ascend
```
If you want to test the ONNX model, run:
```bash
# Download the YOLOv7 ONNX model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg

# CPU inference
python infer.py --model yolov7.onnx --image 000000014439.jpg --device cpu
# GPU inference
python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu
# TensorRT inference on GPU
python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_trt True
```
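The `--use_trt True` flag above switches the inference backend to TensorRT. The same can be done programmatically through `RuntimeOption`; the following is a minimal sketch (the model file name matches the download step above, and the engine-cache path is an arbitrary choice):

```python
import fastdeploy as fd

# Build a runtime option targeting GPU 0 with the TensorRT backend.
option = fd.RuntimeOption()
option.use_gpu(0)
option.use_trt_backend()
# Cache the built TensorRT engine so later runs skip the (slow) engine build.
option.set_trt_cache_file("yolov7.trt")

model = fd.vision.detection.YOLOv7("yolov7.onnx", runtime_option=option)
```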
The visualisation of the results is as follows.

## YOLOv7 Python Interface
```python
fastdeploy.vision.detection.YOLOv7(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```
Loads and initialises the YOLOv7 model, where `model_file` is the exported model in ONNX format.
**Parameters**
- model_file(str): Model file path
- params_file(str): Parameter file path. If the model is in ONNX format, this can be an empty string
- runtime_option(RuntimeOption): Back-end inference configuration. The default is None, i.e. the default configuration is applied
- model_format(ModelFormat): Model format. The default is ONNX format
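As a minimal usage sketch (assuming the `yolov7.onnx` file downloaded in the demo above), the constructor is typically combined with a `RuntimeOption`:

```python
import fastdeploy as fd

# Default construction: ONNX model, default backend on CPU.
model_cpu = fd.vision.detection.YOLOv7("yolov7.onnx")

# Explicitly select a device through RuntimeOption.
option = fd.RuntimeOption()
option.use_gpu()  # or option.use_cpu()
model_gpu = fd.vision.detection.YOLOv7("yolov7.onnx", runtime_option=option)
```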
### Predict Function
```python
YOLOv7.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
```
Model prediction interface that takes an image as input and returns the detection results directly.
**Parameters**
- image_data(np.ndarray): Input image in HWC layout, BGR format
- conf_threshold(float): Confidence threshold for filtering detection boxes
- nms_iou_threshold(float): IoU threshold used during NMS processing
**Return**

Returns a `fastdeploy.vision.DetectionResult` structure. For more details, please refer to Vision Model Results.
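A minimal end-to-end sketch of `predict` (assuming the model and test image from the demo above; `fd.vision.vis_detection` is FastDeploy's visualisation helper):

```python
import cv2
import fastdeploy as fd

model = fd.vision.detection.YOLOv7("yolov7.onnx")

# predict expects an HWC, BGR image, which is what cv2.imread returns.
im = cv2.imread("000000014439.jpg")
result = model.predict(im, conf_threshold=0.25, nms_iou_threshold=0.5)
print(result)  # boxes, scores and label ids of the DetectionResult

# Draw the detections and save the visualised image.
vis_im = fd.vision.vis_detection(im, result)
cv2.imwrite("visualized_result.jpg", vis_im)
```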
### Class Member Variables
#### Pre-processing Parameters
Users can modify the following pre-processing parameters to fit their needs. Note that this affects the final inference and deployment results.
- size(list[int]): This parameter changes the target size of the resize during preprocessing. It contains two integer elements representing [width, height]. The default value is [640, 640]
- padding_value(list[float]): This parameter changes the padding value used when resizing the image. It contains three floating-point elements representing the values of the three channels. The default value is [114, 114, 114]
- is_no_pad(bool): This parameter determines whether the image is resized without padding; `is_no_pad=True` means no padding is used. The default value is `is_no_pad=False`
- is_mini_pad(bool): This parameter makes the width and height of the resized image the closest values to the `size` member variable such that the padded pixel size is divisible by the `stride` member variable. The default value is `is_mini_pad=False`
- stride(int): Used together with the `is_mini_pad` member variable. The default value is `stride=32`
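A short sketch of tweaking these member variables (assuming, as this section describes, that they are exposed directly on the model object):

```python
import fastdeploy as fd

model = fd.vision.detection.YOLOv7("yolov7.onnx")

# Letterbox-style resize to 320x320 instead of the default 640x640.
model.size = [320, 320]
# Pad only up to the nearest multiple of stride rather than to the full size.
model.is_mini_pad = True
model.stride = 32
```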