English | [简体中文](README_CN.md)

# Python Inference

Before running the demo, confirm the following two prerequisites:

- 1. The hardware and software environment meets the requirements. Please refer to [Environment Requirements for FastDeploy](../../../docs/en/build_and_install/download_prebuilt_libraries.md).
- 2. The FastDeploy Python wheel package is installed. Please refer to [FastDeploy Python Installation](../../../docs/en/build_and_install/download_prebuilt_libraries.md) (a quick import check is sketched below).
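A quick way to confirm the second prerequisite is to import the package. This is only a sanity check; the `__version__` attribute is assumed to exist and may differ between releases:

``` python
# Sanity check that the FastDeploy Python wheel is importable
import fastdeploy as fd

# __version__ is assumed to be exposed by the installed wheel
print(fd.__version__)
```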
This document shows how to run inference on the CPU, using the PaddleClas classification model MobileNetV2 as an example.

## 1. Obtain the Model
``` python
import fastdeploy as fd

model_url = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
fd.download_and_decompress(model_url, path=".")
```
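The archive unpacks into a `mobilenetv2/` directory next to the script. The exact file list may vary, but it should contain the Paddle inference model and parameters used in the next step; a minimal check:

``` python
import os

# Files extracted by download_and_decompress; inference.pdmodel and
# inference.pdiparams are passed to set_model_path below
print(sorted(os.listdir("mobilenetv2")))
```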

## 2. Configure the Backend

``` python
import numpy as np

option = fd.RuntimeOption()

option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")

# **** CPU Configuration ****
option.use_cpu()
option.use_ort_backend()
option.set_cpu_thread_num(12)

# Initialise the runtime
runtime = fd.Runtime(option)

# Get the model's input name
input_name = runtime.get_input_info(0).name

# Construct random input data and run inference
results = runtime.infer({
    input_name: np.random.rand(1, 3, 224, 224).astype("float32")
})

print(results[0].shape)
```
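Rather than hard-coding the input shape, the runtime's input metadata can also be inspected before building the feed dictionary. A short sketch, assuming the object returned by `get_input_info` exposes `name`, `shape` and `dtype` (field names may vary by FastDeploy version):

``` python
# Inspect the model's first input before constructing the feed dict
info = runtime.get_input_info(0)
print(info.name, info.shape, info.dtype)  # assumed fields; check your FastDeploy version
```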
When `fd.Runtime(option)` finishes loading, you will see output like the following, indicating the backend and hardware device the runtime was initialized with.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init	Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
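The same `RuntimeOption` can target other hardware and backends; the FAQ linked under Other Documents covers the full set of switches. A minimal sketch, assuming a GPU build of FastDeploy (e.g. the `fastdeploy-gpu-python` wheel) and a CUDA-capable device:

``` python
# A GPU variant of the configuration above (requires a GPU build of FastDeploy)
option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")
option.use_gpu(0)            # run on GPU 0
option.use_paddle_backend()  # or e.g. option.use_trt_backend() for TensorRT
runtime = fd.Runtime(option)
```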

## Other Documents

- [A C++ example for Runtime](../cpp)
- [Switching hardware and backend for model inference](../../../docs/en/faq/how_to_change_backend.md)