
English | 中文

Python Inference

Please make sure FastDeploy is already installed in your environment. You can refer to FastDeploy Installation to install the pre-compiled FastDeploy, or to customize your installation.

This document demonstrates inference on the CPU, using the PaddleClas classification model MobileNetV2 as an example.

1. Obtaining the Model

import fastdeploy as fd

model_url = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
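# Download the model archive and extract it into the current directory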
fd.download_and_decompress(model_url, path=".")
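
After the download completes, the archive is extracted into a mobilenetv2/ directory. As a quick sanity check (a minimal sketch; the file names follow the paths used in the next step), you can confirm the model files are in place:

import os

assert os.path.exists("mobilenetv2/inference.pdmodel")
assert os.path.exists("mobilenetv2/inference.pdiparams")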

2. Backend Configuration

import numpy as np

option = fd.RuntimeOption()

option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")

# **** CPU Configuration ****
option.use_cpu()
option.use_ort_backend()
option.set_cpu_thread_num(12)

# Initialise runtime
runtime = fd.Runtime(option)

# Get model input name
input_name = runtime.get_input_info(0).name

# Construct random input data and run inference
results = runtime.infer({
    input_name: np.random.rand(1, 3, 224, 224).astype("float32")
})

print(results[0].shape)
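
The infer call returns a list of numpy arrays, one per model output. Assuming the standard PaddleClas export, the printed shape should be (1, 1000): one score per ImageNet class for the single input image.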

Once initialization completes, the runtime prints the following message, indicating the initialized backend and the hardware device in use.

[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init	Runtime initialized with Backend::OrtBackend in device Device::CPU.
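
The same script can target other devices and backends by changing the RuntimeOption calls before the Runtime is created. Below is a minimal sketch of a GPU configuration, assuming a GPU build of FastDeploy and that these RuntimeOption methods are available in your installed version:

# **** GPU Configuration ****
option.use_gpu(0)          # run on GPU card 0
option.use_trt_backend()   # e.g. the TensorRT backend

The initialization log would then report the chosen backend and device instead of OrtBackend and CPU.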

Other Documents