
English | 中文

# Python Inference

Before running the example, please confirm that FastDeploy is installed in your environment. You can refer to FastDeploy Installation to install the pre-compiled FastDeploy packages, or customize your own build.
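If you are unsure whether the installation succeeded, a quick import check is sufficient. A minimal sketch (the `__version__` attribute is an assumption; consult your build if it is absent):

```python
# Verify that the FastDeploy Python package is importable
import fastdeploy as fd

# Print the installed version (assumed attribute)
print(fd.__version__)
```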

This document demonstrates inference on the CPU, using the PaddleClas classification model MobileNetV2 as an example.

## 1. Obtain the Model

```python
import fastdeploy as fd

# Download and extract the MobileNetV2 model archive into the current directory
model_url = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
fd.download_and_decompress(model_url, path=".")
```
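Once the call returns, the archive should have been extracted into a `mobilenetv2/` directory under the current path. As a quick sanity check, this minimal sketch verifies the two files that the next step loads:

```python
import os

# The extracted directory should contain the Paddle inference model files
for f in ("mobilenetv2/inference.pdmodel", "mobilenetv2/inference.pdiparams"):
    assert os.path.exists(f), f"missing file: {f}"
```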

## 2. Backend Configuration

```python
import numpy as np
import fastdeploy as fd

option = fd.RuntimeOption()

# Paths to the model and parameter files downloaded in step 1
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")

# **** CPU Configuration ****
option.use_cpu()
option.use_ort_backend()
option.set_cpu_thread_num(12)

# Initialise the runtime
runtime = fd.Runtime(option)

# Get the name of the model's first input
input_name = runtime.get_input_info(0).name

# Construct random input data and run inference
results = runtime.infer({
    input_name: np.random.rand(1, 3, 224, 224).astype("float32")
})

print(results[0].shape)
```
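The same script can target other devices or backends by changing only the `RuntimeOption` calls. The sketch below is a hedged example and assumes a GPU-enabled FastDeploy build; `use_gpu` and `use_trt_backend` are the analogous option setters for that setup:

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")

# **** GPU Configuration (requires a GPU build of FastDeploy) ****
option.use_gpu(0)         # run on GPU device 0
option.use_trt_backend()  # e.g. TensorRT; other backends are selected the same way

runtime = fd.Runtime(option)
```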

Once loading completes, the following log message is printed, indicating the initialized backend and the hardware device:

```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init	Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
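Beyond `get_input_info`, the initialized `Runtime` can be queried for the rest of its I/O metadata. A minimal sketch, assuming the symmetrical accessors `num_inputs`, `num_outputs`, and `get_output_info`:

```python
# Inspect the model's inputs and outputs after initialization
for i in range(runtime.num_inputs()):
    info = runtime.get_input_info(i)
    print("input", i, info.name, info.shape)

for i in range(runtime.num_outputs()):
    info = runtime.get_output_info(i)
    print("output", i, info.name, info.shape)
```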

## Other Documents