Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-10-15 05:01:00 +08:00
Merge branch 'develop' of https://github.com/PaddlePaddle/FastDeploy into huawei
@@ -1,4 +1,4 @@
English | [中文](../../cn/build_and_install/sophgo.md)
# How to Build SOPHGO Deployment Environment
## SOPHGO Environment Preparation
@@ -5,7 +5,7 @@ Please check that the FastDeploy C++ deployment library is already in your environment.
This document shows an inference sample on the CPU using the PaddleClas classification model MobileNetV2 as an example.
-## 1. Obtaining the Module
+## 1. Obtaining the Model
```bash
wget https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz
```
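For readers who prefer to stay in Python rather than call wget, the same archive can be fetched and unpacked with the standard library. This is a minimal sketch, assuming network access, a writable working directory, and that the archive extracts to a mobilenetv2/ directory; it is not taken from the diffed docs.

```python
# Minimal sketch: download and unpack the MobileNetV2 archive referenced above.
# The extracted mobilenetv2/ directory name is an assumption about the layout.
import tarfile
import urllib.request

URL = "https://bj.bcebos.com/fastdeploy/models/mobilenetv2.tgz"
ARCHIVE = "mobilenetv2.tgz"

urllib.request.urlretrieve(URL, ARCHIVE)   # same file the wget command fetches
with tarfile.open(ARCHIVE, "r:gz") as tgz:
    tgz.extractall(".")                    # unpack next to the script
```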
@@ -5,7 +5,7 @@ Please check that FastDeploy is already installed in your environment. You ca
This document shows an inference sample on the CPU using the PaddleClas classification model MobileNetV2 as an example.
-## 1. Obtaining the Module
+## 1. Obtaining the model
```python
import fastdeploy as fd
@@ -42,7 +42,7 @@ results = runtime.infer({
print(results[0].shape)
```
-When loading is complete, you can get the following output information indicating the initialized backend and the hardware devices.
+When loading is complete, you will get the following output information indicating the initialized backend and the hardware devices.
```
[INFO] fastdeploy/fastdeploy_runtime.cc(283)::Init Runtime initialized with Backend::OrtBackend in device Device::CPU.
```
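The two hunks above only show fragments of the Python example (the import, the tail of the runtime.infer call, and the final print). Below is a hedged reconstruction of how those pieces typically fit together with fd.RuntimeOption and fd.Runtime; the model file paths and the input tensor name "inputs" are assumptions for illustration, and the backend/device calls simply mirror the Backend::OrtBackend / Device::CPU shown in the log line.

```python
# Hedged reconstruction of the truncated Python example; model paths and the
# input tensor name "inputs" are assumptions, not taken from this diff.
import numpy as np
import fastdeploy as fd

option = fd.RuntimeOption()
# Assumed layout of the extracted mobilenetv2.tgz from step 1.
option.set_model_path("mobilenetv2/inference.pdmodel",
                      "mobilenetv2/inference.pdiparams")
option.use_cpu()          # Device::CPU in the log output above
option.use_ort_backend()  # Backend::OrtBackend in the log output above

runtime = fd.Runtime(option)

# Dummy 1x3x224x224 float32 input, the usual MobileNetV2 input shape.
dummy = np.random.rand(1, 3, 224, 224).astype("float32")
results = runtime.infer({"inputs": dummy})
print(results[0].shape)
```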