High-Performance All-Scenario Deployment of PaddleSeg Semantic Segmentation Models with FastDeploy
1. Introduction to FastDeploy
⚡️FastDeploy is an easy-to-use, flexible, and highly efficient AI inference deployment tool for all scenarios, supporting cloud, edge, and device-side deployment. With FastDeploy you can quickly and easily deploy PaddleSeg semantic segmentation models on 10+ kinds of hardware, including X86 CPU, NVIDIA GPU, Phytium CPU, ARM CPU, Intel GPU, Kunlunxin, Ascend, Rockchip, Amlogic, and SOPHGO, with support for multiple inference backends such as Paddle Inference, Paddle Lite, TensorRT, OpenVINO, ONNX Runtime, RKNPU2, and SOPHGO.
2. Supported Hardware
| Hardware | Supported | Guide | Python | C++ |
|---|---|---|---|---|
| X86 CPU | ✅ | Link | ✅ | ✅ |
| NVIDIA GPU | ✅ | Link | ✅ | ✅ |
| Phytium CPU | ✅ | Link | ✅ | ✅ |
| ARM CPU | ✅ | Link | ✅ | ✅ |
| Intel GPU (integrated) | ✅ | Link | ✅ | ✅ |
| Intel GPU (discrete) | ✅ | Link | ✅ | ✅ |
| Kunlunxin | ✅ | Link | ✅ | ✅ |
| Ascend | ✅ | Link | ✅ | ✅ |
| Rockchip | ✅ | Link | -- | ✅ |
| Amlogic | ✅ | Link | -- | ✅ |
| SOPHGO | ✅ | Link | ✅ | ✅ |
3. Detailed Usage Documentation
- X86 CPU
- NVIDIA GPU
- Phytium CPU
- ARM CPU
- Intel GPU
- Kunlunxin XPU
- Ascend
- Rockchip
- Amlogic
- SOPHGO
4. More Deployment Options
5. FAQ
If you run into problems, check the FAQ collection, search existing FastDeploy issues, or file a new issue with FastDeploy: