简体中文 | English

# FastDeploy Serving Deployment

## Introduction

FastDeploy provides end-to-end serving deployment built on Triton Inference Server. The underlying backend uses FastDeploy's high-performance Runtime module, chained with FastDeploy's pre- and post-processing modules, to achieve end-to-end serving. It features fast deployment, ease of use, and excellent performance.
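Because the serving stack is built on Triton Inference Server, a started server exposes the standard KServe v2 HTTP health endpoints (Triton's default HTTP port is 8000). A minimal readiness-probe sketch, using only the Python standard library — the helper name and defaults here are illustrative assumptions, not part of FastDeploy's API:

```python
import urllib.request


def server_ready(host: str = "localhost", port: int = 8000,
                 timeout: float = 2.0) -> bool:
    """Return True if the Triton server's HTTP readiness endpoint answers 200."""
    url = f"http://{host}:{port}/v2/health/ready"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```

Such a probe is handy in deployment scripts to wait for the container to finish loading models before sending inference requests.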

## Environment Preparation

### Requirements

- Linux
- For the GPU image, NVIDIA Driver >= 470 is required (for older Tesla-architecture GPUs such as the T4, driver versions 418.40+, 440.33+, 450.51+, or 460.27+ also work)

### Pull Images

#### CPU Image

The CPU image only supports serving Paddle/ONNX models on CPU. Supported inference backends include OpenVINO, Paddle Inference, and ONNX Runtime.

```shell
docker pull paddlepaddle/fastdeploy:0.6.0-cpu-only-21.10
```

#### GPU Image

The GPU image supports serving Paddle/ONNX models on both GPU and CPU. Supported inference backends include OpenVINO, TensorRT, Paddle Inference, and ONNX Runtime.

```shell
docker pull paddlepaddle/fastdeploy:0.6.0-gpu-cuda11.4-trt8.4-21.10
```

Users can also build the image themselves according to their own needs, referring to the documents below.

## Other Documents

## Serving Deployment Examples

| Task | Model |
| --- | --- |
| Classification | PaddleClas |
| Detection | PaddleDetection |
| Detection | ultralytics/YOLOv5 |
| NLP | PaddleNLP/ERNIE-3.0 |
| NLP | PaddleNLP/UIE |
| Speech | PaddleSpeech/PP-TTS |
| OCR | PaddleOCR/PP-OCRv3 |