
简体中文 | English
Streaming PP-TTS Speech Synthesis Serving Deployment
Introduction
This document describes how to deploy a streaming text-to-speech (TTS) service with FastDeploy.
The server must be started inside Docker, while the client does not have to run inside a Docker container.
The streaming_pp_tts directory under this document's path ($PWD) contains the model configuration and code (the server loads them to start the service), and it needs to be mounted into the Docker container.
Usage
1. Server
1.1 Docker
docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
docker exec -it -u root fastdeploy bash
1.2 Installation (inside Docker)
apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
python3 -m pip install --upgrade pip
pip3 install -U fastdeploy-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
pip3 install -U paddlespeech paddlepaddle
export LC_ALL="zh_CN.UTF-8"
export LANG="zh_CN.UTF-8"
export LANGUAGE="zh_CN:zh:en_US:en"
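To confirm the installation before moving on, a quick import check can be run inside the container. This is only a minimal sanity-check sketch, not part of the original setup steps:
# Quick sanity check: the packages installed above should be importable.
import paddle
import paddlespeech
import fastdeploy

print("paddlepaddle:", paddle.__version__)
print("paddlespeech and fastdeploy imported successfully")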
1.3 Download the models (inside Docker; optional)
The model files are downloaded and unpacked automatically. If you want to download them manually, use the commands below.
cd /models/streaming_pp_tts/1
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
unzip mb_melgan_csmsc_onnx_0.2.0.zip
For convenience, we recommend using the docker -v option from step 1.1 to map $PWD (streaming_pp_tts together with the model configuration and code it contains) to the /models path inside Docker. You may use another approach, but whichever method you choose, the final model directory and structure inside Docker must look as follows.
/models
│
└───streaming_pp_tts                                           # the whole service model folder
    │   config.pbtxt                                           # service model configuration file
    │   stream_client.py                                       # client code
    │
    └───1                                                      # model version number, here 1
        │   model.py                                           # model startup code
        └───fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0  # model files required by the startup code
        └───mb_melgan_csmsc_onnx_0.2.0                         # model files required by the startup code
1.4 Start the server (inside Docker)
fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_pp_tts
Parameters:
- model-repository (required): path where the whole streaming_pp_tts model is stored.
- model-control-mode (required): how models are loaded; at this stage, simply use 'explicit'.
- load-model (required): name of the model to load.
- http-port (optional): port of the HTTP service. Default: 8000. This port is not used in this example.
- grpc-port (optional): port of the GRPC service. Default: 8001.
- metrics-port (optional): port for server metrics. Default: 8002. This port is not used in this example.
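Once the command above reports that the model is loaded, you can poll the server from any machine that can reach the GRPC port before sending real requests. The following is a minimal sketch, assuming the default GRPC port 8001 on localhost and the tritonclient package installed in section 2.1 below:
import time

import tritonclient.grpc as grpcclient

# Assumes the server runs on the local machine with the default GRPC port 8001.
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Poll until both the server and the streaming_pp_tts model report ready.
for _ in range(30):
    try:
        if client.is_server_ready() and client.is_model_ready("streaming_pp_tts"):
            print("streaming_pp_tts is ready")
            break
    except Exception:
        pass  # the server may still be starting up
    time.sleep(1)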
2. Client
2.1 Installation
pip3 install tritonclient[all]
2.2 Send a request
python3 /models/streaming_pp_tts/stream_client.py
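If you want to inspect the model's declared inputs and outputs before adapting stream_client.py to your own text, the same client library can query the model metadata. A minimal sketch, again assuming localhost and the default GRPC port 8001:
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# Print the input/output tensors declared in config.pbtxt for streaming_pp_tts.
metadata = client.get_model_metadata("streaming_pp_tts")
for tensor in metadata.inputs:
    print("input :", tensor.name, tensor.datatype, list(tensor.shape))
for tensor in metadata.outputs:
    print("output:", tensor.name, tensor.datatype, list(tensor.shape))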