## Supported Models

| Model | Context Length | Quantization | XPUs Required | Deployment Command | Minimum Required Version |
| :--- | :--- | :--- | :--- | :--- | :--- |
| ERNIE-4.5-300B-A47B | 32K | WINT8 | 8 | export XPU_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" <br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-300B-A47B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 8 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 64 \ <br>--quantization "wint8" \ <br>--gpu-memory-utilization 0.9 | >=2.0.3 |
| ERNIE-4.5-300B-A47B | 32K | WINT4 | 4 (recommended) | export XPU_VISIBLE_DEVICES="0,1,2,3" # or "4,5,6,7"<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-300B-A47B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 4 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 64 \ <br>--quantization "wint4" \ <br>--gpu-memory-utilization 0.9 | >=2.0.0 |
| ERNIE-4.5-300B-A47B | 32K | WINT4 | 8 | export XPU_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" <br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-300B-A47B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 8 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 64 \ <br>--quantization "wint4" \ <br>--gpu-memory-utilization 0.95 | >=2.0.0 |
| ERNIE-4.5-300B-A47B | 128K | WINT4 | 8 (recommended) | export XPU_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" <br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-300B-A47B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 8 \ <br>--max-model-len 131072 \ <br>--max-num-seqs 64 \ <br>--quantization "wint4" \ <br>--gpu-memory-utilization 0.9 | >=2.0.0 |
| ERNIE-4.5-21B-A3B | 32K | BF16 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-21B-A3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 128 \ <br>--gpu-memory-utilization 0.9 | >=2.1.0 |
| ERNIE-4.5-21B-A3B | 32K | WINT8 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-21B-A3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 128 \ <br>--quantization "wint8" \ <br>--gpu-memory-utilization 0.9 | >=2.1.0 |
| ERNIE-4.5-21B-A3B | 32K | WINT4 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-21B-A3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 128 \ <br>--quantization "wint4" \ <br>--gpu-memory-utilization 0.9 | >=2.1.0 |
| ERNIE-4.5-21B-A3B | 128K | BF16 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-21B-A3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 131072 \ <br>--max-num-seqs 128 \ <br>--gpu-memory-utilization 0.9 | >=2.1.0 |
| ERNIE-4.5-21B-A3B | 128K | WINT8 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-21B-A3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 131072 \ <br>--max-num-seqs 128 \ <br>--quantization "wint8" \ <br>--gpu-memory-utilization 0.9 | >=2.1.0 |
| ERNIE-4.5-21B-A3B | 128K | WINT4 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-21B-A3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 131072 \ <br>--max-num-seqs 128 \ <br>--quantization "wint4" \ <br>--gpu-memory-utilization 0.9 | >=2.1.0 |
| ERNIE-4.5-0.3B | 32K | BF16 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-0.3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 128 \ <br>--gpu-memory-utilization 0.9 | >=2.0.3 |
| ERNIE-4.5-0.3B | 32K | WINT8 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-0.3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 32768 \ <br>--max-num-seqs 128 \ <br>--quantization "wint8" \ <br>--gpu-memory-utilization 0.9 | >=2.0.3 |
| ERNIE-4.5-0.3B | 128K | BF16 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-0.3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 131072 \ <br>--max-num-seqs 128 \ <br>--gpu-memory-utilization 0.9 | >=2.0.3 |
| ERNIE-4.5-0.3B | 128K | WINT8 | 1 | export XPU_VISIBLE_DEVICES="0" # any single card<br>export ENABLE_V1_KVCACHE_SCHEDULER=0 # V1 scheduler not supported<br>python -m fastdeploy.entrypoints.openai.api_server \ <br>--model PaddlePaddle/ERNIE-4.5-0.3B-Paddle \ <br>--port 8188 \ <br>--tensor-parallel-size 1 \ <br>--max-model-len 131072 \ <br>--max-num-seqs 128 \ <br>--quantization "wint8" \ <br>--gpu-memory-utilization 0.9 | >=2.0.3 |

## Quick Start

### OpenAI-Compatible Server

You can also use the following commands to deploy, with FastDeploy, a server compatible with the OpenAI API protocol.

#### Launch the Service

Deploy the ERNIE-4.5-300B-A47B-Paddle model on a 4-card P800 server with WINT4 precision and a 32K context length:

```bash
export XPU_VISIBLE_DEVICES="0,1,2,3"  # select the XPU cards to use
export ENABLE_V1_KVCACHE_SCHEDULER=0  # V1 scheduler not supported
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --port 8188 \
    --tensor-parallel-size 4 \
    --max-model-len 32768 \
    --max-num-seqs 64 \
    --quantization "wint4" \
    --gpu-memory-utilization 0.9
```

**Note:** When deploying on 4 XPUs of a P800 server, hardware constraints such as the inter-card interconnect topology allow only the following two configurations: `export XPU_VISIBLE_DEVICES="0,1,2,3"` or `export XPU_VISIBLE_DEVICES="4,5,6,7"`.
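
Large models can take a while to load, so it helps to confirm the service is ready before sending real requests. The sketch below polls the server from Python; the `/health` endpoint and the 200 status code are assumptions based on common OpenAI-compatible servers, so adjust them if your FastDeploy version exposes readiness differently.

```python
# Readiness probe (sketch): the /health endpoint is an assumption based on
# common OpenAI-compatible servers; adjust for your FastDeploy version.
import time

import requests

URL = "http://0.0.0.0:8188/health"  # must match the --port used at launch

for _ in range(60):
    try:
        if requests.get(URL, timeout=5).status_code == 200:
            print("server is ready")
            break
    except requests.exceptions.ConnectionError:
        pass  # server process is still starting up
    time.sleep(10)
else:
    print("server did not become ready in time")
```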

For more options, refer to the Parameter Documentation.

All supported models are listed in the Supported Models section above.

#### Send Requests

Following the OpenAI protocol, you can send requests to the service in two ways: curl or Python.

```bash
curl -X POST "http://0.0.0.0:8188/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": "Where is the capital of China?"}
  ]
}'
```

```python
import openai

host = "0.0.0.0"
port = "8188"
client = openai.Client(base_url=f"http://{host}:{port}/v1", api_key="null")

# Streaming text completion
response = client.completions.create(
    model="null",
    prompt="Where is the capital of China?",
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].text, end='')
print('\n')

# Streaming chat completion
response = client.chat.completions.create(
    model="null",
    messages=[
        {"role": "user", "content": "Where is the capital of China?"},
    ],
    stream=True,
)
for chunk in response:
    if chunk.choices[0].delta:
        print(chunk.choices[0].delta.content, end='')
print('\n')
```
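
If you do not need token-by-token output, the same client can make a blocking request and read the full reply at once. This is a minimal sketch reusing the endpoint and placeholder model name from the examples above:

```python
import openai

# Same endpoint as the streaming examples above.
client = openai.Client(base_url="http://0.0.0.0:8188/v1", api_key="null")

# Non-streaming chat completion: the full reply arrives in one response object.
response = client.chat.completions.create(
    model="null",
    messages=[
        {"role": "user", "content": "Where is the capital of China?"},
    ],
    stream=False,
)
print(response.choices[0].message.content)
```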

For more details on the OpenAI protocol, see the OpenAI Chat Completion API documentation; for the differences from the OpenAI protocol, see OpenAI Protocol-Compatible Server Deployment.