FastDeploy/docs/zh/quantization/online_quantization.md
# Online Quantization
Online quantization means the inference engine quantizes the weights after loading them in BF16, instead of loading low-precision weights that were quantized offline. FastDeploy supports online quantization of BF16 weights to several precisions, including INT4, INT8, and FP8.
## 1. WINT8 & WINT4
Only the weights are quantized online to INT8 or INT4; during inference, the weights are dequantized to BF16 on the fly before being multiplied with the activations.
- **Quantization granularity**: only channel-wise quantization is supported
- **Supported hardware**: GPU, XPU
- **Supported structures**: MoE, Dense Linear
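The channel-wise scheme can be illustrated with a minimal NumPy sketch (an illustration only, not FastDeploy's actual kernel): each output channel gets one scale that maps its largest absolute weight onto the INT8 range.

```python
import numpy as np

def quantize_wint8(w):
    # Channel-wise quantization: one scale per output channel (row),
    # mapping the row's max |w| onto the INT8 range [-127, 127].
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_wint8(q, scale):
    # At inference time the INT8 weights are dequantized back to the
    # compute dtype (BF16 in FastDeploy; float32 here) on the fly.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_wint8(w)
w_hat = dequantize_wint8(q, scale)
# Reconstruction error is small: at most half a quantization step
# (0.5 * scale) per element.
print(np.max(np.abs(w - w_hat)))
```

Because only the weights are quantized, activations stay in BF16 and no calibration data is needed, which is what makes this scheme usable directly at load time.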
### Launching a WINT8 or WINT4 inference service
```
python -m fastdeploy.entrypoints.openai.api_server \
--model baidu/ERNIE-4.5-300B-A47B-Paddle \
--port 8180 --engine-worker-queue-port 8181 \
--cache-queue-port 8183 --metrics-port 8182 \
--tensor-parallel-size 8 \
--quantization wint8 \
--max-model-len 32768 \
--max-num-seqs 32
```
- Specifying `--model baidu/ERNIE-4.5-300B-A47B-Paddle` downloads the model automatically from AIStudio. FastDeploy requires models in Paddle format; see the [supported model list](../supported_models.md) for details.
- Set `--quantization` to `wint8` or `wint4` to select online INT8 or INT4 quantization.
- Deploying ERNIE-4.5-300B-A47B-Paddle with WINT8 requires at least 8 x 80GB GPUs; WINT4 requires at least 4 x 80GB GPUs.
- For more deployment tutorials, see [get_started](../get_started/ernie-4.5.md).
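The GPU counts above are consistent with a back-of-the-envelope weight-memory estimate. The figures below assume roughly 300e9 weight parameters (a round-number assumption, not an official count) and count weight bytes only, ignoring KV cache, activations, and runtime overhead, which is why real deployments need the remaining headroom:

```python
# Rough weight-memory estimate for a ~300B-parameter model.
# Assumptions (not from FastDeploy docs): ~300e9 weight parameters;
# WINT8 stores 1 byte per weight, WINT4 stores 0.5 bytes per weight.
params = 300e9
GIB = 1024**3
estimates = {}
for name, bytes_per_param, gpus in [("WINT8", 1.0, 8), ("WINT4", 0.5, 4)]:
    weight_gib = params * bytes_per_param / GIB
    total_gib = gpus * 80
    estimates[name] = (weight_gib, total_gib)
    print(f"{name}: ~{weight_gib:.0f} GiB of weights vs {gpus} x 80 GiB = {total_gib} GiB")
```

In both cases the quantized weights fit within the stated GPU budget with room left for the KV cache and activations.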
## 2. Block-wise FP8
Load the BF16 model and quantize the weights online to the FP8 data type at 128x128 block-wise granularity. During inference, the activations are quantized to FP8 dynamically, on the fly, at token-wise granularity.
- **FP8 format**: float8_e4m3fn
- **Supported hardware**: Hopper GPU architecture
- **Supported structures**: MoE, Dense Linear
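A minimal NumPy sketch of the 128x128 block-wise weight scaling (an illustration, not FastDeploy's kernel; a real implementation would cast to `float8_e4m3fn`, e.g. via `ml_dtypes`, whereas this sketch only clips to the representable range):

```python
import numpy as np

BLOCK = 128
FP8_E4M3_MAX = 448.0  # largest finite float8_e4m3fn value

def blockwise_fp8_quant(w, block=BLOCK):
    # One scale per 128x128 block, mapping the block's max |w| onto
    # the FP8 range. Demo assumes the shape divides evenly into blocks.
    r, c = w.shape
    assert r % block == 0 and c % block == 0
    blocks = w.reshape(r // block, block, c // block, block)
    amax = np.abs(blocks).max(axis=(1, 3))   # one amax per block
    scale = amax / FP8_E4M3_MAX
    # Simulated quantization: scale into the FP8 range and clip.
    q = np.clip(blocks / scale[:, None, :, None], -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.reshape(r, c), scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = blockwise_fp8_quant(w)
print(scale.shape)  # (2, 2): one scale per 128x128 block
```

Keeping one scale per 128x128 block (rather than one per tensor) limits the damage a single outlier weight can do to the FP8 dynamic range of the rest of the matrix.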
### Launching a Block-wise FP8 inference service
```
python -m fastdeploy.entrypoints.openai.api_server \
--model baidu/ERNIE-4.5-300B-A47B-Paddle \
--port 8180 --engine-worker-queue-port 8181 \
--cache-queue-port 8183 --metrics-port 8182 \
--tensor-parallel-size 8 \
--quantization block_wise_fp8 \
--max-model-len 32768 \
--max-num-seqs 32
```
- Specifying `--model baidu/ERNIE-4.5-300B-A47B-Paddle` downloads the model automatically from AIStudio. FastDeploy requires models in Paddle format; see the [supported model list](../supported_models.md) for details.
- Set `--quantization` to `block_wise_fp8` to select online block-wise FP8 quantization.
- Deploying ERNIE-4.5-300B-A47B-Paddle with block-wise FP8 requires at least 8 x 80GB GPUs.
- For more deployment tutorials, see [get_started](../get_started/ernie-4.5.md).