[Doc] update wint2 doc (#3819)

* update_wint2_doc
Author: AIbin
Date: 2025-09-03 11:27:43 +08:00
Committed by: GitHub
Parent: d81c57146f
Commit: 54b458fd98
4 changed files with 179 additions and 27 deletions


@@ -1,21 +1,101 @@
# WINT2 Quantization

Weights are compressed offline using the [CCQ (Convolutional Coding Quantization)](https://arxiv.org/pdf/2507.07145) method. The stored weight type is INT8, with four weights packed into each INT8 value, equivalent to 2 bits per weight. Activations are not quantized. During inference, weights are dequantized and decoded on the fly to BF16, and computation is performed in BF16.

- **Supported Hardware**: GPU
- **Supported Architecture**: MoE architecture

The method relies on convolutional coding, which uses overlapping bits to map 2-bit values into a larger representation space, so the quantized weights retain more of the original information while the stored values are compressed to an extremely small 2-bit footprint. The general principle is illustrated below:

[Convolutional coding quantization diagram](./wint2.png)

CCQ WINT2 is mainly used in resource-constrained, low-barrier scenarios. Taking ERNIE-4.5-300B-A47B as an example, the weights are compressed to 89 GB, enabling single-card deployment on a 141 GB H20.
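To make the 2-bit packing concrete, the sketch below (illustrative only, not the actual FastDeploy kernel) packs four 2-bit codes into one INT8 byte, then unpacks and dequantizes them with an assumed per-group scale. Real CCQ additionally applies convolutional (overlapping-bit) coding and decodes directly to BF16 inside the GPU kernel.

```python
# Illustrative sketch of 2-bit weight packing; names and the scale are assumptions.
import numpy as np

def pack_2bit(codes: np.ndarray) -> np.ndarray:
    """Pack groups of four 2-bit codes (values 0..3) into one uint8 each."""
    codes = codes.reshape(-1, 4).astype(np.uint8)
    return codes[:, 0] | (codes[:, 1] << 2) | (codes[:, 2] << 4) | (codes[:, 3] << 6)

def unpack_and_dequant(packed: np.ndarray, scale: float) -> np.ndarray:
    """Unpack four 2-bit codes per byte and map them back to real-valued weights."""
    codes = np.stack([(packed >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1)
    # Center the code range {0, 1, 2, 3} around zero, then apply the group scale
    # (the real kernel would produce BF16 here instead of float32).
    return ((codes.astype(np.float32) - 1.5) * scale).reshape(-1)

packed = pack_2bit(np.array([0, 3, 1, 2]))
print(packed, unpack_and_dequant(packed, scale=0.1))
```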
## Executing WINT2 Offline Inference

- To run TP2/TP4 models, change the `model_name_or_path` and `tensor_parallel_size` parameters accordingly (a TP2 sketch follows the code block below).
```python
from fastdeploy import LLM, SamplingParams

model_name_or_path = "baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle"
prompts = ["解析三首李白的诗"]

sampling_params = SamplingParams(temperature=0.7, top_p=0, max_tokens=128)
llm = LLM(model=model_name_or_path, tensor_parallel_size=1, use_cudagraph=True)
outputs = llm.generate(prompts, sampling_params)
print(outputs)
```
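For example, running the same script with TP2 only requires changing the parallelism degree (a sketch, assuming two visible GPUs):

```python
# Hypothetical TP2 variant of the script above: only tensor_parallel_size changes.
llm = LLM(model=model_name_or_path, tensor_parallel_size=2, use_cudagraph=True)
```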
## Run WINT2 Inference Service
- To run TP2/TP4 models, change the `--model` and `--tensor-parallel-size` parameters accordingly.
```shell
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --cache-queue-port 8183 \
    --tensor-parallel-size 1 \
    --max-model-len 32768 \
    --use-cudagraph \
    --enable-prefix-caching \
    --enable-chunked-prefill \
    --max-num-seqs 256
```
## Request the Service
After starting the service, the following output indicates successful initialization:
```shell
api_server.py[line:91] Launching metrics service at http://0.0.0.0:8181/metrics
api_server.py[line:94] Launching chat completion service at http://0.0.0.0:8180/v1/chat/completions
api_server.py[line:97] Launching completion service at http://0.0.0.0:8180/v1/completions
INFO: Started server process [13909]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8180 (Press CTRL+C to quit)
```
### Health Check
Verify service status (HTTP 200 indicates success):
```shell
curl -i http://0.0.0.0:8180/health
```
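If you prefer to script the readiness check, a minimal sketch (assuming the default 0.0.0.0:8180 address from the launch command above) can poll the endpoint until it returns HTTP 200:

```python
# Minimal readiness probe; the host/port follow the launch command above.
import time
import requests

def wait_until_ready(url: str = "http://0.0.0.0:8180/health", timeout_s: int = 300) -> bool:
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True
        except requests.exceptions.RequestException:
            pass  # server not accepting connections yet
        time.sleep(2)
    return False

print("service ready" if wait_until_ready() else "service did not come up in time")
```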
### cURL Request
Send requests to the service with the following command:
```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "Write me a poem about large language model."}
],
"stream": true
}'
```
### Python Client (OpenAI-compatible API)
FastDeploy's API is OpenAI-compatible. You can also use Python for requests:
```python
import openai
host = "0.0.0.0"
port = "8180"
client = openai.Client(base_url=f"http://{host}:{port}/v1", api_key="null")
response = client.chat.completions.create(
model="null",
messages=[
{"role": "system", "content": "I'm a helpful AI assistant."},
{"role": "user", "content": "Write me a poem about large language model."},
],
stream=True,
)
for chunk in response:
if chunk.choices[0].delta:
print(chunk.choices[0].delta.content, end='')
print('\n')
```
By specifying `--model baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle`, the offline-quantized WINT2 model is downloaded automatically from AIStudio. The model's config.json already contains the WINT2 quantization configuration, so there is no need to set `--quantization` when starting the inference service.
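To confirm this yourself, you can open the downloaded config.json and list its quantization-related entries; the local path in the sketch below is an assumption and should point at wherever the model was downloaded:

```python
# Print quantization-related entries from the model's config.json.
# The path is an assumption; point it at the downloaded model directory.
import json

with open("./ERNIE-4.5-300B-A47B-2Bits-Paddle/config.json") as f:
    config = json.load(f)

for key, value in config.items():
    if "quant" in key.lower():
        print(f"{key} = {value}")
```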
@@ -54,9 +134,7 @@ On the ERNIE-4.5-300B-A47B model, comparison of WINT2 vs WINT4 performance:
| Test Set | Dataset Size | WINT4 | WINT2 |
|---------|---------|---------|---------|
| IFEval | 500 | 88.17 | 85.95 |
| BBH | 6511 | 94.43 | 90.06 |
| DROP | 9536 | 91.17 | 89.32 |
| CMMLU | 11477 | 89.92 | 86.55 |

docs/quantization/wint2.png: new binary file added (81 KiB), not shown.

@@ -1,21 +1,96 @@
# WINT2 Quantization

Weights are compressed offline using the [CCQ (Convolutional Coding Quantization)](https://arxiv.org/pdf/2507.07145) method. The stored weight type is INT8, with four weights packed into each INT8 value, equivalent to 2 bits per weight. Activations are not quantized. During inference, weights are dequantized and decoded on the fly to BF16, and computation is performed in BF16.

- **Supported Hardware**: GPU
- **Supported Architecture**: MoE architecture

The method relies on convolutional coding, which uses overlapping bits to map 2-bit values into a larger representation space, so the quantized weights retain more of the original information while the stored values are compressed to an extremely small 2-bit footprint. The general principle is illustrated below:

[Convolutional coding quantization diagram](./wint2.png)

CCQ WINT2 is mainly used in resource-constrained, low-barrier scenarios. Taking ERNIE-4.5-300B-A47B as an example, the weights are compressed to 89 GB, enabling single-card deployment on a 141 GB H20.
## Executing WINT2 Offline Inference

- To run TP2/TP4 models, change the `model_name_or_path` and `tensor_parallel_size` parameters accordingly.
```python
from fastdeploy import LLM, SamplingParams

model_name_or_path = "baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle"
prompts = ["解析三首李白的诗"]

sampling_params = SamplingParams(temperature=0.7, top_p=0, max_tokens=128)
llm = LLM(model=model_name_or_path, tensor_parallel_size=1, use_cudagraph=True)
outputs = llm.generate(prompts, sampling_params)
print(outputs)
```
## Run WINT2 Inference Service

- To run TP2/TP4 models, change the `--model` and `--tensor-parallel-size` parameters accordingly.
```shell
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --cache-queue-port 8183 \
    --tensor-parallel-size 1 \
    --max-model-len 32768 \
    --use-cudagraph \
    --enable-prefix-caching \
    --enable-chunked-prefill \
    --max-num-seqs 256
```
## Request the Service

After running the launch command, the service has started successfully once the terminal prints the following output:
```
api_server.py[line:91] Launching metrics service at http://0.0.0.0:8181/metrics
api_server.py[line:94] Launching chat completion service at http://0.0.0.0:8180/v1/chat/completions
api_server.py[line:97] Launching completion service at http://0.0.0.0:8180/v1/completions
INFO: Started server process [13909]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8180 (Press CTRL+C to quit)
```
FastDeploy provides a health-check endpoint for probing the service status. Run the following command; a response of `HTTP/1.1 200 OK` indicates the service has started successfully.
```shell
curl -i http://0.0.0.0:8180/health
```
Send a request to the service with the following command:
```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "user", "content": "把李白的静夜思改写为现代诗"}
]
}'
```
The FastDeploy service API is OpenAI-compatible, so you can also send requests with the following Python code:
```python
import openai
host = "0.0.0.0"
port = "8180"
client = openai.Client(base_url=f"http://{host}:{port}/v1", api_key="null")
response = client.chat.completions.create(
model="null",
messages=[
{"role": "system", "content": "I'm a helpful AI assistant."},
{"role": "user", "content": "把李白的静夜思改写为现代诗"},
],
stream=True,
)
for chunk in response:
if chunk.choices[0].delta:
print(chunk.choices[0].delta.content, end='')
print('\n')
```
By specifying `--model baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle`, the offline-quantized WINT2 model is downloaded automatically from AIStudio. The model's config.json already contains the WINT2 quantization configuration, so there is no need to set `--quantization` when starting the inference service.
@@ -54,8 +129,7 @@ python -m fastdeploy.entrypoints.openai.api_server \
| Test Set | Dataset Size | WINT4 | WINT2 |
|---------|---------|---------|---------|
| IFEval | 500 | 88.17 | 85.95 |
| BBH | 6511 | 94.43 | 90.06 |
| DROP | 9536 | 91.17 | 89.32 |
| CMMLU | 11477 | 89.92 | 86.55 |
## WINT2 Inference Performance

A second wint2.png copy is added as a new binary file (81 KiB), not shown.