Sync v2.0 version of code to github repo
docs/quantization/README.md (new file)
@@ -0,0 +1,46 @@

# Quantization

FastDeploy supports quantized inference at multiple precisions, including FP8, INT8, INT4, and 2-bit. Weights, activations, and KVCache tensors can each use a different precision, covering scenarios such as low cost, low latency, and long context.

## 1. Precision Support List

| Quantization Method | Weight Precision | Activation Precision | KVCache Precision | Online/Offline | Supported Hardware |
|---------|---------|---------|------------|---------|---------|
| [WINT8](online_quantization.md#1-wint8--wint4) | INT8 | BF16 | BF16 | Online | GPU, XPU |
| [WINT4](online_quantization.md#1-wint8--wint4) | INT4 | BF16 | BF16 | Online | GPU, XPU |
| [Block-wise FP8](online_quantization.md#2-block-wise-fp8) | block-wise static FP8 | token-wise dynamic FP8 | BF16 | Online | GPU |
| [WINT2](wint2.md) | 2-bit | BF16 | BF16 | Offline | GPU |
| MixQuant | INT4/INT8 | INT8/BF16 | INT8/BF16 | Offline | GPU, XPU |

**Notes**

1. **Quantization Method**: Corresponds to the "quantization" field in the quantization configuration file.
2. **Online/Offline Quantization**: Distinguishes when the weights are quantized.
   - **Online Quantization**: The weights are quantized after being loaded into the inference engine.
   - **Offline Quantization**: The weights are quantized offline before inference and stored in a low-bit numerical type; the inference engine loads the already-quantized values.
3. **Dynamic/Static Quantization**: Distinguishes how the activation quantization coefficients are obtained (see the sketch after this list).
   - **Static Quantization**: Quantization coefficients are computed and stored before inference, and the pre-computed coefficients are loaded at inference time. Because the coefficients stay fixed (static) during inference, this is called static quantization.
   - **Dynamic Quantization**: Quantization coefficients for the current batch are computed in real time during inference. Because the coefficients change dynamically from batch to batch, this is called dynamic quantization.
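
To make the static/dynamic distinction concrete, here is a minimal NumPy sketch of symmetric INT8 activation quantization. It is illustrative only and does not reflect FastDeploy internals; the `static_scale` value, tensor shapes, and the `quantize_int8` helper are hypothetical.

```
import numpy as np

def quantize_int8(x, scale):
    # Symmetric quantization: the scale maps the float range onto [-127, 127].
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Static quantization: the coefficient was computed offline (e.g. from calibration data),
# stored with the model, and is simply loaded at inference time.
static_scale = 0.021  # hypothetical pre-computed coefficient

# Dynamic quantization: the coefficient is recomputed from the current batch at inference time.
batch = np.random.randn(4, 1024).astype(np.float32)
dynamic_scale = np.abs(batch).max() / 127.0

q_static = quantize_int8(batch, static_scale)
q_dynamic = quantize_int8(batch, dynamic_scale)
```

The only difference is where the coefficient comes from: loaded from storage (static) versus derived from the live batch (dynamic).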

## 2. Model Support List

| Model Name | Supported Quantization Precision |
|---------|---------|
| ERNIE-4.5-300B-A47B | WINT8, WINT4, Block-wise FP8, MixQuant |

## 3. Quantization Precision Terminology

FastDeploy names quantization precisions in the following format:

```
{tensor abbreviation}{numerical type}{tensor abbreviation}{numerical type}{tensor abbreviation}{numerical type}
```

Examples:

- **W8A8C8**: W = weights, A = activations, C = CacheKV; 8 defaults to INT8
- **W8A8C16**: 16 defaults to BF16; otherwise the same as above
- **W4A16C16 / WInt4 / weight-only int4**: 4 defaults to INT4
- **WNF4A8C8**: NF4 refers to the 4-bit norm-float numerical type
- **Wfp8Afp8**: Both weights and activations are FP8 precision
- **W4Afp8**: Weights are INT4, activations are FP8

docs/quantization/online_quantization.md (new file)
@@ -0,0 +1,54 @@

# Online Quantization

Online quantization means the inference engine quantizes the weights after loading the BF16 checkpoint, rather than loading pre-quantized low-precision weights. FastDeploy supports online quantization from BF16 to several precisions, including INT4, INT8, and FP8.

## 1. WINT8 & WINT4

Only the weights are quantized, to INT8 or INT4. During inference, the weights are dequantized back to BF16 on the fly and then computed with the activations; a rough sketch of this scheme follows the list below.

- **Quantization Granularity**: Only channel-wise quantization is supported.
- **Supported Hardware**: GPU, XPU
- **Supported Architecture**: MoE architecture, Dense Linear
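
As a rough illustration of the scheme above, the following NumPy sketch quantizes a weight matrix channel-wise to INT8 and dequantizes it before the matmul. It is not the actual FastDeploy kernel; the shapes are arbitrary and FP32 stands in for BF16, which NumPy does not support natively.

```
import numpy as np

w = np.random.randn(4096, 1024).astype(np.float32)       # [out_channels, in_channels]

# Channel-wise granularity: one scale per output channel.
scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
w_int8 = np.clip(np.round(w / scales), -127, 127).astype(np.int8)   # stored weights

# At inference time: dequantize on the fly, then compute with the (BF16) activation.
x = np.random.randn(1024, 8).astype(np.float32)
y = (w_int8.astype(np.float32) * scales) @ x
```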

### Run WINT8 or WINT4 Inference Service

```
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --port 8180 --engine-worker-queue-port 8181 \
    --cache-queue-port 8182 --metrics-port 8183 \
    --tensor-parallel-size 8 \
    --quantization wint8 \
    --max-model-len 32768 \
    --max-num-seqs 32
```

- Specifying `--model baidu/ERNIE-4.5-300B-A47B-Paddle` downloads the model automatically from AIStudio. FastDeploy requires models in Paddle format; see the [Supported Model List](https://console.cloud.baidu-int.com/devops/icode/repos/baidu/paddle_internal/FastDeploy/blob/feature%2Finference-refactor-20250528/docs/supported_models.md) for details.
- Setting `--quantization` to `wint8` or `wint4` selects online INT8 or INT4 quantization.
- Deploying ERNIE-4.5-300B-A47B-Paddle with WINT8 requires at least 8 x 80GB cards; WINT4 requires 4 x 80GB cards.
- For more deployment tutorials, please refer to [get_started](../get_started/ernie-4.5.md).
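
Once the service is running it exposes an OpenAI-compatible API, so any OpenAI client can be pointed at it. A minimal request sketch using the `openai` Python package, assuming the server above is reachable on `localhost:8180`; the prompt and the `api_key` placeholder are illustrative:

```
from openai import OpenAI

# Local servers typically ignore the API key, so any placeholder works.
client = OpenAI(base_url="http://localhost:8180/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="baidu/ERNIE-4.5-300B-A47B-Paddle",
    messages=[{"role": "user", "content": "Briefly explain quantized inference."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```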

## 2. Block-wise FP8

The BF16 model is loaded and its weights are quantized to the FP8 numerical type at 128x128 block-wise granularity. During inference, activations are dynamically quantized to FP8 in real time at token-wise granularity; a rough sketch follows the list below.

- **FP8 Specification**: float8_e4m3fn
- **Supported Hardware**: GPU with Hopper architecture
- **Supported Architecture**: MoE architecture, Dense Linear
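
The following NumPy sketch illustrates the two granularities described above: one static scale per 128x128 weight block and one dynamic scale per activation token. It is not the actual kernel; NumPy has no float8 type, so the cast to float8_e4m3fn is only indicated by clipping to its maximum representable value (~448).

```
import numpy as np

FP8_MAX = 448.0   # max finite value of float8_e4m3fn
BLOCK = 128

# Weights: one static scale per 128x128 block, computed once when the BF16 checkpoint is loaded.
w = np.random.randn(1024, 1024).astype(np.float32)
w_blocks = w.reshape(w.shape[0] // BLOCK, BLOCK, w.shape[1] // BLOCK, BLOCK)
w_scales = np.abs(w_blocks).max(axis=(1, 3), keepdims=True) / FP8_MAX
w_fp8_like = np.clip(w_blocks / w_scales, -FP8_MAX, FP8_MAX)   # would be cast to float8_e4m3fn

# Activations: one dynamic scale per token (row), recomputed for every incoming batch.
x = np.random.randn(8, 1024).astype(np.float32)                # [tokens, hidden]
x_scales = np.abs(x).max(axis=1, keepdims=True) / FP8_MAX
x_fp8_like = np.clip(x / x_scales, -FP8_MAX, FP8_MAX)
```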

### Run Block-wise FP8 Inference Service

```
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --port 8180 --engine-worker-queue-port 8181 \
    --cache-queue-port 8182 --metrics-port 8183 \
    --tensor-parallel-size 8 \
    --quantization block_wise_fp8 \
    --max-model-len 32768 \
    --max-num-seqs 32
```

- Specifying `--model baidu/ERNIE-4.5-300B-A47B-Paddle` downloads the model automatically from AIStudio. FastDeploy requires models in Paddle format; see the [Supported Model List](https://console.cloud.baidu-int.com/devops/icode/repos/baidu/paddle_internal/FastDeploy/blob/feature%2Finference-refactor-20250528/docs/supported_models.md) for details.
- Setting `--quantization` to `block_wise_fp8` selects online Block-wise FP8 quantization.
- Deploying ERNIE-4.5-300B-A47B-Paddle with Block-wise FP8 requires at least 8 x 80GB cards.
- For more deployment tutorials, please refer to [get_started](../get_started/ernie-4.5.md).

docs/quantization/wint2.md (new file)
@@ -0,0 +1,59 @@

# WINT2 Quantization

Weights are compressed offline with the CCQ (Convolutional Coding Quantization) method. The weights are physically stored as INT8, with four weights packed into each INT8 value, which is equivalent to 2 bits per weight (the packing layout is sketched below). Activations are not quantized. During inference, the weights are decoded and dequantized to BF16 on the fly, and all computation is performed in BF16.

- **Supported Hardware**: GPU
- **Supported Architecture**: MoE architecture

CCQ WINT2 is typically used in resource-constrained, low-entry-barrier scenarios. For ERNIE-4.5-300B-A47B, the weights compress to 89 GB, enabling single-card deployment on a 141 GB H20.
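
A minimal NumPy sketch of the storage layout only: four 2-bit codes packed into a single INT8 value, giving 2 bits of storage per weight. The real CCQ encoding and decoding are more involved; the code values, shapes, and bit ordering here are hypothetical.

```
import numpy as np

codes = np.random.randint(0, 4, size=(8, 4096), dtype=np.uint8)   # hypothetical 2-bit codes in [0, 3]

# Pack: every group of four consecutive codes shares one stored byte.
packed = (codes[:, 0::4]
          | (codes[:, 1::4] << 2)
          | (codes[:, 2::4] << 4)
          | (codes[:, 3::4] << 6)).astype(np.int8)

# Unpack (what the inference kernel would do on the fly before decoding to BF16).
unpacked = np.stack([(packed.view(np.uint8) >> s) & 0b11 for s in (0, 2, 4, 6)], axis=-1)
assert np.array_equal(unpacked.reshape(codes.shape), codes)
```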

## Run WINT2 Inference Service

```
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle \
    --port 8180 --engine-worker-queue-port 8181 \
    --cache-queue-port 8182 --metrics-port 8183 \
    --tensor-parallel-size 1 \
    --max-model-len 32768 \
    --max-num-seqs 32
```

Specifying `--model baidu/ERNIE-4.5-300B-A47B-2Bits-Paddle` automatically downloads the offline-quantized WINT2 model from AIStudio. The model's config.json already contains the WINT2 quantization configuration, so `--quantization` does not need to be set when starting the inference service.

Example of the quantization configuration in the model's config.json file:

```
"quantization_config": {
    "dense_quant_type": "wint8",
    "moe_quant_type": "w4w2",
    "quantization": "wint2",
    "moe_quant_config": {
        "moe_w4_quant_config": {
            "quant_type": "wint4",
            "quant_granularity": "per_channel",
            "quant_start_layer": 0,
            "quant_end_layer": 6
        },
        "moe_w2_quant_config": {
            "quant_type": "wint2",
            "quant_granularity": "pp_acc",
            "quant_group_size": 64,
            "quant_start_layer": 7,
            "quant_end_layer": 53
        }
    }
}
```

- For more deployment tutorials, please refer to [get_started](../get_started/ernie-4.5.md).
- For more model descriptions, please refer to the [Supported Model List](https://console.cloud.baidu-int.com/devops/icode/repos/baidu/paddle_internal/FastDeploy/blob/feature%2Finference-refactor-20250528/docs/supported_models.md).

## WINT2 Performance

Comparison of WINT2 vs. WINT4 on the ERNIE-4.5-300B-A47B model:

| Test Set | Dataset Size | WINT4 | WINT2 |
|---------|---------|---------|---------|
| IFEval | 500 | 88.17 | 85.40 |
| BBH | 6511 | 94.43 | 92.02 |
| DROP | 9536 | 91.17 | 89.97 |