polish code with new pre-commit rule (#2923)

Zero Rains
2025-07-19 23:19:27 +08:00
committed by GitHub
parent b8676d71a8
commit 25698d56d1
424 changed files with 14307 additions and 13518 deletions


@@ -24,7 +24,7 @@ python -m fastdeploy.entrypoints.openai.api_server \
 - By specifying `--model baidu/ERNIE-4.5-300B-A47B-Paddle`, the model can be automatically downloaded from AIStudio. FastDeploy depends on Paddle format models. For more information, please refer to [Supported Model List](../supported_models.md).
 - By setting `--quantization` to `wint8` or `wint4`, online INT8/INT4 quantization can be selected.
-- Deploying ERNIE-4.5-300B-A47B-Paddle WINT8 requires at least 80G * 8 cards, while WINT4 requires 80GB * 4 cards.
+- Deploying ERNIE-4.5-300B-A47B-Paddle WINT8 requires at least 80G *8 cards, while WINT4 requires 80GB* 4 cards.
 - For more deployment tutorials, please refer to [get_started](../get_started/ernie-4.5.md).
 ## 2. Block-wise FP8
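The card counts quoted in the hunk above are consistent with simple weight-memory arithmetic. A minimal sketch, assuming weight storage dominates GPU memory (GB ≈ params-in-billions × bits-per-weight / 8; the 300B figure comes from the model name, everything else here is back-of-the-envelope):

```shell
# Rough weight-memory check for the quoted card counts.
# Assumption (not from the diff): weights dominate, GB ~= 300 * bits / 8.
wint8_gb=$((300 * 8 / 8))   # INT8: ~300 GB of weights -> 8 x 80 GB cards
wint4_gb=$((300 * 4 / 8))   # INT4: ~150 GB of weights -> 4 x 80 GB cards
echo "WINT8 ~${wint8_gb} GB, WINT4 ~${wint4_gb} GB"
```

This ignores activations and KV cache, so the real deployments need the headroom the per-card totals (640 GB and 320 GB) provide.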
@@ -51,4 +51,4 @@ python -m fastdeploy.entrypoints.openai.api_server \
 - By specifying `--model baidu/ERNIE-4.5-300B-A47B-Paddle`, the model can be automatically downloaded from AIStudio. FastDeploy depends on Paddle format models. For more information, please refer to [Supported Model List](../supported_models.md).
 - By setting `--quantization` to `block_wise_fp8`, online Block-wise FP8 quantization can be selected.
 - Deploying ERNIE-4.5-300B-A47B-Paddle Block-wise FP8 requires at least 80G * 8 cards.
-- For more deployment tutorials, please refer to [get_started](../get_started/ernie-4.5.md)
+- For more deployment tutorials, please refer to [get_started](../get_started/ernie-4.5.md)
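Both hunks annotate flags of the same launch command, whose tail is truncated in the hunk headers. A sketch of that invocation with the flags the doc itself mentions (any further flags from the truncated original are omitted here; this is a deployment fragment that needs the 8 x 80 GB GPUs listed above and will download the model, not something to run blindly):

```shell
# Launch sketch using only the flags shown in the doc being diffed.
python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-300B-A47B-Paddle \
    --quantization block_wise_fp8
```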