[Doc] Update parameters of serving

Author: Jiang-Jia-Jun
Date: 2025-07-30 22:35:01 +08:00
Parent: fe0e3f508b
Commit: 998968f1e8
2 changed files with 2 additions and 0 deletions


@@ -94,6 +94,7 @@ The differences in request parameters between FastDeploy and the OpenAI protocol
- `enable_thinking`: Optional[bool] = True (whether to enable reasoning for models that support deep thinking)
- `repetition_penalty`: Optional[float] = None (coefficient that directly penalizes repeated token generation; >1 penalizes repetition, <1 encourages it)
- `return_token_ids`: Optional[bool] = False (whether to return the generated token ids as a list)
- `include_stop_str_in_output`: Optional[bool] = False (whether to include the stop strings in the output text)
> Note: For multimodal models, the reasoning chain is enabled by default, which can make outputs very long; `max_tokens` can be set to the model's maximum output length, or left at the default value.
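
A minimal sketch of how these parameters might be passed, assuming a FastDeploy server exposing an OpenAI-compatible endpoint at `http://localhost:8000/v1` (host, port, API key, and model name are illustrative, not taken from this commit). Since the parameters above are FastDeploy extensions to the OpenAI protocol, the OpenAI Python client forwards them via `extra_body`:

```python
from openai import OpenAI

# Hypothetical endpoint and model name; adjust to your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="default",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=1024,
    # FastDeploy-specific parameters go in extra_body, since they are
    # not part of the standard OpenAI request schema.
    extra_body={
        "enable_thinking": True,           # toggle reasoning for thinking models
        "repetition_penalty": 1.05,        # >1 penalizes repeated tokens
        "return_token_ids": True,          # also return generated token ids
        "include_stop_str_in_output": False,
    },
)
print(response.choices[0].message.content)
```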


@@ -93,6 +93,7 @@ The differences in request parameters between FastDeploy and the OpenAI protocol are as follows; the remaining request parameters will
- `enable_thinking`: Optional[bool] = True (whether to enable reasoning for models that support deep thinking)
- `repetition_penalty`: Optional[float] = None (coefficient that directly penalizes repeated token generation; >1 penalizes repetition, <1 encourages it)
- `return_token_ids`: Optional[bool] = False (whether to return the list of token ids)
- `include_stop_str_in_output`: Optional[bool] = False (whether to include the stop string in the output)
> Note: For multimodal models, the reasoning chain is enabled by default, which can make outputs very long; `max_tokens` can be set to the model's maximum output length, or left at the default value.