[Feature] Support limit thinking len for text models (#3527)

* support limit thinking len

* remove default think_end_id

* remove reasoning_max_tokens

* update think_end_id for ernie

* update think_end_id for ernie.

---------

Co-authored-by: K11OntheBoat <ruianmaidanglao@163.com>
Co-authored-by: luukunn <981429396@qq.com>
This commit is contained in:
K11OntheBoat
2025-08-22 14:48:15 +08:00
committed by GitHub
parent 4d6fb96cd6
commit 93d999b830
6 changed files with 64 additions and 26 deletions


@@ -118,6 +118,7 @@ class ModelConfig:
         self.enable_redundant_experts = False
         self.redundant_experts_num = 0
         self.quantization = None
+        self.think_end_id = None
         for key, value in args.items():
             if hasattr(self, key):
                 setattr(self, key, value)