4a9c04a746  luukunn  2025-08-22 11:14:35 +08:00
[Feature] add tool parser (#3518)

* [Feature] Pass through the `chat_template_kwargs` to the data processing module (#3421)
* fix chat_template_args
* fix args
* add offline
* add offline
* fix
* fix
* fix default enable_thinking value
* fix default enable_thinking value
* modify condition
* Revert "modify condition"
  This reverts commit 26430bdeb1 (#3272)
* add tool-parser
* add tool-parser
* add tool parser
* add tool parser
* fix
* add offline
* add offline
* fix
* parsers: tool & reasoning
* rename tool parser
* update
* fix reasoning-parser
* add requirements
* fix finish reason
* fix
* fix reasoning-parser
* fix
* fix
* fix
* fix
* fix
---------
Co-authored-by: zhuzixuan <zhuzixuan@baidu.com>
* [Feature] add tool parser (#3483)
* add tool parser
* add x1 enable_thinking
* restart ci
* fix vl reasoning parser
* modify call style
* modify call style
* add offline enablethinking
* fix completion
* fix
* fix unit test
* fix unit test
* fix unit test
* fix vl reasoning parser
* fix vl reasoning parser
* fix unit test
---------
Co-authored-by: zhuzixuan <zhuzixuan@baidu.com>

101605869c  yinwei  2025-08-14 17:41:40 +08:00
[XPU] Fixed the issue of performance degradation caused by enabling ENABLE_V1_KVCACHE_SCHEDULER (#3393)

* fix v1 schedule oom bug
* fix v1 schedule oom bug

28918702c2  Jiang-Jia-Jun  2025-08-14 17:20:29 +08:00
Revert "Merge branch 'feature/online/vs_think_20250813' into release/2.1"

This reverts commit 02596fc53703347626a6

81092c0fe3  luukunn  2025-08-13 16:06:22 +08:00
add tool parser

25f51b0611  chenjian  2025-08-12 00:41:05 +08:00
Fix block num in scheduler v1 for release 2.1 (#3315)

* fix bug for scheduler v0
* fix block num setting in scheduler v1 for release 2.1
* fix block num setting in scheduler v1 for release 2.1
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
Co-authored-by: YUNSHEN XIE <1084314248@qq.com>

d17886de19  ltd0924  2025-07-30 20:43:39 +08:00
[Feature] support ep in mixed mode (#3001)

* [LLM] support ep
* Update worker_process.py
* Update expert_service.py
* Update worker_process.py
* format files

db698bda01  bukejiyu  2025-07-30 19:09:38 +08:00
qwen loader (#3057)

99a70fc722  YuanRisheng  2025-07-30 11:41:23 +08:00
unify parallel config (#3070)

b2f9a42d87  Zero Rains  2025-07-29 22:42:54 +08:00
[Feature] Support repetition early stop (#3024)

* support repetition early stop and support user to set the parameter
* remove log
* fix codestyle
* add the early_stop_config to rollout_config
* update config and EarlyStopper class
* fix the bug for triton
* modify the stop method
* update description
* modify the usage for stop_flags
---------
Co-authored-by: Yuanle Liu <yuanlehome@163.com>

502ee92a0a  YuanRisheng  2025-07-29 17:07:44 +08:00
Unify server-side and model-side Config (Part3) (#3047)

* merge model config
* fix arch
* fix rl

bddf403576  YuanRisheng  2025-07-28 15:31:48 +08:00
Unify server-side and model-side Config (Part2) (#3035)

* merge speculative and graph opt config
* add attr

6ccc10ad47  YuanRisheng  2025-07-28 10:51:52 +08:00
Unify server-side and model-side Config (Part1) (#3018)

* move cache config
* fix mtp

f935d6f862  ltd0924  2025-07-24 15:04:04 +08:00
[BugFix] fix multinode deployment (#2977)

85a78d695d  chenjian  2025-07-23 20:31:31 +08:00
[Feature] Support block scheduler v1 for FD (#2928)

* Support FD block scheduler v1
* Support FD block scheduler v1
* Support FD block scheduler v1
* Fix according to copilot review
* Fix according to review
* Remove is_dummy
* Fix bug when real_bsz=1
* Fix infer first token cost time
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

95b5af24db  Ryan  2025-07-22 21:36:14 +08:00
[SOT] Add sot warmup (NVIDIA GPU Only) (#2929)

* add sot warmup
* fix code style
* change batch_size list
* add param to config
* rm free_list settings && set sot_warmup_sizes
* finish debug with dynamic dims by type annotations
* add profile_run guard
* rm sth useless

0262ef7eb3  zhink  2025-07-21 22:52:03 +08:00
custom all reduce support cuda graph (#2938)

* Support enabling cuda graph and custom all reduce at the same time, and fix the overwritten custom all reduce flag
* rename communication_op to communication

25698d56d1  Zero Rains  2025-07-19 23:19:27 +08:00
polish code with new pre-commit rule (#2923)

b630031414  ltd0924  2025-07-17 14:21:05 +08:00
[LLM] fix several bugs (#2878)

9c25dcca0b  ltd0924  2025-07-16 23:42:54 +08:00
[LLM] Update Multinode Deployment (#2830)

* [LLM] fix multinode bugs
* [LLM] update multinode deployment
* [LLM] update multinode deployment
* [LLM] update multinode deployment
* [LLM] update multinode deployment
* [LLM] update multinode deployment
* [LLM] fix ci bugs
* Update fastdeploy/engine/args_utils.py
  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* [LLM] update random port
* [LLM] update random port
* [LLM] fix ci bugs
* fix ci bugs
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

dda4a9f848  Yuanle Liu  2025-07-16 00:33:10 -07:00
rl update (#2861)

0fad10b35a  RAM  2025-07-15 19:49:01 -07:00
[Executor] CUDA Graph support padding batch (#2844)

* cuda graph support padding batch
* Integrate the startup parameters for the graph optimization backend and provide support for user-defined capture sizes.
* Do not insert max_num_seqs when the user specifies a capture list
* Support set graph optimization config from YAML file
* update cuda graph ci
* fix ci bug
* fix ci bug

830de5a925  yulangz  2025-07-10 16:48:08 +08:00
[XPU] Supports TP4 deployment on 4,5,6,7 (#2794)

* Support running on cards 4,5,6,7 specified via XPU_VISIBLE_DEVICES
* Update the multi-card notes in the XPU documentation

d33105baeb  chen  2025-07-10 16:33:40 +08:00
[Feature] Online Chat API Support Return logprobs (#2777)

* online chat support logprobs
* check xpu
* check vl_gpu_model_runner and xpu_model_runner
* get_worker() check platform

e45050cae3  Sunny-bot1  2025-07-09 20:58:58 -07:00
[Feature] support top_k_top_p sampling (#2753)

* support top_k_top_p sampling
* fix
* add api param
* add api para
* fix
* fix
* fix
* fix
* fix
* fix
* fix

b89180f1cd  zhink  2025-07-09 16:00:27 +08:00
[Feature] support custom all-reduce (#2758)

* [Feature] support custom all-reduce
* add vllm adapted

f7cad30a38  GoldPancake  2025-07-09 12:08:43 +08:00
[Feature] Add speculative decoding simulation benchmark. (#2751)

* Add speculative decoding simulation benchmark
* Fix the name of the parameter

6b10c19482  gaoziyuan  2025-07-09 11:52:22 +08:00
[Feature] add fd commit/branch info when starting server (#2752)

* add_commit_config
* fix
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

1b54a2831e  liddk1121  2025-07-07 16:53:14 +08:00
Adapt for iluvatar gpu (#2684)

68b4755587  ltd0924  2025-07-06 10:33:51 +08:00
[LLM] support multi node deploy (#2708)

* [LLM] support multi node deploy
* Update engine.py
* fix bugs
* fix
* [LLM] support multi node deploy
* [LLM] support multi node deploy
---------
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

240bdac2a4  Yuanle Liu  2025-07-03 22:33:27 +08:00
[feat] support fa3 backend for pd disaggregated (#2695)

* support fa3 backend run in pd disaggregated
* support fa3 backend run in pd disaggregated
* support fa3 backend run in pd disaggregated
* support fa3 backend run in pd disaggregated
* delete use_fast_ffn

05c670e593  Jiang-Jia-Jun  2025-07-03 15:43:53 +08:00
[Sync] Update to latest code (#2679)

* [Sync] Update to latest code
* Add new code files
* Add new code files
* update code
* Try to fix build.sh
* Try to fix build.sh
* Update code
* Update requirements.txt
* Update code
---------
Co-authored-by: Jiang-Jia-Jun <jiangjiajun@baidu.com>

92c2cfa2e7  Jiang-Jia-Jun  2025-06-29 23:29:37 +00:00
Sync v2.0 version of code to github repo

684703fd72  jiangjiajun  2025-06-09 19:20:15 +08:00
[LLM] First commit the llm deployment code