fee544e808  RichardWooSJTU  2025-07-09 14:03:05 +08:00
    fix ep prefill (#2762)

c4718fd693  Ryan  2025-07-09 12:26:18 +08:00
    Enable SOT D2St in Multimodal Model (#2735)

f7cad30a38  GoldPancake  2025-07-09 12:08:43 +08:00
    [Feature] Add speculative decoding simulation benchmark. (#2751)
    * Add speculative decoding simulation benchmark
    * Fix the name of the parameter

6610aa29d0  RichardWooSJTU  2025-07-09 10:38:12 +08:00
    Revert "[Bug fix] fix attention rank init (#2743)" (#2761)
    This reverts commit e8bbe7244b

f72c4de539  Ryan  2025-07-08 19:21:44 +08:00
    [SOT] Make custom_op dy&st unified (#2733)
    * make_custom_op dy&st unified
    * add instance judgement

e8bbe7244b  RichardWooSJTU  2025-07-08 17:19:49 +08:00
    [Bug fix] fix attention rank init (#2743)
    * fix attention rank init
    * fix attention rank init

525be243e7  lizexu123  2025-07-07 23:15:27 -07:00
    [Bug fix] Fixed the garbled text issues in Qwen3-8B (#2737)
    * fix qwen3.py
    * update
    * update lm_head tie_word_embeddings
    * update tie_word_embeddings
    * fix
    * fix tie_word_embedding not in config.json
    ---------
    Co-authored-by: lizexu <lizexu@baidu.com>

d0f4d6ba3a  EnflameGCU  2025-07-08 13:00:52 +08:00
    [GCU] Support gcu platform (#2702)
    baseline: e7fa57ebae
    Co-authored-by: <xing.wo@163.com>

26d5d737dd  gaoziyuan  2025-07-08 12:03:04 +08:00
    [Feature] support qwen2 some func (#2740)
    * add rl qwen model support
    * fix
    * fix

fefbd65cf8  Ryan  2025-07-08 11:44:25 +08:00
    [SOT] Remove BreakGraph with paddle.maximum (#2731)
    * rm if with clip
    * clip -> maximum
    * int64 -> int32

1eb8ea7328  ming1753  2025-07-08 11:24:52 +08:00
    [Bug fix] fix compile bug when sm < 89 (#2738)

ef6649a577  ming1753  2025-07-07 20:06:28 +08:00
    [Optimize] Optimize tensorwise fp8 performance (#2729)
    * [Optimize] Optimize tensorwise fp8 performance

1b54a2831e  liddk1121  2025-07-07 16:53:14 +08:00
    Adapt for iluvatar gpu (#2684)

e7fa57ebae  GoldPancake  2025-07-04 14:15:04 +08:00
    Extract eh_proj Layer from ParallelLMHead for MTP to Avoid Weight Transposition Issue (#2707)
    * fix mtp eh_proj layer
    * fix mtp update_cfg function
    * fix stringdoc
    * simplify class name

240bdac2a4  Yuanle Liu  2025-07-03 22:33:27 +08:00
    [feat] support fa3 backend for pd disaggregated (#2695)
    * support fa3 backend run in pd disaggregated
    * support fa3 backend run in pd disaggregated
    * support fa3 backend run in pd disaggregated
    * support fa3 backend run in pd disaggregated
    * delete use_fast_ffn

05c670e593  Jiang-Jia-Jun  2025-07-03 15:43:53 +08:00
    [Sync] Update to latest code (#2679)
    * [Sync] Update to latest code
    * Add new code files
    * Add new code files
    * update code
    * Try to fix build.sh
    * Try to fix build.sh
    * Update code
    * Update requirements.txt
    * Update code
    ---------
    Co-authored-by: Jiang-Jia-Jun <jiangjiajun@baidu.com>

a197dcd729  AIbin  2025-07-01 18:29:11 +08:00
    [Inference Optimize] Support ERNIE-4_5-300B-A47B-2BITS-Paddle model TP2/TP4 Inference (#2666)
    * Support TP2&TP4 Wint
    * Support TP2&TP4 Wint2 Inference

92c2cfa2e7  Jiang-Jia-Jun  2025-06-29 23:29:37 +00:00
    Sync v2.0 version of code to github repo

684703fd72  jiangjiajun  2025-06-09 19:20:15 +08:00
    [LLM] First commit the llm deployment code