[Feature] block sparse attention (#3668)

* Support sparse attention

* fix bug

* code style

* fix moba attn get kv shape

* Fix A100 compilation

* codestyle

* code style

* code style

* code style

* fix conflict

* Add unit tests

* code style

* Add eblite load time

* fix bug

* for ci

* for ci

* for ci

* for ci

* Support MLP block size 128

* Add unit tests for small operators

* Fix MLP unit test

* Move environment variables into the config

* fix rollout config

* Fix GPU memory usage

* add test server

* add test server

* Fix MLP: use full attn for the last layer
Author: yangjianfengo1
Date: 2025-08-29 19:46:30 +08:00
Committed by: GitHub
Commit: 3754a9906d (parent: ccd52b5596)
31 changed files with 6553 additions and 10 deletions
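For reviewers unfamiliar with the feature: MoBA-style block sparse attention mean-pools each key block into a single representative, scores every query against those representatives, and attends only over the top-k winning blocks. The sketch below is illustrative only, not the fused CUDA kernel this PR adds; all names and defaults are hypothetical.

```python
# Illustrative MoBA-style block sparse attention (not the fused CUDA
# kernel from this PR; function and parameter names are hypothetical).
import torch

def moba_block_sparse_attention(q, k, v, block_size=128, top_k=4):
    # q: [num_q, d], k/v: [num_kv, d]
    num_kv, d = k.shape
    num_blocks = (num_kv + block_size - 1) // block_size
    # Mean-pool each key block into one cheap block representative.
    pad = num_blocks * block_size - num_kv
    k_pad = torch.nn.functional.pad(k, (0, 0, 0, pad))
    block_repr = k_pad.view(num_blocks, block_size, d).mean(dim=1)  # [B, d]
    # Score every query against every block, keep only top-k blocks.
    gate = q @ block_repr.T                                         # [num_q, B]
    top = gate.topk(min(top_k, num_blocks), dim=-1).indices
    out = torch.zeros_like(q)
    for i in range(q.shape[0]):
        # Gather the kv positions of this query's selected blocks.
        cols = torch.cat([
            torch.arange(b * block_size, min((b + 1) * block_size, num_kv))
            for b in top[i].tolist()
        ])
        attn = torch.softmax(q[i] @ k[cols].T / d ** 0.5, dim=-1)
        out[i] = attn @ v[cols]
    return out
```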

@@ -34,6 +34,7 @@ from fastdeploy.config import (
     FDConfig,
     GraphOptimizationConfig,
     LoadConfig,
+    MobaAttentionConfig,
     ModelConfig,
     ParallelConfig,
     SpeculativeConfig,
@@ -553,6 +554,12 @@ def parse_args():
         default=None,
         help="Configuration of Graph optimization backend.",
     )
+    parser.add_argument(
+        "--moba_attention_config",
+        type=json.loads,
+        default=None,
+        help="Configuration of moba attention.",
+    )
     parser.add_argument(
         "--guided_decoding_backend",
         type=str,
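Because the new flag is parsed with `type=json.loads`, its value must be a single JSON object on the command line, and `args.moba_attention_config` arrives as a dict (or `None` when the flag is omitted). A quick sketch of what the worker receives; the key names below are hypothetical, the real schema lives in `MobaAttentionConfig`:

```python
import json

# What argparse does with --moba_attention_config: the raw CLI string
# is decoded by json.loads before reaching initialize_fd_config.
raw = '{"moba_block_size": 128, "moba_encoder_top_k": 4}'  # keys hypothetical
moba_args = json.loads(raw)      # -> dict, or None when the flag is omitted
print(moba_args["moba_block_size"])  # 128
```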
@@ -658,6 +665,8 @@ def initialize_fd_config(args, ranks: int = 1, local_rank: int = 0) -> FDConfig:
     graph_opt_config = GraphOptimizationConfig(args.graph_optimization_config)
+    moba_attention_config = MobaAttentionConfig(args.moba_attention_config)
     early_stop_config = EarlyStopConfig(args.early_stop_config)
     # Note(tangbinhan): used for load_checkpoint
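`MobaAttentionConfig` is constructed directly from that decoded dict, mirroring how `GraphOptimizationConfig` and `EarlyStopConfig` are built above. A minimal sketch of this wrapper pattern; the field names and defaults are hypothetical, not the actual `MobaAttentionConfig` fields:

```python
from typing import Optional

class MobaAttentionConfigSketch:
    """Sketch of the config-wrapper pattern; field names and defaults
    are hypothetical, not fastdeploy's real MobaAttentionConfig."""

    def __init__(self, args: Optional[dict] = None):
        self.moba_block_size = 128    # hypothetical default
        self.moba_encoder_top_k = 4   # hypothetical default
        if args is not None:          # args is the json.loads result
            for key, value in args.items():
                if hasattr(self, key):
                    setattr(self, key, value)

cfg = MobaAttentionConfigSketch({"moba_block_size": 128})
assert cfg.moba_block_size == 128
```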
@@ -739,6 +748,7 @@ def initialize_fd_config(args, ranks: int = 1, local_rank: int = 0) -> FDConfig:
         cache_config=cache_config,
         engine_worker_queue_port=args.engine_worker_queue_port,
         ips=args.ips,
+        moba_attention_config=moba_attention_config,
     )
     update_fd_config_for_mm(fd_config)