[Feature] block sparse attention (#3209)

* support sparse attention (see the block-selection sketch after this list)

* fix bug

* code style

* fix moba attn get kv shape

* fix A100 compilation

* code style

* code style

* code style

* code style

* fix conflict

* add unit tests

* code style

* increase eblite loading time

* fix bug

* for ci

* for ci

* for ci

* for ci

* support MLP block size 128

* add unit tests for small operators

* fix MLP unit tests

* move environment variables into the config (see the config sketch below the commit metadata)

* fix rollout config
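
For orientation, the feature being merged is MoBA-style block sparse attention: keys are grouped into fixed-size blocks (the commits mention a block size of 128), each query scores a pooled summary of every block, and full attention runs only over the highest-scoring blocks. The NumPy sketch below shows the block-selection step only; the function and parameter names are hypothetical and are not this PR's kernel API.

    import numpy as np

    def select_moba_blocks(q, k, block_size=128, top_k=4):
        # q: (head_dim,) one query; k: (seq_len, head_dim) keys,
        # with seq_len an exact multiple of block_size.
        num_blocks = k.shape[0] // block_size
        # Mean-pool each key block into a single representative vector.
        block_means = k.reshape(num_blocks, block_size, -1).mean(axis=1)
        # Score every block against the query; attend only to the top_k.
        scores = block_means @ q
        return np.argsort(scores)[-top_k:]

    rng = np.random.default_rng(0)
    q = rng.standard_normal(64)
    k = rng.standard_normal((1024, 64))
    print(select_moba_blocks(q, k))  # indices of the 4 key blocks to attend to
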
Authored by yangjianfengo1 on 2025-08-26 22:16:04 +08:00; committed by GitHub
parent f0a362af18
commit 646a0c2fd8
31 changed files with 6507 additions and 10 deletions
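
One of the commits moves environment-variable knobs into the config object. A minimal sketch of that pattern follows, assuming hypothetical variable and field names (the PR's real keys may differ): the values are read once from the environment when the config is constructed, instead of scattering os.getenv calls through the kernels.

    import os
    from dataclasses import dataclass, field

    @dataclass
    class MobaAttentionConfig:
        # Hypothetical knobs with defaults; overridable via the environment.
        moba_block_size: int = field(
            default_factory=lambda: int(os.getenv("FD_MOBA_BLOCK_SIZE", "128")))
        moba_top_k: int = field(
            default_factory=lambda: int(os.getenv("FD_MOBA_TOP_K", "4")))

    cfg = MobaAttentionConfig()
    print(cfg.moba_block_size, cfg.moba_top_k)
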

@@ -385,6 +385,7 @@ elif paddle.is_compiled_with_cuda():
            "-Igpu_ops",
            "-Ithird_party/nlohmann_json/include",
        ]
        nvcc_version = get_nvcc_version()
        print(f"nvcc_version = {nvcc_version}")
        if nvcc_version >= 12.0:
@@ -508,6 +509,10 @@ elif paddle.is_compiled_with_cuda():
            # Hopper optmized mla
            sources += find_end_files("gpu_ops/mla_attn", ".cu")
            sources += ["gpu_ops/flash_mask_attn/flash_mask_attn.cu"]
+           sources += find_end_files("gpu_ops/moba_attn/moba_decoder_attn/", ".cu")
+           sources += find_end_files("gpu_ops/moba_attn/moba_encoder_attn/", ".cu")
+           sources += find_end_files("gpu_ops/moba_attn/moba_process/", ".cu")
+           sources += ["gpu_ops/moba_attn/moba_attn.cu"]
            os.system("python utils/auto_gen_w4afp8_gemm_kernel.py")
            sources += find_end_files("gpu_ops/w4afp8_gemm", ".cu")
            os.system("python utils/auto_gen_wfp8afp8_sparse_gemm_kernel.py")