Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-10-05 16:48:03 +08:00
[Feature] block sparse attention (#3668)
* Support sparse attn * fix bug * code style * fix moba attn get kv shape * fix A100 compilation * codestyle * code style * code style * code style * fix conflict * add unit test * code style * add eblite loading time * fix bug * for ci * for ci * for ci * for ci * support MLP block size 128 * add unit tests for small operators * fix MLP unit test * move environment variables into the config * fix rollout config * fix GPU memory usage * add test server * add test server * fix MLP: use full attn for the last layer
This commit is contained in:
@@ -64,6 +64,9 @@ class CUDAPlatform(Platform):
         elif selected_backend == _Backend.FLASH_ATTN:
             logger.info("Using FLASH ATTN backend.")
             return "fastdeploy.model_executor.layers.attention.FlashAttentionBackend"
+        elif selected_backend == _Backend.MOBA_ATTN:
+            logger.info("Using MOBA ATTN backend.")
+            return "fastdeploy.model_executor.layers.attention.MobaAttentionBackend"
         else:
             raise ValueError(
                 "Invalid attention backend you specified.\n"