Revert "【FIX】Change the name of sparse attn from moba to plas (#3845)" (#4001)

This reverts commit e31c8f7336.
Author: Jiang-Jia-Jun
Date: 2025-09-09 11:08:23 +08:00
Committed by: GitHub
Parent: bbd548ceb6
Commit: c60adf4281
13 changed files with 150 additions and 150 deletions


@@ -64,9 +64,9 @@ class CUDAPlatform(Platform):
         elif selected_backend == _Backend.FLASH_ATTN:
             logger.info("Using FLASH ATTN backend.")
             return "fastdeploy.model_executor.layers.attention.FlashAttentionBackend"
-        elif selected_backend == _Backend.PLAS_ATTN:
-            logger.info("Using PLAS ATTN backend.")
-            return "fastdeploy.model_executor.layers.attention.PlasAttentionBackend"
+        elif selected_backend == _Backend.MOBA_ATTN:
+            logger.info("Using MOBA ATTN backend.")
+            return "fastdeploy.model_executor.layers.attention.MobaAttentionBackend"
         else:
             raise ValueError(
                 "Invalid attention backend you specified.\n"