FastDeploy (mirror of https://github.com/PaddlePaddle/FastDeploy.git)
Path:          FastDeploy/fastdeploy/model_executor/layers/moe
Commit:        af39819fcd8991f73ede4786f72fc2a72aa71876
Latest commit: chen aa35ce449d [Optimization] EP empty_input_forward Remove Communication (#5254), 2025-12-01 21:10:40 +08:00

File                          | Last commit message                                                             | Date
__init__.py                   | support w4afp8 EP inference (#3044)                                             | 2025-08-25 11:27:45 +08:00
ep.py                         | [Feature] Support noaux for eplb (#5143)                                        | 2025-11-21 14:10:32 +08:00
fused_moe_backend_base.py     | [Intel HPU] change MoE weights and scales from list to tensor and add… (#5289)  | 2025-11-28 19:17:05 +08:00
fused_moe_cutlass_backend.py  | [Optimization] EP empty_input_forward Remove Communication (#5254)              | 2025-12-01 21:10:40 +08:00
fused_moe_deepgemm_backend.py | [Optimization] Refine row parallel bias and nranks and moe all_reduce (#5247)   | 2025-11-26 05:09:09 -08:00
fused_moe_marlin_backend.py   | [Optimization] Refine row parallel bias and nranks and moe all_reduce (#5247)   | 2025-11-26 05:09:09 -08:00
fused_moe_triton_backend.py   | [Optimization] Refine row parallel bias and nranks and moe all_reduce (#5247)   | 2025-11-26 05:09:09 -08:00
fused_moe_wint2_backend.py    | [Optimization] Refine row parallel bias and nranks and moe all_reduce (#5247)   | 2025-11-26 05:09:09 -08:00
moe.py                        | [Feature] support chunked moe (#4575)                                           | 2025-12-01 15:17:18 +08:00
triton_moe_kernels.py         | [OPs] MoE support wfp8afp8(channelwise) and improve per_token_quant_fp8 (#4238) | 2025-09-24 16:39:51 +08:00