mirror of https://github.com/PaddlePaddle/FastDeploy.git synced 2025-12-24 13:28:13 +08:00
Directory: FastDeploy/fastdeploy/model_executor/layers/quantization (tree b87e2c6184b1d918b60a528aecfd54aa877e2403)
Latest commit: 26ff2f8683 by zhupengyang, "[XPU] refine fused moe (#4219)", 2025-10-16 19:04:07 +08:00
ops/                 [Optimize] Support WINT8 and group scale for Machete (#3905)                         2025-09-15 12:01:34 +08:00
__init__.py          [BugFix] fix v1 loader moe bf16, and support dynamic_load_weight create quant param (#4229)  2025-09-24 14:12:05 +08:00
block_wise_fp8.py    [v1 loader] qwen Offline fp8 (#4036)                                                 2025-09-15 13:44:11 +08:00
kv_cache.py          [XPU] Support W4A8C8-TP4-300B Model (#4068)                                          2025-10-10 15:41:32 +08:00
mix_quant.py         [v1 loader] qwen Offline fp8 (#4036)                                                 2025-09-15 13:44:11 +08:00
quant_base.py        …
tensor_wise_fp8.py   …
w4a8.py              [XPU] Support W4A8C8-TP4-300B Model (#4068)                                          2025-10-10 15:41:32 +08:00
w4afp8.py            load hadamard_block_size from config (#3797)                                         2025-09-05 17:07:58 +08:00
w8a8.py              fix w8a8.py (#3733)                                                                  2025-09-03 10:57:26 +08:00
weight_only.py       [XPU] refine fused moe (#4219)                                                       2025-10-16 19:04:07 +08:00
wfp8afp8.py          [OPs] MoE support wfp8afp8 (channelwise) and improve per_token_quant_fp8 (#4238)     2025-09-24 16:39:51 +08:00
wint2.py             …