apps/FastDeploy
mirror of https://github.com/PaddlePaddle/FastDeploy.git synced 2025-12-24 13:28:13 +08:00
8a9e7b53af4a98583cab65e4b44e3265a93e56d2
FastDeploy/fastdeploy/model_executor/layers/quantization
Latest commit: YuBaoku 819b2dbbae — Revert "【New Feature】W4afp8 supports per group quantization (#4272)" (#4854), 2025-11-06 17:48:28 +08:00
This reverts commit 93fcf7e4ec.

| Name | Last commit | Date |
|------|-------------|------|
| ops | WINT4/WINT8 dense gemm default use Machete (#4451) | 2025-10-23 17:57:59 +08:00 |
| __init__.py | [BugFix]fix v1 loader moe bf16, and supoort dynamic_load_weight create quant param (#4229) | 2025-09-24 14:12:05 +08:00 |
| block_wise_fp8.py | [v1 loader]qwen Offline fp8 (#4036) | 2025-09-15 13:44:11 +08:00 |
| kv_cache.py | [XPU] Support W4A8C8-TP4-300B Model (#4068) | 2025-10-10 15:41:32 +08:00 |
| mix_quant.py | Revert "【New Feature】W4afp8 supports per group quantization (#4272)" (#4854) | 2025-11-06 17:48:28 +08:00 |
| quant_base.py | polish code with new pre-commit rule (#2923) | 2025-07-19 23:19:27 +08:00 |
| tensor_wise_fp8.py | [NewFeatures] support eplb (#3547) | 2025-08-26 16:19:30 +08:00 |
| w4a8.py | [XPU] Support W4A8C8-TP4-300B Model (#4068) | 2025-10-10 15:41:32 +08:00 |
| w4afp8.py | load hadamard_block_size from config (#3797) | 2025-09-05 17:07:58 +08:00 |
| w8a8.py | fix w8a8.py (#3733) | 2025-09-03 10:57:26 +08:00 |
| weight_only.py | WINT4/WINT8 dense gemm default use Machete (#4451) | 2025-10-23 17:57:59 +08:00 |
| wfp8afp8.py | [BugFix]Fix wfp8afp8 triton moe group_topk renormalized=True (#4449) | 2025-10-16 23:17:48 +08:00 |
| wint2.py | fix wint2 config (#4721) | 2025-10-31 15:44:14 +08:00 |