apps/FastDeploy
mirror of https://github.com/PaddlePaddle/FastDeploy.git synced 2025-12-24 13:28:13 +08:00
Directory: FastDeploy/fastdeploy/model_executor at commit a2ec2c415222fbbc9b22e74f822a6bcf9e12e2b6
Latest commit: YuanRisheng a2ec2c4152 [FDConfig]Remove max_model_len in FDConfig (#4350)  2025-10-11 14:04:17 +08:00

Commit message:
* modify max_model_len
* fix unittest
* fix unittest

Co-authored-by: root <root@yqlcc01-sys-rpm12rzmwjd.yqlcc01.baidu.com>
graph_optimization       [Executor]CUDAGraph support Speculate Decode (#3769)  2025-10-09 21:18:29 +08:00
guided_decoding          [FDConfig]Remove max_num_batched_tokens/max_num_seqs in parallel config (#4116)  2025-09-17 10:43:35 +08:00
layers                   [FDConfig]Remove max_model_len in FDConfig (#4350)  2025-10-11 14:04:17 +08:00
model_loader             [v1 loader]code style (#4204)  2025-09-23 19:36:00 +08:00
models                   [FDConfig]Remove max_model_len in FDConfig (#4350)  2025-10-11 14:04:17 +08:00
ops                      [Intel HPU] Support intel hpu platform (#4161)  2025-09-24 12:27:50 +08:00
__init__.py              polish code with new pre-commit rule (#2923)  2025-07-19 23:19:27 +08:00
forward_meta.py          [XPU] Support W4A8C8-TP4-300B Model (#4068)  2025-10-10 15:41:32 +08:00
load_weight_utils.py     [v1 loader]code style (#4204)  2025-09-23 19:36:00 +08:00
pre_and_post_process.py  Fix wrong batch size of thinking_mask (#4296)  2025-09-28 14:56:42 +08:00
utils.py                 [Feature] support pool (#3827)  2025-09-22 14:09:09 +08:00