apps/FastDeploy
Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2025-12-24 13:28:13 +08:00)
Path: FastDeploy/fastdeploy/model_executor
Commit: 25a983ba9c3e761e78b2a117ea5e141abc94e2eb
Latest commit: RAM, "1.fix the bug of draft model with ep 2.fix sampler bug" (#4589), 2025-10-27 17:47:34 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| graph_optimization | [Graph Optimization] Add dy_runnable and introduce cudagraph_switch_threshold for cudagraph mode switching (#4578) | 2025-10-24 18:36:52 +08:00 |
| guided_decoding | [FDConfig] Remove reasoning_parser/guided_decoding_backend/disable_any_whitespace/device_ids in FDConfig (#4362) | 2025-10-17 10:40:59 +08:00 |
| layers | 1.fix the bug of draft model with ep 2.fix sampler bug (#4589) | 2025-10-27 17:47:34 +08:00 |
| model_loader | [Metax] adapt DeepSeek (#4498) | 2025-10-24 10:14:53 +08:00 |
| models | [V1 loader] Qwen25 VL support v1 loader and torch style safetensors load (#4388) | 2025-10-27 10:54:15 +08:00 |
| ops | delete useless code (#4544) | 2025-10-23 13:40:34 +08:00 |
| __init__.py | polish code with new pre-commit rule (#2923) | 2025-07-19 23:19:27 +08:00 |
| forward_meta.py | fix import image_ops error on some platforms (#4559) | 2025-10-24 16:09:20 +08:00 |
| load_weight_utils.py | [v1 loader] code style (#4204) | 2025-09-23 19:36:00 +08:00 |
| pre_and_post_process.py | [Iluvatar GPU] fix ci error caused by rebuild_padding param and cuda graph (#4504) | 2025-10-21 21:41:41 +08:00 |
| utils.py | [V1 loader] Qwen25 VL support v1 loader and torch style safetensors load (#4388) | 2025-10-27 10:54:15 +08:00 |