apps/FastDeploy
Mirror of https://github.com/PaddlePaddle/FastDeploy.git (synced 2025-10-11 11:30:20 +08:00)
Directory: FastDeploy/fastdeploy/model_executor
Commit: a42fc3f40bed7aa61de42065fd9f744ed236ee19
Latest commit: a42fc3f40b by xiaoxiaohehe001, 2025-07-18 17:57:15 +08:00
[Feature] Support 45tVL EP FP8 Infer. (#2909)
* support_mm_ep_fp8
* support_mm_ep
Name                      Last commit                                                                             Date
graph_optimization        [Executor] CUDA Graph support padding batch (#2844)                                    2025-07-15 19:49:01 -07:00
guided_decoding           support vl ori_vacab_size (#2900)                                                      2025-07-18 16:26:14 +08:00
layers                    remove cum_offsets from get_block_shape_and_split_kv_block (#2913)                     2025-07-18 16:13:32 +08:00
models                    [Feature] Support 45tVL EP FP8 Infer. (#2909)                                          2025-07-18 17:57:15 +08:00
ops                       refactor rl get_name_mappings_to_training (#2847)                                      2025-07-15 07:31:42 -07:00
__init__.py               [LLM] First commit the llm deployment code                                             2025-06-09 19:20:15 +08:00
forward_meta.py           [Inference, rename] remove padding_offsets from atten use batch_id_per_token (#2880)   2025-07-17 18:41:31 +08:00
load_weight_utils.py      [Feature][MTP] Support cacheKV transfer in per_chunk mode (#2890)                       2025-07-17 17:58:08 +08:00
model_loader.py           [vl]remove duplicated load logic (#2744)                                                2025-07-13 07:36:26 +08:00
pre_and_post_process.py   [Inference, rename] remove padding_offsets from atten use batch_id_per_token (#2880)   2025-07-17 18:41:31 +08:00