Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-12-24 13:28:13 +08:00
* add start intercept
* adjust GraphOptConfig
* pre-commit
* default use cudagraph
* set default value
* default use cuda graph
* pre-commit
* fix test case bug
* disable rl
* fix moba attention
* only support gpu
* temporarily disable PD Disaggregation
* set max_num_seqs of test case as 1
* set max_num_seqs and temperature
* fix max_num_batched_tokens bug
* close cuda graph
* successfully run wint2
* profile run with max_num_batched_tokens
* 1. add C++ memchecker 2. successfully run wint2
* update a800 yaml
* update docs
* 1. delete check 2. fix plas attn test case
* default use use_unique_memory_pool
* add try-except for warmup
* ban mtp, mm, rl
* fix test case mock
* fix ci bug
* fix form_model_get_output_topp0 bug
* fix ci bug
* refine deepseek ci
* refine code
* disable PD
* fix sot yaml
11 lines
252 B
YAML
reasoning-parser: ernie_x1
tool_call_parser: ernie_x1
tensor_parallel_size: 4
max_model_len: 65536
max_num_seqs: 128
enable_prefix_caching: True
enable_chunked_prefill: True
gpu_memory_utilization: 0.85
graph_optimization_config:
  use_cudagraph: True
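For reference, a minimal sketch of parsing this configuration with PyYAML and inspecting the CUDA graph setting. The inline string simply mirrors the file contents above; how FastDeploy itself consumes the file is not shown here, so treat this only as an illustration of the file's structure (note that `use_cudagraph` is nested under `graph_optimization_config`).

```python
import yaml  # PyYAML; assumed available in the environment

# Inline copy of the config file shown above, used in place of
# reading it from disk.
CONFIG_TEXT = """
reasoning-parser: ernie_x1
tool_call_parser: ernie_x1
tensor_parallel_size: 4
max_model_len: 65536
max_num_seqs: 128
enable_prefix_caching: True
enable_chunked_prefill: True
gpu_memory_utilization: 0.85
graph_optimization_config:
  use_cudagraph: True
"""

config = yaml.safe_load(CONFIG_TEXT)

# YAML 1.1 parses "True" as a boolean, so these come back typed.
print(config["tensor_parallel_size"])                        # 4
print(config["graph_optimization_config"]["use_cudagraph"])  # True
```

This kind of round-trip check is a quick way to confirm the nesting is correct before deploying: a flat `use_cudagraph: True` at the top level would silently differ from the nested form the config uses.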