[简体中文](../zh/usage/log.md)

# Log Description

FastDeploy generates the following log files during deployment. Below is an explanation of each log's purpose.

By default, logs are stored in the `log` directory under the execution path. To specify a custom directory, set the environment variable `FD_LOG_DIR`.
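As a minimal sketch (assuming `FD_LOG_DIR` is read when the FastDeploy process starts), the custom directory can be prepared and exported from Python before the engine or API server is launched:

```python
import os

# Illustrative custom log directory; adjust to your environment.
log_dir = "/var/log/fastdeploy"
os.makedirs(log_dir, exist_ok=True)

# Assumption: FD_LOG_DIR must be visible to the FastDeploy process before it
# starts, so set it prior to launching the engine or API server.
os.environ["FD_LOG_DIR"] = log_dir
```

Exporting the variable in the shell session that launches the service has the same effect.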
## Inference Service Logs

* `backup_env.*.json` : Records environment variables set during instance startup. The number of files matches the number of GPU cards.
* `envlog.*` : Logs environment variables set during instance startup. The number of files matches the number of GPU cards.
* `console.log` : Records model startup time and other information. This log is also printed to the console.
* `data_processor.log` : Logs input/output data encoding and decoding details.
* `fastdeploy.log` : Records configuration information during instance startup, as well as request and response details during runtime.
* `workerlog.*` : Tracks model loading progress and inference operator errors. Each GPU card has a corresponding file.
* `worker_process.log` : Logs engine inference data for each iteration.
* `cache_manager.log` : Records KV Cache logical index allocation for each request and cache hit status.
* `launch_worker.log` : Logs model startup information and error messages.
* `gpu_worker.log` : Records KV Cache block count information during profiling.
* `gpu_model_runner.log` : Contains model details and loading time.
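When debugging operator or startup failures, the per-card `workerlog.*` files listed above are usually the first place to look. The sketch below scans them under the log directory; it assumes the default `log` directory (or `FD_LOG_DIR`) and that error lines contain the text `ERROR`, which may need adjusting for your setup.

```python
import glob
import os

# Default log directory is ./log unless FD_LOG_DIR points elsewhere.
log_dir = os.environ.get("FD_LOG_DIR", "log")

# Assumption: error lines contain the marker "ERROR"; adjust as needed.
for path in sorted(glob.glob(os.path.join(log_dir, "workerlog.*"))):
    with open(path, "r", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            if "ERROR" in line:
                print(f"{path}:{lineno}: {line.rstrip()}")
```

The same pattern works for any of the other log files described in this document.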
## Online Inference Client Logs

* `api_server.log` : Logs startup parameters and received request information.
## Scheduler Logs

* `scheduler.log` : Records scheduler information, including node status and request allocation details.
## Speculative Decoding Logs

* `speculate.log` : Contains speculative decoding-related information.
## Prefix Caching Logs

* `cache_queue_manager.log` : Logs startup parameters and received request information.
* `cache_transfer_manager.log` : Logs startup parameters and received request information.
* `launch_cache_manager.log` : Records cache transfer startup parameters and error messages.
## PD Disaggregation Logs

* `cache_messager.log` : Logs transmission protocols and messages used by the P instance.
* `splitwise_connector.log` : Records data received from P/D instances and connection establishment details.
## CudaGraph Logs

* `cudagraph_piecewise_backend.log` : Logs CudaGraph startup and error information.