Sync v2.0 version of code to github repo

Author: Jiang-Jia-Jun
Date: 2025-06-29 23:29:37 +00:00
Parent: d151496038
Commit: 92c2cfa2e7
597 changed files with 78776 additions and 22905 deletions


@@ -1,98 +1,132 @@
# Offline Inference
## 1. Usage
FastDeploy supports offline inference by loading models locally and processing user data. Usage examples:
### Text Completion Interface (LLM.generate)
```python
from fastdeploy import LLM, SamplingParams
prompts = [
    "where is Beijing?",
    "Rewrite Li Bai's 'Quiet Night Thought' as a modern poem.",
    "Write me a poem about large language models.",
]
# Sampling parameters
sampling_params = SamplingParams(top_p=0.95, max_tokens=6400)
# Load model
llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
# Batch inference (internal request queuing and dynamic batching)
outputs = llm.generate(prompts, sampling_params)
# Output results
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs.text
```
### Chat Interface (LLM.chat)
```python
from fastdeploy import LLM, SamplingParams
msg1 = [
    {"role": "system", "content": "I'm a helpful AI assistant."},
    {"role": "user", "content": "Rewrite Li Bai's 'Quiet Night Thought' as a modern poem."},
]
msg2 = [
    {"role": "system", "content": "I'm a helpful AI assistant."},
    {"role": "user", "content": "Write me a poem about large language models."},
]
messages = [msg1, msg2]
# Sampling parameters
sampling_params = SamplingParams(top_p=0.95, max_tokens=6400)
# Load model
llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
# Batch inference (internal request queuing and dynamic batching)
outputs = llm.chat(messages, sampling_params)
# Output results
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs.text
```
Documentation for `SamplingParams`, `LLM.generate`, `LLM.chat`, and the output structure `RequestOutput` is provided below.
> Note: For the X1 model, the output additionally contains the reasoning content:
```python
# Output results
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs.text
    reasoning_text = output.outputs.reasoning_content
```
## 2. API Documentation
### 2.1 fastdeploy.LLM
* model(str): Model path
* max_model_len(int): Maximum context length for deployment (input + output), default 2048
* tensor_parallel_size(int): Number of GPUs used for tensor parallelism (TP), default 1
* block_size(int): Number of tokens per KV Cache management block; 64 is recommended, default 64
* max_num_seqs(int): Maximum batch size in the decode phase; requests beyond this limit wait in a queue, default 8
* gpu_memory_utilization(float): GPU memory utilization, default 0.9
* num_gpu_blocks_override(int): Manually set the number of pre-allocated KV Cache blocks; by default the service computes the available block count automatically at startup, default None
* max_num_batched_tokens(int): Maximum number of tokens per batch in the prefill phase, defaults to the same value as max_model_len; under high concurrency this parameter affects time-to-first-token, default None
* kv_cache_ratio(float): Fraction of the KV Cache allocated to input; recommended value = average input length / (average input length + average output length), default 0.75
* use_warmup(int): Whether to warm up at startup (maximum-length data is generated automatically for the warmup); used by default when the KV Cache size is computed automatically
* engine_worker_queue_port(int): Port used for inter-process communication inside the engine, default 8002
* enable_mm(bool): Enable multimodal inference, default False
For the full list of `LLM` configuration options, refer to the [Parameter Documentation](parameters.md).
> Configuration Notes:
> 1. `port` and `metrics_port` are only used for online inference.
> 2. After startup, the service logs the computed KV Cache block count (e.g. `Doing profile, the total_block_num:640`) in log/fastdeploy.log. Multiply this by block_size (default 64) to get the total number of tokens that can be cached.
> 3. Calculate `max_num_seqs` from the cacheable tokens. Example: with an average input of 800 tokens, an average output of 500 tokens, and 640 blocks of size 64, set `kv_cache_ratio = 800 / (800 + 500) ≈ 0.6` and `max_num_seqs = 640 * 64 / (800 + 500) ≈ 31`, as worked through in the sketch below.
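
The arithmetic in note 3 can be written out as a short helper. This is a plain-Python sketch with the example numbers from the note; `suggest_settings` is a hypothetical name, not a FastDeploy API:

```python
# Hypothetical helper: derive kv_cache_ratio and max_num_seqs from the profiled
# block count, following the sizing arithmetic in the notes above.
def suggest_settings(total_block_num: int, block_size: int,
                     avg_input_tokens: int, avg_output_tokens: int):
    cacheable_tokens = total_block_num * block_size            # 640 * 64 = 40960
    avg_request_tokens = avg_input_tokens + avg_output_tokens  # 800 + 500 = 1300
    kv_cache_ratio = avg_input_tokens / avg_request_tokens     # ~0.6
    max_num_seqs = cacheable_tokens // avg_request_tokens      # 31
    return kv_cache_ratio, max_num_seqs

print(suggest_settings(640, 64, 800, 500))  # (0.615..., 31)
```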
### 2.2 fastdeploy.LLM.generate
* prompts(str, list[str], list[int]): Input prompts; batched prompts and pre-tokenized token id lists are both supported (see the example below)
* sampling_params: See 2.4 for parameter details
* use_tqdm: Enable progress visualization
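
Based on the parameter list above, a minimal sketch of the accepted prompt forms (model name and settings follow the earlier examples; treat it as illustrative rather than a definitive API reference):

```python
from fastdeploy import LLM, SamplingParams

llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
sampling_params = SamplingParams(top_p=0.95, max_tokens=256)

# A single prompt string ...
outputs = llm.generate("where is Beijing?", sampling_params)
# ... or a batch of prompts, optionally with a progress bar.
outputs = llm.generate(
    ["where is Beijing?", "Write me a poem about large language models."],
    sampling_params,
    use_tqdm=True,
)
# Per the parameter list, pre-tokenized token ids (list[int]) are also accepted.
```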
### 2.3 fastdeploy.LLM.chat
* messages(list[dict],list[list[dict]]): Input messages (batch supported)
* sampling_params: See 2.4 for parameter details
* use_tqdm: Enable progress visualization
* chat_template_kwargs(dict): Extra parameters passed to the chat template (currently supports enable_thinking(bool)), as shown in the example below
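
A minimal sketch of `LLM.chat` with `chat_template_kwargs`, reusing the model from the earlier examples; `enable_thinking` only affects models that emit reasoning content (e.g. X1):

```python
from fastdeploy import LLM, SamplingParams

llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
sampling_params = SamplingParams(top_p=0.95, max_tokens=256)

messages = [
    {"role": "system", "content": "I'm a helpful AI assistant."},
    {"role": "user", "content": "Explain tensor parallelism in one paragraph."},
]
# chat_template_kwargs is forwarded to the chat template; enable_thinking toggles
# chain-of-thought style output on models that support it.
outputs = llm.chat(messages, sampling_params, chat_template_kwargs={"enable_thinking": True})
```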
### 2.4 fastdeploy.SamplingParams
* presence_penalty(float): Penalizes repeated topics; positive values reduce the chance of repeating content that has already appeared
* frequency_penalty(float): Penalizes repeated tokens; stricter than presence_penalty, punishing high-frequency repetition
* repetition_penalty(float): Direct penalty coefficient for repeated tokens (>1 penalizes repetition, <1 encourages it)
* temperature(float): Controls generation randomness; higher values are more random, lower values more deterministic
* top_p(float): Cumulative probability truncation threshold; only the most likely tokens whose cumulative probability reaches this value are considered
* max_tokens(int): Maximum number of tokens (input + output)
* min_tokens(int): Minimum number of tokens to generate before stopping, to avoid ending too early
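
A construction sketch covering the fields listed above; the values are illustrative, not recommended defaults:

```python
from fastdeploy import SamplingParams

sampling_params = SamplingParams(
    temperature=0.8,         # higher -> more random output
    top_p=0.95,              # cumulative-probability truncation threshold
    presence_penalty=0.0,    # >0 discourages repeating topics already mentioned
    frequency_penalty=0.0,   # penalizes high-frequency token repetition
    repetition_penalty=1.0,  # >1 penalizes repeated tokens, <1 encourages them
    max_tokens=6400,
    min_tokens=1,
)
```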
### 2.5 fastdeploy.engine.request.RequestOutput
* request_id(str): Request identifier
* prompt(str): Input request content
* prompt_token_ids(list[int]): Token ids of the tokenized input (after template concatenation)
* outputs(fastdeploy.engine.request.CompletionOutput): Generation results
* finished(bool): Whether inference for this request has finished
* metrics(fastdeploy.engine.request.RequestMetrics): Inference timing metrics
* num_cached_tokens(int): Number of cached tokens (only valid when `enable_prefix_caching` is enabled)
* error_code(int): Error code
* error_msg(str): Error message
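
A short sketch of reading the `RequestOutput` fields above after a `generate` call (same setup as the earlier examples; the error fields are only meaningful when a request fails):

```python
from fastdeploy import LLM, SamplingParams

llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
outputs = llm.generate(["where is Beijing?"], SamplingParams(top_p=0.95, max_tokens=128))

for output in outputs:
    print(output.request_id, output.finished)
    print(output.prompt, output.prompt_token_ids[:8])
    print(output.outputs.text)
    print(output.num_cached_tokens)  # meaningful only when prefix caching is enabled
```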
### 2.6 fastdeploy.engine.request.CompletionOutput
* index(int): Batch index
* send_idx(int): Request token index
* token_ids(list[int]): Output tokens
* text(str): Decoded text
* reasoning_content(str): (X1 model only) Chain-of-thought output
### 2.7 fastdeploy.engine.request.RequestMetrics
* arrival_time(float): Time the request was received; for streaming output this is the time the inference result was obtained, for non-streaming output the time the request data was received
* inference_start_time(float): Time at which inference started
* first_token_time(float): First-token latency on the inference side
* time_in_queue(float): Time spent queuing before inference
* model_forward_time(float): Time spent in the model forward pass on the inference side
* model_execute_time(float): Total model execution time, including the forward pass, queuing, and preprocessing (text concatenation and tokenization)
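
A sketch of pulling simple latency numbers out of `RequestMetrics`; which fields are durations versus timestamps is inferred from the descriptions above and should be verified against your FastDeploy version:

```python
from fastdeploy import LLM, SamplingParams

llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
outputs = llm.generate(["where is Beijing?"], SamplingParams(top_p=0.95, max_tokens=128))

m = outputs[0].metrics
print("time in queue       :", m.time_in_queue)
print("first token latency :", m.first_token_time)
print("model forward time  :", m.model_forward_time)
print("model execute time  :", m.model_execute_time)
# arrival_time and inference_start_time read as timestamps; their difference
# approximates the delay between receiving the request and starting inference.
print("pre-inference delay :", m.inference_start_time - m.arrival_time)
```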