Compare commits

..

9 Commits

Author SHA1 Message Date
ltd0924
e42dc8c694 [BUGFIX] clear request (#4320)
Co-authored-by: ltd0924 <luotingdan@baidu.com>
2025-09-29 20:37:58 +08:00
chen
63a03ee152 [feature]2.2 custom_allreduce support cudagraph recapture (#4307)
* custom_allreduce support cudagraph recapture

* delete code

* add shut_down/restart default group
2025-09-29 18:14:21 +08:00
kxz2002
9cc2c99539 initial commit (#4304)
2025-09-29 11:21:57 +08:00
luukunn
31e32b5821 [fix]remove reasoning_max_tokens=max_toksns*0.8 in sampling_params (#4294)
* [fix]Modify follow-up push parameters and Modify the verification method for thinking length (#4086)

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* add completion_token_ids

* add logger

* fix reasoning_max_tokens ParameterError

* add unittest

* add unittest

* add unittest

* add unittest

* add unittest

* add unit test

* fix

* [fix]update apply_chat_template (#4137)

* update apply_chat_template

* fix unittest

* fix unittest

* fix

* fix

* fix unit test

* fix

* fix unit test

* add unit test

* fix reasoning_max_tokens
2025-09-28 14:44:54 +08:00
luukunn
aebe12a58d [fix]update apply_chat_template (#4249)
* [fix]Modify follow-up push parameters and Modify the verification method for thinking length (#4086)

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* Rename the continued-generation parameter generated_token_ids to completion_token_ids; change how the thinking length is validated

* add completion_token_ids

* add logger

* fix reasoning_max_tokens ParameterError

* add unittest

* add unittest

* add unittest

* add unittest

* add unittest

* add unit test

* fix

* [fix]update apply_chat_template (#4137)

* update apply_chat_template

* fix unittest

* fix unittest

* fix

* fix

* fix unit test

* fix

* fix unit test

* add unit test
2025-09-25 16:41:56 +08:00
chen
8fdb950e9f include_stop_str_in_output=False not return eos text (#4231)
2025-09-24 14:07:30 +08:00
Zhong Hui
a460462d2a fix ernie vl distributed attr. (#4217)
2025-09-23 19:37:38 +08:00
李泳桦
cb8d87b945 [fix] fix clearing caches synchronization and add more logs (#4212)
* [fix] fix clearing caches synchronization and add more logs

* [chore] print cache_ready_signal in log
2025-09-23 19:36:38 +08:00
ltd0924
de4feff147 [Feature]CP support data clear (#4214)
* Update serving_chat.py

* Update serving_completion.py

* Update serving_completion.py

* mv connection_manager init

* [BugFix] fix kv cache

* fix format

* [Feature] support clear data

---------

Co-authored-by: Yuanle Liu <yuanlehome@163.com>
Co-authored-by: RAM <gstian5555@outlook.com>
2025-09-23 16:53:39 +08:00
29 changed files with 306 additions and 154 deletions

View File

@@ -616,6 +616,8 @@ int64_t open_mem_handle(paddle::Tensor& mem_handle);
void free_shared_buffer(int64_t buffer);
void clear_ipc_handles(int64_t _fa);
// speculative decoding Kernel
std::vector<paddle::Tensor> SpeculateGetPaddingOffset(
const paddle::Tensor& input_ids,
@@ -1204,6 +1206,8 @@ PYBIND11_MODULE(fastdeploy_ops, m) {
m.def("free_shared_buffer", &free_shared_buffer, "free_shared_buffer");
m.def("clear_ipc_handles", &clear_ipc_handles, "clear_ipc_handles");
m.def("open_mem_handle", &open_mem_handle, "open_mem_handle");
m.def("get_graph_buffer_ipc_meta", &get_graph_buffer_ipc_meta, "get_graph_buffer_ipc_meta");

View File

@@ -122,10 +122,14 @@ void register_graph_buffers(fptr_t _fa,
for (int i = 0; i < handles.size(); i++) {
bytes.emplace_back(handles[i].begin(), handles[i].end());
}
bytes.reserve(handles.size());
fa->register_graph_buffers(bytes, offsets);
}
void clear_ipc_handles(fptr_t _fa) {
auto fa = reinterpret_cast<paddle::CustomAllreduce*>(_fa);
fa->clear_ipc_handles();
}
std::tuple<fptr_t, paddle::Tensor> allocate_shared_buffer_and_handle(
int64_t size) {

View File

@@ -517,10 +517,15 @@ class CustomAllreduce {
#undef KL
}
~CustomAllreduce() {
void clear_ipc_handles(){
for (auto [_, ptr] : ipc_handles_) {
CUDACHECK(cudaIpcCloseMemHandle(ptr));
}
ipc_handles_.clear();
}
~CustomAllreduce() {
clear_ipc_handles();
}
};
} // namespace paddle

View File

@@ -201,12 +201,12 @@ class CacheTransferManager:
def _init_gpu_cache(self, args):
if not args.create_cache_tensor:
logger.info("Waiting for runners to create kv cache.")
logger.info(f"[rank {self.rank}/{self.n_ranks}] Waiting for runners to create kv cache.")
while self.cache_ready_signal.value[self.rank] != 1:
time.sleep(1)
logger.info("OK! Stop waiting.")
time.sleep(0.1)
logger.info(f"[rank {self.rank}/{self.n_ranks}] OK! Stop waiting.")
logger.info("Initializing kv cache for all layers.")
logger.info(f"[rank {self.rank}/{self.n_ranks}] Initializing kv cache for all layers.")
paddle.set_device(f"gpu:{self.device}")
for i in range(args.num_layers + self.num_extra_layers):
num_gpu_blocks = args.num_gpu_blocks if i < args.num_layers else self.num_extra_layer_gpu_blocks
@@ -215,13 +215,13 @@ class CacheTransferManager:
val_name = f"value_caches_{i}_rank{self.rank}.device{self.device}"
if args.create_cache_tensor:
logger.info(f"..creating kv cache for layer {i}: {cache_shape}")
logger.info(f"[rank {self.rank}/{self.n_ranks}] ..creating kv cache for layer {i}: {cache_shape}")
key_cache = paddle.full(shape=cache_shape, fill_value=0, dtype=args.cache_dtype)
val_cache = paddle.full(shape=cache_shape, fill_value=0, dtype=args.cache_dtype)
set_data_ipc(key_cache, key_name)
set_data_ipc(val_cache, val_name)
else:
logger.info(f"..attaching kv cache for layer {i}: {cache_shape}")
logger.info(f"[rank {self.rank}/{self.n_ranks}] ..attaching kv cache for layer {i}: {cache_shape}")
key_cache = paddle.empty(shape=[], dtype=args.cache_dtype)
val_cache = paddle.empty(shape=[], dtype=args.cache_dtype)
key_cache = share_external_data(key_cache, key_name, cache_shape)
@@ -233,20 +233,22 @@ class CacheTransferManager:
self.gpu_cache_v_tensors.append(self.gpu_cache_kvs[val_name])
if args.create_cache_tensor:
logger.info("✅ kv cache is ready!")
logger.info("[rank {self.rank}/{self.n_ranks}] ✅ kv cache is ready!")
self.cache_ready_signal.value[self.rank] = 1
cache_kv_size_byte = sum([tmp.numel() * 1 for key, tmp in self.gpu_cache_kvs.items()])
logger.info(f"device :{self.device}")
logger.info(f"cache_kv_size_byte : {cache_kv_size_byte}")
logger.info(f"done init cache (full) gmem alloc : {paddle.device.cuda.memory_allocated()}")
logger.info(f"[rank {self.rank}/{self.n_ranks}] device :{self.device}")
logger.info(f"[rank {self.rank}/{self.n_ranks}] cache_kv_size_byte : {cache_kv_size_byte}")
logger.info(
f"[rank {self.rank}/{self.n_ranks}] done init cache (full) gmem alloc : {paddle.device.cuda.memory_allocated()}"
)
def _init_cpu_cache(self, args):
if args.num_cpu_blocks == 0:
logger.info("💡 no swap space (cpu cache) is specified.")
logger.info(f"[rank {self.rank}/{self.n_ranks}] 💡 no swap space (cpu cache) is specified.")
self.swap_space_ready_signal.value[self.rank] = 1
return
logger.info("Initializing swap space (cpu cache) for all layers.")
logger.info(f"[rank {self.rank}/{self.n_ranks}] Initializing swap space (cpu cache) for all layers.")
paddle.set_device("cpu")
self.k_dst_ptrs = []
self.v_dst_ptrs = []
@@ -254,12 +256,14 @@ class CacheTransferManager:
key_name = f"key_caches_{i}_rank{self.rank}"
val_name = f"value_caches_{i}_rank{self.rank}"
need_to_allocate_bytes = args.num_cpu_blocks * args.bytes_per_layer_per_block
logger.info(f"..creating cpu cache for layer {i}: {2 * need_to_allocate_bytes / 1024 ** 3:.2f}GB")
logger.info(
f"[rank {self.rank}/{self.n_ranks}] ..creating cpu cache for layer {i}: {2 * need_to_allocate_bytes / 1024 ** 3:.2f}GB"
)
self.cpu_cache_kvs[key_name] = cuda_host_alloc(need_to_allocate_bytes)
self.k_dst_ptrs.append(self.cpu_cache_kvs[key_name])
self.cpu_cache_kvs[val_name] = cuda_host_alloc(need_to_allocate_bytes)
self.v_dst_ptrs.append(self.cpu_cache_kvs[val_name])
logger.info("✅ swap space (cpu cache) is ready!")
logger.info(f"[rank {self.rank}/{self.n_ranks}] ✅ swap space (cpu cache) is ready!")
self.swap_space_ready_signal.value[self.rank] = 1
def _do_swap_to_cpu_task(
@@ -473,6 +477,10 @@ class CacheTransferManager:
while True:
if kv_cache_status_signal.value[0] == KVCacheStatus.CLEARING:
try:
logger.info(
f"[rank {self.rank}/{self.n_ranks}] Start clearing caches {self.cache_ready_signal.value}"
)
# clear cpu caches
if envs.FD_ENABLE_SWAP_SPACE_CLEARING:
paddle.set_device("cpu")
for ptrs in self.k_dst_ptrs + self.v_dst_ptrs:
@@ -486,37 +494,58 @@ class CacheTransferManager:
while np.sum(self.swap_space_ready_signal.value) != 0:
time.sleep(0.1)
# clear gpu caches
paddle.set_device(f"gpu:{self.device}")
for name, tensor in self.gpu_cache_kvs.items():
unset_data_ipc(tensor, name, True, False)
self.gpu_cache_kvs.clear()
self.gpu_cache_k_tensors.clear()
self.gpu_cache_v_tensors.clear()
# reset cache_ready_signal
self.cache_ready_signal.value[self.rank] = 0
if np.sum(self.cache_ready_signal.value) == 0:
logger.info(
f"[rank {self.rank}/{self.n_ranks}] Finish clearing caches {self.cache_ready_signal.value}"
)
# wait for all ranks caches to be cleared
if np.sum(self.cache_ready_signal.value) != 0:
time.sleep(0.1)
# reset kv_cache_status_signal
kv_cache_status_signal.value[0] = KVCacheStatus.CLEARED
logger.info("All ranks finish clearing caches")
except Exception as e:
logger.error(f"Failed to clear caches: {e}")
logger.error(f"[rank {self.rank}/{self.n_ranks}] Failed to clear caches: {e}")
elif kv_cache_status_signal.value[0] == KVCacheStatus.UPDATING:
try:
logger.info(
f"[rank {self.rank}/{self.n_ranks}] Start restoring caches {self.cache_ready_signal.value}"
)
# restore cpu cache
if envs.FD_ENABLE_SWAP_SPACE_CLEARING:
self._init_cpu_cache(args)
while np.sum(self.swap_space_ready_signal.value) != args.mp_num:
time.sleep(0.1)
# restore gpu cache and set cache_ready_signal
self._init_gpu_cache(args)
logger.info(
f"[rank {self.rank}/{self.n_ranks}] Finish restoring caches {self.cache_ready_signal.value}"
)
# wait for all ranks caches to be ready
while np.sum(self.cache_ready_signal.value) != args.mp_num:
time.sleep(0.1)
# set kv_cache_status_signal
logger.info("All ranks finish restoring caches")
kv_cache_status_signal.value[0] = KVCacheStatus.NORMAL
except Exception as e:
logger.error(f"Failed to restore caches: {e}")
logger.error(f"[rank {self.rank}/{self.n_ranks}] Failed to restore caches: {e}")
time.sleep(0.1)
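
The clearing and restoring logic above synchronises ranks through a shared cache_ready_signal array: each rank flips its own slot, then polls the sum until every rank agrees before moving on. A minimal numpy sketch of that handshake follows (the real signal lives in shared IPC memory; names here are illustrative):

# Stand-alone sketch of the per-rank ready-signal handshake used above.
import time

import numpy as np


def clear_rank_cache(cache_ready_signal: np.ndarray, rank: int) -> None:
    # Mark this rank's cache as cleared (1 = ready, 0 = cleared).
    cache_ready_signal[rank] = 0


def wait_until_all_cleared(cache_ready_signal: np.ndarray, poll_s: float = 0.1) -> None:
    # Every rank runs this loop; it only exits once all slots read 0.
    while np.sum(cache_ready_signal) != 0:
        time.sleep(poll_s)


if __name__ == "__main__":
    signal = np.ones(4, dtype=np.int32)  # 4 ranks, all caches initially ready
    for rank in range(4):
        clear_rank_cache(signal, rank)
    wait_until_all_cleared(signal)
    print("all ranks cleared:", signal.tolist())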

View File

@@ -42,6 +42,12 @@ def use_custom_allreduce(custom_all_reduce_max_bytes: int = 8192 * 1024):
_TP_AR = CustomAllreduce(model_parallel_group, custom_all_reduce_max_bytes)
def custom_ar_clear_ipc_handles():
global _TP_AR
if _TP_AR is not None:
_TP_AR.clear_ipc_handles()
try:
@paddle.jit.marker.unified

View File

@@ -25,6 +25,7 @@ from paddle.distributed.communication.group import Group
from fastdeploy.distributed.custom_all_reduce import cuda_wrapper
from fastdeploy.model_executor.ops.gpu import (
all_reduce,
clear_ipc_handles,
dispose,
get_graph_buffer_ipc_meta,
init_custom_all_reduce,
@@ -220,6 +221,9 @@ class CustomAllreduce:
else:
return self.all_reduce(input, input, registered=False)
def clear_ipc_handles(self):
clear_ipc_handles(self._ptr)
def close(self):
if self._ptr:
dispose(self._ptr)

View File

@@ -801,6 +801,19 @@ class EngineSevice:
def check_and_free_block_tables(self):
self.resource_manager.check_and_free_block_tables()
def clear_data(self):
try:
llm_logger.info("Clear Data: Start")
self.token_processor.clear_data()
self.engine_worker_queue.clear_data()
self.send_response_server.req_dict.clear()
self.recv_request_server.req_dict.clear()
llm_logger.info("Clear Data: Successfully")
return True
except Exception as e:
llm_logger.error(f"Clear data error: {e}")
return False
def _exit_sub_services(self):
"""
exit sub services

View File

@@ -222,7 +222,9 @@ class LLMEngine:
if sampling_params is not None:
request.sampling_params = sampling_params
request.preprocess_start_time = time.time()
chat_template_kwargs = kwargs.get("chat_template_kwargs") or {}
chat_template_kwargs["chat_template"] = kwargs.get("chat_template")
kwargs["chat_template_kwargs"] = chat_template_kwargs
request = self.data_processor.process_request(request, self.cfg.max_model_len, **kwargs)
request.prompt_token_ids_len = len(request.prompt_token_ids)
request.need_prefill_tokens = request.prompt_token_ids_len
@@ -234,9 +236,6 @@ class LLMEngine:
request.get("max_tokens"),
),
)
if request.get("reasoning_max_tokens") is None:
default_reasoning_max_tokens = max(int(request.get("max_tokens") * 0.8), 1)
request.set("reasoning_max_tokens", default_reasoning_max_tokens)
min_tokens = request.get("min_tokens")
if input_ids_len + min_tokens >= self.cfg.max_model_len:
error_msg = (
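
The hunks above fold the top-level chat_template into chat_template_kwargs before the data processor runs, and drop the implicit reasoning_max_tokens default of max(int(max_tokens * 0.8), 1). A minimal, self-contained sketch of the folding (the helper name is illustrative, not part of FastDeploy):

def fold_chat_template_kwargs(kwargs: dict) -> dict:
    # Merge the top-level chat_template into chat_template_kwargs, creating the
    # dict when it is absent or None, as in the engine change above.
    chat_template_kwargs = kwargs.get("chat_template_kwargs") or {}
    chat_template_kwargs["chat_template"] = kwargs.get("chat_template")
    kwargs["chat_template_kwargs"] = chat_template_kwargs
    return kwargs


if __name__ == "__main__":
    merged = fold_chat_template_kwargs(
        {"chat_template": "Hello", "chat_template_kwargs": {"enable_thinking": True}}
    )
    assert merged["chat_template_kwargs"] == {"enable_thinking": True, "chat_template": "Hello"}
    print(merged["chat_template_kwargs"])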

View File

@@ -159,8 +159,6 @@ class SamplingParams:
def __post_init__(self):
if self.seed is None:
self.seed = random.randint(0, 922337203685477580)
if self.max_tokens is not None and self.reasoning_max_tokens is None:
self.reasoning_max_tokens = max(int(self.max_tokens * 0.8), 1)
self._verify_args()
def _verify_args(self) -> None:

View File

@@ -512,6 +512,10 @@ class ResourceManagerV1(ResourceManager):
def finish_requests_async(self, request_ids: Union[str, Iterable[str]]):
return self.finish_execution_pool.submit(self.finish_requests, request_ids)
def clear_data(self):
self.waiting: deque[Request] = deque()
self.to_be_rescheduled_request_id_set = set()
def finish_requests(self, request_ids: Union[str, Iterable[str]]):
llm_logger.info(f"recycle resources for requests: {request_ids}")
try:

View File

@@ -65,6 +65,8 @@ class EngineClient:
enable_prefix_caching=None,
splitwise_role=None,
):
import fastdeploy.model_executor.models # noqa: F401
architectures = ModelConfig({"model": model_name_or_path}).architectures[0]
if MultimodalRegistry.contains_model(architectures):
self.enable_mm = True
@@ -139,6 +141,9 @@ class EngineClient:
self.zmq_client = ZmqIpcClient(model, mode)
self.zmq_client.connect()
def check_model_weight_status(self):
return self.model_weights_status_signal.value[0] < 0
async def format_and_add_data(self, prompts: dict):
"""
Format the request data and send the request to the server.
@@ -167,6 +172,9 @@ class EngineClient:
task["preprocess_start_time"] = time.time()
try:
chat_template_kwargs = task.get("chat_template_kwargs", {})
chat_template_kwargs.update({"chat_template": task.get("chat_template"), "tools": task.get("tools")})
task["chat_template_kwargs"] = chat_template_kwargs
if inspect.iscoroutinefunction(self.data_processor.process_request_dict):
await self.data_processor.process_request_dict(task, self.max_model_len)
else:

View File

@@ -480,6 +480,7 @@ def reset_scheduler():
if llm_engine is None:
return Response("Engine not loaded", status_code=500)
llm_engine.engine.clear_data()
llm_engine.engine.scheduler.reset()
return Response("Scheduler Reset Successfully", status_code=200)
@@ -498,6 +499,7 @@ def control_scheduler(request: ControlSchedulerRequest):
return JSONResponse(content=content.model_dump(), status_code=500)
if request.reset:
llm_engine.engine.clear_data()
llm_engine.engine.scheduler.reset()
if request.load_shards_num or request.reallocate_shard:

View File

@@ -210,6 +210,8 @@ class OpenAIServingChat:
decoder_base_url=self.tokenizer_base_url,
)
while num_choices > 0:
if self.engine_client.check_model_weight_status():
raise ValueError("Engine is clearing model weight")
try:
response = await asyncio.wait_for(response_queue.get(), timeout=10)
current_waiting_time = 0
@@ -425,6 +427,8 @@ class OpenAIServingChat:
decoder_base_url=self.tokenizer_base_url,
)
while True:
if self.engine_client.check_model_weight_status():
raise ValueError("Engine is clearing model weight")
try:
response = await asyncio.wait_for(response_queue.get(), timeout=10)
current_waiting_time = 0

View File

@@ -216,6 +216,8 @@ class OpenAIServingCompletion:
completion_batched_token_ids = [[] for _ in range(num_choices)]
current_waiting_time = 0
while num_choices > 0:
if self.engine_client.check_model_weight_status():
raise ValueError("Engine is clearing model weight")
try:
response = await asyncio.wait_for(response_queue.get(), timeout=10)
current_waiting_time = 0
@@ -333,6 +335,8 @@ class OpenAIServingCompletion:
)
current_waiting_time = 0
while num_choices > 0:
if self.engine_client.check_model_weight_status():
raise ValueError("Engine is clearing model weight")
try:
response = await asyncio.wait_for(response_queue.get(), timeout=10)
current_waiting_time = 0
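
Both completion streaming loops above gain an early-abort guard that raises while the engine is clearing model weights. A self-contained asyncio sketch of the pattern, assuming a stub client and a None sentinel to end the stream (only check_model_weight_status and the asyncio.wait_for shape come from the diff):

import asyncio


class FakeEngineClient:
    def __init__(self) -> None:
        self.model_weights_status = 0  # < 0 means weights are being cleared

    def check_model_weight_status(self) -> bool:
        return self.model_weights_status < 0


async def drain_responses(engine_client: FakeEngineClient, response_queue: asyncio.Queue) -> list:
    outputs = []
    while True:
        # Abort immediately if a weight clear started mid-stream.
        if engine_client.check_model_weight_status():
            raise ValueError("Engine is clearing model weight")
        try:
            response = await asyncio.wait_for(response_queue.get(), timeout=10)
        except asyncio.TimeoutError:
            continue
        if response is None:  # sentinel: stream finished
            return outputs
        outputs.append(response)


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for item in ("a", "b", None):
        queue.put_nowait(item)
    print(await drain_responses(FakeEngineClient(), queue))  # ['a', 'b']


if __name__ == "__main__":
    asyncio.run(main())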

View File

@@ -88,7 +88,6 @@ class Ernie4_5Processor(BaseDataProcessor):
str: error message
"""
data_processor_logger.info(f"Start processing request: {request}")
request.chat_template = kwargs.get("chat_template")
request = self._apply_default_parameters(request)
if request.get("eos_token_ids") is None or len(request.eos_token_ids) == 0:
request.eos_token_ids = self.eos_token_ids
@@ -127,7 +126,7 @@ class Ernie4_5Processor(BaseDataProcessor):
)
elif request.messages is not None:
task = request.to_dict()
chat_template_kwargs = kwargs.get("chat_template_kwargs")
chat_template_kwargs = kwargs.get("chat_template_kwargs", {})
if chat_template_kwargs:
if isinstance(chat_template_kwargs, dict):
for k, v in chat_template_kwargs.items():
@@ -135,7 +134,7 @@ class Ernie4_5Processor(BaseDataProcessor):
task[k] = v
else:
raise ValueError("Invalid input: chat_template_kwargs must be a dict")
request.prompt_token_ids = self.messages2ids(task)
request.prompt_token_ids = self.messages2ids(task, **chat_template_kwargs)
else:
raise ValueError(f"The request should have `prompt_token_ids`, `prompt` or `messages`: {request}.")
@@ -205,7 +204,7 @@ class Ernie4_5Processor(BaseDataProcessor):
req_id = request.get("request_id", None)
data_processor_logger.info(f"req_id:{req_id}, tokens:{tokens}, token_ids: {token_ids}")
elif request.get("messages"):
chat_template_kwargs = request.get("chat_template_kwargs")
chat_template_kwargs = request.get("chat_template_kwargs", {})
if chat_template_kwargs:
if isinstance(chat_template_kwargs, dict):
for k, v in chat_template_kwargs.items():
@@ -213,7 +212,7 @@ class Ernie4_5Processor(BaseDataProcessor):
request[k] = v
else:
raise ValueError("Invalid input: chat_template_kwargs must be a dict")
request["prompt_token_ids"] = self.messages2ids(request)
request["prompt_token_ids"] = self.messages2ids(request, **chat_template_kwargs)
else:
raise ValueError(f"Request must contain 'prompt_token_ids', 'prompt', or 'messages': {request}")
@@ -379,7 +378,7 @@ class Ernie4_5Processor(BaseDataProcessor):
del self.tool_parser_dict[req_id]
return response_dict
def messages2ids(self, request_or_messages):
def messages2ids(self, request_or_messages, **kwargs):
"""
Convert multi-turn messages into ID sequences.
@@ -397,7 +396,7 @@ class Ernie4_5Processor(BaseDataProcessor):
tokenize=False,
split_special_tokens=False,
add_special_tokens=False,
chat_template=request_or_messages.get("chat_template", None),
**kwargs,
)
request_or_messages["text_after_process"] = spliced_message
req_id = None

View File

@@ -113,7 +113,6 @@ class Ernie4_5_VLProcessor(Ernie4_5Processor):
def process_request(self, request, max_model_len=None, **kwargs):
"""process the input data"""
request.chat_template = kwargs.get("chat_template")
task = request.to_dict()
task["chat_template_kwargs"] = kwargs.get("chat_template_kwargs")
self.process_request_dict(task, max_model_len)

View File

@@ -250,8 +250,8 @@ class DataProcessor:
"video",
]:
image_message_list.append(item)
prompt_token_ids = self.apply_chat_template(request)
chat_template_kwargs = request.get("chat_template_kwargs", {})
prompt_token_ids = self.apply_chat_template(request, **chat_template_kwargs)
if len(prompt_token_ids) == 0:
raise ValueError("Invalid input: prompt_token_ids must be a non-empty sequence of token IDs")
image_start_index = 0
@@ -480,7 +480,7 @@ class DataProcessor:
break
self.tokenizer = Ernie4_5Tokenizer.from_pretrained(self.model_name_or_path)
def apply_chat_template(self, request):
def apply_chat_template(self, request, **kwargs):
"""
Convert multi-turn messages into ID sequences.
@@ -498,7 +498,7 @@ class DataProcessor:
request,
tokenize=False,
add_generation_prompt=request.get("add_generation_prompt", True),
chat_template=request.get("chat_template", None),
**kwargs,
)
prompt_token_str = prompt_token_template.replace("<|image@placeholder|>", "").replace(
"<|video@placeholder|>", ""

View File

@@ -185,6 +185,9 @@ class DataProcessor(BaseDataProcessor):
from paddleformers.trl.llm_utils import get_eos_token_id
self.eos_token_ids = get_eos_token_id(self.tokenizer, self.generation_config)
data_processor_logger.info(
f"The eos_token_ids obtained by merging tokenizer and generation_config is {self.eos_token_ids}"
)
self.eos_token_id_len = len(self.eos_token_ids)
self.pad_token_id = self.get_pad_id()
self.reasoning_parser = None
@@ -205,7 +208,6 @@ class DataProcessor(BaseDataProcessor):
str: error message
"""
data_processor_logger.info(f"Start processing request: {request}")
request.chat_template = kwargs.get("chat_template")
request = self._apply_default_parameters(request)
if request.get("eos_token_ids") is None or len(request.eos_token_ids) == 0:
request.eos_token_ids = self.eos_token_ids
@@ -239,7 +241,7 @@ class DataProcessor(BaseDataProcessor):
if self.tokenizer.chat_template is None:
raise ValueError("This model does not support chat_template.")
task = request.to_dict()
chat_template_kwargs = kwargs.get("chat_template_kwargs")
chat_template_kwargs = kwargs.get("chat_template_kwargs", {})
if chat_template_kwargs:
if isinstance(chat_template_kwargs, dict):
for k, v in chat_template_kwargs.items():
@@ -248,7 +250,7 @@ class DataProcessor(BaseDataProcessor):
else:
raise ValueError("Invalid input: chat_template_kwargs must be a dict")
task.setdefault("enable_thinking", True)
request.prompt_token_ids = self.messages2ids(task)
request.prompt_token_ids = self.messages2ids(task, **chat_template_kwargs)
else:
raise ValueError(f"The request should have `input_ids`, `text` or `messages`: {request}.")
@@ -313,7 +315,7 @@ class DataProcessor(BaseDataProcessor):
elif request.get("messages"):
if self.tokenizer.chat_template is None:
raise ValueError("This model does not support chat_template.")
chat_template_kwargs = request.get("chat_template_kwargs")
chat_template_kwargs = request.get("chat_template_kwargs", {})
if chat_template_kwargs:
if isinstance(chat_template_kwargs, dict):
for k, v in chat_template_kwargs.items():
@@ -322,7 +324,7 @@ class DataProcessor(BaseDataProcessor):
else:
raise ValueError("Invalid input: chat_template_kwargs must be a dict")
request.setdefault("enable_thinking", True)
request["prompt_token_ids"] = self.messages2ids(request)
request["prompt_token_ids"] = self.messages2ids(request, **chat_template_kwargs)
else:
raise ValueError(f"Request must contain 'prompt_token_ids', 'prompt', or 'messages': {request}")
@@ -396,7 +398,7 @@ class DataProcessor(BaseDataProcessor):
is_end = response_dict["finished"]
req_id = response_dict["request_id"]
if is_end and len(token_ids) > 0 and not kwargs.get("include_stop_str_in_output"):
if token_ids[-1] == self.tokenizer.eos_token_id:
if token_ids[-1] in self.eos_token_ids:
token_ids = token_ids[:-1]
delta_text, _, previous_texts = self.ids2tokens(token_ids, req_id)
if is_end:
@@ -434,7 +436,7 @@ class DataProcessor(BaseDataProcessor):
token_ids = response_dict["outputs"]["token_ids"]
if is_end and len(token_ids) > 0 and not kwargs.get("include_stop_str_in_output"):
if token_ids[-1] == self.tokenizer.eos_token_id:
if token_ids[-1] in self.eos_token_ids:
token_ids = token_ids[:-1]
delta_text, previous_token_ids, previous_texts = self.ids2tokens(token_ids, req_id)
response_dict["outputs"]["raw_prediction"] = delta_text
@@ -527,7 +529,7 @@ class DataProcessor(BaseDataProcessor):
return tokens["input_ids"][0]
def messages2ids(self, request):
def messages2ids(self, request, **kwargs):
"""
Convert multi-turn messages into ID sequences.
@@ -544,7 +546,7 @@ class DataProcessor(BaseDataProcessor):
split_special_tokens=False,
add_special_tokens=False,
return_tensors="pd",
chat_template=request.get("chat_template", None),
**kwargs,
)
request["text_after_process"] = spliced_message
req_id = None
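
The processors above stop reading request.chat_template and instead pass chat_template_kwargs through messages2ids into apply_chat_template. A dependency-free sketch of that call shape, with a stub tokenizer standing in for the real one:

class StubTokenizer:
    def apply_chat_template(self, messages, tokenize=False, add_special_tokens=False, chat_template=None, **kwargs):
        # The real tokenizer renders the template; here we just show which
        # template ends up being used.
        template = chat_template or "<default template>"
        return f"{template}: " + " ".join(m["content"] for m in messages)


def messages2ids(request: dict, tokenizer: StubTokenizer, **kwargs) -> str:
    # In the real processor this returns token ids; returning the spliced text
    # keeps the example dependency-free.
    return tokenizer.apply_chat_template(
        request["messages"],
        tokenize=False,
        add_special_tokens=False,
        **kwargs,
    )


if __name__ == "__main__":
    request = {
        "messages": [{"role": "user", "content": "Hello!"}],
        "chat_template_kwargs": {"chat_template": "MyTemplate"},
    }
    chat_template_kwargs = request.get("chat_template_kwargs", {})
    print(messages2ids(request, StubTokenizer(), **chat_template_kwargs))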

View File

@@ -392,6 +392,13 @@ class EngineWorkerQueue:
llm_logger.debug("get tasks from queue success")
return item
def clear_data(self):
self.lock.acquire()
self.tasks[:] = list()
self.client_read_flag[:] = [1] * self.num_client
self.lock.release()
llm_logger.info("clear data for engine worker queue")
def cleanup(self):
"""
Exit the worker queue gracefully.
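
The new clear_data above guards the shared task list with an explicit acquire()/release() pair. An equivalent sketch written with a with block, so the lock is released even if clearing raises; plain threading objects stand in for the queue's manager proxies:

import threading


class TinyWorkerQueue:
    def __init__(self, num_client: int) -> None:
        self.lock = threading.Lock()
        self.tasks: list = []
        self.num_client = num_client
        self.client_read_flag = [0] * num_client

    def clear_data(self) -> None:
        # Slice assignment mutates the shared containers in place, mirroring
        # the proxy-friendly style used by EngineWorkerQueue.
        with self.lock:
            self.tasks[:] = []
            self.client_read_flag[:] = [1] * self.num_client


if __name__ == "__main__":
    q = TinyWorkerQueue(num_client=2)
    q.tasks.extend(["req-1", "req-2"])
    q.clear_data()
    assert q.tasks == [] and q.client_read_flag == [1, 1]
    print("cleared")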

View File

@@ -23,7 +23,10 @@ import paddle.nn.layer
from paddle.device.cuda import graphs
from fastdeploy.config import FDConfig
from fastdeploy.distributed.communication import capture_custom_allreduce
from fastdeploy.distributed.communication import (
capture_custom_allreduce,
custom_ar_clear_ipc_handles,
)
from fastdeploy.utils import get_logger
logger = get_logger("cudagrpah_piecewise_backend", "cudagraph_piecewise_backend.log")
@@ -208,6 +211,7 @@ class CudaGraphPiecewiseBackend:
def clear_graph(self):
""" """
# Clear graphs
custom_ar_clear_ipc_handles()
for id, entry in self.concrete_size_entries.items():
if entry.cuda_graph:
del entry.cuda_graph
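
The piecewise backend above now closes the custom all-reduce IPC handles before dropping its captured CUDA graphs, so a later recapture can register fresh graph buffers. A stub-only sketch of that ordering (nothing here touches CUDA; class and method names are illustrative):

class FakeCustomAllreduce:
    def __init__(self) -> None:
        self.ipc_handles = {"buf0": 0xDEAD, "buf1": 0xBEEF}

    def clear_ipc_handles(self) -> None:
        # The CUDA implementation calls cudaIpcCloseMemHandle for each entry.
        self.ipc_handles.clear()


class FakeGraphBackend:
    def __init__(self, allreduce: FakeCustomAllreduce) -> None:
        self.allreduce = allreduce
        self.captured_graphs = {16: object(), 32: object()}

    def clear_graph(self) -> None:
        # Close IPC handles first ...
        self.allreduce.clear_ipc_handles()
        self.captured_graphs.clear()

    def recapture(self) -> None:
        # ... so a new capture can register fresh graph buffers.
        self.captured_graphs = {16: object(), 32: object()}


if __name__ == "__main__":
    backend = FakeGraphBackend(FakeCustomAllreduce())
    backend.clear_graph()
    backend.recapture()
    print("recaptured batch sizes:", sorted(backend.captured_graphs))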

View File

@@ -23,7 +23,7 @@ from paddle import nn
from paddle.autograd import PyLayer
from paddle.distributed.fleet.utils import recompute
from fastdeploy.model_executor.layers.utils import _set_var_distributed, get_tensor
from fastdeploy.model_executor.layers.utils import get_tensor
from fastdeploy.model_executor.models.ernie4_5_vl.dist_utils import (
RowSequenceParallelLinear,
all_gather_group,
@@ -197,19 +197,7 @@ class VariableResolutionResamplerModel(nn.Layer):
self.after_norm = RMSNorm(out_config)
if self.tensor_parallel_degree > 1:
for idx in [2, 3]:
mark_as_sequence_parallel_parameter(self.spatial_linear[idx].weight)
mark_as_sequence_parallel_parameter(self.spatial_linear[idx].bias)
_set_var_distributed(self.spatial_linear[idx].weight, split_axis=0)
_set_var_distributed(self.spatial_linear[idx].bias, split_axis=0)
if self.use_temporal_conv:
for idx in [0, 2, 3]:
mark_as_sequence_parallel_parameter(self.temporal_linear[idx].weight)
mark_as_sequence_parallel_parameter(self.temporal_linear[idx].bias)
mark_as_sequence_parallel_parameter(self.mlp.weight)
mark_as_sequence_parallel_parameter(self.mlp.bias)
mark_as_sequence_parallel_parameter(self.after_norm.weight)
set_weight_attrs(self.spatial_linear[0].weight, {"output_dim": False})
def spatial_conv_reshape(self, x, spatial_conv_size):

View File

@@ -464,6 +464,31 @@ class TokenProcessor:
main_process_metrics.request_inference_time.observe(current_time - task.inference_start_time)
main_process_metrics.request_generation_tokens.observe(self.tokens_counter[task.request_id])
def clear_data(self):
if envs.ENABLE_V1_KVCACHE_SCHEDULER:
self.resource_manager.clear_data()
for i in range(self.cfg.max_num_seqs):
if self.resource_manager.stop_flags[i]:
continue
task = self.resource_manager.tasks_list[i]
result = RequestOutput(
request_id=task.request_id,
outputs=CompletionOutput(
index=i,
send_idx=self.tokens_counter[task.request_id],
token_ids=task.eos_token_ids,
draft_token_ids=[],
),
finished=True,
metrics=RequestMetrics(
arrival_time=time.time(),
request_start_time=task.arrival_time,
),
)
is_prefill = task.disaggregate_info is not None and task.disaggregate_info["role"] == "prefill"
self._recycle_resources(task.request_id, i, task, result, is_prefill)
llm_logger.warning(f"clear data for task {task.request_id}")
def _record_speculative_decoding_mertics(self, accept_num):
"""Record metrics of speculative decoding"""
if not hasattr(main_process_metrics, "spec_decode_draft_acceptance_rate"):

View File

@@ -66,6 +66,7 @@ class DynamicWeightManager:
paddle.device.cuda.empty_cache()
if not self.first_load:
paddle.distributed.restart_process_group()
paddle.distributed.restart_process_group(self.parallel_config.tp_group)
if self.parallel_config.enable_expert_parallel:
paddle.distributed.restart_process_group(self.parallel_config.ep_group)
@@ -115,7 +116,7 @@ class DynamicWeightManager:
self._verify_parameters("clearance")
if self.parallel_config.tensor_parallel_size > 1:
paddle.distributed.barrier(self.parallel_config.tp_group)
paddle.distributed.shutdown_process_group(self.parallel_config.tp_group)
paddle.distributed.shutdown_process_group(self.parallel_config.tp_group)
if self.parallel_config.enable_expert_parallel:
paddle.distributed.barrier(self.parallel_config.ep_group)
paddle.distributed.shutdown_process_group(self.parallel_config.ep_group)
@@ -222,12 +223,14 @@ class DynamicWeightManager:
while model_weights_status.value[0] != ModelWeightsStatus.NORMAL:
if model_weights_status.value[0] == ModelWeightsStatus.UPDATING:
logger.info("infer engine stopped! start to load new checkpoint...")
model_runner.clear_requests()
model_runner.update_parameters(pid)
while model_weights_status.value[0] != ModelWeightsStatus.NORMAL:
time.sleep(0.01)
logger.info("finished loading new checkpoint")
elif model_weights_status.value[0] == ModelWeightsStatus.CLEARING:
logger.info("infer engine stopped! start to clear checkpoint...")
model_runner.clear_requests()
model_runner.clear_parameters(pid)
while model_weights_status.value[0] != ModelWeightsStatus.CLEARED:
time.sleep(0.01)
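
The status loop above now clears in-flight requests before updating or clearing parameters, so no request keeps referencing freed weights. An illustrative, self-contained sketch of that ordering; the enum values and runner methods are stand-ins, not the FastDeploy API:

import enum


class ModelWeightsStatus(enum.IntEnum):
    NORMAL = 0
    UPDATING = 1
    CLEARING = -1


class FakeModelRunner:
    def clear_requests(self) -> None:
        print("stop in-flight requests")

    def update_parameters(self) -> None:
        print("load new checkpoint")

    def clear_parameters(self) -> None:
        print("free checkpoint weights")


def step(status: ModelWeightsStatus, runner: FakeModelRunner) -> None:
    # Requests are always cleared before the weights are touched.
    if status == ModelWeightsStatus.UPDATING:
        runner.clear_requests()
        runner.update_parameters()
    elif status == ModelWeightsStatus.CLEARING:
        runner.clear_requests()
        runner.clear_parameters()


if __name__ == "__main__":
    runner = FakeModelRunner()
    step(ModelWeightsStatus.UPDATING, runner)
    step(ModelWeightsStatus.CLEARING, runner)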

View File

@@ -1028,12 +1028,12 @@ class GPUModelRunner(ModelRunnerBase):
create_cache_tensor = profile or self.parallel_config.splitwise_role == "mixed"
if not create_cache_tensor:
logger.info("Waiting for cache managers to create kv cache..")
logger.info(f"Waiting for cache managers to create kv cache.. {cache_ready_signal.value}")
while cache_ready_signal.value[self.local_rank] != 1:
time.sleep(1)
logger.info("OK! Stop waiting.")
logger.info(f"OK! Stop waiting. {cache_ready_signal.value}")
logger.info("Initializing kv cache for all layers.")
logger.info(f"Initializing kv cache for all layers. {cache_ready_signal.value}")
cache_kvs_list = []
for i in range(self.model_config.num_hidden_layers):
key_cache_name = f"key_caches_{i}_rank{local_rank}.device{self.device_id}"
@@ -1054,8 +1054,8 @@ class GPUModelRunner(ModelRunnerBase):
self.share_inputs["caches"] = cache_kvs_list
if not profile and create_cache_tensor:
logger.info("✅ kv cache is ready!")
cache_ready_signal.value[self.local_rank] = 1
logger.info(f"✅ kv cache is ready! {cache_ready_signal.value}")
paddle.device.cuda.empty_cache()
@@ -1704,6 +1704,10 @@ class GPUModelRunner(ModelRunnerBase):
self.forward_meta.clear_caches()
paddle.device.cuda.empty_cache()
def clear_requests(self):
"""Dynamic model loader use to clear requests use for RL"""
self.share_inputs["stop_flags"][:] = True
def clear_parameters(self, pid):
"""Dynamic model loader use to clear parameters use for RL"""
# Clear CUDAGraph

View File

@@ -337,6 +337,8 @@ class PaddleDisWorkerProc:
self.worker.model_runner,
self.parallel_config.engine_worker_queue_port,
)
logger.info(f"current task queue data: {self.task_queue.num_tasks()}")
self.task_queue.clear_data()
self.model_weights_signal[0] = ModelWeightsStatus.NORMAL
logger.info(f"Rank: {self.local_rank} has updated or cleared parameters.")

View File

@@ -0,0 +1,36 @@
import unittest
from unittest.mock import MagicMock, patch
from fastdeploy.entrypoints.engine_client import EngineClient
class TestEngineClient(unittest.IsolatedAsyncioTestCase):
async def asyncSetUp(self):
# Create a mock EngineClient instance
with patch.object(EngineClient, "__init__", return_value=None) as mock_init:
self.engine_client = EngineClient("model_path")
mock_init.side_effect = lambda *args, **kwargs: print(f"__init__ called with {args}, {kwargs}")
self.engine_client.data_processor = MagicMock()
self.engine_client.zmq_client = MagicMock()
self.engine_client.max_model_len = 1024
self.engine_client.enable_mm = False
async def test_add_request(self):
request = {
"chat_template_kwargs": {"enable_thinking": True},
"prompt_token_ids": [1],
"chat_template": "Hello",
"max_tokens": 20,
"tools": [1],
}
await self.engine_client.add_requests(request)
assert "chat_template" in request["chat_template_kwargs"], "'chat_template' not found in 'chat_template_kwargs"
assert "tools" in request["chat_template_kwargs"], "'tools' not found in 'chat_template_kwargs'"
assert request["chat_template_kwargs"]["chat_template"] == "Hello"
assert request["chat_template_kwargs"]["tools"] == [1]
if __name__ == "__main__":
unittest.main()

View File

@@ -17,6 +17,8 @@ class TestErnie4_5ProcessorProcessResponseDictStreaming(unittest.TestCase):
self.processor.decode_status = {}
self.processor.reasoning_end_dict = {}
self.processor.tool_parser_dict = {}
self.processor.generation_config = MagicMock()
self.processor.eos_token_ids = [1]
# Mock the ids2tokens method
def mock_ids2tokens(token_ids, task_id):
@@ -24,6 +26,18 @@ class TestErnie4_5ProcessorProcessResponseDictStreaming(unittest.TestCase):
self.processor.ids2tokens = mock_ids2tokens
def mock_messages2ids(request, **kwargs):
if "chat_template" in kwargs:
return [1]
else:
return [0]
def mock_apply_default_parameters(request):
return request
self.processor.messages2ids = mock_messages2ids
self.processor._apply_default_parameters = mock_apply_default_parameters
# Mock the reasoning parser
self.mock_reasoning_parser = MagicMock()
self.mock_reasoning_parser.__class__.__name__ = "ErnieX1ReasoningParser"
@@ -49,6 +63,17 @@ class TestErnie4_5ProcessorProcessResponseDictStreaming(unittest.TestCase):
# Verify the result
self.assertEqual(result["outputs"]["raw_prediction"], "delta_text")
def test_process_request_dict(self):
request_dict = {
"messages": [{"role": "user", "content": "Hello!"}],
"chat_template_kwargs": {"chat_template": "Hello!"},
"eos_token_ids": [1],
"temperature": 1,
"top_p": 1,
}
result = self.processor.process_request_dict(request_dict, 100)
self.assertEqual(result["prompt_token_ids"], [1])
if __name__ == "__main__":
unittest.main()

View File

@@ -0,0 +1,63 @@
import unittest
from unittest.mock import MagicMock, patch
from fastdeploy.engine.request import Request
from fastdeploy.input.text_processor import DataProcessor
class TestDataProcessorProcess(unittest.TestCase):
def setUp(self):
# Create a mock DataProcessor instance
with patch.object(DataProcessor, "__init__", return_value=None) as mock_init:
self.processor = DataProcessor("model_path")
mock_init.side_effect = lambda *args, **kwargs: print(f"__init__ called with {args}, {kwargs}")
# Set the required attributes
self.processor.tokenizer = MagicMock()
self.processor.tokenizer.eos_token_id = 1
self.processor.decode_status = {}
self.processor.reasoning_end_dict = {}
self.processor.tool_parser_dict = {}
self.processor.generation_config = MagicMock()
self.processor.eos_token_ids = [1]
def mock_messages2ids(request, **kwargs):
if "chat_template" in kwargs:
return [1]
else:
return [0]
def mock_apply_default_parameters(request):
return request
self.processor.messages2ids = mock_messages2ids
self.processor._apply_default_parameters = mock_apply_default_parameters
def test_process_request(self):
request = Request.from_dict(
{
"request_id": "123",
"messages": [{"role": "user", "content": "Hello!"}],
"eos_token_ids": [1],
"temperature": 1,
"top_p": 1,
}
)
chat_template_kwargs = {"chat_template": "Hello!"}
result = self.processor.process_request(request, 100, chat_template_kwargs=chat_template_kwargs)
self.assertEqual(result.prompt_token_ids, [1])
def test_process_request_dict(self):
request_dict = {
"messages": [{"role": "user", "content": "Hello!"}],
"chat_template_kwargs": {"chat_template": "Hello!"},
"eos_token_ids": [1],
"temperature": 1,
"top_p": 1,
}
result = self.processor.process_request_dict(request_dict, 100)
self.assertEqual(result["prompt_token_ids"], [1])
if __name__ == "__main__":
unittest.main()

View File

@@ -3,15 +3,11 @@ import unittest
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, mock_open, patch
from fastdeploy.engine.request import Request
from fastdeploy.engine.sampling_params import SamplingParams
from fastdeploy.entrypoints.chat_utils import load_chat_template
from fastdeploy.entrypoints.llm import LLM
from fastdeploy.entrypoints.openai.protocol import ChatCompletionRequest
from fastdeploy.entrypoints.openai.serving_chat import OpenAIServingChat
from fastdeploy.input.ernie4_5_processor import Ernie4_5Processor
from fastdeploy.input.ernie4_5_vl_processor import Ernie4_5_VLProcessor
from fastdeploy.input.text_processor import DataProcessor
class TestLodChatTemplate(unittest.IsolatedAsyncioTestCase):
@@ -108,91 +104,6 @@ class TestLodChatTemplate(unittest.IsolatedAsyncioTestCase):
chat_completion = await self.chat_completion_handler.create_chat_completion(request)
self.assertEqual("hello", chat_completion["chat_template"])
@patch("fastdeploy.input.ernie4_5_vl_processor.Ernie4_5_VLProcessor.__init__")
def test_ernie4_5_vl_processor(self, mock_class):
mock_class.return_value = None
ernie4_5_vl_processor = Ernie4_5_VLProcessor()
mock_request = Request.from_dict({"request_id": "123"})
def mock_apply_default_parameters(request):
return request
def mock_process_request(request, max_model_len):
return request
ernie4_5_vl_processor._apply_default_parameters = mock_apply_default_parameters
ernie4_5_vl_processor.process_request_dict = mock_process_request
result = ernie4_5_vl_processor.process_request(mock_request, chat_template="hello")
self.assertEqual("hello", result.chat_template)
@patch("fastdeploy.input.text_processor.DataProcessor.__init__")
def test_text_processor_process_request(self, mock_class):
mock_class.return_value = None
text_processor = DataProcessor()
mock_request = Request.from_dict(
{"request_id": "123", "prompt": "hi", "max_tokens": 128, "temperature": 1, "top_p": 1}
)
def mock_apply_default_parameters(request):
return request
def mock_process_request(request, max_model_len):
return request
def mock_text2ids(text, max_model_len):
return [1]
text_processor._apply_default_parameters = mock_apply_default_parameters
text_processor.process_request_dict = mock_process_request
text_processor.text2ids = mock_text2ids
text_processor.eos_token_ids = [1]
result = text_processor.process_request(mock_request, chat_template="hello")
self.assertEqual("hello", result.chat_template)
@patch("fastdeploy.input.ernie4_5_processor.Ernie4_5Processor.__init__")
def test_ernie4_5_processor_process(self, mock_class):
mock_class.return_value = None
ernie4_5_processor = Ernie4_5Processor()
mock_request = Request.from_dict(
{"request_id": "123", "messages": ["hi"], "max_tokens": 128, "temperature": 1, "top_p": 1}
)
def mock_apply_default_parameters(request):
return request
def mock_process_request(request, max_model_len):
return request
def mock_messages2ids(text):
return [1]
ernie4_5_processor._apply_default_parameters = mock_apply_default_parameters
ernie4_5_processor.process_request_dict = mock_process_request
ernie4_5_processor.messages2ids = mock_messages2ids
ernie4_5_processor.eos_token_ids = [1]
ernie4_5_processor.reasoning_parser = MagicMock()
result = ernie4_5_processor.process_request(mock_request, chat_template="hello")
self.assertEqual("hello", result.chat_template)
@patch("fastdeploy.entrypoints.llm.LLM.__init__")
def test_llm_load(self, mock_class):
mock_class.return_value = None
llm = LLM()
llm.llm_engine = MagicMock()
llm.default_sampling_params = MagicMock()
llm.chat_template = "hello"
def mock_run_engine(req_ids, **kwargs):
return req_ids
def mock_add_request(**kwargs):
return kwargs.get("chat_template")
llm._run_engine = mock_run_engine
llm._add_request = mock_add_request
result = llm.chat(["hello"], sampling_params=SamplingParams(1))
self.assertEqual("hello", result)
@patch("fastdeploy.entrypoints.llm.LLM.__init__")
def test_llm(self, mock_class):
mock_class.return_value = None