FastDeploy/test/input/test_ernie_processor.py
luukunn 132a8ef425 Release/2.1 (#3414)
* Pre ce modified (#3335) (#3360)

* Pre ce modified (#3335)

* update

* update

* fix

* fix

* update

* update

* update

* fix

* update

* update

* update

* add ut fix pr(3367)

* [Bug Fix] Fix V1 video bug (#3387)

* fix stopseq error info (#3342)

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>

* [BugFix] Fix default log level of paddleformers (#3377)

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>

* [Polish Code] Remove useless notes

* feat(log):add_request_and_response_log (#3392)

* Optimize CI execution workflow. (#3371) (#3384)

* fix

* [BugFix] fix control signal release failed (#3374)

* [BugFix]

* [BugFix]

* [BugFix]

* [BugFix]

* fix

* fix

---------

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

* Revert "Merge branch 'feature/online/vs_think_20250813' into release/2.1"

This reverts commit 02596fc537, reversing
changes made to 03347626a6.

* [XPU] Fixed the issue of performance degradation caused by enabling ENABLE_V1_KVCACHE_SCHEDULER (#3393)

* fix v1 schedule oom bug

* fix v1 schedule oom bug

* [BugFix] fix ErnieProcessor not set raw_prediction (#3401)

* [Doc]Release fastdeploy-xpu 2.1.0 (#3407)

* fix v1 schedule oom bug

* fix v1 schedule oom bug

* update release note

* [Doc]Release fastdeploy-xpu 2.0.3  (#3408)

* fix v1 schedule oom bug

* fix v1 schedule oom bug

* update release note

* update info

---------

Co-authored-by: YUNSHEN XIE <1084314248@qq.com>
Co-authored-by: ming1753 <61511741+ming1753@users.noreply.github.com>
Co-authored-by: JYChen <zoooo0820@qq.com>
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <jiangjiajun@baidu.com>
Co-authored-by: xiaolei373 <zley373@gmail.com>
Co-authored-by: ltd0924 <32387785+ltd0924@users.noreply.github.com>
Co-authored-by: yinwei <yinwei_hust@163.com>
Co-authored-by: memoryCoderC <1137889088@qq.com>
2025-08-14 20:53:47 +08:00


import unittest
from unittest.mock import MagicMock, patch

from fastdeploy.input.ernie_processor import ErnieProcessor


class TestErnieProcessorProcessResponseDictStreaming(unittest.TestCase):
    def setUp(self):
        # Create an ErnieProcessor instance without running the real __init__
        with patch.object(ErnieProcessor, "__init__", return_value=None) as mock_init:
            self.processor = ErnieProcessor("model_path")
            mock_init.side_effect = lambda *args, **kwargs: print(f"__init__ called with {args}, {kwargs}")

        # Set the attributes the streaming path relies on
        self.processor.tokenizer = MagicMock()
        self.processor.tokenizer.eos_token_id = 1
        self.processor.decode_status = {}
        self.processor.tool_parsers = {}

        # Stub the ids2tokens method so decoding returns a fixed delta text
        def mock_ids2tokens(token_ids, task_id):
            return "delta_text", [2, 3], "previous_texts"

        self.processor.ids2tokens = mock_ids2tokens

        # Mock the reasoning parser
        self.mock_reasoning_parser = MagicMock()
        self.mock_reasoning_parser.__class__.__name__ = "ErnieX1ReasoningParser"
        self.mock_reasoning_parser.extract_reasoning_content_streaming.return_value = ("reasoning", "text")
        self.processor.reasoning_parser = self.mock_reasoning_parser

        # Mock the tool parser
        self.mock_tool_parser = MagicMock()
        self.mock_tool_parser.extract_tool_calls_streaming.return_value = "tool_call"
        self.mock_tool_parser_obj = MagicMock()
        self.mock_tool_parser_obj.return_value = self.mock_tool_parser
        self.processor.tool_parser_obj = self.mock_tool_parser_obj

    def test_process_response_dict_streaming_normal_case(self):
        """Test streaming response processing in the normal case."""
        # Prepare the input
        response_dict = {"finished": False, "request_id": "req1", "outputs": {"token_ids": [4, 5]}}
        kwargs = {"enable_thinking": True}

        # Call the method under test
        result = self.processor.process_response_dict_streaming(response_dict, **kwargs)

        # Verify the result: raw_prediction must carry the decoded delta text
        self.assertEqual(result["outputs"]["raw_prediction"], "delta_text")


if __name__ == "__main__":
    unittest.main()
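
The assertion above pins down the behavior restored by "[BugFix] fix ErnieProcessor not set raw_prediction (#3401)": when a streaming chunk is processed, the decoded delta text must be written back to outputs["raw_prediction"]. The snippet below is a minimal sketch of that contract under the test's mocks; it is not the actual FastDeploy implementation, and the function name and structure are illustrative only. The test itself can be run with `python -m unittest test/input/test_ernie_processor.py`.

# Sketch of the contract the test exercises (assumption, not the real
# ErnieProcessor.process_response_dict_streaming implementation).
def process_streaming_chunk_sketch(processor, response_dict, **kwargs):
    task_id = response_dict["request_id"]
    token_ids = response_dict["outputs"]["token_ids"]
    # In the mocked setup, ids2tokens returns (delta_text, token_ids, previous_texts).
    delta_text, _, _ = processor.ids2tokens(token_ids, task_id)
    # PR #3401: raw_prediction must always be populated from the decoded delta text.
    response_dict["outputs"]["raw_prediction"] = delta_text
    return response_dict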