Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-10-05 16:48:03 +08:00
[BugFix] Support echo in the completion endpoint (#3245)
* wenxin-tools-511: fix the issue that v1/completions could not echo the prompt.
* Support echo for multiple prompts
* Support streaming echo for multiple prompts
* Add unit tests for echo support in the completion endpoint
* pre-commit
* Remove a redundant test file
* Fix the unit-test method for completion-endpoint echo support
* Add a unit-test file
* Add unit tests
* unittest
* Add unit tests
* Fix unit tests
* Remove an unnecessary assert.
* Resubmit
* Update the test method
* ut
* Unit test to validate the approach
* Unit test to validate the approach
* Unit test to validate the approach, take 3
* Refine the unit-test code to narrow the test scope.
* Refine the unit-test code to narrow the test scope, round 2.
* Refine the unit-test code to narrow the test scope, round 3.
* support 'echo' in chat/completion.
* update
* update
* update
* update
* update
* update
* Add unit tests covering token ids
* update
* Fix index error
* Fix index error
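For context, `echo` is the OpenAI completions parameter that asks the server to return the prompt prepended to the generated text, which is the behavior this PR fixes for `v1/completions`. A minimal client-side sketch of the expected behavior (host, port, and model name are illustrative, not taken from this PR):

```python
# Hypothetical request against a FastDeploy OpenAI-compatible server.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",  # adjust host/port for your deployment
    json={
        "model": "default",
        "prompt": "Hello, my name is",
        "max_tokens": 8,
        "echo": True,  # ask the server to prepend the prompt to the completion
    },
)
text = resp.json()["choices"][0]["text"]
# With echo enabled, the returned text starts with the prompt itself.
assert text.startswith("Hello, my name is")
```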
```diff
@@ -241,6 +241,14 @@ class OpenAIServingCompletion:
         if dealer is not None:
             dealer.close()
 
+    async def _echo_back_prompt(self, request, res, idx):
+        if res["outputs"].get("send_idx", -1) == 0 and request.echo:
+            if isinstance(request.prompt, list):
+                prompt_text = request.prompt[idx]
+            else:
+                prompt_text = request.prompt
+            res["outputs"]["text"] = prompt_text + (res["outputs"]["text"] or "")
+
     def calc_finish_reason(self, max_tokens, token_num, output, tool_called):
         if max_tokens is None or token_num != max_tokens:
             if tool_called or output.get("tool_call"):
```
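The commit log above centers on unit-testing this method; since `_echo_back_prompt` reads only its arguments and touches no instance state, it can be exercised on a bare instance. A sketch, assuming the import path `fastdeploy.entrypoints.openai.serving_completion` (verify against your checkout):

```python
import asyncio
from types import SimpleNamespace

from fastdeploy.entrypoints.openai.serving_completion import OpenAIServingCompletion

# Bypass __init__: the method does not use any state set up there.
serving = OpenAIServingCompletion.__new__(OpenAIServingCompletion)

# Stub request carrying just the fields _echo_back_prompt reads.
request = SimpleNamespace(echo=True, prompt=["Hi there", "Second prompt"])
res = {"outputs": {"send_idx": 0, "text": " General Kenobi"}}

asyncio.run(serving._echo_back_prompt(request, res, idx=0))

# For a list prompt, the entry at idx is prepended to the first chunk only.
assert res["outputs"]["text"] == "Hi there General Kenobi"
```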
```diff
@@ -338,6 +346,7 @@
                 else:
                     arrival_time = res["metrics"]["arrival_time"] - inference_start_time[idx]
 
+                await self._echo_back_prompt(request, res, idx)
                 output = res["outputs"]
                 output_top_logprobs = output["top_logprobs"]
                 logprobs_res: Optional[CompletionLogprobs] = None
```
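In this streaming loop, the guard on `send_idx == 0` inside `_echo_back_prompt` means the prompt is prepended exactly once, to the first chunk of each choice. Continuing the previous sketch with invented chunk values:

```python
import asyncio
from types import SimpleNamespace

from fastdeploy.entrypoints.openai.serving_completion import OpenAIServingCompletion

serving = OpenAIServingCompletion.__new__(OpenAIServingCompletion)
request = SimpleNamespace(echo=True, prompt="Hello, my name is")

# Two consecutive stream chunks for the same choice.
first = {"outputs": {"send_idx": 0, "text": " Paddle"}}
second = {"outputs": {"send_idx": 1, "text": " and I"}}

async def replay():
    await serving._echo_back_prompt(request, first, idx=0)
    await serving._echo_back_prompt(request, second, idx=0)

asyncio.run(replay())
assert first["outputs"]["text"] == "Hello, my name is Paddle"
assert second["outputs"]["text"] == " and I"  # later chunks are left untouched
```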
```diff
@@ -450,7 +459,7 @@
             final_res = final_res_batch[idx]
             prompt_token_ids = prompt_batched_token_ids[idx]
             assert prompt_token_ids is not None
-            prompt_text = final_res["prompt"]
+            prompt_text = request.prompt
             completion_token_ids = completion_batched_token_ids[idx]
 
             output = final_res["outputs"]
```
```diff
@@ -468,16 +477,14 @@
 
             if request.echo:
                 assert prompt_text is not None
                 if request.max_tokens == 0:
                     token_ids = prompt_token_ids
                     output_text = prompt_text
                 else:
-                    token_ids = [*prompt_token_ids, *output["token_ids"]]
-                    if isinstance(prompt_text, list):
-                        output_text = prompt_text[idx] + output["text"]
-                    else:
-                        output_text = prompt_text + output["text"]
+                    token_ids = [*prompt_token_ids, *output["token_ids"]]
+                    output_text = str(prompt_text) + output["text"]
             else:
                 token_ids = output["token_ids"]
                 output_text = output["text"]
 
             finish_reason = self.calc_finish_reason(request.max_tokens, final_res["output_token_ids"], output, False)
 
             choice_data = CompletionResponseChoice(
```
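In this non-streaming path the echo is assembled once, at response-build time: with `max_tokens == 0` the response is the prompt alone, otherwise prompt and completion token ids and text are concatenated. A worked illustration mirroring the logic above, with made-up token ids:

```python
# Values are invented for illustration; the logic mirrors the diff above.
prompt_token_ids = [101, 2023, 2003]
prompt_text = "Hello, my name is"
output = {"token_ids": [7592, 2088], "text": " Paddle bot"}
max_tokens = 8  # request.max_tokens

if max_tokens == 0:
    # Prompt-only echo: nothing was generated.
    token_ids, output_text = prompt_token_ids, prompt_text
else:
    token_ids = [*prompt_token_ids, *output["token_ids"]]
    output_text = str(prompt_text) + output["text"]

assert token_ids == [101, 2023, 2003, 7592, 2088]
assert output_text == "Hello, my name is Paddle bot"
```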