mirror of
https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-12-24 13:28:13 +08:00
[Feature] Support Paddle-OCR (#4396)
Some checks failed
CE Compile Job / ce_job_pre_check (push) Has been cancelled
CE Compile Job / print_ce_job_pre_check_outputs (push) Has been cancelled
CE Compile Job / FD-Clone-Linux (push) Has been cancelled
CE Compile Job / Show Code Archive Output (push) Has been cancelled
CE Compile Job / BUILD_SM8090 (push) Has been cancelled
CE Compile Job / BUILD_SM8689 (push) Has been cancelled
CE Compile Job / CE_UPLOAD (push) Has been cancelled
Deploy GitHub Pages / deploy (push) Has been cancelled
Publish Job / publish_pre_check (push) Has been cancelled
Publish Job / print_publish_pre_check_outputs (push) Has been cancelled
Publish Job / FD-Clone-Linux (push) Has been cancelled
Publish Job / Show Code Archive Output (push) Has been cancelled
Publish Job / BUILD_SM8090 (push) Has been cancelled
Publish Job / BUILD_SM8689 (push) Has been cancelled
Publish Job / PADDLE_PYPI_UPLOAD_8090 (push) Has been cancelled
Publish Job / PADDLE_PYPI_UPLOAD_8689 (push) Has been cancelled
Publish Job / Run FD Image Build (push) Has been cancelled
Publish Job / Run FastDeploy Unit Tests and Coverage (push) Has been cancelled
Publish Job / Run FastDeploy LogProb Tests (push) Has been cancelled
Publish Job / Extracted partial CE model tasks to run in CI. (push) Has been cancelled
Publish Job / Run Base Tests (push) Has been cancelled
Publish Job / Run Accuracy Tests (push) Has been cancelled
Publish Job / Run Stable Tests (push) Has been cancelled
CI Images Build / FD-Clone-Linux (push) Has been cancelled
CI Images Build / Show Code Archive Output (push) Has been cancelled
CI Images Build / CI Images Build (push) Has been cancelled
CI Images Build / BUILD_SM8090 (push) Has been cancelled
CI Images Build / Run FastDeploy Unit Tests and Coverage (push) Has been cancelled
CI Images Build / Run FastDeploy LogProb Tests (push) Has been cancelled
CI Images Build / Extracted partial CE model tasks to run in CI. (push) Has been cancelled
CI Images Build / Run Base Tests (push) Has been cancelled
CI Images Build / Run Accuracy Tests (push) Has been cancelled
CI Images Build / Run Stable Tests (push) Has been cancelled
CI Images Build / Publish Docker Images Pre Check (push) Has been cancelled
* init
* update code
* fix code style & disable thinking
* adapt for common_engine.update_mm_requests_chunk_size
* use 3d rope
* use flash_attn_unpadded
* opt siglip
* update to be compatible with the latest codebase
* fix typo
* optim OCR performance
* fix bug
* fix bug
* fix bug
* fix bug
* normalize name
* modify xpu rope
* revert logger
* fix bug
* fix bug
* fix bug
* support default_v1
* optim performance
* fix bug

---------

Co-authored-by: root <root@szzj-acg-tge1-fdda9.szzj.baidu.com>
Co-authored-by: zhangyue66 <zhangyue66@baidu.com>
fastdeploy/input/paddleocr_vl_processor/__init__.py (new file, 20 lines)
@@ -0,0 +1,20 @@
"""
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""

from .paddleocr_vl_processor import PaddleOCRVLProcessor
from .process import DataProcessor

__all__ = ["DataProcessor", "PaddleOCRVLProcessor"]
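The package exposes two entry points: PaddleOCRVLProcessor, the request-level processor wired into the engine, and DataProcessor, the lower-level tokenizer and feature builder. A minimal usage sketch of the lower-level path, assuming a FastDeploy build that includes this package; the checkpoint path is a hypothetical placeholder for any PaddleOCR-VL checkpoint with tokenizer files and a preprocessor_config.json:

from PIL import Image

from fastdeploy.input.paddleocr_vl_processor import DataProcessor

# "./PaddleOCR-VL" is a hypothetical local checkpoint path (assumption).
processor = DataProcessor(model_path="./PaddleOCR-VL")

# text2ids expects one "<|image_pad|>" marker per image in the prompt.
img = Image.new("RGB", (448, 448), color="white")  # stand-in for a document image
outputs = processor.text2ids("Recognize the text: <|image_pad|>", images=[img])
print(outputs["input_ids"].shape, outputs["grid_thw"])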
fastdeploy/input/paddleocr_vl_processor/image_processor.py (new file, 275 lines)
@@ -0,0 +1,275 @@
"""
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""

"""Image processor class for PaddleOCR-VL."""

# TODO: Support videos

import json
import logging
import math
from pathlib import Path
from typing import Dict, List, Optional, Union

import numpy as np
from paddleformers.transformers.feature_extraction_utils import BatchFeature
from paddleformers.transformers.image_processing_utils import BaseImageProcessor
from paddleformers.transformers.image_utils import (
    ImageInput,
    is_valid_image,
    make_list_of_images,
    to_numpy_array,
)

_OPENAI_CLIP_MEAN = [0.48145466, 0.4578275, 0.40821073]
_OPENAI_CLIP_STD = [0.26862954, 0.26130258, 0.27577711]


def make_batched_images(images) -> List[List[ImageInput]]:
    """
    Accepts images in list or nested-list format and makes a flat list of images for preprocessing.

    Args:
        images (`Union[List[List[ImageInput]], List[ImageInput], ImageInput]`):
            The input image(s).

    Returns:
        list: A list of images.
    """
    if isinstance(images, (list, tuple)) and isinstance(images[0], (list, tuple)) and is_valid_image(images[0][0]):
        return [img for img_list in images for img in img_list]
    elif isinstance(images, (list, tuple)) and is_valid_image(images[0]):
        return images
    elif is_valid_image(images):
        return [images]
    raise ValueError(f"Could not make batched images from {images}")


def adjust_size(size, patch_size):
    num_patches = size // patch_size
    if num_patches % 2 != 0:
        num_patches -= 1
    return num_patches * patch_size


def smart_resize(
    height: int,
    width: int,
    factor: int = 28,
    min_pixels: int = 28 * 28 * 130,
    max_pixels: int = 28 * 28 * 1280,
):
    """Rescales the image so that the following conditions are met:

    1. Both dimensions (height and width) are divisible by 'factor'.
    2. The total number of pixels is within the range ['min_pixels', 'max_pixels'].
    3. The aspect ratio of the image is maintained as closely as possible.
    """
    if height < factor:
        logging.debug(f"smart_resize: height={height} < factor={factor}, reset height=factor")
        width = round((width * factor) / height)
        height = factor

    if width < factor:
        logging.debug(f"smart_resize: width={width} < factor={factor}, reset width=factor")
        height = round((height * factor) / width)
        width = factor

    if max(height, width) / min(height, width) > 200:
        raise ValueError(
            f"absolute aspect ratio must be smaller than 200, got {max(height, width) / min(height, width)}"
        )
    h_bar = round(height / factor) * factor
    w_bar = round(width / factor) * factor
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar


class ImageProcessor(BaseImageProcessor):
    model_input_names = [
        "pixel_values",
        "image_grid_thw",
        "pixel_values_videos",
        "video_grid_thw",
    ]

    def __init__(
        self,
        do_resize: bool = True,
        resample: int = 3,
        do_rescale: bool = True,
        rescale_factor: Union[int, float] = 1 / 255,
        do_normalize: bool = True,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: bool = True,
        min_pixels: int = 28 * 28 * 130,
        max_pixels: int = 28 * 28 * 1280,
        patch_size: int = 14,
        temporal_patch_size: int = 1,
        merge_size: int = 2,
        **kwargs,
    ) -> None:
        super().__init__()
        self.do_resize = do_resize
        self.resample = resample
        self.do_rescale = do_rescale
        self.rescale_factor = rescale_factor
        self.do_normalize = do_normalize
        self.image_mean = image_mean if image_mean is not None else _OPENAI_CLIP_MEAN
        self.image_std = image_std if image_std is not None else _OPENAI_CLIP_STD
        self.min_pixels = min_pixels
        self.max_pixels = max_pixels
        self.patch_size = patch_size
        self.temporal_patch_size = temporal_patch_size
        self.merge_size = merge_size
        self.size = {"min_pixels": min_pixels, "max_pixels": max_pixels}  # not used
        self.do_convert_rgb = do_convert_rgb

    @classmethod
    def from_pretrained(cls, pretrained_model_dir):
        pretrained_model_dir = Path(pretrained_model_dir)
        image_processor_config_path = pretrained_model_dir / "preprocessor_config.json"
        with open(image_processor_config_path, "r", encoding="utf-8") as f:
            image_processor_config = json.load(f)
        return cls(**image_processor_config)

    def _preprocess(
        self,
        images,
        do_resize: Optional[bool] = None,
        do_rescale: Optional[bool] = None,
        rescale_factor: Optional[float] = None,
        do_normalize: Optional[bool] = None,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: Optional[bool] = None,
    ):
        images = make_list_of_images(images)

        if do_convert_rgb:
            images = [image.convert("RGB") for image in images]

        width, height = images[0].size
        resized_height, resized_width = height, width
        processed_images = []

        for image in images:
            if do_resize:
                resized_height, resized_width = smart_resize(
                    height,
                    width,
                    factor=self.patch_size * self.merge_size,
                    min_pixels=self.min_pixels,
                    max_pixels=self.max_pixels,
                )
                image = image.resize((resized_width, resized_height), resample=self.resample)

            image = to_numpy_array(image)

            if do_rescale:
                image = (image * rescale_factor).astype(np.float32)

            if do_normalize:
                image = image.astype(np.float32)
                image -= np.array(image_mean, dtype=np.float32)
                image /= np.array(image_std, dtype=np.float32)

            processed_images.append(image)

        patches = np.array(processed_images)
        patches = patches.transpose(0, 3, 1, 2)
        if patches.shape[0] == 1:
            patches = np.tile(patches, (self.temporal_patch_size, 1, 1, 1))
        channel = patches.shape[1]
        grid_t = patches.shape[0] // self.temporal_patch_size
        grid_h, grid_w = (
            resized_height // self.patch_size,
            resized_width // self.patch_size,
        )

        patches = patches.reshape(
            grid_t,
            self.temporal_patch_size,
            channel,
            grid_h,
            self.patch_size,
            grid_w,
            self.patch_size,
        )
        patches = patches.transpose(0, 3, 5, 2, 1, 4, 6)
        assert self.temporal_patch_size == 1
        flatten_patches = patches.reshape(grid_t * grid_h * grid_w, channel, self.patch_size, self.patch_size)
        return flatten_patches, np.array([grid_t, grid_h, grid_w])

    def preprocess(
        self,
        images,
        videos=None,
        do_resize: Optional[bool] = None,
        size: Optional[Dict[str, int]] = None,
        do_rescale: Optional[bool] = None,
        rescale_factor: Optional[float] = None,
        do_normalize: Optional[bool] = None,
        image_mean: Optional[Union[float, List[float]]] = None,
        image_std: Optional[Union[float, List[float]]] = None,
        do_convert_rgb: Optional[bool] = None,
        return_tensors=None,
    ):
        do_resize = do_resize if do_resize is not None else self.do_resize
        size = size if size is not None else self.size
        do_rescale = do_rescale if do_rescale is not None else self.do_rescale
        rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
        do_normalize = do_normalize if do_normalize is not None else self.do_normalize
        image_mean = image_mean if image_mean is not None else self.image_mean
        image_std = image_std if image_std is not None else self.image_std
        do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb

        if videos is not None:
            raise NotImplementedError("Videos are not yet supported")

        patches, image_grid_thw = self._preprocess(
            images,
            do_resize=do_resize,
            do_rescale=do_rescale,
            rescale_factor=rescale_factor,
            do_normalize=do_normalize,
            image_mean=image_mean,
            image_std=image_std,
            do_convert_rgb=do_convert_rgb,
        )
        pixel_values = np.array(patches)
        data = {"pixel_values": pixel_values, "grid_thw": image_grid_thw}
        return BatchFeature(data=data, tensor_type=return_tensors)
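To make the resize contract concrete, here is a standalone arithmetic check that mirrors the in-range branch of smart_resize for an 800x600 input, with factor = patch_size * merge_size = 14 * 2 = 28 as the class uses it. The numbers are worked out, not taken from the diff:

import math

height, width, factor = 800, 600, 28
min_pixels, max_pixels = 28 * 28 * 130, 28 * 28 * 1280

h_bar = round(height / factor) * factor  # 812
w_bar = round(width / factor) * factor   # 588
assert min_pixels <= h_bar * w_bar <= max_pixels  # 477456 pixels, already in range

# Patch grid and final token count after the 2x2 spatial merge:
grid_h, grid_w = h_bar // 14, w_bar // 14  # 58, 42
num_tokens = grid_h * grid_w // (2 * 2)    # 609 image tokens
print(h_bar, w_bar, num_tokens)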
fastdeploy/input/paddleocr_vl_processor/paddleocr_vl_processor.py (new file, 290 lines)
@@ -0,0 +1,290 @@
"""
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""

import numpy as np

from fastdeploy.engine.request import Request
from fastdeploy.input.text_processor import DataProcessor as TextProcessor
from fastdeploy.utils import data_processor_logger

from .process import DataProcessor


class PaddleOCRVLProcessor(TextProcessor):
    """
    PaddleOCR vision-language processor for handling multimodal inputs.

    This processor extends TextProcessor to support:
    - Image processing
    - Multimodal feature extraction
    - Tokenization and position encoding
    - Request processing and model input generation

    Attributes:
        processor (DataProcessor): Underlying data processor instance
        tokenizer: Text tokenizer instance
        limit_mm_per_prompt (dict): Limits for multimodal inputs per prompt
    """

    def __init__(
        self,
        config,
        model_name_or_path,
        limit_mm_per_prompt=None,
        mm_processor_kwargs=None,
        reasoning_parser_obj=None,
        tool_parser_obj=None,
    ):
        """
        Initialize a PaddleOCRVLProcessor instance.

        Args:
            config: Model configuration object
            model_name_or_path (str): Pretrained model name or path
            limit_mm_per_prompt (dict, optional): Limits for multimodal inputs
            mm_processor_kwargs (dict, optional): Multimodal processor arguments
            reasoning_parser_obj: Reasoning parser instance
            tool_parser_obj: Tool parser instance
        """
        super().__init__(model_name_or_path, reasoning_parser_obj, tool_parser_obj)

        data_processor_logger.info(f"model_name_or_path: {model_name_or_path}")
        processor_kwargs = self._parse_processor_kwargs(mm_processor_kwargs)
        self.processor = DataProcessor(
            model_path=model_name_or_path,
            tokens_per_second=config.vision_config.tokens_per_second,
            tokenizer=self.tokenizer,
            **processor_kwargs,
        )
        self.image_patch_id = self.processor.image_patch_id
        self.limit_mm_per_prompt = self._parse_limits(limit_mm_per_prompt)

    def process_request(self, request, max_model_len=None, **kwargs):
        """
        Process an incoming request and generate model inputs.

        Args:
            request: Input request object
            max_model_len (int, optional): Maximum context length
            **kwargs: Additional processing parameters

        Returns:
            Request: Processed request with model inputs
        """
        task = request.to_dict()
        task["enable_thinking"] = kwargs.get("enable_thinking", False)
        self.process_request_dict(task, max_model_len)
        request = Request.from_dict(task)
        request = self._apply_default_parameters(request)
        return request

    def _parse_processor_kwargs(self, kwargs):
        """
        Parse and validate multimodal processor arguments.

        Args:
            kwargs (dict): Processor configuration arguments

        Returns:
            dict: Validated processor arguments

        Raises:
            ValueError: If the argument format is invalid
        """
        if not kwargs:
            return {}

        try:
            if not isinstance(kwargs, dict):
                raise ValueError("mm-processor-kwargs must be a dictionary")

            # Validate kwargs types against the expected schema
            data_processor_logger.info(f"Processing kwargs: {kwargs}")
            expected_types = {
                "video_max_frames": int,  # Maximum video frames parameter
                "video_min_frames": int,  # Minimum video frames parameter
            }

            for key, value in kwargs.items():
                if key in expected_types and not isinstance(value, expected_types[key]):
                    raise ValueError(
                        f"Invalid type for {key}: expected {expected_types[key].__name__}, got {type(value).__name__}"
                    )

            return kwargs

        except Exception as e:
            data_processor_logger.warning(f"Invalid mm-processor-kwargs format: {e}")
            return {}

    def _parse_limits(self, limits):
        """
        Parse and validate multimodal input limits.

        Args:
            limits (dict): Input limits configuration

        Returns:
            dict: Validated limits with defaults

        Raises:
            ValueError: If the limits format is invalid
        """
        DEFAULT_LIMITS = {"image": 1, "video": 1, "audio": 1}

        if not limits:
            return DEFAULT_LIMITS

        try:
            if not isinstance(limits, dict):
                raise ValueError("limit-mm-per-prompt must be a dictionary")
            data_processor_logger.info(f"_parse_limits: {limits}")
            return {**DEFAULT_LIMITS, **limits}
        except Exception as e:
            data_processor_logger.warning(f"Invalid limit-mm-per-prompt format: {e}, using default limits")
            return DEFAULT_LIMITS

    def _check_mm_limits(self, item):
        """
        Validate multimodal inputs against the configured limits.

        Args:
            item: Input request item to validate

        Raises:
            ValueError: If the input exceeds the configured limits
        """
        if isinstance(item, dict):
            # Request contains a prompt plus multi_modal_data
            mm_data = item
        else:
            # Request contains messages
            mm_data = {"image": [], "video": []}

            for message in item:
                if isinstance(message.get("content"), list):
                    for part in message["content"]:
                        if part.get("type") in ["image_url", "image"]:
                            mm_data["image"].append(part)
                        elif part.get("type") in ["video_url", "video"]:
                            mm_data["video"].append(part)

        for modality, data in mm_data.items():
            if modality in self.limit_mm_per_prompt:
                limit = self.limit_mm_per_prompt[modality]
                if len(data) > limit:
                    raise ValueError(f"Too many {modality} items in prompt, got {len(data)} but limit is {limit}")

    def process_request_dict(self, request, max_model_len=None):
        """
        Process a request dictionary into model inputs.

        Args:
            request (dict): Input request dictionary
            max_model_len (int, optional): Maximum context length

        Returns:
            dict: Processed request with model inputs

        Raises:
            ValueError: If the request format is invalid
        """
        request = self._apply_default_parameters(request)
        if not request.get("eos_token_ids"):
            request["eos_token_ids"] = self.eos_token_ids

        stop_sequences = request.get("stop", [])
        if stop_sequences:
            stop_seqs, stop_seqs_len = self.update_stop_seq(stop_sequences)
            request["stop_token_ids"] = stop_seqs
            request["stop_seqs_len"] = stop_seqs_len

        if request.get("prompt"):
            multimodal_data = request.get("multimodal_data")
            if multimodal_data is None:
                multimodal_data = {}
            self._check_mm_limits(multimodal_data)
            images = multimodal_data.get("image", None)
            videos = multimodal_data.get("video", None)
            outputs = self.processor.text2ids(request["prompt"], images, videos)

        elif request.get("messages"):
            messages = request["messages"]
            self._check_mm_limits(messages)
            outputs = self.processor.request2ids(request)

        else:
            raise ValueError(f"Request must contain 'prompt' or 'messages': {request}")

        metadata = request.get("metadata")
        # Handle continuation of a previous generation by appending the existing tokens
        if metadata and metadata.get("generated_token_ids"):
            self.append_generated_tokens(outputs, metadata["generated_token_ids"])
        outputs = self.pack_outputs(outputs)

        request["prompt_token_ids"] = outputs["input_ids"].tolist()
        request["prompt_token_ids_len"] = len(request["prompt_token_ids"])
        request["multimodal_inputs"] = outputs

        # Truncate the prompt if it exceeds the model context length
        if max_model_len is not None and len(request["prompt_token_ids"]) > max_model_len:
            request["prompt_token_ids"] = request["prompt_token_ids"][
                : max_model_len - 1
            ]  # Leave space for at least 1 new token

        # Set a default max_tokens if not specified
        if request.get("max_tokens") is None:
            request["max_tokens"] = max(1, max_model_len - len(request["prompt_token_ids"]))  # Ensure at least 1 token

        return request

    def append_generated_tokens(self, outputs, generated_token_ids):
        """
        Append generated tokens to the existing outputs.

        Args:
            outputs: Current model outputs
            generated_token_ids: Generated tokens to append
        """
        out = {"input_ids": [], "token_type_ids": [], "position_ids": [], "cur_position": outputs["cur_position"]}
        self.processor._add_text(generated_token_ids, out)

        outputs["input_ids"] = np.concatenate(
            [outputs["input_ids"], np.array(out["input_ids"], dtype=np.int64)], axis=0
        )
        outputs["token_type_ids"] = np.concatenate(
            [outputs["token_type_ids"], np.array(out["token_type_ids"], dtype=np.int64)], axis=0
        )
        outputs["position_ids"] = np.concatenate(
            [outputs["position_ids"], out["position_ids"][0]], axis=1, dtype=np.int64
        )
        outputs["cur_position"] = out["cur_position"]

    def pack_outputs(self, outputs):
        """
        Prepare the final output dictionary for the model.

        Args:
            outputs: Intermediate processing outputs

        Returns:
            dict: Packed output dictionary with all required fields
        """
        outputs["image_patch_id"] = self.processor.image_token_id
        outputs["video_patch_id"] = self.processor.video_token_id
        outputs["position_ids"] = outputs["position_ids"].transpose(1, 0)
        return outputs
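For reference, process_request_dict accepts either a raw prompt with attached media or an OpenAI-style message list. A hedged sketch of the two request shapes the branches above handle; the exact content-part schema for messages is inferred from the type checks in _check_mm_limits, and all field values here are illustrative:

from PIL import Image

pil_image = Image.new("RGB", (448, 448), "white")  # stand-in for a real document image

# Shape 1: raw prompt plus attached media (handled by the "prompt" branch).
request_prompt = {
    "prompt": "Recognize the text: <|image_pad|>",
    "multimodal_data": {"image": [pil_image]},
}

# Shape 2: OpenAI-style chat messages (handled by the "messages" branch).
request_messages = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/doc.png"}},
                {"type": "text", "text": "Recognize the text in this image."},
            ],
        }
    ],
}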
fastdeploy/input/paddleocr_vl_processor/process.py (new file, 471 lines)
@@ -0,0 +1,471 @@
"""
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""

from typing import Any, Dict, List, Union

import numpy as np
from paddleformers.transformers import AutoTokenizer

from fastdeploy.entrypoints.chat_utils import parse_chat_messages
from fastdeploy.input.utils import IDS_TYPE_FLAG
from fastdeploy.utils import data_processor_logger

from .image_processor import ImageProcessor


class DataProcessor:
    """
    Processes multimodal inputs (text, images, videos) into model-ready formats.

    Handles:
    - Tokenization of text with special tokens for visual content
    - Image and video preprocessing
    - Generation of 3D positional embeddings
    - Conversion of chat messages to model inputs

    Attributes:
        tokenizer: Text tokenizer instance
        image_processor: Image/video preprocessor
        image_token: Special token for image placeholders
        video_token: Special token for video placeholders
        vision_start: Token marking the start of visual content
    """

    def __init__(
        self,
        model_path: str,
        video_min_frames: int = 4,
        video_max_frames: int = 768,
        tokens_per_second: int = 2,
        tokenizer=None,
        **kwargs,
    ) -> None:
        """
        Initialize the data processor.

        Args:
            model_path: Path to the pretrained model
            video_min_frames: Minimum frames to sample from videos
            video_max_frames: Maximum frames to sample from videos
            tokens_per_second: Temporal resolution for positional embeddings
            tokenizer: Optional pre-built tokenizer instance
            **kwargs: Additional configuration
        """
        self.min_frames = video_min_frames
        self.max_frames = video_max_frames

        # Initialize a fast tokenizer with left padding
        if tokenizer is None:
            self.tokenizer = AutoTokenizer.from_pretrained(model_path, padding_side="left", use_fast=True)
            self.tokenizer.ignored_index = -100  # Set the ignored index for loss calculation
        else:
            self.tokenizer = tokenizer
        self.image_processor = ImageProcessor.from_pretrained(model_path)  # Initialize the image processor

        # Convolution sizes for patch aggregation
        self.spatial_conv_size = self.image_processor.merge_size
        self.temporal_conv_size = self.image_processor.temporal_patch_size

        # Special tokens and IDs
        self.image_token = "<|image_pad|>"
        self.video_token = "<|video_pad|>"

        self.image_token_id = self.tokenizer.convert_tokens_to_ids(self.image_token)
        self.video_token_id = self.tokenizer.convert_tokens_to_ids(self.video_token)
        self.image_patch_id = self.image_token_id

        self.vision_start = "<|vision_start|>"
        self.vision_start_id = self.tokenizer.convert_tokens_to_ids(self.vision_start)

        self.tokens_per_second = tokens_per_second

        self.role_prefixes = {
            "system": "",
            "user": "User: ",
            "bot": "Assistant: ",
            "assistant": "Assistant: ",
        }

    def _pack_outputs(self, outputs):
        """
        Pack and convert all output data into numpy arrays with appropriate types.

        Args:
            outputs (dict): Dictionary containing model outputs with keys:
                - images: List of visual features
                - grid_thw: List of spatial dimensions
                - image_type_ids: List of content type indicators
                - input_ids: List of token IDs
                - token_type_ids: List of type identifiers
                - position_ids: List of position embeddings

        Returns:
            dict: Processed outputs with all values converted to numpy arrays
        """
        # Process visual outputs: stack if present, otherwise set to None
        if not outputs["images"]:
            outputs["images"] = None  # No images case
            outputs["grid_thw"] = None  # No spatial dimensions
            outputs["image_type_ids"] = None  # No type IDs
        else:
            outputs["images"] = np.vstack(outputs["images"])  # Stack image features vertically
            outputs["grid_thw"] = np.vstack(outputs["grid_thw"])  # Stack spatial dimensions
            outputs["image_type_ids"] = np.array(outputs["image_type_ids"])  # Convert to numpy array

        # Convert all outputs to numpy arrays with appropriate types
        outputs["input_ids"] = np.array(outputs["input_ids"], dtype=np.int64)  # Token IDs as int64
        outputs["token_type_ids"] = np.array(outputs["token_type_ids"], dtype=np.int64)  # Type IDs as int64
        outputs["position_ids"] = np.concatenate(
            outputs["position_ids"], axis=1, dtype=np.int64
        )  # Concatenate position IDs
        return outputs

    def text2ids(self, text, images=None, videos=None):
        """
        Convert text with image/video placeholders into model inputs.

        Args:
            text: Input text with <|image_pad|> and <|video@placeholder|> markers
            images: List of PIL Images corresponding to image placeholders
            videos: List of video data corresponding to video placeholders

        Returns:
            Dict containing:
            - input_ids: Token IDs
            - token_type_ids: Type identifiers (text/image/video)
            - position_ids: 3D positional embeddings
            - images: Preprocessed visual features
            - grid_thw: Spatial/temporal dimensions
            - image_type_ids: Visual content type (0=image, 1=video)
        """
        outputs = {
            "input_ids": [],
            "token_type_ids": [],
            "position_ids": [],
            "images": [],
            "grid_thw": [],
            "image_type_ids": [],
            "labels": [],
            "cur_position": 0,
            "pic_cnt": 0,
            "video_cnt": 0,
            "vit_seqlen": [],
            "vit_position_ids": [],
        }
        # Define placeholders and their lengths
        IMAGE_PLACEHOLDER = "<|image_pad|>"
        VIDEO_PLACEHOLDER = "<|video@placeholder|>"
        IMAGE_PLACEHOLDER_LEN = len(IMAGE_PLACEHOLDER)
        VIDEO_PLACEHOLDER_LEN = len(VIDEO_PLACEHOLDER)

        # Initialize tracking variables for text parsing
        st, image_idx, video_idx = 0, 0, 0  # Start position, image counter, video counter
        while st < len(text):
            # Find the next image or video placeholder in the text
            image_pos = text.find(IMAGE_PLACEHOLDER, st)
            image_pos = len(text) if image_pos == -1 else image_pos  # Set to end if not found
            video_pos = text.find(VIDEO_PLACEHOLDER, st)
            video_pos = len(text) if video_pos == -1 else video_pos  # Set to end if not found
            ed = min(image_pos, video_pos)  # End position is the first placeholder found

            self._add_text(text[st:ed], outputs)
            if ed == len(text):
                break

            if ed == image_pos:
                outputs["pic_cnt"] += 1
                self._add_image(images[image_idx], outputs)
                image_idx += 1
                st = ed + IMAGE_PLACEHOLDER_LEN
            else:
                item = videos[video_idx]
                if isinstance(item, dict):
                    frames, meta = self._load_and_process_video(item["video"], item)
                else:
                    frames, meta = self._load_and_process_video(item, {})

                outputs["video_cnt"] += 1
                self._add_video(frames, meta, outputs)
                video_idx += 1
                st = ed + VIDEO_PLACEHOLDER_LEN

        return self._pack_outputs(outputs)

    def request2ids(
        self, request: Dict[str, Any], tgts: List[str] = None
    ) -> Dict[str, Union[np.ndarray, List[np.ndarray], None]]:
        """
        Convert a chat request with multimodal messages into model inputs.

        Args:
            request: Dictionary containing:
                - messages: List of chat messages with text/image/video content
                - request_id: Unique identifier for logging
            tgts: Optional target sequences

        Returns:
            Dict with the same structure as the text2ids() output
        """
        outputs = {
            "input_ids": [],
            "token_type_ids": [],
            "position_ids": [],
            "images": [],
            "grid_thw": [],
            "image_type_ids": [],
            "labels": [],
            "cur_position": 0,
            "pic_cnt": 0,
            "video_cnt": 0,
            "vit_seqlen": [],
            "vit_position_ids": [],
        }

        # Parse and validate chat messages
        messages = parse_chat_messages(request.get("messages"))
        image_message_list = []  # Store visual content messages

        for msg in messages:
            role = msg.get("role")
            assert role in self.role_prefixes, f"Unsupported role: {role}"

            # Normalize content to list format
            content_items = msg.get("content")
            if not isinstance(content_items, list):
                content_items = [content_items]

            # Collect all visual content items
            for item in content_items:
                if isinstance(item, dict) and item.get("type") in ["image", "video"]:
                    image_message_list.append(item)

        raw_messages = request["messages"]
        request["messages"] = messages

        prompt_token_ids = self.apply_chat_template(request)
        if len(prompt_token_ids) == 0:
            raise ValueError("Invalid input: prompt_token_ids must be a non-empty sequence of token IDs")
        request["messages"] = raw_messages

        vision_start_index = 0
        vision_message_index = 0
        for i in range(len(prompt_token_ids)):
            if prompt_token_ids[i] == self.vision_start_id:
                self._add_text(prompt_token_ids[vision_start_index : i + 1], outputs)

                vision_start_index = i + 1
                image_message = image_message_list[vision_message_index]

                if image_message["type"] == "image":
                    img = image_message.get("image")
                    if img is None:
                        continue
                    outputs["pic_cnt"] += 1
                    self._add_image(img, outputs)

                elif image_message["type"] == "video":
                    video_bytes = image_message.get("video")
                    if video_bytes is None:
                        continue
                    frames, meta = self._load_and_process_video(video_bytes, image_message)

                    outputs["video_cnt"] += 1
                    self._add_video(frames, meta, outputs)

                vision_message_index += 1

        self._add_text(prompt_token_ids[vision_start_index:], outputs)
        return self._pack_outputs(outputs)

    def _add_text(self, tokens, outputs: Dict) -> None:
        """
        Add text tokens to the model inputs dictionary.

        Args:
            tokens: Text string or already tokenized IDs
            outputs: Dictionary accumulating model inputs

        Note:
            - Handles both raw text and pre-tokenized inputs
            - Updates position IDs for 3D embeddings
        """
        if not tokens:
            return None

        if isinstance(tokens, str):
            tokens_str = self.tokenizer.tokenize(tokens)
            tokens = self.tokenizer.convert_tokens_to_ids(tokens_str)

        num_tokens = len(tokens)
        outputs["input_ids"].extend(tokens)
        outputs["token_type_ids"].extend([IDS_TYPE_FLAG["text"]] * num_tokens)

        position_ids = self._compute_text_positions(outputs["cur_position"], num_tokens)
        outputs["position_ids"].append(position_ids)
        outputs["cur_position"] = position_ids.max() + 1

    def _compute_text_positions(self, start_pos: int, num_tokens: int) -> np.ndarray:
        """
        Generate 3D positional embeddings for text tokens.

        Args:
            start_pos: Starting position index
            num_tokens: Number of tokens to generate positions for

        Returns:
            numpy.ndarray: 3D position IDs shaped (3, num_tokens)
        """
        text_array = np.arange(num_tokens).reshape(1, -1)
        text_index = np.broadcast_to(text_array, (3, num_tokens))
        position = text_index + start_pos
        return position

    def _add_image(self, img, outputs: Dict) -> None:
        """
        Add image data to the model inputs dictionary.

        Args:
            img: PIL Image to process
            outputs: Dictionary accumulating model inputs

        Note:
            - Preprocesses the image and calculates spatial dimensions
            - Adds image token IDs and type markers
            - Generates appropriate position embeddings
        """
        ret = self.image_processor.preprocess(images=[img.convert("RGB")])
        num_tokens = ret["grid_thw"].prod() // self.image_processor.merge_size**2
        grid_thw = ret["grid_thw"].tolist()

        outputs["input_ids"].extend([self.image_token_id] * num_tokens)
        outputs["token_type_ids"].extend([IDS_TYPE_FLAG["image"]] * num_tokens)

        outputs["images"].append(ret["pixel_values"])
        outputs["grid_thw"].append(grid_thw)
        outputs["image_type_ids"].append(0)

        # position_ids
        t, h, w = grid_thw
        position_ids = self._compute_vision_positions(outputs["cur_position"], t, h, w, 0)
        outputs["position_ids"].append(position_ids)
        outputs["cur_position"] = position_ids.max() + 1
        numel = h * w
        outputs["vit_seqlen"].append(numel)
        outputs["vit_position_ids"].append(np.arange(numel) % numel)

    def _add_video(self, frames, meta: Dict, outputs: Dict) -> None:
        """
        Add video data to the model inputs dictionary.

        Args:
            frames: Video frames as a numpy array
            meta: Video metadata containing fps/duration
            outputs: Dictionary accumulating model inputs

        Note:
            - Handles the temporal dimension in position embeddings
            - Uses video-specific token IDs and type markers
        """
        ret = self.image_processor.preprocess(images=frames)

        num_tokens = ret["grid_thw"].prod() // self.image_processor.merge_size**2
        grid_thw = ret["grid_thw"].tolist()

        outputs["input_ids"].extend([self.video_token_id] * num_tokens)
        outputs["token_type_ids"].extend([IDS_TYPE_FLAG["video"]] * num_tokens)

        outputs["images"].append(ret["pixel_values"])
        outputs["grid_thw"].append(grid_thw)
        outputs["image_type_ids"].extend([1] * grid_thw[0])

        fps = meta["fps"]
        second_per_grid_t = self.temporal_conv_size / fps
        t, h, w = grid_thw
        position_ids = self._compute_vision_positions(outputs["cur_position"], t, h, w, second_per_grid_t)

        outputs["position_ids"].append(position_ids)
        outputs["cur_position"] = position_ids.max() + 1
        numel = h * w
        outputs["vit_seqlen"].append(numel)
        outputs["vit_position_ids"].append(np.arange(numel) % numel)

    def _compute_vision_positions(
        self, start_pos: int, t: int, h: int, w: int, second_per_grid_t: float
    ) -> np.ndarray:
        """
        Generate 3D position IDs for visual inputs.

        Args:
            start_pos: Base position in the sequence
            t: Temporal patches (1 for images)
            h: Height in patches
            w: Width in patches
            second_per_grid_t: Time per temporal patch

        Returns:
            np.ndarray: Position IDs for the [t, h, w] dimensions
        """
        h //= self.spatial_conv_size
        w //= self.spatial_conv_size

        tn = np.arange(t).reshape(-1, 1)
        tn = np.broadcast_to(tn, (t, h * w))
        tn = tn * int(second_per_grid_t) * self.tokens_per_second
        t_index = tn.flatten()

        hn = np.arange(h).reshape(1, -1, 1)
        h_index = np.broadcast_to(hn, (t, h, w)).flatten()

        wn = np.arange(w).reshape(1, 1, -1)
        w_index = np.broadcast_to(wn, (t, h, w)).flatten()

        position = np.stack([t_index, h_index, w_index]) + start_pos
        return position

    def apply_chat_template(self, request):
        """
        Apply the chat template to convert messages into a token sequence.

        Args:
            request: Dictionary containing chat messages

        Returns:
            List of token IDs

        Raises:
            ValueError: If the model doesn't support chat templates
        """
        if self.tokenizer.chat_template is None:
            raise ValueError("This model does not support chat_template.")

        raw_prompt = self.tokenizer.apply_chat_template(
            request["messages"],
            tokenize=False,
            add_generation_prompt=request.get("add_generation_prompt", True),
            chat_template=request.get("chat_template", None),
        )
        prompt_token_str = raw_prompt.replace(self.image_token, "").replace(self.video_token, "")
        request["text_after_process"] = raw_prompt

        tokens = self.tokenizer.tokenize(prompt_token_str)
        token_ids = self.tokenizer.convert_tokens_to_ids(tokens)
        data_processor_logger.info(
            f"req_id: {request.get('request_id', '')} prompt: {raw_prompt} tokens: {tokens}, token_ids: {token_ids}"
        )
        return token_ids
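The 3D ("t/h/w") position IDs built above are easy to verify by hand. A standalone numpy sketch that mirrors _compute_vision_positions for a single 4x4-patch image with merge size 2 and second_per_grid_t = 0 (the image case, where the temporal row stays flat); the values are worked out, not taken from the diff:

import numpy as np

start = 5           # cur_position after some preceding text
t, h, w = 1, 4, 4   # grid_thw in 14px patches
h, w = h // 2, w // 2  # after the 2x2 spatial merge -> 2 x 2 = 4 tokens

t_idx = np.broadcast_to(np.arange(t).reshape(-1, 1), (t, h * w)).flatten()
h_idx = np.broadcast_to(np.arange(h).reshape(1, -1, 1), (t, h, w)).flatten()
w_idx = np.broadcast_to(np.arange(w).reshape(1, 1, -1), (t, h, w)).flatten()

pos = np.stack([t_idx, h_idx, w_idx]) + start
print(pos)
# [[5 5 5 5]
#  [5 5 6 6]
#  [5 6 5 6]]
# The next cur_position becomes pos.max() + 1 = 7.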