mirror of https://github.com/PaddlePaddle/FastDeploy.git
synced 2025-12-24 13:28:13 +08:00
[Docs] add qwen25-vl docs (#5243)
* [Docs] add qwen25-vl docs
docs/get_started/quick_start_qwen25_vl.md (new file, 136 lines)
@@ -0,0 +1,136 @@
[简体中文](../zh/get_started/quick_start_qwen25_vl.md)

# Deploy Qwen2.5-VL in 10 Minutes

Before deployment, ensure your environment meets the following requirements:

- GPU Driver ≥ 535
- CUDA ≥ 12.3
- cuDNN ≥ 9.5
- Linux X86_64
- Python ≥ 3.10

This guide uses the lightweight Qwen2.5-VL model for demonstration, which can be deployed on most hardware configurations. Docker deployment is recommended.

For more information about how to install FastDeploy, refer to the [installation document](installation/README.md).

## 1. Launch Service

Please download the Qwen2.5-VL model in advance, for example [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
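
As one way to fetch the weights ahead of time, here is a minimal sketch using the `huggingface_hub` Python package (the package and the target directory are assumptions, not part of the original guide):

```python
# Minimal sketch: download the model weights with huggingface_hub.
# Assumes `pip install huggingface_hub`; local_dir is illustrative.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Qwen/Qwen2.5-VL-7B-Instruct",
    local_dir="./Qwen2.5-VL-7B-Instruct",  # pass this directory to --model below
)
```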

Add the following configuration to the model's `config.json`:

```text
"rope_3d": true,
"freq_allocation": 16
```
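
If you prefer to apply this change programmatically, the following is a small sketch (the config path is a placeholder for wherever you downloaded the weights):

```python
# Minimal sketch: merge the two rope settings into an existing config.json.
# cfg_path is a placeholder; point it at your model directory.
import json

cfg_path = "./Qwen2.5-VL-7B-Instruct/config.json"
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["rope_3d"] = True
cfg["freq_allocation"] = 16
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```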

After installing FastDeploy, execute the following command in the terminal to start the service. For details on the startup options, refer to [Parameter Description](../parameters.md).

```shell
export ENABLE_V1_KVCACHE_SCHEDULER=1
python -m fastdeploy.entrypoints.openai.api_server \
    --model You/Path/Qwen2.5-VL-7B-Instruct \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --max-model-len 32768 \
    --max-num-seqs 32
```

> 💡 Note: If the path specified by ```--model``` does not exist as a subdirectory of the current directory, FastDeploy checks whether AIStudio provides a preset model under the specified name (such as ```Qwen/Qwen2.5-VL-7B-Instruct```) and, if so, downloads it automatically. The default download path is ```~/xx```. For instructions and configuration of automatic model download, see [Model Download](../supported_models.md).

```--max-model-len``` sets the maximum number of tokens (context length) the deployed service supports.

```--max-num-seqs``` sets the maximum number of sequences the deployed service processes concurrently.

**Related Documents**

- [Service Deployment](../online_serving/README.md)
- [Service Monitoring](../online_serving/metrics.md)

## 2. Request the Service

After starting the service, the following output indicates successful initialization:

```shell
api_server.py[line:91] Launching metrics service at http://0.0.0.0:8181/metrics
api_server.py[line:94] Launching chat completion service at http://0.0.0.0:8180/v1/chat/completions
api_server.py[line:97] Launching completion service at http://0.0.0.0:8180/v1/completions
INFO: Started server process [13909]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8180 (Press CTRL+C to quit)
```

### Health Check

Verify service status (HTTP 200 indicates success):

```shell
curl -i http://0.0.0.0:8180/health
```
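
For scripted checks, a minimal sketch that polls this endpoint until the model finishes loading (the `requests` package and the retry budget are assumptions):

```python
# Minimal sketch: wait for the FastDeploy service to report healthy.
import time

import requests

url = "http://0.0.0.0:8180/health"
for _ in range(60):  # retry for up to ~5 minutes
    try:
        if requests.get(url, timeout=2).status_code == 200:
            print("service is up")
            break
    except requests.exceptions.ConnectionError:
        pass  # server not accepting connections yet
    time.sleep(5)
else:
    raise RuntimeError("service did not become healthy in time")
```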

### cURL Request

Send requests as follows:

```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": "Rewrite the Li Bai poem Quiet Night Thoughts as a modern poem"}
  ]
}'
```

For image inputs:

```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": [
      {"type":"image_url", "image_url": {"url":"https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
      {"type":"text", "text":"From which era does the artifact in the image originate?"}
    ]}
  ]
}'
```

For video inputs:

```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": [
      {"type":"video_url", "video_url": {"url":"https://bj.bcebos.com/v1/paddlenlp/datasets/paddlemix/demo_video/example_video.mp4"}},
      {"type":"text", "text":"How many apples are in the scene?"}
    ]}
  ]
}'
```

### Python Client (OpenAI-compatible API)

FastDeploy's API is OpenAI-compatible. You can also use Python for streaming requests:

```python
import openai

host = "0.0.0.0"
port = "8180"
client = openai.Client(base_url=f"http://{host}:{port}/v1", api_key="null")

response = client.chat.completions.create(
    model="null",
    messages=[
        {"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
            {"type": "text", "text": "From which era does the artifact in the image originate?"},
        ]},
    ],
    stream=True,
)

# Print streamed tokens as they arrive.
for chunk in response:
    if chunk.choices[0].delta:
        print(chunk.choices[0].delta.content, end='')
print('\n')
```

docs/zh/get_started/quick_start_qwen25_vl.md (new file, 130 lines)
@@ -0,0 +1,130 @@
[English](../../get_started/quick_start_qwen25_vl.md)

# Deploy the Qwen2.5-VL Model in 10 Minutes

This document explains how to deploy the Qwen2.5-VL model. Before starting, make sure your hardware environment meets the following requirements:

- GPU Driver >= 535
- CUDA >= 12.3
- cuDNN >= 9.5
- Linux X86_64
- Python >= 3.10

For quick deployment across different hardware, this document uses the ```Qwen2.5-VL``` model as an example; it can be deployed on most hardware.

For how to install FastDeploy, see the [installation document](./installation/README.md).

## 1. Launch the Service

Please download the Qwen2.5-VL model in advance, for example [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).

Add the following configuration items to `config.json`:

```text
"rope_3d": true,
"freq_allocation": 16
```

After installing FastDeploy, run the following command in the terminal to start the service. For details on the startup options, refer to [Parameter Description](../parameters.md).

```shell
export ENABLE_V1_KVCACHE_SCHEDULER=1
python -m fastdeploy.entrypoints.openai.api_server \
    --model You/Path/Qwen2.5-VL-7B-Instruct \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --max-model-len 32768 \
    --max-num-seqs 32
```

> 💡 Note: If the path specified by ```--model``` does not exist as a subdirectory of the current directory, FastDeploy checks whether AIStudio provides a preset model under the specified name (such as ```Qwen/Qwen2.5-VL-7B-Instruct```) and, if so, downloads it automatically. The default download path is ```~/xx```. For instructions and configuration of automatic model download, see [Model Download](../supported_models.md).

```--max-model-len``` sets the maximum number of tokens (context length) the deployed service supports.

```--max-num-seqs``` sets the maximum number of sequences the deployed service processes concurrently.

**Related Documents**

- [Service Deployment Configuration](../online_serving/README.md)
- [Service Monitoring Metrics](../online_serving/metrics.md)

## 2. Request the Service

After running the launch command, the service has started successfully once the terminal prints the following output:

```shell
api_server.py[line:91] Launching metrics service at http://0.0.0.0:8181/metrics
api_server.py[line:94] Launching chat completion service at http://0.0.0.0:8180/v1/chat/completions
api_server.py[line:97] Launching completion service at http://0.0.0.0:8180/v1/completions
INFO: Started server process [13909]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8180 (Press CTRL+C to quit)
```

FastDeploy provides a health-check endpoint for probing service status. If the following command returns ```HTTP/1.1 200 OK```, the service has started successfully.

```shell
curl -i http://0.0.0.0:8180/health
```

Send a service request with the following command:

```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": "Rewrite the Li Bai poem Quiet Night Thoughts as a modern poem"}
  ]
}'
```

When the input includes an image, send the request as follows:

```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": [
      {"type":"image_url", "image_url": {"url":"https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
      {"type":"text", "text":"From which era does the artifact in the image originate?"}
    ]}
  ]
}'
```

When the input includes a video, send the request as follows:

```shell
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
  "messages": [
    {"role": "user", "content": [
      {"type":"video_url", "video_url": {"url":"https://bj.bcebos.com/v1/paddlenlp/datasets/paddlemix/demo_video/example_video.mp4"}},
      {"type":"text", "text":"How many apples are in the scene?"}
    ]}
  ]
}'
```

FastDeploy's service API is OpenAI-compatible, so you can also send requests with the following Python code.

```python
import openai

host = "0.0.0.0"
port = "8180"
client = openai.Client(base_url=f"http://{host}:{port}/v1", api_key="null")

response = client.chat.completions.create(
    model="null",
    messages=[
        {"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
            {"type": "text", "text": "From which era does the artifact in the image originate?"},
        ]},
    ],
    stream=True,
)

# Print streamed tokens as they arrive.
for chunk in response:
    if chunk.choices[0].delta:
        print(chunk.choices[0].delta.content, end='')
print('\n')
```

mkdocs.yml
@@ -59,6 +59,7 @@ plugins:
 ERNIE-4.5-300B-A47B: ERNIE-4.5-300B-A47B快速部署
 ERNIE-4.5-VL-424B-A47B: ERNIE-4.5-VL-424B-A47B快速部署
 Quick Deployment For QWEN: Qwen3-0.6b快速部署
+Quick Deployment For QWEN2.5-VL: Qwen2.5-VL系列快速部署
 Online Serving: 在线服务
 OpenAI-Compatible API Server: 兼容 OpenAI 协议的服务化部署
 Monitor Metrics: 监控Metrics
@@ -122,6 +123,7 @@ nav:
 - ERNIE-4.5-300B-A47B: get_started/ernie-4.5.md
 - ERNIE-4.5-VL-424B-A47B: get_started/ernie-4.5-vl.md
 - Quick Deployment For QWEN: get_started/quick_start_qwen.md
+- Quick Deployment For QWEN2.5-VL: get_started/quick_start_qwen25_vl.md
 - Online Serving:
   - OpenAI-Compatible API Server: online_serving/README.md
   - Monitor Metrics: online_serving/metrics.md