Mirror of https://github.com/PaddlePaddle/FastDeploy.git, synced 2025-12-24 13:28:13 +08:00
Update disaggregated.md
@@ -1,6 +1,6 @@
 # Disaggregated Deployment

-Large model inference consists of two phases: Prefill and Decode, which are compute-intensive (Prefill) and compute-intensive (Decode) respectively. Deploying Prefill and Decode separately in certain scenarios can improve hardware utilization, effectively increase throughput, and reduce overall sentence latency.
+Large model inference consists of two phases: Prefill and Decode, which are compute-intensive (Prefill) and memory-access-intensive (Decode) respectively. Deploying Prefill and Decode separately in certain scenarios can improve hardware utilization, effectively increase throughput, and reduce overall sentence latency.

 * Prefill phase: Processes all input tokens (such as the user prompt), completes the model's forward propagation, and generates the first token.
 * Decode phase: Starting from the first generated token, generates one token at a time autoregressively until reaching the stop token. For N output tokens, the Decode phase requires (N-1) forward passes that must be executed serially. As generation proceeds, the number of tokens to attend to increases, and the computational cost grows with it.
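The two phases described above can be sketched as a toy loop (illustrative only, not FastDeploy internals): one forward pass over the whole prompt for Prefill, then one serial forward pass per additional token for Decode.

```python
# Toy sketch: why Prefill is a single batched pass while Decode is
# (N-1) serial passes for N output tokens. No real model is involved.
def run_inference(prompt_tokens, n_output_tokens):
    forward_passes = 0

    # Prefill: one forward pass over all prompt tokens yields the first token.
    forward_passes += 1
    generated = ["tok_0"]

    # Decode: autoregressive, one forward pass per remaining token;
    # each pass attends to all previously generated tokens.
    while len(generated) < n_output_tokens:
        forward_passes += 1
        generated.append(f"tok_{len(generated)}")

    return generated, forward_passes

tokens, passes = run_inference(["hello", "world"], n_output_tokens=4)
# 1 prefill pass + (N-1) = 3 decode passes -> 4 forward passes total
```

Because the decode passes cannot be parallelized across output positions, they are dominated by memory access (reading weights and KV cache) rather than compute, which is what motivates serving the two phases on separate hardware.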
@@ -163,4 +163,4 @@ python -m fastdeploy.entrypoints.openai.api_server \
 * --scheduler-port: Redis port to connect to
 * --scheduler-ttl: Specifies the Redis TTL time in seconds
 * --pd-comm-port: Specifies the PD communication port
 * --rdma-comm-ports: Specifies the RDMA communication ports; multiple ports are separated by commas, and the count should match the GPU count
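A hedged example of composing the flags documented above into a launch command (the port values and GPU count are placeholders, not recommendations; only the flag names come from this document):

```shell
# Placeholder values: adjust ports and the RDMA port list to your deployment.
python -m fastdeploy.entrypoints.openai.api_server \
    --scheduler-port 6379 \
    --scheduler-ttl 900 \
    --pd-comm-port 2334 \
    --rdma-comm-ports 7671,7672,7673,7674  # one port per GPU (4 GPUs here)
```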