fix paddle_peak_increase size (#4355)

Author: AIbin
Date: 2025-10-10 21:31:38 +08:00 (committed via GitHub)
Parent: f7eaca3971
Commit: 533896fd63


@@ -141,7 +141,7 @@ class GpuWorker(WorkerBase):
         paddle_allocated_mem_after_run = paddle.device.cuda.max_memory_allocated(local_rank)
         model_block_memory_used = self.cal_theortical_kvcache()
-        paddle_peak_increase = paddle_reserved_mem_after_run - paddle_allocated_mem_before_run
+        paddle_peak_increase = paddle_allocated_mem_after_run - paddle_allocated_mem_before_run
         paddle.device.cuda.empty_cache()
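The fix computes the peak increase as allocated-after minus allocated-before, instead of mixing in the reserved figure. The sketch below is a hypothetical toy allocator (not the Paddle API; the block size and numbers are invented for illustration) showing why subtracting allocated-before from reserved-after overestimates the peak: a caching allocator reserves memory in coarse blocks and keeps freed blocks cached, so "reserved" sits above the true allocation high-water mark.

```python
# Hypothetical toy model of a caching GPU allocator -- NOT the Paddle API.
# "reserved" grows in whole blocks and never shrinks on free;
# "max_allocated" tracks the true high-water mark of live bytes.

BLOCK = 64  # assumed reservation granularity, for illustration only

class ToyAllocator:
    def __init__(self):
        self.allocated = 0      # bytes live right now
        self.reserved = 0       # bytes held from the driver (never shrinks here)
        self.max_allocated = 0  # high-water mark of `allocated`

    def alloc(self, n):
        self.allocated += n
        # round the reservation up to whole blocks, like a caching allocator
        needed = -(-self.allocated // BLOCK) * BLOCK
        self.reserved = max(self.reserved, needed)
        self.max_allocated = max(self.max_allocated, self.allocated)

    def free(self, n):
        self.allocated -= n  # reserved stays put: blocks are cached for reuse

mem = ToyAllocator()
mem.alloc(100)                    # e.g. model weights already resident
allocated_before = mem.allocated  # snapshot taken before the profile run

mem.alloc(60)                     # transient activations during the run
mem.free(60)                      # released, but their blocks stay reserved

buggy = mem.reserved - allocated_before        # 192 - 100 = 92 (overshoots)
fixed = mem.max_allocated - allocated_before   # 160 - 100 = 60 (true peak)
print(buggy, fixed)
```

In this toy run the old formula reports 92 while the actual peak growth was 60; the same shape of error applies to the real counters, which is what the one-line change corrects.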