Executive Summary

Information
Name: CVE-2025-48944
Vendor: Cve
First vendor Publication: 2025-05-30
Last vendor Modification: 2025-06-02

Security-Database Scoring CVSS v3

CVSS vector: N/A
Overall CVSS Score: N/A
Base Score: N/A            Environmental Score: N/A
Impact SubScore: N/A       Temporal Score: N/A
Exploitability Sub Score: N/A
 

Security-Database Scoring CVSS v2

CVSS vector: N/A
CVSS Base Score: N/A       Attack Range: N/A
CVSS Impact Score: N/A     Attack Complexity: N/A
CVSS Exploit Score: N/A    Authentication: N/A

Detail

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend serving the OpenAI-compatible /v1/chat/completions endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, so a single malformed request can crash the inference worker, which remains down until it is restarted. Version 0.9.0 fixes the issue.
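The class of fix implied by the advisory is to reject a malformed tool-parameter schema at the API boundary instead of passing it straight to the guided-decoding compiler. The sketch below is not the actual vLLM patch; it is a minimal illustration, assuming the server can check each parameter schema's "type" against the JSON Schema primitive types and pre-compile any "pattern" regex, returning the errors to the client as a 400 rather than letting an exception kill the worker:

```python
import re

# JSON Schema primitive types; anything else in a "type" field is rejected.
ALLOWED_TYPES = {"object", "array", "string", "number", "integer", "boolean", "null"}

def validate_tool_schema(schema: dict) -> list[str]:
    """Return a list of validation errors for a tool-parameter JSON schema.

    Catching bad "type"/"pattern" values here means a malformed request can
    be answered with an error response instead of crashing the worker.
    """
    errors = []

    type_field = schema.get("type")
    if type_field is not None and type_field not in ALLOWED_TYPES:
        errors.append(f"unsupported type: {type_field!r}")

    pattern = schema.get("pattern")
    if pattern is not None:
        if not isinstance(pattern, str):
            errors.append(f"pattern must be a string, got {type(pattern).__name__}")
        else:
            try:
                re.compile(pattern)  # surface bad regexes before decoding starts
            except re.error as exc:
                errors.append(f"invalid regex pattern: {exc}")

    # Recurse into nested property schemas of an object-typed parameter.
    for name, sub in (schema.get("properties") or {}).items():
        if isinstance(sub, dict):
            errors.extend(f"{name}: {e}" for e in validate_tool_schema(sub))
    return errors
```

With a check like this in place, a request whose tool schema contains `"type": "strnig"` or `"pattern": "("` is rejected up front, matching the single-request crash scenario the advisory describes.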

Original Source

Url : http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2025-48944

CWE : Common Weakness Enumeration

%       Id       Name
100 %   CWE-20   Improper Input Validation

Sources (Detail)

https://github.com/vllm-project/vllm/pull/17623
https://github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65

Alert History

Date                  Information
2025-06-03 00:20:34   • Multiple Updates
2025-05-31 00:20:32   • First insertion