Executive Summary
| Informations | | | |
|---|---|---|---|
| Name | CVE-2025-52566 | First vendor Publication | 2025-06-24 |
| Vendor | Cve | Last vendor Modification | 2025-06-24 |
Security-Database Scoring CVSS v3
| Cvss vector : N/A | | | |
|---|---|---|---|
| Overall CVSS Score | NA | | |
| Base Score | NA | Environmental Score | NA |
| Impact Sub Score | NA | Temporal Score | NA |
| Exploitability Sub Score | NA | | |
Security-Database Scoring CVSS v2
| Cvss vector : | | | |
|---|---|---|---|
| Cvss Base Score | N/A | Attack Range | N/A |
| Cvss Impact Score | N/A | Attack Complexity | N/A |
| Cvss Exploit Score | N/A | Authentication | N/A |
Detail
llama.cpp provides inference for several LLM models in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) results in unintended behavior in the token-copy size comparison, allowing a heap overflow of the llama.cpp inference engine via carefully crafted text input during the tokenization process. This issue has been patched in version b5721.
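The description points to a classic signed-vs-unsigned comparison pitfall. The sketch below is a minimal, hypothetical illustration of that bug class, not the actual llama.cpp source: the function names and signatures are invented for the example. It shows how a negative signed capacity, once implicitly converted to an unsigned type, defeats the bounds check that is supposed to guard a token copy.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical example of the bug class (names are illustrative only).
// A caller-supplied signed capacity is compared against an unsigned count.
int32_t copy_tokens(const std::vector<int32_t> &tokens,
                    int32_t *out, int32_t n_max_tokens) {
    // BUG: tokens.size() is size_t (unsigned); n_max_tokens is signed.
    // With n_max_tokens < 0, the cast yields a huge unsigned value, so the
    // "does it fit?" check always passes and the memcpy overflows `out`.
    if (tokens.size() <= (size_t) n_max_tokens) {
        std::memcpy(out, tokens.data(), tokens.size() * sizeof(int32_t));
        return (int32_t) tokens.size();
    }
    return -(int32_t) tokens.size();  // signal: buffer too small
}

// Hardened variant: reject non-positive capacities before any unsigned comparison.
int32_t copy_tokens_fixed(const std::vector<int32_t> &tokens,
                          int32_t *out, int32_t n_max_tokens) {
    if (n_max_tokens < 0 || tokens.size() > (size_t) n_max_tokens) {
        return -(int32_t) tokens.size();
    }
    std::memcpy(out, tokens.data(), tokens.size() * sizeof(int32_t));
    return (int32_t) tokens.size();
}
```

The hardened variant checks the signed value for negativity before casting, which is the general mitigation pattern for this class of heap-overflow bug.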
Original Source
Url : http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2025-52566
Sources (Detail)
| Source | Url |
|---|---|
Alert History
| Date | Informations |
|---|---|
| 2025-06-25 05:20:32 | |
| 2025-06-24 09:20:35 | |