llama.cpp performs inference of several LLM models in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes an incorrect size comparison when copying tokens, allowing a heap overflow in the inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721.
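The bug class behind this CVE can be illustrated with a minimal sketch (this is illustrative code, not the actual llama.cpp implementation; the function and parameter names are hypothetical). When a signed capacity value is compared against an unsigned size, the usual arithmetic conversions promote the signed operand to unsigned, so a negative capacity passes the bounds check and the subsequent copy overflows the destination buffer:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch of the signed-vs-unsigned size-comparison bug class.
// If n_tokens_max is signed and negative, the cast below turns it into a
// huge unsigned value, the bounds check passes, and memcpy overflows `out`.
bool copy_tokens_unsafe(const std::vector<int32_t> &tokens,
                        int32_t *out, int32_t n_tokens_max) {
    // BUG: with n_tokens_max == -1, (size_t)-1 is SIZE_MAX, so the
    // check never fails and the copy proceeds past the buffer end.
    if (tokens.size() > (size_t) n_tokens_max) {
        return false;
    }
    std::memcpy(out, tokens.data(), tokens.size() * sizeof(int32_t));
    return true;
}

// Fixed variant: reject a non-positive capacity before any comparison
// that would implicitly convert it to an unsigned type.
bool copy_tokens_safe(const std::vector<int32_t> &tokens,
                      int32_t *out, int32_t n_tokens_max) {
    if (n_tokens_max < 0 || tokens.size() > (size_t) n_tokens_max) {
        return false;
    }
    std::memcpy(out, tokens.data(), tokens.size() * sizeof(int32_t));
    return true;
}
```

The safe variant checks the sign of the capacity first, so a corrupted or attacker-influenced negative value is rejected instead of being reinterpreted as an enormous unsigned limit.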
History
Wed, 27 Aug 2025 14:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| First Time appeared | | Ggml, Ggml llama.cpp |
| CPEs | | cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:* |
| Vendors & Products | | Ggml, Ggml llama.cpp |
Tue, 24 Jun 2025 22:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Tue, 24 Jun 2025 03:45:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | llama.cpp performs inference of several LLM models in C/C++. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes an incorrect size comparison when copying tokens, allowing a heap overflow in the inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721. |
| Title | | llama.cpp tokenizer signed vs. unsigned heap overflow |
| Weaknesses | | CWE-119, CWE-195 |
| References | | |
| Metrics | | cvssV3_1 |
Status: PUBLISHED
Assigner: GitHub_M
Published: 2025-06-24T03:21:19.009Z
Updated: 2025-06-24T21:49:53.200Z
Reserved: 2025-06-18T03:55:52.036Z
Link: CVE-2025-52566
Updated: 2025-06-24T21:49:47.523Z
Status: Analyzed
Published: 2025-06-24T04:15:46.967
Modified: 2025-08-27T14:01:31.297
Link: CVE-2025-52566