llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. The vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
History
Wed, 16 Jul 2025 13:45:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | epss | epss |
Fri, 11 Jul 2025 13:45:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | epss |
Thu, 10 Jul 2025 21:15:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Thu, 10 Jul 2025 19:45:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Description | | llama.cpp is an inference of several LLM models in C/C++. Integer Overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to Heap Out-of-Bounds Read/Write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579. |
| Title | | Integer Overflow in GGUF Parser can lead to Heap Out-of-Bounds Read/Write in gguf |
| Weaknesses | | CWE-122, CWE-680 |
| References | | |
| Metrics | | cvssV4_0 |
Status: PUBLISHED
Assigner: GitHub_M
Published: 2025-07-10T19:32:45.296Z
Updated: 2025-07-10T20:31:07.240Z
Reserved: 2025-07-07T14:20:38.389Z
Link: CVE-2025-53630
Updated: 2025-07-10T20:31:00.986Z
Status: Awaiting Analysis
Published: 2025-07-10T20:15:27.523
Modified: 2025-07-15T13:14:49.980
Link: CVE-2025-53630