GGML_PAD Integer Overflow PoC

Security-research PoC prepared for responsible disclosure: an integer overflow (CWE-190) in the GGML_PAD macro used by the llama.cpp GGUF parser.

Files

  • overflow_poc.gguf — 2-tensor variant (intra-allocation OOB)
  • overflow_poc_v2.gguf — single-tensor variant (NULL deref, ASAN-confirmed)
  • poc.py โ€” Generator script with arithmetic proof

Reproduction

git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=OFF
cmake --build build --target llama-gguf
./build/bin/llama-gguf overflow_poc.gguf r n

Tested against commit b8635075f (2026-04-04).
