This repository contains a security proof-of-concept for a vulnerability in llama.cpp. Access is restricted to authorized reviewers and the affected maintainers.

PoC: heap OOB read and write in llama.cpp UGM tokenizer charsmap parsing

Five bugs in llama.cpp's parsing of the UGM tokenizer's precompiled charsmap metadata blob. Two are universal heap over-reads; the other three are live on big-endian release builds, including the official s390x Docker images.

Affected software: llama.cpp, confirmed on master at commit 5d3a4a7d.

Bug classes: CWE-125 (OOB read), CWE-170 (improper NUL termination), CWE-787 (OOB write), CWE-476 (NULL pointer dereference).

Files

File                Description
poc.gguf            Minimal crafted GGUF for the primary over-read
poc_tiny.gguf       Crafted GGUF for the secondary over-read
make_poc.py         Generates both of the above
srv_256.gguf        Server-loadable crafted GGUF for the release-build demo
make_server_poc.py  Generates srv_256.gguf
prove.c             Release-build impact demo via the public C API
b1_final.sh         Release-build impact quantification via llama-server
d1_small_be.gguf    Big-endian crafted GGUF for the s390x write bugs
make_poc_be.py      Generates the big-endian PoCs
s390x_verify.sh     Docker-based repro of the s390x bugs
colab_poc.ipynb     Clone, build with ASAN, generate, run; one notebook
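The generator scripts themselves are gated. Purely for orientation, here is a minimal, hypothetical sketch of the GGUF v3 container preamble that any such generator has to emit before the crafted tokenizer key/value pairs; the file name and helper are illustrative and not taken from the gated scripts:

```python
import struct

def write_minimal_gguf_header(path: str) -> None:
    """Emit only the fixed GGUF v3 preamble: magic, version, tensor
    count, and metadata key/value count (both zero here). The PoC
    files append crafted tokenizer metadata after this preamble."""
    with open(path, "wb") as f:
        f.write(b"GGUF")               # 4-byte magic
        f.write(struct.pack("<I", 3))  # format version 3, little-endian
        f.write(struct.pack("<Q", 0))  # tensor count
        f.write(struct.pack("<Q", 0))  # metadata key/value count

write_minimal_gguf_header("/tmp/header_only.gguf")
```

A real PoC then serializes `tokenizer.ggml.*` key/value pairs after this 24-byte preamble; those specifics stay in the gated files.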

Quick reproduction

git clone https://github.com/ggml-org/llama.cpp && cd llama.cpp
git checkout 5d3a4a7da5e3dd42f5922aba2fe21b520e96e830
cmake -B build -DCMAKE_BUILD_TYPE=Debug -DLLAMA_SANITIZE_ADDRESS=ON \
      -DLLAMA_CURL=OFF -DLLAMA_OPENSSL=OFF
cmake --build build --target llama-tokenize -j
./build/bin/llama-tokenize -m poc.gguf -p x

Expected: AddressSanitizer reports a heap-buffer-overflow during model load.
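The root-cause details are gated, but the general shape of the bug class (CWE-125 in a self-describing blob) can be sketched as follows. The `parse_entry` helper is hypothetical, not the llama.cpp code: the blob declares its own lengths and offsets, and a safe parser must validate each one against the number of bytes actually loaded.

```python
import struct

# Generic sketch of the bug class (CWE-125), NOT the actual llama.cpp
# code: a charsmap-style blob embeds its own length fields, and the
# vulnerable pattern trusts them without comparing against the size of
# the buffer that was actually read from the file.
def parse_entry(blob: bytes) -> bytes:
    declared_len = struct.unpack_from("<I", blob, 0)[0]  # attacker-controlled
    end = 4 + declared_len
    if end > len(blob):
        # A hardened parser rejects this; the vulnerable one would
        # proceed and, in C, read past the end of the heap allocation.
        raise ValueError("declared length exceeds blob size")
    return blob[4:end]

print(parse_entry(struct.pack("<I", 4) + b"abcd"))  # well-formed: b'abcd'
```

In Python, slicing merely truncates, so the mismatch has to be surfaced explicitly; in C, the same pattern becomes the heap over-read that ASAN flags.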

s390x

s390x_verify.sh pulls the official Docker image and reproduces the big-endian bugs. Runs on a native s390x host or under qemu (register binfmt first: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes).
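One common way bugs surface only on big-endian builds is a missed byte swap: a 32-bit length stored little-endian on disk gets reinterpreted big-endian, turning a small value into an enormous one that then drives an allocation or copy. Whether that is the exact mechanism here is covered in the gated analysis; the snippet below only illustrates the arithmetic.

```python
import struct

raw = struct.pack("<I", 0x10)        # 16, stored little-endian: 10 00 00 00
as_le = struct.unpack("<I", raw)[0]  # intended interpretation: 16
as_be = struct.unpack(">I", raw)[0]  # big-endian misread: 0x10000000
print(as_le, as_be)                  # 16 268435456
```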

Full root-cause analysis, affected code locations, release-build impact demos, and the s390x findings are in the huntr report and the gated files.

Responsible disclosure

This PoC is provided under responsible disclosure for review via huntr.com. Do not use it against systems you do not own or have authorization to test. Do not redistribute until the issue is patched and publicly disclosed.
