
Researchers Expose Model-Sharing Remote Code Risks

Attacks
Published: Tue, Sep 09, 2025 • By Clara Nyx
New research shows popular model-sharing frameworks and hubs leave doors open for attackers. The authors find six zero-day flaws that let malicious models run code when loaded, and warn that many security features are superficial. This raises supply chain and operational risks for anyone loading shared models.

Stop pretending model files are harmless blobs of data. This new study pulls the sheet off model-sharing ecosystems and finds real teeth: six undisclosed zero-day vulnerabilities that allow arbitrary code execution when a model is loaded. The authors test common frameworks and hubs, show how "security" settings can be cosmetic, and document how users trust scans and labels that do not fully protect them.

Picture this: a researcher downloads a model from a public hub, loads it locally, and the model quietly executes attacker code on their machine. That is not drama; it is a realistic attack chain the paper reproduces. The researchers examine frameworks such as PyTorch, TensorFlow and Keras, hubs such as Hugging Face Hub and Kaggle Models, and highlight that even data-based formats can reconstruct executable code paths at load time. CVEs were assigned for the findings, so this is not hypothetical.
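
To make the load-time risk concrete, here is a minimal sketch of why a pickle-backed model file is effectively a program rather than data. It is illustrative only and not one of the paper's proof-of-concept exploits: unpickling lets an attacker-controlled object name any callable to run via __reduce__.

```python
# Illustrative only: why "just loading" a serialised model can run code.
# This mimics the classic pickle gadget; it is NOT one of the paper's PoCs.
import os
import pickle

class NotAModel:
    def __reduce__(self):
        # On unpickling, Python resolves and calls the returned callable
        # with the given arguments -- here a harmless shell command.
        return (os.system, ("echo attacker code ran at load time",))

blob = pickle.dumps(NotAModel())   # what a "model file" on disk could contain

# A victim who "loads the model" triggers the payload:
pickle.loads(blob)
```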

Why this matters beyond academics: shared models are a supply chain. Hospitals, regulators, startups and hobbyists may all pull the same poisoned artifact. Overreliance on hub scanners and security marketing creates misplaced trust, and many platforms shift responsibility onto users who may not know what to check.

Practical takeaway: treat model artifacts as executable code. Action 1: never load untrusted models in production; run them in isolated sandboxes or containers and inspect files before loading. Action 2: prefer models with clear provenance, apply framework security patches immediately, and demand end-to-end protections from hubs and vendors.
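
One way to act on "inspect files before loading" is to dump a pickle stream's opcodes statically, without executing it, and refuse anything that references importable callables. The helper below is a deliberately conservative, illustrative sketch rather than a complete scanner, and the file name is hypothetical. Recent PyTorch releases also offer torch.load(path, weights_only=True) to restrict unpickling, though the paper cautions that such security-oriented settings are not a complete defence on their own.

```python
# Hedged sketch: statically inspect a pickle file before loading it.
# The opcode set is deliberately conservative and will also flag benign
# pickles (e.g. NumPy arrays); treat a hit as "needs review", not "malicious".
import io
import pickletools

SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def needs_review(path: str) -> bool:
    """Return True if the pickle stream references importable callables."""
    with open(path, "rb") as f:
        stream = io.BytesIO(f.read())
    return any(op.name in SUSPICIOUS_OPCODES for op, _arg, _pos in pickletools.genops(stream))

# Usage (path is hypothetical):
# if needs_review("downloaded_model.pkl"):
#     raise RuntimeError("refusing to load: this pickle can execute code")
```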

Additional analysis of the original arXiv paper

📋 Original Paper Title and Abstract

When Secure Isn't: Assessing the Security of Machine Learning Model Sharing

Authors: Gabriele Digregorio, Marco Di Gennaro, Stefano Zanero, Stefano Longari, and Michele Carminati
The rise of model-sharing through frameworks and dedicated hubs makes Machine Learning significantly more accessible. Despite their benefits, these tools expose users to underexplored security risks, while security awareness remains limited among both practitioners and developers. To enable a more security-conscious culture in Machine Learning model sharing, in this paper we evaluate the security posture of frameworks and hubs, assess whether security-oriented mechanisms offer real protection, and survey how users perceive the security narratives surrounding model sharing. Our evaluation shows that most frameworks and hubs address security risks partially at best, often by shifting responsibility to the user. More concerningly, our analysis of frameworks advertising security-oriented settings and complete model sharing uncovered six 0-day vulnerabilities enabling arbitrary code execution. Through this analysis, we debunk the misconceptions that the model-sharing problem is largely solved and that its security can be guaranteed by the file format used for sharing. As expected, our survey shows that the surrounding security narrative leads users to consider security-oriented settings as trustworthy, despite the weaknesses shown in this work. From this, we derive takeaways and suggestions to strengthen the security of model-sharing ecosystems.

🔍 ShortSpan Analysis of the Paper

Problem

The paper studies security in ML model-sharing frameworks and hubs, examining whether claimed protections are real and how users perceive security narratives. It matters because model sharing makes ML more accessible but can expose users to new risks such as remote code execution and supply chain attacks, and because security controls are often superficial or rely on user action.

Approach

The authors analyse the ecosystem at two levels, the framework level and the hub level, focusing on widely used frameworks (Keras, TensorFlow, PyTorch, scikit-learn, XGBoost) and hubs (Hugging Face Hub, Kaggle Models, PyTorch Hub, TensorFlow Hub, Keras Hub). They classify sharing formats as self-contained or not, and as data-based or code-based, summarising the official documentation. They define a threat model in which an attacker crafts malicious model artifacts to achieve arbitrary code execution at load time, delivered via public repositories or directly, and they perform vulnerability discovery and a user survey. They publish proof-of-concept exploits and artefacts for reproducibility in a public repository.

Key Findings

  • Most frameworks and hubs address security risks only partially at best, often by shifting responsibility to the user.
  • Six undisclosed zero-day vulnerabilities enabling arbitrary code execution were found in frameworks advertising security-oriented settings and complete model sharing, with CVEs assigned for each finding.
  • The vulnerabilities show that data-based formats are not automatically secure, and that JSON-style representations can reconstruct code paths during model loading, as sketched below.
  • There is a gap between security marketing and reality: users often trust security-oriented settings or hub scanners, leading to misplaced trust.
  • Hub-level analysis shows that Hugging Face Hub provides malware, pickle, and secret scanning and integrates third-party scanners, while other hubs offer little or no automated protection and can instead rely on framework-level safeguards.
  • ONNX is often cited as a secure option because of its restricted operator set, but in practice it trades flexibility for security, while other mechanisms rely on allowlists or blocklists with known limitations.
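
To illustrate the JSON-style finding, the sketch below opens a Keras-style model archive and flags configuration entries that can rehydrate code when the model is loaded. It assumes the Keras v3 .keras layout (a zip containing config.json) and an illustrative, non-exhaustive list of code-bearing layer classes; it is not the paper's methodology, just one way to make the "data-based is not automatically safe" point tangible.

```python
# Hedged sketch: a "data-based" .keras archive is a zip with a JSON config;
# flag config entries that can rehydrate code (e.g. Lambda layers) before loading.
import json
import zipfile

CODE_BEARING_CLASSES = {"Lambda", "TFOpLambda"}  # illustrative, not exhaustive

def flag_code_bearing_layers(archive_path: str) -> list[str]:
    """Walk the model config and collect layer classes that can carry code."""
    with zipfile.ZipFile(archive_path) as zf:
        config = json.loads(zf.read("config.json"))

    flagged: list[str] = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") in CODE_BEARING_CLASSES:
                flagged.append(node["class_name"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return flagged

# Usage (path is hypothetical):
# print(flag_code_bearing_layers("untrusted_model.keras"))
```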

Limitations

The study includes a user survey of 53 participants with experience loading or sharing ML models, so results are indicative rather than representative. Security scanning results are limited to the tested PoCs and tools, and scanners show both false positives and false negatives; several hubs do not provide comprehensive protection. The work focuses on widely used frameworks and hubs and cannot exhaustively cover all third-party libraries or newer LLM-specific formats. The authors also note that legacy formats continue to pose security risks and that patch adoption can be slow.

Why It Matters

The findings highlight practical implications for securing model sharing: the ecosystem needs end-to-end security, improved provenance and verification, runtime isolation or sandboxing, automated security testing, and secure-by-default configurations backed by better user education. The work emphasises that shared models operate as executable code and that loading untrusted artifacts carries substantial risk, with implications for supply chain security and critical deployments in sensitive domains.

