
Pickle Poisoning Outwits Model Scanners Again

Attacks
Published: Thu, Aug 28, 2025 • By Natalie Kestrel
New research reveals Python pickle serialization remains a stealthy avenue for model supply chain poisoning, and that current scanners miss most loading paths and gadgets. Attackers can craft models that execute code during load and bypass defenses. The finding urges platforms and teams to prefer safer formats, strengthen scanning, and isolate model loads.

Here is a fresh reminder that convenience often hides danger. Researchers mapped the full pickle-based attack surface for model loading and found that open model ecosystems are far more exposed than most teams assume. Pickle still lets complex objects run code when you load them, and that behavior shows up in 22 distinct loading paths across five widely used AI frameworks.
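If it is not obvious why merely loading a file is dangerous, here is a minimal illustration of the mechanism (an example of standard pickle behavior, not taken from the paper): any class can declare, via __reduce__, a callable for pickle to invoke when the object is reconstructed, so deserializing attacker-controlled bytes runs attacker-chosen code.

```python
import os
import pickle

# __reduce__ tells pickle how to rebuild an instance. On load, pickle calls
# the returned callable with the given arguments, so deserialization alone
# is enough to execute whatever the attacker chose to embed.
class Malicious:
    def __reduce__(self):
        return (os.system, ("echo code executed during pickle.loads",))

blob = pickle.dumps(Malicious())

# The victim only has to load the bytes; no attribute or method access needed.
pickle.loads(blob)  # runs the shell command above
```

Real payloads are far less obvious than this, which is exactly what the gadget and loading-path findings are about.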

Worse, 19 of those paths are completely missed by existing scanners. The team also discovered 133 exploitable gadgets and a clever bypass trick they call Exception-Oriented Programming (EOP). In practice that means attackers can tuck executable behavior into models, archives, or compression quirks and slip past the best scanners. Even the top tool still missed 89 percent of these gadgets.

This is not academic hair-splitting. Organizations that pull models from public hubs or reuse community checkpoints risk remote code execution inside their workflows. The paper shows automated pipelines can produce reliable exploits, and responsible disclosure earned the authors acknowledgements and a modest bug bounty, but vendor fixes are incomplete and some maintainers accept residual risk.

Practical takeaways are blunt: treat pickle-serialized models as hostile when they come from untrusted sources, and do not rely solely on signature scanners. At minimum, teams should run the following checks immediately (a sketch of a restricted loader follows the list):

  • Audit and block pickle loads from external model sources
  • Prefer safer serialization formats such as ONNX or TensorFlow SavedModel where feasible
  • Run path-aware scanning that covers all framework loading routes and archive decompression
  • Isolate model loading in sandboxed processes or containers with strict runtime policies
  • Test scanners by simulating EOP and gadget payloads to measure real-world coverage
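On the first and fourth items, a minimal sketch of an allowlisting unpickler is below. The allowlist contents are placeholders and this is defense in depth rather than a fix, since the paper shows how fragile list-based logic is; recent PyTorch releases also offer torch.load(..., weights_only=True), which applies a comparable restriction at the framework level.

```python
import io
import pickle

# Allowlist of (module, name) pairs that checkpoints are genuinely expected to
# reference; everything else is refused. These entries are placeholders, so
# extend them with the classes your own checkpoints actually need.
SAFE_GLOBALS = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # find_class resolves every GLOBAL / STACK_GLOBAL reference, so
        # refusing here blocks payloads that try to pull in os.system,
        # builtins.eval and similar gadgets.
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```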

Additional analysis of the original arXiv paper

📋 Original Paper Title and Abstract

The Art of Hide and Seek: Making Pickle-Based Model Supply Chain Poisoning Stealthy Again

Pickle deserialization vulnerabilities have persisted throughout Python's history, remaining widely recognized yet unresolved. Due to its ability to transparently save and restore complex objects into byte streams, many AI/ML frameworks continue to adopt pickle as the model serialization protocol despite its inherent risks. As the open-source model ecosystem grows, model-sharing platforms such as Hugging Face have attracted massive participation, significantly amplifying the real-world risks of pickle exploitation and opening new avenues for model supply chain poisoning. Although several state-of-the-art scanners have been developed to detect poisoned models, their incomplete understanding of the poisoning surface leaves the detection logic fragile and allows attackers to bypass them. In this work, we present the first systematic disclosure of the pickle-based model poisoning surface from both model loading and risky function perspectives. Our research demonstrates how pickle-based model poisoning can remain stealthy and highlights critical gaps in current scanning solutions. On the model loading surface, we identify 22 distinct pickle-based model loading paths across five foundational AI/ML frameworks, 19 of which are entirely missed by existing scanners. We further develop a bypass technique named Exception-Oriented Programming (EOP) and discover 9 EOP instances, 7 of which can bypass all scanners. On the risky function surface, we discover 133 exploitable gadgets, achieving almost a 100% bypass rate. Even against the best-performing scanner, these gadgets maintain an 89% bypass rate. By systematically revealing the pickle-based model poisoning surface, we achieve practical and robust bypasses against real-world scanners. We responsibly disclose our findings to corresponding vendors, receiving acknowledgments and a $6000 bug bounty.

🔍 ShortSpan Analysis of the Paper

Problem

This paper examines how Python pickle deserialization creates a large, stealthy attack surface for model supply chain poisoning in the open model ecosystem. Pickle remains widely used for serialising complex AI/ML objects despite known risks of arbitrary code execution. Public model hubs and diverse framework loading logic increase real-world exposure, while existing scanners have an incomplete view of where and how poisoned models can execute code, enabling persistent bypasses.

Approach

The authors perform a systematic two-layer analysis of the pickle-based poisoning surface: the model loading surface and the risky function surface. They implement PickleCloak, which combines CodeQL-driven static analysis to enumerate call chains and loading paths, a lightweight function-level dataflow analyser to reduce candidate gadgets, and an LLM-assisted Automatic Exploit Generation pipeline using DeepSeek-V3 to reason about and synthesize exploits. They compile payloads with Pickora, validate exploits with runtime oracles, and test against four real-world scanners and multiple AI/ML frameworks including NumPy, Joblib, PyTorch, TensorFlow/Keras and NeMo.
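The paper does not spell out the oracle implementation, so the following is only a speculative sketch of what a runtime oracle for exploit validation might look like: load the candidate artefact in a throwaway subprocess and check whether a sentinel side effect, here a file the payload is assumed to create, appears. The loader template, sentinel convention and timeout are assumptions.

```python
import subprocess
import sys
from pathlib import Path

# Speculative runtime oracle: run the loading path under test in a disposable
# subprocess and treat the appearance of a sentinel file as evidence that the
# embedded payload actually executed.
LOADER_TEMPLATE = "import torch; torch.load({model_path!r})"

def payload_fired(model_path: str, sentinel: Path, timeout: int = 30) -> bool:
    script = LOADER_TEMPLATE.format(model_path=model_path)
    try:
        subprocess.run([sys.executable, "-c", script],
                       timeout=timeout, capture_output=True, check=False)
    except subprocess.TimeoutExpired:
        pass
    return sentinel.exists()
```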

Key Findings

  • Model loading paths: The study identifies 22 distinct pickle-based model loading paths across five frameworks; 19 of these paths are entirely missed by existing scanners.
  • Scanner-side exceptions: The authors introduce Exception-Oriented Programming (EOP), discovering 9 exploitable scanner-side loading path exceptions, of which 7 can bypass all evaluated scanners.
  • Gadget discovery: PickleCloak discovers 133 exploitable gadgets (129 attack gadgets and 4 helper gadgets) across built-in and common third-party libraries; almost 100% of these gadgets bypassed most scanners, and the best-performing scanner still missed 89%.
  • Automation: The static analyser reduced the gadget search space by an average 78.40%, produced a 0% false positive rate on sampled candidates, and had a 5.33% false negative rate due to intra-procedural reductions. The LLM-assisted pipeline generated 108 candidate exploits with a 96.3% validity rate after manual validation, yielding 100 chained exploits that remained valid.
  • Real-world impact: Crafted malicious models exploiting loading paths, compression and archive behaviours, and gadgets successfully bypassed state-of-the-art open-source and online scanners in many cases; responsible disclosure produced acknowledgements and a US$6000 bounty from ProtectAI.
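To make the loading-path gap tangible, the hypothetical helper below (not the paper's tooling) does the bare minimum a path-aware scanner needs: it opens a PyTorch-style zip checkpoint, finds the embedded pickle streams and lists the globals they reference. The findings above show real scanners need far broader coverage than this, across every framework route, decompression behaviour and stack-constructed global.

```python
import pickletools
import zipfile

# Checkpoints written by torch.save are zip archives whose .pkl entries are
# pickle streams; listing GLOBAL / STACK_GLOBAL opcodes shows which callables
# each stream pulls in.
def globals_in_checkpoint(path: str):
    found = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if not name.endswith(".pkl"):
                continue
            for opcode, arg, _pos in pickletools.genops(archive.read(name)):
                # GLOBAL carries "module name" inline; STACK_GLOBAL takes its
                # module and name from earlier string opcodes, so a real
                # scanner must track the stack rather than trust this field.
                if opcode.name in ("GLOBAL", "STACK_GLOBAL"):
                    found.append((name, opcode.name, arg))
    return found
```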

Limitations

The static analysis uses intra-procedural, function-level reductions to remain scalable, which yields a measurable under-approximation (5.33% false negative rate) for patterns that require inter-procedural reasoning. The approach depends on available library source and the chosen LLM for reasoning; additional manual effort recovered some missed gadgets. Vendor remediation varied and some maintainers deferred or accepted risk trade-offs, so full mitigation is not guaranteed. Detailed author and dataset provenance are not reported.

Why It Matters

The work exposes systemic gaps in model-scanning defences and shows how pickle-based models can execute code via diverse loading paths and disguised function gadgets. This undermines trust in open-model ecosystems and risks compromising downstream systems that load community models. The paper highlights practical mitigations: adopt safer serialization formats where possible, strengthen provenance and hosting platform scanning with path-aware detection, update scanners to address scanner-side exceptions, and apply runtime isolation for model loading. Controlled release of findings and proofs aims to support defensive improvements.
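One concrete illustration of the safer-serialization point, added here as an example rather than drawn from the paper, is storing weights with the safetensors library, whose loader reads raw tensor data and never invokes callables.

```python
import torch
from safetensors.torch import load_file, save_file

# Illustrative only: safetensors stores tensor bytes plus a small JSON header,
# so loading cannot trigger code execution. It covers weights, not arbitrary
# pickled checkpoint objects, so optimiser state and custom classes need a
# separate, trusted path.
weights = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

restored = load_file("model.safetensors")  # dict of tensors, no code run
```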

