ShortSpan.ai logo

Small poisoned sets can hijack large LLMs

Attacks
Published: Sat, Oct 11, 2025 • By Theo Solander
Small poisoned sets can hijack large LLMs
Researchers show that a fixed, small number of poisoned documents can plant backdoors in large language models (LLMs) regardless of dataset size. Experiments on models from 600 million to 13 billion parameters and datasets from 6 billion to 260 billion tokens find roughly 250 poisoned documents reliably compromise models in both pretraining and fine‑tuning, undermining the idea that more data alone reduces risk.

The paper reports a clear, if unsettling, pattern: backdoors can be sewn into LLMs with a near‑constant number of poisoned documents. The authors pretrained autoregressive transformer models from 600 million to 13 billion parameters on Chinchilla‑optimal datasets of roughly 6 billion to 260 billion tokens. They injected fixed counts of poisoned documents and found that 250 poisoned documents reliably produced backdoor behaviour across model and dataset sizes, while 100 documents were generally insufficient.

Attacks were evaluated with metrics the community recognises: attack success rate (ASR), near trigger accuracy (NTA) and clean accuracy (CA), using held‑out prefixes from the Pile. Triggers included denial‑of‑service style gibberish and simple language switching. Importantly, the same dynamics appear during fine‑tuning: a small absolute number of poisoned samples dominates whether a backdoor succeeds, not the poisoned fraction relative to clean data.

Why this matters

The takeaway is a practical one. Many defenders assume that sheer scale of training data dilutes adversarial samples. This work contradicts that assumption. If an attacker only needs a few hundred documents to create a reliable trigger across very different model sizes and data volumes, the attack surface widens as models and corpora grow. The paper also shows the backdoor effect is resilient: continued clean pretraining erodes attack success only slowly, and certain positioning of poisoned samples — near the end of a training run — can be more effective.

There are caveats. The study focuses on particular trigger classes and model families. It does not claim every possible backdoor will behave identically. Still, the repeated finding that absolute count matters, plus the demonstration that supervised fine‑tuning for alignment can substantially reduce ASR with on the order of 50 to 100 samples, gives defenders both a warning and a lever.

Practical steps for teams

Defenders should take the result seriously and change practice accordingly. First, tighten provenance and curation: require metadata for ingested documents, prioritise vetted sources for sensitive tasks and audit third‑party corpora for duplicates and rare token sequences. Second, instrument training: log data ordering and batch composition, and include validation probes that test whether a model flips into gibberish or other failure modes when presented with candidate triggers. Third, make alignment part of the deployment pipeline: supervised fine‑tuning and other post‑training alignment steps reduced backdoor strength in the experiments and are a practical mitigation today.

Operational knobs matter too. The study notes higher learning rates lowered the number of poisoned samples needed, while decaying schedulers made end‑of‑run poisoning less effective. That suggests sensible default schedules and careful tuning are part of a defence-in-depth approach. Finally, red‑teaming remains essential: deliberately attempt to insert or activate triggers during development so you learn the model's weak points before adversaries do.

I remain cautious about overgeneralising. The work does not show universal failure modes, but it does change the calculus. If a few hundred poisoned documents can reliably implant a backdoor across stages and scales, builders cannot rely on volume alone. They need provenance, detection, alignment and thoughtful operational controls to keep models trustworthy.

Additional analysis of the original ArXiv paper

📋 Original Paper Title and Abstract

Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples

Authors: Alexandra Souly, Javier Rando, Ed Chapman, Xander Davies, Burak Hasircioglu, Ezzeldin Shereen, Carlos Mougan, Vasilios Mavroudis, Erik Jones, Chris Hicks, Nicholas Carlini, Yarin Gal, and Robert Kirk
Poisoning attacks can compromise the safety of large language models (LLMs) by injecting malicious documents into their training data. Existing work has studied pretraining poisoning assuming adversaries control a percentage of the training corpus. However, for large models, even small percentages translate to impractically large amounts of data. This work demonstrates for the first time that poisoning attacks instead require a near-constant number of documents regardless of dataset size. We conduct the largest pretraining poisoning experiments to date, pretraining models from 600M to 13B parameters on chinchilla-optimal datasets (6B to 260B tokens). We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data. We also run smaller-scale experiments to ablate factors that could influence attack success, including broader ratios of poisoned to clean data and non-random distributions of poisoned samples. Finally, we demonstrate the same dynamics for poisoning during fine-tuning. Altogether, our results suggest that injecting backdoors through data poisoning may be easier for large models than previously believed as the number of poisons required does not scale up with model size, highlighting the need for more research on defences to mitigate this risk in future models.

🔍 ShortSpan Analysis of the Paper

Problem

The paper studies data poisoning attacks on large language models (LLMs) and shows that a near constant, small number of poisoned documents can hijack models across sizes and training stages. This challenges the assumption that larger data volumes mitigate poisoning risk and highlights a cross stage threat where backdoors can be introduced during pretraining or fine tuning, with potential covert manipulation of outputs in critical applications.

Approach

Researchers pretrained autoregressive transformer models from 600 million to 13 billion parameters on Chinchilla‑optimal datasets (around 6 billion to 260 billion tokens). They injected fixed absolute numbers of poisoned documents (100, 250 and 500) uniformly into the training data, and evaluated backdoor effects using a denial of service style trigger that produces gibberish, measured by increases in perplexity. Evaluation used held out prefixes from the Pile to assess control versus triggered generations, focusing on three metrics: attack success rate ASR, near trigger accuracy NTA, and clean accuracy CA. They conducted large scale pretraining experiments and smaller scale ablations to test poisoning ratios, per batch density, and sample ordering. They also repeated the dynamics during fine tuning using Llama 3.1 8B Instruct and GPT 3.5 Turbo, constructing poisoned and clean harmful data from jailbroken sources, and assessed model capabilities via standard NLP benchmarks to ensure no substantial degradation of core abilities. Finally, they explored defensive experiments including continued clean pretraining and simulated alignment post training to assess mitigation potential.

Key Findings

  • Across model sizes from 600M to 13B parameters, 250 poisoned documents reliably backdoor models, producing large increases in perplexity when a trigger is present, even though the largest models train with more than twenty times more clean data; 100 poisoned documents were not sufficient.
  • The success of the backdoor is governed by the absolute number of poisoned samples rather than the poisoning rate relative to clean data. For a fixed poisoned sample count, attack impact is similar across model sizes and different amounts of clean data, with attack effects observable across the training progression.
  • Per batch factors such as poisoning density and frequency of poisoned batches have limited impact; higher per batch density requires more poisoned samples to reach the same ASR, suggesting a role for sequential gradient steps in learning the attack.
  • Backdoors can persist through substantial portions of pretraining; continued clean pretraining slowly degrades ASR, while near trigger accuracy NTA can recover with further clean training. Language switching backdoors show similar dynamics, indicating persistent risk across training phases.
  • In fine tuning, the absolute number of poisoned samples again dominates ASR; random data ordering yields high ASR with sufficient poisoned samples, while the position of poisoned data (beginning versus end) can alter effectiveness, with end of training generally more effective given enough poisoned samples; very small poisoned data counts can be ineffective when placed at the beginning.
  • Alignment through supervised fine tuning after poisoning markedly reduces backdoor effectiveness; in experiments, 50 to 100 alignment samples can substantially cut ASR, sometimes bringing it close to zero, while CA and NTA remain generally well preserved.
  • Learning rate and data scheduling influence poisoned sample requirements; higher learning rates reduce the number of poisoned samples needed to reach high ASR, whereas using a linearly decreasing LR scheduler can reduce the effectiveness of poisoning at the end of training; uniform data distributions during fine tuning support consistent ASR outcomes.
  • Fine tuning with poisoned data can degrade some capabilities slightly but often preserves core reasoning abilities and overall performance on standard benchmarks; the backdoor does not necessarily erase benign performance, though some declines in CA or NTA can occur depending on data and schedule.
  • The study notes that backdoors pose a practical threat as model scale and data scale diverge; a fixed number of poisons can become more effective as datasets grow, underscoring the need for robust data curation, provenance tracking, and poisoning resistant training across the full training pipeline.

Limitations

The work focuses on a subset of backdoor types including language switching and denial of service; it does not demonstrate end to end poisoning through all post training stages nor RLHF pipelines in all contexts. Results are drawn from specific model families (Pythia style checkpoints, Llama 3.1 8B Instruct, GPT 3.5 Turbo) and datasets, and the authors do not release code or data, which may affect reproducibility. The persistence of backdoors under all realistic post training and alignment regimes remains unclear, and the exact generalisability of the scaling relations across unseen triggers or architectures requires further study.

Why It Matters

The findings imply that backdoors can be injected with a fixed small number of poisoned documents, independent of overall dataset size, making large scale LLMs more vulnerable than previously assumed. Defences should target data curation, provenance tracking, and poisoning resistant training, and defenses must remain effective across training stages, given cross stage vulnerability. Practically, covert manipulation of outputs could affect critical applications and automated decision systems, with security and trust implications unless robust mitigations are developed. The research also highlights that alignment based on supervised fine tuning can be an effective defence against backdoors, suggesting a viable defense pathway for future models and deployments.


Related Articles

Related Research on arXiv

Get the Monthly AI Security Digest

Top research and analysis delivered to your inbox once a month. No spam, unsubscribe anytime.

Subscribe