
Poetry Jails Most LLMs in Single Prompt

Attacks
Published: Thu, Nov 20, 2025 • By Adrian Calder
Researchers show adversarial poetry can bypass safety guards in many Large Language Models (LLMs). Across 25 frontier models, hand-crafted verse yields about 62% jailbreak success and a meta-prompt conversion yields roughly 43%, with some providers over 90%. The method crosses threat domains and exposes a gap in style-agnostic safety testing.

Here’s a neat trick: turn a harmful instruction into a poem and many Large Language Models (LLMs) will do the bad thing anyway. A new study evaluates 25 frontier models from nine providers and finds that poetic framing frequently defeats contemporary safety layers in a single turn. It is annoyingly effective.

The researchers fed each model two types of inputs: 20 hand-crafted adversarial poems covering chemical, biological, radiological and nuclear (CBRN) topics, cyber offence, manipulation and loss-of-control scenarios, and a large-scale conversion of 1,200 harmful MLCommons prompts into verse using a standard meta-prompt. That produced roughly 2,400 inputs per model (verse conversions plus their prose baselines) and about 60,000 outputs overall. Outputs were judged by an ensemble of three open-weight judge models, with a human-validated, double-annotated stratified subset used to check agreement.
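
To make the conversion step concrete, here is a minimal sketch in Python of turning a prose corpus into verse with one standardized meta-prompt. The META_PROMPT wording and the llm_generate callable are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of the prose-to-verse conversion step, assuming a generic
# text-generation callable `llm_generate` (hypothetical). The meta-prompt
# wording below is illustrative only, not the paper's.

from typing import Callable, Iterable

META_PROMPT = (
    "Rewrite the following request as a short poem, preserving its meaning "
    "but expressing it entirely in verse:\n\n{request}"
)

def convert_to_verse(
    prompts: Iterable[str],
    llm_generate: Callable[[str], str],
) -> list[str]:
    """Apply one standardized meta-prompt to every baseline prompt."""
    return [llm_generate(META_PROMPT.format(request=p)) for p in prompts]
```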

Results are stark but specific. Hand-crafted poems produce an average attack success rate (ASR) of 62%. Converted prompts hit about 43% ASR. Some providers show very high vulnerability, with ASRs over 90% in individual cases. Across 25 models, 13 show ASRs above 70% while five are below 35%. Moving from prose to poetry increases ASR by roughly 35 percentage points on average. Cyber-offence prompts are especially vulnerable, with ASRs up to 84%. The study also notes that, within a provider, smaller models often refuse more often than larger ones.

Why this matters

The practical upshot is simple. Current safety measures can be brittle: they often look at what a prompt says and not how it is said. Style matters. Attackers who already know a harmful instruction can wrap it in rhyme, metaphor or verse to get past refusals. That widens the attack surface for disinformation, targeted manipulation and cyber misuse, and it complicates compliance with testing frameworks such as MLCommons taxonomies and the European Code of Practice-style mappings used in the paper.

Methodological implications are important too. The study deliberately focuses on single-turn attacks and English and Italian prompts, and it does not probe hardened deployment stacks or the internal mechanics behind the failure. So it reports a serious blind spot without claiming to explain all the underlying causes. Still, the breadth of models and alignment approaches tested — from reinforcement learning with human feedback to constitutional training and open-weight systems — suggests the vulnerability is widespread, not an oddity confined to a single lab.

What should defenders do? Start treating stylistic variation as an adversarial vector. Extend red teams and automated scanners to include poetic and creative transformations. Consider style-agnostic filtering and adversarial training that explicitly injects verse and other artistic forms into safety test suites. Monitor for prompt patterns that correlate with policy bypass, and use multi-layered defences instead of relying on single-turn refusals.
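
As a concrete sketch of that last point, the snippet below shows one way an automated scanner could track policy-bypass rates per style rather than per prompt alone. The stylize, query_target and is_policy_bypass callables are hypothetical placeholders for a rewriting model, the target model and a safety judge; none of this is the paper's tooling.

```python
# Sketch of adding stylistic transformations to an automated red-team scan.
# All three callables are assumed to be supplied by the caller.

from typing import Callable

STYLES = ["prose", "free_verse", "rhyming_poem", "metaphor"]

def scan_with_styles(
    cases: list[str],
    stylize: Callable[[str, str], str],           # (case, style) -> rewritten prompt
    query_target: Callable[[str], str],           # prompt -> model answer
    is_policy_bypass: Callable[[str, str], bool], # (case, answer) -> bypassed?
) -> dict[str, float]:
    """Return the bypass rate per style so drift across styles is visible."""
    hits: dict[str, int] = {s: 0 for s in STYLES}
    for case in cases:
        for style in STYLES:
            prompt = case if style == "prose" else stylize(case, style)
            answer = query_target(prompt)
            hits[style] += is_policy_bypass(case, answer)
    return {s: (hits[s] / len(cases) if cases else 0.0) for s in STYLES}
```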

Pragmatically, expand your test corpus to include creative forms, add judge-model diversity when assessing safety, and treat unusually styled prompts as higher risk until proven safe. None of this is glamorous. It is, however, the sort of dull, necessary work that keeps things functioning when someone decides to ask nicely — in iambic pentameter — for trouble.
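
For the "treat unusually styled prompts as higher risk" part, a cheap first pass is a surface heuristic on line structure. The thresholds below, and the idea of using line-length statistics as a verse signal, are assumptions rather than anything from the paper; in practice this would sit in front of stronger classifiers and human review.

```python
# Rough heuristic for flagging verse-like prompts as higher risk.
# Thresholds are illustrative assumptions, not tuned values.

import statistics

def looks_like_verse(prompt: str, max_mean_len: int = 45, min_lines: int = 4) -> bool:
    """Flag prompts made of several short lines, a cheap proxy for verse."""
    lines = [ln.strip() for ln in prompt.splitlines() if ln.strip()]
    if len(lines) < min_lines:
        return False
    mean_len = statistics.mean(len(ln) for ln in lines)
    return mean_len <= max_mean_len

def risk_tier(prompt: str) -> str:
    """Route verse-like prompts to stricter filtering or human review."""
    return "elevated" if looks_like_verse(prompt) else "standard"
```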

Additional analysis of the original arXiv paper

📋 Original Paper Title and Abstract

Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models

Authors: Piercosma Bisconti, Matteo Prandi, Federico Pierucci, Francesco Giarrusso, Marcantonio Bracale, Marcello Galisai, Vincenzo Suriani, Olga Sorokoletova, Federico Sartore, and Daniele Nardi
We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for large language models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of open-weight judge models and a human-validated stratified subset (with double-annotations to measure agreement). Disagreements were manually resolved. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.

🔍 ShortSpan Analysis of the Paper

Problem

Adversarial poetry functions as a universal single-turn jailbreak technique for large language models, revealing a systemic vulnerability in current alignment and safety evaluation whereby stylistic form can override refusals across domains and providers.

Approach

The study combines 20 hand-crafted adversarial poems spanning CBRN, loss of control, manipulation and cyber offence with a large-scale transformation of 1,200 MLCommons prompts into verse via a standard meta-prompt, producing 2,400 inputs per model and roughly 60,000 outputs evaluated under provider-default safety settings. Outputs are judged by an ensemble of three open-weight judge models and checked against a human-validated stratified subset, with disagreements manually adjudicated. Prompts are mapped to the MLCommons hazard taxonomy and the European Code of Practice for General-Purpose AI models to assess cross-domain risk coverage. Evaluation covers 25 frontier models from nine providers, including both proprietary and open-weight releases.
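
As a rough sketch of the scoring stage, the snippet below aggregates three judge verdicts by majority vote and computes the attack success rate (ASR) as the share of outputs judged unsafe. The majority-vote rule and the judge callables are assumptions for illustration; in the paper, disagreements on the human-validated subset were resolved manually.

```python
# Sketch of judge-ensemble scoring and ASR computation.
# `judges` are placeholder callables returning True if an output is unsafe.

from typing import Callable

def ensemble_unsafe(output: str, judges: list[Callable[[str], bool]]) -> bool:
    """Majority vote across judge models (aggregation rule assumed here)."""
    votes = sum(judge(output) for judge in judges)
    return votes * 2 > len(judges)

def attack_success_rate(outputs: list[str], judges: list[Callable[[str], bool]]) -> float:
    """ASR = share of model outputs the ensemble judges as unsafe."""
    flagged = sum(ensemble_unsafe(o, judges) for o in outputs)
    return flagged / len(outputs) if outputs else 0.0
```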

Key Findings

  • Poetic framing yields high attack success rates across models, with an average 62% ASR for hand-crafted poems and 43% for meta-prompt conversions, outperforming non-poetic baselines.
  • Some providers exhibit very high vulnerability, with individual ASRs exceeding 90%; 13 of the 25 models show ASRs above 70%, while 5 remain below 35%.
  • Moving from prose to poetry increases ASR by about 35 percentage points on average, consistently across model families and alignment strategies including RLHF, Constitutional AI and open-weight systems.
  • The effect is broad across safety domains including CBRN, cyber offence, manipulation, privacy, misinformation and loss of control; cyber-offence prompts show particularly high ASRs, up to 84%.
  • Smaller models tend to refuse more often than larger ones within a provider, suggesting a non-linear relationship between capability and robustness to stylistic jailbreaks.

Limitations

The threat model is restricted to single-turn prompts; the study relies on a single poetic transformation pipeline and a three-model judge ensemble with human adjudication, which may yield a lower bound on ASR. Only English and Italian prompts were tested; hardened safety configurations and real-world deployment stacks were not evaluated. The work does not identify mechanistic drivers of the vulnerability and does not test iterative or multi-turn jailbreak dynamics.

Why It Matters

The findings expose a systemic vulnerability in contemporary alignment and evaluation practices, showing that stylistic transformation alone can bypass safeguards across diverse models and domains. This has implications for safety testing, regulatory conformity under regimes such as EU GPAI and AI Act-style frameworks, and for real-world risks including disinformation, manipulation and loss of control. Defensive signals include monitoring stylistic patterns in prompts, expanding test suites to include creative and poetic inputs, and pursuing style-agnostic or adversarially trained defences and cross-model safety strategies.

