
Study warns AI can erode civilisation slowly

Society
Published: Tue, Jan 16, 2024 • By James Armitage
A systems-analysis paper argues there are two AI existential-risk paths: decisive, sudden failure from a single advanced system, and accumulative, slow erosion from many small AI-driven failures interacting across economic, political and military systems. It urges practitioners to monitor cascading weaknesses and combine sector monitoring with central oversight to protect resilience.

Artificial intelligence (AI) means software or systems that perform tasks that normally need human intelligence. A new paper contrasts two ways AI could pose civilisation-level risk: a decisive event from a single runaway system, and an accumulative pathway where many small AI-induced harms slowly erode social, economic and political resilience until a trigger causes collapse.

The distinction matters because defensive strategies differ. If the risk comes from one catastrophic system, the response centres on controlling development of very advanced models. If risk accumulates, the practical threat surface is broad and diffuse: brittle markets, degraded institutions, misinformation, surveillance and insecure systems that interact and amplify each other.

Scope and stakes for security teams

The paper uses systems analysis and a thought experiment called MISTER to show how economic, political and military subsystems can form feedback loops and tipping points. For practitioners this means incidents that look trivial in isolation can matter if they weaken redundancy, trust or governance. The accumulative view links the ethics-and-social-risk world with long-term safety planning.

How it works: small failures propagate. AI-driven automation and optimisation shift market structures and attack surfaces; manipulation and surveillance erode public trust; cyber intrusions exploit new dependencies. These effects can cascade and interact, so monitoring a single metric is not enough.
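
To make the point about single metrics concrete, here is a minimal sketch (not from the paper) of a composite resilience score that penalises co-occurring degradation across sectors; the indicator names, values and thresholds are illustrative assumptions only.

```python
from dataclasses import dataclass

# Hypothetical indicator readings, normalised to 0..1 where 1.0 is fully healthy.
# Names and thresholds are illustrative, not taken from the paper.
@dataclass
class SectorIndicator:
    name: str
    health: float  # 0.0 (failed) .. 1.0 (healthy)

def resilience_score(indicators: list[SectorIndicator]) -> float:
    """Toy composite score: average health, penalised when several sectors
    are degraded at once, since interacting weaknesses are worse than
    isolated ones."""
    avg = sum(i.health for i in indicators) / len(indicators)
    degraded = [i for i in indicators if i.health < 0.6]
    # Interaction penalty grows with the number of co-occurring weaknesses.
    penalty = 0.1 * max(0, len(degraded) - 1)
    return max(0.0, avg - penalty)

indicators = [
    SectorIndicator("market_stability", 0.55),
    SectorIndicator("institutional_trust", 0.58),
    SectorIndicator("infosec_posture", 0.80),
]

score = resilience_score(indicators)
if score < 0.5:
    print(f"ALERT: composite resilience {score:.2f} - investigate cross-sector erosion")
else:
    print(f"Composite resilience {score:.2f}")
```

In this toy example, no single indicator looks alarming on its own; it is the combination of two simultaneously degraded sectors that drags the composite score down, which is the behaviour the accumulative hypothesis asks monitors to watch for.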

Impact and risk: the paper warns that slow degradation can leave societies vulnerable to a trigger event that causes disproportionate harm. It recommends distributed sector monitoring paired with central oversight of advanced development to avoid fragmented governance.

Practical mitigations: start with minimal viable controls — inventory AI dependencies, enforce basic secure development and incident reporting, and set thresholds for escalation. Good-better-best options scale from sectoral dashboards and cross-sector exercises to coordinated oversight bodies and stress-testing of critical infrastructure.
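
As a minimal sketch of the "inventory plus escalation thresholds" control, the snippet below pairs a hypothetical AI-dependency inventory with per-criticality incident thresholds; all system names, counts and threshold values are assumptions for illustration, not recommendations from the paper.

```python
# Illustrative sketch only: a minimal AI-dependency inventory with an
# escalation rule. Field names, counts and thresholds are assumptions.
AI_DEPENDENCIES = [
    {"system": "fraud-scoring-model", "owner": "payments", "criticality": "high"},
    {"system": "support-chatbot", "owner": "customer-ops", "criticality": "medium"},
]

# Incidents reported per system in the current review period (hypothetical).
INCIDENTS = {"fraud-scoring-model": 3, "support-chatbot": 1}

# Escalation thresholds by criticality: how many incidents in a period
# trigger review by a central oversight function.
ESCALATION_THRESHOLD = {"high": 2, "medium": 4, "low": 8}

def systems_to_escalate(deps, incidents, thresholds):
    """Return systems whose incident count meets or exceeds the
    threshold for their criticality tier."""
    flagged = []
    for dep in deps:
        count = incidents.get(dep["system"], 0)
        if count >= thresholds[dep["criticality"]]:
            flagged.append((dep["system"], count))
    return flagged

for system, count in systems_to_escalate(AI_DEPENDENCIES, INCIDENTS, ESCALATION_THRESHOLD):
    print(f"Escalate {system}: {count} incidents this period")
```

The design choice here mirrors the article's good-better-best framing: the same inventory and thresholds can start as a spreadsheet-grade control and later feed a sectoral dashboard or a coordinated oversight body.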

Limitations and caveats: the accumulative model is harder to quantify and validate. The paper notes modelling uncertainties and the need for early-warning signal research rather than claiming precise timelines or probabilities.

Forward-looking kicker: whether risk arrives fast or slow, resilience wins. Security teams should treat small, repeated failures as intelligence, not noise, and design controls that stop erosion before it becomes an irreversible cascade.

Additional analysis of the original arXiv paper

📋 Original Paper Title and Abstract

Two Types of AI Existential Risk: Decisive and Accumulative

The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This discourse, however, often neglects the serious possibility of AI x-risks manifesting incrementally through a series of smaller yet interconnected disruptions, gradually crossing critical thresholds over time. This paper contrasts the conventional "decisive AI x-risk hypothesis" with an "accumulative AI x-risk hypothesis." While the former envisions an overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence, the latter suggests a different causal pathway to existential catastrophes. This involves a gradual accumulation of critical AI-induced threats such as severe vulnerabilities and systemic erosion of economic and political structures. The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly converge, undermining societal resilience until a triggering event results in irreversible collapse. Through systems analysis, this paper examines the distinct assumptions differentiating these two hypotheses. It is then argued that the accumulative view can reconcile seemingly incompatible perspectives on AI risks. The implications of differentiating between these causal pathways -- the decisive and the accumulative -- for the governance of AI as well as long-term AI safety are discussed.

🔍 ShortSpan Analysis of the Paper

Problem

Public discourse on AI existential risks typically concentrates on decisive events caused by highly capable AI, yet there is a serious possibility that risks could unfold gradually through interconnected disruptions. The paper contrasts the conventional decisive AI x-risk hypothesis with an accumulative AI x-risk hypothesis, where a gradual accumulation of AI-induced threats erodes economic, political and social resilience and may culminate in irreversible collapse after a triggering event. Through systems analysis, the study examines the distinct assumptions behind these two causal pathways and argues that the accumulative view can reconcile divergent perspectives on AI risk and governance for long-term safety.

Approach

The study employs systems analysis to compare two causal pathways to AI existential catastrophes. It clarifies risk in terms of uncertainty about adverse outcomes and distinguishes AI x-risks from AI social risks such as manipulation, misinformation, insecurity, surveillance and rights infringements. It presents the accumulative hypothesis as a gradual accumulation of disruptions across interlinked subsystems rather than a single decisive event, and uses a "perfect storm" thought experiment known as MISTER to illustrate how interactions between AI-induced disruptions in economic, political and military realms could undermine global stability. Three meso-level subsystems (economic, political and military) are analysed as a network in which initial perturbations can propagate via feedback loops and thresholds, highlighting the role of system dynamics in risk escalation and governance.
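
The paper's analysis is qualitative, but the feedback-loop idea can be illustrated with a toy simulation. The sketch below is an assumption of this write-up rather than the paper's model: small random shocks hit three subsystems and damage spills over along a coupling network, so erosion accelerates as health drops until an average-health threshold is crossed. Coupling weights, shock sizes and the collapse threshold are invented for illustration.

```python
import random

# Toy discrete-time model of three coupled subsystems, loosely inspired by
# the paper's systems-analysis framing. All parameters are invented.
SUBSYSTEMS = ["economic", "political", "military"]

# How damage in one subsystem (row) spills over into another (column).
COUPLING = {
    "economic":  {"economic": 0.0, "political": 0.3, "military": 0.1},
    "political": {"economic": 0.2, "political": 0.0, "military": 0.3},
    "military":  {"economic": 0.1, "political": 0.2, "military": 0.0},
}

COLLAPSE_THRESHOLD = 0.3   # average health below this counts as systemic failure
RECOVERY_RATE = 0.005      # slow natural repair each step

def simulate(steps: int = 200, shock_prob: float = 0.25, seed: int = 1) -> None:
    random.seed(seed)
    health = {s: 1.0 for s in SUBSYSTEMS}
    for t in range(steps):
        # Small, random AI-induced disruptions hit individual subsystems.
        for s in SUBSYSTEMS:
            if random.random() < shock_prob:
                health[s] -= random.uniform(0.03, 0.12)
        # Damage propagates along the coupling network (feedback loop):
        # the weaker a source subsystem, the more it drags on its neighbours.
        spill = {s: sum(COUPLING[src][s] * (1.0 - health[src]) * 0.05
                        for src in SUBSYSTEMS) for s in SUBSYSTEMS}
        for s in SUBSYSTEMS:
            health[s] = min(1.0, max(0.0, health[s] - spill[s] + RECOVERY_RATE))
        avg = sum(health.values()) / len(health)
        if avg < COLLAPSE_THRESHOLD:
            print(f"step {t}: average health {avg:.2f} - tipping point crossed")
            return
    print(f"no collapse within {steps} steps; final health {health}")

simulate()
```

The point of the sketch is the shape of the dynamics, not the numbers: no single shock is catastrophic, but coupling turns accumulated weakness into accelerating decline, which is the mechanism the accumulative hypothesis describes.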

Key Findings

  • Two distinct AI x-risk pathways are contrasted: a decisive pathway driven by a single disruptive system, and an accumulative pathway driven by multiple interacting disruptions.
  • The accumulative pathway weakens systemic resilience through cascading interactions across economic, political and military subsystems, potentially culminating in irreversible collapse after a triggering event.
  • The "perfect storm" MISTER scenario demonstrates how manipulation, insecurity threats, surveillance, trust erosion, economic destabilisation and rights infringements can combine with cyber attacks and social disruption to produce a global crisis in an interconnected world.
  • Systems analysis provides a method to map propagation pathways, identify tipping points and design interventions, linking short-term social and ethical risks with long-term existential risks.
  • Governance implications call for integrating risk frameworks, combining distributed monitoring of accumulative risks with centralised oversight of advanced AI development to address fragmentation and enable proactive mitigation.

Limitations

The paper recognises uncertainties in modelling accumulative AI x-risks, acknowledging that empirical validation and precise quantification remain challenging. It discusses objections to the accumulative model and argues for monitoring, system-dynamics simulations and further formal frameworks to study accumulation. It notes the need to identify early-warning signals and thresholds and to integrate diverse governance approaches to manage AI risk.

Why It Matters

Practically, the accumulative perspective broadens threat modelling for long-horizon security planning by emphasising how small AI-induced weaknesses in software, markets and institutions can cascade and interact across domains. It also proposes bridging the fragmentation between the ethical-risk and existential-risk literatures by adopting a tiered governance framework that combines distributed, sector-specific monitoring with centralised oversight of advanced AI development. The analysis highlights systemic interactions and feedbacks that can erode resilience, informs risk-management policy and safety research, and supports resilience building in critical socio-economic and political domains for AI governance and security.

