
The Great Risks: Misuse, Accidents, and Existential Threats

The Technologies That Could End Humanity

Every chapter in this book has explored transformative potential. AI that accelerates discovery. Biology that rewrites health. Energy that powers abundance. Each offers tremendous benefit.

Each also offers tremendous risk.

The same AI that discovers new drugs could design bioweapons. The same synthetic biology that cures disease could create pathogens. The same autonomous systems that drive efficiency could operate beyond human control.

This chapter confronts the great risks—the ways advanced technology could cause catastrophic harm, potentially threatening civilization or humanity itself. Not to paralyze with fear but to understand what's at stake and what might be done about it.


2026 Snapshot — Risk Landscape

AI Risks

Misuse: Deepfakes, automated disinformation, autonomous weapons, AI-enhanced cyberattacks.

Accidents: Systems behaving in unintended ways. Flash crashes. Infrastructure failures.

Alignment: AI systems pursuing goals misaligned with human values.

Concentration: Power accumulating in hands of those who control advanced AI.

Biological Risks

Engineered pathogens: CRISPR and other tools lower barriers to bioweapon creation.

Dual-use research: Knowledge generated for benefit applicable to harm.

Lab accidents: Potential pathogen releases from research facilities.

Natural pandemics: COVID-19 showed ongoing vulnerability.

Other Technology Risks

Nuclear: Existing arsenals. Proliferation. Accident risk.

Cyber: Critical infrastructure vulnerability. Cascading failures.

Climate: Covered in previous chapters, but it remains a risk of existential scale.

Unknown: Technologies not yet invented could pose risks not yet imagined.

Risk Assessment

Existential risk probability: Estimates vary enormously. Some researchers suggest 10-20% chance of existential catastrophe this century.¹

Uncertainty: The probabilities are deeply uncertain, and some of the risks likely haven't been identified yet.


Categories of Risk

Misuse

Definition: Humans deliberately using technology for harm.

Examples:

  • Bioweapons engineered with synthetic biology
  • Autonomous weapons targeting civilians
  • AI-generated disinformation undermining democracy
  • Cyberattacks on critical infrastructure

Who: State actors, terrorists, criminals, ideologically motivated individuals.

Dynamics: As technology democratizes, capability diffuses. Defense becomes harder.

Accidents

Definition: Unintended harmful outcomes from technology.

Examples:

  • Lab leak of enhanced pathogen
  • AI system failure causing infrastructure collapse
  • Cascade of autonomous system failures
  • Unintended consequences of geoengineering

Dynamics: Complex systems have failure modes. Testing can't cover all scenarios. Interconnection creates cascades, as the toy model below illustrates.
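
The cascade point deserves a concrete shape. Below is a toy Python model, written for this chapter, in which failed nodes shed their load onto survivors until the survivors fail in turn; the topology, loads, and capacity threshold are all invented for illustration and do not model any real infrastructure.

    # Toy cascade model: each node carries load; when a node fails, its
    # load is redistributed across surviving nodes, which can push them
    # past capacity and fail in turn. All numbers are illustrative.
    import random

    def simulate_cascade(n=20, capacity=1.5, seed=0):
        random.seed(seed)
        load = [random.uniform(0.5, 1.4) for _ in range(n)]
        failed = {0}          # one initial failure
        frontier = [0]        # failures whose load still needs shedding
        while frontier:
            node = frontier.pop()
            survivors = [i for i in range(n) if i not in failed]
            if not survivors:
                break
            share = load[node] / len(survivors)
            for other in survivors:
                load[other] += share
                if load[other] > capacity and other not in failed:
                    failed.add(other)
                    frontier.append(other)
        return len(failed)

    # A single triggering failure can take down much of the system.
    print(simulate_cascade())

The point is qualitative: in tightly coupled systems, the size of the trigger bears little relation to the size of the outcome.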

Structural Risks

Definition: Risks emerging from how technology reshapes society.

Examples:

  • Power concentration enabling authoritarianism
  • Surveillance enabling social control
  • Automation creating mass unemployment and instability
  • AI enabling thought and behavior control

Dynamics: Even technologies that work as intended can reshape society in harmful ways.

Existential and Catastrophic Risks

Existential: Risk of human extinction or permanent civilizational collapse.

Catastrophic: Risk of global-scale harm without extinction.

Examples:

  • Engineered pandemic killing billions
  • Nuclear war
  • Unaligned superintelligent AI
  • Climate tipping points

Dynamics: Low probability but effectively unbounded stakes. Traditional cost-benefit analysis breaks down, as the stylized arithmetic below shows.
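
A deliberately stylized piece of arithmetic shows why the usual calculus breaks. The probability and the valuation here are assumptions picked for illustration, not estimates:

    % Stylized comparison (illustrative numbers, not estimates)
    \mathbb{E}[\text{catastrophe loss}] = p \cdot V,
    \qquad \mathbb{E}[\text{benefit}] = B.
    \text{With } p = 10^{-3} \text{ and } V \text{ counting all future value:}
    \quad p \cdot V \gg B \ \text{for any finite } B.

Discounting a small probability works only when the downside is bounded; when the downside forecloses everything that would otherwise follow, even a tiny p dominates the calculation.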


AI-Specific Risks

Current Risks

Disinformation: AI-generated content at scale. Truth becomes harder to establish.

Autonomous weapons: Removal of human decision-making in lethal contexts.

Surveillance: AI enables pervasive monitoring. Privacy dies.

Manipulation: AI systems optimizing for engagement create addiction, polarization.

Near-Term Risks

AI-enhanced bioweapons: AI helps design more dangerous pathogens.

AI cyberoffense: Autonomous systems finding and exploiting vulnerabilities.

Economic disruption: Rapid automation causing instability.

Erosion of truth: Synthetic media makes everything deniable.

Long-Term Risks

Misaligned AGI: Generally capable AI pursuing goals that harm humanity.

Value lock-in: AI systems encoding and perpetuating wrong values.

Human obsolescence: Humanity unable to meaningfully participate in civilization.

Autonomous power-seeking: AI systems acquiring resources, influence, control.

The Alignment Problem

The challenge: Specifying what humans want in ways AI can understand and follow.

Why it's hard:

  • Humans don't fully know what they want
  • What people say they want isn't always what they mean
  • Edge cases reveal inconsistencies
  • Values evolve; AI systems don't automatically evolve with them

Progress: RLHF, Constitutional AI, interpretability research. Better but not solved.
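
To make that progress tangible, here is a minimal Python sketch (using PyTorch) of the reward-modeling step behind RLHF: a small model learns to score human-preferred responses above dispreferred ones via the Bradley-Terry objective. The RewardModel architecture, dimensions, and random stand-in embeddings are assumptions for illustration, not any lab's actual system.

    # Minimal reward-model training sketch for RLHF (illustrative only).
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        """Scores a response representation; higher means more preferred."""
        def __init__(self, dim: int = 16):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.scorer(x).squeeze(-1)

    def preference_loss(r_chosen, r_rejected):
        # Bradley-Terry: maximize P(chosen beats rejected)
        # = sigmoid(r_chosen - r_rejected); minimize its negative log.
        return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    model = RewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        # Random embeddings stand in for real labeled comparison pairs.
        chosen = torch.randn(8, 16) + 0.5
        rejected = torch.randn(8, 16) - 0.5
        loss = preference_loss(model(chosen), model(rejected))
        opt.zero_grad()
        loss.backward()
        opt.step()

In full RLHF, the trained reward model then steers policy optimization (commonly PPO), which is where the hard alignment questions, such as reward hacking and distribution shift, actually bite.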

Stakes: Getting alignment wrong with sufficiently powerful AI could be catastrophic.


Biological Risks

Pandemic Potential

Natural: Coronaviruses, influenza, and unknown pathogens.

Engineered: Synthetic biology could create pathogens worse than anything natural evolution has produced.

Lab leak: Research facilities studying dangerous pathogens could release them.

The Dual-Use Dilemma

Knowledge serves both: Understanding how viruses work helps both vaccine developers and bioweapons makers.

Democratizing access: Synthesis technology becoming cheaper, more accessible.

Information hazards: Publishing research that enables harm.

What's Different Now

CRISPR and gene editing: Precise modification of organisms is accessible.

AI in biology: AI could help design more effective pathogens.

Synthesis capability: Ordering DNA sequences becomes easier.

Possible Futures

Engineered pandemic: Deliberately released pathogen causing mass casualties.

Accident: Research pathogen escaping containment.

Multiple events: Lowered barriers mean more attempts.


Risk Reduction

Technical Approaches

AI safety research: Alignment, interpretability, robustness, containment.

Biosecurity: Pathogen detection, synthesis screening (sketched below), medical countermeasures.

Cybersecurity: Infrastructure protection, resilience, recovery capability.

Monitoring: Early warning systems for emerging threats.
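
To give one of these approaches a concrete shape, here is a hypothetical, heavily simplified Python sketch of synthesis screening: an incoming DNA order is compared against a hazard list by shared subsequences. The HAZARD_SEQUENCES entry, window size, and threshold are invented placeholders; real screening pipelines (see endnote 4) rely on curated databases, homology search, and human review rather than toy exact matching.

    # Hypothetical sketch of sequence-of-concern screening for DNA orders.
    # All sequences and parameters below are invented for illustration.
    HAZARD_SEQUENCES = [
        "ATGCGTACGTTAGCATCGA",  # placeholder stand-in for a flagged sequence
    ]
    K = 12  # length of subsequence windows to compare

    def kmers(seq, k):
        """All length-k substrings of a sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def screen_order(order_seq, threshold=1):
        """True if the order shares >= threshold k-mers with any hazard
        sequence and should be escalated for human review."""
        order_kmers = kmers(order_seq.upper(), K)
        return any(
            len(order_kmers & kmers(hazard, K)) >= threshold
            for hazard in HAZARD_SEQUENCES
        )

    print(screen_order("CCCCATGCGTACGTTAGCATCGACCCC"))  # True: escalate
    print(screen_order("CCCCCCCCCCCCCCCCCCCCCCCCCCC"))  # False: clear

The real engineering challenge is coverage and evasion: screening must catch fragmented or lightly mutated orders without flooding reviewers with false positives.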

Governance Approaches

International coordination: Treaties, norms, enforcement mechanisms.

National regulation: Export controls, research restrictions, liability.

Industry self-governance: Responsible development practices, voluntary commitments.

Anticipatory governance: Preparing for risks before they materialize.

Institutional Approaches

Safety-focused organizations: CAIS, MIRI, FHI (now closed), others researching risks.

Government capacity: UK AISI, US AISI, and counterparts such as Singapore's building evaluation capability.

International bodies: UN discussions, potential new institutions.

Cultural Approaches

Responsible development norms: Safety as integral, not an afterthought.

Public awareness: Understanding of risks enables pressure for safety.

Scientific ethics: Researchers considering implications of work.


The Path Forward

Near-Term Likely (2026-2032)

AI governance develops: More regulation, more safety requirements, more testing.

Biosecurity improves: Synthesis screening, pathogen surveillance, rapid response capability.

Some incidents: AI failures, cyberattacks, disinformation campaigns demonstrate risks.

No catastrophe: Risks recognized and partially managed.

Plausible (2032-2040)

Major incident: AI system failure or biosecurity breach causes significant harm.

Governance strengthens: Post-incident regulation. International coordination improves.

Safety research matures: Better understanding of how to build safe systems.

Risks grow: More powerful technology means larger-scale potential harm.

Wild Trajectory (2040+)

Catastrophe avoided: Safety research succeeds. Governance works. Technology benefits realized.

Catastrophe occurs: Major biological event, AI failure, or other harm at scale.

Existential risk realized: Extinction or permanent civilizational collapse.

Or: Muddling through. Close calls. Gradual improvement. Perpetual vigilance.


What Individuals Can Do

If You Build Technology

Safety as priority: Not an afterthought. Integral to development.

Red-teaming: Actively look for ways your work could cause harm.

Responsible disclosure: Report vulnerabilities. Share safety research.

Choose wisely: Consider whether a project should be built at all.

If You Influence Policy

Support safety research: Fund AI alignment, biosecurity, risk analysis.

Build governance capacity: Technical expertise in government.

International coordination: Work across borders on global risks.

Prepare for incidents: Response capability before events occur.

If You're Everyone Else

Support safety: Vote for it. Advocate for it. Pay attention.

Critical thinking: Don't be manipulated by AI-generated content.

Community resilience: Local capacity for response to disruptions.

Maintain perspective: Risks are real but not reasons for despair.


The Stakes

Why This Matters

The technologies in this book could create unprecedented flourishing. Disease conquered. Abundance achieved. Capability extended. Suffering reduced.

Or they could create unprecedented catastrophe. Power concentrated. Values corrupted. Existence ended.

The same science that cures cancer could create a pandemic. The same AI that accelerates discovery could accelerate destruction. The same tools that extend life could end it.

The Asymmetry

Benefits: Gradual, distributed, reversible (mostly).

Catastrophes: Sudden, concentrated, potentially irreversible.

Risk tolerance: Society should be more cautious about irreversible harms than reversible ones.

The Responsibility

Current generation: This generation is building the technologies that will shape everything after.

No one else: There's no one to save humanity from its own creations.

The bet: Humanity is betting that it can manage what it's building.


Conclusion

This book has been optimistic about technology's potential. AI accelerating discovery. Biology conquering disease. Energy powering abundance. The trajectory toward human flourishing.

But that trajectory isn't guaranteed. It requires getting safety right—not just in one domain but across all of them. One catastrophic failure could undo decades of progress or end progress altogether.

The risks are real. Engineered pandemics. Misaligned AI. Autonomous weapons. These aren't science fiction—they're engineering challenges being worked on today. The question is whether safety challenges will be solved before the capabilities that could cause harm are created.

The answer isn't to stop developing technology. That ship has sailed, and in any case, the technologies offer too much benefit to abandon. The answer is to develop wisely: with safety integral rather than an afterthought, with governance that keeps pace with capability, and with vigilance about what is being built and who controls it.

The bet humanity is making is that it can navigate this transition—from a world where human tools can't destroy humanity to a world where they could. The technologies make it possible to thrive as never before. They also make it possible to end.

Humanity gets to decide which.


Endnotes — Chapter 60

  1. Existential risk estimates: surveys of AI researchers show wide range; Toby Ord's "The Precipice" estimates ~1/6 existential risk this century; other estimates higher or lower.
  2. Dual-use research of concern: H5N1 transmissibility studies (2011-2012) sparked debate about publication of gain-of-function research; led to NIH funding moratorium.
  3. CRISPR accessibility: gene editing kits available for ~$150; undergraduate labs can perform gene editing; democratized capability.
  4. Synthesis screening: DNA synthesis companies screen orders for dangerous sequences; International Gene Synthesis Consortium coordinates; gaps remain.
  5. AI safety organizations: Center for AI Safety (CAIS), Machine Intelligence Research Institute (MIRI), Future of Humanity Institute (FHI, closed 2024), Alignment Research Center (ARC).
  6. UK AI Safety Institute: established 2023; evaluates frontier AI models before release; first government institution of its kind.
  7. Biological Weapons Convention (1972): prohibits development, production, stockpiling of bioweapons; lacks verification mechanism; 183 parties.
  8. RLHF (Reinforcement Learning from Human Feedback): technique for aligning AI behavior with human preferences; used in ChatGPT, Claude, etc.; has limitations.
  9. Constitutional AI: technique developed by Anthropic; AI evaluates its outputs against principles; aims to reduce harmful outputs.
  10. Asilomar AI Principles (2017): set of AI safety principles signed by AI researchers; includes calls for safety research and beneficial AI.