Dedication
For everyone building the future—and everyone who will live in it.
Epigraph
"The future is already here—it's just not evenly distributed." —William Gibson
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." —Roy Amara, Institute for the Future
Disclaimer
This book is an exercise in structured speculation.
It is not prophecy. It is not a business plan. It is not investment advice. The specific predictions in these pages will almost certainly be wrong in their details—timing, sequence, and mechanism will diverge from what actually unfolds. History does not move in straight lines, and the future has a habit of arriving sideways.
What this book attempts is something different: to map the territory of plausible change across the major domains of human activity, given what is known about the capabilities now emerging from artificial intelligence and its interaction with other accelerating technologies. The goal is not to predict the future but to prepare for it—to give readers a framework for thinking about transformations that may arrive faster than institutions, careers, and mental models can adapt.
Throughout, this book distinguishes between three levels of confidence:
- Near-term likely: Technologies that exist today and are scaling along observable trajectories. Barring major disruption, these developments will probably arrive within five to seven years.
- Plausible: Developments that require breakthroughs which appear achievable but are not guaranteed. These exist on a seven-to-fifteen-year horizon, assuming continued progress.
- Wild: Speculative possibilities that would require multiple unlikely events, unknown physics, or societal shifts that seem improbable from today's vantage point. These are included not as predictions but as boundary markers—the edges of what might be possible.
Timelines are the weakest part of any forecast. This book focuses on what rather than when, and attempts to be honest about uncertainty. Where research is cited, the aim has been for peer-reviewed sources and credible institutional analysis, but readers should treat all claims as provisional. The footnotes and endnotes point to primary sources; skeptical verification is encouraged.
Finally, a note on bias: this book takes neither a technologically utopian nor a doom-mongering stance. The transformations described here are largely inevitable in direction, though not in outcome. The same technologies that could cure disease and eliminate poverty could also concentrate power, displace billions from meaningful work, or create catastrophic risks. Which future emerges depends on choices made by governments, companies, researchers, and citizens in the years immediately ahead.
This book is an attempt to inform those choices.
Genesis — The Essay That Started This Book
In October 2024, Dario Amodei—CEO of Anthropic and one of the architects of modern AI systems—published an essay titled "Machines of Loving Grace." It was not about AI risk, though Amodei has written extensively about that. It was about radical optimism.
Amodei's central argument was startling in its ambition: powerful AI could compress 50-100 years of human scientific progress into 5-10 years. Not through magic, but through the systematic application of superhuman intelligence to the bottlenecks that have historically slowed discovery. He called it a "country of geniuses in a datacenter"—millions of AI researchers working in parallel, never sleeping, never forgetting, building on each other's work at the speed of silicon rather than synapses.
The essay moved methodically through five domains: biology and health (where Amodei predicted near-elimination of most disease and doubled human lifespans), neuroscience and mental health (cures for depression, addiction, PTSD), economic development (potential 20% annual GDP growth in developing nations), peace and governance (tools for democracy to outcompete authoritarianism), and the meaning of work in a world where AI handles most cognitive labor.
What made the essay unusual was not its optimism—techno-utopian visions are common enough—but its specificity and its source. This was not a futurist speculating from the outside. This was someone who had helped build GPT-2 and GPT-3, who understood the architecture of these systems at the deepest level, who had left OpenAI specifically because he took AI risks seriously. When Amodei said that AI could accelerate biological discovery by an order of magnitude, he was making an engineering judgment, not a wish.
The essay introduced a framework that runs through this book: "marginal returns to intelligence." In some domains—pure mathematics, certain kinds of software engineering—progress is bottlenecked primarily by cognitive capability. Add more intelligence, get more progress, roughly linearly. In other domains—clinical trials, infrastructure construction, political change—intelligence helps but hits diminishing returns quickly because of physical constraints, human factors, or irreducible time requirements. Understanding which domains have high returns to intelligence, and which face other bottlenecks, is essential for predicting where AI will transform the world fastest.
Amodei's essay was also honest about uncertainty. He expressed less confidence in the governance section than in biology. He acknowledged that economic benefits don't automatically distribute fairly. He worried about meaning and purpose in a world where human comparative advantage might disappear entirely.
This book is an attempt to take that essay seriously—to expand its framework across every major domain of human activity, to add the detail and nuance that a single essay couldn't contain, and to trace the second-order effects that ripple out from each transformation. Where Amodei sketched in broad strokes, this book attempts to fill in the picture: the specific technologies, the key players, the likely trajectories, the risks and guardrails.
The title of this book reflects the core thesis inherited from that essay: that the direction of change is now set, even if the outcomes remain uncertain. AI will transform biology, energy, transportation, education, governance, and work. The question is not whether but how—and whether humanity manages the transition wisely.
Amodei concluded his essay by invoking Iain M. Banks' fictional "Culture"—a post-scarcity civilization where AI and humans coexist in a framework of "compassion, cooperation, fairness, and autonomy." He suggested this wasn't naive utopianism but "a winning strategy" that tends to emerge when pursued systematically.
This book shares that qualified optimism. The future described in these pages is not inevitable in its details, but the direction of change is. The tools to build a dramatically better world are emerging now. Whether humanity uses them well depends on choices made in the next decade by governments, companies, researchers, and citizens.
Those choices are what this book hopes to inform.
Foreword
The Pattern People Keep Missing
In January 2007, Steve Jobs walked onto a stage in San Francisco and introduced the iPhone. The technology press was skeptical. Steve Ballmer, then CEO of Microsoft, laughed at the idea that a $500 phone with no keyboard could capture meaningful market share.¹ BlackBerry's co-CEO dismissed it as a niche product for consumers, irrelevant to serious business users.² Nokia, which controlled 40% of the global handset market, saw no threat.³
Ten years later, Nokia's phone division had been sold for parts, BlackBerry had retreated to software, and more than two billion people carried smartphones—small glass rectangles containing more computing power than NASA used to reach the moon.⁴ The smartphone had become the primary interface to the internet for most of humanity. It had spawned industries that didn't exist in 2007: the app economy, ride-sharing, mobile payments, social media as a dominant cultural force. Instagram, Uber, Venmo, TikTok—none of these were imaginable when Jobs held up that first device.
This is the pattern people keep missing.
Observers see a new technology, assess it against current needs with current assumptions, and conclude it will be a modest improvement over what exists. They fail to anticipate the second-order effects—the ways a technology reshapes behavior, which reshapes markets, which reshapes infrastructure, which reshapes what becomes possible next. People think linearly about exponential processes.
The mRNA vaccines that arrived in December 2020 followed the same arc. Messenger RNA as a therapeutic platform had been researched for over three decades, dismissed repeatedly as too unstable, too immunogenic, too difficult to deliver.⁵ Katalin Karikó, who would later win the Nobel Prize for her foundational work, had her grant applications rejected and her academic position demoted in the 1990s because the field seemed like a dead end.⁶ Then, in less than a year, mRNA vaccines went from concept to emergency authorization to hundreds of millions of administered doses—a timeline that would have seemed like science fiction to any vaccine developer in 2019.⁷
Reusable rockets followed the pattern. For decades, aerospace engineers treated expendable launch vehicles as an economic inevitability. Rockets were too complex, the thermal and mechanical stresses too severe, the inspection and refurbishment costs too high.⁸ When SpaceX announced plans to land and refly orbital-class boosters, the established players were dismissive. In 2014, the deputy prime minister then overseeing Russia's space industry dismissed reusability as economically unviable.⁹ By 2024, SpaceX had landed Falcon 9 boosters more than 300 times and driven launch costs down by more than an order of magnitude.¹⁰
In each case, the skeptics weren't wrong about the difficulty. They were wrong about the trajectory. They assessed a technology at one point on an exponential curve and assumed it would remain there.
Why This Time Is Different
This book argues that artificial intelligence represents a discontinuity—a shift not merely in degree but in kind.
Previous technological revolutions enhanced human physical capability (steam, electricity, internal combustion) or extended human reach (telegraph, telephone, internet). AI is different because it automates cognition itself: the ability to perceive, reason, plan, create, and discover. And unlike previous cognitive tools—calculators, databases, search engines—modern AI systems can improve the tools used to build better AI systems. This creates a feedback loop that previous technologies lacked.
Consider what happened between 2020 and 2025. In 2020, GPT-3 demonstrated that large language models could generate coherent text, answer questions, and perform simple reasoning tasks.¹¹ It was impressive but limited—prone to hallucination, unable to use tools, brittle in complex situations. By 2024, frontier models could write and debug code, analyze scientific papers, generate images and video from descriptions, reason through multi-step problems, and operate as "agents" that plan and execute sequences of actions.¹² The improvement was not incremental; capabilities that seemed years away arrived in months.
This acceleration is now feeding back into every domain of science and engineering:
- Drug discovery: AI systems are generating novel molecular candidates and predicting protein structures that would have taken years to determine experimentally.¹³
- Materials science: Machine learning models are screening millions of potential compounds for batteries, catalysts, and superconductors, compressing decades of trial-and-error into months.¹⁴
- Software development: AI coding assistants have changed how software is written, with studies reporting productivity improvements ranging from roughly 25% to 55% depending on the task and the developer's experience.¹⁵
- Scientific literature: Researchers now use AI to synthesize findings across thousands of papers, identifying connections that no individual human could track.¹⁶
Each of these applications, in turn, accelerates the development of better AI. More capable AI produces better chips, better algorithms, better training data, and better tools for AI researchers. The loop tightens.
This book is an attempt to map where that loop leads—across biology and medicine, energy and transportation, space exploration, robotics, education, governance, media, and the underlying physics that sets ultimate limits. It is organized by domain because the changes in each area are distinct, but the underlying driver is the same: the automation of discovery and the compression of innovation timelines.
The title, Inevitable, is chosen deliberately. This does not mean that any specific outcome is predetermined, or that the future will be good. It means that the direction of change—toward greater machine capability, faster iteration, and deeper automation of cognitive work—is now baked into the trajectory of research and investment across every major economy. The question is not whether AI transforms society but how, how fast, and for whose benefit.
That question is still open. And the window for shaping the answer may be shorter than many assume.
How to Read This Book
A Map, Not a Prediction
This book covers an enormous amount of territory: nine major domains, sixty-four chapters, and hundreds of specific developments. No reader needs to absorb all of it. This section offers guidance on how to navigate.
Confidence Tiers
Throughout the book, claims are flagged using three confidence levels:
Near-term likely describes developments that are already underway and scaling predictably. The technology exists; the question is adoption speed, not feasibility. Examples include AI-assisted medical imaging, solar photovoltaic cost declines, and autonomous vehicle deployment in geofenced urban areas. These developments will likely be widespread within five to seven years.
Plausible describes developments that require breakthroughs which appear achievable but aren't guaranteed. The scientific principles are understood; the engineering is hard. Examples include practical fusion energy, artificial general intelligence, and significant human lifespan extension through biological intervention. These exist on a seven-to-fifteen-year horizon, if progress continues—a real "if."
Wild describes possibilities that would require multiple unlikely breakthroughs, unknown physics, or societal shifts that seem improbable today. Examples include molecular-scale replicators, brain uploading, and faster-than-light travel. These are included not as predictions but as boundary markers. They help define the outer edge of what might be possible and highlight where current physics places hard limits.
When a claim is not flagged explicitly, treat it as near-term likely unless the surrounding context indicates otherwise.
The Acceleration Ladder
One mental model runs through every chapter: the Acceleration Ladder. This describes the stages through which AI transforms a domain:
- Tools: AI assists human work. Humans remain in control; AI provides leverage. (Example: AI suggests code completions; the programmer decides what to accept.)
- Automation: AI handles defined tasks end-to-end without human intervention for each step. (Example: AI triages customer support tickets and resolves routine issues automatically.)
- Autonomy: AI makes decisions within bounded domains, adapting to novel situations. Humans set objectives and constraints; AI determines methods. (Example: An autonomous vehicle navigates city streets without human input.)
- Discovery: AI generates novel knowledge, designs, or solutions that humans did not anticipate. (Example: AI proposes a new drug molecule or materials structure that no human researcher conceived.)
- Infrastructure: AI-designed and AI-operated systems become the substrate on which other activities depend. (Example: Power grids, supply chains, and communication networks are optimized and managed by AI systems.)
- Societal Rewrite: Institutions, laws, economic structures, and cultural norms adapt to the new capabilities. (Example: Education systems shift from credentialing to continuous skill verification; work becomes decoupled from income.)
Different domains sit at different rungs today. Medical imaging is between Tools and Automation. Self-driving cars are between Automation and Autonomy. Drug discovery is reaching toward Discovery. Understanding where a domain sits on this ladder helps calibrate expectations for what comes next.
Chapter Structure
Each chapter follows a consistent format:
- 2026 Snapshot: A grounded assessment of where the domain actually stands today—not where enthusiasts claim it is, not where skeptics say it's stuck, but the realistic state of the art. This section anchors the discussion in present reality.
- Notable Players: A non-exhaustive list of companies, institutions, and research groups shaping the domain. These are examples, not endorsements. The landscape will shift; the point is to give readers a sense of who is working on what.
- Next-Decade Trajectories: The core of each chapter. What developments are likely, plausible, or wild? What are the key dependencies and bottlenecks? What breakthroughs would change the timeline?
- Second-Order Impacts: The downstream effects that most analyses miss. If autonomous trucks succeed, what happens to truck stops, motels along highways, and the tax base of rural towns? If personalized medicine becomes standard, what happens to insurance underwriting? This section traces the ripples.
- Risks and Guardrails: What could go wrong? What are the failure modes, unintended consequences, and potential for misuse? And what mechanisms—technical, institutional, or regulatory—might mitigate those risks?
Reading Paths
You don't have to read this book front to back. Here are some suggested paths:
For the time-constrained: Read Chapter 1 (the core thesis), then skip to any domain that interests you. Each domain section is designed to stand alone. Finish with Part IV (societal implications) and Chapter 64 (what humanity should build).
For policymakers and regulators: Focus on Chapters 1, 27 (AI security and alignment), 32-36 (government), 57-60 (societal challenges), and 63 (guardrails). Skim the domain chapters for context on what you'll be regulating.
For investors and business strategists: Chapter 1 for the thesis, then the domain sections most relevant to your portfolio. Pay particular attention to the "Notable Players" and "Second-Order Impacts" sections. Chapter 62 (business strategy) synthesizes implications.
For technologists and researchers: The technical content is in the domain chapters. The historical "1926-2026" chapters at the start of each section provide context on how society got here. Chapter 61 (individual adaptation) may help readers think about their own careers.
For general readers: Start at the beginning. The book is structured to build understanding progressively, and the Foreword and Part I establish the framework that makes the later chapters more meaningful.
A Note on Uncertainty
To be direct about what this book cannot do: it cannot tell you the future.
The history of technological prediction is a history of confident wrongness. In 1903, the New York Times predicted that flying machines might be achieved in one to ten million years; the Wright brothers flew two months later.¹⁷ In 1995, astronomer Clifford Stoll wrote a Newsweek essay explaining why the internet would never be important for commerce, community, or daily life.¹⁸ In 2007, Steve Ballmer gave the iPhone "no chance" of gaining significant market share.
This book will be wrong about things. Some developments described as "plausible" will turn out to be impossible. Some dismissed as "wild" will arrive sooner than anyone expects. Some entire domains will be transformed by breakthroughs that no one is currently anticipating.
What this book hopes to provide is not a correct forecast but a useful framework—a way of thinking about technological change that helps readers update their models as new evidence arrives. The goal is to be less wrong over time, not to be right in advance.
With that caveat, the exploration begins.
Glossary Preview
This section provides brief definitions of key terms that appear throughout the book. A comprehensive glossary is available in Appendix A.
Artificial Intelligence (AI): Software systems that perform tasks typically requiring human intelligence—perception, reasoning, learning, planning, and language understanding. In this book, "AI" generally refers to modern machine learning systems, particularly large neural networks.
Artificial General Intelligence (AGI): A hypothetical AI system with human-level cognitive abilities across all domains—capable of learning any intellectual task a human can learn, transferring knowledge between domains, and operating autonomously in open-ended situations. AGI does not exist today; whether and when it will arrive is contested.
Artificial Superintelligence (ASI): A hypothetical AI system substantially exceeding human cognitive abilities in virtually all domains, including scientific creativity, strategic planning, and social manipulation. ASI is a more distant and speculative concept than AGI.
Alignment: The challenge of ensuring AI systems pursue goals that match human intentions and values. An "aligned" AI does what its designers and users actually want, including in edge cases and novel situations. Alignment is considered one of the central unsolved problems in AI safety.
Autonomy: The ability of a system to operate without human intervention for each decision or action. Autonomy exists on a spectrum; a thermostat is minimally autonomous, a self-driving car is substantially autonomous, and a hypothetical AGI would be highly autonomous.
Foundation Model: A large AI model trained on broad data (text, images, code, etc.) that can be adapted to many downstream tasks. GPT-4, Claude, and similar systems are foundation models. The "foundation" metaphor suggests these models serve as a base layer on which more specialized applications are built.
Digital Twin: A virtual replica of a physical system—a machine, building, human body, or city—that can be used for simulation, monitoring, and optimization. Digital twins are increasingly used in manufacturing, healthcare, and urban planning.
Brain-Computer Interface (BCI): Technology enabling direct communication between the brain and external devices, bypassing normal sensory and motor pathways. BCIs range from non-invasive (EEG headsets) to invasive (implanted electrode arrays).
Provenance: The documented origin and history of a piece of data, content, or artifact. In an era of synthetic media, provenance systems help verify that content is authentic and unmanipulated.
Multi-omics: The combined analysis of multiple biological data types—genomics (DNA), transcriptomics (RNA), proteomics (proteins), metabolomics (metabolites), and others—to understand biological systems comprehensively.
Liquid Biopsy: A blood test that detects fragments of DNA, RNA, or proteins shed by tumors or other tissues, enabling cancer detection and monitoring without invasive tissue sampling.
Senolytic: A drug or treatment that selectively destroys senescent cells—damaged cells that accumulate with age and contribute to inflammation and tissue dysfunction. Senolytics are a leading approach in longevity research.
ISRU (In-Situ Resource Utilization): The practice of using materials found at a destination (such as the Moon or Mars) rather than transporting everything from Earth. ISRU is considered essential for sustainable space exploration.
Zero Trust: A cybersecurity model that assumes no user, device, or system should be trusted by default, even inside a network perimeter. Every access request must be verified, and privileges are minimized.
Tokenization (in finance): The representation of assets (securities, real estate, commodities) as digital tokens on a blockchain or similar system, potentially enabling fractional ownership and faster settlement.
For additional terms and expanded definitions, see Appendix A.
Endnotes — Front Matter
1. Ballmer's comments were made in a January 2007 USA Today interview and a CNBC appearance, where he stated: "There's no chance that the iPhone is going to get any significant market share. No chance." The quote became emblematic of incumbent blindness to platform shifts.
2. Jim Balsillie, co-CEO of Research in Motion (BlackBerry), initially dismissed the iPhone as a consumer device irrelevant to enterprise markets. BlackBerry's market share collapsed from over 20% in 2009 to under 1% by 2016.
3. Nokia's leadership repeatedly underestimated the smartphone transition. Internal memos later revealed that Nokia engineers had prototyped touchscreen devices years earlier but were overruled by management focused on hardware margins.
4. The Apollo Guidance Computer had approximately 74 KB of memory and operated at 0.043 MHz. A modern smartphone has roughly 100,000 times more processing power and one million times more memory.
5. For a comprehensive history of mRNA technology development, see Katalin Karikó and Drew Weissman's Nobel Prize lecture (2023) and Damian Garde and Jonathan Saltzman's reporting in STAT News.
6. Karikó was demoted from her position at the University of Pennsylvania in 1995 after her grant applications were repeatedly rejected. She continued her research at reduced rank until her breakthrough modifications to mRNA enabled therapeutic applications.
7. The Pfizer-BioNTech vaccine went from sequence design to emergency use authorization in approximately 11 months—a process that typically takes 10-15 years for traditional vaccines.
8. For decades, the dominant view was expressed by figures like Mike Griffin (NASA Administrator): "Reusability is not a panacea." The Space Shuttle's reusability had proven more expensive than expendable alternatives due to refurbishment costs.
9. Dmitry Rogozin, then Deputy Prime Minister overseeing Russia's space industry, dismissed SpaceX's reusability plans as "a marketing stunt" in 2014.
10. SpaceX achieved its 300th booster landing in 2024. Cost per kilogram to low Earth orbit dropped from approximately $54,500 (Space Shuttle) to under $2,720 (Falcon 9), with Starship promising further reductions.
11. Brown, T., et al. "Language Models are Few-Shot Learners." NeurIPS 2020. The GPT-3 paper demonstrated that scaling language models produced emergent capabilities not present in smaller models.
12. By late 2024, frontier models from OpenAI, Anthropic, Google DeepMind, and others demonstrated coding abilities that could pass technical interviews, scientific reasoning that could contribute to research, and agentic capabilities for multi-step task execution.
13. Notable examples include AlphaFold's protein structure predictions (DeepMind, 2020-2022), which provided structures for over 200 million proteins, and AI-designed drug candidates entering clinical trials from companies including Insilico Medicine, Recursion, and others.
14. Google DeepMind's GNoME project (2023) identified 2.2 million new crystal structures, roughly 380,000 of them predicted to be stable, and Microsoft's MatterGen (2024) demonstrated AI generation of novel materials, representing orders of magnitude acceleration over traditional screening.
15. GitHub's studies of Copilot adoption showed 55% faster task completion for supported tasks. Academic studies have found varying productivity impacts depending on task type and developer experience.
16. Tools like Elicit, Semantic Scholar, and Consensus use AI to synthesize findings across scientific literature, enabling researchers to identify relevant work across thousands of papers.
17. "Flying Machines Which Do Not Fly," New York Times, October 9, 1903. The Wright brothers' first powered flight occurred on December 17, 1903.
18. Stoll, Clifford. "The Internet? Bah!" Newsweek, February 27, 1995. Stoll later acknowledged his predictions were spectacularly wrong.
19. Amodei, Dario. "Machines of Loving Grace." October 2024. https://www.darioamodei.com/essay/machines-of-loving-grace. The essay that inspired this book's framework for understanding AI's potential to accelerate progress across multiple domains.