The Choice
This book has described what could happen. This chapter asks what should happen.
Not everything that can be built should be built. Not every technological path leads to flourishing. The technologies described in these pages could create unprecedented prosperity or unprecedented suffering—and often the same technology could do either, depending on how it's developed, deployed, and governed.
Humanity has choices. The technologies are not destiny. How they are developed, who controls them, what purposes they serve—these are decisions, made by people, shaped by values, influenced by institutions.
This chapter attempts the impossible: to articulate principles for choosing which futures to build. It will inevitably reflect particular values, which not everyone shares. But the conversation must happen. The choices are being made whether principles are articulated or not. Better to make them consciously.
The Stakes
What Humanity Could Build
Disease eliminated: AI accelerates biology. Cancer, Alzheimer's, aging itself—conquered.
Poverty eliminated: Abundance produced. Basic needs met for all. Suffering reduced.
Knowledge expanded: Scientific discovery accelerated. Understanding deepened. Capability extended.
Human potential enhanced: Education personalized. Opportunity equalized. Each person developing fully.
Sustainable civilization: Clean energy. Restored environment. Humanity thriving within planetary limits.
What Humanity Could Also Build
Surveillance state: Every action monitored. Every thought predicted. Freedom eliminated.
Permanent inequality: Technology's benefits captured by a few. Most left behind.
Manufactured consent: AI-generated persuasion. Manipulation at scale. Truth dissolves.
Autonomous destruction: Weapons that decide. Wars that escalate. Accidents that cascade.
Human obsolescence: Nothing humans do matters. No role, no purpose, no meaning.
The Same Technologies
These aren't different technologies. They're different applications of the same underlying capabilities, reflecting different choices. AI that cures disease could design bioweapons. AI that educates could manipulate. AI that creates abundance could be owned by a few.
The technology doesn't choose. Humanity does.
Principles for Choosing
Human Flourishing
The test: Does this technology help humans flourish? Not just survive—flourish. Develop capabilities. Experience wellbeing. Live meaningful lives.
Complications: What is flourishing? Who defines it? Can people even agree?
Minimum: Don't prevent flourishing. Don't cause suffering.
Broad Benefit
The test: Do benefits accrue broadly? Not just to builders, owners, early adopters—but to everyone, or at least to many.
Complications: Tradeoffs between innovation speed and equity. Early concentration may fund later distribution.
Minimum: Mechanisms for eventual broad access. Not permanent capture by a few.
Preserved Agency
The test: Do people retain meaningful choice? Can they understand, influence, opt out?
Complications: Some choices must be constrained (society doesn't let people choose unsafe food). Where should the lines be drawn?
Minimum: Autonomy in matters that define identity and life direction.
Reversibility
The test: Can humanity course-correct? If this path proves wrong, can direction be changed?
Complications: Some advances can't be un-invented. Reversibility isn't always possible.
Minimum: Caution proportional to irreversibility. Greater care with permanent changes.
Accountability
The test: If something goes wrong, is someone responsible? Can harms be addressed?
Complications: Complex systems diffuse responsibility. AI decisions hard to attribute.
Minimum: Clear chains of responsibility. Ability to seek remedy.
What Humanity Should Build
Health and Wellbeing
AI-accelerated medicine: Faster drug discovery. Personalized treatment. Disease prevention.
Why: Reducing suffering is almost universally valued. Health enables everything else.
Guardrails: Equitable access. Privacy protection. Safety standards.
Education and Opportunity
Personalized learning: Every person educated to their potential. Barriers removed.
Why: Education enables flourishing. Inequality is often rooted in unequal opportunity.
Guardrails: Avoid manipulation. Preserve human connection. Equitable access.
Sustainable Abundance
Clean energy: Fusion, advanced solar, storage. Power without environmental cost.
Sustainable production: AI-optimized efficiency. Circular economy. Reduced waste.
Why: Prosperity within planetary limits. Legacy to future generations.
Guardrails: Equitable distribution. Just transition for displaced workers.
Scientific Discovery
AI-assisted research: Faster hypothesis generation. Better experiment design. Pattern recognition in complex data.
Why: Knowledge has intrinsic value. Discovery enables all other benefits.
Guardrails: Responsible disclosure. Dual-use awareness. Broad access to findings.
Democratic Capacity
AI-enhanced governance: Better information for decisions. Efficient services. Citizen engagement.
Why: Democratic governance serves human flourishing. Technology should strengthen it, not weaken it.
Guardrails: Prevent surveillance state. Transparency. Human oversight.
What Humanity Should Not Build
Autonomous Lethal Systems
What: Weapons that select and engage targets without meaningful human control.
Why not: Lowers barriers to conflict. Removes human moral judgment from life-death decisions. Escalation risk.
Exceptions: Defensive systems under meaningful human oversight may be justifiable.
Mass Surveillance Infrastructure
What: Comprehensive monitoring of all citizens. Behavior prediction. Social control.
Why not: Incompatible with freedom. Enables authoritarianism. Chills expression.
Exceptions: Targeted surveillance with judicial oversight for specific threats.
Manipulation Systems
What: AI designed to manipulate beliefs, behaviors, emotions at scale. Persuasion without transparency.
Why not: Undermines autonomy. Destroys informed consent. Corrupts democracy.
Exceptions: Transparent persuasion (advertising with disclosure) may be acceptable.
Engineered Pathogens
What: Synthetic biology designed for harm. More transmissible and more lethal than natural pathogens.
Why not: Existential risk. No defensive use. Benefits don't justify risks.
Exceptions: None apparent.
Unaligned Superintelligence
What: AI systems with capabilities exceeding human intelligence without robust alignment.
Why not: Could pursue goals harmful to humanity. Potentially irreversible.
Exceptions: With sufficient alignment confidence—but that confidence doesn't yet exist.
The Gray Zones
Most technologies don't fall cleanly into "should build" or "should not build." They require nuanced judgment.
Genetic Enhancement
Possibilities: Eliminating genetic disease. Enhancing capabilities. Choosing children's traits.
Arguments for: Reducing suffering. Expanding human potential. Parental choice.
Arguments against: Inequality. Playing God. Unknown consequences. Consent of the enhanced.
The judgment: Disease elimination broadly supported. Enhancement more contested. Germline changes most controversial.
Life Extension
Possibilities: Longer healthspan. Dramatically extended lifespan. Near-immortality.
Arguments for: Life is good. More of it is better. Individual choice.
Arguments against: Population implications. Inequality. Meaning without mortality.
The judgment: Healthspan extension broadly valuable. Radical extension requires careful consideration of social implications.
Brain-Computer Interfaces
Possibilities: Restore function for the disabled. Enhance capabilities. Direct brain connection.
Arguments for: Medical benefit. Human enhancement. New experiences.
Arguments against: Privacy of thought. Manipulation risk. Identity questions.
The judgment: Medical applications strongly supported. Enhancement more contested. Strong privacy protections essential.
Geoengineering
Possibilities: Solar radiation management. Large-scale carbon removal. Climate intervention.
Arguments for: May be necessary. Could reduce suffering. Buys time.
Arguments against: Unknown consequences. Moral hazard. Governance challenges.
The judgment: Research valuable. Deployment requires unprecedented governance. Carbon removal preferable to solar radiation management.
Artificial General Intelligence
Possibilities: Systems with capabilities at or beyond human level across domains.
Arguments for: Enormous potential benefits. Scientific breakthrough. Could solve currently intractable problems.
Arguments against: Alignment not solved. Could be catastrophic. Irreversible.
The judgment: Research valuable with safety focus. Deployment requires confidence in alignment that doesn't yet exist.
Who Decides
The Problem
Democratic ideal: Decisions affecting everyone should involve everyone.
Technical reality: Most people don't understand these technologies.
Corporate control: Those who build often decide.
Government role: Varies by country, as does capacity.
Current Decision-Making
Technology companies: Product decisions. Research directions. Safety investment.
Investors: Funding allocation. What gets built.
Governments: Regulation. Sometimes direct investment.
Researchers: What to study. What to publish.
Public: Consumer choices. Votes. Advocacy.
What Should Change
More public input: Technology choices are social choices; the public should have a voice in them.
More expertise in governance: Governments need technical capacity to make informed decisions.
More responsibility on builders: Those who create have obligations beyond profit.
More international coordination: Global technologies require global governance.
Mechanisms
Technology assessment: Expert bodies informing democratic deliberation.
Public deliberation: Citizen assemblies, public consultations on major choices.
Ethics review: Institutional review of research and development decisions.
International frameworks: Treaties, norms, coordination mechanisms.
Living with Uncertainty
What Remains Unknown
Technical capability: What AI will be able to do. How fast.
Social impact: How technologies will affect employment, relationships, meaning.
Unintended consequences: What cannot be foreseen.
Values evolution: What future generations will value.
Approaches to Uncertainty
Precautionary: When in doubt, don't. Caution with irreversible choices.
Proactionary: Proceed but with monitoring. Learn by doing.
Optionality: Maintain ability to change direction. Don't foreclose paths.
Reversibility: Prefer reversible choices. Avoid lock-in. The short sketch below makes this logic concrete.
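Why should reversibility and optionality carry weight even before anyone knows which path is correct? Decision theory gives a compact answer. What follows is an illustrative sketch in standard expected-utility notation, not a formalism this book has developed: let $a$ range over available actions, let $\theta$ be the unknown state of the world, and let $u(a, \theta)$ be the resulting payoff. Compare committing before learning $\theta$ with deciding after:

\[
V_{\text{now}} = \max_{a}\, \mathbb{E}_{\theta}\!\left[ u(a, \theta) \right],
\qquad
V_{\text{wait}} = \mathbb{E}_{\theta}\!\left[ \max_{a}\, u(a, \theta) \right].
\]

Committing now fixes one action for every possible state; waiting lets the action depend on what is learned. Because an expectation of maxima is never smaller than a maximum of expectations, $V_{\text{wait}} \ge V_{\text{now}}$ always holds, and the gap is the option value of keeping the choice open. An irreversible commitment forfeits exactly that term. Real decisions involve partial learning and costs of delay, but the direction of the inequality is the point: caution should scale with irreversibility.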
Humble Decision-Making
Current thinking might be wrong: Today's values, knowledge, and judgments may be mistaken.
Learn and adapt: Build in feedback loops. Revise decisions as information changes.
Preserve diversity: Don't bet everything on one path. Maintain alternatives.
Intergenerational humility: Future generations may value differently.
The Path Forward
Near-Term Actions (2026-2032)
Build safety: Invest in alignment, security, robustness. Make safety integral to development.
Govern wisely: Develop regulatory frameworks. Build technical capacity in government.
Distribute benefits: Mechanisms for broad access. Transition support for displaced workers.
Maintain oversight: Humans in the loop for consequential decisions. Accountability.
Manage dual-use risk: Understand how beneficial research can be misused. Develop norms for responsible development.
Medium-Term Development (2032-2040)
Demonstrate alignment: Build confidence that AI systems pursue intended goals.
Develop institutions: International coordination. Effective governance.
Address disruption: Social contracts for the AI era. New models for work, meaning, distribution.
Expand access: Ensure the technologies described in this book benefit all, not just a few.
Long-Term Preparation (2040+)
Transformative technology governance: Frameworks in place before AGI or other transformative capabilities emerge.
New social models: Society organized around human flourishing, not just production.
Sustainable trajectory: Humanity thriving within planetary limits.
Preserved agency: Humans retain meaningful control over destiny.
The Responsibility
To Current Generations
Those building: Ethical obligation to consider consequences. Safety not an afterthought.
Those governing: Obligation to protect citizens while enabling benefits.
Those investing: Responsibility for what capital enables.
Everyone: Pay attention. Engage. Vote. Advocate.
To Future Generations
Humanity serves as trustee: The world left behind is inherited, not chosen, by those who follow.
Long-term thinking: Choices now shape options later. Preserve possibility.
Sustainability: Not just environmental—social, institutional, civilizational.
To Humanity
Existential stakes: Some choices could end human flourishing—or even humanity.
Cooperation imperative: Global challenges require global cooperation.
Shared fate: What happens to humanity matters more than what happens to any nation or company.
The Deeper Questions
What Is Human Flourishing?
This chapter has used "flourishing" as a criterion without defining it. That's deliberate; the concept is contested. But the question must be engaged. Does flourishing require struggle? Achievement? Relationship? Meaning? Can humanity flourish without mortality? Without work? Without constraints?
The technologies in this book force these ancient questions into practical urgency.
What Do People Owe Each Other?
If technology creates abundance, how should it be shared? If it creates capability, should enhancement be universal? If it extends life, does everyone have that right?
The social contract must be renegotiated for technological conditions unprecedented in human history.
What Makes Humans Human?
If AI can think, create, decide—what makes human thought, creation, decision special? If machines can do what humans do, what is humanity's role? What should humans do? What do they want to do?
These aren't just philosophical questions anymore. They're questions about how to organize society.
What Future Does Humanity Want?
Among all possible futures technology enables—and it enables many—which should be chosen? Not which will happen, but which should? Not what can be built, but what should be built?
This book has described trajectories, possibilities, probabilities. This chapter asks: which possibilities should be pursued?
Conclusion
Every technology in this book represents a choice. The trajectories described are not inevitable—they're possibilities, shaped by decisions made by people and institutions.
Humanity could build a world where AI accelerates discovery that cures disease, extends life, ends poverty, educates everyone, sustains the environment, and expands human flourishing in ways barely imaginable.
Humanity could also build a world where AI concentrates power, eliminates privacy, manipulates at scale, creates weapons beyond human control, and renders human purpose obsolete.
Same technologies. Different choices.
The principles for choosing seem clear enough in outline: pursue human flourishing, share benefits broadly, preserve agency, maintain reversibility, ensure accountability. The application is harder—gray zones everywhere, tradeoffs unavoidable, uncertainty pervasive.
But the choices must be made. They're being made—by researchers deciding what to study, by companies deciding what to build, by investors deciding what to fund, by governments deciding what to permit, by citizens deciding what to accept.
The question isn't whether to choose. It's whether to choose consciously, informed by values and principles, or to drift into whatever future technological momentum and power dynamics produce.
This book has been about what's coming. This chapter has been about what should come. The difference is the most important one there is.
Humanity gets to decide.
Endnotes — Chapter 64
- Human flourishing: concept derived from Aristotle's eudaimonia; contemporary accounts include objective list theories (certain goods constitute flourishing), subjective theories (flourishing is what people prefer), and capabilities approaches (Amartya Sen, Martha Nussbaum).
- Autonomous weapons: UN discussions ongoing; Campaign to Stop Killer Robots advocates for a ban; no treaty yet but growing international concern.
- Surveillance technology: comprehensive monitoring capabilities exist; China's social credit system most extensive; democratic countries face debates about appropriate limits.
- Dual-use research of concern: National Science Advisory Board for Biosecurity (NSABB) provides guidance; H5N1 transmissibility debate (2011-2012) was a formative case.
- AI alignment: field of research on ensuring AI systems pursue intended goals; approaches include RLHF, Constitutional AI, interpretability, formal verification.
- Technology assessment: Office of Technology Assessment (US, 1972-1995) provided this function; no current equivalent; some calls for revival.
- Citizen assemblies: deliberative democracy mechanisms; Ireland used successfully for constitutional questions; proposed for AI governance.
- Precautionary principle: approach emphasizing caution with uncertain risks; enshrined in EU law; criticized by some as innovation-blocking.
- Proactionary principle: response to precautionary principle; emphasizes benefits of innovation and learning by doing; associated with Max More.
- Intergenerational ethics: obligations to future generations; relevant to long-term technology choices; philosophical foundation includes work by Derek Parfit, Samuel Scheffler.