The Regulatory Dilemma
Governments face an impossible-seeming challenge: regulate technologies they don't fully understand, moving faster than legislative processes allow, with global reach that crosses jurisdictions, in ways that protect citizens without stifling the innovation that creates prosperity.
Get it wrong in one direction, and harmful technologies proliferate unchecked. Get it wrong in the other, and innovation moves elsewhere, leaving citizens without benefits while failing to prevent harms.
This chapter is for policymakers, regulators, and those who influence them. Not a complete policy manual—each country, each technology, each moment requires specific analysis—but a framework for thinking about governance in an era of rapid technological change.
The honest answer: regulatory frameworks adequate to AI don't yet exist anywhere. Every government is improvising. Some improvisations are better than others.
The Governance Challenge
Why This Is Hard
Speed mismatch: Technology changes in months. Legislation takes years.
Knowledge gap: Regulators don't have technical expertise. Technologists don't understand governance.
Jurisdiction limits: Technology is global. Regulation is national (or sub-national).
Dual-use: The same technology enables both benefit and harm. Hard to permit one while preventing the other.
Uncertainty: Can't regulate what you can't predict. And prediction is unreliable.
What Governments Must Do
Protect citizens: From harm, discrimination, exploitation, manipulation.
Enable prosperity: Innovation creates jobs, growth, improved living standards.
Maintain security: National security, public safety, critical infrastructure.
Preserve rights: Privacy, due process, free expression, democratic participation.
Coordinate internationally: Global problems require global solutions.
The Balance
Too loose: Harms proliferate. Public trust erodes. Eventual backlash creates over-correction.
Too tight: Innovation stifled. Benefits lost. Activity moves to less regulated jurisdictions.
The goal: Enable beneficial uses, prevent harmful uses, build trust, maintain competitiveness.
2026 Snapshot — Current Governance
Major Frameworks
European Union: AI Act (2024). Risk-based approach. Strict requirements for high-risk systems. Bans on certain uses.
United States: Executive Order on AI (2023). Reporting requirements for frontier models. Sector-specific regulation. State-level action (California SB 1047 debate).
China: Algorithm regulation. Generative AI rules. Deep synthesis provisions. Security focus.
United Kingdom: Pro-innovation approach. AI Safety Institute. Principles-based regulation.
Other: Varied approaches. Singapore pragmatic and largely voluntary. India holding off on binding rules. Brazil, Canada, Japan developing frameworks.
Current Gaps
Enforcement: Laws exist but enforcement capacity limited.
International coordination: Fragmented. Different approaches in different jurisdictions.
Technical capacity: Governments lack the in-house ability to evaluate systems.
Pace: Regulation already behind technology.
Emerging Issues
Foundation models: How to regulate general-purpose systems?
Liability: Who is responsible when AI causes harm?
Competition: AI concentration in few companies. Antitrust implications.
Labor: Job displacement. Worker protection. Social safety net.
Security: AI-enhanced threats. Autonomous weapons. Critical infrastructure.
Principles for AI Governance
Risk-Based Approach
Not all AI is equal: A customer-service chatbot, a medical diagnosis system, and an autonomous weapon pose very different risks.
Proportional regulation: Higher risk justifies more regulation. Lower risk requires less.
Categories: EU AI Act approach—unacceptable risk (banned), high risk (strict requirements), limited risk (transparency), minimal risk (no requirements).
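The tiering logic above can be sketched as a toy lookup. The four tier names follow the EU AI Act categories just described; the example use cases and the `classify()` helper are hypothetical illustrations, not the legal test the Act actually applies.

```python
# Toy sketch of EU-AI-Act-style risk tiering. Tier names follow the four
# categories described above; the mapping and defaults are illustrative only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no requirements"

# Hypothetical mapping from use case to tier, for illustration only.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; unknown uses default to the cautious HIGH tier
    pending a real legal assessment."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

for uc in ("social_scoring", "spam_filter", "novel_use"):
    print(uc, "->", classify(uc).value)
```

The point of the sketch is the default: under a risk-based regime, an unclassified use should fall into the more demanding tier until assessed, not the lighter one.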
Adaptability
Flexible frameworks: Principles more than specific rules. Allow adaptation as technology changes.
Sunset provisions: Regulations that expire and must be renewed. Forces regular review.
Sandbox approaches: Controlled environments for testing new technologies.
Agency discretion: Empowered regulators can adapt faster than legislators.
Technical Grounding
Technical capacity: Governments need people who understand technology.
AI safety institutes: Dedicated capability for evaluating systems.
Industry engagement: Collaboration with those who build systems.
Academic input: Research informs policy.
International Coordination
Standards harmonization: Common standards reduce fragmentation.
Mutual recognition: Accept each other's certifications.
Information sharing: Share threat intelligence, best practices.
Treaty frameworks: For most serious risks (weapons, biosecurity).
Policy Tools
Regulation
Ex ante requirements: Requirements before deployment. Testing, certification, disclosure.
Ex post liability: Responsibility for harms that occur. Litigation, penalties.
Sectoral rules: Industry-specific requirements (healthcare, finance, transportation).
Horizontal rules: General requirements applying across sectors.
Standards
Technical standards: Specifications for how systems should work.
Process standards: Requirements for how systems are developed.
Voluntary vs. mandatory: Standards can be adopted voluntarily or required by regulation.
International standards bodies: ISO, IEEE, and others developing AI standards.
Incentives
Tax benefits: Encourage beneficial uses, safety investment.
Procurement: Government as customer. Conditions on purchase.
Research funding: Direct innovation toward beneficial directions.
Prizes: Rewards for achieving safety or beneficial milestones.
Information
Disclosure requirements: Transparency about AI use.
Labeling: Marking AI-generated content.
Reporting: Incident reporting, performance reporting.
Public registers: Databases of AI systems in use.
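The four information tools above can be combined in a single record. Below is a minimal sketch of what one public-register entry for a deployed AI system might hold; every field name here is hypothetical, not drawn from any actual register schema.

```python
# Minimal sketch of a public-register entry combining the information tools
# above: disclosure, labeling, incident reporting, and registration.
# Field names are hypothetical illustrations, not a real register schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RegisterEntry:
    system_name: str
    deployer: str
    purpose: str                  # disclosure: what the system is used for
    generates_content: bool       # labeling: does output need AI-content marks?
    incidents_reported: int = 0   # reporting: running incident count
    tags: list = field(default_factory=list)

entry = RegisterEntry(
    system_name="benefits-triage-v2",
    deployer="Example Agency",
    purpose="prioritize benefit applications for human review",
    generates_content=False,
    tags=["public-sector", "high-risk"],
)
print(json.dumps(asdict(entry), indent=2))
```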
Specific Policy Areas
Safety and Security
Frontier model oversight: Requirements for most capable systems. Testing before release.
Critical infrastructure: AI in power grid, water, communications. Resilience requirements.
Cybersecurity: AI in both attack and defense. Requirements for defensive AI; restrictions on offensive uses.
Autonomous systems: Requirements for human oversight. Restrictions on autonomy in certain contexts.
Rights and Values
Discrimination and bias: Requirements for testing, mitigation. Prohibited discriminatory uses.
Privacy: Data protection. Consent requirements. Minimization.
Due process: Explainability for consequential decisions. Appeal rights.
Free expression: Balance content moderation with expression. Transparency requirements.
Economic Issues
Competition: Antitrust enforcement. Prevent excessive concentration.
Labor: Transition support. Retraining programs. Safety net.
Intellectual property: AI and copyright. Training data rights. Generated content ownership.
Trade: Export controls. Investment screening. Technology transfer.
International Dimensions
AI in warfare: Autonomous weapons governance. International humanitarian law.
Global governance: UN processes. AI summits. Multilateral frameworks.
Development: AI benefits for developing countries. Capacity building.
Implementation Challenges
Enforcement
Detection: How do you know if systems comply?
Capacity: Regulators need technical capability.
Resources: Enforcement requires funding, staffing.
Cross-border: Enforcement against foreign entities difficult.
Adaptation
Keeping pace: Technology changes faster than regulation.
Revision processes: How to update rules efficiently.
Avoiding ossification: Rules that made sense yesterday may not make sense tomorrow.
Coordination
Domestic: Multiple agencies with overlapping jurisdiction.
International: Different countries, different approaches.
Public-private: Collaboration with industry.
Legitimacy
Democratic input: Public participation in rulemaking.
Expertise: Technical decisions require expertise the public may lack.
Accountability: Who is responsible for regulatory decisions?
Models and Examples
European Union Approach
Strengths: Comprehensive. Risk-based. Strong enforcement mechanisms.
Weaknesses: Potentially over-broad. May stifle innovation. Slow to adapt.
Key features: AI Act risk categories. Strict requirements for high-risk systems. Bans on social scoring and certain biometric uses.
US Approach
Strengths: Flexible. Allows innovation. Strong enforcement in some areas.
Weaknesses: Fragmented. Gaps in coverage. Sector-specific inconsistencies.
Key features: Executive action. Sector-specific (FDA, FTC, SEC). State-level variation.
UK Approach
Strengths: Pro-innovation. Adaptable. Strong safety focus (AISI).
Weaknesses: Less comprehensive. May lack enforcement teeth.
Key features: Principles-based. AI Safety Institute. Regulatory sandboxes.
Chinese Approach
Strengths: Fast implementation. Comprehensive digital governance.
Weaknesses: Authoritarian framing. May not translate to democracies.
Key features: Algorithm regulation. Content requirements. Security focus.
Lessons
No perfect model: Each approach has tradeoffs.
Context matters: What works in one country may not work in another.
Evolution: All approaches are evolving. Early days.
The Path Forward
Near-Term Priorities (2026-2028)
Build technical capacity: AI safety institutes. Technical staff. Expert networks.
Implement existing frameworks: Enforce what's on the books. Learn from implementation.
Address immediate risks: Deepfakes, discrimination, security vulnerabilities.
International coordination: Harmonize where possible. Share information.
Medium-Term Development (2028-2032)
Comprehensive frameworks: Fill gaps in coverage. Address cross-cutting issues.
Adaptive mechanisms: Build in revision processes. Sandbox approaches.
Enforcement capacity: Resources for meaningful enforcement.
Liability regimes: Clear rules for responsibility when AI causes harm.
Long-Term Preparation (2032+)
AGI governance: If/when more capable systems emerge. Unprecedented challenges.
International institutions: Global governance for global risks.
Constitutional questions: AI and democratic governance. Rights in AI context.
Transformation governance: Managing societal transition at scale.
For Specific Government Actors
Legislators
Educate: Learn enough to legislate wisely. Technical briefings. Expert input.
Principled frameworks: Broad principles that agencies can implement flexibly.
Oversight: Hold agencies accountable. Require reporting.
Resources: Fund enforcement, research, capacity building.
Regulators
Build expertise: Hire technical talent. Partner with researchers.
Engage industry: Understand what you're regulating. Avoid capture.
Experiment: Sandboxes, pilots, adaptive approaches.
Coordinate: With other agencies, other countries.
Local Government
Understand impact: AI affects your community. Economic, social, service delivery.
Procurement power: Conditions on AI purchased.
Service delivery: AI in government services. Equity, access, oversight.
Workforce: Prepare workers for transition.
Risks and Guardrails
Regulatory Capture
Risk: Regulated entities shape regulation to favor themselves.
Guardrails: Diverse input. Transparency. Revolving door restrictions. Public interest representation.
Stifling Innovation
Risk: Over-regulation drives innovation elsewhere. Benefits lost.
Guardrails: Evidence-based. Proportional. Regular review. Innovation-friendly approaches where appropriate.
Under-Protection
Risk: Harms occur due to insufficient regulation.
Guardrails: Precautionary approach for serious risks. Enforcement resources. Liability regimes.
Fragmentation
Risk: Patchwork of inconsistent rules. Compliance impossible.
Guardrails: Federal preemption where appropriate. International harmonization. Standards.
The Deeper Questions
What Is Government For?
In a world of rapid technological change, what is government's role? Protector? Enabler? Distributor? All of these?
The technologies described in this book could create abundance or disaster depending on how they're governed. Government's choices matter more than ever.
Who Decides?
Democratic legitimacy requires public input. But AI governance requires technical expertise the public may lack. How can society balance democratic accountability with technical competence?
What Values?
Regulation embeds values. Which values? Safety, innovation, equity, liberty—these sometimes conflict. Whose values prevail?
Global vs. Local
Technology is global. Governance is local. This mismatch isn't new, but AI intensifies it. How can the world govern globally while respecting sovereignty?
Conclusion
Governments have regulated technologies before—nuclear power, automobiles, pharmaceuticals, telecommunications. AI is not categorically different, but it is different in important ways: faster-moving, more general-purpose, more distributed, more global.
The framework for AI governance that will eventually emerge doesn't yet exist. Every government is improvising, and some improvisations are better than others. The principles are clearer than the implementation: risk-based, adaptable, technically grounded, internationally coordinated.
Getting this right matters enormously. Over-regulation could stifle beneficial innovation, driving it to places with fewer protections. Under-regulation could allow harms to proliferate, eroding public trust and eventually triggering backlash. The goal is the difficult middle: enabling benefits while preventing harms, moving fast enough to keep pace while careful enough to avoid mistakes.
No government has figured this out. But some are trying harder than others, investing more in capacity, thinking more carefully about tradeoffs, engaging more seriously with the challenge. These efforts deserve support.
The next decade will determine whether AI governance succeeds or fails. The stakes—for citizens, for innovation, for democracy, for humanity—are as high as any policy challenge governments face.
Endnotes — Chapter 63
- EU AI Act: adopted 2024; risk-based framework classifying AI systems by risk level; includes requirements for high-risk systems and bans on certain uses.
- US Executive Order on AI (October 2023): establishes reporting requirements for frontier models and directs agency action across government; the US AI Safety Institute was subsequently established within NIST.
- UK AI Safety Institute: established November 2023; evaluates frontier AI models; first dedicated government AI safety capability.
- China AI regulation: includes Algorithm Recommendation Regulation (2022), Deep Synthesis Provisions (2023), Generative AI Measures (2023).
- Regulatory sandboxes: controlled environments where new technologies can be tested with reduced regulatory requirements; used in UK, Singapore, others.
- Technical standards: ISO/IEC 42001 (AI management system); IEEE standards for AI ethics; NIST AI Risk Management Framework.
- Liability regimes: EU Artificial Intelligence Liability Directive proposes presumption of causality for certain AI harms; US relies on existing tort law.
- Export controls: US controls on advanced AI chips to China; controls on frontier model weights proposed; balancing security and commerce.
- International AI governance: UN High-Level Advisory Body on AI; G7 Hiroshima AI Process; various bilateral and multilateral discussions.
- Democratic AI governance: challenge of technical complexity vs. democratic accountability; various proposals for citizen engagement in AI policy.