The Speed of War
In February 2024, the Ukrainian military released footage of FPV (first-person view) drones chasing and destroying Russian tanks. Small, cheap quadcopters—costing perhaps $500 each—were defeating armored vehicles worth millions. Both sides were deploying thousands of drones per month.¹
This is not science fiction. It is current warfare.
The battlefields of Ukraine, Nagorno-Karabakh, and Gaza have demonstrated that autonomous and semi-autonomous weapons are no longer theoretical. Drones that can fly themselves, loitering munitions that search for targets, and missile systems with automated targeting are already in use. The human role is shifting from operator to supervisor—and in some systems, to mere authorizer of actions the machine has already decided.
The trajectory is clear: weapons will become more autonomous, faster, smaller, and cheaper. Swarms of drones could overwhelm defenses. AI could make targeting decisions in milliseconds. Cyber weapons could disable infrastructure before any human realizes an attack is underway.
This chapter examines the transformation of warfare by autonomous systems: what's already deployed, what's in development, and the profound risks posed by machines that can kill without direct human control. The stakes could not be higher—the decisions made about autonomous weapons in the next decade may shape whether wars remain bounded human conflicts or become cascading machine processes too fast to stop.
2026 Snapshot — The Autonomous Weapons Landscape
Drones and Loitering Munitions
Ukraine has become a proving ground:
- Both sides deploy FPV drones in massive numbers
- Commercial drones adapted for reconnaissance and attack
- Loitering munitions (Switchblade, Lancet) that search for and destroy targets
- AI-assisted targeting in some systems
- Production exceeds consumption—industrialized drone warfare
Other conflicts:
- Turkey's Bayraktar TB2 demonstrated effectiveness in Nagorno-Karabakh (2020) and Libya
- Houthi drones and missiles against ships and Saudi infrastructure
- Israeli systems in Gaza; Hamas drones
The pattern: Low-cost, attritable platforms challenge expensive traditional weapons. Drones can be lost; pilots cannot.
Automated Defense Systems
Missile defense:
- Patriot, THAAD, Iron Dome—human-supervised but largely automated
- Engagement timelines too fast for human decision in many scenarios
- Algorithms select targets and calculate intercepts
Close-in weapons systems (CIWS):
- Phalanx, Goalkeeper—fully automatic point defense
- Human sets conditions; machine executes within them
- Designed to engage incoming missiles faster than humans can
Counter-drone systems:
- Electronic warfare, kinetic interceptors, directed energy
- Increasingly autonomous as drone swarm threat grows
Ground and Maritime Systems
Ground robots:
- Bomb disposal robots widely deployed
- Armed ground robots exist but are rarely used (Russia's Uran-9, among others)
- Autonomous logistics vehicles (supply convoy automation)
Maritime:
- Autonomous surface vessels for patrol and mine countermeasures
- Autonomous underwater vehicles for reconnaissance and mine warfare
- Ukraine's naval drones have struck Russian ships
Command and Control
AI in military operations:
- Target identification and prioritization
- Intelligence fusion and analysis
- Logistics optimization
- Battle management decision support
The human-machine interface: Increasingly, humans oversee and approve rather than directly control. The meaningful decision may be "activate system" rather than "engage this target."
Notable Players
Major Powers
United States:
The United States has the largest investment and the most advanced programs. These include:
- DARPA research across autonomy, swarms, AI
- Replicator initiative: Accelerated acquisition of autonomous systems
- Collaborative Combat Aircraft: Autonomous wingmen for manned fighters
- AUKUS: Partnership with UK and Australia on undersea autonomous systems
- Loyal Wingman programs with multiple contractors
Doctrine: Emphasis on human control, but definitions of "meaningful human control" are debated.
China:
Significant investment in autonomous weapons:
- Extensive drone programs (military and commercial)
- AI integration in weapons systems
- "Intelligentized warfare" as doctrinal concept
- Large-scale testing and deployment
- Export of armed drones to various countries
Russia:
Challenged by sanctions but developing:
- Loitering munitions (Lancet) in Ukraine
- Autonomous ground vehicles (with mixed results)
- Doctrinal interest in robotic combat
- Capacity limited by technology access
Others:
- Turkey: Major drone exporter (Bayraktar)
- Iran: Shahed loitering munitions supplied to Russia, Houthis
- Israel: Advanced autonomous systems; combat experience
- South Korea: Border defense systems; Samsung SGR-A1
Defense Industry
Traditional contractors investing heavily:
- Lockheed Martin, Boeing, Northrop Grumman, Raytheon (US)
- BAE Systems (UK)
- MBDA (Europe)
- Elbit, Rafael, IAI (Israel)
- Baykar, TAI (Turkey)
New entrants:
- Anduril Industries (Lattice AI, various autonomous systems)
- Shield AI (autonomous aviation)
- Skydio (reconnaissance drones)
- Palantir (AI for defense)
Commercial technology spillover: AI developed for commercial use applies to military; drone technology is dual-use.
The Technology of Autonomous Weapons
Levels of Autonomy
Human-in-the-loop: Human makes engagement decision for each target. Traditional weapons.
Human-on-the-loop: System can engage autonomously; human monitors and can intervene. Many current systems.
Human-out-of-the-loop: System operates without human oversight during operation. Controversial; limited deployment.
The spectrum: Most systems fall somewhere between these categories. A Patriot missile battery has human supervision but automated engagement. The distinctions can be subtle.
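The three oversight modes can be sketched as a simple decision model. The enum names and the `may_engage` function below are purely illustrative—they are not drawn from any actual weapons system—but they make the key asymmetry concrete: in-the-loop defaults to "no fire" absent approval, while on-the-loop defaults to "fire" absent a veto.

```python
from enum import Enum, auto

class OversightMode(Enum):
    """Illustrative taxonomy of human oversight levels (not an official standard)."""
    HUMAN_IN_THE_LOOP = auto()      # human approves each engagement
    HUMAN_ON_THE_LOOP = auto()      # system engages; human may veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human oversight during operation

def may_engage(mode: OversightMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Whether a hypothetical system may engage a target under each mode."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return human_approved       # default is "no": engagement needs explicit approval
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed     # default is "yes": engagement proceeds unless overridden
    return True                     # out-of-the-loop: no human gate at all

# The asymmetry is the point: same silence from the human, opposite outcomes.
print(may_engage(OversightMode.HUMAN_IN_THE_LOOP, human_approved=False, human_vetoed=False))  # False
print(may_engage(OversightMode.HUMAN_ON_THE_LOOP, human_approved=False, human_vetoed=False))  # True
```

This is why the on-the-loop category draws so much scrutiny: if the human fails to act in time, the default behavior is engagement.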
Targeting
Target recognition: Computer vision identifies vehicles, people, equipment. Trained on military datasets.
Target selection: Algorithms prioritize targets based on threat, value, rules of engagement.
Confirmation: In current systems, humans typically confirm target selection. In faster scenarios, this becomes difficult.
Autonomous targeting: Systems that select and engage without per-target human approval. This is where concern concentrates.
Swarms
The concept: Large numbers of cheap autonomous systems coordinating to overwhelm defenses.
Advantages:
- Numbers exceed defensive capacity
- Loss of individuals doesn't stop the swarm
- Coordination enables complex behavior
- Cost-effective if individual units are cheap
Challenges:
- Communication and coordination in contested environments
- Preventing fratricide (swarm attacking itself)
- Assuring behavior meets commander intent
- Testing and validation of swarm behavior
Status: Research and demonstration; not yet large-scale deployment.
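The coordination problem behind swarms is often studied through decentralized flocking rules—Reynolds' classic "boids" model of separation, alignment, and cohesion. The sketch below is a minimal, generic version of that model (the weights and radius are arbitrary, and it describes no military system); what it illustrates is that complex collective behavior emerges from each agent applying only local rules, with no central controller to destroy.

```python
# Minimal 2D flocking step (Reynolds-style boids): separation, alignment, cohesion.
# Purely illustrative of decentralized coordination; parameters are arbitrary.
# Assumes at least two agents.

def flock_step(positions, velocities, sep_radius=1.0, weights=(1.0, 0.125, 0.01), dt=0.1):
    n = len(positions)
    w_sep, w_ali, w_coh = weights
    new_pos, new_vel = [], []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        # Cohesion: steer toward the centroid of the other agents.
        cx = sum(p[0] for j, p in enumerate(positions) if j != i) / (n - 1)
        cy = sum(p[1] for j, p in enumerate(positions) if j != i) / (n - 1)
        # Alignment: match the average velocity of the other agents.
        ax = sum(v[0] for j, v in enumerate(velocities) if j != i) / (n - 1)
        ay = sum(v[1] for j, v in enumerate(velocities) if j != i) / (n - 1)
        # Separation: move away from agents that are too close.
        sx = sy = 0.0
        for j in range(n):
            if j != i and (px - positions[j][0])**2 + (py - positions[j][1])**2 < sep_radius**2:
                sx += px - positions[j][0]
                sy += py - positions[j][1]
        vx += w_sep * sx + w_ali * (ax - vx) + w_coh * (cx - px)
        vy += w_sep * sy + w_ali * (ay - vy) + w_coh * (cy - py)
        new_vel.append((vx, vy))
        new_pos.append((px + vx * dt, py + vy * dt))
    return new_pos, new_vel
```

Each agent reads only its neighbors' state; losing any individual leaves the rules intact—which is exactly why swarms resist attrition and why validating their aggregate behavior is hard.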
Speed
The speed problem: As systems get faster, human decision-making can't keep pace.
Examples:
- Hypersonic missiles: Minutes from launch to impact; limited decision time
- Cyber attacks: Execute in seconds
- Drone swarms: Engagement timeline measured in seconds
Implications: Either delegate decisions to machines or accept vulnerability. Neither is comfortable.
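The arithmetic behind the speed problem is stark. The scenario figures below are rough, round-number assumptions chosen for illustration (not operational data), but the ratios hold regardless of exact values: a constant-speed threat's decision window is simply distance over speed.

```python
# Back-of-envelope decision windows (illustrative round numbers, not operational data).

MACH_1_KM_S = 0.343  # approximate speed of sound at sea level, km/s

def decision_window_s(distance_km: float, speed_mach: float) -> float:
    """Seconds from detection to impact, assuming constant speed and straight-line flight."""
    return distance_km / (speed_mach * MACH_1_KM_S)

# Hypothetical detection ranges and speeds:
scenarios = {
    "subsonic cruise missile, detected at 300 km (Mach 0.8)": decision_window_s(300, 0.8),
    "hypersonic weapon, detected at 300 km (Mach 8)":         decision_window_s(300, 8.0),
    "fast FPV drone, detected at 2 km (Mach 0.15)":           decision_window_s(2, 0.15),
}
for name, t in scenarios.items():
    print(f"{name}: ~{t:.0f} s")
```

A tenfold increase in speed cuts the window tenfold: roughly eighteen minutes for the subsonic case becomes under two minutes for the hypersonic one—before subtracting sensor processing, communication, and human reaction latency.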
Risks and Concerns
Escalation
Speed and escalation: If systems act faster than humans can decide, conflicts could escalate before leaders can intervene.
Automated retaliation: If defensive systems are authorized to respond automatically, a mistake or provocation could trigger unintended escalation.
Flash wars: The cyber/autonomous equivalent of flash crashes—cascading automated actions before humans understand what's happening.
Accountability
Who is responsible? When an autonomous weapon kills wrongly:
- The commander who deployed it?
- The programmer who wrote the code?
- The company that built it?
- The nation that authorized it?
The accountability gap: If no human made the specific decision, attribution of blame becomes difficult.
Legal frameworks: International humanitarian law (laws of war) assumes human decision-makers. Autonomous weapons challenge this.
Proliferation
Dual-use technology: Most components of autonomous weapons are commercially available. AI, drones, sensors are not inherently military.
Low barriers: Non-state actors, terrorists, and rogue states can acquire or build autonomous weapons more easily than nuclear or advanced conventional weapons.
The democratization of lethality: Cheap drones with explosive payloads enable attacks that previously required significant military capability.
Accidents and Errors
Technical failure: Autonomous systems can fail in unpredictable ways—sensor errors, software bugs, adversarial attacks.
Friendly fire: Autonomous weapons could target own forces or civilians if identification fails.
Unpredictable behavior: Complex AI systems can exhibit unexpected behavior, especially in novel situations.
Ethical Concerns
Is autonomous killing inherently wrong? Some argue machines should never make life-and-death decisions over humans, regardless of technical capability.
Dignity: Being killed by a machine may violate human dignity in ways that differ from human-caused death.
The morality of risk transfer: Using machines to reduce own casualties may make war more likely if costs are low.
International Efforts
CCW Discussions
Convention on Certain Conventional Weapons (CCW): UN forum discussing "lethal autonomous weapons systems" (LAWS) since 2014.
Progress: Limited. Discussions continue; no binding treaty.
Positions:
- Some states (Austria, Brazil, others) call for preemptive ban
- Some (US, Russia, others) oppose new treaty, prefer principles
- Many in between, seeking definitions and norms
The Definition Problem
What counts as an autonomous weapon? Definitions vary:
- Any weapon with autonomous features?
- Only weapons that select and engage targets without human control?
- Systems that can identify targets but require human authorization?
The challenge: Too narrow a definition misses emerging risks; too broad covers existing systems like missile defense.
Key Principles Discussed
Meaningful human control: Humans must have meaningful control over use of force. But what is "meaningful"?
Compliance with IHL: Autonomous weapons must comply with distinction (targeting only combatants), proportionality, and precaution.
Accountability: Clear accountability for actions of autonomous systems.
Status: Principles discussed but not agreed as binding requirements.
Military AI Ethics
US DOD AI principles (2020):
- Responsible
- Equitable
- Traceable
- Reliable
- Governable
Implementation: Efforts to embed ethics in development; results debated.
Other nations: Various approaches; no global standard.
The Path Forward
Near-Term Likely (2026-2032)
Drone warfare intensifies: Lessons from Ukraine drive massive investment. All major militaries field large drone fleets.
Human-on-the-loop becomes standard: Autonomous systems with human oversight become normal. Per-target human authorization erodes in time-critical scenarios.
Defensive autonomy expands: Counter-drone, missile defense, cyber defense increasingly automated out of necessity.
No comprehensive treaty: CCW discussions continue without breakthrough. Some bilateral or regional agreements possible.
Proliferation accelerates: More nations and non-state actors acquire capable systems.
Plausible (2032-2040)
Swarms deployed: Coordinated autonomous swarms become operational in some militaries. Defense against swarms is major challenge.
Ground robots in combat: Robots in humanoid and other forms engage in ground combat, initially in specialized roles.
Speed threshold crossed: Some engagements are too fast for meaningful human decision-making. Autonomy is the only option.
Crisis stability concerns: Risk of automated escalation recognized as serious issue. Arms control efforts intensify.
Wild Trajectory (2040+)
AI-driven warfare: Combat becomes primarily machine vs. machine, with humans in strategic direction only.
Autonomous escalation: Some conflict escalates beyond human control, demonstrating risks.
Global governance emerges: After crisis or catastrophe, serious international limits on autonomous weapons are established.
Or the arms race continues: No effective governance; major powers compete in increasingly autonomous and dangerous systems.
Arms Control Challenges
Verification
How do you verify limits on autonomous weapons?
- Software is hard to inspect
- Autonomous capability can be added to existing systems
- Dual-use technology is pervasive
- Testing can be concealed
Contrast with nuclear: Nuclear weapons require fissile material, which can be detected and monitored. Autonomous weapons have no equivalent signature.
Defining Limits
What should be banned or limited?
- Full autonomy in target selection and engagement?
- Specific weapon types?
- Certain contexts (urban areas, against civilians)?
The slippery slope: Where is the line between automated missile defense and autonomous offensive weapons?
Incentive Problems
First-mover advantage: Nations that develop capable systems first may gain military advantage.
Defensive framing: Most developments are described as defensive, even when offensive applications are obvious.
Competitive pressure: If adversaries are developing capabilities, unilateral restraint seems risky.
Possible Approaches
Norms rather than bans: Agreed principles for responsible use; transparency measures; confidence-building.
Specific prohibitions: Bans on particular systems (e.g., fully autonomous targeting of humans) while permitting others.
Liability frameworks: Clear accountability for autonomous weapon use; incentivizing responsible development.
Verification innovation: New methods for monitoring AI and autonomous systems.
The AI Acceleration Factor
AI is the enabler: Everything in this chapter depends on AI—target recognition, decision-making, coordination, adaptation.
Dual-use acceleration: AI developed for commercial purposes (vision, language, planning) applies directly to military systems.
The pace of change: AI capability is advancing faster than governance can adapt. Systems that seem science fiction become possible within years.
Autonomous vs. augmented: The same AI that enables autonomous weapons could also enhance human decision-making in warfare—better information, faster analysis, more options. The choice of how AI is used matters.
Conclusion
Autonomous weapons are not a future concern—they are a present reality. Drones, loitering munitions, and automated defense systems are already in use. The question is not whether autonomous weapons will exist but what limits, if any, will govern their development and use.
The risks are profound: escalation faster than humans can control, accountability gaps when machines kill, proliferation to actors unconstrained by law or ethics, and the possibility of conflicts that cascade beyond human ability to stop.
The next decade will be decisive. Either international norms and agreements establish meaningful limits, or the technology will advance beyond the point where control is possible. The choices made by major powers—and the AI developers, defense contractors, and military officials who shape those choices—will determine whether humanity retains meaningful control over the use of force, or cedes it to machines of human creation.
This is not a technical question. It is a moral and political one. The technology enables both paths. Which path humanity takes is a matter of collective choice.
Endnotes — Chapter 26
1. Ukraine FPV drone deployment statistics from various military analyses; both Ukrainian and Russian forces were deploying thousands per month by late 2023.
2. Bayraktar TB2 demonstrated combat effectiveness in Nagorno-Karabakh (2020), Libya, and Ukraine; relatively low cost compared to manned aircraft.
3. Iron Dome engagement rate during major rocket attacks requires automated response; human oversight of the system but not of individual engagements.
4. DARPA programs include Gremlins (recoverable drone swarms), ACE (AI dogfighting), and various autonomous vehicle initiatives.
5. DOD Replicator initiative, announced 2023, aimed at fielding thousands of autonomous systems within 18-24 months.
6. "Intelligentized warfare" is a Chinese doctrinal concept emphasizing AI integration across military operations; discussed in official publications.
7. Anduril Industries, founded 2017; Lattice AI operating system deployed with various autonomous systems; contracts with DOD and allies.
8. CCW discussions on LAWS began 2014; the Group of Governmental Experts has met annually; no binding instrument agreed.
9. DOD Ethical Principles for Artificial Intelligence adopted 2020; implementation through the Responsible AI Strategy and various programs.
10. "Meaningful human control" concept discussed in academic literature and at the CCW; no agreed definition; central to debates about autonomous weapons governance.