Courts, Justice, and 'Minority Report' Temptations

The Slowest Branch

Justice delayed is justice denied. The phrase is centuries old, but the problem persists.

In the United States, the median federal civil case takes 10 months to resolve. Criminal cases average 8 months. State courts are often slower. Backlogs number in the millions. Defendants wait months in jail before trial. Civil plaintiffs watch years pass before resolution.

The reasons are many: insufficient judges and staff, complex procedures, litigation tactics, scheduling difficulties, the sheer volume of cases. But the underlying constraint is human attention. Every motion must be read. Every argument must be heard. Every decision must be reasoned. There are only so many hours in a day, and judges are mortal.

AI offers the prospect of acceleration—drafting orders, researching precedents, triaging cases, even predicting outcomes. But AI in the justice system also raises the specter of "Minority Report": algorithmic prediction used to preemptively judge, to categorize people as risks before they've done anything wrong.

This chapter examines both possibilities: the legitimate uses of AI to make justice faster and more accessible, and the dangerous temptations of prediction that could undermine the very concept of individual judgment.


2026 Snapshot — Courts Today

The Backlog Crisis

Federal courts: ~760,000 pending cases. Civil cases average 10 months to disposition. Criminal cases 8 months.¹

State courts: Tens of millions of cases annually. Criminal backlogs severe in many jurisdictions. Civil cases often wait years.

Immigration courts: Over 3 million pending cases. Wait times measured in years.

Bankruptcy: ~400,000 filings annually. Chapter 7 cases relatively quick; Chapter 11 can take years.

Access to Justice Gap

Civil legal needs unmet: More than 80% of low-income Americans' civil legal needs go unaddressed. Eviction, family law, and consumer disputes often proceed without representation.²

Self-represented litigants: Majority of family law cases have at least one unrepresented party. Navigating courts without a lawyer is extremely difficult.

Cost barriers: Average hourly rate for lawyers exceeds $300. Litigation costs make many disputes economically impossible to pursue.

Geographic barriers: Rural areas have limited legal services. Courthouses may be far away.

Technology Adoption

Electronic filing: Widespread in federal courts; variable in state courts. COVID accelerated adoption.

Video hearings: Became routine during pandemic. Continued for many proceeding types. Access vs. due process debates.

Legal research AI: Westlaw, LexisNexis, ROSS Intelligence (defunct), Casetext, others offer AI-assisted research. Widely used by lawyers.

Document review: AI-assisted document review standard in large litigation. Predictive coding for discovery.

Case management: Electronic dockets, scheduling systems, case tracking. Implementation varies.

Current AI in Courts

Research assistance: AI tools help lawyers find relevant cases, statutes, regulations.

Document generation: Contract drafting, motion templates, form completion.

Predictive analytics: Some tools predict case outcomes for litigation strategy. Not used by courts for decisions.

Pretrial risk assessment: COMPAS and Arnold Ventures' Public Safety Assessment (PSA) are used in some jurisdictions to assess risk for bail and sentencing decisions. Controversial.³


Notable Players

Court Systems

Federal judiciary: Administrative Office of US Courts; Case Management/Electronic Case Files (CM/ECF); PACER public access (with fees).

State court leaders:

  • Utah: Online Dispute Resolution for small claims
  • Michigan: Expanding online court services
  • California: Innovations in self-help services

International leaders:

  • Singapore: eLitigation platform; online dispute resolution
  • UK: Online Civil Money Claims; divorce applications
  • Estonia: Automated court case assignment

Research and analytics:

  • Westlaw Edge (Thomson Reuters): AI-assisted research
  • Lexis+ (LexisNexis): AI features for legal research
  • Casetext (acquired by Thomson Reuters): CoCounsel AI assistant
  • vLex: AI-powered legal research

Document and practice management:

  • Clio, MyCase, PracticePanther: Practice management
  • ContractPodAi, Ironclad: Contract analysis
  • Luminance: AI for due diligence

Online dispute resolution:

  • Tyler Technologies/Modria: ODR platforms
  • Court Innovations (Matterhorn): Online self-help and virtual court services

Academic and Policy

RAND Corporation: Research on algorithmic tools in criminal justice.

AI Now Institute: Critical analysis of AI in government, including justice.

Partnership on AI: Guidelines for AI in criminal justice.

American Bar Association: Resolutions and guidance on AI in legal practice.


AI Applications in Courts

Research and Preparation

Legal research: AI can search millions of cases, identify relevant precedents, and summarize holdings. What took hours takes minutes.

Case summarization: AI can digest lengthy filings, depositions, and documents. Judges and clerks can absorb information faster.

Citation checking: AI can verify that cited cases say what lawyers claim they say. Catching errors and misrepresentations.

Current state: Widely used by lawyers; beginning to be used by court staff. Generally uncontroversial.
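
To make the retrieval step concrete, here is a toy sketch of precedent search as ranked similarity. Commercial tools use trained embeddings over millions of opinions; this sketch substitutes bag-of-words cosine similarity, and the case names and snippets are invented.

```python
# Minimal sketch of precedent retrieval as ranked similarity search.
# Real tools use trained embeddings; this toy uses bag-of-words cosine
# similarity over invented case snippets.
import math
from collections import Counter

cases = {
    "Miller v. State": "pretrial detention requires individualized hearing",
    "Greene v. City":  "zoning variance denial reviewed for abuse of discretion",
    "In re Alvarez":   "bail determination must consider ability to pay",
}

def vectorize(text):
    """Word-count vector for a snippet of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = vectorize("constitutional limits on pretrial detention and bail")
ranked = sorted(cases, key=lambda c: cosine(query, vectorize(cases[c])), reverse=True)
for name in ranked:
    print(f"{cosine(query, vectorize(cases[name])):.2f}  {name}")
```

The shape of the operation is what matters: score every candidate against the query and return the best matches. Production systems refine both the text representation and the ranking, but the core is the same.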

Case Management

Triage and routing: AI can categorize cases, identify urgency, route to appropriate judges or processes. Some cases need more attention; some less.

Scheduling optimization: AI can optimize hearing schedules, reduce conflicts, and estimate the time needed. Courts are, in effect, complex scheduling problems (see the sketch below).

Status monitoring: AI can track case progress, flag delays, identify bottlenecks. Dashboards for court administrators.

Current state: Beginning adoption. Low controversy; clear benefits.
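
A minimal sketch of the scheduling idea, assuming a greedy assignment of each hearing to the first courtroom with no time conflict. Real systems add judge availability, case priorities, and statutory deadlines; the cases and times here are invented.

```python
# Greedy hearing scheduler: assign each requested hearing to the first
# courtroom whose existing bookings don't overlap it. Hypothetical data.
from datetime import time

# Requested hearings: (case, start, end), sorted by start time.
requests = [
    ("State v. Doe",   time(9, 0),   time(10, 0)),
    ("Smith v. Jones", time(9, 30),  time(10, 30)),
    ("In re Roe",      time(10, 0),  time(11, 0)),
    ("Lee v. Park",    time(10, 15), time(11, 0)),
]

rooms = {"Courtroom 1": [], "Courtroom 2": []}

def conflicts(start, end, booked):
    """Two hearings conflict if their time intervals overlap."""
    return any(start < b_end and b_start < end for _, b_start, b_end in booked)

for case, start, end in requests:
    room = next((r for r, booked in rooms.items()
                 if not conflicts(start, end, booked)), None)
    if room:
        rooms[room].append((case, start, end))
        print(f"{case}: {room}, {start}-{end}")
    else:
        print(f"{case}: no courtroom free; carried to the next day")
```

Even this toy version shows why the problem compounds: the fourth hearing cannot be placed and rolls forward, which is how backlogs accrete, one conflict at a time.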

Judicial Support

Draft orders: AI can generate first drafts of routine orders. Judge reviews and modifies. Not replacement but acceleration.

Research memos: AI can prepare research on legal questions for the judge. Law clerk function augmented or partially replaced.

Precedent analysis: AI can identify how similar cases were decided. Inform (not determine) current decisions.

Current state: Experimental. Some judges using quietly. Concerns about transparency and reliance.

Alternative Dispute Resolution

Online dispute resolution (ODR): AI-assisted platforms for resolving disputes without a court appearance. eBay's resolution center pioneered the model; governments are now adopting it.

Mediation support: AI can identify issues, propose settlement ranges, facilitate negotiation. Human mediator augmented.

Small claims automation: For simple money disputes, AI can assess claims and suggest resolutions. Human judge reviews.

Current state: Growing adoption, especially for small claims. Controversial for more significant matters.

Predictive Applications (Controversial)

Outcome prediction: AI can predict how cases will be decided based on historical data. Useful for settlement? Or creating self-fulfilling prophecies?

Recidivism prediction: AI predicts likelihood of reoffending. Used in some jurisdictions for bail, sentencing, parole. COMPAS controversy.

Risk assessment: AI categorizes defendants or litigants by risk level. Affects pretrial detention, resource allocation, supervision level.

Current state: Deployed in some jurisdictions; heavily criticized; some banned.


The Prediction Problem

How Predictive Tools Work

Historical data: Algorithm trained on past cases—who was arrested, who was detained, who reoffended.

Feature extraction: Factors like prior arrests, age, employment, address, charge type become inputs.

Statistical correlation: Algorithm learns which factors correlate with outcomes (like rearrest).

Risk score: Output is score or category (low/medium/high risk) that informs human decision.
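
Compressed into code, the four steps above fit in a few lines. A minimal sketch using scikit-learn's logistic regression; the features, toy data, and risk cutoffs are hypothetical illustrations, not the design of COMPAS, the PSA, or any real instrument.

```python
# Minimal sketch of a risk-assessment pipeline. Hypothetical features,
# toy data, and arbitrary cutoffs; not any real instrument's design.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [prior_arrests, age, employed(0/1)] -> rearrested(0/1).
# Eight toy rows; a real tool trains on thousands of past cases.
X = np.array([
    [0, 35, 1], [1, 42, 1], [5, 22, 0], [3, 25, 0],
    [0, 29, 1], [2, 31, 0], [6, 19, 0], [1, 50, 1],
])
y = np.array([0, 0, 1, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

def risk_category(prior_arrests, age, employed):
    """Map one defendant's features to a probability and a coarse label."""
    p = model.predict_proba([[prior_arrests, age, employed]])[0, 1]
    if p < 0.33:
        return p, "low"
    if p < 0.66:
        return p, "medium"
    return p, "high"

p, label = risk_category(prior_arrests=2, age=27, employed=0)
print(f"predicted rearrest probability {p:.2f} -> {label} risk")
```

Everything contested about these tools lives outside code like this: what the training data actually records, whether "rearrested" is a fair proxy for "reoffended," and what decisions the output label is allowed to trigger.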

The Bias Problem

Garbage in, garbage out: If historical data reflects biased policing and sentencing, algorithm learns bias.

Proxy discrimination: Even without race as input, factors like zip code, employment, and prior arrests correlate with race.

Feedback loops: If predictions affect decisions (detention, charging), they affect future data—self-reinforcing.

Measurement problems: "Rearrest" is not "reoffending." The system measures what it catches, not what happens.
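
The feedback loop is easy to demonstrate in simulation. In this toy model (all rates hypothetical), two groups offend at identical true rates, but one group starts out policed more heavily, and patrols are reallocated each round in proportion to recorded arrests.

```python
# Toy feedback-loop simulation: identical true offense rates, biased
# starting allocation, and "data-driven" reallocation from recorded
# arrests. All numbers are hypothetical.
import random

random.seed(1)
POP = 5000
TRUE_RATE = 0.10                      # identical true offense rate in both groups
patrol = {"A": 0.40, "B": 0.60}       # historically biased starting allocation

for rnd in range(6):
    recorded = {}
    for g in ("A", "B"):
        offenses = sum(random.random() < TRUE_RATE for _ in range(POP))
        # An offense enters the data only if patrols are present to record it.
        recorded[g] = sum(random.random() < 0.8 * patrol[g] for _ in range(offenses))
    total = sum(recorded.values())
    patrol = {g: recorded[g] / total for g in recorded}  # reallocate by the data
    print(f"round {rnd}: recorded {recorded} -> patrol shares "
          f"A={patrol['A']:.2f} B={patrol['B']:.2f}")
```

Group B's recorded arrests run roughly 50% higher than group A's round after round, even though underlying behavior is identical. The reallocation never corrects the initial bias, because the data it learns from is produced by the bias.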

The Accuracy Problem

Base rates matter: If 30% of a group reoffends, predicting "no reoffense" for everyone is 70% accurate. Better than many algorithms.

Individual vs. group: Even if predictions are accurate for groups, any individual may differ. Treating individuals based on group statistics raises fundamental fairness questions.

What counts as accurate: Optimizing for one metric (overall accuracy) may increase errors for subgroups.
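
The arithmetic behind both points is worth working through. The first lines below show the base-rate baseline; the rest show how two groups can see identical overall accuracy while innocent people in one group are flagged far more often, the structure at the heart of the COMPAS dispute. All counts are hypothetical.

```python
# Worked numbers for the base-rate point and the subgroup-metric point.
# All figures are hypothetical illustrations.

# Base rates: if 30% reoffend, the trivial "predict no one reoffends"
# rule is 70% accurate, the bar any real tool must beat.
base_rate = 0.30
print(f"trivial-rule accuracy: {1 - base_rate:.0%}")

# Subgroups: identical accuracy can hide very different kinds of errors.
# Counts per group of 100: true/false positives, true/false negatives.
groups = {
    "group A": dict(tp=20, fp=5,  tn=65, fn=10),
    "group B": dict(tp=25, fp=12, tn=60, fn=3),
}
for name, c in groups.items():
    accuracy = (c["tp"] + c["tn"]) / sum(c.values())
    fpr = c["fp"] / (c["fp"] + c["tn"])   # people flagged who would not reoffend
    print(f"{name}: accuracy {accuracy:.0%}, false positive rate {fpr:.0%}")
```

Both groups score 85% accurate, but the false positive rate in group B is more than double group A's: the same headline metric, a very different error burden.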

The Due Process Problem

Right to be judged individually: Constitutional and moral principle that people are judged for what they did, not what statistics say they might do.

Opacity: If defendants can't understand or contest the basis for decisions, due process is undermined.

Automation bias: Judges may defer to algorithmic recommendations without adequate scrutiny.

Pre-crime: At the extreme, prediction shades into punishing people for what they haven't done—the "Minority Report" nightmare.


The "Minority Report" Temptation

The Seductive Logic

The argument: If crime can be predicted, why wait for it to happen? Prevention is better than punishment. Resources should focus on high-risk individuals.

The applications:

  • Predictive policing: Patrol where crime is likely
  • Risk-based detention: Hold those likely to reoffend
  • Differential sentencing: Longer sentences for higher risk
  • Surveillance allocation: Monitor those most likely to offend

The problem: This inverts the presumption of innocence. It treats probability as culpability. It judges people for what they might do rather than what they have done.

Where Lines Should Be

Information gathering vs. action: Using data to understand patterns is different from using predictions to restrict liberty.

Group allocation vs. individual decision: Deploying resources to high-crime areas differs from detaining specific individuals based on prediction.

Post-offense vs. pre-offense: Using risk assessment after someone has committed a crime differs from intervention before any offense.

Support vs. restriction: Offering services to at-risk individuals differs from surveilling or detaining them.

The Slippery Slope

Mission creep: Tools developed for one purpose expand to others. Pretrial risk becomes sentencing input becomes parole decision becomes surveillance priority.

Normalization: What seems extreme becomes routine. Predictive policing is now widespread; predictive detention may follow.

Data accumulation: More data enables better prediction. Incentive to collect more, retain longer, analyze deeper.

The end state: A world where everyone has a risk score, where liberty depends on algorithmic assessment, and where the presumption of innocence is a mathematical fiction.


Legitimate Applications

What AI Should Do in Courts

Accelerate routine tasks: Research, drafting, scheduling, case management. Human oversight maintained but productivity increased.

Improve access: AI-powered self-help for those who can't afford lawyers. Navigating court systems made easier.

Enhance consistency: Identifying when similar cases are decided differently. Flagging potential disparities for human review.

Support human judgment: Providing information, analysis, and options. Not replacing decision-making but informing it.

Principles for AI in Justice

Transparency: Parties must understand how AI influenced their case. No black-box decisions affecting liberty.

Contestability: Ability to challenge AI-influenced decisions. Appeal mechanisms. Human review available.

Accuracy and validation: AI tools must be tested, validated, monitored for accuracy and bias.

Oversight: Human judges make consequential decisions. AI assists but doesn't replace judicial discretion.

Equity: AI must not systematically disadvantage protected groups. Disparate impact matters.


The Path Forward

Near-Term Likely (2026-2032)

AI research tools become universal: Every lawyer uses AI-assisted research. Courts assume parties have access.

Electronic filing completes: All courts accept electronic filing. Paper becomes exception.

Video hearings normalized: Routine for many proceeding types. Standards develop for what requires in-person.

ODR expands: Small claims, traffic, simple disputes resolved online. Human judge reviews contested outcomes.

Case management AI: Triage, scheduling, tracking automated. Judges focus on substantive matters.

Predictive tools face scrutiny: More jurisdictions restrict or ban certain applications. Standards develop slowly.

Plausible (2032-2040)

AI drafting becomes routine: First drafts of orders, opinions, and legal documents are AI-generated. Human review and modification remain standard.

Self-represented litigants get AI help: AI assistants guide people through court processes. Not lawyers but better than nothing.

Case prediction influences settlement: Both parties know likely outcomes. Settlement becomes more rational, maybe more common.

Backlogs shrink: Efficiency gains from AI allow faster case processing. Timely justice becomes more common.

Some judging functions partially automated: Routine matters (traffic, small claims, uncontested) decided with AI assistance and minimal human review.

Wild Trajectory (2040+)

AI judges for routine matters: Uncontested matters, simple disputes handled entirely by AI. Human appeal available.

Real-time court assistance: Litigants have AI advisors in courtroom. Equal representation through technology.

Justice metrics transform: Speed, consistency, access measured and optimized. What counts as "good" justice may change.

Or the opposite: Predictive tools proliferate. Risk scores pervade the system. Pre-crime intervention normalizes. The "Minority Report" future arrives gradually.


Risks and Guardrails

Algorithmic Injustice

Risk: AI encodes and perpetuates existing biases in criminal justice. Disparities increase.

Guardrails: Mandatory bias audits; disparate impact testing; transparency about training data and methods; ongoing monitoring; correction mechanisms.

Due Process Erosion

Risk: Decisions increasingly based on opaque algorithmic assessments. Defendants can't understand or challenge basis for decisions.

Guardrails: Transparency requirements; right to human decision-maker for consequential matters; explainability standards; meaningful appeal rights.

Automation Dependency

Risk: Judges defer to AI; human judgment atrophies; machines effectively decide.

Guardrails: Training on AI limitations; accountability for human decision-makers regardless of AI input; random audits of AI-influenced decisions.

Access Inequality

Risk: Sophisticated AI tools available to well-resourced parties; others disadvantaged. AI widens justice gap.

Guardrails: Public provision of basic AI tools; court-sponsored assistance; standards ensuring minimal capability for all parties.

Privacy and Surveillance

Risk: AI in courts requires data. Data collection expands. Surveillance normalizes. Chilling effects on behavior.

Guardrails: Data minimization; purpose limitations; retention limits; oversight of surveillance expansion.


The Deeper Questions

What Is Justice?

Justice involves more than efficiency. It involves being heard, being seen, having your day in court. It involves human judgment about human situations.

AI can make courts faster. But can it make them more just? The answer depends on what justice requires—and whether efficiency is friend or enemy of that deeper goal.

Who Judges?

Judgment is a distinctively human capacity. Society holds judges accountable for decisions because they have agency, discretion, responsibility.

If machines effectively decide, who is accountable? Can a machine bear the moral weight of judgment? Does it matter if outcomes improve?

What Is a Fair Trial?

Due process requires that decisions are made fairly, that parties understand the basis for decisions, that they can contest adverse determinations.

Can algorithmic inputs satisfy due process? If AI informs the decision, must the AI be explainable? If not explainable, can it be used?

How Should Error Costs Be Balanced?

Faster processing means some errors won't be caught. Predictive tools mean some people are treated based on probability rather than proof.

What error rate is acceptable? Who bears the cost of errors? How should speed be valued against accuracy, efficiency against fairness?


Conclusion

The justice system is drowning in backlogs while millions cannot access the legal help they need. AI offers genuine potential to address these problems—faster research, better case management, expanded access through online tools.

But AI in justice also carries unique risks. The prediction problem—using statistics to judge individuals—threatens the foundation of presumption of innocence. The opacity problem—decisions influenced by incomprehensible algorithms—threatens due process. The bias problem—AI learning from a system with documented disparities—threatens equal protection.

The "Minority Report" future isn't inevitable. It's a choice. A choice about what tools to deploy, with what limits, with what oversight, with what values.

The next decade will shape that choice. Courts will adopt AI—the question is which applications, with what safeguards, governed by what principles.

Getting this right matters more than in most domains. Justice is not just a service to be optimized. It is a fundamental expression of how society treats its members, how efficiency is balanced against dignity, how the ideal that every person deserves to be judged as an individual is preserved.

AI should help achieve that ideal. It should not become an excuse to abandon it.


Endnotes — Chapter 34

  1. Federal court case processing statistics from Administrative Office of US Courts; median times vary by case type and district.
  2. Legal Services Corporation "Justice Gap" reports document that approximately 80% of low-income Americans' civil legal needs go unmet.
  3. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a proprietary risk assessment tool; ProPublica investigation (2016) alleged racial bias; debate continues.
  4. Utah's Online Dispute Resolution program for small claims began 2018; expanded during COVID; resolution rates and user satisfaction generally positive.
  5. Immigration court backlog exceeded 3 million cases as of 2024; average wait times exceed 4 years in many jurisdictions.
  6. Arnold Foundation (now Arnold Ventures) developed Public Safety Assessment (PSA) used in multiple jurisdictions; intended as improvement over money bail but also criticized.
  7. eBay's Online Dispute Resolution system processes millions of disputes annually; pioneered private ODR now adopted by some courts.
  8. European Commission's "Ethics Guidelines for Trustworthy AI" (2019) provides principles applicable to justice sector; Article 22 GDPR restricts automated decision-making.
  9. "Minority Report" (Philip K. Dick, 1956; film 2002) depicts pre-crime arrest based on prediction; influential reference point for predictive policing debates.
  10. Partnership on AI "Report on Algorithmic Risk Assessment Tools in the US Criminal Justice System" (2019) provides comprehensive analysis of COMPAS and similar tools.