AI Will Change Law More Than the Internet Did

AI is set to transform the legal profession far beyond what the internet ever did — reshaping not only access to law but how legal thinking and legal work are performed. This article explores why the shift is deeper, riskier, and more disruptive than many expect.
From the Internet Age to the AI Age
When the internet arrived in legal practice in the 1990s, it transformed how lawyers communicated, how courts filed documents, and how clients accessed legal information. It created new categories of law — cyber law, digital privacy, electronic commerce regulation — and reshaped intellectual property disputes for a digital age. It was, by any measure, a significant disruption to an institution famous for its resistance to change.
But the internet did not touch the inner logic of legal work. Lawyers still researched case law by reading cases. Judges still wrote opinions through the application of legal reasoning to facts. Contracts were still drafted by attorneys exercising professional judgment, sentence by sentence. The internet changed the environment in which law operated; it did not change how law was made, applied, or administered. The legal profession adapted to the internet much as it adapted to the photocopier and the fax machine — absorbing the efficiency gains without altering its fundamental practices.
Artificial intelligence may be different. Not in degree, but in kind.
Where the internet provided a faster way to access and transmit legal information, AI provides a mechanism for automating the intellectual tasks that are the core of legal work itself: analyzing precedents, identifying relevant arguments, drafting documents, predicting outcomes, assessing risks, and — in some early applications — adjudicating disputes. If these capabilities continue to develop, AI will not merely reshape the environment in which law operates. It will reshape law itself: the processes of legal reasoning, the economics of legal practice, the institutional role of courts, and ultimately the access of ordinary people to the legal system.
This article examines that proposition — that AI will change law more than the internet did — drawing on current data, real-world developments, and the analytical perspectives emerging from legal scholarship and practice.
What the Internet Actually Did to Law
To assess AI's potential transformative impact, it is worth being precise about what the internet actually accomplished.
The internet created genuinely new legal challenges that required new legal frameworks. Jurisdiction became contested when transactions crossed borders without physical presence. Intellectual property law was forced to confront mass copying, peer-to-peer file sharing, and the economics of digital distribution. Privacy law had to grapple with the permanent and searchable record of personal information that networked computing enabled. Electronic evidence — emails, metadata, server logs — became central to litigation in ways that required new evidentiary rules and new discovery practices. Online commerce demanded regulatory frameworks that physical commerce had not required.
These were substantive changes to what law addressed. But the fundamental processes of legal work — how lawyers reasoned, how judges deliberated, how disputes were adjudicated — remained largely unchanged. A lawyer in 2005 used a computer to do Westlaw research instead of going to the library stacks, but they read the same cases and applied the same analytical framework. A judge in 2010 received filings electronically but still wrote opinions through the same intellectual process. The internet made legal work faster and cheaper in certain respects; it did not alter the nature of that work.
The deeper structural features of the legal profession — the billable hour model, the hierarchical law firm structure, the apprenticeship model of legal training, the reliance on expensive human time as the primary input to legal services — were essentially untouched. For most people seeking legal help, the internet changed remarkably little: legal services remained expensive, slow, and practically inaccessible to the majority of the population without substantial financial resources.
AI's Arrival in Legal Systems: The Current State
The entry of AI into legal practice has been more rapid than most observers anticipated. According to Clio's 2025 Legal Trends Report, 79% of legal professionals now use AI — up dramatically from 19% just two years earlier. A more recent survey by 8am Legal, conducted in the autumn of 2025 and published in early 2026, found that individual AI adoption among legal professionals had risen to 69%, more than doubling from 31% the previous year. A global report by Secretariat and ACEDS found that 80% of respondents now rate themselves as knowledgeable about AI, and 74% expect their jobs to involve AI tools within the next 12 months.
These are not statistics about peripheral tools. AI is now central to how many legal professionals work. The leading applications include AI-powered legal research platforms that identify relevant precedents and summarize case law in seconds; contract review and due diligence systems that process thousands of documents faster than human teams could review hundreds; predictive litigation analytics that assess the probability of various outcomes based on judicial history; and AI-assisted drafting tools that generate first versions of contracts, briefs, motions, and correspondence. Thomson Reuters has reported that AI systems saved lawyers an average of four hours per week in 2024, generating approximately $100,000 in new billable time per lawyer annually across the US legal market. Major firms report 500–800% productivity increases in paralegal tasks through AI-assisted document review.
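The Thomson Reuters figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative only: the effective billing rate and the number of working weeks are assumptions made here, not figures from the report.

```python
# Back-of-the-envelope check: four hours saved per week translating into
# roughly $100,000 of new billable capacity per lawyer per year.
# The rate and working weeks are illustrative assumptions.

HOURS_SAVED_PER_WEEK = 4
WORKING_WEEKS_PER_YEAR = 50   # assumption
EFFECTIVE_RATE_USD = 500      # assumption: blended hourly billing rate

annual_hours = HOURS_SAVED_PER_WEEK * WORKING_WEEKS_PER_YEAR
annual_value = annual_hours * EFFECTIVE_RATE_USD
print(f"{annual_hours} hours/year = ${annual_value:,}")  # 200 hours/year = $100,000
```

At an assumed $500/hour blended rate, four saved hours a week over fifty working weeks yields 200 hours, or $100,000, consistent with the reported figure.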
The nature of this adoption is worth noting: AI is not merely accelerating existing work. It is beginning to change the composition of legal work — what tasks lawyers do, what tasks can be delegated to systems, and what portion of a matter requires distinctively human judgment. Clio's analysis found that nearly three-quarters of a law firm's hourly billable tasks are potentially exposed to AI automation. A Goldman Sachs study concluded that 44% of tasks performed in the legal industry could ultimately be automated. These are not predictions about a distant future — they are assessments of what current technology already makes possible in principle.
AI in Judicial Systems: From Tools to Decisions
The transformation of legal practice, while significant, may be less consequential than AI's entry into judicial systems themselves — because it is in courts that law is ultimately made and applied.
China's Smart Courts
China has undertaken the most comprehensive integration of AI into judicial systems of any major jurisdiction. The Supreme People's Court has mandated that every court in the country deploy AI tools for judicial support, with full operational integration targeted for 2030. The "Smart Courts" strategy extends AI beyond administrative convenience to encompass decision-making assistance at the core of the judicial process. AI systems assist judges with case management, electronic file analysis, speech recognition during hearings, judgment drafting, and legal research. At the Hainan High People's Court, AI judgment-drafting assistance has reduced written judgment time by 70%, with judges generating over 70% of judgment drafts through AI systems that they subsequently review and refine. A new national judicial AI platform, launched in late 2024, has gathered 320 million pieces of legal information including court rulings, cases, and legal opinions — forming what Chinese authorities describe as a "national-level legal AI infrastructure."
Critics note that China's smart court integration occurs in an institutional context where judicial independence is limited and AI systems can serve as instruments of political control as readily as instruments of judicial efficiency. AI tools in China's courts filter cases based on political sensitivity; the same systems that improve efficiency can extend the state's capacity to monitor and manage the judiciary. This dual character — AI as both an efficiency tool and a control mechanism — is unique to China's institutional environment, but it illustrates a broader principle: AI in judicial systems is not politically neutral.
Estonia's AI Adjudicator
Estonia has developed one of the most direct experiments in AI adjudication outside China. An AI system, trained on past judicial decisions, adjudicates small claims disputes below a monetary threshold — issuing binding verdicts that parties can appeal to human judges. The system processes cases submitted digitally, analyzing contractual terms, relevant precedents, and factual submissions to produce decisions in hours rather than the months required by conventional proceedings. The efficiency gains are real and documented; the costs to the court system are substantially reduced.
Estonia's experiment is deliberately narrow: it applies to low-value, factually straightforward disputes where the legal issues are well-defined and the parameters of relevant judgment are limited. Its architects have been careful to preserve human appellate oversight and to confine AI adjudication to cases where contextual and moral judgment is minimal. But the experiment proves a proposition that was previously theoretical: algorithmic systems can produce legally binding judicial decisions that affected parties accept as legitimate in appropriate contexts.
Risk Assessment Tools in the United States
The United States provides the most contested and extensively analyzed example of AI in judicial decision-making: algorithmic risk assessment instruments used in pretrial detention, sentencing, and parole decisions. COMPAS and similar tools, now used in some form in 46 states, produce risk scores based on questionnaire responses and demographic data, and judges cite these scores in high-stakes decisions about liberty. The legal and empirical controversy surrounding these tools — particularly regarding racial bias, opacity, and due process implications — has made them the central case study in global debates about AI and judicial fairness.
These developments represent a different kind of AI integration than the internet enabled. The internet gave legal professionals better tools for doing the same work. AI, in these judicial contexts, is doing work itself — producing outputs that directly influence decisions about human freedom, property, and rights. The distinction is not merely quantitative.
The Data on AI's Legal Capabilities
The empirical record of AI performance in legal tasks has advanced rapidly, and the trajectory suggests capabilities well beyond current deployment.
AI systems trained on large judicial datasets can now predict court outcomes with meaningful accuracy across multiple jurisdictions. Studies have found that AI outcome prediction models perform comparably to experienced lawyers in some categories of legal analysis — and can process relevant case history at scales no human lawyer can match. Contract review AI identifies material provisions and risk clauses with documented reliability that compares favorably to junior attorney performance on standardized tasks.
At the same time, documented limitations matter. A 2024 Stanford study found that leading legal AI platforms hallucinated at rates ranging from 17% to 33% across different platforms — rates far above what responsible legal practice can tolerate. The crisis of AI hallucinations in court filings is quantifiable: French researcher Damien Charlotin's database has documented more than 300 cases of AI-generated false legal citations being submitted to courts across multiple jurisdictions since mid-2023, with the rate accelerating sharply — rising from roughly two cases per week in early 2025 to two or three cases per day by the autumn of 2025. Federal courts have imposed monetary sanctions against attorneys in dozens of documented cases, with total court-imposed fines exceeding $50,000. Cases range from the foundational Mata v. Avianca (2023) — where New York attorneys cited six entirely fabricated cases generated by ChatGPT and were fined $5,000 — to Johnson v. Dunn (July 2025), where a federal judge not only imposed sanctions but disqualified the offending attorneys and directed notification to state bar regulators.
These failures illuminate the nature of AI's current limitations in legal contexts: not failure to process or analyze, but failure to maintain factual accuracy under conditions where fabricated outputs are indistinguishable from real ones. The hallucination problem is not merely a technical inconvenience — it represents a fundamental challenge to the use of AI in legal work, which demands verifiable accuracy as a baseline professional obligation.
The Case That AI Will Transform Law More Than the Internet Did
The argument for AI's transformative superiority over the internet rests on several structural observations that go beyond the impressive adoption statistics.
AI attacks the cost structure of legal work directly. The internet made legal information cheaper and faster to access; it did not make legal expertise cheaper to deploy. A lawyer's time remained expensive regardless of how efficiently they could find the cases they needed to read. AI, by automating substantive legal tasks, directly reduces the human time required to produce legal work product. The implications for the economics of legal practice are not merely incremental — they are potentially structural. When a contract review that required six months and $6 million in attorney time can be accomplished by AI in 2.5 days, as one documented example shows, the entire economic model of the practice that deployed those attorneys is challenged.
AI threatens the billable hour — the organizing economic principle of legal practice. The internet did not challenge hourly billing; it simply made billable hours more productive. AI makes the relationship between lawyer time and client value increasingly arbitrary: if a task that previously required five attorney hours can be accomplished by AI in minutes, hourly billing for that task becomes indefensible. More than half of legal professionals now expect AI-driven efficiencies to impact the prevalence of hourly billing. Major legal commentators describe the billable hour as facing its most serious challenge since it became the profession's dominant pricing model in the mid-twentieth century. The CEO of Clio has described a "structural incompatibility" between AI productivity gains and hourly billing that the profession cannot indefinitely defer.
AI reaches into legal reasoning itself, not just legal support tasks. The internet automated the filing, transmission, and storage of legal documents. AI is beginning to automate the production of legal analysis — the drafting of arguments, the identification of applicable law, the construction of legal positions, and increasingly the generation of judicial reasoning. This is a categorically different level of intrusion into the intellectual core of legal work than any previous technology.
AI reshapes the access-to-justice landscape in ways the internet could not. The internet created legal information resources accessible to non-lawyers, but navigating legal systems still required expensive professional expertise. AI legal tools — capable of drafting documents, explaining rights, identifying relevant law, and guiding procedural choices — can potentially provide meaningful legal assistance to people who cannot afford professional representation. A 2026 survey of legal professionals found that 76% believe AI has the potential to expand access to justice. This transformation, if realized, would alter the relationship between law and society more profoundly than anything the internet accomplished.
The Risks and Limits of AI's Legal Transformation
The case for caution is substantial and must be engaged honestly.
The hallucination crisis is not merely a teething problem. More than 300 documented instances of AI-generated false legal citations entering court proceedings, with the rate accelerating rather than declining despite widespread awareness, suggest that the accuracy limitations of current AI systems are not easily resolved. The risk is particularly acute because legal work demands verifiable accuracy — and AI systems can produce confident, plausible-sounding fabrications that are indistinguishable from accurate outputs without independent verification. As a Yale researcher noted after testing major legal AI platforms, AI systems hallucinate at rates "far higher than would be acceptable for responsible legal practice." Until this fundamental reliability problem is solved, the scope of appropriate autonomous AI use in legal work remains limited.
Automation bias is already distorting professional judgment. The evidence suggests that the problem is not just that AI makes mistakes — it is that human professionals are delegating verification responsibility to systems that cannot reliably perform it. The pattern across the 300+ hallucination cases is consistent: lawyers submitted AI outputs without conducting the independent verification that their professional obligations require. As one federal magistrate judge observed: "The use of artificial intelligence must be accompanied by the application of actual intelligence in its execution." The integration of AI into legal work requires not only technical capability but cultural and professional adjustment that the profession is still working through.
Algorithmic bias in judicial contexts threatens equality before the law. The COMPAS controversy — in which Black defendants were labeled high-risk at substantially higher rates than white defendants who went on to commit similar offenses — illustrates the structural risk of embedding historical bias into judicial systems through algorithmic tools. The replacement of inconsistent human judgment with consistent algorithmic judgment is not automatically an improvement; it depends entirely on whether the algorithm's embedded judgments are more or less just than the human judgments it replaces.
Courts and regulatory institutions are not keeping pace. While legal professionals are rapidly integrating AI into practice, judicial institutions themselves are moving much more slowly. AI use is currently limited to a minority of courts globally, and the regulatory and ethical frameworks governing AI in judicial contexts remain rudimentary. Individual judges are issuing standing orders requiring AI disclosure; bar associations are publishing ethics opinions; but comprehensive governance frameworks for AI in legal systems do not exist in most jurisdictions. The EU AI Act's classification of judicial AI as high-risk, with full compliance obligations applying from August 2026, represents the most ambitious attempt to govern this space — but it applies only to EU jurisdictions.
The transformation may deepen existing inequalities before it addresses them. AI tools that are powerful, accurate, and well-designed tend to be expensive. The competitive advantage of large well-resourced law firms in deploying AI may widen the gap between institutional and individual legal actors before democratizing effects materialize. Those who can deploy the best AI research, the best predictive analytics, and the best agentic legal systems may have advantages in litigation, negotiation, and regulatory compliance that smaller actors cannot match.
The Future of Law in the AI Era
The trajectory visible in 2026 points toward developments that will, if realized, constitute a transformation of legal institutions without precedent since the professionalization of law in the nineteenth century.
Agentic AI systems — capable of autonomously executing multi-step legal tasks without continuous human direction — are moving from pilots to production deployment. These systems can review contracts, conduct legal due diligence, draft transaction documents, manage litigation workflows, and increasingly handle end-to-end legal processes. The legal sector is tracking the trajectory of software development approximately one cycle behind: just as software AI moved from code completion to file editing to multi-hour autonomous project execution, legal AI is moving from document Q&A to multi-step workflow assistance to autonomous matter management. Early-adopter law firms describe competitive advantages in speed and cost that are already reshaping client expectations.
AI-assisted dispute resolution is being deployed in courts and private arbitration bodies for categories of routine disputes. Singapore's Small Claims Tribunals have deployed AI systems providing legal advice, procedural guidance, and claim valuation. The American Arbitration Association is actively experimenting with AI co-mediators. Online dispute resolution platforms powered by AI are handling commercial disputes that would previously have required court proceedings. As these systems improve and gain institutional legitimacy, the category of disputes requiring full human judicial processes may narrow — with significant implications for court workloads and access to affordable resolution.
The platformization of legal services — the consolidation of legal work into AI-powered platforms that can deliver standardized legal outputs at scale — may reshape the structure of the legal profession itself. AI-native law firms, built from the ground up around AI as a core operating capability rather than a peripheral tool, are emerging across multiple jurisdictions. These firms can price services at levels impossible for conventionally staffed practices, and they are beginning to attract clients for whom the economics of traditional legal practice were prohibitive.
The democratization of legal research may, over time, partially address the access-to-justice crisis that the legal profession has failed to resolve through conventional means. AI tools capable of providing accurate, accessible legal information and analysis can reduce the gap between legally sophisticated and legally unsophisticated parties — though realizing this potential requires attention to accuracy, equity, and the structural barriers to legal access that go beyond information alone.
A Transformation Larger Than the Internet?
The internet changed what law was about. Artificial intelligence is beginning to change how law works.
The internet created new subjects of legal regulation — cyberspace, digital commerce, online speech, data protection. It did not touch the internal machinery of legal work: how lawyers think, how judges decide, how legal institutions function. It changed the inputs and outputs of the legal process while leaving the process itself largely intact. For most people seeking legal help, the internet made remarkably little difference to the practical accessibility or affordability of law.
AI operates at a different level. By automating legal analysis, research, drafting, prediction, and increasingly adjudication, AI reaches into the processes that constitute legal work rather than merely the environment in which that work takes place. It challenges the economic foundation of the legal profession, threatens or promises to transform access to justice, and raises questions about the nature of legal reasoning and accountability that the internet never posed.
Whether this transformation ultimately improves justice systems — makes them more accessible, more consistent, more fair, and more accountable — will depend on choices that are not technologically determined. The hallucination crisis shows what happens when AI is deployed without adequate attention to accuracy; the COMPAS controversy shows what happens when algorithmic bias is embedded in institutions that determine human liberty; the smart court experience shows that AI judicial systems can serve both efficiency and control, depending on the institutional context in which they are deployed.
The internet changed law without requiring law to decide what it valued. AI forces law to decide. The choices made in courtrooms, legislatures, law firms, and judicial institutions about how to integrate AI — with what safeguards, accountability mechanisms, transparency requirements, and equity constraints — will determine whether the transformation AI brings to law is one that serves justice or merely reflects the priorities of those with the power to deploy it.
That is a larger question than anything the internet ever asked of the legal system. It may also be the most important legal question of the next generation.
© 2026 Law of Tomorrow. This work is licensed for non-commercial use with attribution.