
The Dangers of AI and The Law

Apr 10, 2026
Michael Sunderland

Since the introduction of AI Large Language Models (LLMs) in the early 2020s, people have increasingly relied on AI platforms such as ChatGPT and Google Gemini to assist with day-to-day tasks, from drafting emails to summarising complex information. By 2026, it is widely understood that while these tools are powerful, they are not infallible. One of the most significant limitations is their tendency to generate incorrect or misleading information, commonly referred to as “AI hallucinations.”

What is less commonly appreciated, however, is why these hallucinations occur and how easily they can be mistaken for authoritative answers. LLMs are not databases of verified knowledge; rather, they are predictive systems trained to generate the most statistically likely sequence of words based on the input provided. This means that when certainty is lacking, the model may still produce a confident and coherent response, even if the underlying content is flawed.
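To make this concrete, the toy sketch below (an illustration of the general principle, not any real model’s code) samples a “next token” from a probability distribution. The vocabulary, scores, and case names are invented for the example; the point is that even a nearly flat distribution, where the model has little genuine certainty, still yields a single fluent-looking answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from a softmax over the model's raw scores.

    The model always returns *a* token, whether the distribution is
    sharply peaked (high certainty) or nearly flat (low certainty).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Toy "vocabulary" of possible case citations the model might emit next.
# These names are placeholders, not real authorities.
vocab = ["Smith v Jones", "R v Brown", "Re Estate of Doe", "[fabricated case]"]

# Nearly flat scores: the model is uncertain, yet it still commits to an answer.
uncertain_logits = [1.02, 1.00, 0.99, 1.01]
idx, probs = sample_next_token(uncertain_logits)
print(f"Emitted: {vocab[idx]!r} with probability {probs[idx]:.2f}")
# The output reads just as fluently as a high-confidence answer would.
```

Nothing in the sampling step distinguishes a well-grounded answer from an invented one; fluency and confidence are properties of the output format, not evidence of accuracy.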

Having worked in the Information Technology field before becoming a solicitor, the author is in the somewhat unusual position of understanding both the technical architecture of these systems and the real-world consequences of their outputs. In IT, there is an inherent understanding that systems behave deterministically within defined parameters. In contrast, LLMs operate probabilistically, which introduces a level of unpredictability that is often underappreciated by end users.
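The contrast can be shown in a brief hypothetical sketch: a conventional lookup either returns the stored answer or fails visibly, whereas an LLM-style completion is drawn from a distribution and can vary between runs. The statute table and candidate answers below are toy stand-ins, not a statement about any particular statute or model.

```python
import random

statutes = {"s 18": "Misleading or deceptive conduct"}  # toy lookup table

def deterministic_lookup(section):
    # Traditional system: the same input always gives the same output,
    # and a miss fails transparently rather than inventing an answer.
    return statutes.get(section, "NOT FOUND")

def probabilistic_completion(section, rng):
    # LLM-style behaviour: the output is drawn from a distribution,
    # so the same input can yield different answers on different runs.
    candidates = [
        "Misleading or deceptive conduct",
        "Unconscionable conduct",   # plausible-sounding but wrong here
        "Unfair contract terms",    # plausible-sounding but wrong here
    ]
    return rng.choice(candidates)

print(deterministic_lookup("s 18"))  # always the same answer
print(deterministic_lookup("s 99"))  # NOT FOUND: a visible failure

rng = random.Random()
for _ in range(3):
    print(probabilistic_completion("s 18", rng))  # may differ each run
```

The deterministic system’s failure mode is an empty result; the probabilistic system’s failure mode is a wrong answer that looks identical to a right one.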

This distinction becomes particularly important in legal practice. The law demands precision, accuracy, and accountability. A hallucinated case citation, misinterpreted statute, or fabricated legal principle is not merely an inconvenience; it has the potential to materially mislead a client, a court, or opposing counsel. Unlike traditional research tools, which fail transparently (for example, by returning no results), AI systems may fail convincingly, which is arguably more dangerous.

Empirical data now demonstrates that the issue of AI hallucinations in legal contexts is no longer hypothetical. A growing body of research and reported decisions indicates that courts across multiple jurisdictions are increasingly encountering fabricated authorities generated by AI tools. One dataset tracking these incidents has identified hundreds of proceedings globally involving unverified AI output, including both self-represented litigants and qualified practitioners. More recent reporting suggests a rapid acceleration in frequency, with over 50 cases involving fake AI-generated citations identified in a single month alone. Courts have responded with increasing severity, including fines, costs orders, and professional disciplinary referrals, with some matters involving dozens of fabricated authorities within a single filing. The trend is clear: what began as isolated incidents has evolved into a systemic risk within legal practice.

Beyond the courtroom, there is also a significant impact on clients and their perception of the legal process. Clients are increasingly conducting their own “research” using AI tools prior to, or alongside, seeking legal advice. Where that research produces incorrect or overly simplified conclusions, it can create unrealistic expectations about outcomes, timeframes, or legal entitlements. This often places practitioners in the difficult position of having to correct misinformation that appears, to the client, to be authoritative. In some cases, this may erode trust, particularly where the client perceives the lawyer’s advice as conflicting with what an AI system has confidently provided. More broadly, it risks undermining confidence in the legal system itself, as clients may struggle to reconcile the complexity and nuance of real legal processes with the overly certain outputs generated by AI.

It is therefore critical to emphasise that AI does not replace a legal practitioner, nor is it capable of doing so. The practice of law is not merely the retrieval of information; it involves the application of judgment, ethical obligations, strategic decision-making, and an understanding of context that extends beyond what is written. While AI can assist in generating drafts or identifying potential lines of argument, it cannot independently verify the accuracy of its outputs, nor can it take responsibility for them. Courts have made it clear that the duty remains with the practitioner to ensure that all material relied upon is accurate and properly sourced. In this sense, AI is best understood as a tool, and like any tool, its effectiveness depends entirely on the skill and oversight of the person using it.
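As a rough illustration of what that oversight might look like in workflow terms, the sketch below flags any AI-suggested authority that cannot be confirmed against an independently verified source. The case names and the `verified_authorities` set are placeholders, not real citations or a real citator API; in practice the check would be made against an authoritative law-report database, and responsibility for the result stays with the practitioner.

```python
# Hypothetical stand-in for a trusted citator or law-report database.
# All case names below are invented placeholders.
verified_authorities = {
    "Smith v Jones [2019] HCA 1",
    "R v Brown [2021] NSWCCA 45",
}

def vet_citations(ai_draft_citations):
    """Return every AI-suggested authority that could not be confirmed.

    Anything on this list must be independently verified (or removed)
    before the document is relied upon or filed.
    """
    return [c for c in ai_draft_citations if c not in verified_authorities]

draft = ["Smith v Jones [2019] HCA 1", "Carter v Nguyen [2023] FCA 999"]
for citation in vet_citations(draft):
    print(f"VERIFY BEFORE FILING: {citation}")
```

The design point is that verification is a separate, human-controlled step: the AI output is treated as unvetted input, never as a finished product.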

Looking forward, the challenge is not whether AI will continue to improve (it undoubtedly will), but whether users will develop the critical literacy required to use it appropriately. Blind reliance on AI outputs risks eroding professional standards, whereas informed and cautious use has the potential to significantly enhance productivity without compromising integrity.
