The Dangers of AI and The Law
Since the introduction of AI Large Language Models (LLMs) in the early 2020s, people have increasingly relied on AI platforms such as ChatGPT and Google Gemini to assist with day-to-day tasks, from drafting emails to summarising complex information. By 2026, it is widely understood that while these tools are powerful, they are not infallible. One of the most significant limitations is their tendency to generate incorrect or misleading information, commonly referred to as “AI hallucinations.”
What is less commonly appreciated, however, is why these hallucinations occur and how easily they can be mistaken for authoritative answers. LLMs are not databases of verified knowledge; rather, they are predictive systems trained to generate the most statistically likely sequence of words based on the input provided. This means that when certainty is lacking, the model may still produce a confident and coherent response, even if the underlying content is flawed.
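The point can be illustrated with a deliberately simplified sketch. The dictionary of word probabilities below is entirely hypothetical and vastly smaller than anything a real LLM uses, but it shows the core mechanism: the system selects the most statistically likely next word, with no concept of whether that word is true.

```python
# Toy "next-word" model. The probabilities are invented for illustration;
# a real LLM learns billions of such patterns from training text.
next_word_probs = {
    ("the", "leading", "case", "is"): {
        "Smith": 0.40,       # an authority seen often in training data
        "Donoghue": 0.35,    # another plausible continuation
        "Fabricated": 0.25,  # plausible-sounding, but non-existent
    },
}

def continue_text(context):
    """Return the statistically most likely next word for the context.

    Note what is absent: there is no lookup against a database of
    verified facts, and no notion of 'true' vs 'false' -- only likelihood.
    """
    probs = next_word_probs[tuple(context)]
    return max(probs, key=probs.get)

print(continue_text(["the", "leading", "case", "is"]))  # -> "Smith"
```

Even in this toy version, a quarter of the probability mass sits on a fabricated answer; under different prompts or sampling settings, that fabrication can surface with complete fluency and confidence.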
From the unique perspective of the author, having previously worked in the Information Technology field before becoming a solicitor, there is a distinct advantage in understanding both the technical architecture of these systems and the real-world consequences of their outputs. In IT, there is an inherent understanding that systems behave deterministically within defined parameters. In contrast, LLMs operate probabilistically, which introduces a level of unpredictability that is often underappreciated by end users.
This distinction becomes particularly important in legal practice. The law demands precision, accuracy, and accountability. A hallucinated case citation, misinterpreted statute, or fabricated legal principle is not merely an inconvenience; it has the potential to materially mislead a client, a court, or opposing counsel. Unlike traditional research tools, which fail transparently (for example, returning no results), AI systems may fail convincingly, which is arguably more dangerous.
Empirical data now demonstrates that the issue of AI hallucinations in legal contexts is no longer hypothetical. A growing body of research and reported decisions indicates that courts across multiple jurisdictions are increasingly encountering fabricated authorities generated by AI tools. One dataset tracking these incidents has identified hundreds of proceedings globally involving unverified AI output, including both self-represented litigants and qualified practitioners. More recent reporting suggests a rapid acceleration in frequency, with over 50 cases involving fake AI-generated citations identified in a single month alone. Courts have responded with increasing severity, including fines, costs orders, and professional disciplinary referrals, with some matters involving dozens of fabricated authorities within a single filing. The trend is clear: what began as isolated incidents has evolved into a systemic risk within legal practice.
Beyond the courtroom, there is also a significant impact on clients and their perception of the legal process. Clients are increasingly conducting their own “research” using AI tools prior to, or alongside, seeking legal advice. Where that research produces incorrect or overly simplified conclusions, it can create unrealistic expectations about outcomes, timeframes, or legal entitlements. This often places practitioners in the difficult position of having to correct misinformation that appears, to the client, to be authoritative. In some cases, this may erode trust, particularly where the client perceives the lawyer’s advice as conflicting with what an AI system has confidently provided. More broadly, it risks undermining confidence in the legal system itself, as clients may struggle to reconcile the complexity and nuance of real legal processes with the overly certain outputs generated by AI.
It is therefore critical to emphasise that AI does not replace a legal practitioner, nor is it capable of doing so. The practice of law is not merely the retrieval of information; it involves the application of judgment, ethical obligations, strategic decision-making, and an understanding of context that extends beyond what is written. While AI can assist in generating drafts or identifying potential lines of argument, it cannot independently verify the accuracy of its outputs, nor can it take responsibility for them. Courts have made it clear that the duty remains with the practitioner to ensure that all material relied upon is accurate and properly sourced. In this sense, AI is best understood as a tool and like any tool, its effectiveness depends entirely on the skill and oversight of the person using it.
Looking forward, the challenge is not whether AI will continue to improve (it undoubtedly will), but whether users will develop the critical literacy required to use it appropriately. Blind reliance on AI outputs risks eroding professional standards, whereas informed and cautious use has the potential to significantly enhance productivity without compromising integrity.