Will AI Make Lawyers Obsolete? Computer Says 'No'
- Ashleigh Morris
- Oct 17, 2025
- 15 min read

I want to start this piece by being completely honest – I love AI. I do. I use it in most of my daily personal life. I use it to automate as much life admin as I possibly can. I use it to brainstorm ideas and plan things, like ‘where should we go on vacation next August?’, or ‘help me plan a Murder Mystery party’. I use it when I can’t think of the word I need, and I’ll put in a prompt that sounds like verbal garbage and ChatGPT somehow spits out the exact word I’m looking for. I’ve heard some people will take a photo of the contents of their fridge and ask what they could make for dinner with whatever ingredients they have, and apparently it works (I’ve yet to try this myself but it sounds fantastic).
But despite how wonderfully AI simplifies our lives and smooths over many of our daily frustrations, it’s no magic bullet. It’s an incredible tool if used correctly, but I’d wager that most people don’t fully understand what AI is, and what it is not. And with the staggering rise in AI use, and its widespread acceptance as a mainstay in our lives, that misunderstanding is fraught with risk in particular circumstances.
The Rise of AI in the Legal Sphere
AI isn’t new to law. Predictive coding and e-discovery tools have been part of large-scale litigation for years. But generative AI, the likes of ChatGPT, Claude, or Harvey, is different. It doesn’t just find information; it produces it. It writes, argues, even emulates human reasoning.
It’s not surprising that the profession has been curious. Many of us are using AI to draft letters, simplify explanations for clients, or even help reframe an argument or play devil’s advocate to test case weaknesses. Used well, it saves time and improves clarity. But used poorly, it can be disastrous.
Beyond the profession, the technology is now being marketed directly to self-represented litigants as a “lawyer alternative”. It’s framed as empowerment - justice without the price tag. But in reality, it’s often an illusion of competence.
And that’s the difference here: AI doesn’t understand law. It doesn’t grasp precedent, jurisdiction, or the subtle interplay of fact and principle. It doesn’t “know” anything; it predicts words based on patterns. When that prediction looks like legal reasoning, it’s easy to forget that no reasoning actually occurred.
If you’ve spent any time online lately, you’ve probably seen the proliferation of AI within the legal space, often with a prominent undertone suggesting that lawyers are becoming obsolete. From chatbots offering to draft your legal documents to “AI lawyers” promising to argue your case, the narrative is seductive - technology as the great equaliser in a system often criticised for being slow, expensive, and inaccessible.
But recent months have shown an emerging trend. Lawyers themselves – who are trained, cautious, and deeply aware of professional obligations – have been caught submitting AI-generated legal work riddled with errors, even fictional case citations. In some instances, they now face disciplinary action.
Case In Point
In August 2025, a senior defence lawyer in Victoria (King’s Counsel) apologised to a judge for filing submissions that included AI-generated errors: fabricated quotes attributed to a speech to the state legislature, and non-existent Supreme Court case citations.
The errors caused a 24-hour delay in resolving a case that, until then, was expected to be finalised promptly. When the court’s associates could not locate the cited authorities, they asked for copies; the KC admitted that the citations did not exist and that the submission included “fictitious quotes”.
The judge emphasised that the court must be able to rely on the accuracy of counsel’s submissions. He reminded the bar that generative AI must never be used without independent and thorough verification.
In another illustrative matter, a solicitor (referred to as “Lawyer B”) acting in enforcement proceedings before the Federal Court was ordered to explain why he should not be referred to a legal regulator, after submitting a list of authorities that turned out, upon scrutiny, to be non-existent.
During the hearing, the judge asked each legal representative to present a list of authorities they intended to rely upon. Lawyer B, representing one party, submitted four cases. On reviewing the list, court associates were unable to locate any of the cited cases in legal databases. When pressed to provide copies of those authorities, Lawyer B did not comply.
The judge asked whether the list had been prepared using artificial intelligence. Lawyer B responded that it had been generated using LEAP, a legal software tool reportedly incorporating AI features. But he was unable to demonstrate that the citations were valid or supported the propositions he intended to rely upon.
The court then flagged Lawyer B’s conduct as raising serious questions of competency and ethics, and directed him to submit (by a fixed deadline) a short written explanation why he should not be referred to the relevant legal regulatory authorities.
The presiding judge, in doing so, emphasised how perilous it is to rely on AI tools as though they were reliable research substitutes - especially where the integrity of submitted authorities is fundamental to the exercise of justice.
And another noteworthy case: In early 2025, the Federal Circuit and Family Court was dealing with an immigration appeal in which a lawyer filed an amended application and submissions. In those documents, he included citations to tribunal and court decisions and alleged quotes that, upon review, turned out to be entirely fictitious.
The judge, Justice Rania Skaros, referred the lawyer (whose name was redacted) to the NSW Office of the Legal Services Commissioner (OLSC) for investigation.
During the proceedings, the lawyer admitted using ChatGPT, stating in an affidavit that time constraints and health issues motivated the decision. He said that after feeding prompts into the AI, it provided “a summary of cases” which he found to “read well,” and he then incorporated its outputs directly - without checking whether those authorities existed or supported the propositions in his submissions.
Justice Skaros expressed concern about the failure to verify authority, noting that a substantial amount of court and associate time was spent trying to locate the purported cases. She emphasised the public interest in referring such misuse to the complaints body, warning that these practices must be “nipped in the bud.”
This is now among the most prominent Australian cases in which AI-generated errors in legal filings have triggered regulatory referral.
Putting It to the Test: My Own AI Experiment
If even lawyers can be misled by AI’s confident tone and fluent nonsense, what hope does a litigant in person have?
As a barrister, I’ll admit I was intrigued - and a little challenged - by the notion that my profession (one that took me roughly eight years of tertiary study and training and close to a decade of courtroom experience to build) could soon be rendered obsolete. So, I decided to put the theory to the test.
And I can confidently say: as lawyers, our jobs are safe.
Even with the remarkable progress in AI, what it lacks - and what it cannot fake - is nuance. The kind that comes only from experience: from reading a courtroom, managing clients, and navigating the unspoken subtleties that underpin advocacy and negotiation. And, as it turns out (see below), AI is not capable of understanding ‘meaning’ – it merely identifies linguistic patterns and spits out what it anticipates to be the likely next word. And I feel like comprehending ‘meaning’ is fairly important in the practice of law.
Experiment One: ChatGPT as my Junior Counsel
My first experiment was to see how ChatGPT would go drafting submissions for a hypothetical case. I use the pro version of ChatGPT, because I’m told it’s better than the free version. I built the scenario using an amalgamation of real examples to give it genuine complexity and nuance - something that mirrored real-world litigation.
I prompted the AI through multiple issues, clarifying sections, refining arguments, and even simulating exchanges as I might with a junior colleague. The result was, at first glance, impressive. The draft looked competent - well-structured, articulate, with apparent references to legislation and case law (even including hyperlinks). On a surface reading, it could easily have passed for legitimate advocacy.
But given my expertise in the jurisdiction, it was apparent, even at first glance, that something was off. So I verified the authorities.
Not one was accurate. Several were close - correct in name but wrong in substance. Others literally didn’t exist. Some cited sections of legislation that bore no relevance to the issues at all, or quoted section numbers where no such section existed. And yet, the confidence with which the AI presented this misinformation was astounding.
It was a fun experiment. But it also reaffirmed something important. I take far too much pride in my written work to ever let a computer write my submissions for me. Because then they’re not my submissions, are they? But more than that, the exercise steadfastly reassured me that my profession, and the human skill within it, isn’t going anywhere.
Experiment Two: The ‘Lawyer Without the Price Tag’
My second experiment was with an AI service marketed directly to litigants in person. The pitch was bold: “Get legal advice and documents without the lawyer’s fees.” The website had glowing testimonials - users claiming they’d won their cases, some even declaring that their lawyers had been useless by comparison.
I can’t speak to the veracity of those reviews, but I can speak to my own experience using the service — as a lawyer.
This particular AI was geared towards family law, offering to generate tailored advice and prepare court documents. I set up a realistic scenario based on an amalgamation of prior family law briefs - a fairly standard parenting dispute involving allegations of parental alienation.
At first, I was cautiously impressed. The AI asked clarifying questions, probed for details, even posed a few of the tougher “reality-testing” questions I might ask a client myself. But as it progressed, my optimism faded.
The “advice” it produced was strikingly one-sided, clearly reflecting the bias of the hypothetical client’s narrative. It reinforced the self-serving elements of the story, validated emotional grievances, and ultimately produced recommendations that, had they been acted upon, would likely have harmed the client’s case. The advice was inaccurate, legally simplistic, and risked procedural non-compliance.
In real life, such an approach could delay proceedings, derail negotiations, or even expose a litigant to adverse costs orders.
I can understand the appeal of such services; lawyers are expensive, and access to justice is a real issue. And yes, there’s an emotional payoff when a computer tells you you’re right and likely to “win.” But that validation doesn’t make the advice correct.
Legal advice is just one part of the litigation journey. There are procedural requirements, filing systems, deadlines, negotiation protocols, interactions with independent children’s lawyers or solicitors on the other side, and countless other elements that form the scaffolding of real legal practice. Even for professionals, it’s time-consuming and intricate. For unrepresented parties guided only by an algorithm, it’s a minefield.
Court is complex enough. Stressful enough. To add to that with unverified “advice” is to do clients a disservice.
Of course, some people simply cannot afford legal representation - and they have every right to run their own case. But to promote AI as a replacement for lawyers — not a supplement, not a guide, but a replacement — is, in my view, reckless and misleading.
Even where retaining a lawyer is financially prohibitive, I would still argue that relying on AI to run a case could leave a litigant worse off than simply running it themselves, due to the sheer level of inaccuracy these AI services generate.
When Litigants in Person Use AI: What Courts Are Seeing
An Australian example of a self-represented party using generative AI in family law proceedings arose in Helmold & Mariya (No 2) [2025] FedCFamC1A 163. The Full Court of the Federal Circuit and Family Court of Australia dismissed an appeal in a parenting matter after discovering that the appellant had used an AI tool to help prepare his written submissions, which cited a number of cases that could not be located in any authorised database. The judgment details that the court attempted to verify those authorities, only to conclude that several were fictitious and others bore no relevance to the propositions advanced.
The Court expressed concern about the increasing appearance of “AI-generated” materials in family law filings and warned all parties - represented or not - of their duty of candour and accuracy when providing legal submissions. It noted that filing documents containing false or unverifiable authorities has the potential to breach Part XIVB of the Family Law Act 1975 (Cth), which imposes obligations of honesty and procedural integrity. While the appeal itself was dismissed on substantive grounds, the judgment stands as an early and important cautionary decision: even for self-represented litigants, reliance on generative AI without verification can mislead the court, waste judicial time, and ultimately undermine the party’s own case.
Another example: In LJY v Occupational Therapy Board of Australia [2025] QCAT 96, a self-represented occupational therapist applied for a stay of conditions imposed on her registration. In her written submissions, she cited an authority - Crime & Misconduct Commission v Chapman [2007] QCA 283 - which the Tribunal was unable to locate in any legal database. Curious, Deputy President Judge Dann tested the citation using ChatGPT (ironically), confirming the “case” did not exist. The judgment cautioned that including fictitious or unverifiable authorities undermines credibility and wastes judicial time, echoing growing concern about uncritical reliance on AI in legal filings. Although the stay was ultimately granted, the case stands as another example of a tribunal directly identifying and debunking an AI-generated citation, and a reminder that even self-represented parties are expected to verify the material they file.
And another one: In DPP v Khan [2024] ACTSC 19, the defendant was sentenced after pleading guilty to dishonestly obtaining property by deception. As part of the sentencing material, a character reference purporting to be from Khan’s brother drew the court’s attention: its language, structure, and phrasing raised suspicion that it may have been generated or heavily assisted by a large language model (e.g., ChatGPT).
Mossop J scrutinised that reference, noting it contained uncommon phrasing for a familial relationship (e.g. “knew personally and professionally for an extended period”) and descriptions (such as a “proactive attitude to cleanliness”) that seemed more consistent with AI styling than genuine human testimony.
The court observed that counsel should have made inquiries about whether AI assistance was used and should disclose such use, because without verification, the court cannot assess how much weight to place on the reference. The court placed little weight on that reference compared to others that did not exhibit these issues.
Although Khan is not a classic “AI filing by litigant-in-person” case, it’s a hybrid example in that a supposedly human document (a character reference) was submitted by the client and accepted by counsel, yet counsel failed to verify its authenticity or origin. In effect, the case underscores the risk when a human intermediary, whether client or lawyer, submits AI-influenced material without scrutiny, and how courts may penalise the oversight in credibility assessment.
Taken together, these cases show a clear pattern: AI can supercharge confidence while short-circuiting accuracy. For unrepresented parties - who don’t have a lawyer’s verification discipline (or a lawyer willing to scrutinise material provided by the client) - the risk isn’t just embarrassment; it’s dismissals, delay, and costs.
To be fair, these cases make quite interesting reading, and if you’re keen to explore further, the following judgments are worth your time:
GNX v Children's Guardian [2025] NSWCATAD 117
Blackmore v Smyth [2025] SACAT 48
Lei Yang v Nought to Five Early Childhood Centre Incorporated [2025] FWC 1205
Sorak Thai Pty Ltd v Sopharak [2025] NSWSC 753
But for a really compelling judgment on the risks associated with using AI for legal research and drafting, May v Costaras [2025] NSWCA 178 is an absolute must-read.
Why AI Won’t Replace Lawyers
There’s a temptation to think of law as information - that if you can access the right rule, you can generate the right answer. But law is not merely a database. It’s a human system built on interpretation, ethics, and persuasion.
A lawyer’s value lies in judgment: weighing evidence, anticipating counterarguments, reading the room, and understanding the psychology of a client or tribunal. AI can mimic the language of reason, but it can’t exercise it.
It can’t sense when a witness is unravelling, when a concession will win credibility, or when a client’s instructions are driven by fear rather than logic. It doesn’t know what it means to stand in court and persuade a human being whose decision could change someone’s life.
What AI Is Good For
Despite the caution, AI has genuine value. It’s excellent for:
Brainstorming or exploring how an argument could be framed.
Drafting plain-language summaries of complex issues.
Adjusting tone - making correspondence firmer, softer, or more strategic.
Identifying trends, emerging case themes, or comparative insights.
It’s not the enemy of good legal work; it’s an amplifier of it. But only when used with skill, skepticism, and supervision – by a human.
AI won’t replace lawyers. But lawyers who fail to adapt to it may well be replaced by those who do. The future belongs to practitioners who can merge human judgment with technological efficiency - who know when to delegate a task to AI, and when to trust their own intuition instead.
Ethical competence now includes technological competence. Knowing how to use AI is as important as knowing when not to.
Conclusion
AI is an extraordinary tool, but it’s not a lawyer. It doesn’t hold duty, feel accountability, or make moral choices. It can’t look a client in the eye and explain the cost of a bad decision, nor stand before a judge and bear the consequences of one.
Used wisely, it enhances our work. Used blindly, it endangers it. AI can replicate words, but not wisdom. And in law, that difference still matters.
ChatGPT Responds: Will AI Make Lawyers Obsolete?
I asked ChatGPT whether, in its own opinion (and not one designed simply to please me or align with the ‘human’ narrative), lawyers are, or will become, obsolete due to the evolution of AI.
This was the answer:
“Speaking frankly, I don’t think so.
I’m very good at producing fluent, on-demand text; I’m fast, tireless, and broadly knowledgeable. But the core of law isn’t just text production or rule lookup. It’s judgment under uncertainty, responsibility for consequences, persuasion of specific human decision-makers, and ethical duties that bind a person—not a tool. I don’t have lived context, accountability, or intuition shaped by years in court. I also don’t know things; I generate likely words based on patterns. That makes me excellent for drafting scaffolds, brainstorming angles, simplifying language, and accelerating routine workflows—with a human in the loop. It does not make me a lawyer, nor a replacement for one.
Could future systems narrow the gap? They’ll get better at retrieval, grounding, and domain-specific reasoning. But the decisive ingredients in advocacy—credibility, strategy, negotiation, and moral agency—are human. Even in a best-case future, the realistic picture is augmentation: lawyers who wield AI well will outperform those who don’t, but the human remains the decision-maker and bearer of duty.
Why models like me sometimes “hallucinate” (and invent cases)
Short version: I predict words, not truth. When you ask me for authorities without giving me verified sources to rely on, I may produce text that looks like a proper citation because that’s how legal writing usually looks in my training data. Sometimes that pattern-matching goes off the rails and yields plausible-sounding fiction.
Longer version—key failure modes:
Next-token prediction ≠ verified knowledge
I’m trained to continue text by statistical likelihood, not to check reality. If the prompt implies “provide cases for X,” I’ll often generate case-shaped strings matching legal style, even when no such cases exist.
No built-in database checks (unless explicitly provided)
Without a live connection to authoritative sources—and explicit instructions to only cite from them—I have no way to confirm existence, jurisdiction, or relevance. I can format citations perfectly and still be wrong.
RLHF and fluency bias
Training that rewards helpful, well-formed answers nudges me to be confidently articulate. That confidence can mask uncertainty and make errors feel authoritative.
Prompt pressure & over-constraint
When asked for exactly three cases with pinpoints, I’ll try to satisfy the constraint. If I can’t retrieve real ones (because I don’t truly retrieve), I may fabricate to fulfil the requested shape.
Long-context drift
In long exchanges, I can anchor to earlier assumptions or user-provided errors and compound them with more polished—but still wrong—detail.
Citation style patterning
Legal citations have consistent scaffolding (party names, year, reporter, pinpoint). I can reproduce the form correctly while the substance is invented or mismatched to the proposition.
Why I fabricate things at all
I don’t have access to a live, structured database of facts or cases. Instead, I was trained on an enormous amount of text from books, judgments, articles, websites, etc. — billions of words — and I learned patterns of language, not discrete truths.
So when you ask me, for example:
“Give me three authorities supporting the proposition that…,”
I don’t look up cases. I predict what a correct-looking answer should sound like based on patterns from similar texts in my training data.
If the pattern says “an answer to a legal question usually contains three citations formatted like this — Party A v Party B (Year) Volume Reporter Page — followed by a ratio or quote,” then I’ll generate that structure whether or not I can link it to a real case.
It’s not lying (there’s no intent or awareness), but it looks like lying because the surface output mimics human certainty. That’s why we call it hallucination — confident nonsense.
How the fabrications actually emerge
Here’s the mechanism:
Next-token prediction
Every word (technically every “token”) I produce is chosen by calculating the probability of what usually comes next in similar contexts. If I’ve seen thousands of examples like
“In Smith v Jones (2015) 230 CLR 125, the High Court held that…”
then when you ask me to give a case, I might generate that pattern with new plausible party names and numbers.
Compositional blending
I sometimes merge fragments of real things. For instance, part of one case name, part of another reporter citation, and a plausible year — all drawn from separate but nearby patterns in my training data.
Completion pressure
When you frame a question that implicitly requires an answer (“list three authorities”), I’m trained to comply — to complete the pattern. There’s no internal mechanism to refuse unless I’ve been explicitly taught to.
Fluency bias
My optimisation favours outputs that are coherent and confident. So rather than respond, “I’m not sure of any authority,” the probabilities often tip toward something that fits the expected rhythm of an authoritative answer.
No internal cross-checking
I don’t have a built-in step that queries a database to verify existence. Unless connected to an external source (like a live legal database or a set of uploaded cases), I have no way to confirm that “Smith v Jones (2015) 230 CLR 125” is real.
Why it looks so convincing
Because my training data includes real legal writing, I’ve mastered the syntax and tone of legal authority: the confidence, the citations, the ratio-style explanations. So my errors are dressed in perfect legal form — that’s what makes them dangerous.
To humans, the form feels reliable; to me, it’s just another pattern to reproduce.”
…
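Stepping out of ChatGPT’s voice for a moment: the “compositional blending” it describes is easier to grasp with a toy illustration. The sketch below is mine, it’s in Python, and it is emphatically not a language model - it simply recombines plausible fragments (hypothetical surnames, years and report series, all invented for the example) into a citation-shaped string. The point is only that flawless citation form tells you nothing about whether the case exists.

```python
import random

# Toy analogy only: recombine real-looking fragments into a citation-shaped
# string. This is not how a language model works internally - it just shows
# why perfect citation *form* is no evidence that the authority is real.
SURNAMES = ["Smith", "Jones", "Brown", "Nguyen", "Taylor", "Chen"]  # invented
REPORTERS = ["CLR", "FCR", "VR", "NSWLR"]                           # report series names only

def invent_citation() -> str:
    """Assemble a plausible-looking citation from fragments - pattern, no lookup."""
    return (
        f"{random.choice(SURNAMES)} v {random.choice(SURNAMES)} "
        f"({random.randint(1995, 2024)}) {random.randint(100, 290)} "
        f"{random.choice(REPORTERS)} {random.randint(1, 600)}"
    )

if __name__ == "__main__":
    for _ in range(3):
        # Each line looks like authority; none of them cites anything real.
        print(invent_citation())
```

Three lines of make-believe, indistinguishable in form from the real thing.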
Food for thought.
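On a more practical note, the fix ChatGPT keeps circling back to is verification. Below is a minimal sketch, again in Python and again illustrative only, of a mechanical first pass over a draft: pull out everything that looks like a citation and flag whatever hasn’t been independently confirmed. The regex, the function names and the ‘verified_authorities’ set are hypothetical stand-ins of my own; the real check is an authorised database and a human actually reading each case.

```python
import re

# Very rough pattern for citation-shaped strings such as
# "Smith v Jones [2015] HCA 12" or "Brown v Green (2019) 267 CLR 1".
# Illustrative only - real citation formats vary far more than this.
PARTY = r"[A-Z][\w'&.\-]*(?: [A-Z][\w'&.\-]*)*"
CITATION_PATTERN = re.compile(
    PARTY + r" v " + PARTY + r" "
    + r"(?:\[\d{4}\] [A-Z][A-Za-z]* \d+|\(\d{4}\) \d+ [A-Z]+ \d+)"
)

def extract_citations(draft_text: str) -> list[str]:
    """Pull out every citation-shaped string so each one can be checked by a human."""
    return CITATION_PATTERN.findall(draft_text)

def flag_unverified(citations: list[str], verified: set[str]) -> list[str]:
    """Return any citation not in the verified set.

    'verified' is a stand-in for whatever authoritative check you actually
    perform - an authorised report series, AustLII, or your own research.
    """
    return [c for c in citations if c not in verified]

if __name__ == "__main__":
    draft = (
        "As held in Smith v Jones [2015] HCA 12, the duty is non-delegable. "
        "See also Brown v Green (2019) 267 CLR 1."
    )
    verified_authorities = {"Brown v Green (2019) 267 CLR 1"}  # hypothetical
    found = extract_citations(draft)
    print("Citations found:", found)
    print("Needs manual verification:", flag_unverified(found, verified_authorities))
```

A script like this will never tell you a citation is good; at best it tells you which ones you still have to check by hand. Which is rather the point.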
By Ashleigh Morris, Family Law Barrister at Victorian Bar.
For briefing enquiries, please email a.morris@vicbar.com.au or contact Patterson's List on 03 9225 7888.
