
You can’t lick a badger twice

Despite many warnings from the courts, and decisions [1] about the dangers of using AI based on large language models for drafting court documents or providing case precedents, the message does not seem to be getting through.

In a recent ruling from California, the court ordered lawyers to pay $31,000 in compensation to their opponents after they used AI to draft documents that contained non-existent case authorities.

One of the issues appears to be a failure to realise that this kind of AI does not just make some things up; it makes everything up, in the sense that it creates an entirely new piece of text that has never been seen (or reviewed) by anyone before.

These sorts of AI do not cut and paste from published judgments, textbooks or precedent sets; they create material out of whole cloth, based on the information on which they have been trained.

Yes, it is highly informed text, and will be mostly right most of the time – if it were constantly and obviously wrong, nobody would use it.

Instead, just as a slot machine pays out regularly enough to give the players the illusion that they are winning, this kind of AI is right so often that lawyers can begin to trust it. Unfortunately, it can get things wrong – and, when it cannot find an answer or a case authority, it might just create one.
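To see why this happens, consider a minimal, purely illustrative sketch (a toy example written for this article, not the code behind any real product): a language model repeatedly samples the next word from probabilities learned during training, and at no point does it look anything up or check it against a source.

```python
import random

# Toy "learned" probabilities: for each preceding word, how likely each next word is.
# A real large language model learns billions of such weights; none of this is retrieval.
learned_probs = {
    "the":   {"court": 0.5, "authority": 0.3, "respondent": 0.2},
    "court": {"held": 0.6, "found": 0.4},
    "held":  {"that": 1.0},
    "found": {"that": 1.0},
    "that":  {"the": 1.0},
}

def generate(start, length=8):
    """Generate text one word at a time by sampling from the learned probabilities."""
    words = [start]
    for _ in range(length):
        options = learned_probs.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)  # no lookup, no citation check - just a plausible next word
    return " ".join(words)

print(generate("the"))  # e.g. "the court held that the authority ..."
```

Nothing in that loop distinguishes an accurate statement from an invented one; a fabricated case name is produced by exactly the same process as a real one, which is why it reads just as confidently.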

This is clearly – and sometimes humorously – demonstrated by Google’s ‘AI overview’. As reported in New Scientist [2], historian Greg Jenner discovered that typing any random phrase into Google and adding the word ‘meaning’ prompted the AI to make up a meaning if one didn’t exist.

Thus, the AI claims that the nonsense phrase, “you can’t lick a badger twice” means “you can’t trick or deceive someone a second time after they’ve been tricked once”. Google’s AI also claimed that “you can’t run a mile without hitting it with a hammer” is often used as a motivational phrase.

Those results are largely harmless and a bit of a laugh – but turning such inventive technology loose on documents to be filed in court without oversight is neither.

Lawyers need to understand what technology is doing before utilising it (and keep in mind that, when it comes to large language models, how they actually arrive at their answers is not always clear).

AI-based tools promise to be a powerful asset for lawyers if used correctly, but their limitations and faults need to be front of mind, and their work needs to be checked because, ultimately, the practitioner is responsible. Lawyers utilising them need to be aware of what they are – and, most importantly, what they are not: an adequate substitute for the work of a real lawyer.

Footnotes
[1] Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95; Handa & Mallick [2024] FedCFamC2F 957
[2] Feedback, New Scientist, 10 May 2025, No 3542
