Barely a week goes by without a case report1 of an unfortunate lawyer using an AI tool inappropriately and producing material based on fictitious citations.
The rule is fairly simple, and by now most practitioners should know it: If you sign a document referring to case law or commentary, you accept responsibility for the content.2 This includes an implied3 promise that you or a reliable proxy4 have checked the citations and confirmed that they say what your document suggests they say.
As AI becomes more ubiquitous, it may be less obvious that search or research software is using it. In several recent cases5 the explanation from an embarrassed practitioner who had tendered material with inaccurate content was that they had used search results from Google Scholar or similar and did not appreciate that these may now include an AI component.
The changing technology landscape requires a degree of self-education on the part of the profession. If you use a tool in practice, you must apprise yourself of at least the very basics of how it works and the degree to which it can be relied upon. As a rule of thumb, anything that creates a blended summary of information, whether of a single document or a search result, is probably using generative AI to do so.
Examples include using the “summarise this document” feature of PDF-viewing software, or copilot compilations drawn from your internal document library or from precedent banks attached to practice management systems.
Whether a draft was prepared by AI, trained pigeons or a junior clerk, or was cut and pasted from an online search, the responsibility principle still applies: If you sign it, you own it.
In a recent UK High Court decision6, Ritchie J made the point that firms and supervisors also carry responsibility for ensuring that junior practitioners understand their obligations and have clear guidance. The QLS has a free AI policy template available on its website. The template makes it clear to all employees that it is not acceptable to use AI without express approval.
When approving AI tools for use, the firm should consider how the tool will be used and ensure appropriate training and safeguards are in place to protect the quality of the firm’s work.
AI tools designed and tuned for legal use, with the capacity to source information from reputable and jurisdiction-specific repositories, are likely to be far more reliable than a random chatbot, but even these are prone to hallucination and error. A tool that has been reliable in one context may be considerably less so when asked a different question, or even the same question phrased in a slightly different way.
For suggestions on how to calibrate and ameliorate these risks, see the QLS Companion Guide to AI use.
Footnotes
1 A representative sample from Australia: Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95; Re Dayal [2024] FedCFamC2F 1166.
2 See paragraph 4.4 of the QLS Guidance Statement on Artificial Intelligence in Legal Practice.
3 Or explicit certification, as required in an increasing number of practice directions.
4 Where the work has been previously checked and verified by a human of appropriate experience it would be acceptable to rely upon this, although even then spot checks are essential to ensure that you can cite a basis for your belief that the person supplying drafts was reliable.
5 Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731; Frederick Ayinde, R (on the application of) v The London Borough of Haringey [2025] EWHC 1040 (Admin).
6 See Ayinde, n. 5, supra.

