In Mertz & Mertz (No. 3) a solicitor and counsel were referred to their regulators for failing to detect AI-induced errors in material edited by an employed paralegal.
Every lawyer must now consider whether clients, experts, counterparties and other team members may be using AI. Even if we don’t intentionally use it ourselves, we may still have professional obligations to identify and adapt to use by others.
“AI disaster” cases in which submissions full of inaccuracies were filed in court are now sufficiently commonplace that QLS doesn’t publicise each one.
Those cases tend to focus on the responsibility of the solicitor who produced the material and emphasise personal duties: if you sign it, you own it,1 and an identified “person responsible” must ensure that all cases cited are accurate. This derives both from the fundamental duty of competence and from PD 5 of 20252 (or analogous Practice Directions). The duty extends to both solicitors and counsel.
While most jurisprudence so far relates to so-called AI hallucinations, the same basic principle extends to all material produced in a team environment, whether AI generated or not. A solicitor is not a rubber stamp and must exercise professional judgment3 with respect to all material for which they take responsibility.
Such responsibility extends beyond the signatory of the document to everyone involved in the workflow and to those supervising them.4 It may capture even those who are not using AI tools themselves, or who were unaware that AI was being used at all.
A recent Family Court case5 illustrates the point. The facts, briefly:
- Solicitor G drafted a Summary of Argument and List of Authorities. The drafts were settled by Mr AX KC.
- The filed documents were found to contain AI-induced errors. The errors were identified, and corrected, by the party that had filed the documents.
- Solicitor G and Mr AX denied any use of AI on their part.
- Solicitor G stated that, on enquiry, her paralegal admitted using AI to “alphabetise the cases in the list of authorities and place them under headings”. This action was said to have introduced the errors.
- Outcome: the paralegal was dismissed; $36,955 in adverse costs was awarded, including a $10,000 personal costs order against Solicitor G; and Solicitor G and Mr AX were referred to their respective regulatory bodies.
With hindsight, a clear policy within the firm and better communication within the team (both solicitors and counsel) would have avoided much of the problem.
The Court also raised the concern that, because no evidence had been provided as to which AI tool was used or how it was used, material subject to the “Harman Undertaking” may have been processed by an insecure AI. That may also amount to a breach of s.114Q of the Family Law Act (an indictable offence prohibiting identification of parties to proceedings).
Supervisory responsibilities
That responsibility extends beyond the party whose name appears on the final documents, to their supervisor and, in some cases, to instructing solicitors, was made clear in two UK cases6 earlier this year.
Ayinde was a show-cause hearing relating to two incidents arising from different cases. In one (Ayinde) a pupil barrister drafted material containing hallucinated citations, and the instructing solicitors took no steps to investigate when the issue was raised. In the other (Al-Haroun) the client conducted research using AI, which the solicitor adopted.
In addition to the personal responsibilities of legal practitioners, the President, Dame Victoria Sharp, specifically addressed the issue of leadership, stating that heads of chambers and managing partners have a positive duty to ensure that AI use within their organisations is appropriate and that staff receive the necessary training and direction.7 This is entirely consistent with Australian guidance.8
Managing staff: what is appropriate training and supervision?
All firms with employees should have an AI policy which has been brought to the attention of staff and made the subject of appropriate training (see the QLS policy template). If the policy is a simple prohibition, the training can also be simple: “We don’t use AI at all, and this is why. Here is how you can tell if software is offering or supplying additional AI-enabled features”.
If your firm has elected to trial or deploy AI tools, a risk analysis and management strategy should have been created, articulating such issues as:
- Which tools may be used, and for what purposes?
- What data may be supplied, and how is it ensured that only permitted data is used?
- How is output checked for correctness and appropriateness?
- Who is responsible for each part of the verification process (including overall system supervision)?
Training in that policy is essential before any staff member uses AI for professional work.
It is good practice for all AI-generated drafts (or other first drafts, for that matter) to be watermarked to ensure that anyone subsequently using them is aware of the status of the document and whether it has been verified.
Confidentiality requirements
Any system outside the firm’s computing environment9 (whether for storage, AI & other processing or transmission) must be vetted to ensure appropriate data confidentiality and security.10 This is a major reason why AI use cannot be left to the discretion of individual staff.
Privacy impact and security assessments can be complex and are outside the scope of this article; however, more information is available from the QLS website.11 As a rule of thumb, any system available for “free” will harvest data which can be monetised by the provider. The free tiers of most AI tools, such as ChatGPT, Claude and Grok, permit data access which is inappropriate for professional use.12 Even where paid subscription accounts are used, careful analysis of where and how data access is to be permitted is necessary. Data privacy might not be enabled by default, and settings might need to be checked again after updates.
Data which is confidential to third parties or held under special access rules (e.g. subpoena or disclosure material subject to the “Harman Undertaking”,13 or “Inspection Only” Family Court material14) must not be accessed by any AI which has not been appropriately vetted for privacy compliance.15 This obligation extends to clients and experts in possession of such materials.
Clients
Clients should be warned not to load legal advice, court documents or discovery material into AI systems, especially free-tier subscriptions.
A client may waive privilege or breach court rules by doing so, and should be made aware of those repercussions.16
Clients should also know that hostile eyes may read their AI chat history. Subpoenas targeting the AI accounts of litigants are likely to become more common, and there is currently no basis to argue that legal questions asked of an AI are privileged.
Many clients will seek to use AI to understand legal processes or to sense-check advice. Provided that they understand the risks and limitations, that is a matter for them.
Harman Undertaking obligations extend to all recipients of disclosed material. Clients must understand these obligations and the consequences of breaking them.
Experts
Experts should be briefed with a summary of their obligations to the court, both generally17 and with respect to the use of Generative AI.18 These obligations vary between courts and jurisdictions.
A clear consensus has not yet emerged on whether or how experts should use AI. Absent specific Practice Directions, good practice requires that:
- Relevant19 AI use be disclosed, together with a description of the manner of use and which tool was used;
- Where applicable guidelines exist with respect to the use of AI within a profession, a link to or copy of those guidelines be provided, together with confirmation that they were adhered to;
- “Open”20 AI systems not be given access to embargoed or confidential information without specific consent or leave of the Court.
Counsel
Solicitors are generally entitled to rely on the expertise of counsel if properly instructed21 and are not obliged to duplicate work which has been briefed out. Balanced against this is our obligation to exercise independent judgment.22 In LSC v Krebs the Tribunal stated:23
“The respondent’s professional responsibility did not cease when counsel was retained; he had a continuing duty to apply his own professional judgment to the case, and to confer with counsel and the client in a meaningful and constructive way.”
In practical terms, this (in all likelihood) requires a solicitor to carefully consider Counsel’s drafted material, apply their own expertise and raise any concerns or queries, but not to independently verify every citation relied upon unless the solicitor is to sign and accept responsibility for the content.
It would be good practice for solicitors retaining counsel to address the question of Generative AI use and what verification processes are to be followed.
Where work is produced by a team, AI use by any person within it must be clearly communicated and responsibility for verification assigned.
In Ayinde the solicitor was not criticised for failing to identify the fictitious references in counsel’s drafting, but for their dismissive approach once the issue was raised.
In Mertz, counsel had not used AI himself but had signed submissions without verifying the citations.
In DPP v GR,24 criticism of the solicitors who filed erroneous material is probably best read as an observation that the duty to verify also applies to solicitors who drafted the material, rather than as a suggestion that solicitors filing counsel’s submissions must check every reference.25
- No.37 Artificial Intelligence in Legal Practice – Queensland Law Society ↩︎
- See also the Proctor article on PD 5/2025: “Emphasis on obligations in time of AI” ↩︎
- Professional Independence: see Fundamental Duty 4.1.4; Forensic Judgment: see ASCR r.17.1 ↩︎
- See ASCR Rule 37, the duty of supervision. ↩︎
- Mertz & Mertz (No. 3) [2025] FedCFamC1F 222 (28 November 2025) ↩︎
- Ayinde v London Borough of Haringey, ex rel Hamad Al-Haroun v Qatar National Bank and QNB Capital LLC [2025] EWHC 1383 (Admin) ↩︎
- Ibid, para 9. The Court identified the importance of systemic factors such as training, supervision and assignment of work, workload and marketing as potential issues the regulator would need to consider. ↩︎
- QLS Guidance Statement #37, n.1. For other states’ guidance of a similar nature see the Law Council’s AI hub: Artificial Intelligence and the Legal Profession – Law Council of Australia ↩︎
- For many firms nearly every document will spend part of its lifecycle in an external cloud-based system. There is nothing inappropriate in that: in many cases professionally managed systems are far more secure than a local server. ↩︎
- ASCR Rule 4, Duty of competence; Rule 9: Duty of confidentiality. ↩︎
- Artificial intelligence – Queensland Law Society ↩︎
- As a minimum the provider should expressly state that all user-entered data will be kept confidential and not used to train AI models (even if anonymised). These assurances are not credible unless detailed information is given concerning where data is processed and by whom. ↩︎
- The (so called) implied undertaking not to use material supplied under a compulsory court process for a collateral purpose. See: NSW Law Society Fact Sheet ↩︎
- See: Family Law Practice Direction: Electronic subpoena inspection | Federal Circuit and Family Court of Australia ↩︎
- See the NSW Supreme Court’s commentary in SC Gen 23, Para 9A. This is consistent with QLS Guidance Statement 37 which was reviewed by the Qld Courts’ AI reference group prior to publication. ↩︎
- This is by no means clear, nor is there any directly relevant jurisprudence as yet. The basis of the argument would likely be that the client had acted inconsistently with maintenance of the privilege; Mann v Carnell [1999] HCA 66 or that the information is now in the public domain; Glencore International AG v FCT [2019] HCA 26 (Note, Glencore is not a waiver case) ↩︎
- See UCPR Qld, Schedule 1C, if applicable, Amended Practice Direction 14 of 2024 (criminal cases), Federal Court Expert Evidence Practice Note, Family Court expert fact sheet (or specific guidelines linked therein as applicable.), SC Gen 23 (NSW Supreme Court). District Courts have generally adopted the same principles expressed in their corresponding Supreme Courts. ↩︎
- For example, SC Gen 23 prohibits expert use of AI to write reports without leave (subject to clause 6, which exempts use of certain products and incidental use): see cl.10. It also sets out requirements that apply once leave has been given. Similar requirements in Qld criminal proceedings are set out in para 16(l) of PD 14/24, see n. 17. For more general guidance to judicial officers on expert use of AI see “The Use of Generative AI: Guidelines for Judicial Officers” (Qld), 15 September 2025, para 36. ↩︎
- General guidance struggles with this issue of relevance. As AI becomes ubiquitous – such as features which spell check, grammar check and offer clarity suggestions in word processors – there is a judgment call in determining what is incidental and what potentially affects the substantive elements of material. For example, if a medical practice uses AI to capture and transcribe patient notes, is such use germane to a report later generated on the strength of such notes? It is suggested that until more clarity is available disclosure of all AI tools may be prudent. ↩︎
- For this purpose, an “Open” AI system means one in which the use of user-entered data to train AI models is not expressly prohibited, or in which storage and processing information is not available for verification. ↩︎
- Hellard & Anor v Irwin Mitchell [2013] EWHC 3008 (a negligence case); Edwards v Edwards [1958] 2 All ER 179 ↩︎
- See n.3 ↩︎
- Legal Services Commissioner v Krebs [2009] QLPT 11, citing Davy-Chiesman v Davy-Chiesman [1984] Fam 48 per May LJ. See also Yates Property Group v John Borland & Ors [1998] FCA 926 (a negligence claim; counsel failed to advise solicitors of alternate bases of argument. The Court found that instructing counsel did not absolve solicitors of professional liability, nor that of junior counsel when silk is involved.) See also White Industries (Qld) Pty Ltd v Flower & Hart [1998] FCA 806 per Goldberg J and the cases discussed therein (personal costs orders against solicitors; reliance on counsel); Ridehalgh v Horsefield [1994] Ch 205 at 216; DZW17 v Minister for Immigration (No.2) [2022] FedCFamC2G 905 per Given J at [54] (counsel seeking to rely on solicitor’s advice, a reversal of the usual scenario). ↩︎
- Director of Public Prosecutions v GR [2025] VSC 490; for further discussion on the issues raised and how risks can be controlled see: Emphasis on obligations in time of AI – Proctor ↩︎
- However, the wording was a little ambiguous: at [80] “Regrettable as it is to single out counsel and their instructing solicitors in this case … counsel must take full and ultimate responsibility for any submissions made to the court. To this end, it is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified. The same may be said for solicitors responsible for producing or filing court documents.” ↩︎

