
Emphasis on obligations in the time of AI


The Supreme Court of Queensland’s new AI Practice Direction emphasises that long-standing professional obligations do not change when using new technologies.

The Court flags the critical importance of individual responsibility and – commensurate with being treated as a competent professional – indicates that sanctions can be expected when that responsibility is not met.

What is Practice Direction 5 about?

SC 5/25 addresses the growing use of artificial intelligence in legal practice by emphasising clear accountability standards while avoiding prohibitive restrictions.

The Direction acknowledges AI as a legitimate litigation tool but emphasises the critical need for human oversight and verification.

The express scope of SC 5/25 is limited to submissions; however, the underlying regulatory philosophy would apply equally to any other use of AI, such as the preparation of evidence and affidavits.

The two primary risks being addressed are:


• AI “hallucination” that can produce inaccurate or misleading responses in what looks to be a well-written and authoritative document; and
• the more subtle, but probably more damaging, propensity for humans to fail to exercise independent professional judgment when using automated systems (“automation bias”).

The Direction emphasises the potential for such errors to mislead the Court and undermine the administration of justice.1

While not as serious as a deliberate misrepresentation, the net effect is similar: sloppy work corrodes the quality of outcomes and public confidence in institutions, and generates significant extra costs.

The underlying message: technological efficiency cannot come at the expense of accuracy, nor does the shift to workflows involving multiple parties and systems permit the abdication of individual professional responsibility.

Responsibility and accountability framework – a human in the loop

Submissions must identify the specific practitioner responsible, distinguishing between firm representation and personal responsibility.

Where a submission is written, it must expressly name the responsible individual; where it is oral, the person making the submission impliedly assumes that role.

This framework operates within existing professional and ethical obligations under the Barristers’ and Solicitors’ Conduct Rules, reinforcing rather than replacing established standards.

Concurrent responsibility of supervisors and other parties

Emphasis on the personal responsibility of the verifier does not alter the fact that other parties in the equation also have obligations.

Immediate supervisors and Legal Practitioner Directors remain responsible2 for ensuring that firm systems operate correctly, that staff are using the right tool for the right job, and that staff have been trained and supported in their use.

As the High Court of England and Wales recently made clear3, while primary responsibility rests with the practitioners concerned, supervisors are not absolved of their own obligations, whether in checking work or in managing the firm overall.

Instructing solicitors must also check and consider submissions and settled material supplied by counsel4. We are not simply passengers on the bus, and cannot outsource our professional or ethical obligations to counsel, even the most senior5.

To an extent, parties also have a responsibility to consider, and if necessary correct, errors by opponents and others where ignoring them would permit the court to fall into error.6

Solicitors should also ensure that any proposed use of AI by experts7 and witnesses8 is canvassed in advance.

In a recent Victorian case9, the Court was critical10 of all parties, including a prosecutor who had failed to detect fictitious authorities in material filed by the defence, although to some extent that criticism relied upon the specifics of the case and the fact that the prosecutor had adopted the defence’s conclusions.

An emerging pattern of over-reliance and lack of understanding

AI-hallucination cases are now sufficiently common as to be unremarkable. While failed AI use in the courtroom attracts the most attention, perhaps due to the immediate and public nature of the consequences, anecdotal reports indicate that transactional and ex-curial AI use is leading to problems as well.

Consequences – even for inadvertent error – have been significant (if rather inconsistent).

Consequences so far: costs and professional sanctions

Dayal11 is of note mainly because it was the first Australian case in which a referral to a regulator has been concluded.

The facts mirror those of many similar incidents. Mr Dayal, a Victorian solicitor, submitted a list and summary of authorities to the Federal Circuit and Family Court that contained entirely fabricated case citations generated using AI-based legal software.

Notably, he had used what he thought to be a reliable law-specific AI, had swiftly accepted responsibility when the problem came to light, and had cooperated with the regulator.

Notwithstanding this, the Victorian Legal Services Board imposed12 comprehensive restrictions on Mr Dayal’s practice, including loss of principal practice rights, a prohibition on handling trust money, mandatory closure of his own practice, and restriction to employee solicitor status only.

The sanctions extended to a two-year supervised practice requirement with quarterly reporting obligations for both the practitioner and supervisor.

In Murray13, Justice Murphy ordered a Melbourne law firm to pay indemnity costs after discovering fabricated citations in court documents.

Justice Murphy’s decision to publish detailed reasons served an educational purpose for the profession, highlighting systemic failures in supervision and quality control. The personal costs order against the solicitors demonstrated that AI-related misconduct carries serious financial consequences beyond regulatory sanctions.

Similar cases from Western Australia14 (referral to disciplinary regulator and imposition of immediate personal costs order), New South Wales15 (referral to regulator) and Queensland16 show that this is not an isolated instance.

Managing risk: some practical guardrails

QLS Guidance Statement 37 suggests some measures that reduce a firm’s exposure to professional and economic consequences.

The starting point is to establish a clear AI policy within the firm and communicate it to staff. Attempts at blanket prohibition are likely to backfire: the potential for efficiency gains means that refusing to permit appropriate AI use will hinder staff in meeting KPI targets and balancing work and life commitments.

Even “approved” AI tools will require guardrails. Decide what can and cannot be done with AI assistance, and establish controls to mitigate the identified risks.

For example: where AI is used in drafting or research, the output should be watermarked so that subsequent users of the material clearly understand its provenance. As verification steps are undertaken, these should also be recorded on the face of the document.

Firms must implement mandatory human verification protocols using authoritative legal databases rather than relying on AI-generated sources. Citation checking requires independent verification against the original sources.

Supervision frameworks for junior practitioners must account for AI use, ensuring that inexperienced practitioners do not rely on AI without adequate oversight. It cannot be assumed that lawyers appreciate the inherent unreliability and consequent risk of AI use.

Effective human supervision requires that the human has:
• sufficient understanding of that area of the law to identify errors;
• access to the background facts upon which the legal theory being put forward rests;
• time to assimilate the two; and
• an organisational culture that does not shoot the messenger who tells firm management that its expensive new system is not working reliably.

In many contexts this requires multi-level supervision: the first layer checks the machine output; the second holds the verifiers to account and accepts overall responsibility for the effectiveness of firm workflows.

As in most cases of negligence or professional default, the factors leading to AI problems are not new: distraction, overwork, health or personal problems, and under-resourced or high-maintenance clients are all consistent features.

1. For further information on automation bias, mitigations and its effects in other professional contexts see: L Kahn, E Probasco and R Kinoshita, AI Safety and Automation Bias: The Downside of Human-in-the-Loop, Center for Security and Emerging Technology, Georgetown University, 2024; M Vered, T Livni et al, ‘The effects of explanations on automation bias’, Artificial Intelligence 322 (2023), accessed 26/9/25; K Goddard, A Roudsari and J Wyatt, ‘Automation bias: a systematic review of frequency, effect mediators, and mitigators’, National Library of Medicine, 2011.
2. Legal Profession Act 2007 (Qld) s 117. See also the QLS Guide to Appropriate Management Systems.
3. Ayinde v London Borough of Haringey; Al-Haroun v Qatar National Bank & Ors [2025] EWHC 1383 (Admin).
4. One of the parties referred by the Court for disciplinary action in Ayinde (ibid) was the supervising solicitor at the legal charity that had instructed the barrister who filed the erroneous material.
5. See, for example, White Industries (Qld) Pty Ltd v Flower & Hart (1998) 156 ALR 169, in which a solicitor filed an unmeritorious claim following close consultation with a very senior QC. For full background and analysis see the (ever useful) S Warne, Australian Professional Liability Blog.
6. This is a complex issue, requiring a balance between partisan duties to our client and our duties to the Court and the administration of justice: contrast Fundamental 3.2.1 (duty to the Court) with ASCR r 19.3 (no duty to correct an error in a statement made by an opponent; cf Myers v Elman [1940] AC 282), r 19.2 (duty to correct our own errors), r 19.11 (duty to immediately inform the Court of a misapprehension as to the effect of an order), rr 29.12.1 and 29.12.3 (special duties of prosecutors) and r 30 (duty not to exploit errors).
7. Butler v National Disability Insurance Agency (NDIS) [2025] ARTA 1579, in which Senior Member De Villiers discounted the weight given to expert reports because the use of AI in producing them had introduced sufficient errors to justify that discount: see [75] et seq.
8. DPP v Khan [2024] ACTSC 19, in which Mossop J considered that it was part of counsel’s role to detect the use of AI by a character witness and to make appropriate enquiries.
9. Director of Public Prosecutions v GR [2025] VSC 490.
10. At [76]: “At the risk of understatement, the manner in which these events have unfolded is unsatisfactory.”
11. Dayal [2024] FedCFamC2F 1166.
12. It is possible that this was a negotiated outcome in which the practitioner, recognising that the issue which landed him in hot water was a symptom of a deeper problem, offered to step back from the responsibility of sole practice. There is, however, no mention of this in the published statement of the VLSB.
13. Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731.
14. JNE24 v Minister for Immigration [2025] FedCFamC2G 1314.
15. Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95.
16. QWYN and Commissioner of Taxation (Taxation and business) [2025] ARTA 83. It is not clear from the report whether the Applicant was represented; however, there is reference to materials filed on their behalf.
