
Is it ethical for lawyers to use AI tools like ChatGPT?


Simply put, the answer is yes.

There is no ethical impediment to the appropriate use of artificial intelligence (AI) tools in practice. In fact, it would be a failure on our part not to consider using emerging technology to enhance the efficiency of legal services.

As a cornerstone principle, we must accept that AI-driven platforms are no more than a tool, and we cannot delegate either all of the work or any of the responsibility. There is nothing new in this: drafting and research using AI is no different from delegating to a human. No matter how reliable the junior, a competent supervisor must exercise independent judgement and scrutiny before adopting the result. That is, after all, what the client is paying for.

As at early 2023, ChatGPT and other general-purpose language model AIs are not yet a safe pair of (virtual) hands. When asked for legal research or drafting, these platforms return work equivalent to that of a first- or second-year law student, and not the brightest in the class either. Much of the material will look convincing – certainly convincing enough to satisfy a layperson who uses it to draft legal documents or arguments. Given the significant limitations of the system as it stands, the output is likely to contain fundamental errors wrapped in convincing language.

For example, in February 2023, ChatGPT was invited to outline how a Queensland solicitor should assess testamentary capacity with reference to any appropriate guidelines, and to contrast the capacity test for an enduring power of attorney. The response accurately cited Banks v Goodfellow, summarised the relevant limbs, and referred to the ‘Capacity Assessment Handbook for Qld Legal Practitioners’.

It then renamed the Mental Health Act 2016 (Qld) as the ‘Mental Capacity Act (2016)’, extracted the test for consent to medical treatment from within that legislation, and presented it as the capacity test for an enduring power of attorney.1


In the coming months and years, courts are sure to be inundated with error-ridden, GPT-generated material from self-represented litigants, and a backlash seems likely. A practitioner seeking to excuse errors on the basis that “the AI did it” will no doubt receive short shrift. Costs orders for inadequate understanding and supervision of eDiscovery are now well established, and the same principles should apply.2

However, the ‘trough of disillusionment’3 before us with respect to the professional use of language-based AI should not obscure the fact that the world changed when this tool was released. To ignore the potential for disruption because of current limitations is like laughing at a baby T-Rex because of its little arms. The day is coming, and likely soon, when law firms that cannot make appropriate use of this technology will struggle to remain viable.

Robo-advice – legal services delivered without each item being checked and approved by a qualified human – is not yet feasible given the current state of the technology. Even if a much higher level of reliability were achieved, the concerns would not end there. Lawyers are not simply vendors of legal information and documents. We are officers of the court who must apply an ethical lens to each matter.

Our fundamental duties extend beyond competence, requiring honesty and fidelity to the administration of justice as well. It is doubtful that a machine will develop the nuanced understanding of context necessary to discharge these obligations, as the successful attempts to circumvent ChatGPT’s ethical boundaries show.4 So where do we start?

In my view, every solicitor should sign up for a ChatGPT account and use it for appropriate tasks. The potential, and the limitations, of the tool as it currently stands should fairly quickly become apparent. Naturally, careful scrutiny of all output is essential. The point is not any expectation that you will end up with client-ready material; the idea is to develop experience with the technology and the ability to be a fast follower when commercial applications move from theoretical to practical.

As there is no promise of confidentiality, client-identifying particulars should not be entered into the system. Firms should articulate the parameters of acceptable use and discuss with stakeholders how, if at all, generative AI will be used.


David Bowles is a Queensland Law Society ethics solicitor.

Footnotes
1 When the fundamental error was pointed out, the answer was immediately revised and a substantially correct version supplied. The speed at which even this beta-release tool can adapt is remarkable.
2 See, for example, Cabo Concepts Ltd v MGA Entertainment (UK) Ltd [2022] EWHC 2024 (Pat).
3 See the Gartner Hype Cycle methodology: https://www.gartner.com.au/en/methodologies/gartner-hype-cycle
4 See, for example, https://futurism.com/amazing-jailbreak-chatgpt
