
Hands up who’s using AI?

[Cartoon caption: "In 2023, ChatGPT passes the New York bar exam and is elected Dean of Harvard."]

Recently, United States District Judge Brantley Starr in the Northern District of Texas issued a standing order requiring attorneys to file documents certifying that:

  • they did not use generative AI in drafting their court filings, or
  • if they did use it, the output was checked for accuracy by a human in the usual way.

In a similar vein, US Magistrate Judge Gabriel Fuentes in the Northern District of Illinois required parties to disclose any use of AI.1 That order has triggered a debate as to whether such requirements are over the top, exactly what is needed, or somewhere in the middle.

If nothing else, having the debate is a good thing, although it needs to be tempered by the fact that broad, blanket orders might be difficult to comply with: many lawyers may be unaware that some of the tools they already use are AI-based. ChatGPT gets all the news coverage, but it isn't even a law-specific tool; many law-specific tools exist, and some of them rely on a form of AI.

The dangers of using ChatGPT without proper checks and balances have become glaringly obvious (as already noted by my colleague David Bowles), but this sort of risk has existed for a long time. Relying on any digitised database can be an issue – search engines can pull up irrelevant cases based on keywords, broken links can lead to the wrong destination, or the database may simply not have been updated due to human error.

There has always been a need to ensure that, for example, you have read any cases you cite – and if you think judges, magistrates and tribunal members can't tell when you haven't read the case, you are kidding yourself. If you have ever watched someone try (as many have) to bluff their way through Jones v Dunkel, you'll know what I mean.

The significance of the orders of Judges Starr and Fuentes is that courts are acknowledging the use of generative and other types of AI, and are not moving to ban it – which doesn't mean that these orders are necessarily the best way to go. For example, there is every chance jurors could make assumptions based on such declarations, especially if only one side files one.


Could juries presume that the use of AI results in an inferior product, or signifies a legal team just mailing it in? And if a legal representative realises some weeks into a trial that they have not fully disclosed their AI use, are we in re-trial territory?

These are issues we need to sort out, and soon – because there can be little doubt that ChatGPT and other tools are already being used in our profession. The attorney in Mata v Avianca Inc, 22-cv-1461, was caught out by not reviewing the citations, but it is highly likely that more vigilant lawyers are already using ChatGPT as a force multiplier and simply doing a better job of proofreading.

Courts and lawyers need to get their heads together and work out the best way to navigate this brave new world, and getting the tech industry into the tent would no doubt improve the outcome. Given that the future is already here, there isn’t a moment to waste.

Footnote
1 As reported in Legaltech News, 1 June 2023, by Isha Marathe.
