
AI fabrications sink case

[Image: a computer rendering the likeness of a real person.]

A recent US decision is a timely warning about the risks arising from the increased use of deepfakes and AI-generated evidence.

The plaintiffs in the case[1] alleged that they were being harassed by a neighbour in the apartment complex in which they lived and that the complex's property managers had failed to act on their complaint.

In support of their claim, the plaintiffs submitted screenshots of group chats, voicemails, security camera videos and videos of other neighbours verifying the harassment.

The defendants’ lawyers became suspicious of the material because of the strange movements and unusual speech patterns of the subject of one of the videos. They were also having trouble obtaining depositions from the plaintiffs’ witnesses.

The defendants’ lawyers hired an expert to assess the various exhibits; the expert concluded that they were indeed all deepfakes. When the plaintiffs pushed for summary judgment, the defendants’ lawyers raised their concerns about the exhibits.

The court shared those concerns and ultimately issued a ‘terminating sanction’ – effectively dismissing the complaint and prohibiting the plaintiffs from bringing it again.


The plaintiffs were representing themselves, so no disciplinary action was taken, although the defendants are seeking costs.

The implications for lawyers everywhere are clear: deepfakes are in widespread use, and lawyers need to be aware of this and take steps to verify evidence when necessary.

The material in this case was less than subtle, though unearthing the deepfakes required a fairly high level of technological know-how. For example, the plaintiffs sought to shift blame for one of the videos to the alleged subject – saying that she had provided the fake video to the plaintiffs. However, examination of the metadata showed that the operating system of the phone in question could not do what the plaintiffs claimed had been done.
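The court record does not describe the tools the defence expert used, but as an illustration only, a first pass at this kind of provenance check can be as simple as reading a file’s container metadata and comparing it with the claimed source device. The minimal Python sketch below uses ffprobe (part of FFmpeg) to dump a video’s format tags; the file path is a placeholder, and metadata alone settles nothing either way, since tags are easy to strip or forge. In a genuine dispute, this is no substitute for a forensic expert.

```python
# A first-pass metadata check on a video file using ffprobe (ships with FFmpeg).
# A clean result proves nothing, but inconsistencies -- e.g. an encoder tag or
# creation date that the claimed source device could not have produced -- are
# exactly the kind of red flag that warrants a full forensic examination.
import json
import subprocess
import sys


def probe_metadata(path: str) -> dict:
    """Return ffprobe's view of the file's format and stream metadata."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


if __name__ == "__main__":
    info = probe_metadata(sys.argv[1])  # e.g. python probe.py exhibit_video.mp4
    # Container-level tags worth comparing against the claimed provenance.
    for key, value in info.get("format", {}).get("tags", {}).items():
        print(f"{key}: {value}")
```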

As deepfakes get more sophisticated, detecting fake evidence will get harder. Practitioners need to be aware of this and to look for red flags in evidence presented by opponents and their own witnesses. Videos, recordings and screenshots need to be carefully assessed, especially if they fit into the ‘too good to be true’ category.

A video that seems to be the perfect piece of evidence may need to be checked by an expert; practitioners who suspect the use of deepfake technology should seek advice from specialists in this field.

Finally, clients and witnesses should be asked whether they have used AI in the production of statements or reports, and warned about the consequences of altering or creating evidence.


Lawyers are responsible for the evidence they present on behalf of a client; the fact that the lawyers were themselves deceived may not be accepted as an excuse by the courts or the regulators.

Both the plaintiffs’ complaint and the terminating sanction in this case are publicly available, as are tips on identifying AI-generated content.

  1. Ariel Mendones, et al. v. Cushman and Wakefield, Inc., et al., case number 23CV028772, in the Superior Court of California, County

