Computer says no

“Open the pod bay doors, HAL.
I’m sorry, Dave. I’m afraid I can’t do that.”

– 2001: A Space Odyssey

We all know the feeling — you ring customer service, press a few buttons and get put on hold for what seems like an eternity.

Music that appears to have been recorded with the specific intention of driving people insane is occasionally interrupted by an assurance that your call is important, as your will to live slowly ebbs from your soul. Still, at least it is the same for everybody, right?

Maybe not.

Some call centres have software that uses machine learning to ‘value’ particular customers, and re-order the phone queue on that basis — so if you are scored as being of low value by the artificial intelligence (AI), your wait is going to be longer. The same thing can happen when you browse websites — the ads you see are based on your browsing history, or the movies recommended by a website are based on what you have watched in the past.
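
To make the mechanism concrete, here is a minimal sketch in Python. The scoring model, caller names and numbers are invented for illustration; a real system would feed a trained model with account and behavioural data, but the queue-jumping logic amounts to little more than a sort.

```python
from dataclasses import dataclass

@dataclass
class Caller:
    name: str
    predicted_value: float   # hypothetical output of an ML model, e.g. expected future spend
    minutes_waiting: float

# A hypothetical queue, in order of arrival.
queue = [
    Caller("A", predicted_value=120.0, minutes_waiting=14.0),
    Caller("B", predicted_value=4800.0, minutes_waiting=2.0),
    Caller("C", predicted_value=650.0, minutes_waiting=9.0),
]

# Re-order the queue so the 'high value' caller jumps ahead,
# regardless of how long the others have already been waiting.
queue.sort(key=lambda c: c.predicted_value, reverse=True)

for caller in queue:
    print(f"{caller.name}: {caller.minutes_waiting:.0f} minutes on hold, predicted value {caller.predicted_value:.0f}")
```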

In the above examples the consequences probably aren’t especially dire, unless the on-hold music drives you to criminality, although when search engines tailor results to what you have liked previously, things can get a bit more concerning. If you are trying to assess the validity of a Donald Trump tweet, and the search engine keeps coughing up other Trump tweets or articles based solely on them, you may ascribe a level of truth to the tweets that does not reflect reality. Thus can the echo chambers that help elect presidents be made.

Of course, it is hard to know just what effect that had on the US presidential election, but the insidious effects of machine learning are easier to observe elsewhere, and they can be diabolical. In many jurisdictions in the United States, and now in the United Kingdom, AI is used to assess the likelihood of a defendant re-offending while on bail or if paroled. Despite many studies showing that such systems develop racial and other biases, cash-strapped jurisdictions are considering introducing similar models.

Worse, it is hard for people rated ‘high risk’ by such software to challenge the finding, as the code is usually protected by intellectual property laws and owners of the software are reluctant to offer it up for examination. This results in a situation in which a person is effectively unable to confront their accuser or unearth flaws in the data.

In the United States, many of the difficulties surrounding this issue involve the heavy reliance placed on the residential postcodes of applicants for bail or parole. Applicants from poor or crime-ridden areas are assessed based on the general lawlessness of that area, not on their personal likelihood of re-offending.

This is a flawed analysis in that it applies a population average to an individual – much the same as the body mass index (BMI) is not a good indicator of individual health. The BMI is a crude calculation: a person’s weight (in kilograms) divided by the square of their height (in metres). Used to estimate the change in weight of a population over a period of time, it is a useful tool; applied to individuals, it regards both Fat Albert and Arnold Schwarzenegger as life-threateningly obese.
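
To see the arithmetic, a short Python sketch (the people and numbers are invented for illustration) shows how a formula that tracks a population sensibly can mislabel an individual:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

# Hypothetical individuals (illustrative numbers only).
print(f"Heavily muscled athlete:  {bmi(107, 1.88):.1f}")   # ~30.3 -> 'obese' by the standard cut-off
print(f"Sedentary office worker:  {bmi(70, 1.75):.1f}")    # ~22.9 -> 'normal'

# The usual bands (under 18.5 underweight, 25+ overweight, 30+ obese) describe
# populations tolerably well, but the formula cannot tell whether an individual's
# 107 kg is muscle or fat; that is exactly the population-average trap described above.
```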

What is worse, however, is that many AI systems use something called ‘deep learning’ and are known as ‘black boxes’ — because nobody, not even their creators, knows how the AI reaches the decisions it comes up with, nor what factors it is weighing to make them.

For example, a medical diagnosis AI developed at Vanderbilt University, Tennessee, performed excellently at diagnosing colon cancer in patients — but it did so by noting which of them had attended a particular clinic, not by reading clues in each patient’s medical data. Although its creators eventually realised what it was doing, they had no idea why or how. If courts employed AI to evaluate guilt, cross-examination of the people who built it would largely be met with a shrug of the shoulders; little comfort can be taken from the words, “trust us, you’re guilty.”
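
How a model can score well for the wrong reason is easy to reproduce. The sketch below uses entirely synthetic data and an invented ‘went to a specialist clinic’ flag (it is not the Vanderbilt system, and plain logistic regression stands in for whatever that system actually used): the model posts an impressive accuracy while putting almost all of its weight on the proxy feature rather than the medical signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic 'diagnosis' labels and a weakly informative medical measurement.
diagnosis = (rng.random(n) < 0.3).astype(int)
biomarker = 0.3 * diagnosis + rng.normal(size=n)

# Confounding flag: attendance at a specialist clinic, which in this toy
# dataset agrees with the diagnosis 95% of the time.
went_to_clinic = np.where(rng.random(n) < 0.95, diagnosis, 1 - diagnosis)

X = np.column_stack([biomarker, went_to_clinic])
model = LogisticRegression().fit(X, diagnosis)

print("accuracy:", model.score(X, diagnosis))          # looks impressive (~0.95)
print("weights (biomarker, clinic):", model.coef_[0])  # nearly all weight on the clinic flag
```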

In short, allowing AI to assess the likelihood of a person’s re-offending (or worse, their guilt or innocence) will probably result in poor people staying in jail and rich people walking — but the problems don’t end there.

“The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.”

– Terminator 2: Judgment Day, 1991

In 1982, a modified Coke machine at Carnegie Mellon University became the first Internet-connected appliance, able to report its inventory and whether newly loaded drinks were cold. This has evolved into the ‘Internet of Things’ (or IoT) — a vast network of computing devices embedded in everyday objects, enabling them to send, receive and collect data. It is estimated that the IoT currently consists of some 20.4 billion connected devices.

These devices collect and transmit huge volumes of data on just about everything. They have enormous potential to benefit society, and they carry enormous risks; ever-present internet-connected devices represent ever-present threats to privacy and data security. The overriding issue, however, is control — in that nobody has it.

The IoT has grown to the point where no government or organisation has control over all of it. Competing and incompatible security and privacy protocols interact at every level, and if decision-making AI has access to the IoT, no court can be certain of the data it is drawing on. If the AI has learned biases or has drawn on irrelevant or inadmissible information, its decisions will be deficient, and it is unlikely that its operators could detect this in real time.

Then, of course, AI can be hacked.

“Joshua: Shall we play a game?
David: Love to. How about Global Thermonuclear War?
Joshua: Wouldn’t you prefer a nice game of chess?
David: Later. Let’s play Global Thermonuclear War.
Joshua: Fine.”

– WarGames, 1983

The 1983 movie WarGames introduced the world to modems, hacking and Matthew Broderick, and it’s fair to say that the world hasn’t gotten sick of any of them yet; Broderick has proved to be easily the least destructive. The plot involves a young hacker who almost starts World War 3 by accident, thinking he is only playing a game; fiction when the movie was released, the only implausible part of the plot in 2020 is that the start of a war might be unintentional.

Once a pastime for teenage rebels who outwitted large corporations for thrills and status, hacking is now a lucrative tool for organised criminals. It is naïve in the extreme to think that criminal networks would not use those same hacking skills to ensure that their associates on bail or up for parole are rated low-risk, or to keep competitors behind bars. It is impossible to hack the mind of a magistrate or judge, and on the rare occasion that they make a mistake it is at least subject to appeal; of course, computers never make mistakes, do they?

“What do you get if you multiply 6×9?
42.”

– Deep Thought, The Hitchhiker’s Guide to the Galaxy

Computers regularly get things wrong, as we all know, and also crash from time to time. If this happens when you are trying to stream a movie, it’s annoying; if it happens just before you press ‘save’ on your assignment, it’s devastating; but if it happens when you are awaiting a parole assessment, it could mean staying in prison while the glitch is fixed. Given the violence and deprivation generally experienced behind bars, that’s a little more serious than having to ask your lecturer for an extension.

The difference between AI and a real-life magistrate or judge is that when a judge makes a mistake, they are generally aware of it; they understand what it is to be wrong. That is very difficult to teach to software, no matter how intelligent — especially when we don’t know why the AI made its decision, how it made it, or on what data it was based. Allowing it to determine parole and bail applications is a step too far, and would likely mean that simply being from the wrong side of the tracks will keep you on the wrong side of the cell bars.

Facebook shut down an AI learning experiment because the machines started to speak to one another in their own language, locking their creators out of the conversation; now why does that sound so familiar?

“John Connor: By the time Skynet became self-aware it had spread into millions of computer servers across the planet. Ordinary computers in office buildings, dorm rooms, everywhere. It was software, in cyberspace. There was no system core; it could not be shut down…”

– Terminator 3: Rise of the Machines

This article was first published on the Queensland Law Society Law Talk blog on Medium in August 2017.
