
Still on a learning curve

Edward Santow is the latest guest on The Callover podcast. Image: Geoff McLeod

Edward Santow is a leading voice in human rights and technology, currently serving as Director of Policy and Governance at the Human Technology Institute and Industry Professor of Responsible Technology at the University of Technology Sydney. Before that, he served as Australia’s Human Rights Commissioner from 2016 to 2021.

In this month’s podcast, The Callover explores the intersection of artificial intelligence with the law, human rights and ethics, the challenges AI presents and the opportunities it offers for the future of law.

So when did you first become interested in AI?

“Well, it was a couple of jobs ago. I used to run an organisation called the Public Interest Advocacy Centre. And we noticed these issues and we had no idea that there was any connection to artificial intelligence at all. We just ran this service where young people who had bad interactions with the police could come to us for free legal advice and representation.

“And then we noticed there were some things that linked them all together. The first thing was that every single one of those young people who were coming to us had dark skin. Every single one of them.

“Some of them were as young as 12 years old … they hadn’t committed serious offences or even been accused of serious offences for the most part. And so over time, the police disclosed that they had this list, and the list was known as a Suspect Target Management Plan.


“And that was all the information we could get at, until this particular moment in parliament, under questioning, where the minister at the time admitted that what they had was an algorithm or AI-based system that was identifying young people from the criminal justice database who might go on to lead a life of crime. And the theory was that if the police went and checked on them over and over again, then it would set them on the right path.

“Now, that was problematic on every front. The most obvious front is that it doesn’t work – you know, if you are constantly being harassed by the police, that’s not exactly going to make you trust them. But maybe the more fundamental problem, and really the link back to artificial intelligence, was that this was a really early example, here in Australia more than a decade ago now, of an AI system going off the rails.”

What are the biggest challenges we face when it comes to AI and human rights?

“Technology generally can be used for good.

“Think about very old technology, like a knife, you know, there’s nothing more innocuous than slicing off a piece of bread, something that humans have literally been doing for millennia.

“But of course, the knife can also be used to stab someone. And so that idea applies to AI. There’s a $3 term for this – the term is ‘dual use’. In other words, the same piece of technology can do wonderful things and terrible things. That idea isn’t limited to AI, but it’s particularly significant when it comes to AI because AI is so, so powerful.


“I think what we heard from the community when we did this big consultation a couple of years ago was really two or three things that came up again and again.

“The first was people said, ‘I’ve just realised my personal information can be used against me’. We heard that form of words, or a very similar form of words, a lot. And I was really interested by that, because the way AI can be most helpful and most effective is by being trained on us. And so, you know, the fuel of AI is personal information.

“It learns from us as humans. And so this idea that it can sometimes be used in ways that cause us harm is really at the heart of the whole idea of a right to privacy.”

So perhaps we can look at what are the greatest advantages or, I suppose, opportunities for using AI to create positive change.

“I’ve been practising as a lawyer for my career mostly in human rights, and I’ve done a lot of work with people with disability, and particularly with people who are blind or have a vision impairment. And it’s kind of amazing, some of the AI-powered technology that has been developed, particularly for people who are blind.

“So there’s a bunch of apps now that you can get on your smartphone. Perhaps the best known one is called Seeing AI. And what it allows you to do is hold up your smartphone and it will almost literally tell you about what’s in the world around you, and identify the people that you have stored in your phone – so, that’s your mum, that’s your friend Jackie, or whatever.


“But it’ll also allow you to do things that people like me, who are not blind, take for granted, like going to the supermarket. I mean, that is life changing, right? Super, super exciting to me as a human rights lawyer. And people who are blind or have a vision impairment often report back and say that it really does change their life in a really profound way. So that’s really cool and it’s, I guess, an example of the way in which well-designed AI can make our community more inclusive.

“But there are also ways in which, you know, AI democratises information for everyone, regardless of whether you have a disability. The sad truth, of course, though, is that the vast majority of people who have legal questions have no one to ask. They can’t afford a lawyer. Community legal centres, legal aid and so on are just not able to help the number of people who need help. And so the idea that you can get some information that you would not otherwise have access to via free AI applications that are just in the world, that’s pretty amazing.

“It comes with an asterisk. There are problems sometimes – hallucinations, all of that sort of thing. I readily acknowledge that. But fundamentally it is, I think, true that there is a democratisation of information happening. That’s good.”

What are the most transformative uses or applications you’ve seen in action so far in helping the law?

“I think we’re still early days. Look, some regulators are starting to use not so much generative AI, but other forms of analytical AI to identify patterns of misbehaviour. So what are the sorts of red flags, for example, that might indicate that a lawyer is misusing their clients’ trust money, the money that they hold on trust for their clients?

“So that’s quite exciting to me. Another way in which I think AI is being used in a transformative way is to identify things for further research. And I know how unsexy that sounds … but it’s what a police officer might call lead generation. So it gives you a ‘have you thought of this?’ kind of response.


“And that is something that I think genuinely augments a human lawyer’s capability rather than tries to displace that capability. And even if we thought displacing that capability was a good thing, the technology ain’t ready for prime time yet. It’s super experimental. So we shouldn’t be using it for that purpose.”

And so what do you think the next big leap will be in AI? And where do you see us going in the next five, ten, 20 years?

“I’d make a couple of observations. The first is, you know, the concern that people have with generative AI. Maybe the core concern is its accuracy, right?

“If it has fairly high rates of inaccuracy in certain areas, that’s going to be a problem, particularly for lawyers and other knowledge professionals where accuracy is prized. And the way that companies are trying to solve that problem is really interesting. They’re trying to just feed it more and more training data, right.

“And maybe that’s not a surprise, because whenever you hoover up more information, you’re hoovering up truth, you’re hoovering up lies, you’re bringing in a whole bunch of irrelevant information and so on. So maybe it’s just going to hit some kind of limit. And so when I’m looking to the future, I guess this is a good thing on balance, but not a wholly good thing.

“So that’s a bit mixed, right? I think the thing that I’m most excited about is the rise of what are known as socio-technical systems. Again, that sounds like a bit of jargon, but it’s … a system that is designed really, really well and plays to the respective strengths of the human and the machine.


“At the moment, you know, in these very early days of artificial intelligence, we’re not very good at that. We often give the machine too much power, or the wrong kind of power, or not enough power to do something really basic, whereas a socio-technical system is really carefully designed, as I say, to play to the respective strengths of the human and the machine.

“And that’s good because I want to be able to do the things that I’m really good at and not do the things that I’m pretty hopeless at.”

What do you think are the key things we need to keep in mind when bringing AI into our practice?

“There’s a little handful of issues. The first sounds really obvious, and it is: accuracy, right? These machines will lull us into a false sense of security because they’re accurate, accurate, accurate, and then suddenly wildly inaccurate. And it lulls us into that false sense of security because, if someone is just constantly giving you inaccurate information, then you kind of know to look at them a bit cockeyed.

“Point two is they are optimised to be fluent rather than accurate, and that’s really, really important. Let me explain what I mean by that. I think the thing that is most impressive about the leading generative AI applications is that they write so well. They are able to write really clearly, but also at the kind of level of sophistication of, you know, good writers, right?

“People who have been to university, that sort of thing. So that’s great. But it has a sting in the tail. And the sting in the tail is that if you are optimising for fluency, you’re not optimising for accuracy.

“The third point is one that you brought up, which is really about information integrity. So you need to first confront something which is quite awkward, which is that these systems, these applications, have been built by hoovering up a whole heap of information, some of which is legally protected.


“So it’s an open question. We genuinely don’t know yet whether these applications are themselves unlawful. So when you hoover up so much information from the open web, you are definitely hoovering up copyright protected information.

“And so there’s an open question whether those applications are even lawful. But then the fourth point is, it’s one thing to say that somebody else has broken the law. But what about the information that you as a lawyer put into the machine – the prompts that you use? As I said before, you can ask a very general question, like, you know, what are the elements of the tort of negligence, and it’ll give you a generic answer, right?

“And fine. But if what you are actually trying to do is apply that test and its elements to a specific fact scenario, the more information you give it that is particularly relevant to your client, the more specific will be the response from the AI application. And so that puts you at greater and greater risk of putting legally privileged client information into the application.

“And that’s a pretty worrying point. And the final thing that I would mention is this problem of oversight and responsibility. So, particularly as a junior lawyer, sometimes you don’t know what you don’t know. And so you might do some legal research to be able to provide an advice.

“Now, if a more experienced lawyer puts a query into, you know, a generative AI application – essentially a query about the law – generally they would have, I guess, better-tuned antennae about whether the output is right or not.

“So it’s less useful for them in one sense, because they could just do the work themselves – they’re just too busy, right. But they’re also better prepared, through the previous work that they’ve done, to be able to call out when it’s gone wrong or made an error. And so that oversight problem is actually really difficult. There’s no easy solution to that.”


Do you think AI will affect lawyer-client relationships, particularly concerning trust and the delivery of personalised legal services? And if so, how do we mitigate against that?

“I think we’re just starting to see that come to the fore now. So I think just as doctors talk about their patients going to ‘Doctor Google’ before they come to see their doctor for an appointment, and then they may say, ‘look, I’m pretty sure this is my diagnosis’.

“I think we’re certainly hearing from lawyers that clients are doing a bit of that themselves as well. And there’s nothing inherently wrong with that. But navigating a process where the client may have different levels of trust in the lawyer and in the machine, that’s actually quite difficult. And some clients will have, you know, naturally more trust in the lawyer than in the machine, or vice versa.

“It’s quite a challenge, I think, to navigate. And I think we’re just still kind of finding our feet with that.”

What is one piece of advice you would give to your younger self as he commenced his legal career?

“I remember looking around at all of the others – there were 13 other judges’ associates in my group – and they all seemed so much more knowledgeable than me.


“I wasn’t wrong about that, but maybe they weren’t quite as far ahead as I thought. I remember constantly having this fear that I was just a moment away from being shown to be the imposter that I am. And the thing that, when I look back at that time, I didn’t appreciate as much as I should have is that you have a bit more latitude as a young lawyer when you’re doing something for the first time.

“And there’s a bit of power in holding that space and saying to your boss or your colleague or whatever, you know what, I’m doing this for the first time, I might make mistakes, is that okay? And they will always, always, always say that’s okay. But acknowledging that – not feeling like you are always a slave to a perfect right answer – I think that’s quite good.

“And I think it means that the learning curve that we all go on, as we find our way in the law, becomes less traumatic and more enjoyable.”

Listen to this episode of The Callover.
