AI Interrogator: Navigating the AI Ethical Frontier with Dr. Kate Devlin
November 22, 2023
Insights
- The need for responsible AI serves as a compass for navigating the complex landscape of technology and suggests a shift towards ethical considerations and a deeper understanding of the human impact beyond corporate interests.
- In debunking fears of an AI apocalypse, Dr. Devlin provides a reassuring perspective, highlighting the urgency of addressing real-world challenges such as hidden labor and data privacy, while navigating the transformative potential of AI in shaping a more responsible and interconnected future.
Kate Bevan: Welcome to this episode of the Infosys Knowledge Institute Podcast on all things AI, The AI Interrogator. I'm Kate Bevan, and my guest today is Dr. Kate Devlin, who's an academic at King's College London, and she's also on the UK's Responsible AI Program, so Kate really knows her stuff.
Kate, one of the things we've been talking about quite a lot, and with some emphasis, is responsible AI and how you do it responsibly, but we've been looking at that from kind of a corporate point of view, whereas your expertise is very much on the human impact. Thinking about responsible AI, how do you do it well in a business and in society when we're thinking about the humans?
Dr. Kate Devlin: It's quite interesting, isn't it, that everyone has a different take on what it means to be responsible, and it's something that we are trying to get at from the very start. So there's been quite an emphasis on, well, responsible, surely that means safety critical, or surely that means cybersecurity, but of course legal as well, and compliance. But yes, it goes much wider. So thinking more broadly of an ethical approach as well, that's part of it. But really, how do we make sure that from the outset we've considered the impact that these systems have? And that can be anything from things like the impact of the algorithm. Is there bias? Is it going to be disruptive? Through to the supply chain. Are there sustainability issues that are going to arise from it? To human rights as well. It covers so much.
Most of the AI that we do, and for this I generally mean machine learning these days, people don't often directly consider the human, but it's usually humans who are using it or who are affected by it. So we are trying to push for human-centered AI. It's about asking at the very outset, when you decide to design something, why is this being done and who is it affecting?
Kate Bevan: Well, one of the things you mentioned when we've chatted in the past was particularly about the downstream impact, and you mentioned just now supply chain. So it's not just within the business itself, it's the wider impact, isn't it?
Dr. Kate Devlin: That's right. One of the secrets of artificial intelligence is that it's not always artificial. There are a lot of people as well, a lot of human labor involved along the way, and that can include people with outsourced work in countries like Kenya, for example, where we know there is a lot of ghost work or hidden labor going on, people spending a lot of time either moderating content or perhaps labeling images for image segmentation for self-driving cars, for example. So we have to be really aware of that.
Kate Bevan: But it's being passed off as artificial intelligence if there's some poor soul in Kenya labeling data. I suppose that counts as supply chain, which I hadn't thought about.
Dr. Kate Devlin: Yeah, and it's not always the fact that it's being passed off as artificial intelligence, it's just that it's considered to be part of it that is inevitable, and therefore no one really needs to talk about it. There are cases where there've been people doing the work behind the scenes rather than an AI, and some companies have rather unscrupulously decided to omit that. But in terms of just the hidden labor that goes on, it's sort of widely accepted it happens, but it's very rarely discussed.
Kate Bevan: So how can we do that ethically? How can that be part of responsible AI?
Dr. Kate Devlin: There are a number of different approaches that you can take, and one of the methods is looking at auditing the supply chain or the different aspects of your AI. That's something that may well come into regulation in the future. It's certainly something that's being discussed, and that could include doing things like impact assessments to see exactly what components are there that need to be checked. So that's one. That might be to comply with internal standards, or it could be for external validation or external regulation.
Kate Bevan: I know you're working with the UK government on responsible AI, but where does the UK sit in relation to other countries? Are we ahead of the curve? Are we behind the curve? Is there a curve?
Dr. Kate Devlin: We don't actually know if there is a curve. I think everyone's a little bit on the back foot when it comes to things like generative AI, for example. We knew it was coming. We hadn't realized how quickly it would have an impact, because we weren't quite expecting OpenAI to release that publicly and then have the huge uptick that it did. We're seeing the effects of that already. It was not even a year ago that ChatGPT was released. So there's a lot of scrambling worldwide.
Kate Bevan: It feels like we've had ChatGPT forever.
Dr. Kate Devlin: It does. It really does. And yet it's not been that long. And so different countries have been trying this from different angles over time. The EU is probably closest to bringing in legislation through their EU AI Act, which has been developed with very much a risk-based approach, where they've said, "Here are some technologies that we think are really too risky to use, and we advise not using them at all." Things like predictive policing or autonomous weapons. And then they have a category of things that could be used, but only in special circumstances, things like facial recognition. And then they have everything else. So that kind of risk-based category approach is quite useful.
And then the UK and the US have gone for a more sector-based approach, where they're saying, "We should devolve this out because different sectors will have different needs." And then the US, of course, is quite pro-innovation in what they're doing: they want to make sure that they're driving business, they want to make sure they're working with the tech companies. But in terms of actual legislation, none of it's actually enacted yet. It's not there on the ground, but we do have a lot of existing laws that cover technology. It's just that AI raises a few new issues.
Kate Bevan: When I used the phrase ahead of the curve, I was thinking about more than just the curve of regulation. What I'm also thinking about is being out in front of the technology this time, because with a lot of transformative technologies, and I'm thinking particularly of social media, but more generally, we have been retrofitting regulation and how we manage these technologies. Are we getting better at that? Are we doing better with AI?
Dr. Kate Devlin: It's really difficult to say. I think we do have a lot of regulation that applies in these cases. It's just that there are these particularly new developments that aren't really covered. But are we responding as well? I don't know. I think because it's so new, this requires quite a lot of reeducating people on what's involved. It involves looking at things slightly differently. So it's about getting people up to speed on what the implications are. So it's a bit of a scramble in that sense.
Kate Bevan: Also, the other thing I wanted to ask you about is we seem to be thinking about white-collar jobs and how they're impacted by AI, but what about other jobs in society? We're talking about AI from a kind of lofty perspective. AI can involve robots. Are robots going to impact different parts of society?
Dr. Kate Devlin: Right. So up until now, we've been kind of okay with automating jobs when they're blue-collar jobs. We have factory lines where we've replaced humans with robots, and no one's really complained, apart from the Luddites many, many years ago who saw this coming. But now, of course, AI is replacing these more middle-class jobs, these more white-collar jobs. And then people start getting worried, and you think, "Well, where were you when others were losing their jobs?" And yes, robots are part of that. Autonomous systems are definitely able to do that, and the whole point of developing these autonomous systems was to replace people in jobs that were dull, dirty, and dangerous. That was kind of the aim.
And so we are seeing that, but we're also seeing AI coming now for the more creative jobs, which I don't think it will completely take over, even though there are a lot of these jobs that we are really worrying about. Yes, it will have an impact, but a lot of them require that human element of decision-making, of taste, which is very subjective. And so I think while it will have an impact, it's not going to be a complete takeover. So I think there are definitely risks from automation, but of course other jobs will emerge as well. We know that with any change in industry, new jobs that never existed before spring up too, but the scale of this is so much greater than anything that's gone before.
Kate Bevan: I think of AI as really genuinely transformative. So thinking about some of the innovative stuff that's come along before, the Metaverse and blockchain or what have you, those are kind of interesting and niche, but not transformative in their impact. This feels really transformative to me. I wonder about some of the things that are done now by humans. For example, the care sector. I remember a story, in Japan I think it was, where robots were doing that kind of work. Are we going to see that?
Dr. Kate Devlin: The care sector is an interesting one, because for years we have been hearing that robots will be able to do a lot of care, and of course they can't. It's not turning out as easily as expected. And yes, Japan had an entire robotics strategy based around this because they have a population that is aging and a shortage of care workers. They've moved away from trying to build robots specifically to do the caring and more into assistive technologies, things like exoskeletons for carers so they can maneuver and lift people without injuring themselves. Care is definitely something in the UK that requires much more resourcing and isn't getting it, but it's not easily replaced by robotics. I think AI could be useful in many of the administrative tasks for that, and that would be a really, really useful thing. If we can harness AI to deal with the administrative side, it would definitely make things a bit easier.
Kate Bevan: What kind of thing could we harness AI for then in care when you think about the admin tasks?
Dr. Kate Devlin: If you think about how generative AI in particular is currently going, I think we're going to see in the next few years that it becomes embedded into the everyday tools that we use. So you apply your generative AI model to your word processing software or your emails, and it's able to give you summaries of things, it's able to tell you who you should email and when, or what you've done before. If you can speed up the processing of administrative tasks, if you can speed up perhaps things like means testing or appointment systems, that's the kind of stuff that really needs to be done and takes up a lot of time, but we don't quite have all the systems in place for it. And one of the things in the UK that makes that difficult is that there are so many different health trusts and different jurisdictions for administering this that it becomes really difficult to join it all up.
Kate Bevan: Is there a way that AI can make us better humans, more connected, more empathetic?
Dr. Kate Devlin: I like to think there is, and a lot of my research has concentrated on the idea of companionship from AI, and I do see potential for it there. When you say things like this, people get quite upset at the idea that they think you're going to lose human companionship. I don't believe that that is the case. I think as humans, we reach out to each other and we see that as a kind of a gold standard, but there have been a lot of advances in the past few years of creating chatbots that act like companions and people get a lot out of that. And it's not just about replacing a human-human relationship, it's about this sort of new social category that's emerging. And for some people, that's a nice way of feeling not alone. And is it really that different from coming home and putting on the radio or putting on the TV because you want to hear other people's voices? So it's someone, some thing that could listen to you, respond to you, and just make you feel a little less lonely.
Kate Bevan: Like the parasocial relationships we have with radio DJs and people we sort of know on social media, we can have AI parasocial relationships. Isn't that a bit creepy?
Dr. Kate Devlin: I don't find it creepy. It's an interesting one. There's a suspension of disbelief that goes on. You're buying into the illusion, because you know these things don't really think for themselves, but you're prepared to overlook that in order to engage. And we've been doing that socially for years with technology. It goes right back to Eliza, the very rudimentary chatbot of the 1960s, which had no AI in it whatsoever and gave only very terse responses, and yet people were confiding all sorts of things in Eliza. When anything behaves socially, we're primed as humans to respond socially. We're social creatures. So this is quite advantageous in many ways. And of course, we already do this. We've seen the speed at which voice assistants became popular, and that's not to say it's without its problems. For me, those aren't the ethical or moral problems about conversing with chatbots. For me, those are problems around data and security and privacy, because when you talk to these things, where's your data going?
Kate Bevan: Yeah, I don't think it comes as a surprise that some of the companies really going all in on this are the ones that we already think of as massive data hoovers, data suckers, anyway. Okay, I'm going to come to my final question now. Is the AI going to kill us all?
Dr. Kate Devlin: The AI is absolutely not going to kill us all. I may regret saying that if the AI then goes on to kill us all, but I'm going to take a chance. Everyone I've spoken to who works in AI research, machine learning research or cognitive science, cognitive systems, none of them think that there is an existential threat from some kind of sentient artificial intelligence. It's just so infeasible. The chance of that happening is so small as to be non-existent. Most people working on those systems will say that is not possible, but that's not to say that there aren't other problems.
Kate Bevan: We do see an awful lot of tech luminaries stroking their chins and saying, "Oh, there's an existential threat." So what's going on there? Why are they doing that?
Dr. Kate Devlin: Well, a cynic might think that it's a bit of hype. Certainly if you are one of the lead companies in the field and you say, "Oh, we need a moratorium now and we need to stop developing AI," it's very coincidental, isn't it, that you're the lead company saying that. I'm slightly skeptical about how this is being played for competitive advantage, but I do genuinely think there are issues with AI that we should be addressing. But these are all very tangible issues on the ground right now, not some future scenario that is really rooted in sci-fi.
Kate Bevan: I think that's quite reassuring. Thank you, Kate. That's fantastic.
Dr. Kate Devlin: Thank you.
Kate Bevan: That was part of the Infosys Knowledge Institute's AI podcast series, The AI Interrogator. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/IKI. The podcast was produced by Yulia De Bari, Catherine Burdette, and Christine Calhoun with Dode Bigley as our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Dr. Kate Devlin
Dr. Kate Devlin is a Reader in Artificial Intelligence & Society in the Department of Digital Humanities, King's College London. She is an interdisciplinary computer scientist investigating how people interact with and react to technologies, both past and future. Kate is the author of Turned On: Science, Sex and Robots (Bloomsbury, 2018), which examines the ethical and social implications of technology and intimacy. She is Creative and Outreach lead for the UKRI Responsible Artificial Intelligence UK program, an international research and innovation ecosystem for responsible AI.
Connect with Dr. Kate Devlin
- On LinkedIn
To Learn More
- The Infosys Knowledge Institute
- “The AI Revolution Reaches Healthcare Administration” Newsweek, September 18, 2023
- Turned On: Science, Sex and Robots, Kate Devlin