Knowledge Institute Podcasts
AI Interrogator: The Power of Teens' Voices in AI with TeenTech's Maggie Philbin
January 22, 2024
Insights
- Highlighting the significance of open conversations surrounding AI ethics, the discussion underscores the need to include diverse stakeholders beyond the tech elite. This approach ensures a more comprehensive and equitable perspective when addressing ethical considerations related to artificial intelligence.
- AI can bridge gaps, offering a level playing field for students with diverse abilities and learning styles, reinforcing the idea that AI has the power to enhance educational inclusivity.
Kate Bevan: Hello, and welcome to this episode of the AI Interrogator. I'm Kate Bevan, and my guest today is Maggie Philbin. Maggie needs no introduction to any British listeners, who will know her from her time presenting the iconic BBC tech program Tomorrow's World, and also an equally iconic children's program, Multi-coloured Swap Shop. But these days she is also the chief executive of TeenTech, a charity that works to connect young people with the world of technology with a view to getting them into careers there. Maggie, thank you very much for joining us. I'm really delighted to have you.
Maggie Philbin: I am honored, Kate, thank you for inviting me.
Kate Bevan: Nobody's ever said they're honored before, so that's amazing. You've been thinking a lot about AI recently, particularly as regards the ethics of it, and talking about it with young people, and also the people who work with young people. Coming to that, who should the stakeholders be when we're talking about AI?
Maggie Philbin: We have to be very, very, very inclusive. AI is set to affect all of our lives, everyone's lives, in ways which, at the moment, we can't completely imagine. We've got some ideas about the way it might go. And I think it can be very dangerous if you only discuss the ethics underpinning AI with a small group of vested-interest stakeholders. You are going to get a very different set of answers and considerations than if you widen it out. We've been doing some specific work with young people because I felt that young people are often taken into consideration when it comes to any kind of innovation, but they're very rarely at the table to give their opinion on it. Obviously, AI has been around for a very long time, but it's suddenly in the hands of everybody, or everybody feels that it is.
Kate Bevan: So, I suppose, thinking about the future, this is going to impact not so much us and how we use AI, because we're old ladies, aren't we? But the young people you work with, it's really going to impact them going forward.
Maggie Philbin: Young people use social media, for instance, a lot. And we've already seen, in the analog/digital era, the harm that can come from someone posting something when they're 12 years old and it coming back to haunt them. And that is obviously a very narrow way that someone's social media profile could affect their future, in ways that I don't know about at the moment, and you don't know about, but that might be a real deciding factor in job applications. Not just in, "Oh, goodness me, look at the awful things they said," but through algorithms trained to search for certain character traits, supposedly reading people. And the key thing is fairness: make it fair and equitable and not based on algorithms that prejudice the outcome for certain groups of people.
Kate Bevan: So how can we improve that fairness? How can we improve inclusivity when we're talking about building and regulating AI? And by that, I mean not just the tech companies thinking slightly outside their own box, but what's the role for charities like TeenTech or for other NGOs?
Maggie Philbin: Well, I think the starting point is to have those discussions with the stakeholders, because people do come up with ideas that you haven't thought of yourself. For instance, the project that we're doing with young people at the moment is focused on innovation and asking them to consider if they were to have a charter for ethical innovation, what would they have in it?
There are certain things surfacing at the moment that you might expect, such as that a development shouldn't do harm to the planet, and preferably should make things better. So those are the kind of predictable things. And then other things surface, one of which was euthanasia. And I thought, I wonder what this child means by that. And after a bit of discussion, it was: well, if AI is being used in the development of drugs and in healthcare decisions, and there is only so much money to produce a given drug, we might decide to give it to people up to the age of 60 because they've got more years in them, and not give it to older people. And in a sense that is a sort of euthanasia, giving people poorer health outcomes. It was quite interesting talking to this child about how that manipulation could actually happen. So, you get slightly unexpected ideas cropping up. It's just so good to do as much listening as possible and then take it on board.
Kate Bevan: Do you think these voices outside the tech industry and the policy world are being listened to enough when we are thinking about what we should do with AI?
Maggie Philbin: If you are not careful, you produce a system which works well for an elite group of people, because that elite group of people have an understanding of how the system is working and how it's benefiting them. And then, if that group are kind, they will go and explain it to another group. But the thing is, it's been designed for that elite group to really use and have a handle on. It hasn't been designed for someone who's 84, or a young person in Oldham with no access to the latest smart devices, or whoever. And I think it's a reality check when you go and talk to people about any aspect of technology to see where they're coming from.
And also, it's useful because you can understand people's fears, and there are a lot of fears around AI. I mean, one of the things, and you might think, oh, well, we would expect teenagers to say that, but it's, will robots take over the world? And we're asked that in all seriousness. Or, are robots dangerous? And the answer to that question is, well, yes, they can be, and it's very important that they're programmed properly. We recently saw the example of the poor man who was maintaining a robot on a conveyor belt that was packing up salad or vegetables or something, and he lost his life because the robot thought he was a box of salad. So yes, they can be dangerous, but it's all in the programming.
Kate Bevan: So, what worries you about that?
Maggie Philbin: This technology is developing all the time. Rather like when mobile phones first emerged in the 1980s, the mobile phone we have today will bear no resemblance to the one we will all have in five or 10 years' time. It's all going to move on. But there are certain things that it's really useful to know because this is how you might want to use the technology. There are certain things that it's really useful to know because you might really want to understand how other people are using the technology and how it is affecting you. And there are certain things which are really good to know because these are the boundaries that we need to put in place, in terms of the ethics and in terms of whether we are looking at the way people's work can be plagiarized; always just understanding what those boundaries might actually be. So, I think those three things are important for everyone to have a grasp of.
Kate Bevan: So, when you talk to young people, as you do a lot to the TeenTech events, what do you hear from them? What are they saying about AI and about technology and how the two interact?
Maggie Philbin: We have one question that we ask at all of the events at the moment. We use voting buttons, so we get hundreds of responses at once around how they feel about what they're learning about AI in the press, and whether it leaves them feeling positive towards it, neutral, or negative. And the vast majority are either very positive, positive, or neutral, and there are some who feel negative, but it's moving in the direction of feeling positive, which I think is quite interesting. So, they are open to the ideas. They're very interested. We've taken groups of students to Microsoft and to a children's hospital called Alder Hey in Liverpool, for them to see how these technologies can be used to make a big difference in healthcare.
I know that the feeling from the students was that they were really inspired by that, and it helped them think of all sorts of ideas and applications themselves. So, they find it really intriguing and interesting, and it's really helpful to give them concrete examples so they can see how it can make a difference and see it in action. And also, it's a great way into subjects like computer science, because it brings it to life for them. They think, actually, this is an area that is really interesting, and I can see how this could make a difference to the world. They can see how they can apply these things and use them.
Kate Bevan: So, presumably the young people are actually quite familiar with the new generative AI tools, things like ChatGPT, Midjourney, Copilot for writing code. Do you think that familiarity is giving them something to be enthusiastic about or is it making them aware of the problems with it?
Maggie Philbin: I go into a lot of schools, and you see so many young people who have got all the potential in the world. They might not be able to express themselves as eloquently because they haven't had those opportunities. And some young people, whether they are dyslexic or they just don't write so well, are always going to be penalized in our formal exam system. So, if you've got technology that helps them, because their thinking is fine, their ideas are good, that could become a reinforcing and very helpful tool in the classroom, because they would be able to express themselves in a very different way. And obviously it's a parallel thing. So, we have a little project we do with younger students, a good entry point into science and tech, called City of Tomorrow, where the students develop their own ideas for a safer, smarter city.
Now, we could do that project so they were using sophisticated CAD systems, or actually coding and making things happen, but we don't do that with that particular project because we want it to be fully inclusive. We don't want the fact that you can't code, or that you don't have access to smart architectural modeling systems, to get in the way of you having ideas. So, they build their ideas out of recyclable materials, and they can explain their ideas to judges. And initially, I'm sure, judges are slightly bemused when they walk into a room which is a sea of cardboard boxes and drink bottles and whatever lampshades have been used to construct all these buildings, but when they sit and listen to the kids, the kids have got great ideas.
Now, that is a way of creating a level playing field for all of those students. So, you've got 3,500 students with brilliant ideas, and they can all share those ideas. You are not looking at a shiny project here and a less shiny one there. And in theory, if the Department for Education here decides to embrace these technologies across the whole curriculum, rather than keeping them only in computer science, there's some potential here for engaging young people who might not otherwise have been engaged in formal learning, because they have a chance to express themselves and produce pieces of work they can be proud of.
Kate Bevan: That sounds quite optimistic, actually. And I'm now going to finish off with a question that sounds very un-optimistic and rather doomy, but it's a question I ask everybody. Do you think AI is going to kill us all?
Maggie Philbin: I think there's a lot of confusion around it. I think the terminology gets very blurred. So, people who actually should know what they're talking about talk about AI, and maybe they mean machine learning. It is important, I think, for everybody to have an understanding of what this technology might mean for them, so they aren't too afraid of it, because at the moment, what you tend to see is quite binary. Either this is going to be the thing that rescues humanity, that sorts out climate change, et cetera, or we are all doomed and a very dystopian picture emerges. And life isn't like that, is it?
And also, this technology is something that, in an ideal world, works alongside human beings and supports what we want to do. It's human beings behind it, isn't it? And that's the bit I don't feel particularly optimistic about. And that, I think, is why I'm so keen for everybody to have a say and a stake and a level of understanding. We absolutely don't want a situation where we hear people on Radio 4, as we used to, going, "Oh, I'm hopeless at maths. I'm no good at tech." We don't want to hear that. We want to encourage people to find out about it, so that you can challenge things and make decisions based on knowledge and not on fear, or optimism, for that matter.
Kate Bevan: Maggie Philbin, thank you very much for joining us today.
Maggie Philbin: It's lovely to be with you.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/IKI. The podcast was produced by Yulia De Bari, Catherine Burdette, and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Maggie Philbin
Maggie Philbin is well known for her work in radio and television where for over 30 years she has reported on a wide range of science, medical and technology programmes from Tomorrow’s World to Bang Goes the Theory. In 2008 she co-founded TeenTech, an award-winning charity which works across the UK getting young people innovating, creating, building skills, and preparing for a fast-changing future. TeenTech helps students, parents and teachers understand the real opportunities in contemporary industry.
She received the WISE Award for Communication and Outreach for her work promoting diversity in 2013. In June 2016 she was voted most influential woman in UK IT by Computer Weekly and also named 2016 Digital Leader of the Year. She was awarded an OBE in Jan 2017 for her work to promote careers in STEM and the Creative Industries. In July 2017 she received the Tech4Good Special Award and in 2019 TeenTech was awarded the Best Employer and School Outreach.
She is patron of the Council for Professors and Heads of Computing and has been awarded ten honorary degrees and fellowships for her work.
About the TeenTech Charity
TeenTech helps young people understand opportunities in contemporary industry, no matter what their gender or social background. Their engaging, sharply focused initiatives reach over 14,000 students aged 8-19 every year and are carefully planned to involve teachers and parents as they are the main influencers in career decisions. Over 300 companies and 40 UK universities work with TeenTech helping deliver their exciting programmes to students across the UK.
Connect with Maggie Philbin
- On LinkedIn
- @maggiephilbin
- @teentechevent
- More information: maggie.philbin@teentech.com

Mentioned in the podcast
- TeenTech
- Multi-coloured Swap Shop
- BBC’s Tomorrow's World
- BBC Bang Goes the Theory
- Infosys Knowledge Institute