Knowledge Institute Podcasts
-
AI Interrogator: The AI Gravy Train, Opportunism, and Innovation with Zoe Kleinman
February 20, 2024
Insights
- The episode highlights the emergence of a booming AI industry, characterized by a "gravy train" of startups vying for a share of the wealth and power associated with AI, and the challenge of distinguishing genuine AI innovation from opportunistic ventures.
- The conversation delves into the regulatory complexities surrounding AI, acknowledging the difficulty of governing a rapidly advancing technology. Lawmakers are more aware of the limits of tech regulation after the challenges of the social media era, raising questions about the adequacy of existing frameworks and the need for international collaboration on AI regulation.
Kate Bevan: Welcome to this episode of the Infosys Knowledge Institute's podcast on all things AI, the AI Interrogator. I'm Kate Bevan of the Infosys Knowledge Institute. My guest today is Zoe Kleinman, who's one of the most senior technology journalists in the UK. As she's the BBC's technology editor, she's been covering tech for a long time. And so, Zoe, welcome. Thank you for joining us.
Zoe Kleinman: Hello. Thank you for having me.
Kate Bevan: Well, you've been covering AI since before ChatGPT really sort of brought AI and generative AI specifically into some of the wider public consciousness. What are you seeing in your reporting of the AI space?
Zoe Kleinman: I think it's been really fascinating to watch people's perception of AI change, and I probably include my own in that. It's sort of morphed from the slightly quirky, sci-fi, not-really-real-life part of journalism into absolutely part of the bread and butter of news. We're seeing AI stories in healthcare, in science, in business and economics. It's becoming part of the fabric of everything. And ChatGPT has a lot to do with that, I think. We're talking just after its first birthday, and prior to its launch most people had lived alongside AI for a long time. It's been curating our social media feeds. It's been deciding what we might want to watch next on video streaming platforms, that kind of thing. But we haven't actually really spoken to it before and had it talk back. And so, I think for a lot of people that's been a revelation, and it's also been interesting to see how accurately it mimics us.
So, I've been watching robots do things really badly since about 2009, and now they're doing things a lot less badly than they were. And equally, chatbots can be awful and when they hallucinate or they just have a moment, they spit out all kinds of rubbish. We know that. But when they do it well, they really do do it well. And it's interesting to see it become a companion almost for some people rather than something that they might enjoy reading about because it's mad and bonkers, but it's never actually going to be in their house with them.
Kate Bevan: That's really interesting. The point also about leaping from sort of nerdy, niche sci-fi to the real world. I'm thinking about how Google Photos, where I have a lot of family photos, will throw up pictures of my dad as a baby and ask, "Is this Simon Bevan?" I find that just amazing and extraordinary. But also, there seems to be a bit of a gold rush about this. Are you seeing a lot of startups bubbling up to offer AI products, and what do you think they're like?
Zoe Kleinman: Oh, my goodness. I mean, the gravy train is in full swing, isn't it? There is so much money and power, I think, pouring into that particular space, and everybody wants a piece of that pie. I'm seeing a real mixture of things. There are some really clever ideas. There are some people working on some really cool stuff. I don't know how marketable some of it will be, but it's very interesting. And there are others who are just hopelessly trying to cling onto that bandwagon. I've sort of stopped doing it now, to be honest, Kate, but I used to get emails from people pitching their startup idea to me, and it would be the latest AI thing from blah. And I'd reply and go, "Well, what about this is actually artificial intelligence? What is it that you think makes this an AI pitch?"
And I would get tumbleweed in reply nine times out of 10. That's part of the problem. I think AI has a communication problem, and part of it is that there are so many people using it when they don't really need it. And equally, there are also people who are kind of trying to hide their AI offering, I think. There's that going on as well. We've got lots of companies, some very big, trying not to tell us very much about what they're doing. It's confusing the issue for people. I don't think that's helping.
Kate Bevan: I think that also comes back to the whole Wild West nature of it. It reminds me a bit actually of the early metaverse days when everybody was actually sure the metaverse was going to be the thing that was going to save us all.
Zoe Kleinman: I was never sure about that.
Kate Bevan: Neither was I, for the record. I spent a lot of last year going, "No, the metaverse is nonsense, but there's some interesting technologies around it." With all this sense that it's a bit of a Wild West with loads of startups popping up, are we ready for that? Do you think we've got regulation in place to cope with that?
Zoe Kleinman: Absolutely not. The regulation is so interesting, because I think what there is this time around is an awareness of how difficult it is to regulate tech. I think everyone has learned from the social media disaster that came before, right, when these companies said, "Oh, leave us alone. We can regulate ourselves. You don't understand it, you're old. Politicians, go away." And then we saw, time and time again, these companies failing to do exactly that, and eventually governments and regulators had to step in, and they have admitted that they don't necessarily understand the technology. And in fairness, politicians are generalists, aren't they? I think that's not a new issue, but I think it's becoming a more serious issue. I mean, when you've got Sam Altman testifying to the US Congress and you've got these lawmakers saying, "We don't know if we're up to the job, we don't really understand it," that's quite an astonishing thing for the lawmakers of one of the biggest territories in the world to come out with.
And also quite frightening, I think, to hear that from your people in charge. It's better to be transparent and honest about it and not to try and blag it because you can't blag it really. If you don't know what you're talking about, you can't really blag it in this space, can you?
But I also think if it is your job to get your head around this stuff, do it. Don't wait until you're in front of the world asking one of these leaders questions. Talk to them beforehand. You can ring up Sam Altman and say, "I want to understand AI. Come and talk to me." He will do that. He will come and talk to Joe Biden if that's what Joe Biden wants. Maybe he is. I kind of hope that he is.
I feel like it is good that they are admitting it, but it is one thing to admit it; what are you going to do about it? Let's see some action here. Let's see you making an effort to get your head around it. And I think what's been interesting to watch is how differently different territories are going about trying to regulate AI. And then you've got this growing chorus of people saying, "Well actually, it's not really a geographical thing, is it? We should have a UN-style regulator." The comparison I keep hearing is the International Climate Committee, something like that, but for AI. And yet on the other hand, you've got all these different territories frantically coming up with their own rules. So, what are we doing? Which way are we going here?
Kate Bevan: That actually encourages me, that they're saying, "We don't understand this." Because I'm thinking of the social media hearings. You covered them, didn't you, with Mark Zuckerberg in front of Congress? And they were asking questions that were naive and uninformed, and it sounded like they were trying to sound as if they knew what they were talking about. I'm actually quite encouraged by the thought of politicians and lawmakers going, "We don't know enough about this. How could we learn more?"
I'm quite encouraged, actually, by the AI Summit that was held here in the UK recently, last month. Obviously, it attracted a certain amount of ridicule, not least because Rishi Sunak, the UK prime minister, interviewed Elon Musk in a way that no journalist would ever have interviewed Elon Musk.
But are you encouraged by the thought that this stuff is actually being taken seriously, that things like the AI Summit were happening and are going to continue to happen?
Zoe Kleinman: So, I was at the AI Safety Summit in Bletchley Park here in the UK. Everyone was quite skeptical about it. Why is the UK doing it and what role can we play? I think they pulled it off. I think it was a really useful conversation to have. I'm not convinced that this Bletchley Declaration really amounts to very much. It's great that everybody agreed that you need to do stuff, but come on then, what are you going to do?
It seemed to me everyone was agreeing that we need regulation, and everyone was agreeing that this stuff is potentially threatening, but nobody seemed to quite agree on which of those threats should be prioritized. And in my opinion, there are more immediate threats to everyday life, things like the enormous impact on the jobs market that is already happening. That is arguably a more immediate, short-term threat than the sort of killer robot scenario, which is also a threat, right? That is a possibility. I don't think we should rule that out either, but I would be more worried about not being able to feed my children next week at this stage.
And I think for a lot of people that is a concern. The other thing that really troubled me was the lack of focus on data. I spoke to Rishi Sunak's representatives, who sort of put the Summit together and were the figureheads at the Summit. And they basically said to me, "Data's really hard. It's very difficult to collaborate internationally on data, so we are not fixing that." And I was like, "Yeah, but data is at the absolute heart of this. We all know the adage: rubbish in, rubbish out. If this stuff isn't trained on decent data, then it's not going to be very good. And on the other hand, you've got lots of companies pulling up the drawbridge on their data, including my employer, the BBC, and saying, 'Actually, we pay for this. We pay for our journalists to create this. We don't want you to just scrape it and have it for free.' So, what's going to happen there?" If all the decent data is taken away, what is left? I think that's another priority that was missed. But that said, it was the first one. We've got two more coming up, one in France and one in South Korea. Let's see how this evolves. I'm not writing it off yet.
Kate Bevan: Really interesting point there about data. We've recently finished a research project looking at how North American companies, and now European companies, are responding to some of the challenges of AI. And one of the things that has come out of the European report is that European companies are actually more confident about data. And our hypothesis on that is that because of GDPR, European companies just have more experience of handling data, making sure it's clean, making sure it's usable, making sure it's protected.
So that, to me, feels like an interesting twist on how companies handle data. One other thing that struck me, though, is what do you think about the culture that the AI companies seem to have been born out of? And I'm thinking sort of the Silicon Valley VC culture, the effective altruist culture where the founders set themselves up as great arbiters of what is good.
Zoe Kleinman: Well, I think effective altruism is probably a podcast itself, isn't it? It's a very interesting and unusual belief system that's followed by a select few incredibly rich and powerful people. My slight fear about all of this is that we are kind of repeating what's come before, certainly in the West, which is very few very wealthy US tech companies having all the power here.
I keep hearing, and I don't know what you think about this, Kate, that there's a growing concern about the lack of open source, not only in terms of AI itself not being open source, but also the lack of things like foundations, those kinds of, I guess, nonprofits, right? These kinds of more socially orientated organizations, a bit like a sort of AI Wikimedia type thing. There doesn't seem to be much of that with AI. It seems to be very proprietary, very commercial. And a lot of people are saying, "That's a bit worrying really, because if you are a commercially driven for-profit, then you're going to have different priorities."
And while we still don't know what on earth happened with Sam Altman dramatically leaving and then dramatically going back to OpenAI, I think there were three main theories, weren't there? And one of them was OpenAI was set up as a nonprofit, then it had a profit arm. I've heard stuff about how they want the profit arm to actually become more dominant. And was it that that divided the board? I mean, I don't know, right? This is speculation. I have to be honest with you. I don't know.
But it's interesting, isn't it, that this is so commercially driven this time because it's expensive. Making AI is super expensive. Let's not sugarcoat it. You do need millions of dollars and you need a lot of resources and a lot of infrastructure to do it. So, it is hard to do it on the cheap.
Kate Bevan: The argument I hear against open sourcing it, effectively, is that you can't have your own chatbot trained on internal corpuses of work, which I think is an important point. But then there's also the problem of black box algorithms. Yeah, we should definitely return to this at some point because it's such an interesting conversation to have. I don't think we've got time to have it properly now.
Then there's also the ideologically driven approach that everything should be open source. And that's not necessarily always the best. But as you say, it costs an absolute fortune. And look at how much money NVIDIA's making from selling its GPUs. There's no way some small, open source, hand-knitted AI company could afford to do that.
Zoe Kleinman: In the summer, I met a fantastic lady who, with her partner, runs a tiny business based in Northern Ireland, and they're doing a kind of chatbot type thing that makes websites. And they're doing really well. They've won awards. It is a small business, but for what it is, it's doing really well. And she really wants to push it on, but she said she just cannot get hold of the resources to do it.
When I spoke to her, she'd been waiting five months for a grant to buy one GPU, and in that same week, Elon Musk had bought 5,000. And at that point, we didn't even know why. We didn't know about Grok. We knew he was obviously doing stuff, but we didn't know what. But anyway, her point was: how do you compete with the guy who's got 5,000 when you haven't even got one yet? And that is the real difference in this world between the haves and the have-nots, because, as you say, you can't crochet a GPU, you've got to buy one.
Kate Bevan: There are all the resources that takes up as well. We haven't even started thinking about sustainability issues around AI.
Zoe Kleinman: Yeah, I mean, that is significant. It's huge. It's very hard to quantify, I think. But there's a researcher in the Netherlands who spends quite a lot of time looking at this. And this is slightly putting your finger in the air, but bear with me: if things carry on as they are, if the AI industry continues to grow as it is, and if there is enough access to enough infrastructure for it to be able to do so, he calculated that the AI industry alone will be using as much electricity as a country the size of the Netherlands by 2027.
Kate Bevan: Isn't that the stat, though, that was used about mining Bitcoin?
Zoe Kleinman: Yeah, the Netherlands is the new Wales, isn't it? If in doubt, compare it with the Netherlands. But I actually can believe that, because I've been to data centers. I went to a data center to do some filming for the BBC, and we walked down the normal server aisles, and it's loud. In a data center, you can hear all the fans whirring. And then you get to the AI aisle, where the servers with the big powerful GPUs are, and it's like standing underneath an airplane. You can literally hear the difference AI makes.
Kate Bevan: Wow, that's amazing. From what you're seeing, what do you think is the current state of AI? Now, there's an open-ended question for you.
Zoe Kleinman: I think it is growing extremely fast. I think it's where the money is. I think it's where the power is. I don't think it's going away. I think it is, I mean, in a way it's like a movie, isn't it? You've got a battle of good and evil. You've got these enormous potential benefits. People talk about helping find a cure for cancer or helping to create nuclear fusion. I mean, wow, how incredibly brilliant that would be.
And on the other hand, you've got, "Oh yeah, but it might just take control and eventually destroy us." The stakes are so high, and what I like to see is people taking that seriously. Bringing it back to social media, because I feel like that was the last big tech revolution we had, although it's not the same thing: we didn't realize how powerful it was. We didn't realize how much it was going to disrupt people's brains, how many problems it was going to cause. And I hope that we are being a lot more mindful of the potential impact of these tools this time around.
I just feel like we're in the middle of a revolution, and I think we're going to look back over these last 20 years, I don't know what we'll call it. What would you think we'll call it? Historical periods, revolution. We had the Industrial Revolution. What would this be? The AI Revolution? A bit boring. I don't know, but this will be an era of something that history will look back on, I think, when everything changed.
Kate Bevan: I always ask everybody, and we've touched on this already, do you think the AI is going to kill us all?
Zoe Kleinman: I hope not. I think what's really interesting, ultimately this tech is only as good as the people who are using it. And if people choose to misuse it and it works, then that will be probably quite catastrophic. I think a classic example of that in a way was this whole business with Sam Altman and OpenAI. Here, you've got this remarkably wealthy tech company, they're really powerful, holding these incredible tools. I mean, one of the very few people with his hands on that actual wheel is Sam Altman. And he gets ousted and then there's absolute chaos, and then he comes back, and nobody knows why. That was all completely human drama, wasn't it? It was nothing to do with the tech. I think as ever, if anything's going to destroy us, it's probably ourselves.
Kate Bevan: That's a brilliant answer. Zoe Kleinman, thank you very much for joining us today.
Zoe Kleinman: Oh, my absolute pleasure. Thank you for having me.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/iki. The podcast was produced by Yulia De Bari, Catherine Burdette, and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Zoe Kleinman
Zoe Kleinman is the BBC’s first ever Technology Editor, appointed to the role in 2021. With over 20 years broadcasting experience, Zoe has spent most of her career as a specialist presenter and reporter covering all areas of the tech agenda. She has also presented BBC Radio 4’s PM Programme.
From gadgets and games to cybersecurity and artificial intelligence, she makes tech news accessible to a mainstream audience of millions across national and international BBC Radio, TV and online outlets including Radio 4’s Today programme, BBC World TV, and the UK’s most watched news shows.
Zoe has raced drones in the Nevada desert, taken a backseat drive in a driverless car, spent the night in a house full of robots and climbed Mount Everest in Virtual Reality.
She’s reported on Elon Musk’s controversial takeover of Twitter, the volatile world of cryptocurrency, gender bias in Silicon Valley, and Facebook’s Cambridge Analytica scandal, and her exclusive reports about parents whose children accidentally spent thousands of pounds inside video and mobile games were discussed by politicians and led to a parliamentary committee hearing on gaming and gambling.
Zoe’s wider journalistic skills are hugely respected across the BBC and in 2020 Zoe was asked to work as a senior journalist on the BBC News team covering the Covid-19 pandemic.
A confident and entertaining speaker who is in demand as a host, chair, panelist and lecturer, Zoe’s expertise and knowledge of the tech sector is brought to life by her engaging style. Her networking skills and ability to engage with people from all walks of life ensure many repeat bookings.
Connect with Zoe Kleinman
- On LinkedIn

Mentioned in the podcast
- AI Safety Summit
- OpenAI
- Sam Altman
- Infosys Knowledge Institute