AI Interrogator: AI's Impact on Journalism with Columbia Professor Emily Bell
January 15, 2024
Insights
- Some newsrooms are already using generative technologies to produce content. From automated story writing to algorithmically generated articles, these technologies have left their mark on journalism, raising questions about the authenticity and transparency of AI-generated content.
- The conversation also touches on the worrying side of algorithmic curation, particularly on social media platforms. These algorithms shape user experiences and can work against journalism: as platforms prioritize other content over news stories, newsrooms struggle to maintain visibility, shifting the landscape of news distribution and audience engagement.
Kate Bevan: Hello, and welcome to this episode of the Infosys Knowledge Institute podcast on all things AI, the AI Interrogator. I'm Kate Bevan of the Infosys Knowledge Institute, and my guest today is Professor Emily Bell. She's the director of the Tow Center for Digital Journalism at Columbia University in New York. Thank you very much for joining us today.
Emily Bell: Thanks very much indeed for inviting me on, Kate.
Kate Bevan: Let's get started with this: do you think AI is a threat to journalism?
Emily Bell: Yes, of course it is, because it's going to fundamentally change the way in which people receive and disseminate information. And anything that changes that proposition fundamentally, in the same way that broadband internet did in the 2000s, is inevitably going to pose a challenge, at the very least, and a threat to existing institutions and existing practices. So, in that dimension, yes, of course this is going to be a threat of some type to journalism. Does that mean it's going to put an end to journalism? Absolutely not. Is it something that journalism can use to build its strength and resilience against some of those threats? The answer to that is yes, probably. But like all these things, Kate, we're at a point where we don't really know. We're trying to evaluate that relationship without the benefit of knowing how these generative technologies will take shape. And I think that's both the exciting and slightly worrying part of this: just as with social media, we're going to have to see how people use it.
Kate Bevan: Yes, I'm thinking about when you and I both started in journalism, hammering away on typewriters in the days of hot metal. We were really worried about the impact of new technology then, and yet we came along and created a whole new class of jobs. And it feels to me that AI is going to be similar in that way, so I'm quite glad to hear you're not completely full of doom about it. Is AI already being used in newsrooms, and what's the effect there?
Emily Bell: Yeah, AI, or some version of AI, has been used in newsrooms for some time, or at least in the more sophisticated ones. Let's take a really big, rich news organization: The New York Times. They've been using large data sets and neural networks for some time to understand how users visit and consume news on their site, and then changing things like user pathways, or where you go next on a page, and helping to shape how people might look at particular advertising. That kind of use is pretty widespread, as is something that I don't consider to be AI but which a lot of people associate with it: robotic writing, the idea of automatically generated stories. A lot of people seize on that as perhaps the biggest change to journalism, which I don't necessarily agree is going to be the case.
Those have been around for a very long time as well, but we would just regard those as structured data being turned into written language. There was a company about 10 years ago called Narrative Science, founded by a pair of academics, which figured out how to write stories at great volume from bits of structured data: sports scores, financial results, almost anything, till receipts from shops. That's also old, but I think it's going to change into more generative uses. And we have seen newsrooms already saying, "Hey, we're using AI to write these stories." And sometimes they've been discovered using AI to write stories when they haven't disclosed it.
Kate Bevan: Quite recently a story popped up on Twitter from some London news outlet, a fairly low-level one I think, about London railway stations as film locations. It described London Bridge as the station where Harry Potter was filmed, and I remember everybody looking at that going, that must be AI at work, hallucinating wildly. For those who don't know London, London Bridge is one of London's many railway stations, but the one Harry Potter is set at is King's Cross.
Emily Bell: Well, actually, if you just Google well-known showbiz names right now and look at your top search results, three or four stories down you'll usually find something that doesn't read quite right about, say, Taylor Swift. You'll suddenly think, well, this is a slightly oddly written piece. What is this outlet? I don't think I've heard of this before. There's already a gazillion different, I guess I'd call them spammy content farms out there, churning out this sort of stuff. ChatGPT and very accessible generative AI applications are just going to drive a massive proliferation in that kind of content.
The people who are really excited about these technologies being available at the moment are, I hate to use the words, Kate, grifters and scammers. If accuracy is not a value you care about, then these tools are already being used by every single person or business that falls into that category. And there's an awful lot of those in the gray market for gaming the commercial web, whether it's fake reviews or very spammy search-engine-optimized pieces that they then sell advertising around, or attempt to sell advertising around. Those uses are out there already, and I think it doesn't take much to spot them if you know what you're looking for.
Kate Bevan: One of the things that worries me, beyond the way you've described AI being used to help news sites understand their audiences, is algorithmic curation and the way it directs people to content. We've seen the downsides of that with YouTube and the way it sends people down rabbit holes. Are you worried about that with news content?
Emily Bell: My most overwhelming worry is really not in the heart of the newsroom and how this is going to be used by the handful of legitimate press outlets left operating at the end of all this. It's really about how the rest of web publishing, and particularly the platforms that have so much of our attention, are already deploying it to funnel audiences into marketing segments, essentially. And we can see that sophistication really changing. You mentioned, Kate, the phenomenon of the YouTube rabbit hole: you watch a video, you're recommended another video, then another one, and you end up down a cul-de-sac of similar content. That really became known to the public in about 2016, particularly after the American election, though researchers had been onto it earlier; papers had been published about it since about 2013 or 2014.
But now we see incredibly sophisticated, much more sophisticated use of this. If you spend any time on TikTok, which is a controversial but highly addictive and very popular social app, and I study this, our researchers study this, the algorithm is insanely well attuned to what you're thinking and what you're doing, and it sends you all kinds of things. In my case it's dogs, dogs that have done bad stuff. Today it's BookTok: I read quite a lot of books, so I get a lot of book recommendations. You find yourself targeted seamlessly and rapidly.
Now, where the threat to journalism lies in that, and there's a lot of emerging research around this at the moment, is that most social platforms have decided that news is bad for them, one way or another. Divisive, difficult news stories, whether it's the current conflict between Israel and Palestine or elections, and we've got several big ones coming up in 2024, including the US presidential election, cause political, press, and PR problems for platforms that are simply not worth the engagement they get back. So, what the large distribution platforms that hold so much human attention can do is turn down the volume on the algorithm, put it that way, so we simply don't see news stories high in the mix, or anywhere in it at all. As a result, newsrooms are currently experiencing a collapse in traffic, both from social referrals and from search referrals. And it's a really interesting phenomenon that we're going to have to get our heads around and, as journalists, grapple with.
I call it the very first post-broadcast era of news that we've ever had since the invention of electronic media, even before the invention of electronic media. The town crier was a broadcaster. Newspaper front pages were a kind of broadcast media. You could see what somebody sitting on the subway or standing at the bus stop was reading. Newsstands have been used by dictators, who mandated which stories faced the public. These are all broadcast mechanisms. That's all going away. You can't see what people are consuming now. You've got no insight into how those stories are reaching people, because the platforms that disseminate them simply don't share that data, and, incidentally, neither do newsrooms.
Kate Bevan: There's an impact on society, isn't there, if we don't have high-quality, widely trusted news. And also, and I think this is something I probably heard from you first, there's the concept of context collapse, where you're just seeing something delivered to you via the medium of TikTok or YouTube or Twitter, now called X, or whatever, and you don't know where it's coming from. You can't judge its trustworthiness if it's just coming to you through this one fire hose. That's one of the impacts of all this that I do worry about.
Emily Bell: Yeah, I think you're absolutely right. I really think that the big effect on journalism is going to be everything that AI offers to, and does in, other parts of society that we then have to unpick and report on. And for once, we're part of that story. We are reporting from inside a story.
Kate Bevan: And we hate becoming the story, don't we, as journalists?
Emily Bell: Exactly. We hate being the story, but we're having to figure out exactly these questions. And the societal part of this is important, because increasingly everything is mediated, and everything is a platform. There's a very smart legal academic at Stanford called Evelyn Douek, and she has a great phrase: everything is a content moderation problem. In other words, every single electronically mediated interaction now comes with question marks about who put the messages there, who paid for them, who's seeing them, and how they're receiving them. When you talk to your doctor, that's a mediated experience these days. When you're thinking about your financial choices or your political choices, that's often a mediated experience. We're in a world where everything has become the flat space of the internet, all of these things mixed together with very little differentiation between them. And you and I, Kate, come from an era of newspapers and broadcasting where it was actually the law that you had to clearly delineate what was commercial content and what was editorial content.
If you'll indulge me for one second, a small anecdote about a class I teach at Columbia called Information Warfare Reporting. We have a source who comes into class and talks to students, someone who makes a living in what you and I would certainly consider a gray area between legality and illegality: creating false accounts, writing false reviews, all of that sort of thing. The last time our source came in and talked, and the students were absolutely electrified by this, they said, "ChatGPT is completely transforming what we do. It's absolutely amazing, because previously we could still produce spam and all sorts of false profiles and things like that; they just weren't very well written, and they weren't very convincing. And now we can just tweak that model." And I think those types of operation are much more prevalent than perhaps most people realize.
We're already seeing what we call friends-and-family scams. Again, they're pretty crude, a bit like email phishing, but they have an advantage over email spam: they can now tell when somebody is perhaps in a vulnerable state. They can also sometimes tell, depending on who they've bought their data from, where your relatives are. So they can spoof email or, more usually, telephone locations. They can say, "Your friend Kate is calling from London," and it would sound exactly like you, and you'd say, "Daphne is terribly ill," that's your beloved cat, "and I need $1,000 immediately, Emily. Can you help me?" And for those five seconds, I would think, "I must send Kate $1,000 immediately." This is happening right now. This is not a dystopian future. This is the present reality of that kind of activity.
Kate Bevan: We've been talking about journalism, but it strikes me there's an impact on the kind of work we do at Infosys and the Knowledge Institute, which is creating content that brings many journalism skills to bear. And obviously we are very ethical about how we write things: we source things, we do it properly, largely because the Knowledge Institute is made up of many former journalists. But there is plenty of space for much less scrupulous actors to get in there and craft corporate content that's considerably less ethical and considerably less reliable.
Emily Bell: Absolutely right. We study some of this in the United States, particularly things that look like news organizations with thousands of titles but that are largely illusory: they don't actually employ hundreds of journalists. They are algorithmically generated, often paid for by political lobbyists or by public relations crisis management firms, which can magic up all kinds of targeted stories. The thing is, they're not very readable, and they're never going to replace your high-quality local news outlet. A lot of this content isn't really there to be read. It's there to create what we call a surround-sound effect, to make people think an issue is important when it isn't.
So, for instance, voter fraud was established in the U.S. public's voting imagination months and months before the 2020 election, and then it was triggered by Donald Trump supporters saying, "We know there's a problem with voter fraud." You look at the Brazilian elections and you see that Jair Bolsonaro did exactly the same thing. You can manipulate that information landscape, if you like, and the consumers within it, in a low-cost, highly targeted way now. I do think there's a question for me about whether we are now generally aware enough of that manipulation to see a plateau in it. There's been some interesting research recently from academics looking at AI-generated content saying we might be reaching a plateau of mis- and disinformation, which I really hope is true.
Kate Bevan: That's actually quite encouraging. That brings me on to my final question, really, which is what I ask everybody. It sounds a bit facetious, but it's a good way of getting into it: do you think AI is going to kill us all?
Emily Bell: No, it's not going to kill us. But things that don't kill you do not necessarily make you stronger. There are hundreds of ways that AI is going to make things difficult for people, including understanding what's going on around them, and we have to be alert to those risks. It's the small, immediate risks that I'm much more concerned about than the very long-term "the robots are going to kill us" scenario. You can always just unplug the machine. As somebody said, nothing's going to kill you when you can just pull the plug out of the wall and it doesn't work anymore.
Kate Bevan: That's a great place to bring it to an end. Professor Emily Bell, thank you very much for joining us.
Emily Bell: Thanks so much, Kate.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/iki. The podcast was produced by Yulia De Bari, Catherine Burdette, and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Emily Bell
Emily Bell is founding director of the Tow Center for Digital Journalism at Columbia Journalism School and a leading thinker, commentator, and strategist on digital journalism.
Established in 2010, the Tow Center has rapidly built an international reputation for research into the intersection of technology and journalism. The majority of Bell’s career was spent at Guardian News and Media in London working as an award-winning writer and editor both in print and online. As editor-in-chief across Guardian websites and director of digital content for Guardian News and Media, Bell led the web team in pioneering live blogging, multimedia formats, data, and social media, making the Guardian a recognized pioneer in the field.
She is co-author of “Post Industrial Journalism: Adapting to the Present” (2012) with CW Anderson and Clay Shirky. Bell is a trustee of the Scott Trust, the owner of The Guardian, and a member of Columbia Journalism Review’s board of overseers. She is an adviser to Tamedia Group in Switzerland, has served as chair of the World Economic Forum’s Global Advisory Council on social media, and has served on Poynter’s National Advisory Board.
She delivered the Reuters Memorial Lecture in 2014, the Hugh Cudlipp Lecture in 2015, and was the 2016 Humanitas Visiting Professor in Media at the University of Cambridge.
She lives in New York City with her husband and children.