Knowledge Institute Podcasts
-
AI, Election Interference, Disinformation and Media Literacy with Insikt AI's Jennifer Woodard
August 30, 2024
Insights
- AI is increasingly being leveraged by bad actors to manipulate narratives, polarize societies, and interfere in democratic processes. Jennifer Woodard points out that while the tactics of disinformation and foreign interference campaigns have remained consistent, AI has amplified their scale and sophistication. Generative AI, in particular, has made it easier to create convincing fake content, making it more challenging to detect and combat disinformation. This insight highlights the critical need for advanced detection tools and strategies to counter these emerging threats.
- Jennifer emphasizes that media literacy is a crucial defense against the spread of disinformation, particularly as AI tools make it easier to produce and disseminate false narratives. She advocates for proactive approaches like "pre-bunking" and inoculation strategies, which involve educating the public about common disinformation tactics before they encounter them. By enhancing media literacy, societies can better equip individuals to critically evaluate the information they consume, reducing the effectiveness of AI-driven disinformation campaigns.
Kate Bevan: Welcome to the Infosys Knowledge Institute's podcast on being AI-first, the AI Interrogator. I'm Kate Bevan of the Knowledge Institute, and my guest today is Jennifer Woodard, who's a co-founder at Insikt AI, which is using AI to understand and defeat online harms. Now, this is a really pertinent thing to be talking about at the moment, not least because, Jen, you've done a lot of work on using AI to detect disinformation around elections, haven't you?
Jennifer Woodard: Disinformation, misinformation, specifically disinformation, I would say, and foreign interference campaigns, so foreign interference and manipulation, which is also called FIMI in our jargon. And yeah, it's a huge issue, obviously, given the election year. We've seen past elections be affected by disinformation campaigns from foreign interference operations, but back when we first started looking at this and talking about it, it was not enabled the way it is today.
So, as we all know, we've been beaten over the head constantly about things like generative AI over the past couple of years, but back when we started looking at this, around 2016, those things weren't being talked about. So the enablement of scale, the economies of scale being built to support these types of operations, is really what's worrying right now.
Kate Bevan: So, what have we learned from the past few elections, and what do we need to do this year? Because there are so many elections this year: India's just had an election, there's a general election coming up in the UK, and of course there's a really important election in the US in November. What do we need to take from what we've learned in the past, since you've been working on this since 2016, to protect us this year?
Jennifer Woodard: As I mentioned before, things have changed with the entrance of this marketplace of AI tools and capabilities that are falling into the hands of bad actors, but the tactics and techniques haven't really changed. We need to be on the lookout for what the focus of these actors is, and that's polarization and manipulation of narratives. In many cases, these are local narratives. A lot of these campaigns, even before AI entered this space, were tapping very specifically into the ills and hatreds of different societies, and they were hyper-localized and hyper-focused.
So, I think when people have a look at the media and get really outraged by something, they automatically assume, oh wow, this is something that's happening organically in my neighbourhood; these are things that matter to me. Actually, a lot of these things are being manufactured using the knowledge of what divides people. We've seen this happen in cases around the world, pitting ethnic groups against each other and actually sparking genocide, so it even goes beyond things like elections and democracies.
Kate Bevan: This is quite scary. When you say bad actors, state actors, who are the main players in this space? Is it Russia? China?
Jennifer Woodard: I think the Russia disinformation question cannot be ignored. I think we do know this, right? And it's only been exacerbated by the war in Ukraine. This is something that was already in the making, and democracies around the world have been trying to address it since well before the war started; it's been happening for a really, really long time. And of course, certain state and non-state foreign actors have a vested interest in disrupting our democracies.
Kate Bevan: It was interesting. I'm thinking of the 2012 election in the US, which is widely thought to be the first really digital election, when Obama and his team were really using digital tools to reach people, personalization. It feels like we've seen a very clear perversion of those tools, in a way, because he was using them to reach people and spread positive messages, and in the 12 years since then, it's really gone bad.
Jennifer Woodard: Yeah. I think in the case of Obama, it was what we called the first digital grassroots campaign, really getting into the hearts and minds of people digitally, in ways that used to mean just knocking on doors. And to your point, yes, absolutely, these tools are now being used for lots of other things. Social media, before we even started talking about AI and disinformation, was already playing a huge role in this. I love technology, I work in AI, we build AI, I have nothing bad to say about AI, but it is an enabler. It's an amplifier, right? An amplifier of good things and of bad things.
Kate Bevan: So, what kind of tools can we use to mitigate those threats? I'm particularly interested in the AI tools. How do we turn AI on its head to mitigate these threats?
Jennifer Woodard: If we take a step back and we think about the threats that are being posed by AI, as we say, by amplification of narratives, disinformation campaigns and things like that, what we would want to do is turn that on its head and figure out ways we can use similar types of methodologies and approaches to detect this type of behaviour. Now, a lot of this is really hard to detect, and with the entrance of generative AI and large language models, deceptive text is even more convincing than it's ever been before.
If I get a bunch of messages from you, Kate, and they're telling me a bunch of crazy things, I'm probably going to believe they're coming from you, because they have very specific things related to your user persona or the way you speak, and it's very easy to imitate these types of things. But at the same time, we can use AI to identify certain types of patterns. I think right now it's probably easier than it used to be to recognize what has been generated by a ChatGPT-type tool; there are certain patterns in the way it writes. That would be one way to detect this type of deceptive campaigning.
And where it really gets tricky is when we move into multimedia. Deepfake audio, for example, is something a lot of people have brushed over in the past, but it's getting really, really easy to do and really hard to detect. Text is already challenging, but when we move into multimedia, these things are harder and harder to detect, and things like photos are very, very easy to manipulate as well.
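To make the text side of this concrete, below is a minimal sketch of one pattern-based check: scoring a passage with a small language model and treating unusually low perplexity as a weak signal that the text may be machine-generated. The model choice, the threshold, and the example sentence are illustrative assumptions, not Insikt AI's actual method; real detectors combine many more signals.

```python
# Hypothetical sketch: flagging possibly machine-generated text with a
# perplexity heuristic. Very "predictable" text (low perplexity) is one
# imperfect signal of generation. Illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for a piece of text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_generated(text: str, threshold: float = 40.0) -> bool:
    """Crude heuristic: low perplexity suggests the text may be generated.
    The threshold is an assumption and would need tuning on real data."""
    return perplexity(text) < threshold

print(looks_generated("The quick brown fox jumps over the lazy dog."))
```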
Kate Bevan: Are you worried about the upcoming elections?
Jennifer Woodard: Yeah, for more reasons than one, and not just because of AI powering disinformation. I believe disinformation as a whole is a virus, it really is, and it spreads like one. When we talk about things like narratives, they're really, really hard to stop once the cat is out of the bag. And it's really hard to fully understand how these narratives are developing without being able to employ AI and visualize these things at a network scale. In some cases, we've looked at conspiracy theories that have warped into offline violence in no time, because the speed at which these campaigns are deployed, and at which they propagate across the network, is hard to even imagine and obviously hard to combat. And who actually should be doing the combating is another question, right, because a lot of these things are happening on social media platforms.
I really do think, in most cases, they're trying their very best, but the scale of the problem is big, and they're facing multilingual challenges and multimedia challenges, as I mentioned before. So, it's not an easy problem to solve. I'm afraid of narratives getting out of control, and I'm also afraid of the gaps in media literacy that we have as a society, in western society or in any society, here in Europe or in the US. In the context of very quick social media interactions, people are still apt to accept everything at face value and believe something just as it's reported, and that's really, really scary as well.
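As a rough illustration of the network-scale view Jennifer describes, the sketch below builds a directed share graph from a handful of invented share events and reports how far and how fast a narrative spread, plus which accounts amplified it most. The accounts, timestamps, and thresholds are hypothetical; a real system would ingest platform data and use much richer analysis.

```python
# Illustrative sketch (not a production system): representing how a narrative
# spreads across accounts as a directed graph, then measuring its reach,
# speed, and main amplifiers. The share events below are made up.
from datetime import datetime
import networkx as nx

# (sharer, original_poster, timestamp) for one hypothetical narrative
shares = [
    ("acct_b", "acct_a", datetime(2024, 5, 1, 9, 0)),
    ("acct_c", "acct_a", datetime(2024, 5, 1, 9, 5)),
    ("acct_d", "acct_b", datetime(2024, 5, 1, 9, 7)),
    ("acct_e", "acct_b", datetime(2024, 5, 1, 9, 8)),
    ("acct_f", "acct_c", datetime(2024, 5, 1, 9, 9)),
]

g = nx.DiGraph()
for sharer, source, ts in shares:
    g.add_edge(source, sharer, time=ts)  # edge points from source to sharer

# Cascade size and speed: how many accounts were reached, and how quickly.
reach = g.number_of_nodes()
times = [ts for _, _, ts in shares]
duration_min = (max(times) - min(times)).total_seconds() / 60

# Accounts acting as amplification hubs (highest out-degree).
hubs = sorted(g.out_degree(), key=lambda kv: kv[1], reverse=True)[:3]

print(f"narrative reached {reach} accounts in {duration_min:.0f} minutes")
print("top amplifiers:", hubs)
```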
Kate Bevan: How do we help people become more media literate, then? And do you have a sense of which groups are the most vulnerable to this? Is it older people, because we talk about boomers on Facebook? Is it also the younger people who get all their news from TikTok and have no sense of the wider news ecosystem? How do we help them become more media literate?
Jennifer Woodard: My personal perspective is I wish we had better, more reliable traditional media, because I feel like that's evaporating. It really, really is. In the case of younger generations, obviously, it's all clickbait, it's all headline-driven, it's all dopamine-driven. They're absolutely vulnerable. Before we got into the area of disinformation, we were focusing on radicalization, and that was a very similar type of problem: kids using TikTok or Instagram are exposed to different types of content, the algorithms just keep serving them related content, and the next thing you know, they're caught up in a radicalization process. Well, it's the same thing with disinformation. You go down a rabbit hole, one thing leads to another, the algorithm takes you there. So, from that perspective, algorithmic manipulation is what is affecting younger people.
I think with older people, it's probably this traditional media gap. For instance, in the US, it's very easy to tell what ideology someone has from the type of news they're consuming. It's really hard to find the middle ground, so people go off and read alternative news, and they go to media that is probably not telling the whole story or the whole truth. And older people didn't have this problem before, at least not at this scale. So, they're not expecting, when they're consuming content, to be duped or manipulated into becoming very polarized about an issue, but it is happening.
Kate Bevan: That's true, actually, when you think about my generation. I'm at the very young end of the boomers, on the cusp of Gen X. We grew up with traditional media, with watching the news, with newspapers. And if you're not already extremely online, if you're not on Twitter or TikTok or Facebook the whole time, you just don't expect to be on the receiving end of so much disinformation, so you're not prepared at all. How do we help people get prepared for this?
Jennifer Woodard: There are many approaches, obviously. Even some of the tech companies have really invested in this, things like pre-bunking, getting ahead of the manipulation. It's been proven that debunking erroneous facts after the fact is not very effective; people have already bought into the narrative, so it's really hard to get them to come over to the other side. But getting ahead of it, exposing them to something (they actually call it inoculation) and making them understand that they've been fooled by it, just saying, hey, this is actually Russian disinformation, did you know that? It really makes a psychological impact that gets them questioning and being more sceptical of what they come into contact with in the future. So, I think this is a very helpful tool, it's being deployed by some tech companies, and lots of different researchers have been experimenting with it. It seems to really work.
Kate Bevan: That's actually really interesting. Are these AI tools specifically, or just tools in general? The algorithms, the machine learning?
Jennifer Woodard: It could be deployed in a variety of ways. It could literally just be a website. I think Google actually has a website where they show different examples of disinformation campaigns and you take a quiz: hey, did you think this was disinformation? Did you believe this? And people really are jarred by the fact that they've been taken in by stuff that's completely false.
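For readers curious what an inoculation-style quiz item might look like in code, here is a minimal, hypothetical sketch. The headline, tactic label, and feedback text are invented for illustration and are not taken from Google's quiz or any real campaign.

```python
# Hypothetical sketch of a "pre-bunking" quiz item: show a claim, ask the
# user whether it is disinformation, then explain the manipulation tactic.
from dataclasses import dataclass

@dataclass
class PrebunkItem:
    headline: str            # the claim shown to the user (invented example)
    is_disinformation: bool  # ground truth for this quiz item
    tactic: str              # the manipulation tactic being taught
    explanation: str         # feedback shown after the user answers

item = PrebunkItem(
    headline="LEAKED: secret memo proves officials plan to cancel the election",
    is_disinformation=True,
    tactic="outrage-bait framing and a fabricated 'leak' with no source",
    explanation="No outlet or document is cited; unverifiable leaks paired "
                "with emotional language are a common manipulation tactic.",
)

def feedback(item: PrebunkItem, user_says_fake: bool) -> str:
    """Tell the user whether they spotted the manipulation, and why."""
    correct = user_says_fake == item.is_disinformation
    verdict = "Correct" if correct else "You were taken in"
    return f"{verdict}. Tactic used: {item.tactic}. {item.explanation}"

print(feedback(item, user_says_fake=False))
```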
Kate Bevan: I used to do a lot of work with scam victims, and they were always ashamed that it was their fault, and I think that made it harder for people to admit that they'd been scammed. Do you see that in the response to disinformation?
Jennifer Woodard: I think people are really relieved after they have a look at these things, and shocked at how deceptive some of them are, how real they look. Take exposing someone to a fake news site: there's this phenomenon of lookalike outlets, it's not the Washington Post, but it's something like the Washington Tribune, these types of foreign-manipulation-driven media, and the way the content is written, people are shocked that it's not actually real. So, when they come into contact with it, I think they're relieved, so to speak, that they were warned, and not particularly ashamed of it.
Kate Bevan: Can we use regulation to help with that? Or is regulation really struggling to keep up?
Jennifer Woodard: I think this is really hard to regulate. First of all, it's hard to even wrap your head around what the problem is. There's this whole question of alternative facts: what's a fact? What is real? What's the truth? What's not the truth? Then there's freedom of speech, obviously a core tenet of our society. So, it's really hard to regulate, and I wouldn't want to see speech regulated at all. What I would like to see is that when bad actors are discovered, there is some sort of consequence, but it's a hard thing to do. AI regulation, as you know, has entered into force here in Europe and in other parts of the world, but I don't think that will play a role in this at all. What it could do, potentially, is penalize the use of generative AI tools for nefarious purposes. But the people using these tools are not worried about the law, let's say.
Kate Bevan: Are you optimistic about regulation, keeping AI honest and ethical?
Jennifer Woodard: Yeah, I am, actually. There are so many cultural contexts and nuances that we need to think about in the context of Europe, and I think the regulation that's been put into place is very much in line with what are considered European values. When the whole topic was being debated, there were a lot of questions; I myself was questioning whether or not this would hinder innovation. But when generative AI started appearing on the scene, they actually updated what was already on the docket to be approved, and I think they've done a really good job. I do think the regulation needs to move with the times; it can't just be set in stone and that's it. I think that's the approach.
Kate Bevan: There's regulation. Boom, we're done, right?
Jennifer Woodard: Yeah, exactly. It's just like anything else. You build the guidelines, you see how the space progresses, and from there you really need to adapt to the times. Obviously, the digital space moves extremely fast.
Kate Bevan: But how do you then reconcile that, the European values you mentioned, and the US's too, with big players like China, who have a completely different take on things? Can we ever reconcile those?
Jennifer Woodard: Personally, I don't think so. I've had conversations with people across the world about this. China was actually one of the first countries to put AI regulation in place, for different reasons, obviously, with a different set of values, let's say. I think what we need is some sort of global framework, not necessarily regulation, but a level of collaboration from the research perspective: putting some guardrails in place so that if you leave your swim lane, you're going to be in some sort of trouble. Almost like a UN approach to AI.
Kate Bevan: That's a really interesting thought, actually, a UN approach. If there were one thing you could do about regulation, or one regulation you could put in place, what would it be?
Jennifer Woodard: I think a lot of the regulations that need to be in place are probably already in place in Europe, and maybe too many, let's say. It's not about regulating for regulation's sake; it's about the application of AI in its specific use case. Something that is amazing for one use case could be completely dangerous for another. Think about things like facial recognition: what are you actually using it for? Other types of biometrics, too. There are things on the list of prohibited practices in Europe, like social scoring, that have no place in a democratic society. So, I think most things are already regulated for, but we really need to focus on the use case rather than the technology itself.
Kate Bevan: Okay. And that kind of brings me to my final question, the one I always ask. Do you think AI is going to kill us all?
Jennifer Woodard: No, I don't think so. No. I don't know if anyone's ever said yes, but no, I don't think so. If I did, I wouldn't be building AI. And I want to play a role in ensuring that AI is safe, is ethical, and is used to stamp out these types of things that could potentially kill us or at least divide us like disinformation, online harms, terrorism and things like that. But I think AI is an enabler of good. It will be an enabler of good. I think a lot of the risks, even though I'm sitting here telling you about all of these scary things, a lot of the risks are completely overblown. And I think we have every reason to be optimistic about the future of AI.
Kate Bevan: That's a great place to leave it. Jennifer Woodard, thank you very much indeed.
Jennifer Woodard: Thank you, Kate.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts. And visit us on infosys.com/iki. The podcast was produced by Yulia De Bari and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Jennifer Woodard
Jennifer Woodard is the co-founder and CEO of Insikt AI, a deep-tech startup applying AI and machine learning to unearth hidden insights within large datasets for customers in the government sector, helping them fight online harms such as disinformation, extremism and hate speech. She is also the co-founder of Dataietica, a non-profit institute created by AI practitioners and researchers in the security industry who want to fully realise the potential of new technologies for public safety and policing, while addressing the implications such innovations have for ethics, privacy and human rights.
Jennifer is a recognized expert in the application of AI for counterterrorism and a frequent speaker on the topic at high-level events. Most recently she briefed the UN Security Council on the threats and opportunities of AI in the area of online disinformation. She is also frequently called upon to speak on the topic of ethical AI for security and policing in the EU, US and Israel.
- On LinkedIn
About Kate Bevan
Kate is a senior editor with the Infosys Knowledge Institute and the host of the AI Interrogator podcast. This is a series of interviews with AI practitioners across industries, academia, and journalism that seeks to have enlightening, engaging, and provocative conversations about how we use AI in enterprises and across our lives. Kate also works with her IKI colleagues to elevate our storytelling and understand and communicate the major themes of business technology.
Kate is an experienced and respected senior technology journalist based in London. Over the course of her career, she has worked for leading UK publications including the Financial Times, the Guardian, the Daily Telegraph, and the Sunday Telegraph, among others. She is also a well-known commentator who appears regularly on UK and international radio and TV programs to discuss and explain technology news and trends.
- On LinkedIn
Mentioned in the podcast
- “About the Infosys Knowledge Institute”
- “Generative AI Radar” Infosys Knowledge Institute
- Insikt AI