Knowledge Institute Podcasts
AI Interrogator: How can we make AI responsible? With Clare McGlynn
February 27, 2024
Insights
- The Taylor Swift deepfake incident highlights the urgent need for responsible AI practices, emphasizing the importance of proactive measures to mitigate the spread of harmful content and protect individuals' privacy and dignity online.
- To effectively address deepfake threats and uphold responsible AI, policymakers must prioritize comprehensive regulation and platform accountability, ensuring that legal frameworks are agile enough to anticipate and address emerging forms of digital abuse.
Kate Bevan: Hello, and welcome to this episode of the Infosys Knowledge Institute Podcast on all things AI, The AI Interrogator. I am Kate Bevan of the Infosys Knowledge Institute. And my guest today is Clare McGlynn, who is a professor of law at Durham University in the Northeast of England. She is also an expert on shaping criminal law around pornography, with a particular interest in crimes and violence against women. Of course, AI has been a very big focus of her work. Clare, thank you very much for joining us today. And I do not think I have ever said this before in my work at Infosys, but: tell us about Taylor Swift.
Clare McGlynn: Yes, I have been very busy over the last few days, as have many people in our field trying to get to grips with the impact of what has happened to Taylor Swift. So, for those of you who may not know: there were some AI-generated pornographic images of Taylor Swift made without her consent that were distributed, we think from Reddit onto X, formerly Twitter. And at one count, around 47 million people had viewed these deepfake pornographic videos. So, it has been a real wake-up call, I think, for many people to realize what deepfake porn is, how harmful the abuse is. And we are now trying to think of the ways forward that we can use this, it sounds terrible, but use this example to try to help protect and free other women and girls who may be facing this.
Kate Bevan: So, what does this mean for policymakers? I mean, both in terms of the specific problems with what has happened to Taylor Swift in these past few weeks, but also, more broadly, the impact of AI on deepfakes and the ease with which people can create these harmful images. And where is the law on this?
Clare McGlynn: So, we have known about what we now call deepfake porn or deepfake abuse for many years, since about 2017, when it first started to be used. So, it is not a new phenomenon. And actually, deepfake porn had been made of Taylor Swift many, many years ago. But what is new is the number of times it has been viewed and how fast it was distributed. And now, as we all know with ChatGPT and all other forms of AI, it is so easy to make these images and videos. This is why it is now an exponential problem and a real crisis.
Clare McGlynn: So, for policymakers, I think this needs to be a wake-up call. It is an urgent call for action. Because many of us, like myself and many others, have been talking about the problems of image-based abuse, of which deepfake pornography is an example, for many, many years. But nobody has really been listening. For example, in the United Kingdom last year when we had an AI summit, the focus was on existential threats of all sorts, but the everyday threat is to women and girls. So, policymakers I think need to take action on three different levels: the criminal law, the civil law, and we need to take steps to better regulate the platforms.
Kate Bevan: What does that mean in terms of actual steps? How would we regulate the platforms?
Clare McGlynn: So, what we need to do with the platforms is, for example, there are regulations like the Online Safety Act in the United Kingdom and the Digital Services Act in the European Union, which are framed around trying to get these platforms to do risk assessments for the harms on those platforms and then take steps to mitigate those risks. Now, that means they need to put in place systems and processes to reduce the harms on their platforms. So, we need to be making them take better steps to reduce the prevalence of something like deepfake pornography, and when it does come online, to remove it swiftly. And I am afraid the Taylor Swift example with X shows that neither of those mechanisms were in place: neither the means to prevent the upload or viral distribution, nor the means to get it swiftly taken down and blocked. So that is what we need to get the regulators to make the platforms do. Because frankly, the platforms are not doing it on their own at the moment.
Kate Bevan: Would you trust anybody to do this well at the moment? I mean, the platforms, the regulators.
Clare McGlynn: I am not overly optimistic at the moment, unfortunately. Certainly, with the platforms, they have been marking their own homework for many years and not very effectively, which is why we are now in this situation. Maybe if platforms had taken action many years ago when this was developing, we would not be in this space now. In terms of the regulators, I am also slightly concerned. In the UK context, the regulator Ofcom is consulting at the moment on its guidance that will underpin the Online Safety Act. And at the moment, that guidance, especially around these issues, is weak and it is very minimalist. It is rather about reinforcing the status quo and it is not necessarily taking a systems approach to these issues. Rather it is dealing with it as individual pieces of content. So, I am not overly optimistic at the moment.
Kate Bevan: So, a kind of a whack-a-mole approach, responding to each threat in a reactive way.
Clare McGlynn: Unfortunately, that is exactly the approach that Ofcom's guidance at the moment is taking around issues like image-based abuse, deepfake porn, and other actions. Not the focus on the systems. So, for example, with deepfake pornography and search engines, you can type deepfake porn into Google and it brings up the most popular deepfake porn websites, which are receiving about 14 million hits a month.
Kate Bevan: What kind of systemic steps then should regulators be taking? Because the rise of AI has really changed the landscape, hasn't it?
Clare McGlynn: It has changed the landscape, although it is not that it is a surprise. As I say, we have known about this for many years; it is just that it is now even easier to use. So, I guess that is looking backwards though. They should have done something earlier. What can we actually do now? And this is why we need to focus on the systems and the processes. They could be developing the mechanisms to reduce the prevalence and to reduce the spread. Some of that is about the algorithms they are using. We know that some of these algorithms obviously promote hate. More often, they promote the extreme and they promote the pornographic or the sexually explicit. So, we could be getting them to take steps like that. In relation to search engines, they should be downranking or delisting what I would call illegal and harmful content, which they are not doing at the moment.
Kate Bevan: I am going to play devil's advocate here for a moment because I am sure certainly the platforms will say it is really hard to do good content moderation at scale. They are using AI themselves to do some of this content moderation. Do you think that is a fair response?
Clare McGlynn: I am sure it is complicated, but I think they have the time, the resources, the money, and the expertise to do it better. And if they were prioritizing reducing harm, reducing violence against women and girls online, they would be able to produce some faster, swifter, better mechanisms to do it.
Kate Bevan: In general, as in before the Taylor Swift story, I have been rather encouraged, actually, by the way there has been a clear effort to get out in front of the ethics of AI and to do it better than we did with social media. Do you think I am over-optimistic there?
Clare McGlynn: I mean, at one level, of course, you are right. There is a lot of international discussion around the ethics of AI, discussion about multilateral agreements. And maybe we did not have that at all at the beginning of social media. But what I see, I mean, my areas are particularly around online violence against women and girls. If I think about the trajectory of that, the prevalence of that, I am not convinced yet that these ethics that we are talking about around AI are focusing enough on actually taking the action that is necessary to reduce those types of harms. So yeah, we might be discussing it, which is a start. Are we actually going to see results? Is it actually going to make a difference? I don't know.
Kate Bevan: How do we improve that? I mean, certainly new laws take time to be formed, the policy, the scoping, getting through the parliamentary bodies. What could we do in the meantime?
Clare McGlynn: Well, platforms themselves could be taking action. I guess in the meantime, they shouldn't have to wait for regulators or national or international campaigns to actually act. So, they could do things like that. So even just for example, I mean, obviously X and Twitter's trust and safety work and content moderation is not what it once was, shall we say... I have given another example of what Google could be doing to reduce the facilitation and encouragement of deepfake abuse. These are the simple steps in some ways that they could be taking right now. I think, though, the bigger issue, to be honest, if I am talking about my field around online abuse of women and girls, is a bigger, broader problem.
Clare McGlynn: Women and girls face inequality in many areas of their life. So, in many ways, it should not be a surprise that it is not prioritized in the AI field, because it is not prioritized in many, many areas and walks of life. So, in that sense, AI is no different from other fields of law, politics, et cetera. And it does just mean, I guess, at a bigger level, it is a general fight against inequality and discrimination, and AI is part of that. But AI is also making it worse, because of course we are talking about deepfake porn, but we know about the discrimination inherent in an awful lot of AIs, which impacts minority groups and women as well.
Kate Bevan: Are you working on any new laws at the moment?
Clare McGlynn: Yes, of course. As a professor of law, I might say that anyway. Law, to me, is the foundation on which our societal norms, what is acceptable and what is not, are based. So, for example, around deepfake pornography and AI, there need to be stronger criminal sanctions to make sure that it is an offense to distribute intimate images without consent. That is the case in some countries, but not in all. It is really important that we have civil rights for victims: civil rights of redress, to get material taken down against platforms and individuals, to get material deleted and to get redress. And then I think it is the regulatory laws that we need to put in place to monitor the work of the platforms. So, it is across each of those levels. And in most countries, there is some element of one or two of those parts of the overall package, but very few have the whole package in essence.
Kate Bevan: And what about existing laws, Clare? I mean, one of the things that we often see more generally when we are talking about needing more laws, more regulation, is the response: but we have already got laws. Do we just need to enforce those better? Do we need to rethink those?
Clare McGlynn: Well, in the case of emerging technology, including the likes of AI and deepfakes, no, we do not have the laws, basically. Because even if you think about deepfake pornography, there are laws in many countries, including many states of the US, that have criminalized the distribution of intimate images. But many of those laws did not cover AI-generated images. So, the existing laws just were inadequate. It is the case that technically in most countries you would be able to find a law, say on harassment or on malicious communications, which might cover some of these instances. That is true. But A, nobody knows about those laws. B, the police do not know about those laws, and they are often therefore very difficult to enforce. Which is why it is much better to have some specific regulations that directly cover these new and emerging forms of abuse.
Kate Bevan: Isn't that a challenge in how we draft them as well: to think not only about the forms of abuse and the technologies that we know exist, but also to frame them in a way that actually has broader scope going forward?
Clare McGlynn: It is a real challenge for law, but it is not insurmountable. And the thing I would say looking forward is that we know that technology will be developed and will be used to abuse minority groups, LGBT groups, women, and girls. It just will be. So, we need to craft those laws now knowing that there will be new technology and it will be used in that sort of way. So, it is possible to do. Different countries have different approaches to legislation. There are some countries that have broader provisions. A country like Sweden, for example, has laws against sexual intrusion, which have a broader impact and can be developed flexibly. In England and Wales, for example, it is a very pedantic approach to drafting, which is why you then come up against problems: the law did not cover upskirting, the law did not cover downblousing, and the law did not cover deepfake porn. Because each time, there is just a small incremental change. So, it varies across countries, but it is not beyond our competence to draft laws that are forward-thinking and will cover the harms in the future if we just think about it.
Kate Bevan: And have we got the right stakeholders involved in this process?
Clare McGlynn: Sometimes and sometimes not. I think that depends. If you are talking about platform regulation, often in countries there is some stakeholder engagement. But if I think about the violence against women and girls’ sector, it is faced with multimillion-pound lobbying from large tech firms. So that is the imbalance you have. Most legislatures nowadays will try and consult and engage with those stakeholders, but it is an issue of what capacity they actually have. I mean, if we think of Ofcom, the regulator in the UK, consulting at the moment on guidance under the Online Safety Act: it runs to over 1,000 pages, and a lot of it is fiendishly technical. So how are you supposed to engage with that? Unless you are employing large armies of lawyers and what have you to mine through the detail, it is very difficult. So, there is an imbalance there.
Kate Bevan: Yeah. So, I suppose also it is often the case that the voices that shout loudest get heard. I mean, I am thinking about the Online Safety Bill when that was going through: the morass of stakeholders there and the weight given to some stakeholders versus others. And people who have the ability to lobby, the contacts, it really militates against the people who possibly suffer the most harm.
Clare McGlynn: I think that is right. And it was a real challenge in the European Union as well. We know about the millions that were spent on lobbying around the Digital Services Act. And the problem I have with it, as is often the case with legislative change, is that you are reliant on the voices of some victims to really put their heads above the parapet and be really determined to speak out. And it is such a burden that we place on those victims and those whistleblowers, but that is often the only way in which we can get any change. And we have seen that in the US context with hearings before the Senate. We have seen it in the UK context with some really impressive campaigners and parents of victims, for example, speaking out and making a real change. But it is an awful burden to place on people.
Kate Bevan: It is. I am going to finish up by asking the question I always ask. And I think given how pessimistic you are, I am actually quite interested to hear your response to that. Is AI going to kill us all?
Clare McGlynn: AI is not going to kill us all because we will all have suffered such abuse and violence before we ever get to that in the end. The real threat from AI is not that it is going to kill us. The real threat from AI is being felt every moment right now by women and girls across the country, as well as many others, but particularly them.
Kate Bevan: Professor Clare McGlynn, thank you very much for talking to us today.
Clare McGlynn: No, thank you.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/iki. Yulia De Bari and Christine Calhoun produced the podcast. Dode Bigley is our audio engineer. I am Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Clare McGlynn
Clare McGlynn KC (Hon) is a Professor of Law at Durham University, UK, and a leading authority on regulating image-based sexual abuse, deepfake abuse and pornography. Her research and policy advice have been instrumental in the adoption of new criminal laws tackling online abuse, including cyberflashing and deepfake porn. She worked with a coalition of civil society organizations to improve protections for women and girls in the UK’s Online Safety Act 2023, as well as giving evidence to a number of Parliamentary committees and inquiries regarding online abuse. She has worked with Meta, TikTok, Google and Bumble to shape better responses to online abuse. Her recent books include Cyberflashing: Recognising Harms, Reforming Laws (2021) and Image-Based Sexual Abuse: A Study on the Causes and Consequences of Non-Consensual Imagery (2021).
Connect with Clare McGlynn
- On LinkedIn

Mentioned in the podcast
- “About the Infosys Knowledge Institute” Infosys Knowledge Institute
- Online Safety Act
- Digital Services Act
- “Generative AI Radar” Infosys Knowledge Institute
- Cyberflashing: Recognising Harms, Reforming Laws (2021)
- Image-Based Sexual Abuse: A Study on the Causes and Consequences of Non-Consensual Imagery (2021)