AI Interrogator: The Race to Regulation with Professor Lilian Edwards
November 27, 2023
Insights
- The EU's approach to AI regulation is the most well worked out, most comprehensive legal, as opposed to ethical, approach to regulating AI. At the same time, China already has three pieces of legislation out, regulating generative AI, synthetic AI, and recommender systems. But explicitly, China is only trying to regulate systems that operate in China, whereas the European approach is much more extraterritorial. Australia is also thinking about a new AI law. So there are going to be a lot of new rules coming out about AI governance, and the question is how they are going to compete with each other.
- There are two main dangers associated with AI regulation. First, AI is constantly changing, and a carefully prepared regulation can fall apart because of some big technical change. Second, it's hard to be ahead of the game in terms of regulation, and choosing the right approach will be challenging. Some countries will go with hard mandatory regulation, which arguably is bad for innovation and progress but good for society and human values. Others will opt for a much softer, more ethical, more self-regulatory approach that may look like getting ahead on AI while not really doing very much.
Kate: Hello, and welcome to this edition of the Infosys Knowledge Institute AI podcast, The AI Interrogator. I'm Kate Bevan of the Infosys Knowledge Institute, and my guest today is Lilian Edwards, who's a professor of Law, Innovation and Society at Newcastle University. And Lilian is also very much my go-to expert on everything to do with tech regulation, so Lilian, thank you so much for doing this.
Lilian: Thank you for having me.
Kate: I'm delighted. Let's make a start by asking, is the EU's approach to AI regulation the best game in town?
Lilian: Yeah, I think it is at the moment. Some people would say it's the only game in town, but they'd be wrong. It's the most well worked out, most comprehensive legal, as opposed to ethical, approach to regulating AI, whatever AI means. And a lot of people don't know this, but China is very, very ahead of the game here as well. China has got three pieces of legislation out already, at least two of which are in force, I think all three are complete, which regulate generative AI, synthetic AI, and recommender systems. But explicitly, they're only trying to regulate systems that operate in China, whereas the European approach is much more extraterritorial, much more like the GDPR, and they really are aiming to be the gold standard model for the world. That's what they've said in all their various speeches.
There are a lot of criticisms in that it's kind of based on a product safety model, which the EU has a lot of experience with already, that says if you manufacture a dishwasher and you sell it or import it into the European market, then you must comply with certain rules to make that dishwasher safe, essentially. And if you do, you get a nice CE mark and you can circulate it through Europe. And this is kind of taking the same approach to certain AI systems, particularly high risk systems.
Kate: So what do you mean by high risk?
Lilian: High risk systems are really the main target of the AI Act, and you might be surprised to find out that they are written down in a list, and systems are designated as high risk according to their intended uses. So there's a list, it's in Annex III, and it's systems that are intended to be used, say, to assess whether you get credit, to assess if you get insurance, to manage your workplace performance. So it covers education. It covers, interestingly, a lot of high stakes public sector functions like giving out welfare benefits or dealing with your tax affairs or deciding if you get through a smart immigration checkpoint, or deciding in court perhaps if you get sent to jail or for how long and whether you get bail. So these kinds of sentencing support systems, which were very controversial a few years back because a lot of evidence showed that they were very biased, particularly against Black people.
There's been a lot of evidence in various countries that AI face recognition systems basically recognize Black people less well than White people, because they draw on a training set that's populated from the internet, and there are more pictures of wealthy White people on the internet than there are of poor Black people. And in fact, another part of the AI Act, which is not about high risk, is called unacceptable risk. So this is the top level of the AI Act. It says there are some AI systems that are just so bad, so bad for society really, bad for human rights, that they shouldn't be allowed at all. And this is one of the most politically sensitive parts of the AI Act, because the main example there is indeed face recognition systems used by the police in public areas, and the question is, should they be banned in Europe?
Kate: But then I'm thinking also about the way China uses facial recognition and AI, particularly to oppress some of its minority populations. And having EU wide regulation is fantastic. What does that mean for the rest of the world? How are we able to export our values to other countries?
Lilian: Well, I think the EU is explicitly trying to export its values to other countries, certainly within the high risk rules. So you're going to have to take steps to make sure that your training set is fair, that it's representative, that it's not full of mistakes, that it's labeled correctly, that your data hasn't been poisoned, that you have a degree of transparency and risk mitigation. There's a long list of things you're going to have to say you've done, all of which are pretty much standard best practice. But if you're outside the EU, that only matters to you if you're trying to sell or move into the European market. If it's the other way round, for example, if it's a European producer trying to sell into China, then you don't care.
So although the European jurisdiction grab is quite big, this extraterritorial grab, it certainly won't affect, I think, how the Chinese government operates, unless they are trying to sell their tech into Europe, which seems a bit unlikely, and also, we keep banning them from doing that because we're worried about security.
Kate: So if we're just thinking about the EU regulation being a pretty good starting point, a pretty good standard, what's going to screw up the idea of useful global regulation?
Lilian: I wrote down a list actually, and it gets longer and longer unfortunately, as opposed to my usual list of the benefits of the EU regime. It's very depressing. The first one we've actually already stumbled onto, really, which is global regulatory competition and arbitrage. So at one point, it looked like it was only the EU. The EU were ahead, and then we noticed that China was doing something, and now everyone is talking about it.
I'm going out to Australia soon. They are talking about a new privacy law, possibly a new AI law. Maybe an APEC law, who knows? So there's going to be a lot of new rules coming out about AI governance and how are they going to compete with each other. So it's a little bit like a reprise of maybe the millennium when we saw a lot of different countries bringing out rules about how to deal with platforms, online platforms, and it took quite a while for that to settle down. And indeed, has it settled down? Because we're still in the middle of the data transfer wars where we've been stuck for how long? At least 10 years, maybe 15 years.
We're going to have reprises of the data transfer wars, the safe harbor wars, in terms of AI arbitrage, I am absolutely sure. But another key point, which is intrinsic in the discussion about the EU regime, is how on earth do you enforce these nice-sounding rules, which again is a perennial chestnut with any kind of internet regulation, let alone global transnational cross-border regulation.
So the heartland of the EU regime, as I've just said, is the high risk regime, which looks really nice. I remember reading through it when it came out going, "Wow, this is amazing." And then you sort of get to the end - well, you don't because it's not really obvious - and you discover that it's a self-regulatory regime. Almost all the providers, virtually all, will be marking their own homework, will be saying, "Oh yeah, yeah, we've done all this. We've done all this. We've ticked it off, we've mitigated all our risks. Look, we've written it all down," et cetera, et cetera. But, by and large, nobody will be checking it until someone gets hurt, at which point we get interesting things happening actually with AI liability.
Yeah, the response to that is what could possibly go wrong if they're all marking their own homework? Well, we've only got 20, 30 years of privacy regulation to consult on that. The whole history of privacy seal regulation, yeah, it's just like self-regulation. It doesn't work.
Kate: So where do you think the pitfalls are going to be, particularly marking their own homework? Where's it going to go wrong?
Lilian: So I do think that most of the big companies, there may be some exceptions, will probably make a good faith effort to obey these rules, and there are some reasons why they might. Because the whole point of the EU act is to get it right from the beginning. As opposed to waiting for everything to go horribly wrong, you try and make a dishwasher at the start that doesn't blow up.
But if you take an AI system and you don't obey the rules, and somehow down the line, something does go horribly wrong and you discover that it's stopping 20% of Black people from getting places at Harvard or something like that, or that it's in an autonomous vehicle - there's a good example - that really does blow up or really does run over people, then again, there's quite a high chance, particularly in Europe, that that failure to mark your own homework properly would get taken into account in your liability. That's explicitly how the EU is now setting it up. And that's very clever, I think. So you've got a bit of stick and carrot.
Kate: One of the things I've always thought about, with some of the stuff we've done, particularly with things like social media moderation, is that we've let it run away from us and then we've been playing catch up with how to make it better. And certainly, my feeling from the people I've been talking to, including you but also some of the people within Infosys and other people externally, is that there's a really strong effort, not just in the EU but certainly in the US and in the UK and in Canada and other Western places, to be out in front of it this time and to do our best to mitigate the known possibilities of it blowing up.
Lilian: Yeah, I think that is true. This is where I stop being Jiminy Cricket though and look sort of sad and worried, because a number of things are happening. There is always the danger, particularly with AI where, as I always say, everything seems to change about every five minutes at the moment, that your carefully prepared regulation just falls apart because there's some big technical change. And normally, I would say nothing that dramatic actually happens very often. Existing principles cover it. But actually, we have this great worked example, which is that we spent years having ethics committees and things, working on the AI Act, and then they get so far with the progress and then along comes generative AI. Along comes ChatGPT and DALL·E and all the rest of them. And they go, "Oh my God, we didn't include this in our regime."
Because our regime, as I said before, is aimed at systems that have a particular high risk intended use, and the whole point of generative AI systems is they have no intended use. So it was like, "Oh my God, what are we going to do?" And I think there is that danger, even though we're trying quite hard to be ahead of the game this time.
I think the other danger, which is being very cynical, is that what do we mean by being ahead of the game in terms of governance? Do we mean actual hard mandatory regulation? Which is what the EU is to some extent going for, which you might argue is bad for innovation and bad for progress but good for society and good for human values. Many people claim that's a false dichotomy, but I just throw it there. Or do we go for a much softer, more ethical, more self-regulatory approach in which we still might look like we're thinking about the future and getting ahead on AI, and yet we're not really doing anything very much? And it could be argued that some countries not that far from where I'm sitting are thinking about that approach.
Kate: So I'm thinking particularly about the issue of copyright, which has been quite a big storm that's blown up recently, where creators of works from music to books to paintings to visual work are up in arms about the fact that their work has been scraped up into this vast seething cauldron of data and used to create generative AI outputs. I completely understand how they feel violated, and I'm not a lawyer obviously, but I can't see how they win any battles on this one. What's your take, Lilian?
Lilian: As to whether they'll win anything? I think in the end, the market's going to solve this one. I've been saying this quite a lot. I think other problems are actually more insuperable. What we're already seeing is a number of devices happening in the market, notably opt-outs. Now, this is clunky obviously. If you're a small artist, if you're a small sculptor, are you going to know to opt out? Are you ever going to have heard about it? But I think that is going to be a useful tool. It's a bit like what we have with Google right now, where you can put a no into your robots.txt and then Google doesn't spider you.
The problem with Gen AI that's already emerging is that there are a lot of crawlers out there, and do you know even about one crawler, let alone dozens? So in a way, we're back at the do not track problem. What we'd really love I think, artists would love, would be a generic do not track, do not copy me, do not bot me, do not ingest me. But what are the chances of getting it? Again, given that a lot of these crawlers are from organizations based in the US, which historically hasn't been keen on this kind of regulation.
They are moving. The US is definitely moving I think within the foreseeable future to some kind of more stringent privacy regime, but this is even beyond a privacy regime. This is actually impeding the business models of their, what do you call it? Unicorns, whatever.
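As a concrete illustration of the opt-out mechanism Lilian describes, a publisher can list AI crawlers in its robots.txt file and ask them not to fetch the site. This is a minimal sketch: the user-agent tokens shown are examples of crawlers that have published such tokens, and whether any given crawler honors them depends entirely on its operator.

    # robots.txt - asking known AI training crawlers not to fetch this site
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

As Lilian notes, an opt-out like this works crawler by crawler; there is no single "do not ingest me" signal that covers them all.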
Kate: Yeah. It feels like a sort of mashup of privacy and copyright and moral rights and ownership that's a whole new area, particularly thinking of the authors who've been up in arms recently saying, "Hang on, my book's in this database. I didn't give you consent to do that." But do they even need to be asked for consent? Because the outputs aren't derivative of their work. So it's a really tricky one, I think, and this is the one I'm really keeping an eye on.
Lilian: It is very tricky and very complex. I think you can separate the two out. It's interesting, the Chinese approach, actually, because I had a Chinese student doing a dissertation on it, and it was fascinating. You can separate the question of whether your stuff was copied without your permission from the question of whether you own the output. And the first one often looks like a really open and shut case, at least to European eyes, whereas the second, yeah, I think the chances of winning on it being a derivative work are pretty low, because your work was a tiny part of what went into the giant bludge, mudge, whatever you call it, and it's really hard to necessarily prove that what came out the other end was really consequentially related to your work. Even the one or two times that it looks really, really like it are very, very hard to capture. But yeah, the initial copying, why not? I think that's quite an easy one.
But the problem is in the US, you've got fair use. So in the end, I think this will come down to a fight about fair use, whether the use that was made by, say, OpenAI was transformative and therefore it should be allowed. That's going to be the really nice big litigation fight eventually.
Kate: Yeah, that's one I'm definitely keeping an eye on. I'm going to come to my final question, and this one I ask everybody. Do you think the AI is going to kill us all?
Lilian: No.
Kate: (laughs)
Lilian: Only by sending us so many adverts that we kill ourselves. I truly think that we're going to have died of climate change before AI kills us. If you look at what current computationally intensive statistics does, which is essentially what the AI we have right now is, what machine learning is, it's so far from any paradigm that would take us to real thinking, to real cognition, to real human intelligence as opposed to just this facade, which is what it is. It's a behavioral facade in which it basically swallows huge amounts of what we do and then mines it for patterns to try and do something that looks like what we do. That's my generic understanding of it. And we are just a species that likes to look for stories, we like to look for narratives. So that's our narrative: we see this happening from the outside and we go, "It looks like it's talking like a human to me. Maybe it is a human. Maybe it'll kill me."
Also, why is it always going to kill us? Why isn't it going to sit down and have a pint with us? Why is it always murder and death? Death by paperclips. I've never understood any of this really.
Kate: That's actually really reassuring. Lilian Edwards, thank you so much. It's been a pleasure.
Lilian: Cool. Thank you for having me.
Kate: This is part of the Infosys Knowledge Institute's AI podcast series, The AI Interrogator. Be sure to follow us wherever you get your podcasts, and visit us on infosys.com/IKI. The podcast was produced by Catherine Burdette, Yulia De Bari and Christine Calhoun, with Dode Bigley as our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Professor Lilian Edwards
Professor Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at the undergraduate and postgraduate level since 1996, and has been involved with law and AI since 1985.
She worked at the University of Strathclyde 1986 - 1988 and the University of Edinburgh 1989 - 2006. She was Chair of Internet Law at the University of Southampton 2006 - 2008, and then Professor of Internet Law at the University of Sheffield until late 2010, when she returned to Scotland to become Professor of E-Governance at the University of Strathclyde, while retaining close links with the renamed SCRIPT (AHRC Centre) at the University of Edinburgh. She resigned from that role in 2018 to take up a new Chair in Law, Innovation and Society at Newcastle University. She also has close links with the Oxford Internet Institute.
She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm", with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy, in which she invented the notion of data trusts, a concept which ten years later was proposed in EU legislation. She is a partner in the Horizon Digital Economy Hub at Nottingham, the lead for the Alan Turing Institute on Law and AI, and a fellow of the Institute for the Future of Work. At Newcastle, she is the theme lead for the Regulation of Data in the data NUCore. She currently holds grants from the AHRC and the Leverhulme Trust. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO.
Edwards co-chairs GikII, an annual series of international workshops on the intersections between law, technology and popular culture.
Connect with Professor Lilian Edwards
- On LinkedIn
Mentioned in the podcast
- "About the Infosys Knowledge Institute" Infosys Knowledge Institute