Knowledge Institute Podcasts
Community-Driven Data and AI with Connected by Data founder Dr. Jeni Tennison
October 25, 2024
Insights
- Community involvement is essential in shaping AI and data systems, ensuring that those affected by the technology have a voice in its development and governance.
- Transparency and accountability in AI are crucial, requiring not just open data, but also systems where diverse groups can meaningfully participate in shaping AI's societal impacts.
Kate Bevan: Hello and welcome to this episode of AI Interrogator, Infosys' podcast on enterprise AI. I'm Kate Bevan of the Infosys Knowledge Institute, and my guest today is Dr. Jeni Tennison. Jeni is the founder of Connected by Data, and she's a very distinguished leader in making sure that communities are part of the discussions around AI and data. Jeni has been CEO of the Open Data Institute and has worked with a number of organizations in this space, including Cambridge's Bennett Institute for Public Policy, and she also has an OBE. So, we are very, very pleased to have such a distinguished guest joining us. Jeni, thank you so much.
Dr. Jeni Tennison: Thank you for having me, Kate.
Kate Bevan: Jeni, what brought you to this point to becoming such an expert?
Dr. Jeni Tennison: When I was doing my PhD, I actually started working in psychology; AI was in the psychology department at Nottingham University, where I was studying. So, I did my PhD on AI. Then I got into information exchange on the web and how we better structure and exchange knowledge, became a developer and trainer, and then worked on legislation.gov.uk, which introduced me to the public sector world, its use of data, and the importance of open data. And that's what led me to working at the Open Data Institute, being the CEO there. That really opened my eyes to things that couldn't be open, right, data that needed to be protected in different kinds of ways, and to the kinds of interests that different communities have around data and AI. And that's what made me, after I left the ODI, set up this campaign, Connected by Data, which really focuses on how we bring the voices of the people who are affected into the conversation to help shape data and AI systems.
Kate Bevan: Are we getting community participation right with AI, do you think?
Dr. Jeni Tennison: We're definitely not getting it right in practice, right? I think that there are these two strands. You can come from the strand of ethics, responsibility, wanting to have AI that works for the public good, those kinds of things, and if you do that you get into data and AI justice and start thinking about power and the role of power in the way in which our data and AI ecosystems work. And that leads you to needing to have more democratic control over data and AI, which leads you to, okay, so we need the communities who are actually affected involved in it. So that's the ethics strand of thinking about it. But the other strand, which is really practical, is the user-centered design strand, right? It's like, how do we make products and services that actually work for people? Well, talking to them is a really good way of finding out what they need.
So, there are these two strands that kind of come together, but that doesn't mean that we are doing it right. I think that the kind of focus on UX can just be about the user experience, about just changing the interface on things and not actually giving people proper power over shaping the tools that they actually need. And also, I'd say that a lot of the public participation and engagement work tends towards just understanding what people think so that you can communicate to them better rather than actually giving them power over the data and AI systems that they need. There are some limitations in the way in which we're engaging those people, even though the theory is there. The practice I think is still really developing.
Kate Bevan: It seems to me that a lot of that is very top down. We are not reaching broadly and deeply. How hard is it to reach the people that we actually need to bring into the conversation?
Dr. Jeni Tennison: There are different ways of thinking about those sets of people, right? So, there's one way of thinking about public participation and deliberation which is really focused on research, and on representation and legitimacy around those people. And there you will be looking at citizens' juries, where you want to have a representative sample that brings different kinds of voices into that. And citizens' juries and assemblies have definitely been used around data and AI for shaping principles, if you look at the work that Camden Local Authority has done.
Kate Bevan: I should say, for our listeners around the world, that Camden is an inner London borough with a real mix of demographics. It's got some enormous wealth and it's got some tremendous deprivation as well.
Dr. Jeni Tennison: So, Camden has been actually going out into its local community and having the deep deliberative conversation with them about what the principles are that they need to put in place around how they use data and AI within their local authority. So, they've got a whole bunch of principles and then they're going back to those communities and saying, well, this is what we've actually done. Is it meeting up against these principles? What does that look like? So, you can see it's an ongoing conversation, not a one-off exercise. And I think that that's really important.
Kate Bevan: To me, that sounds like a really important way to connect with people, and it's a great model.
Dr. Jeni Tennison: The other thing I would just throw in is that citizens' juries are often outside of practicability for many organizations and at different stages. So, I don't think we should be setting those up as the only way of involving the public. I also think we need to really look at communicating with civil society groups, and at the real role that civil society and community groups can play in being a representative for, and an easy touch point for access to, those larger communities. And where you've got data and AI systems affecting workers, that includes talking to unions and negotiating with unions, those kinds of things. So, there are existing channels for having conversations with people about the things that affect them, and we should be using those rather than inventing new ones in most cases.
Kate Bevan: Isn't there also a tension between the need for open data to build good quality models and the fact that a lot of the data held by the state, particularly in social care, in health, in civil society, is really personal data? How do we manage that trade-off?
Dr. Jeni Tennison: So, there are various pushes towards a more public AI, which requires training on public data sets. But as you say, not all data sets can be open or should be open, and we do need to have protections around that. I think one of the crucial things we need to distinguish between is openness, which means that stuff is out there and anybody can use it, and transparency, which means that the people who need to see the detail are able to see the detail. So, I think we need a lot more transparency around AI, for academics and for auditors, so they can dig into the details of what data is being held and therefore what that means for the AI systems that get built on top of it. That doesn't necessarily mean openness, with data openly available to everyone.
Kate Bevan: I suppose you could also use that approach to reassure people who might not have had much to do with AI, except as a service user, or maybe they've seen ChatGPT or some of the weird search results that Google throws up. How do you reassure them about AI being used in ways that might be good for them, or might not be?
Dr. Jeni Tennison: So, I think it's one of the interesting things that came out of the work we did last year around the AI Safety Summit in the UK. We ran a people's panel on AI: 11 people from all kinds of walks of life coming in, actually listening to experts, having conversations with them, and talking about their hopes and fears around it. And we had a mix of attitudes coming in. Some people were very familiar, some people not, and lots of people had concerns and fears around it. But actually, as we dug into the details with them, they became more confident, even while retaining their critical thinking around AI. And it was really amazing to see. You ask people for their recommendations about how this should be used, where they would like to see it, and it empowers them.
It means that they feel in control and that actually means that they're open to a lot more uses of AI in ways that are controlled in the ways that they like, right? But they're more open to it than if they remain in this place of ignorance and fear where the big boogeyman seems really scary and they don't want to touch it. So, I really feel like empowering people over how data and AI impacts their lives is actually a driver of sensible, useful adoption of AI as well. Which is one of the reasons why I don't think we should be seeing that engagement as being something that is holding back innovation or economic growth or adoption or anything like that. I think the empowerment actually makes it happen.
Kate Bevan: It makes them stakeholders, doesn't it, rather than data subjects or passive recipients of AI.
Dr. Jeni Tennison: Exactly. You know what it's like when somebody tells you to do something, you automatically go, no, thank you. But when you're given control over it, you are a lot more bought in.
Kate Bevan: Yeah, it feels like it makes you part of the conversation. So, if you had one piece of policy or one piece of legislation you could enact, what would it be?
Dr. Jeni Tennison: If I could just target one, it would actually be around the impact assessment process because we have that around data systems. We have that around AI systems. Really, I think just little tweaks actually. We would have impact assessment happening much earlier and throughout the process, not being a kind of one-off exercise after most things have been decided. We would make that impact assessment process actually consider the full set of impacts. So not just on the organizations and on individuals and privacy, which tends to be the kind of focus. But also on communities, on society, on equality, on our democracy, on the environment, right? And both positive and negative kind of assessments of those. Not just thinking that any impacts are going to be harmful, but also looking at the positive ones as well. So that wider scope.
Dr. Jeni Tennison: Then crucially, you won't be surprised, involving the people who are affected in the assessment process, bringing in those diverse voices from people affected and civil society, but also academics and theorists and all of those kinds of things. And then making the organizations actually publish those impact assessments, and publish about the process that they've gone through. That's what drives accountability. And I reckon doing all of that means we'd have a better understanding of the risks and the opportunities, we'd actually find opportunities to help get these systems working well for people, and we'd have better accountability throughout the process. If we could target impact assessment, I think we'd do quite a lot.
Kate Bevan: That sounds like a brilliant idea. I'm going to ask you my final question now. That's what I always ask everybody. Do you think AI is going to kill us?
Dr. Jeni Tennison: I think you're very brave to have this as a final question just because it brings down the whole kind of turtleneck.
Kate Bevan: I think it's a good way of getting it clear, getting beyond the hype, and getting at what people fear and don't fear.
Dr. Jeni Tennison: And it's partly because, given the current state of energy production, both the utopian view of AI saving us and the existential "AI is going to kill us" kind of view really distract the big high-level policymakers and politicians, who need to be concentrating on getting us out of the climate crisis, from having those conversations. So, this is one of the reasons why I think we all need to challenge the obsession with AI and put it in its place, so that it's serving us and serving our lives, rather than just going along with the current flow.
Kate Bevan: That's a great place to leave it, and actually quite an optimistic place to leave it as well. Dr. Jeni Tennison, thank you so much for joining us today.
Dr. Jeni Tennison: Not at all, Kate. Thanks for having me.
Kate Bevan: The AI Interrogator is an Infosys Knowledge Institute production in collaboration with Infosys Topaz. Be sure to follow us wherever you get your podcasts and visit us on infosys.com/iki. The podcast was produced by Yulia De Bari and Christine Calhoun. Dode Bigley is our audio engineer. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Dr. Jeni Tennison
Jeni is the founder of Connected by Data, a campaign that aims to put community at the centre of data narratives, practices and policies. She is a Senior Fellow at the Centre for International Governance Innovation, adjunct Professor at Southampton’s Web Science Institute, and a Shuttleworth Foundation Fellow.
Jeni was CEO at the Open Data Institute, where she held leadership roles for nine years and worked with companies and governments to build an open, trustworthy data ecosystem; a co-chair of GPAI’s Data Governance Working Group; and an Associate Researcher at Cambridge’s Bennett Institute for Public Policy. She sits on the Boards of Creative Commons and the Information Law and Policy Centre.
She has a PhD in AI and an OBE for services to technology and open data. She loves Lego and board games and is the proud co-creator of the open data board game, Datopolis.
About Kate Bevan
Kate is a senior editor with the Infosys Knowledge Institute and the host of the AI Interrogator podcast. This is a series of interviews with AI practitioners across industry, academia and journalism that seeks to have enlightening, engaging and provocative conversations about how we use AI in enterprise and across our lives. Kate also works with her IKI colleagues to elevate our storytelling and understand and communicate the big themes of business technology.
Kate is an experienced and respected senior technology journalist based in London. Over the course of her career, she has worked for leading UK publications including the Financial Times, the Guardian, the Daily Telegraph, and the Sunday Telegraph, among others. She is also a well-known commentator who appears regularly on UK and international radio and TV programmes to discuss and explain technology news and trends.
Mentioned in the podcast
- “About the Infosys Knowledge Institute”
- “Generative AI Radar” Infosys Knowledge Institute
- Connected by Data
- Open Data Institute