
Safe AI vs. Fair AI with Cornell University's Professor Aditya Vashistha
What does it mean for AI to be truly "safe" and "fair"? In this conversation, host Ramachandran S. from the Infosys Knowledge Institute talks to Cornell University’s Professor Aditya Vashistha to unpack the critical differences between safe AI and fair AI. They explore the implications for marginalized communities, high-stakes domains, and global AI governance. This conversation delves into bias in AI, equitable technology design, and the challenges of ensuring AI benefits everyone – not just a privileged few. Tune in to discover how we can build AI systems that are both responsible and transformative.
Ramachandran S:
Welcome, everyone! We are very happy to have with us Professor Aditya Vashistha from the Department of Information Science at Cornell University. His research focuses on the design of computing systems for social good in low-resource environments, at the intersection of human-computer interaction, information and communication technologies for development, and accessibility. I am Ramachandran, the Lead for Engineering and Manufacturing at the Infosys Knowledge Institute, the thought leadership unit of Infosys. Welcome, Professor Aditya. Thank you so much for taking the time.
Aditya Vashistha:
Thank you, Ram. It's a pleasure to be here.
Ramachandran S:
Traditionally, artificial intelligence has been the domain of a select few in corporations, say, data scientists. But recent developments in generative AI and large language models, tools like ChatGPT or Copilot, have made artificial intelligence accessible, democratizing AI in the corporate world, so to speak. But is it really happening in society? Are these technologies reaching all strata of our society? Can you talk about that?
Aditya Vashistha:
Excellent observation, Ram. I definitely see this happening not just in the corporate world but in society at large, even in high-stakes applications and domains. AI is being rapidly infused into healthcare, education, and the criminal justice system, which impacts everyone in the world. And given this rapid infusion, it's critical to think about the design, evaluation, implementation, and governance of these technologies very carefully.
I would say that anyone who is using any kind of digital device right now is impacted by AI. In many countries, public distribution systems and government services are also touched by AI. So it's not a niche product anymore. It's something that is impacting billions of people all over the globe.
Ramachandran S:
So is it fostering equitable, inclusive growth across all sections of society?
Aditya Vashistha:
That's a big question, and I would say it's not. It's touching people all across the world, but is it equitable? Is it inclusive? The answer, as we speak today, is no. There is a lot of work we need to do to ensure that AI is not only democratized but also accessible to everyone, inclusive, and equitable.
Ramachandran S:
How does AI support people with disabilities? Is that scalable? How can this technology be scaled affordably, specifically in developing countries?
Aditya Vashistha:
Yes, there are many historically marginalized populations which are impacted by AI, and there are many interventions, AI-infused applications, designed to support them better. I know of several interventions, including for people with disabilities, that provide access to health and well-being information, sexual and reproductive health information. One of these is actually happening in India. So there are lots of good examples of interventions impacting people with disabilities. But if we talk about large language models and generative AI technologies, they have very strong biases against people with disabilities. And these biases fester across the entire pipeline of AI models: how the data is curated, how the data and the models are designed, how they are fine-tuned, how RLHF is done on these models, and even the applications that are designed on top of them.
So there are lots of examples of success, but this is certainly an area of caution: when you are using these technologies, especially for historically marginalized populations, it's important to ensure that they are equitable and that the representation is there. And that's not the case for people with disabilities. That's not the case for other historically marginalized populations, like women and gender minorities, or caste and religious minorities. And that's not the case even for users in the Global South, who are not a minority of any kind. We are talking about 85% of the world's population, and they are not as well represented in these AI systems, models, and technologies as they should be.
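To make concrete the kind of bias Professor Vashistha describes in the model pipeline, here is a minimal, hypothetical probe in the spirit of published perturbation-sensitivity studies of disability bias in language models. It assumes the open-source Hugging Face transformers library and an off-the-shelf sentiment model; the template sentences are illustrative and are not drawn from his research.

```python
# A toy bias probe: swap a neutral phrase for a disability-related one and
# compare the sentiment scores an off-the-shelf model assigns. The model and
# sentences are illustrative assumptions, not from the research discussed.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

template = "I am a person {}and I enjoy my work."
fillers = ["", "who is blind ", "who uses a wheelchair ", "with a mental illness "]

for filler in fillers:
    sentence = template.format(filler)
    result = classifier(sentence)[0]
    # A fair model should score these near-identical sentences similarly;
    # systematic drops when a disability is mentioned signal learned bias.
    print(f"{sentence!r:62s} -> {result['label']} ({result['score']:.3f})")
```

Systematic score gaps on otherwise identical sentences are one simple signal of the representation problems described above.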
Ramachandran S:
Very true, very true. That brings me to my next question, Professor Aditya. Bias in AI decision making remains a critical issue. Are fake news and bias part of your research? How can we mitigate this challenge in AI adoption?
Aditya Vashistha:
Yeah, excellent question. My research looks at how to design equitable AI technologies for historically marginalized populations, and there are three focus areas where my lab is putting a lot of energy. One looks at misinformation, propaganda, and fake news in low-income environments. The second looks at biases against people with disabilities and against caste, gender, and racial minorities. And the third looks at questions around AI and culture, and what it takes to design AI ethically and responsibly in high-stakes domains like healthcare and education.
On this question of bias and fake content available online, AI is playing a prominent role in the spread of this content. We already know about deepfakes and how they have impacted the general public: when people see content on WhatsApp and Facebook, it's very hard to know whether it is true or not. At the same time, on the positive side, a couple of my colleagues have worked on an intervention exploring how conversations with large language models can be used to challenge beliefs in conspiracy theories. In fact, we have very recent research which looks at how LLMs can be used to encourage constructive dialogue on heart-wrenching topics like Islamophobia, homophobia, affirmative action, and so on.
So there are both pluses and minuses to using AI technologies. There are lots of examples of success, and there are even more examples of failure, and it comes down to the right orientation in using these technologies, especially when you're working with marginalized populations or in high-stakes domains.
Ramachandran S:
Very interesting, Professor. That sounds like a very comprehensive research plan. You mentioned responsible AI. Could you talk about what responsible AI means to you? Could you outline the core principles you advocate, and how can we ensure that they are practiced across the world?
Aditya Vashistha:
Excellent question. When we think of responsible AI, there are a lot of things to consider. One is representation, which has already come up in this conversation. Another is inclusion: whose voices are included in the design and implementation of an AI model, or of the applications built on that model. There are questions around governance: how do we ensure that the AI technologies we are creating are actually benefiting people and not exploiting people, communities, and the infrastructures we have? What kind of governance models do we need? This is very important and comes under the ambit of responsible AI.
There are several other aspects as well. AI tends to be a black box in many cases. And I should say that when we talk about AI, we often talk about it as a monolithic technology, but AI is very complex and comes in different kinds. There are predictive AI algorithms and models, and there are generative AI algorithms and models, and the thinking needs to be different in the two cases, at least a little bit different for sure. There are also questions around AI reasoning and AI explanations. How do we know how the AI arrived at a decision or a prediction? How do we know what steps the AI has taken in its thinking, in the case of generative AI applications? This, too, falls under the ambit of responsible AI.
The last thing I would like to say is that when we think of safe AI, we are often thinking: can we make AI unbiased, can we make AI fair, and is that equal to safe AI? I would strongly argue that safe AI is not just AI which is unbiased and fair. Safe AI is also AI which does not result in quality-of-service harms. So it's not just about what AI should not do; it's also about what AI should do. It needs to support not just a really small subset of the world's population; it needs to support everyone, and it needs to be equitable for all.
Ramachandran S:
Thank you, Professor. Safe AI is again an interesting topic we can touch upon. Your research also includes culturally aware artificial intelligence. How can we create AI that is ethical and respects cultural diversity, culturally rich AI?
Aditya Vashistha:
Excellent question. I would say this is one of the most pressing problems, and it comes in the realm of representation, which you were touching upon earlier. If you look at the AI models today, they have very strong biases in favor of Western societies. The monolithic models, the GPTs and Geminis and Llamas and Claudes and so on, are Eurocentric and Western-centric, which means those are the cultural values imbibed into these models. And that's not the entire world's population; that's only 15% of the world's population. For the other 85%, their cultural values, knowledge, practices, beliefs, and rituals are not well encoded into the AI models, and there are lots of reasons why that is happening.
So when I say a culturally appropriate AI technology, I really mean a technology which understands local cultures, works for local languages, and is representative of people who are, as of now, outside the design, development, and implementation of AI technologies but are still users of these technologies. The quality they get when they interact with AI is not the same. This was true several years back, when Alexas and Google Homes came into play: they worked really well for a very specific population in the West, urban white men, but not so much for a person like me, who has an Indian accent. And that sometimes stays true even today. So the quality of service is super important. When we create culturally appropriate artificial intelligence models, the values, the knowledge, the dialects, and the languages are incorporated into these models.
Ramachandran S:
Thank you, Professor. So going back to where we started, you mentioned several challenges. What are the key challenges you see to truly democratizing AI for social good, and how can these challenges be addressed?
Aditya Vashistha:
Yeah, exactly. I think there are so many open frontiers; it's a really exciting time to be working on this because there are lots of challenges. When we think of AI reasoning, cultural representation, and inclusion, there are really big challenges: how are we thinking of representation, how do we collect data from different cultures, how do we ensure that languages are represented in the models? Even the underlying algorithms we use for tokenization sometimes work really well for certain languages, Western languages, but not so much for languages which have agglutinative properties. So we really need to think not just about the overarching governance models of AI, but about everything from the bottom up: how we collect data, how we encode information into these models, and even what the right applications would be. AI has a lot of potential to be transformative, but that doesn't mean we need to apply AI everywhere. Finding the right use cases, and ensuring that when we apply these technologies to those use cases the use of AI is equitable and safe, all of this is an open challenge and an area where we all need to come together and work.
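As an aside, the tokenization disparity Professor Vashistha mentions is easy to observe directly. The sketch below is a minimal illustration, assuming the open-source tiktoken tokenizer library; the sample sentences are illustrative assumptions, not benchmark data.

```python
# Count how many subword tokens the same short question costs in English
# versus in two Indian languages. More tokens for the same meaning means
# higher cost, a shorter effective context window, and often worse quality.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

samples = {
    "English": "How are you doing today?",
    "Hindi": "आज आप कैसे हैं?",
    "Tamil": "இன்று நீங்கள் எப்படி இருக்கிறீர்கள்?",
}

for language, text in samples.items():
    tokens = enc.encode(text)
    print(f"{language:8s} {len(tokens):3d} tokens for: {text}")
```

Languages whose scripts and morphology are underrepresented in tokenizer training data typically come out needing several times more tokens per sentence, which is one concrete, bottom-up source of the inequity discussed above.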
Ramachandran S:
Thank you, Professor. That brings us to the end of the conversation. You have really broadened our perspective on AI; thank you so much for sharing your valuable insights. And thank you all for watching. I hope you found it interesting. For more such insightful conversations, please follow the video section of the Infosys Knowledge Institute at infosys.com/iki. Until next time, keep learning, keep sharing. Thank you so much, Professor.
Aditya Vashistha:
Thank you so much, Ram.