
Prof. Aditya Vashistha of Cornell University focuses his research on computing systems that achieve social good for people in low-resource environments. His work lies at the intersection of human-computer interaction, information and communication technologies and development (ICTD), and accessibility.
Prof. Aditya shared deep insights from his research on generative AI for social good at the Infosys Topaz Responsible AI Summit 2025 in Bangalore. Recent advances in artificial intelligence (AI), such as generative AI, large language models (LLMs), and tools like ChatGPT or Copilot, have democratized the technology in the corporate world. What was earlier accessible only to select roles, such as data scientists, is now available across the organization, making guardrails essential. But this democratization has not reached all sections of society.
AI is rapidly being infused into domains like healthcare, education, and justice, and it touches anyone who uses a digital device. This pace makes it critical to focus on how these technologies are designed, evaluated, implemented, and governed. Yet AI is not fostering inclusive, equitable growth for all sections of our societies, and closing that gap needs a lot of work.
AI for people with disabilities: Many interventions with AI-infused applications have had a positive impact on people with disabilities. But generative AI and LLMs carry strong biases against them. These biases are present across the entire AI pipeline, from data curation to model design, training, fine-tuning, reinforcement learning from human feedback, and the application itself. When AI is deployed for historically marginalized sections of society, one word of caution is to ensure that they are well represented and that the application is unbiased, fair, and inclusive.
“When AI is deployed for marginalized sections of society, one word of caution is to ensure that they are well represented.”
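One common way to surface such biases is a perturbation test: present a model with inputs that differ only in the mention of a disability and compare its outputs. The sketch below is a minimal, hypothetical example using an off-the-shelf sentiment classifier from the Hugging Face transformers library; the templates and sentence pairs are invented for illustration and do not reproduce any specific study from the talk.

```python
# Minimal sketch of a perturbation test for disability bias in a text
# classifier. Sentence pairs differ only in the mention of a disability;
# large score gaps suggest the model treats the mention itself as signal.
# The model choice and templates are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

TEMPLATE = "My neighbor is {} and loves hiking on weekends."
PAIRS = [
    ("a person", "a blind person"),
    ("a person", "a deaf person"),
    ("a person", "a person with cerebral palsy"),
]

for neutral, perturbed in PAIRS:
    base = classifier(TEMPLATE.format(neutral))[0]
    pert = classifier(TEMPLATE.format(perturbed))[0]
    # Signed gap: positive means the disability mention lowered the score.
    gap = base["score"] - pert["score"] if base["label"] == pert["label"] else float("nan")
    print(f"{perturbed:35s} label: {pert['label']:8s} score gap vs neutral: {gap:+.3f}")
```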
Building equitable AI: Prof. Aditya’s focus is on how we can build equitable AI technologies. The three focus areas in his lab are 1) misinformation and propaganda, including fake news, 2) bias against minorities and people with disabilities, and 3) the design of culturally sensitive, responsible, and ethical AI in high-stakes domains like healthcare and education. AI plays a prominent role in the creation and spread of misinformation and fake content such as deepfakes. On the positive side, some of Prof. Aditya’s colleagues have worked on how LLMs can be used to facilitate constructive dialogue even on controversial topics.
Safe and responsible AI: Beyond being ethical and fair, the key tenets of responsible AI are that it be representative of all sections of society and inclusive. The governance model should ensure that AI benefits its users and does not exploit them. AI is a black box, but it is not monolithic. Reasoning and explainability are important for understanding how a system arrived at a particular decision or prediction and what steps it took. Safe AI is not just one that is unbiased and fair but one that does not result in quality-of-service harms, where the system does not work equally well for all sections of its users.
“Safe AI is not just one that is unbiased and fair but one that does not result in quality-of-service harms”
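Quality-of-service harms are easiest to see with disaggregated evaluation: compute a metric not just overall, but per user group. The sketch below is a minimal, self-contained example with invented groups and numbers; an acceptable-looking overall accuracy hides a large gap between groups.

```python
# Minimal sketch of disaggregated evaluation: overall accuracy looks fine,
# but a per-group breakdown exposes a quality-of-service harm. The groups
# and counts are invented for illustration.
from collections import defaultdict

# (group, prediction_correct) records from a hypothetical evaluation set
records = [("group_a", True)] * 95 + [("group_a", False)] * 5 \
        + [("group_b", True)] * 60 + [("group_b", False)] * 40

hits, totals = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    hits[group] += correct

overall = sum(hits.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.2%}")                 # 77.50% -- looks acceptable
for group in sorted(totals):
    print(f"{group}: {hits[group] / totals[group]:.2%}")  # 95% vs 60% -- the harm
```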
Culturally aware AI: AI systems like GPT are Western-centric today, with Western cultural values embedded in them. But the West accounts for only about 15% of the world’s population. The cultural values, knowledge, practices, beliefs, and rituals of the remaining 85% are not well encoded in these models, even though they too are users of AI. A culturally appropriate AI technology should understand local cultures, work for local languages, and be representative of the people who are left out of the design and implementation of these systems today.
There are many open frontiers and challenges to be addressed in these exciting times for AI adoption. We should not limit our thinking to overarching themes like governance; we also need a bottom-up approach, from data collection to its encoding, with cultural and local representation. Even the underlying tokenization algorithms work well for only some languages, as the sketch below illustrates. AI has a lot of potential to be transformative, but that does not mean it should be applied everywhere. The right use cases need to be identified, and when these technologies are applied to them, we should ensure a safe, equitable implementation of AI.
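To make the tokenization point concrete, here is a minimal sketch assuming the tiktoken library and its cl100k_base encoding (used by several OpenAI models); the sample sentences are illustrative. The same greeting can cost several times more tokens outside English, which shrinks effective context windows and raises per-token costs for those languages.

```python
# Minimal sketch of tokenizer disparity across languages.
# Requires `pip install tiktoken`; the encoding name is an assumption.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "Good morning, how are you today?",
    "Hindi":   "सुप्रभात, आज आप कैसे हैं?",
    "Swahili": "Habari ya asubuhi, u hali gani leo?",
}

for language, text in samples.items():
    tokens = enc.encode(text)
    # Devanagari text often costs several tokens per character,
    # while English averages well under one token per word pair.
    print(f"{language:8s} {len(text):3d} chars -> {len(tokens):3d} tokens")
```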