Infosys Selected as Inaugural Member of the AI Safety Institute Consortium
Infosys has been selected as an inaugural member of the AI Safety Institute Consortium, created by the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce. Together with the other members, Infosys will help develop guidelines and standards for AI security and responsible AI adoption.
The consortium brings together more than 200 organizations to develop science-based, empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety worldwide. It aims to create a collaborative body that enables the development and responsible use of safe and trustworthy AI, particularly for advanced AI systems such as the most capable foundation models.
Infosys was invited to the consortium on the strength of its pioneering work in AI security and in operationalizing AI governance under the overarching umbrella of the Infosys Topaz Responsible AI Suite - a set of 10+ offerings built around the Scan, Shield, and Steer framework. The framework monitors and protects AI models and systems from risks and threats, while enabling businesses to apply AI responsibly. The offerings across the framework combine accelerators and solutions designed to drive responsible AI adoption in enterprises. As part of the consortium, Infosys will bring its deep expertise, capabilities, and points of view to support the safe development and deployment of generative AI.
Infosys has also signed a Cooperative Research and Development Agreement (CRADA) with NIST. Under the CRADA, Infosys will take part in five working groups dedicated to shaping the standards and regulations for developing and deploying AI in safe, secure, and trustworthy ways, aligned with the standards-making bodies in the US.