As AI systems are increasingly adopted for critical decision-making, the outcomes they render take on real consequence. There have been recent examples where those outcomes were wrong and affected important human concerns: an AI hiring algorithm was found to be biased against specific races and genders, a risk-scoring tool used in sentencing decisions produced false positives for black defendants at twice the rate it did for white defendants, and a car insurance company by default classified males under 25 as reckless drivers.
AI began attracting negative press because of these failing systems, the resulting lawsuits, and broader societal implications, to the point that regulators, official bodies, and everyday users now demand transparency in every decision an AI-based system makes. In the U.S., insurance companies must explain their rates and coverage decisions, while the EU introduced a right to explanation in the General Data Protection Regulation.
All the above scenarios call for tools and techniques to make AI systems more transparent and interpretable.
Operating an ethical AI system, and ensuring trust and reliability in its models, requires adherence to some key principles:
Many practitioners mistake explainable AI (XAI) for something applied only at the output stage; in reality, XAI has a role throughout the AI life cycle. The key stages where it plays an important part are as follows.
Thus, AI systems need consistent and continuous governance to remain understandable and resilient in varied situations, and enterprises must ensure this governance as part of their AI adoption and maturity cycle.
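As a minimal illustration of what post-hoc explainability can look like in practice, the sketch below uses scikit-learn's permutation importance, a model-agnostic technique that measures how much a model's performance drops when each input feature is shuffled. The dataset, model, and feature names here are hypothetical placeholders for illustration only, not a reference implementation of any particular XAI tool.

```python
# Illustrative sketch only: synthetic data and hypothetical feature names.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decision-making dataset (e.g., an approval model).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
feature_names = ["income", "age", "tenure", "credit_score", "region_code", "num_accounts"]

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the drop in score:
# a model-agnostic view of which inputs actually drive the model's decisions.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
):
    print(f"{name:>12s}: {mean:.3f} +/- {std:.3f}")
```

A report like this, reviewed alongside fairness and performance checks at each stage of the life cycle, is one simple way an enterprise can make a model's behaviour visible to the stakeholders who must govern it.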