The Decent Dozen: 12 Principles for Responsible AI by Design

Insights

  • Firms need to comply with new AI regulations, yet they also need to innovate.
  • Responsible AI is a strategic discipline that enables AI compliance and innovation at scale.
  • Based on research and more than 200 collaborative client projects, we developed 12 responsible AI principles integral to the responsible design and operation of AI systems.
  • To support these 12 principles, we have identified strategic approaches, tactical practices, and governance mechanisms that work together to put them into practice.
  • Translating these principles into practice enables firms to maintain focus on their AI objectives and to course-correct when new regulations, data, or company policies require a different approach.

The rapidly evolving AI landscape has benefited from a surge in investment and development, while regulation has scrambled to keep up. The European Parliament recently passed the AI Act, clarifying obligations and the legislative framework for everyone. This creates tension in the market: firms need to comply, yet they also need to innovate. AI’s fast-moving nature suggests that laws will change, technology will progress, and the definition of good ethics will evolve under both economic and business imperatives.

Responsible AI as business imperative

Businesses are thinking carefully about how to use AI ethically, as they realize that embedding ethics into products and services has become an imperative. Responsible AI is the practical discipline that mitigates ethical risks during product development and upholds foundational AI ethics principles in production.

As AI becomes more democratized, responsible AI initiatives become a strategic organizational imperative rather than a short-term technical or tactical checkbox exercise.

Responsible AI provides business value beyond ensuring compliance. It builds public trust and brand reputation through adherence to principles like fairness, transparency, privacy protection, and human oversight. According to AI ethics expert Olivia Gambelin, author of the book Responsible AI: Implement an Ethical Approach in your Organization, this approach fosters higher customer adoption, satisfaction, and retention of AI-powered products and services. It also gives employees and leadership more confidence to deploy AI solutions and reassurance to embark on transformations at scale. These factors are primary considerations when procuring and using AI systems.

Guiding responsible AI

To build, maintain, and use AI systems with confidence, organizations must understand the critical design considerations that underpin responsible AI. While the specific characteristics of responsible AI depend on use case, context, model, and architecture, the fundamentals remain the same. What are these fundamentals?

At their core, responsible AI considerations flow from fundamental human values of transparency, safety, equality, accountability, privacy protection, and respect for human rights. Transparency demands interpretable and explainable AI models whose decision-making is easily understood by humans. Safety requires systems that are secure against attacks and breaches. Equality requires AI systems that are unbiased and fair to all demographic groups. Accountability necessitates robust governance, clear audit trails, responsible disclosure protocols, and measures to pinpoint issues.

While admirable as high-level considerations, these values also translate into tangible design requirements embedded in each stage of the AI lifecycle. For example, during data collection and pre-processing, privacy-preserving techniques and representative data practices uphold privacy and fairness. In the modeling phase, techniques like ethical AI algorithms, bias testing, and interpretable modeling embed transparency and fairness. At deployment, human oversight controls, monitoring processes, and access governance further improve safety and accountability.
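
To make one of these design requirements concrete, the sketch below shows a minimal bias test of the kind used in the modeling phase: it computes the demographic parity difference, the gap in positive-prediction rates between two groups. The predictions, sensitive attribute, and threshold comment are hypothetical; real bias testing would span multiple metrics, groups, and lifecycle stages.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups (0 means parity)."""
    rate_a = y_pred[group == 0].mean()  # positive-prediction rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive-prediction rate for group 1
    return float(abs(rate_a - rate_b))

# Hypothetical loan-approval predictions (1 = approve) and a binary sensitive attribute
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # flag the model if above a chosen threshold
```

A large gap signals that positive outcomes are unevenly distributed across groups and warrants investigation before deployment.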

The 12 principles of responsible AI

Based on research and more than 200 collaborative client projects, we developed 12 principles integral to the responsible design and operation of AI systems (Figure 1):

Figure 1. The 12 principles of responsible AI

Source: Infosys Responsible AI Office

  1. AI regulatory compliance strengthens business reputation and improves market access. Regulatory compliance requires a detailed examination of region-specific and industry-specific AI regulations relevant to the business areas where AI is used. This includes evaluating how AI products and services are trained, deployed, and tested within specific geographical and legal contexts. Compliance involves not just adherence to AI regulations like the new EU AI Act, but also to pre-existing regulations like GDPR and the Digital Services Act.
  2. AI transparency builds trust and improves decision-making. Transparency isn’t just a best practice; it’s a strategic necessity. Decision-making in high-risk areas influences business strategies, financial outcomes, and people’s lives. AI systems must explain the rationale behind their decisions to build and maintain trust.
  3. Humans should control AI decisions and drive business outcomes. AI products and services must reliably provide comprehensible decisions that humans can control. This ensures consistent business outcomes and builds trust among users.
  4. AI systems should be fair, unbiased, and promote equal opportunity for all user groups. To ensure fairness in AI systems, identify and mitigate biases arising from training data, which reflect and replicate the biases of the data sources. This promotes equal opportunity for all user groups.
  5. AI safety mitigates unintended consequences. Firms must ensure AI products and services minimize the risk of unintended harm and are safe, reliable, and trustworthy. This includes implementing measures to prevent unintended consequences, secure model deployment, and manage potential risks in real-world scenarios.
  6. Data minimization improves compliance and protects user privacy. Data minimization means collecting and retaining only the data an AI system needs. Additionally, measures should be taken to safeguard sensitive data, respect content and code usage policies, and honor user consent preferences. All AI systems should comply with privacy regulations. Privacy measures include detection mechanisms for data leakage and privacy-preserving techniques such as anonymization or differential privacy (see the sketch after this list). These enable firms to release statistical information about datasets while protecting the privacy of individual data subjects.
  7. Security-by-design safeguards models and data. AI products and services must be secured to prevent unauthorized access, manipulation, and malicious attacks. AI must be secure by design against adversarial attacks and inputs such as prompt injections, jailbreaking, extraction, and poisoning.
  8. AI model validation improves accuracy and accountability. Model validation ensures that each AI model has learned sufficiently from training data to make accurate predictions and decisions when exposed to unseen data. Model engineering must integrate responsible-by-design tooling and guardrails into the engineering pipelines. Additionally, model engineering streamlines the deployment, monitoring, and management of AI models.
  9. IP rights and usage authorization prevent litigation, protect innovation, and build trust. Responsible IP management respects others' rights and prevents unauthorized use and infringement of proprietary content in generative AI models. It also safeguards sensitive enterprise IP from leaking into public generative AI tools when uploading proprietary content for tasks like code completion or document classification.
  10. Auditable AI processes increase transparency, accountability, and quality. Firms must ensure AI processes meet industry-recognized benchmarks and are auditable. This includes third-party and internal audits, maintaining documentation, auditing data, processes, and models, and designing validation and testing methodologies. It also involves security, bias, and regulatory auditing.
  11. Sustainable AI improves efficiency, mitigates environmental impact, and nurtures ethical practices. Firms should ensure that the development and implementation of AI products and services consume minimal computational and infrastructure resources, resulting in a lower carbon footprint.
  12. Centralized responsible AI teams safeguard AI decision-making. Enterprises should have responsible AI champions focused on adoption and implementation to ensure that ethics are not overshadowed by business imperatives. At Infosys, the Infosys Responsible AI Office is the custodian of AI governance and facilitates collaboration across functions. The office establishes and maintains a governance framework and defines policies, procedures, and decision-making processes related to AI.
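
As referenced in principle 6, differential privacy is one way to release statistical information without exposing individual records. The sketch below is a minimal, illustrative Laplace mechanism for publishing a differentially private mean; the data, bounds, and epsilon value are hypothetical, and a production system would rely on a vetted differential privacy library and careful privacy budgeting.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Release a differentially private mean via the Laplace mechanism."""
    clipped = np.clip(values, lower, upper)        # bound each individual's influence
    sensitivity = (upper - lower) / len(clipped)   # max change in the mean from altering one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Hypothetical purchase amounts, clipped to [0, 500], released with epsilon = 1.0
purchases = np.array([120.0, 80.0, 450.0, 30.0, 275.0, 60.0])
print(f"Private mean: {dp_mean(purchases, 0.0, 500.0, epsilon=1.0):.2f}")
```

Smaller epsilon values add more noise and therefore stronger privacy at the cost of accuracy, which is exactly the kind of trade-off responsible AI metrics should make visible.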

Turning principles into action: A responsible AI playbook

To support the 12 responsible AI principles, we have identified strategic approaches, tactical practices, and governance mechanisms that work together to put them into practice (Figure 2).

Figure 2. Responsible AI approaches, practices, and governance

Source: Infosys Knowledge Institute

Strategic approaches

Firms adopting the 12 principles must ensure responsible AI is:

  1. Fit for purpose. Consider the type of model used (traditional AI, LLMs, or deep learning models), the stage of the AI lifecycle, the type of data being used, and the specific use cases. For example, when AI determines loan eligibility, key considerations include fairness, privacy protection, and transparency. When generative AI creates code, major considerations are IP protection, detection of inadvertent IP infringement, and ensuring the generated code follows security best practices.
  2. Built in, not bolted on. Ethical considerations are woven into AI systems, from data selection and algorithmic design to deployment and monitoring. They are not an afterthought or add-on. Instead of waiting for issues to come to light, responsible AI practices proactively identify and mitigate potential biases, privacy concerns, and other risks throughout the AI development process.
  3. Proactive, not reactive. Responsible AI involves continuously evaluating and monitoring for emerging vulnerabilities and approaches. Enterprises must regularly upgrade their tools, systems, and processes as AI technology evolves.
  4. Platform-centric. A central responsible AI platform serves as a repository for all guardrails, monitoring, and testing capabilities. It also integrates with other AI systems, products, and processes, providing a one-stop shop for responsible AI controls.
  5. Metrics-driven. Defining metrics improves measurement, monitoring, and external auditing throughout AI design, development, and deployment. Many of these metrics are challenging to quantify, especially those measuring societal impact. However, when done right, this data supports risk management practices, model retraining, system updates, and accountability processes. For example, a responsible AI principle like explainability should be measured at the use case level against specific metrics, as well as at the enterprise level to ensure good practices are followed across units (a score roll-up sketch follows this list). Score-based evaluations are needed for all parameters.
  6. Operationalized. Responsible AI requires an operating framework that ties together all interventions and helps scale them at the enterprise level.
  7. Ecosystem-led. The AI value chain is complex, with different companies collaborating to build and operate enterprise AI systems. This includes hardware providers, software development kit providers, hyperscalers, model providers (open-source and proprietary), and vector database providers. Vulnerabilities and gaps can occur anywhere in this chain, and a minor vulnerability in one area can escalate into a larger problem in the wider system. Enterprises must scrutinize all their strategic partners and vendors to ensure they adhere to the same standards. Additionally, they need to build an ecosystem of partners specializing in responsible AI solutions and guardrails.
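
As referenced in the metrics-driven approach above, the sketch below rolls hypothetical score-based evaluations up from the use case level to an enterprise view. The parameters, scores, and threshold are illustrative assumptions rather than a standard scoring scheme; the point is that the same metrics can flag gaps per use case and be aggregated across business units.

```python
from statistics import mean

# Hypothetical responsible AI scores (0 to 1) per use case and parameter
use_case_scores = {
    "loan_eligibility": {"explainability": 0.82, "fairness": 0.74, "privacy": 0.91},
    "code_generation":  {"explainability": 0.65, "fairness": 0.88, "privacy": 0.79},
    "support_chatbot":  {"explainability": 0.71, "fairness": 0.80, "privacy": 0.86},
}

THRESHOLD = 0.75  # illustrative minimum acceptable score

# Use case level: flag any parameter that falls below the threshold
for use_case, scores in use_case_scores.items():
    gaps = [param for param, score in scores.items() if score < THRESHOLD]
    print(f"{use_case}: gaps = {gaps or 'none'}")

# Enterprise level: average each parameter across all use cases
enterprise_view = {
    param: round(mean(scores[param] for scores in use_case_scores.values()), 2)
    for param in ("explainability", "fairness", "privacy")
}
print("Enterprise view:", enterprise_view)
```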

Tactical practices

There are six tactical practices for firms adopting responsible AI principles. At the team and use case level, responsible AI requires:

  1. Use case assessment. Enterprises need to assess use cases and classify them into risk categories via automated risk assessments (a simplified classification sketch follows this list). After this initial classification, further gating criteria apply and different design considerations are recommended to the development team.
  2. Reskilling. All employees should receive training to understand the principles of responsible AI, regardless of their function in the enterprise.
  3. Centralized monitoring. Enterprises should implement systems to monitor all their AI projects, providing a view of model health, performance, and the nature and type of prompts used. This in turn functions as an early warning system to enable firms to respond quickly to any problems.
  4. Red teaming. Enterprises should use adversarial testing processes that expose vulnerabilities in models and use cases.
  5. Reference playbooks. To follow leading practices, enterprises need to create reference guides setting out the use of responsible AI across the organization.
  6. Ethics-guided tooling. Implement developer tools that automate and streamline the embedding of responsible AI. Integrate these tools, exposed as APIs, across AI use cases to detect levels of privacy, fairness, and explainability while providing mechanisms to augment and improve these capabilities.
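
As referenced in the use case assessment practice above, the sketch below shows one way an automated risk assessment might classify use cases into tiers loosely inspired by the EU AI Act's risk categories. The attributes, rules, and tier names are simplified assumptions for illustration; a real assessment would be far more granular and reviewed by legal and compliance teams.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_legal_or_financial_rights: bool   # e.g., credit, hiring, benefits decisions
    uses_biometric_identification: bool
    interacts_directly_with_users: bool

def classify_risk(uc: UseCase) -> str:
    """Simplified risk tiers loosely inspired by EU AI Act categories."""
    if uc.uses_biometric_identification or uc.affects_legal_or_financial_rights:
        return "high"     # strict gating, audits, and human oversight required
    if uc.interacts_directly_with_users:
        return "limited"  # transparency obligations, such as disclosing AI use
    return "minimal"      # standard engineering controls

for uc in [
    UseCase("loan eligibility scoring", True, False, True),
    UseCase("marketing copy drafting", False, False, False),
]:
    print(f"{uc.name}: {classify_risk(uc)} risk")
```

Each tier then maps to the gating criteria and design considerations recommended to the development team.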

Governance

Firms embedding the 12 principles into products and services should provide:

  1. Clear and accessible policies. Enterprises should establish clear policies for AI procurement, development, deployment, and usage in operations. These policies should also cover AI-related grievances, accountability frameworks, model evaluation guidelines, AI crisis management, data governance, quality management, and testing and validation. Policies should be readily available and clearly state stakeholder obligations.
  2. Process guardrails. AI-powered business processes must be designed to account for AI risk elements, as risks can propagate downstream into critical workflows. Firms need to embed human-in-the-loop interventions, obtain informed data usage consent from users with voluntary opt-ins and opt-outs, and integrate business rules to safeguard the organization alongside technical guardrails. The initial step is a gap assessment to identify deviations from standard responsible AI practices.
  3. Enterprise governance. Organizations must ensure that governance treats responsible AI as an enabler of innovation. Enterprises must look beyond the technology itself so that AI is developed and used ethically. Infosys is among the first companies globally to be certified in ISO 42001:2023 for AI management systems.

Overcoming ethical drift

These principles enable firms to maintain focus on their AI objectives, whether building a product or implementing AI-driven processes, and to course-correct when new regulations, data, or company policies require a different approach. For example, if a company sets out to develop a fitness tracker app but fails to define the value of user privacy, the product can drift from helping users understand their health to selling user data to health insurance companies. While seemingly abstract, experience shows that ethical drift can lead to significant differences in outcomes. In Olivia Gambelin’s book, one firm started its project without regard to data privacy. The company made critical decisions around data collection and management without that grounding and, as a result, made many questionable choices about user privacy. Eventually, the company discovered that the product violated critical privacy regulations and had to rework its entire data management process at great cost.

Embedding responsible AI into products reduces technical debt and prevents ethical drift. It also ensures that everyday business decisions lead to more innovation.

Forty-eight percent of companies that implement responsible AI and communicate their efforts experience enhanced brand differentiation. This may be reason enough to establish responsible AI as a core part of enterprise AI strategy.
