The EU AI Act: An enterprise response strategy

  • The EU AI Act was approved by MEPs in March 2024 and is expected to enter into force in summer 2024, with most provisions applying 24 months later, around summer 2026.
  • However, the bans on prohibited systems will apply six months after entry into force, around the end of 2024.
  • Fines for infringements involving prohibited systems can reach €35,000,000 or 7% of a firm’s worldwide annual turnover for the previous financial year, whichever is greater.
  • It is therefore paramount that organizations build an adequate response strategy to the act, spanning people, process, and technology.
  • The 10-pronged approach outlined here will enable firms to mitigate risk, embed responsible-by-design principles into key products and processes, increase AI preparedness, and drive innovation.

Regulation is a top concern for businesses going AI-first, yet AI advances so fast that most regulation is playing catch-up. New global rules are making compliance obligations a key part of the AI roadmap. A case in point is the new EU AI Act (hereafter, the act), approved by members of the European Parliament in March 2024. Rather than treating this law as a nuisance, firms should consider its ethical implications when building their responsible AI strategy. The development of truthfulness indexes by Microsoft, Google, Meta, and Amazon, along with new explainable AI techniques for the generative AI era, reflects a growing awareness of and need for ethical AI independent of regulation.

Here, we lay out key provisions of the act and discuss an ethical AI response strategy to keep firms ahead of legislation as they go AI-first.

Key provisions of the EU AI Act

The act categorizes AI systems based on risk, subjecting high-risk systems to intense scrutiny and mandatory requirements while imposing fewer gating criteria on low-risk ones. It prohibits outright certain applications deemed to pose “unacceptable risk,” such as social scoring or systems that exploit human behavior. Detailed annexes define the categories of use cases that fall into each bracket.
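To make the tiering concrete, here is a minimal Python sketch that models the act’s risk levels as a simple lookup. The category names and the default-to-high-risk rule are illustrative assumptions of ours, not the act’s legal definitions, which live in its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"      # e.g., social scoring
    HIGH = "high-risk"               # e.g., recruitment screening
    LIMITED = "transparency-only"    # e.g., customer chatbots
    MINIMAL = "minimal-risk"         # e.g., spam filters

# Illustrative mapping only; the act's annexes, not this table,
# are the authoritative source for what falls where.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "behavioral_exploitation": RiskTier.UNACCEPTABLE,
    "hiring_and_recruitment": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a use case, defaulting to HIGH so
    unknown cases get scrutiny rather than a free pass."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("social_scoring"))  # RiskTier.UNACCEPTABLE
```

Defaulting unknown use cases to the high-risk tier is a conservative design choice: it forces review rather than silently waving new systems through.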

The act also defines general purpose AI (GPAI), comprising foundation models such as those behind ChatGPT and DALL-E, the workhorses of generative AI innovation.

This separate category carries more obligations and greater scrutiny. Within it, the act designates certain general purpose AI models as posing systemic risk, such as those trained with cumulative compute exceeding 10^25 floating-point operations (FLOPs).
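For a back-of-the-envelope sense of that threshold, the sketch below applies the common heuristic of roughly 6 FLOPs per parameter per training token (an assumption of ours, not a definition from the act) to a hypothetical model:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # floating-point operations, per the act

def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per
    training token (a widely used heuristic, not the act's wording)."""
    return 6 * parameters * tokens

# Hypothetical model: 1 trillion parameters, 10 trillion training tokens.
flops = training_flops(1e12, 10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # 6.00e+25
print("Presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD
      else "Below the systemic-risk threshold")
```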

Most of the act’s obligations fall on providers, developers, and deployers of AI systems. It applies to organizations developing AI inside EU member states, and in some cases to those outside the EU whose AI systems’ output is used within the EU.

Obligations on high-risk providers include:

  • Establish a risk management system throughout the AI system’s life cycle.
  • Maintain technical documentation to enforce compliance, accessible to authorities assessing AI systems.
  • Establish mechanisms for logging and recording AI system outputs (a minimal sketch follows this list).
  • Introduce a quality management system and human oversight.
  • Design and maintain AI systems for a certain level of robustness, security, and accuracy.
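The logging obligation above lends itself to automation. Below is a minimal, hypothetical sketch of an append-only audit log for model outputs; the field names, hashing choice, and file format are our assumptions rather than requirements lifted from the act.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_inference(log_path: str, model_id: str, prompt: str, output: str) -> None:
    """Append one inference record to a JSON-lines audit log.
    Hashing the prompt avoids storing raw personal data while still
    allowing duplicate detection (an illustrative design choice)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_inference("audit.jsonl", "credit-scorer-v3", "applicant data...", "score=0.72")
```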

Providers of general purpose AI models must also:

  • Maintain detailed documentation on testing and training processes.
  • Pass on information and documentation to downstream providers.
  • Establish a policy to respect IP and copyright.
  • Publish a sufficiently detailed summary of the content used to train the model, following the AI Office’s template.

For general purpose AI models with systemic risks, obligations include:

  • Perform intensive model evaluations, including conducting and documenting adversarial testing.
  • Assess and mitigate possible systemic risks, including their sources.
  • Track, document, and report serious incidents and possible corrective measures to the AI Office (a minimal sketch follows this list).
  • Ensure adequate cybersecurity protection.
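As an illustration of the incident-reporting duty, the sketch below captures a serious incident as a structured record. The schema and severity labels are invented for illustration; the act does not prescribe this format, and a real pipeline would submit through whatever channel the AI Office mandates.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SeriousIncident:
    """Illustrative incident record; fields are our assumptions."""
    model_id: str
    description: str
    severity: str               # e.g., "critical", "major" (invented labels)
    corrective_measures: str
    occurred_at: str

def report_incident(incident: SeriousIncident) -> str:
    """Serialize an incident, stamping the report time, for submission."""
    payload = asdict(incident) | {"reported_at": datetime.now(timezone.utc).isoformat()}
    return json.dumps(payload, indent=2)

print(report_incident(SeriousIncident(
    "gpai-chat-v2", "harmful output bypassed safety filter",
    "major", "patched filter; retrained reward model", "2024-06-01T10:00:00Z")))
```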

How will the act be enforced?

The EU AI Office will monitor implementation and compliance among general purpose model providers. Downstream providers can lodge complaints about work carried out by upstream providers and about any other infringement they deem critical.

The AI Office will have the authority to evaluate compliance, investigate systemic risks, and consult a panel of independent experts when adjudicating.

Major considerations for enterprises

The EU AI Act is intended in part to cement the bloc’s role as the setter of global digital standards, although companies also need to be aware of standards in other parts of the world, notably in China, which has already adopted several AI laws.

  • Timeline: The act is likely to be formally adopted in summer 2024, possibly June, and most of its provisions will apply 24 months after it enters into force, around summer 2026. However, specific provisions, such as the bans on prohibited AI systems, will apply six months after entry into force, expected around the end of 2024. Rules for general purpose AI systems, along with adherence to a code of conduct, will apply after 12 months, and monitoring requirements for high-risk AI providers after 18 months. At 24 and 36 months, restrictions on certain categories of high-risk AI use cases come into effect, including behavioral profiling using computer vision technologies.
  • Penalties: The regulation establishes a three-tier structure of fines for AI system operators in general, with additional provisions for providers of general purpose AI models and for EU agencies. The most substantial fines apply to infringements involving prohibited systems (such as those that exploit the vulnerabilities of a person or a specific group of persons due to their age, disability, or social or economic situation): up to €35,000,000 or 7% of worldwide annual turnover for the previous financial year, whichever is greater. The least severe tier covers providing inaccurate, incomplete, or misleading information, with fines of up to €7,500,000 or 1% of total worldwide annual turnover for the preceding financial year, whichever is higher; a worked example follows this list. Noncompliance penalties can be levied against providers, deployers, importers, distributors, and other notified bodies.
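To see how the “whichever is greater” rule plays out, the short calculation below compares the fixed ceiling with the turnover-based ceiling for two hypothetical firms; the tier amounts come from the act, while the turnover figures are invented.

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """Return the applicable ceiling: the greater of the fixed cap
    and the percentage of worldwide annual turnover."""
    return max(fixed_cap, pct * turnover_eur)

# Top tier (prohibited systems): €35m or 7% of turnover, whichever is greater.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140,000,000.0 -> 7% bites
print(max_fine(100_000_000, 35_000_000, 0.07))    # 35,000,000    -> fixed cap bites
```

For large firms the turnover percentage dominates, which is why the exposure scales with company size rather than stopping at the headline euro figure.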

How to build an adequate enterprise response strategy

How should firms respond to these considerations and provisions to ensure a fail-safe AI strategy? We recommend a 10-pronged approach (Figure 1), built on a foundation of technical and legal governance.

Figure 1. A 10-pronged enterprise response strategy to the act

Source: Infosys

As a foundation, firms should build technical and legal guardrails. Generative AI systems leveraging foundation models need additional technical guardrails against threats such as personal data leaks, hallucinations, and security attacks. Legal guardrails are also necessary: reviewing contractual obligations, assessing impact, and communicating obligations clearly.
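As one concrete example of a technical guardrail, the sketch below screens model output for obvious personal data patterns before it reaches a user. The regexes are deliberately simple placeholders; a production system would rely on a dedicated PII-detection service with far broader coverage.

```python
import re

# Illustrative PII patterns only; real deployments need much more coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def apply_output_guardrail(text: str) -> str:
    """Redact matches for known PII patterns before returning output."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(apply_output_guardrail("Contact jane.doe@example.com for the refund."))
# -> "Contact [REDACTED EMAIL] for the refund."
```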

We recommend the following response strategy that involves 10 levers:

  1. Build a cross-functional team with centralized responsibility: This team will act as the sole custodian for diffusion and adoption of the act, with representation from key stakeholders, including data science teams, AI developers, senior executives from the CDO/CIO/CAIO’s office, and the enterprise security, legal, and data privacy teams. It will help overcome the challenges and resistance typically encountered when rolling out changes to processes, controls, and governance.
  2. Raise awareness of the act within the organization: Launch a series of enterprise-wide deep-dive sessions for everyone who works with AI in any capacity. Back these sessions with a knowledge base of reference materials for stakeholders.
  3. Conduct a comprehensive maturity and risk assessment: Take a full inventory of AI systems that exist, are under development, or are planned as part of the overall AI roadmap. It is critical to evaluate the legal obligations and carry out a thorough impact and risk assessment for each AI implementation in the business; the registry sketch after this list shows one way to structure such an inventory.
  4. Conduct an internal systems gap analysis: Conduct a thorough gap analysis of internal mechanisms, systems, and policies. This should cover accountability tracking, data governance and management, IP protection and monitoring, risk management and control systems, quality management, maintaining documentation, and other focus areas defined in the act.
  5. Build responsible-by-design into AI systems: Incorporate ethical considerations, fairness, transparency, accountability, and regulatory compliance into AI design and development at every stage, including training data preparation, model engineering, fine-tuning, and deployment.
  6. Establish enterprise responsible AI frameworks: Adopt an accepted, functional framework that forms the basis for interventions and connects initiatives to ensure continued compliance. This is a strategic, future-facing investment.
  7. Build automated compliance checks: Use the latest technologies to keep compliance continuous. Enterprises may need gating mechanisms within automated workflows to verify compliance; for instance, a risk assessment tool that evaluates new use cases based on inputs from use case owners and routes them to the right resources (see the sketch after this list). There is similar scope for automating compliance in areas such as quality management, documentation, traceability, tracking, and reporting. Now is also a good moment to build a central dashboard that acts as a single source of truth for AI projects and as a registry of models, recording which models are in use, the health of AI systems, and their compliance status. Together, these measures drive down the cost of compliance and streamline governance, ensuring innovation is not throttled.
  8. Mitigate third-party risks: Assess your organization's exposure, particularly regarding obligations that may extend to entities outside the EU if the AI system impacts EU-based organizations or individuals. Review existing legal contracts with AI service providers, including model providers and cloud service providers, to ensure they safeguard your organization from third-party legal risks.
  9. Create a market monitoring mechanism: Set up monitoring mechanisms to constantly scan the market and identify upcoming shifts. Several of the act’s key provisions are broadly defined and a lot would depend on how they are executed and implemented, and what precedents and standards are set. This will improve overall readiness and agility to respond to regulatory changes.
  10. Find the right strategic partner: Identify the need for expert guidance and advisory services early on, and identify suitable strategic partners with established AI development experience and a strong responsible AI policy. Depending on the risk profile, consider legal advice and audit services to ensure compliance. Analyze all partners and service providers thoroughly to understand their core strengths, competencies, and suitability for engagement.
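Bringing levers 3 and 7 together, the sketch below pairs a minimal AI-system registry with an automated gate that routes each use case by risk tier. The tier names echo the act, but the fields and routing rules are illustrative assumptions, not a prescribed compliance workflow.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    owner: str
    risk_tier: str            # "unacceptable" | "high" | "limited" | "minimal"
    compliant: bool = False
    notes: list[str] = field(default_factory=list)

# Central registry: a single source of truth for AI systems in flight.
REGISTRY: dict[str, AISystem] = {}

def register(system: AISystem) -> str:
    """Gate a new use case: block prohibited ones, route high-risk ones
    to compliance review, and fast-track the rest. Illustrative rules."""
    REGISTRY[system.name] = system
    if system.risk_tier == "unacceptable":
        return f"{system.name}: BLOCKED - prohibited under the act"
    if system.risk_tier == "high":
        system.notes.append("needs risk assessment, documentation, human oversight")
        return f"{system.name}: routed to compliance review"
    system.compliant = True
    return f"{system.name}: approved with standard transparency duties"

print(register(AISystem("cv-screening", "HR", "high")))
print(register(AISystem("spam-filter", "IT", "minimal")))
```

In practice the registry would feed the central dashboard described in lever 7, so compliance status is visible per system rather than reconstructed at audit time.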

Humanize AI

The act wasn’t built in a vacuum. It aligns with AI regulations and guidance from China and the US, and is consistent with the OECD’s 2019 core principles for AI: respect for human rights, sustainability, transparency, and strong risk management.

In America, according to the Pew Research Center, more than half the population is “more concerned than excited” about AI. That shows how crucial it is to address people’s concerns so that the technology’s benefits aren’t hindered just when we need them most.

The act marks progress toward trustworthy AI systems. Our 10-point response strategy enables firms not only to comply but also thrive in the AI-first era and continue to reap the rewards of their investments in this transformative technology.
