Thank you for subscription.
Infosys’s Enterprise AI Readiness Radar explores executive attitudes toward AI and assesses how prepared companies are to transition from their current state of experimentation and point solutions to system-wide change and adoption.
For our Enterprise AI Readiness Radar report, we surveyed 1,500 companies globally and interviewed more than 30 executives to determine whether they have the foundational building blocks for enterprise AI and how culture and leadership mindset influence AI adoption. The report helps leaders understand the challenges of achieving enterprise AI and considers how to apply these insights to their business context.
What follows is a list of questions and answers that analysts and business leaders have posed to the Infosys Knowledge Institute since the report’s publication.
The five building blocks for AI readiness are strategy, governance, talent, data, and technology (Figure 1).
Figure 1. The readiness framework
Source: Infosys Knowledge Institute
The AI strategy should cover technology investments, talent acquisition, and ethical considerations. To execute, companies should set up an AI value office. This office is responsible for governing AI’s impact on the modern data estate, emerging technology, generative AI experimentation; integrating AI into operations; and overseeing responsibility, regulations, culture, and ethics. The office uses value-based prioritization for investments and project greenlighting, with outcomes tied to reputational, legal, and compliance risks. These investments are the AI use cases or capabilities that have the potential to deliver the most significant impact on the identified business objectives, such as increasing revenue or expanding into new markets.
In the report, we reference the productivity benefits from AI. But executives should also focus on financial impact (revenue growth and cost reduction) and customer experience (customer satisfaction and retention), along with innovation (measured by time to market and market share), user adoption rates, and model performance.
Although we did not directly assess the state of participant companies’ infrastructure, we interviewed financial services and healthcare/medical executives — industries that traditionally have many legacy systems. These interviews showed that they were using existing systems to become AI-ready by using APIs, microservices, and middleware integration, and by training AI on data from legacy systems using data transformation and cleansing techniques to ensure compatibility.
For companies where the customer journey is significant (e.g., retail, logistics, banking), interviewees told us they were reimagining processes to become AI-ready. This means mapping workflows and identifying pain points, inefficiencies, and repetitive tasks that AI can enhance or automate. These executives are prioritizing areas with the most significant impact, such as customer service (chatbots) and personalization (recommendation engines). They were keen to ensure the new process or customer journey is data-driven and capable of capturing and utilizing relevant data for AI models.
We can’t comment on industry confidence as the sample sizes were too small and the margin of error too high.
On geographies (US, UK, Germany, France, and ANZ), the following insights are pertinent.
Companies we spoke to for the report lacked adequate master data management processes and, more often than not, either didn’t have access to or hadn’t created a centralized governance office to ensure data (and model) privacy and security. Many didn’t know what data they had and where it was located, exposing the enterprise to significant risks. Poor data governance makes it difficult for teams to agree on definitions, controls, what data is shared, and when. Another problem is that they had no capabilities to stress-test their technology via red teaming, or simulated adversarial activity. This means they don’t know if their AI models are secure against attacks such as prompt injections, jailbreaking, extraction, and poisoning.
Responsible AI governance ensures that AI products and services are adopted by users. People need to trust and verify decisions made by AI systems. By embedding responsible governance into AI development and deployment, companies can ensure security and privacy and uphold core human-centric values such as transparency, safety, equality, accountability, privacy protection, and respect for human rights. This also supports regulatory compliance and overcomes what is known as ethical drift – the tendency of products to move away from their intended purpose as design and development takes place. Some 48% of companies that implement responsible AI and communicate their efforts experience enhanced brand differentiation. This is reason enough to establish responsible AI as a core part of enterprise AI strategy.
European companies should build AI products that comply with the EU AI Act, which further cements the EU’s role in establishing global digital standards. However, companies also need to be aware of standards in other parts of the world, notably in China, which has already adopted several AI laws.
In ANZ, Australia’s Ethical AI Principles sets standards around fairness, transparency, accountability, and privacy. In the US, compliance with the NIST AI Risk Management Framework, and Google’s AI principles, are a good place to start.
Our report found that only one in five employees has adequate AI knowledge. Although reskilling is a core component of getting the talent building block in place, having high AI knowledge in the organization is vital for technology readiness. Employees will also need access to four foundational technology capabilities – dynamically provisioned infrastructure and compute; security and rights management; flexible infrastructure for open- and closed-source AI models; and integration with API frameworks.
As mentioned in the recommendation section of the report, these capabilities are best accessed through a repository of self-service tools. This access can be as simple as an internal development marketplace, or a full-blown platform engineering squad that creates a user interface to access AI capabilities.
Another aspect is the level of automation in software development teams. We found that one-quarter of respondents were using manual software development processes. Automating software development has many benefits, including better customer experience, reduced error rates, improved compliance, and lower stress for teams.
To get employees on board and create a culture of tech-powered innovation, Infosys is enabling access to generative AI tools, including providing software development teams with automation through what is known as Stream AI (where whole phases of the software development life cycle are collapsed and automated), and building a pipeline of talent that is knowledgeable about AI, can build the technology through low-code tools, and develops experience of AI engineering techniques and tools.
Data infrastructure is vital for AI success because it ensures the availability, quality, and accessibility of data needed for AI models — all areas where our respondents report facing challenges.
To know when their data infrastructure is reliable enough and prepared for enterprise AI, companies should assess current systems, including hardware, applications, and integrations. Assessments include whether hardware is fit to run an AI application, interoperability, and infrastructure complexity. It is also important to identify where data is stored and to consolidate it effectively, which might require changes to the data ecosystem.
Companies are more prepared for enterprise AI if their data infrastructure includes systems to cleanse and then fingerprint data (a small, unique digital signature or hash generated using algorithms that capture the essence of the data's content without storing the data itself). This fingerprint allows for better transparency and governance. Additionally, they must know how to handle structured and unstructured data, including traditional analytic data, transactional data, synthetic data, and data from the ecosystem, generated by users or machines.
Restructuring can be costlier in terms of time and money, given the complexity of existing legacy systems, disruption to business processes, the need for customization, and the necessity to reduce technical debt. Starting from scratch is often less expensive and offers the benefit of enabling modern, streamlined data architectures without the burden of legacy issues. Restructuring is a better option when existing data is critical and cannot be replicated; when gradual restructuring is possible; and when a significant amount of money has already been invested in legacy systems.
Talent development is a huge factor in AI readiness. As employees increasingly create and train AI systems, they will need strong AI skills (see this new article by the Infosys Knowledge Institute)
However, our Enterprise AI Readiness research found that most employees lack sufficient awareness of AI and its capabilities; only one in five has adequate AI knowledge. When asked about talent capabilities within their companies, most executives identified a gap in techno-functional talent — professionals who can bridge deep technical expertise with business understanding. This gap means many organizations are unprepared and may miss out on the productivity gains AI can offer.
To get ready, forward-thinking companies are creating AI skills categories and providing AI-led learning paths for employees. For example, Infosys has classified generative AI skills across a progressive ladder of capabilities, starting with AI Aware (nearly all employees), AI Builder (a larger number of practitioners), and AI Master (a smaller number of specialists).
Upskilling talent in AI often proves better than hiring or outsourcing. It leverages existing employees' knowledge of the company’s culture, systems, and processes, leading to smoother AI integration. Nurturing in-house expertise fosters loyalty, reduces turnover, and builds a sustainable competitive advantage by embedding AI skills across the organization. Additionally, it’s more cost-effective in the long term, as continuous upskilling adapts the workforce to evolving AI needs without high costs and risks associated with recruiting new talent or relying on external vendors. This approach promotes innovation and ensures alignment with strategic goals.
Our research found that only 12% of companies are confident that they are providing employees with sufficient training opportunities. However, our recent research on technology, skills, and AI adoption trends showed signs that banking companies now rely more on upskilling than hiring or outsourcing.
This research found that enterprise AI is expected to boost productivity by as much as 40%, with a median increase of 15% across regions. Notably, most organizations are beginning to measure AI’s value delivery (ROI), with some even stating that no strategy is complete without an AI component. Organizations expect significant rewards, with European companies expected to increase investment by 115% this year, according to our Generative AI Radar research. While true enterprise AI is still three to five years away, those that get it right can achieve end-to-end AI integration, making capabilities widely available across the enterprise.
Executives are also aware of AI’s legal, compliance, and reputational risks. Even executives who are the most eager to implement AI are concerned about the uncertain road ahead, as indicated by the Cautious Explorer archetype in our report. As one CIO at a US insurance company put it: “The CEO and COO are saying go ahead and do anything that we need to do, but it brings with it a lot of prep and consideration for security, increasing compute power in the cloud and on-premises. Having these conversations can be challenging.”