AI Business Value Radar 2025

  • Enterprise AI is on the cusp of scaling: 50% of AI use cases achieve some or all of their business objectives, while a fifth achieve no value or are canceled after deployment.
  • IT-focused and industry-specific transformational use cases are most successful: Use cases in IT operations, cybersecurity, and software development are the most pursued and viable AI use cases across all industries; but use cases that require companies to change their data architecture and operating model also have a higher rate of success.
  • Preparing your workforce for AI has the biggest impact on value delivery: Effectively preparing employees to understand AI policies and engage in changes (companies we term Trailblazers) can improve AI success rates by 18 percentage points.
  • Agentic AI should be at the center of an organization's enterprise AI transformation, a driving force to reshape business processes, operating models, and technical architectures.
  • Other recommendations to derive more value from AI use cases include innovating through an AI foundry and factory; going product-centric for speed, agility, and autonomy; and creating a centralized AI governance task force.

Executive summary

The era of enterprise AI experimentation is ending, and the time for scaling AI deployments is here. In our biggest survey of business leaders to date, we have found that about 20% of AI use cases are now delivering on all business objectives, and that just over 30% of use cases are on the cusp of doing so.

As costs for AI look set to tumble in the wake of DeepSeek and the emergence of other potentially more frugal language models, we expect an acceleration of AI use cases achieving business viability, resulting in a mass expansion of AI agents across the average enterprise over 2025.

Managing this will require leaders to take transformation of their business seriously. Our research identifies a strong link between AI success and the changes that a business makes to its operating model and data structures. It also finds that engaging the workforce is the most important driver of successful AI outcomes.

To uncover these insights, Infosys surveyed 3,798 senior executives in North America, Europe, Australia, and New Zealand and interviewed more than 30 senior executives. The message from our respondents is clear: AI is being adopted and deployed across all elements of business, core and noncore.

However, success is still not guaranteed. This report provides a useful guide to those at this critical juncture and provides clear insights and direction toward building a successful enterprise AI future.


Enterprise AI: Time to scale success

More than half of AI initiatives deliver positive impact, with about 20% achieving most, or all, of the business objectives for which they were designed. IT-related use cases, including operations, cybersecurity, and software development, are more likely to deliver positive outcomes. Human-focused use cases, such as those in marketing, customer services, sales, and workforce, are less likely to deliver positive outcomes, but as companies invest more in these areas, they will see more success.

Success, spending, and transformation

Successful AI use cases are often those that have had more investment and require more transformation in an organization’s operating models and data architecture. However, this is not always the case. Many use cases are opportunistic, and many more will become so as the costs of AI fall.

Workforce readiness drives most success

Companies with an engaged and supported workforce deliver significantly better outcomes from AI than those companies that are still halfway toward training and engaging with their employees on AI.

Up to 18 percentage points can be added to the chance of AI success if a company has fully established change management and AI training processes, and is involving its employees in decision-making about AI. This shows that it's critical not to leave humans out of the loop.

Be bold to get the value

The report recommends five steps to become AI-first and generate business value from AI deployments:

  1. Explore agentic AI for enterprise transformation.
  2. Innovate through using an AI foundry and an AI factory.
  3. Prepare your workforce to ensure best outcomes for AI.
  4. Go product-centric for speed, agility, and autonomy.
  5. Create a centralized AI task force.

Drivers of value

This report sets out to uncover the dynamics that drive successful AI outcomes within business. Our survey of 3,798 senior executives asked about activity across 132 different AI use case types, among a pool of more than 3,200 companies with over $1 billion in revenue in the US, Canada, UK, Germany, France, the Nordics, Australia, and New Zealand. Total spending to date reported by these respondents across their selected use cases amounted to $135 billion.

To identify in which areas of business AI is being used most, and most successfully, we presented respondents with a list of 55 use case types across 14 functional categories (Appendix A). These categories comprised business functions such as marketing, finance, human resources, and product development, all of which are horizontal functions that are broadly similar across industry sectors.

We also presented each respondent with a handful of industry-specific use case types appropriate to their sector. In total, there were 77 industry-specific use case types across 15 industry sectors (Appendix A).

Respondents picked the five functional categories they were most interested in alongside their industry-specific category, and within each category they indicated which use case types they were pursuing. For each of these use case types, they then indicated how far they had progressed in deployment, and whether they had achieved their business objectives.

This was a self-selecting sample, in that respondents only provided details for the top five use case categories in which their business is already interested. This is why 70% of the AI initiatives were reported as past the deployment phase. For such an early-stage, experimental technology, we would typically expect far more projects to have failed, been canceled, or still be in pilots.

Enterprise AI: Time to scale success

Despite the sample bias, it is heartening to see that 19% of AI initiatives are achieving most, or all, of their objectives (Figure 1). Moreover, 32% are partially successful. Together, this means that about half of AI initiatives are providing some positive impact, even if they are not yet meeting all objectives fully.

This suggests that organizations are getting better at experimenting with, developing, and deploying AI technologies within their business.

It also implies that in 2025, companies will shift focus from AI experimentation to AI deployments with proven business value that are scaled across the enterprise to deliver transformative impact.

But which AI use cases will be the first to break through into enterprise-wide deployments? To understand this, we gave each use case type a viability score: a weighted sum of deployments achieving some or all business objectives, divided by the number of respondents that pursued that use case type. It shows how likely a use case type is to deliver business outcomes (weighted more heavily for deployments achieving most or all business objectives); a score above 1 indicates a use case type more likely than average to deliver, and a score below 1 less likely.
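As a rough illustration of the mechanics, the Python sketch below computes such a score for one use case type. The stage labels and outcome weights here are our own assumptions for illustration; the report does not publish its exact weighting.

```python
from collections import Counter

# Hypothetical outcome weights: deployments achieving most or all
# objectives count more than those generating some value, as the report
# describes. The exact weights are not published; these are illustrative.
WEIGHTS = {
    "deployed_all_objectives": 2.0,  # achieving most or all objectives
    "deployed_some_value": 1.0,      # generating some business value
    # All other stages (pilots, cancellations, no value) contribute 0.
}

def viability_score(outcomes: list[str]) -> float:
    """Weighted sum of successful deployments divided by the number of
    respondents pursuing the use case type. Scores above 1 suggest a
    use case type more likely than average to deliver outcomes."""
    counts = Counter(outcomes)
    weighted = sum(WEIGHTS.get(stage, 0.0) * n for stage, n in counts.items())
    return weighted / len(outcomes)

# Example: 10 respondents pursued one use case type.
reported = (
    ["deployed_all_objectives"] * 3
    + ["deployed_some_value"] * 4
    + ["canceled_after_deployment"] * 3
)
print(round(viability_score(reported), 2))  # 1.0
```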

The viability score varies based on several statistically significant factors, including the acceptance level of users, the industry of the respondent, the amount of transformation the use case requires within the business, and the preparedness of employees.

Figure 1. Stage of use case deployments across the research sample


Source: Infosys Knowledge Institute

Each of these is touched on in this report and will be covered in further detail in subsequent publications from the Infosys Knowledge Institute. Here, we look at the overarching dynamics that affect the success of AI use cases in business.

IT use cases dominate

Among functional use case categories, it’s clear that IT-related areas are both most viable and most popular with respondents (Figure 2). More than a third (38%) of respondents have selected use cases within our IT, operations, and facilities category, which has a viability score of 1.11 and includes use cases such as incident management and ticketing, AI-orchestrated processes, and smart building automation.

Figure 2. IT use cases have high selection rates and are much more viable


Source: Infosys Knowledge Institute

Following this is the less pursued, but much more successful, category of cybersecurity and resilience monitoring. The high viability of this category is carried entirely by use cases related to enterprise resilience monitoring, which had the highest average viability in our survey (1.30). Other use cases in this category, related to threat and anomaly detection, scored close to the survey average.

Software development was about as popular, with 30% of respondents pursuing this category and achieving notably high viability scores for use cases such as automating code development (1.14) and developer code assistants (1.09).

Marketing, sales, and people next

It should not be a surprise that IT use cases are at the forefront of companies’ AI efforts, and that they’re delivering high levels of success. This is the natural home for AI tools due to the already highly digitized and computational nature of these functions.

What is interesting is that almost a quarter of companies are pursuing marketing, customer services, sales, and workforce use cases (as part of their most pursued AI areas), yet viability scores on average are relatively poor. These human-focused functions have been digitized significantly in the past 20 years. Yet it seems that AI success is still not guaranteed.

For instance, AI viability scores are slightly above average for well-understood use cases such as marketing asset creation (1.10), chatbots (1.07), workforce management and scheduling (1.03), and sales strategy optimization (1.03).

However, many seemingly simple use case types are not yet delivering consistently strong outcomes. These include use cases such as finding cross-sell opportunities (0.79), customer segmentation (0.83), personalized customer service (0.91), and talent acquisition (0.94).

In our view, it is only a matter of time before companies start seeing higher success rates in these areas. In fact, together these categories account for 24% of the total spending on AI use cases across our sample. Spending on marketing, sales, and customer service alone is equal to that spent on the three IT-focused use case categories in Figure 2. That suggests that viability for these categories will catch up soon as well.

The white-collar advantage

Considering viability across different industry sectors, white-collar and highly technical industries tend to lead. So, professional services and life sciences sectors outperform other industries in terms of their viability score — defined as the likelihood that their AI deployments will achieve all business objectives (Figure 3).

Figure 3. White-collar industries dominate value creation, blue-collar further behind


Source: Infosys Knowledge Institute

Life sciences likely leads in this area because it brings together many factors that help support strong AI outcomes. This industry is staffed by a highly qualified and technically minded workforce. Its core business involves processing and analyzing proprietary data, over which it has strong controls. And perhaps more important, it is very product-oriented, with the business organized around drug discovery and development initiatives.

This contrasts with financial services, which sits in the middle of the viability ranking despite sharing many characteristics with the life sciences industry. For instance, both employ highly qualified staff and deal with large volumes of highly regulated data. However, data in financial services is often fast-moving, flows freely across multiple business and product lines, and is a mix of proprietary and third-party data. The industry is also held back by a heavy reliance on legacy infrastructure and complex challenges in modernizing toward broad cloud adoption.

We can also see that the bottom of the chart is dominated by industries that involve significant interface with consumers or the public, and which rely on physical assets as a core part of their service offering. Healthcare, retail, public sector, and travel and hospitality all sit in the bottom half of our viability ranking, and all rely on a significant element of their service being provided by people and through bricks-and-mortar or physical assets.

Success, spending, and transformation

Our research uncovered more than just insights on viability. Respondents also reported how much they had spent, and what the user acceptance level was, for each use case. Most notably, the survey also asked respondents about the "required transformation": the amount of change to existing data structures, technical architecture, and business operating model or processes that a use case requires in order to be implemented.

When plotted together with average spend and viability, a clear pattern emerges regarding the transformational impact that AI will — if not must — have on the business in order for it to be successful (Figure 4).

Figure 4. Transformational use cases have higher success rates and attract highest spend


Source: Infosys Knowledge Institute

The first thing of note is that there is a significant positive correlation (p < 0.0001) between average spend on each use case and viability: the more likely a use case is to be successful, the more that was spent on it on average, and vice versa.
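A test of this kind is straightforward to reproduce on any comparable dataset. The sketch below runs a Pearson correlation on invented per-use-case spend and viability figures, purely to show the mechanics; the survey data itself is not public.

```python
from scipy import stats

# Invented per-use-case aggregates: average spend (in $M) and viability
# score. These numbers are placeholders, not the survey's data.
avg_spend = [2.1, 3.5, 1.2, 4.8, 2.9, 5.6, 1.8, 4.1]
viability = [0.92, 1.05, 0.80, 1.18, 0.98, 1.25, 0.88, 1.12]

# Pearson's r tests for a linear relationship between the two series.
r, p_value = stats.pearsonr(avg_spend, viability)
print(f"r = {r:.2f}, p = {p_value:.6f}")
```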

The second thing is that the use cases that require more transformation are generally those that have had the most money spent on them. This can be down to two things: (a) companies value the outcomes of these use cases higher, and therefore are willing to invest more; and (b) these use cases incur more cost to deploy because they require more transformation in the organization to be effective.

We have named these transformational use cases because they require an organization to change its operating model and data structures for them to work. By their very nature, these use cases transform the organization around them.

It is unsurprising, then, that the most transformational use cases are also industry-specific, because these touch on core parts of the business. In fact, industry-specific use cases outnumber functional ones in the top right of the graph by roughly 2:1 (19 industry-specific and nine functional).

These core use cases include claims processing and telemetry in insurance; energy trading in energy, mining, and utilities; AI-driven new product launch execution in consumer packaged goods; and AI-powered navigation systems with fleet management in the automotive industry. These are often use case areas that can help a company differentiate its core product in its market and therefore are critical to the functioning of the business.

Our analysis strongly suggests that companies are spending more on transformational use cases that are core to their business — and that these are the use cases most likely to deliver on business objectives if successfully deployed. In fact, we have found that focusing on these transformational use cases can uplift a company’s chance of achieving AI business objectives by 3 percentage points.

Spend less, today and tomorrow

There are successful AI use cases that neither require significant transformation nor need high levels of spend to succeed. The use cases in this cluster (circled in green) are more established and tend to be functional rather than industry-specific: there are 22 functional use cases in the cluster, compared with 14 industry-specific ones.

This cluster includes well-understood or mature functional use cases such as IT operations, software development, chatbots, financial reporting and cashflow forecasting, fraud detection and risk analytics, marketing asset creation, or supply chain optimization.

These are good bets for companies looking for low risk, low effort, and potentially lower cost AI wins. This is because they require much less change to the organization and data architecture, and they are more likely to be successful with relatively low levels of spending.

But spending could become less of a barrier to success in future. The release of DeepSeek's R1-Zero model in January 2025 shocked the market when it was revealed that it could provide performance comparable to leaders such as OpenAI at 6% of the training cost. Meanwhile, in February, a research team from the University of California, Berkeley, announced that its TinyZero model had recreated the core function of the R1-Zero model for a cost of $30.

If costs for AI models continue to fall in this way, and can be maintained at such low levels, the calculus of AI viability could shift. Use cases that did not score highly in our viability model could become more viable as falling fundamental costs let companies invest more in them.

The future could quite quickly see many use cases uplifted to higher viability scores as falling costs enable higher levels of investment in experimentation, deployment, and transformation around each.

Agentic AI, a key to transformation

Falling AI costs in the future could accelerate AI use case viability, but will not remove the requirement to transform business operating models and data architecture to make many AI use cases work. Such transformation is difficult, and many companies are not prepared, as we found in our Enterprise AI Readiness Radar published in late 2024.

Ideally, such changes are made incrementally, aligned with clear business value at each step, while being delivered with supporting change management investments. Ironically, it is one of the most successful AI use cases identified in this research that can help here.

Agentic AI is an approach in which organizations create multiple small AI agents, each playing a specific role or carrying out a specific task. These agents have a degree of autonomy and act on their own to achieve specific goals. They can make decisions, plan actions, and learn from experiences. They can also be orchestrated together to enable end-to-end processes.
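For readers who think in code, here is a deliberately minimal Python sketch of that pattern: several small agents, each with one role, orchestrated into an end-to-end process. Real systems would wrap model calls, tools, and memory; all names here are illustrative and tied to no particular framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    role: str
    act: Callable[[str], str]  # takes the current work product, returns an updated one

def orchestrate(agents: list[Agent], task: str) -> str:
    """Run agents in sequence, handing each the previous output."""
    work = task
    for agent in agents:
        work = agent.act(work)
        print(f"[{agent.role}] -> {work}")
    return work

# Each agent plays a specific role in an end-to-end process.
pipeline = [
    Agent("planner", lambda t: f"plan({t})"),
    Agent("executor", lambda t: f"result({t})"),
    Agent("reviewer", lambda t: f"approved({t})"),
]
orchestrate(pipeline, "invoice triage")
```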

This approach is both incremental and closely linked to specific business value goals at each step.

This is why we recommend that agentic AI should be at the center of an organization’s enterprise AI transformation.

It’s also why we predicted recently that agentic AI would be a Top 10 AI imperative and increasingly be guiding employee decisions and processes over 2025-2026.

In fact, orchestration of AI agents is the most commonly pursued AI use case type in our sample, and it has a high viability score of 1.16, placing it in the 90th percentile of viability for all use cases. This supports our belief that agentic AI will develop significantly over the coming year and will be the driving force of enterprise transformation as it reshapes business processes, operating models, and technical architectures.

Engaged staff deliver best returns

While transformation is crucial, its success hinges on how well a company’s workforce engages with and adapts to the changes. Our research found that organizations with deliberate workforce-preparation strategies significantly outshine those that deploy AI use cases without fully supporting their employees.

To come to this finding, we asked respondents to indicate which of the definitions in Figure 5 best matched their own company's employee engagement with AI, and categorized them into four archetypes.

Figure 5. Workforce readiness archetypes


Source: Infosys Knowledge Institute

Trailblazers represent the pinnacle of readiness and have woven AI into the daily muscle memory of employees. They offer robust training, cultivate a culture of continuous learning, and foster feedback loops so that staff can shape how AI is applied. At the other end of the spectrum, Watchers engage employees on AI only minimally, leaving them to work with AI on their own initiative. In our sample there were about as many Watchers (17%) as Trailblazers (16%) (Figure 6), indicating that just as many companies are doing almost nothing as are going all in on preparing their workforce.

Two-thirds of respondents fell in the middle, as either Explorers or Pathfinders (Figure 6). Explorers have taken initial steps to educate and support their staff with AI; Pathfinders have gone further, but still not far enough, meaning there is still some uncertainty around AI policies, expectations, and tools.

Figure 6. Most are Explorers or Pathfinders


Source: Infosys Knowledge Institute

As might be expected, Trailblazers enjoy the highest levels of success across all archetypes (Figure 7). They achieve significantly higher levels of major success (the chance of a use case achieving all its business objectives), as well as having a higher average viability score for their AI initiatives.

User acceptance of their AI use cases is also much higher. Acceptance is the score respondents gave for each use case to indicate how easily their users or customers adopted and accepted the outputs of an AI use case. Interestingly, we found a strong correlation between the acceptance score of use case types and their required transformation. This suggests that user acceptance grows in line with efforts to transform the business around an AI use case, which is another reason to engage employees in the process.

Go all in, or nothing

The surprising finding is that Explorers and Pathfinders — those that are midway on workforce preparation — are worse at delivering AI value than Watchers, who have done nothing to prepare their employees. In fact, Watchers outperform or nearly tie with Pathfinders — and do much better than Explorers — on all our performance metrics (Figure 7). This suggests that companies that begin, but do not finish, their AI workforce preparation journeys are at risk of holding back their AI outcomes. More concerning is that this is where most companies sit today.

Figure 7. Trailblazers in workforce prep have highest viability and acceptance


Source: Infosys Knowledge Institute

The message here is not that you should aim to be a Watcher, as their AI success rates, viability, and acceptance scores are still very poor. Instead, companies that have begun their AI workforce preparation journeys must focus on finishing them. The benefits are clear: Explorers can expect an 18 percentage point uplift in the chance of use cases achieving all business objectives, and Pathfinders a 13 percentage point increase.

Statistically speaking, this means that investing in AI education, on-the-job training, and change management, and engaging employees in the development of and feedback on AI systems, can return bigger benefits than focusing on operating model and data architecture change alone.

The message is clear: Those companies that leave their workforces out of the equation risk missing out on the promised benefits of the enterprise AI era.

Be bold to get value

It's clear from our research that to achieve transformative value from AI, companies need to focus on transforming their own organizations. This means transforming operating models and data architecture, and engaging the workforce in AI.

Of course, companies have undergone major transformations before, not least of which has been the digital transformation of the past 20 years. But AI transformation is different — as people are involved in shaping and training AI in a way that they were not for earlier digital transformations.

“You can’t approach AI in the same way you approach digital projects,” says Satish H.C., executive vice president and chief delivery officer, Infosys. “More foundation is required, as are high-quality data, guardrails, and continuous fine-tuning.”

The era of enterprise AI will result in profound change that will challenge today’s organizational hierarchies, governance structures, communications channels, and work schedules. What a team — or whole business division — might have spent weeks doing in the past could in the future take just minutes and involve a tiny number of humans.

Understanding the ramifications of this future and designing for it can only be done while also building it. And as the costs for AI fall, more companies will gain the confidence to do more in this field in the hope of achieving competitive advantage. For those bold enough to start down this path, the following five recommendations help guide the way to success:

  1. Explore agentic AI for enterprise transformation.
  2. Innovate through using an AI foundry and an AI factory.
  3. Prepare your workforce to ensure best outcomes for AI.
  4. Go product-centric for speed, agility, and autonomy.
  5. Create a centralized AI task force.

1. Expand agentic AI orchestration

Deploying and orchestrating agentic AI systems is a foundational step toward effective enterprise AI. Our research identified it as a low-cost, low-effort, and highly viable use case. By its nature, it supports the experimentation and incremental deployment of AI, helping organizations understand the process and operating model transformations they must undertake to benefit more broadly from AI. Agentic AI will also help prepare the workforce for a future of human-machine collaboration. In this new era, AI will execute decisions, with humans governing the process (Figure 8).

Figure 8. AI agents will reshape work


Source: Infosys Consulting

Indeed, agentic AI systems can make decisions, plan actions, and learn from experiences. In the past year alone, Google, Microsoft, OpenAI, and others have invested in software libraries and frameworks to support agentic functionality.

Unlike traditional assistive tools, these intelligent agents function as behind-the-scenes workhorses. In software development, for example, one agent might be responsible for writing code, another for testing, and another for critiquing, as sketched below. For individuals, a virtual assistant could plan and book a complex personalized travel itinerary, handling logistics across multiple travel platforms.
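A hypothetical sketch of that write/test/critique loop follows. Each agent is stubbed as a plain function; in practice each would be backed by a model plus tooling such as a test runner, and the loop is bounded so the agents cannot iterate forever.

```python
# Stubbed write/test/critique loop. All functions are placeholders for
# model-backed agents; none of this is a specific product's API.

def writer(spec: str, feedback: str | None) -> str:
    base = f"code_for({spec})"
    return f"{base} + fix({feedback})" if feedback else base

def tester(code: str) -> bool:
    # Stub: pretend the code only passes once a fix has been applied.
    return "fix" in code

def critic(code: str) -> str:
    return f"critique of {code}"

spec, feedback = "parse invoices", None
for attempt in range(3):        # bound the loop: autonomy needs guardrails
    code = writer(spec, feedback)
    if tester(code):
        print(f"accepted on attempt {attempt + 1}: {code}")
        break
    feedback = critic(code)     # route the critique back to the writer
```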

Agentic AI enables a future in which many different autonomous AI tools interact across a business with employees, customers, data, and applications. Leaders and employees will need to learn how to work with, and collaborate with, AI systems that have these capabilities.

2. Innovate with an AI foundry and factory

“Small wins through AI use cases are an important step toward enterprise AI,” says Sunil Senan, senior vice president and business head, data, analytics, and AI, at Infosys. These small wins lead to people accepting and trusting AI, prerequisites for adoption.

But small wins also need to scale quickly, and for this companies need a two-speed approach to innovation on AI. They can establish an AI foundry to experiment with and incubate new technologies, develop new patterns, and test use cases. Then an AI factory can turn the learnings from the foundry into products (Figure 9).

Figure 9. Innovate at speed and scale


Source: Infosys Topaz

The foundry and factory together can form the hub of AI innovation, experimentation, value realization, workforce engagement, and governance for an enterprise in the early stages of AI deployments. For this reason, it's important that they are not treated as innovation labs delivering novelties, but as the engine of AI transformation, tasked with driving the organization forward on its enterprise AI journey.

3. Engage with the workforce

The data could not be clearer. The single biggest factor driving AI success is whether a company has engaged its workforce on AI effectively. If employees do not use the AI tools built for them, those tools cannot deliver any value.

Indeed, those Trailblazers that represent the gold standard of engagement enjoy AI success rate improvements of up to 18 percentage points. Trailblazers do four things really well:

  • They fully engage their employees in continuous AI training, education, and change management.
  • They actively address and mitigate employee concerns about job impact, explainability, compliance, costs, and cultural barriers through transparent communication and involvement.
  • They fully support employees in understanding and adapting to AI.
  • They ensure that employee feedback actively informs AI deployment strategies.

To enact this sort of change, it’s imperative that senior leadership is the driving force.

“Resistance to change is a common barrier to AI adoption, particularly among operational teams burdened by daily tasks,” says one consumer packaged goods executive we spoke to. “Overcoming this requires leadership-driven initiatives that emphasize long-term benefits over short-term disruptions.”

Being clear on the corporate vision is important. Ensure everyone understands what is at stake, what the ultimate objective is, and how they will benefit from the journey. The vision will, and must, evolve over time. What matters is that it is articulated and communicated regularly, and that it defines the overall purpose and direction for AI in your business.

It’s also important that employees know where and how they can access and experiment with AI, so they can also champion AI, and be part of testing and designing the company’s enterprise AI future.

4. Go product-centric for speed and agility

Effective AI experimentation and deployment requires nimble, autonomous teams that define, design, and deploy their own tools for their specific uses. To achieve this requires a product-centric operating model (Figure 10).

Figure 10. A product-centric model democratizes and atomizes AI development


Source: Infosys Knowledge Institute

In this setup, product and platform teams, led by a product owner and working in an Agile cadence, are grouped to orchestrate an end-to-end customer journey. Employees are in charge of their own success and empowered with the tools and guardrails to act fast and responsibly.

This is an ideal model for scaling AI. Employees are autonomous but the guardrails for responsible AI are enablers, not barriers, for speed and agility (see also Recommendation 5). These guardrails ensure a well-governed and controlled environment to minimize cost and risk, but maximize experimentation and speed. Balancing this effectively will be crucial for organizations as they look to derive ever more value from AI implementations.

5. Create an AI governance task force

AI will only be successful if it’s accepted by users, and we believe that this acceptance hinges on responsible AI hygiene. Infosys has defined 12 principles of responsible AI (Figure 11) that can help guide organizations toward building effective governance foundations for strong user trust and acceptance of AI.

Figure 11. The 12 principles of responsible AI


Source: Infosys Knowledge Institute

A centralized responsible AI task force will support AI acceptance — and it will reduce risk from AI. This task force should focus on adoption and implementation to ensure that ethics are not overshadowed by business imperatives.

At Infosys, the Infosys Responsible AI Office is the custodian of AI governance and facilitates collaboration across functions. The office establishes and maintains a governance framework and defines policies, procedures, and decision-making processes related to AI.

The task force will enable organizations to maintain focus on their AI objectives and to course-correct when new regulations, data, and policies require a different approach.

For example, if a company sets out to develop a health tracker app but fails to define the value of user privacy, the product can drift from helping users understand their health to selling user data to health insurance companies. As soon as customers become aware of this, acceptance and viability of the app will plummet.

Recent research found that 48% of companies that implement responsible AI and communicate their efforts experience enhanced brand differentiation — reason enough to establish a responsible AI task force as a core part of enterprise AI-first strategy.

Appendix A: Use case types used in the survey

Based on interviews with SMEs and desk research, we collated 55 use case types across 14 categories (Figure A1). We similarly collated 77 industry-specific use case types across 15 industry sectors (Figure A2). All use case types are themselves at a level of abstraction higher than a specific use case, to make the survey manageable — but also relevant for all respondents.

Figure A1. Functional categories and their use case types


Source: Infosys Knowledge Institute

The survey asked respondents to select up to five functional categories out of 14 (Figure A1) where their companies are pursuing AI. Each category had between two and six common use case types (for example, product recommendation use cases in the sales and revenue category). For each use case type within a category, respondents were asked about the stage of implementation of their initiative(s), with the following options:

  • No plans to implement
  • Planning
  • Created proof of concept or pilot
  • Canceled before deployment
  • Deployed, not generating business value
  • Canceled after deployment
  • Deployed, generating some business value
  • Deployed, achieving most or all objectives

Respondents were then asked about the amount of spending for that use case type to date (from any start date). This was followed by questions about the amount of operational or business model change, as well as the amount of change in data structures and technical architecture, needed for each use case type. Finally, respondents were asked about the proportion of their user base that accepted and used the AI tool deployed (if any) for each use case type. The same series of questions was asked about industry-specific use case types for the industry of the respondent (Figure A2).

Figure A2. Industry-specific use case types


Appendix B: Research approach

