Insights
- Artificial intelligence has gained prominence in consumer use cases, but enterprise adoption is much patchier, with few organizations deploying AI at scale.
- A major roadblock to enterprise-wide adoption is the perceived threat of the technology, and the question of whether it will remain robust, transparent, and unbiased in production environments.
- However, the right usage and a holistic strategy focused on creating AI “evangelists” and “practitioners” can address these pain points.
- In our experience, AI “blockers” and “skeptics” can move into the “evangelist” and “practitioner” archetypes by leveraging concepts such as transparent and resilient AI, good AI ethics, explainable AI, low-code development, and open-source frameworks and models.
- To encourage long-term, sustainable collaboration and synergy between employees and AI technology, a cultural shift is also needed.
- With the right balance of people, process, and technology, enterprise-wide adoption of AI will enable the transformation toward a live enterprise: a continuously learning, evolving, and sentient organization.
Today, e-commerce companies recommend products with AI, sports leagues use AI-powered decision-making systems, and self-driving cars edge closer to reality. The impact of AI pervades our experiences, and as consumers, we have already accepted it into our lives, although often without realizing it.
For business applications, the story has been different: Few large organizations have fully deployed enterprise AI successfully. According to a McKinsey survey, high-tech and telecom firms are doing better than most, with at least 30% embedding deep learning models into operations.1 It’s also notable that AI adoption is often highest in product development, as firms buttress the technology with Agile, cloud, and analytics platforms to make the most of AI investments.
Moreover, not all AI is the same. Even the most visionary enterprises use “weak” AI (simple models and analytics) rather than “strong” AI to crunch data and generate insights at speed. These simpler algorithms, though less ambitious than their stronger counterparts, improve the customer experience, make systems more robust, and optimize processes from HR to legal. The laggards, however, have barely started in any AI domain. They lack a sound AI strategy, business leadership, and the ability to replace legacy systems with new AI-powered ones.
For organizations that get AI right, the benefits can be staggering. Firms using AI can increase enterprise profits by 38%, according to Infosys research.2 Overall, the technology is expected to provide businesses with $14 trillion (nearly the size of China’s economy) of gross added value by 2035.3
When employees trust and understand the value of AI, it creates an ecosystem where both users and AI algorithms learn from one another
That potential is expected to lead to an AI arms race in many industries. The enterprise AI market was valued at nearly $4.7 billion in 2018 and is expected to reach $53 billion by 2026.4 Even the ongoing pandemic has not slowed the pace of AI investments. A 2020 Gartner poll reported that 47% of AI investments were unchanged after the start of the pandemic and that 30% of organizations planned to increase spending.5
Still, many employees are skeptical of AI’s promise, which can create a barrier to adoption. Some workers believe their jobs are threatened by the technology’s ability to sift through enormous amounts of data and make powerful inferences in real time. Others don’t trust AI’s ability to make unbiased decisions or to explain how those decisions were made. Workforce resistance to AI systems can significantly slow adoption and derail the transformation companies envision and need. Conversely, buy-in from employees creates a symbiotic environment where both employees and AI systems learn from each other. Think of a graphic designer using AI features in Adobe Photoshop to conceptualize her next advertisement, or a logistics company’s delivery driver using an AI-powered route optimizer to plan his deliveries for the day.
To manage these conflicting forces, businesses need a robust enterprise change management strategy that converts skeptics into enlightened practitioners — and even AI evangelists — a strategy that covers everyone from leadership to those using the technology day in and day out.
Turning skeptics into AI evangelists
Even the most audacious, robust, and technologically advanced strategy will falter if employees don’t buy into the vision. Infosys interviewed more than 690 organizations in 2020 and found that, along with process inefficiencies and legacy systems, a risk-averse culture was among the most significant problems.6 Any AI strategy worth its salt needs to ensure that humans will be augmented by the technology and that fears and suspicions are firmly put to rest. Employees’ perception of enterprise AI depends on two factors: their proximity to the technology and their level of trust in AI systems.
Through our work with clients, we found four archetypes (AI Practitioner, AI Evangelist, AI Blocker, and AI Skeptic) that represent how employees respond to AI when it is deployed at scale. (See Figure 1.) By addressing each group’s specific concerns, companies can create an effective change management plan — one that reduces the friction that undermines AI’s effectiveness in the workplace.
Figure 1. Different types of employees can either help or hinder enterprise AI
Source: Infosys
To fully harness the technology’s potential, organizations should strive to have more workers in the Evangelist archetype. This person uses AI productively, is comfortable with the technology’s expansion to other functions, and understands how it contributes to business outcomes.
The walls between these archetypes are permeable, however. Users can graduate to the optimal Evangelist category through the influence of several AI concepts. At the same time, there is always a risk that Evangelists could slip into less optimal archetypes because of factors such as a drop in trust or missed opportunities to deepen their knowledge of AI. Organizations need to better understand each archetype and how it interacts with the enterprise’s overall AI strategy.
AI Evangelists focus on business value
This community of AI users is closer to the core business operations than to the technology side. The group includes sales associates or production managers who rely on technology and understand the value of data and AI in their businesses. This experience creates trust in AI, which is important at both the strategic and practitioner levels. The community’s attitudes and approaches are governed by the principles of transparent AI and ethical AI.
- Transparent AI opens the black box, making details of the system available to its consumers. This information includes the AI principles adopted, algorithms used, training methodology leveraged, data sets considered, and parameters used by the underlying models (a brief sketch of such documentation follows this list). It can resolve doubts held by the Evangelist and settle misunderstandings. This not only eases employee concerns but also forces organizations to think more carefully about how their AI is assembled and how assumptions are made.
- Ethical AI requires a clear articulation of what its user community could gain from data and, more importantly, how that data is collected. For example, AI tools that monitor workforce productivity can affect employee morale and motivation. Companies can reduce these concerns with a strong ethical AI framework. This should conform to privacy laws, such as the European Union’s General Data Protection Regulation, and have zero tolerance toward violations. Remember: An AI Evangelist understands the combined power of data and AI.
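To make the idea of transparent AI concrete, below is a minimal sketch, in Python, of the kind of lightweight “model card” a team might publish alongside an AI system. The model name, data sources, metrics, and limitations shown are hypothetical; they only illustrate the categories of information transparent AI would expose to its consumers.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Lightweight record of the details a transparent AI program would publish."""
    model_name: str
    algorithm: str
    training_data: List[str]              # data sets considered
    evaluation_metrics: Dict[str, float]  # how the model was judged
    known_limitations: List[str] = field(default_factory=list)

# Hypothetical churn-prediction model documented for business consumers.
card = ModelCard(
    model_name="customer-churn-v2",
    algorithm="Gradient-boosted decision trees",
    training_data=["CRM records 2019-2021 (anonymized)", "Support-ticket logs"],
    evaluation_metrics={"AUC": 0.87, "accuracy": 0.81},
    known_limitations=["Under-represents customers acquired after mid-2021"],
)
print(card)
```

Publishing even this small amount of structured information gives Evangelists something tangible to point to when colleagues ask how a model was built and evaluated.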
An AI Evangelist is the user who understands the value AI and its applications can bring to the business
There is a risk that AI Evangelists will become Skeptics if a company’s ethical framework is either fuzzy or conflicts with their conscience. Google,7 Microsoft,8 IBM,9 and other leading technology companies have already established AI codes of ethics. These guidelines are intended to ensure that their AI developments are fair, socially beneficial, and accountable, and that they maintain user data privacy.10
AI Practitioners understand both the technical and business value
AI Practitioners are usually technology-focused employees who are comfortable designing and building AI systems. But they are also good at connecting technology with the business. While an Evangelist focuses on business growth through AI, the Practitioner is hands-on with the actual systems. The principles of transparent AI and ethical AI are also needed to reinforce this community’s support for AI. Along with these, scalable AI, facilitated by low-code and open-source technologies, can help create stronger AI platforms. With open source, knowledge of a single programming language enables development on multiple platforms. Further, low-code tools help Practitioners quickly develop applications with minimal hand coding. These concepts can even allow Evangelists to take part in the development process from time to time.
- Scalable AI requires a company to adopt best practices that help it develop and deploy more robust and relevant AI systems. Scalable AI creates a pipeline of clean and robust data that enables the AI systems to make better choices. This helps reduce bias that might accidentally creep into the system.
- Low-code AI allows users to more easily participate in development and better understand and control systems. OpenAI’s GPT-3 neural network model, trained to generate many kinds of text from internet data,11 has recently been integrated by Microsoft into its Power Apps platform. This technology allows employees with little or no coding experience to build Power Fx applications using conversational language.12
- Similarly, open-source AI ensures software development is always accessible to technically inclined users and enables them to participate (a brief sketch follows this list). This also has the effect of bringing new technologies into the Evangelist’s purview and exciting them about what they can accomplish with data.
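As an illustration of how open-source frameworks lower the barrier to participation, the sketch below loads a small, openly licensed text-generation model through the Hugging Face transformers library. The choice of GPT-2 and the prompt are illustrative assumptions rather than a recommendation; the point is that a few lines of Python put a working model in a Practitioner’s (or a curious Evangelist’s) hands.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, openly licensed text-generation model (GPT-2, used purely as an example).
generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt a business user might try while exploring the tool.
prompt = "Route optimization helps delivery drivers because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```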
AI Skeptics mistrust the technology
Users in this group lack exposure to enterprise AI systems. They may not understand the power of AI and the value it can bring to day-to-day work and the business overall. Skeptics are often familiar with breaches in AI systems, which can lead to mistrust. Usually, they need to be coached to overcome their skepticism. To convert this community into AI Evangelists, organizations can use explainable AI and resilient AI.
- Explainable AI will help the Skeptic interpret the decision-making and behavior of the AI system and build trust (see the sketch after this list). However, the internal workings of the system are sometimes unavailable or unexplainable to users. In those situations, company leaders should make sure employees understand the efficiency and value AI adds to the enterprise.
- Organizations should use resilient AI to defend the enterprise from cyberattacks and ensure that all employees understand this system’s value. This more robust security layer is needed because AI presents its own vulnerabilities, including the intentional or erroneous “poisoning” of data, query attacks that manipulate the AI’s logic or its entire model, and the theft of AI training data. To boost Skeptics’ confidence, organizations need robust cybersecurity systems that safeguard AI from both traditional and AI-specific threats. If Skeptics know that AI is helping protect the company, that awareness can strengthen their trust in the system.
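Returning to the explainable AI point above, one simple and widely used technique is permutation feature importance: shuffle one input at a time and measure how much the model’s accuracy drops. The sketch below is a minimal illustration that uses a public dataset and an off-the-shelf classifier as stand-ins for an enterprise model; the ranked output is the kind of plain, inspectable evidence that helps a Skeptic see what actually drives a model’s decisions.

```python
# Requires: pip install scikit-learn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a public dataset (a stand-in for an enterprise model).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt test accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```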
AI Blockers face ethical dilemmas
This archetype includes technologists (sometimes even technology purists) who don’t always understand business priorities and the boundaries of the enterprise AI strategy. This lack of understanding, coupled with access to data, might create ethical dilemmas. These can include doubts about the use of personal data, concerns with employee monitoring systems, and fears about AI displacing jobs. Blockers also worry about data management, privacy, and security. A Gartner survey recently reported that security and privacy were among the top challenges in enterprise AI implementation.13
Converting Blockers into Evangelists is extremely challenging; often, these doubting technologists must become Practitioners first. Almost all concepts discussed so far will be required to move a Blocker to one of the other communities.
- Organizations will need to put in place transparent and ethical AI to help this user group build trust.
- As trust develops, organizations will need resilient AI to ensure the system is secure and the users do not slip back into old ways of working.
Figure 2 explains how each archetype can transition to Practitioner or Evangelist status using the suggested approaches. With all AI users in one of these two archetypes, organizations can ensure optimal use of the technology and create maximum business value.
Figure 2. How to convert employees into AI Practitioners or AI Evangelists
Source: Infosys
Managing these different concerns and needs can have significant consequences. Infosys Knowledge Institute research found that the most successful enterprise AI efforts were ones that kept employee and customer satisfaction at the center of their vision and strategy. The highest-performing businesses deployed ethical AI practices to increase trust in their systems. At the same time, they invested in employee reskilling initiatives to make all users comfortable with the technology. These businesses reported that 91% of their AI initiatives were already operating at scale and, as a result, posted a two percentage point increase in operating margin.14
In a recent Gartner survey, ‘security and privacy concerns’ emerged as one of the top barriers to AI adoption
Culture change to drive enterprise AI deployment
Every survey, projection, and data point suggests that adoption of enterprise AI will accelerate — boosting everything from operational efficiency to customer engagement to employee satisfaction. This surge of AI activity requires all users to get comfortable with the technology. But this rapid shift will inevitably cause upheaval in many companies and potentially lead to AI skepticism. Based on their understanding of the technology and trust in the system, users react differently — regardless of use case or industry. Businesses need to tackle these challenges by deploying AI concepts in a strategic way that addresses the needs and concerns of different constituencies.
In addition to the AI concepts discussed previously, businesses will need a host of other changes, mostly focused on organizational culture. These include:
- A data-driven culture across the firm. This must be led by data experts who can drive development and by executives who support the initiatives.
- Business leaders who can lead by example and demonstrate the benefits of enterprise AI adoption.
- The ability to demonstrate a few quick wins to establish AI’s capability and credibility.
- A clear message across the organization, making it apparent that enterprise AI functions are meant to augment human intelligence and not replace it. When workers are assured that soft skills and creativity are highly prized, small pockets of AI capability will grow and drive this transformational change.
Taken together, these four pillars lead to what Infosys calls the Live Enterprise — a knowledge- and data-driven organization with the agility to swiftly respond to all challenges and business opportunities. Not only will this increase growth and improve business and IT outcomes, but it will lead to a more inclusive AI strategy. The technology offers wondrous possibilities worthy of celebration, at least for those companies that can fully implement AI and share in the profits promised for the coming decade.15
References
- Global survey: The state of AI in 2020, Tara Balakrishnan, Michael Chui, Bryce Hall, Nicolaus Henke, McKinsey & Company, Nov. 17, 2020.
- Maturing AI in the Organization, John Gikopoulos, Saibal Samaddar, Nidhi Om Subhash, Harry Keir Hughes, Infosys Knowledge Institute, December 2020.
- Maturing AI in the Organization, John Gikopoulos, Saibal Samaddar, Nidhi Om Subhash, Harry Keir Hughes, Infosys Knowledge Institute, December 2020.
- Enterprise Artificial Intelligence (AI) Market is Expected to Reach $53.06 Billion By 2026, Bhushan Jagtap, Allied Market Research, 2019.
- Three Top AI Trends To Watch This Year, Prem Kiran, Forbes, March 16, 2020.
- Maturing AI in the Organization, John Gikopoulos, Saibal Samaddar, Nidhi Om Subhash, Harry Keir Hughes, Infosys Knowledge Institute, December 2020.
- AI at Google: our principles, Sundar Pichai, Google, June 7, 2018.
- Microsoft AI principles, Microsoft.
- AI Ethics, Francesca Rossi, Christina Montgomery, IBM.
- Ethical AI: its implications for Enterprise AI Use-cases and Governance, Debmalya Biswas, Towards Data Science, Nov. 23, 2020.
- GPT-3, Ronald Schmelzer, SearchEnterpriseAI, June 2021.
- Microsoft revs up AI push with GPT-3-enabled Power Apps, Paula Rooney, SearchEnterpriseAI, May 28, 2021.
- Build 3 Operations Management Skills for AI Success, Laurence Goasduff, Gartner, Feb. 2, 2021.
- Maturing AI in the Organization, John Gikopoulos, Saibal Samaddar, Nidhi Om Subhash, Harry Keir Hughes, Infosys Knowledge Institute, December 2020.
- Infosys Live Enterprise – A Continuously Evolving and Learning Organization, Dr. N.R. Srinivasa Raghavan, Harry Keir Hughes, Samad Masood, Infosys Knowledge Institute, 2019.