Scaling Enterprise-Grade AI – A Study in Cloud-Based AI for Automation
Companies have been using Artificial Intelligence (AI) in their businesses since the 1950s. However, in the last ten years, AI has been recognized for its ability to streamline technology operations and create efficiencies that did not previously exist. The modern view of AI has shifted toward integrating AI solutions into enterprise applications and business workflows to deliver tangible value. According to Gartner, AI passed the “Peak of Inflated Expectations” around 2019, when AI cloud services were at the height of their popularity. The technology now sits in the “Trough of Disillusionment,” where impatience for results slows adoption. But there is cause for optimism: we are on the brink of the “Slope of Enlightenment,” as organizations recommit to AI based on newly realized benefits and operationalize AI initiatives to make efficient use of data, models, and compute, while investing in data for AI and responsible AI.
Deterrents to full-fledged adoption of AI
Most large enterprises know they need to invest in AI to keep pace with their competitors, but many have done so in a spotty manner. This manifests either as a siloed approach, where only certain groups within the enterprise deploy AI, or as a technology stack that is not interconnected across the organization.
AI demands large amounts of storage and vast computing power, which deters companies from investing in AI initiatives. Further, security and ethical checkpoints are critical to a thoughtful execution of an AI strategy. Lastly, legacy business processes and organizational culture invariably present challenges that need to be addressed.
Cloud is the force multiplier for AI
But now, with advances in two technologies, cloud and microservices, the barriers to scaling AI across the enterprise are being removed. Cloud lowers adoption costs and provides the flexibility, agility, and scale required to power AI-driven innovation. It also facilitates co-creation by supporting complementary technologies such as Robotic Process Automation (RPA) to fulfill the promise of AI.
Cloud also brings interconnectivity: data interoperability and the ability to connect disparate platforms through AI engines create an enterprise-grade infrastructure whose efficiencies deliver true business benefit.
One problem many businesses face when hyperscaling AI is how to bring together data generated on both external and internal platforms to derive more business value. For example, one of our customers, global healthcare giant GlaxoSmithKline (GSK), wanted to interconnect external platforms such as Facebook, WhatsApp, and Twitter with internal platforms such as its Enterprise Resource Management (ERM) and Customer Relationship Management (CRM) systems. The goal was to drive better digital interactions with stakeholders spanning both internal and external personas.
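In practice, this kind of interconnection usually starts by normalizing messages from each external channel into a shared schema and then routing them to internal systems. The Python sketch below is purely illustrative of that pattern; the class names, payload fields, and consumers are hypothetical and do not reflect GSK's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative sketch only: normalize messages arriving from external channels
# (WhatsApp, Twitter, Facebook, ...) into one format, then route them to
# internal systems such as a CRM. All names here are hypothetical.

@dataclass
class InboundMessage:
    channel: str   # e.g. "whatsapp", "twitter", "facebook"
    user_id: str   # external identifier for the interacting persona
    text: str      # raw message content

def normalize(channel: str, payload: dict) -> InboundMessage:
    """Map a channel-specific webhook payload onto the shared message schema."""
    if channel == "whatsapp":
        return InboundMessage(channel, payload["from"], payload["body"])
    if channel == "twitter":
        return InboundMessage(channel, payload["sender_id"], payload["message_text"])
    raise ValueError(f"Unsupported channel: {channel}")

# Registry of internal consumers (CRM, ERM, analytics, ...); each receives the
# normalized message and decides how to act on it.
internal_consumers: Dict[str, Callable[[InboundMessage], None]] = {}

def register_consumer(name: str, handler: Callable[[InboundMessage], None]) -> None:
    internal_consumers[name] = handler

def route(channel: str, payload: dict) -> None:
    msg = normalize(channel, payload)
    for handler in internal_consumers.values():
        handler(msg)

# Hypothetical CRM consumer: in a real deployment this would call the CRM's API.
register_consumer("crm", lambda msg: print(f"CRM <- {msg.channel}:{msg.user_id}: {msg.text}"))
route("whatsapp", {"from": "u-123", "body": "Where is my order?"})
```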
GSK used a poly-cloud approach to scale a Conversational AI program across the enterprise. According to Sudeep Gupta, Director of Data and AI Platforms at GSK Consumer Healthcare, this approach has maximized the efficiency of their operations and created a model that can be industrialized and replicated for stand-ups across their multi-faceted organization.
“By using our poly-cloud approach, we are able to have a Conversational AI platform that serves all the enterprise’s users and personas, and across a multi-disciplinary organization such as GSK, in a manner that is decentralized but coordinated. As a result, all interactions have been improved or ‘distilled’ for the benefit of all our users,” Gupta said.
Removing barriers to adopting AI with the cloud
However, there are organizational challenges to surmount, including creating a culture that embraces the automation of processes through AI. Even within an organization that accepts AI as the future, technology, organizational, and process challenges can slow its distribution.
One approach that has proven successful is to leverage existing platforms and make the technology available across the organization. Various departments and groups can then quickly build customized applications, such as Conversational AI, that serve their unique business needs by using a tested methodology and building blocks already in place at the organizational level. Deployment in a cloud environment makes this far easier and more efficient and opens the entire ecosystem to every team.
Once an entity grows large enough to be called an enterprise, its technology is invariably disparate and distributed across the organization. A variety of platforms and even operating systems can coexist within a single enterprise, which has historically been a challenge when systems could not speak to one another. Rather than imposing technology standards, approaches to AI deployment must accommodate multiple technology stacks to be efficient. A multi-cloud approach, with APIs and other connecting technology creating true data interoperability, can provide that framework. Teams must also be empowered to design and own their AI deployments so that use cases across the business are addressed.
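As a sketch of what such a framework might look like, the following Python example defines a cloud-agnostic data interface with provider-specific adapters behind it. The provider classes and dataset names are hypothetical placeholders, not any vendor's real SDK.

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterable

# Illustrative multi-cloud pattern: a shared interface hides provider-specific
# storage APIs so teams on different stacks exchange data through one contract.

class DatasetStore(ABC):
    """Cloud-agnostic contract that every provider adapter implements."""

    @abstractmethod
    def list_datasets(self) -> Iterable[str]: ...

    @abstractmethod
    def read(self, name: str) -> bytes: ...

class CloudAStore(DatasetStore):
    def list_datasets(self) -> Iterable[str]:
        return ["sales_q1", "chat_logs"]            # would call Cloud A's API here

    def read(self, name: str) -> bytes:
        return f"payload-from-cloud-a:{name}".encode()

class CloudBStore(DatasetStore):
    def list_datasets(self) -> Iterable[str]:
        return ["support_tickets"]                  # would call Cloud B's API here

    def read(self, name: str) -> bytes:
        return f"payload-from-cloud-b:{name}".encode()

def catalog(stores: Iterable[DatasetStore]) -> Dict[str, DatasetStore]:
    """Build one enterprise-wide catalog of datasets across all registered clouds."""
    return {name: store for store in stores for name in store.list_datasets()}

if __name__ == "__main__":
    unified = catalog([CloudAStore(), CloudBStore()])
    print(sorted(unified))  # one view of the data regardless of where it lives
```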
AI on the Horizon: What’s Next
In the next several years, the cloud will move the needle for AI from experimental to an enterprise-wide presence. The two technologies will complement each other: cloud can automatically provision the resources required to support compute- and memory-intensive AI workloads, and companies can give their clients access to curated datasets, tool stacks, and trained models through the cloud. The entire landscape will be technology-agnostic, standardized, and democratic by nature, leading to efficient use of computing power, data, and models.
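To illustrate what automated provisioning can look like at its simplest, the Python sketch below matches a declared AI workload to the smallest compute profile that satisfies it. The profile names and sizes are hypothetical and not tied to any real provider's offerings.

```python
from dataclasses import dataclass

# Illustrative sketch of automated provisioning: given a declared AI workload,
# pick a matching compute profile the way a cloud scheduler might.

@dataclass
class WorkloadSpec:
    name: str
    memory_gb: int
    needs_gpu: bool

# Hypothetical profiles, ordered from smallest to largest.
PROFILES = [
    {"name": "cpu-small", "memory_gb": 16,  "gpu": False},
    {"name": "cpu-large", "memory_gb": 64,  "gpu": False},
    {"name": "gpu-train", "memory_gb": 256, "gpu": True},
]

def provision(spec: WorkloadSpec) -> str:
    """Return the smallest profile that satisfies the workload's requirements."""
    for profile in PROFILES:
        if profile["memory_gb"] >= spec.memory_gb and (profile["gpu"] or not spec.needs_gpu):
            return profile["name"]
    raise RuntimeError(f"No profile can host workload '{spec.name}'")

print(provision(WorkloadSpec("chatbot-inference", memory_gb=32, needs_gpu=False)))   # cpu-large
print(provision(WorkloadSpec("model-training", memory_gb=200, needs_gpu=True)))      # gpu-train
```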