Generative AI

Trend 1: Closed-source foundational models gain wider acceptance

Closed-source large foundational models such as OpenAI's GPT-3.5/4/Turbo, Google's PaLM, and Anthropic's Claude thrive on minimal infrastructure on the customer side, which broadens their use cases across diverse segments. Enterprises can apply closed models to industry use cases even without fine-tuning, aided by retrieval-augmented methods such as semantic search, which improve AI trust and quality. These models run within a secure cloud ecosystem, are trained on extensive public data, and come with robust security and privacy controls.

These generic models perform multiple tasks, such as generating human-like language and using complex pattern identification to perform reasoning tasks. This broad capability facilitates applications such as document summarization and enterprise knowledge management.
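
As an illustration, here is a minimal document-summarization sketch against a closed model, using the OpenAI Python SDK; the model name, prompt wording, and file name are illustrative assumptions, not recommendations from this report.

# Minimal document-summarization sketch using a closed-source model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize(document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # any closed chat model with sufficient context works
        messages=[
            {"role": "system",
             "content": "Summarize the document in three bullet points."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

print(summarize(open("quarterly_report.txt").read()))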

A major tech company collaborated with Infosys to enhance its content moderation system. They implemented an AI model using supervised transfer learning for vision and text, enabling the identification and isolation of toxic content in user-uploaded forms.

Trend 2: Organizations leverage LLMs for seamless code migration

LLMs like GPT-4, trained on diverse codebases and programming languages, excel in contextual inference, pattern recognition, and generalization. They interpret legacy languages and generate code in modern languages, facilitating migration from languages such as assembly, Smalltalk, and C/C++. Legacy COBOL applications, still crucial in many enterprise systems, pose particular maintenance and documentation challenges.
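
A minimal sketch of LLM-assisted migration follows, again using the OpenAI Python SDK; the toy COBOL statement, prompt, and model name are assumptions for illustration. In practice, migration runs unit by unit, with the generated Java validated against existing tests before replacement.

# Sketch: translate a legacy code snippet to Java with an LLM.
# The model name, prompt wording, and snippet are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

LEGACY_SNIPPET = """
ADD STOCK-IN TO STOCK-ON-HAND GIVING NEW-STOCK.
"""  # a toy COBOL statement standing in for a real legacy unit

def migrate_to_java(snippet: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You translate legacy code to idiomatic Java. "
                        "Preserve behavior exactly and add brief comments."},
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

print(migrate_to_java(LEGACY_SNIPPET))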

Manual processes are time-consuming and costly, particularly with a retiring cohort of COBOL experts. Generative AI systems proficiently analyze COBOL codebases, generating precise, readable documentation through advanced pattern recognition and natural language understanding. Variables, functions, and program flows are accurately described and defined.
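
A rough sketch of the documentation workflow: split the program by COBOL DIVISION so each request fits the model's context window, then ask the model to describe variables, paragraphs, and flow. The model name, prompt, file name, and chunking rule are illustrative assumptions.

# Sketch: generate documentation for a COBOL program, chunked by DIVISION.
# Model name, prompt, and chunking heuristic are illustrative assumptions.
import re
from openai import OpenAI

client = OpenAI()

def split_by_division(source: str) -> list[str]:
    # COBOL programs are organized into DIVISIONs; split on those markers.
    parts = re.split(r"(?=^\s*\w+\s+DIVISION\.)", source, flags=re.MULTILINE)
    return [p for p in parts if p.strip()]

def document_chunk(chunk: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Describe the purpose of this COBOL code: list each "
                        "variable, paragraph, and the overall program flow."},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content

source = open("payroll.cbl").read()
print("\n\n".join(document_chunk(c) for c in split_by_division(source)))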

A major American airline transitioned from a commercial integration and complex event processing platform to open-source tooling. This required migrating a large swathe of code written in a proprietary language to Java using a state-of-the-art LLM. This saved about 30% of the firm's effort.

Trend 3: LLMs optimize knowledge management and semantic search

Transformer models that generate embeddings, such as OpenAI's GPT-3.5 and Microsoft's E5 Large, precisely navigate industry contexts. Embeddings capture the data's principal components, forming low-dimensional vectors that faithfully represent the original data. Combined with generative models (GPT-3.5/4, Llama, Falcon, Mistral-MoE, Flan-T5), they produce accurate, context-aware answers for end users that reflect the enterprise context.
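
A minimal sketch of embedding-based semantic matching, using the OpenAI embeddings API; the model name and sample documents are illustrative, and any embedding model such as E5 Large works the same way.

# Sketch: embed a query and documents, then rank by cosine similarity.
# The embedding model name and documents are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(
        model="text-embedding-ada-002", input=texts
    )
    return np.array([item.embedding for item in response.data])

docs = ["Hydraulic pump maintenance schedule",
        "Cabin crew rostering policy",
        "Landing gear inspection checklist"]
doc_vecs = embed(docs)
query_vec = embed(["How often is the hydraulic pump serviced?"])[0]

# Cosine similarity: dot product of L2-normalized vectors.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
print(docs[int(np.argmax(scores))])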

Retrieval-augmented generation (RAG) is a prevalent strategy for use cases like knowledge retrieval, Q&A chatbots, summarization, and similar search experiences. Many organizations are migrating their knowledge repositories to embedding-based vector databases, enhancing semantic search within the industry context and improving the user experience of business and IT operations. In aviation, for instance, this approach trims 17% off aircraft repair design efforts, cutting aircraft downtime.
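
Putting retrieval and generation together, here is a minimal RAG sketch under the same assumptions; the in-memory search below stands in for the vector database a production system would query, and the sample knowledge chunks are invented for illustration.

# Sketch: retrieval-augmented generation over an in-memory document set.
# Model names and documents are illustrative; production systems would
# retrieve from a vector database instead of a NumPy array.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

docs = ["Turnaround time for hydraulic pump overhaul is 12 days.",
        "Repair designs must be approved by the structures team.",
        "Composite panel repairs follow SRM chapter 51."]
doc_vecs = embed(docs)

def answer(question: str, k: int = 2) -> str:
    qv = embed([question])[0]
    # These embeddings are unit-length, so the dot product is the
    # cosine similarity.
    top = np.argsort(doc_vecs @ qv)[-k:]
    context = "\n".join(docs[i] for i in top)
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("Who approves repair designs?"))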

Trend 4: LLMs improve productivity across the software development life cycle

Companies increasingly explore LLMs to improve overall productivity across the software development life cycle, customizing user experiences through fine-tuned generative AI models. As our Tech Navigator: Building the AI-first organization discusses, data scientists are encouraged to use their preferred tools, combining open and closed AI models based on the enterprise use case.

The rising demand for GitHub Copilot and other LLM applications spans tasks like software requirement elicitation, code generation, documentation, unit test case creation, and general test case generation. In-editor experiences significantly boost both developer and tester productivity, elevating overall outcome quality. According to the annual State of AI report, using GitHub Copilot led to substantial productivity gains, with less experienced users benefiting the most — a productivity gain of approximately 32%. In a recent statement to Infosys, Nvidia CEO Jensen Huang said, “while some worry that AI will take their jobs, someone who is an expert in AI will certainly do so.”

A large telecom company implemented fine-tuned open-source LLMs to generate Java, Python, AngularJS, .NET, and shell script code for developers. Custom plugins for code editors assist with code completion, documentation, and unit test case generation, all while keeping sensitive information within the company network.
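
As a rough sketch of that pattern, the call an editor plugin might make to a self-hosted, fine-tuned model inside the corporate network; the endpoint URL, model name, and JSON schema below are hypothetical placeholders, though many self-hosted serving stacks expose a similar completion interface.

# Sketch: editor plugin requesting a code completion from a self-hosted
# fine-tuned model on the internal network. The endpoint URL, model name,
# and request/response schema are hypothetical placeholders.
import requests

INTERNAL_ENDPOINT = "https://llm.internal.example.com/v1/completions"

def complete(code_before_cursor: str, language: str) -> str:
    payload = {
        "model": "codegen-java-ft",   # hypothetical fine-tuned model name
        "prompt": code_before_cursor,
        "max_tokens": 128,
        "stop": ["\n\n"],             # stop at the end of the block
        "metadata": {"language": language},
    }
    resp = requests.post(INTERNAL_ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

print(complete("public static int fibonacci(int n) {", "java"))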

Trend 5: Autonomous agents shape the future of generative AI

AI agents are autonomous workers that independently execute tasks aligned with predefined goals and parameters. They are behind-the-scenes workhorses that accomplish tasks on their own, in contrast to tools like Copilot, which primarily aid and guide. In an autonomous agent, the underlying LLM plans the workflow, identifying and executing the steps essential for complex tasks, including running code that calls APIs to retrieve information from ERP systems.

Autonomous agents find applications in tasks like invoice processing: extracting data from invoice documents, matching and validating it against ERP systems, and then processing the invoice. Examples like Autogen, BabyAGI, and AutoGPT represent autonomous agents that serve as valuable assistants. They enhance roles such as IT support and function as knowledge assistants, leveraging the capabilities of LLMs, information extraction, and the organization's existing data tools.

However, there are risks too. Autonomous agents are self-evolving, undergoing continuous changes that may be hard to predict and control. Used carefully, though, autonomous agents are the next phase of generative AI and will truly transform existing experiences and increase productivity.
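
To make the plan-and-execute pattern concrete, here is a heavily simplified invoice-processing agent loop; every tool function, the ERP interaction, the model name, and the prompts are hypothetical stubs for illustration, not a reference to any framework named above. The hard step cap and explicit tool registry are the kind of guardrails that keep a self-evolving agent within predictable bounds.

# Sketch: a minimal plan-and-execute agent loop for invoice processing.
# All tools below are hypothetical stubs; a real agent would call OCR,
# ERP, and payment APIs. Model name and prompts are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice(arg) -> dict:
    # Stub: a real tool would run OCR/extraction on the document.
    return {"vendor": "ACME", "po_number": "PO-1042", "amount": 1250.00}

def match_po(arg) -> bool:
    # Stub: a real tool would look up the PO number in the ERP system.
    return True

def post_invoice(arg) -> str:
    # Stub: a real tool would write the validated invoice to the ERP.
    return "posted"

TOOLS = {"extract_invoice": extract_invoice,
         "match_po": match_po,
         "post_invoice": post_invoice}

state = {"invoice_path": "invoices/inv_001.pdf", "history": []}
for _ in range(5):  # hard cap on steps keeps the agent controllable
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You process invoices step by step. Reply with JSON "
                        'only: {"tool": "extract_invoice" | "match_po" | '
                        '"post_invoice" | "done", "arg": <argument>}'},
            {"role": "user", "content": json.dumps(state)},
        ],
    )
    # A production agent would validate this JSON before acting on it.
    step = json.loads(resp.choices[0].message.content)
    if step["tool"] == "done":
        break
    result = TOOLS[step["tool"]](step.get("arg"))
    state["history"].append({"tool": step["tool"], "result": result})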