Insights
- Firms use generative AI agents for more efficient workflows and value streams, including the software development life cycle (SDLC).
- With generative AI, whole phases of the SDLC can be collapsed, leading to lower costs, higher quality products, and much faster delivery times.
- Stream AI takes user inputs and executes each SDLC phase — without intermediate pause or manual touch points — generating corresponding content (code, pipelines, etc.) in minutes for a fully functional outcome.
- Stream AI is a big part of the AI-first vision, and occurs when other essential components are in place, including the right digital operating model, AI-ready talent, AI-augmented development and testing, and an engineering shop well-versed in platform engineering.
- However, Stream AI must ensure users trust AI outputs and avoid risks associated with generative AI.
- This way of working inevitably means jobs will change. For example, a solution architect will still design, implement, and test architecture, but will now focus on how to create software more efficiently, using a range of tools.
AI-based software development, and the constant reimagination of how software products are created, are inevitable. Look at Devin, an intelligent agent in the demo phase (at the time of writing), which performs many functions a human software engineer can. The agent has capabilities in machine learning, natural language processing, and data analysis. While many fear it will eventually displace programmers, the reality is more akin to human-machine collaboration that enhances productivity and gives humans more time to improve product features.
Goldman Sachs has conducted tests and pilots with intelligent agents, achieving up to 40% gains in software engineering velocity. Microsoft’s GitHub Copilot has 1.3 million subscribers, with rival products emerging from Amazon and Google.
This signals a future where firms use generative AI agents for more efficient workflows and value streams, including the software development life cycle (SDLC).
Lots of AI investment, but benefits come with process reimagination
According to our Generative AI Radar 2023 APAC report, even though some firms are tripling generative AI spending in 2024 and C-suite support is strong, concerns about data privacy, usability, and ethics, alongside fears of reputational damage, hinder progress. Also, many firms simply don't have the skilled employees to adopt generative AI at scale. Another challenge is the tendency to focus on single use cases rather than examine value chains and processes holistically. This reduces business benefits: only 6% of European companies generate business value from their generative AI use cases.
Stream AI – an innovation to substantially increase RoI
However, firms are leaving a lot of value on the table. Research suggests that, when utilized properly, generative AI could deliver up to $4.4 trillion in value globally each year.
Much of this value is from enhanced user experience, increased operational efficiency, and automation, along with streamlined product development and design, according to Generative AI Radar 2023 Europe.
With generative AI, whole phases of the SDLC can be collapsed, leading to lower costs, higher quality products (according to client testimonials), and much faster delivery times.
Picture this: you write a prompt to create a software application with data visualization and reports, answer a few questions about product requirements, and a smart code buddy handles multiple phases simultaneously and delivers a fully functional application ready for production. Stream AI, as we call it, takes user inputs (from developers, product managers, or business/operations professionals) and executes each SDLC phase — without intermediate pause or manual touch points — generating corresponding content (code, pipelines, etc.) in minutes for a fully functional outcome.
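The uninterrupted flow described above can be sketched as a simple pipeline, where each SDLC phase consumes the previous phase's output with no manual touch points in between. This is a minimal illustration only: the phase functions below are stubs standing in for calls to generative models, and all names are assumptions, not part of any Stream AI implementation.

```python
# Hypothetical sketch of a Stream AI flow: each SDLC phase is an AI-driven
# step whose output feeds the next, with no intermediate pause.
from dataclasses import dataclass


@dataclass
class Artifact:
    """Accumulates the outputs of each SDLC phase."""
    requirements: str = ""
    design: str = ""
    code: str = ""
    tests: str = ""
    pipeline: str = ""


def gather_requirements(prompt: str) -> str:
    # Stand-in for an LLM turning a user prompt into requirements.
    return f"requirements derived from: {prompt}"


def design_solution(requirements: str) -> str:
    return f"architecture for: {requirements}"


def generate_code(design: str) -> str:
    return f"code implementing: {design}"


def generate_tests(code: str) -> str:
    return f"tests covering: {code}"


def build_pipeline(code: str) -> str:
    return f"CI/CD pipeline deploying: {code}"


def stream_ai(prompt: str) -> Artifact:
    """Executes every phase in one uninterrupted flow."""
    a = Artifact()
    a.requirements = gather_requirements(prompt)
    a.design = design_solution(a.requirements)
    a.code = generate_code(a.design)
    a.tests = generate_tests(a.code)
    a.pipeline = build_pipeline(a.code)
    return a


result = stream_ai("dashboard with data visualization and reports")
```

The point of the sketch is structural: the user supplies one prompt, and every downstream artifact is generated without a human hand-off between phases.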
Stream AI compresses SDLC steps (requirement gathering, solution architecture, user experience testing, etc.). It delivers a directly usable application or software product in a single go, with around 30% to 40% faster delivery, 20% to 40% cost reduction, and 30% to 40% quality improvement. These estimates stem from internal proofs-of-concept, which factor in model type, use case complexity, and data preparation costs. This significant productivity boost (in particular) is a key reason why firms will adopt Stream AI in the months ahead for greater value from AI projects.
Copilot in Power BI exemplifies the possibilities, demonstrating data analysis and report generation with flavors of Stream AI embedded in the solution. For example, the full software life cycle, from requirements gathering to release management, can be collapsed using Power BI. Another example is a legacy modernization lift-and-shift to AWS Native Platform that Infosys is carrying out for a client. The project processes roughly 40,000 files in 40 different formats and creates a visualization for the end consumer. Instead of 10 developers working on the project, Stream AI partially collapses its life cycle, enabling one person to write a good prompt that generates the entirety of the code.
Firms can reduce headcount in many junior roles, while upskilling high performers to take on more instruction- and prompt-based tasks.
According to our in-house teams, creating samples of these flows doesn't take long (a few weeks), but full-scale end-to-end implementations are complex and require as much effort as other IT transformation projects, taking between six months and a year. That said, Stream AI is not prohibitively expensive; like other AI-based investments, it promises substantially greater return on investment (RoI).
Stream AI, a big part of AI-first
First movers in the AI era have competitive advantage. Goldman Sachs, Danske Bank, Salesforce, Nvidia, and Infosys use generative AI to redefine value delivery to stakeholders, whether employees or customers. They not only build AI capabilities but embed AI into new products and services, reimagining their full life cycle. We call this going AI-first.
Stream AI is a big part of this AI-first vision, and occurs when other essential components are in place, including the right digital operating model, AI-ready talent, AI-augmented development and testing, and an engineering shop well-versed in platform engineering (Figure 1).
Figure 1. Stream AI requires AI-first, building on AI-augmented workflows
Source: Infosys
Firms should see this AI-first journey as a transformation management process, with Stream AI as one of the goals. Breaking up the journey into bite-sized chunks enables firms to deliver RoI quickly, and alleviates the stress of years-long AI transformation projects. Firms should move from AI-augmented to blueprint-driven to platform-centric and then finally Stream AI (Figure 2).
Figure 2. The multiplier effect on development flows: From AI augmented to Stream AI
Source: Infosys
Step 1: AI-augmented
Here, intelligent agents augment each SDLC phase. Firms aiming to develop a chat interface with AI-augmented capabilities (like Amazon CodeWhisperer or GitHub Copilot) require:
- Intelligent agent for requirement gathering and generating documentation.
- AI assistance in the design phase to optimize steps and facilitate solution and UX design.
- Intelligent AI to write code, generate test cases, and perform automated testing.
- AI assistants for automated pipeline setup and code deployment on cloud instances, with infrastructure-as-code generated by AI-assisted processes.
- Solution launch via AI-driven A/B testing.
Even simple AI augmentation saves significant time and costs. However, the flow consists of four to five separate actions (or phases), which have interdependencies and require human verification at each stage.
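The difference from Stream AI is the verification gate between phases. A hedged sketch of the AI-augmented stage, with illustrative function names (not any vendor's API), shows each phase pausing for human sign-off before the next begins:

```python
# Hedged sketch: in the AI-augmented stage, each SDLC phase still ends with
# a human verification gate before the next phase starts.
def ai_step(phase: str, upstream: str) -> str:
    # Stand-in for an AI assistant generating this phase's artifact.
    return f"{phase} output based on {upstream}"


def human_verify(phase: str, output: str, approve=lambda o: True) -> str:
    # The approve callback models a human reviewer; rejection stops the flow.
    if not approve(output):
        raise RuntimeError(f"{phase} output rejected; rework required")
    return output


phases = ["requirements", "design", "code", "tests", "deployment"]
artifact = "user prompt"
for phase in phases:
    artifact = human_verify(phase, ai_step(phase, artifact))
```

Collapsing the flow into Stream AI amounts to removing those `human_verify` gates, which is exactly why the trust and responsible-AI measures discussed later become essential.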
Step 2: Blueprint-driven
AI-first software engineers still need to understand how code works, how data flows, and how to solve problems. Regardless of the tools used, developers need blueprints — the understanding of structure and materials and how software products come together. At the next stage of maturity, firms must ensure application development is blueprint-driven, with templatized standards and best practices. For this, firms can use iLEAD, the Infosys Live Enterprise Development Platform.
Step 3: Platform-centric
Once firms achieve this level of AI maturity, they must progress to create curated platforms across infrastructure, UX, compute, and storage, using platform engineering to develop assets across software, knowledge, and infrastructure environments.
Step 4: Stream AI
Finally, firms can achieve Stream AI by reimagining whole sub-SDLC processes (such as solution architecture and design/user experience/DevSecOps). This leads to an AI-first organization that provides high-performance, generative AI-assisted workflows across the life cycle. The AI here utilizes large language models to break down problems and create workflows based on chain-of-thought "reasoning" with a fine-tuned prompt. This approach eliminates the need for manual programming and fundamentally changes the way developers and architects test, deploy, instrument, alert, and load data.
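To make the chain-of-thought idea concrete, the template below sketches one way such a prompt might decompose a goal into workflow steps before any generation happens. The wording and structure are illustrative assumptions, not a prescribed Stream AI prompt; the model call itself is deliberately omitted.

```python
# Illustrative chain-of-thought style prompt template: the model is asked to
# break the problem into SDLC workflow steps before generating any artifact.
COT_TEMPLATE = """Goal: {goal}

Think step by step:
1. Restate the requirements implied by the goal.
2. Propose a solution architecture.
3. List the components to generate (code, tests, pipelines).
4. Generate each component in order, checking it against step 1.
"""


def build_prompt(goal: str) -> str:
    # In a real system this string would be sent to an LLM; here we only
    # construct it to show the decomposition pattern.
    return COT_TEMPLATE.format(goal=goal)


prompt = build_prompt("reporting app with data visualization")
```

The design choice is that the decomposition lives in the prompt rather than in hand-written orchestration code, which is what lets the same flow serve many different goals.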
As firms mature their AI capabilities, they typically reach the intermediate stage, where generative AI tool rollouts augment workflows. But a lot more is needed to become an AI-first enterprise, including AI-curated blueprints, platform-centricity, and a move to AI-augmented lean IT life cycle value delivery (Figure 3).
Figure 3. The AI maturity journey, from automation-first to an AI-first enterprise
Source: Infosys
Responsible AI is key
Beyond the considerations of better blueprints and platforms, Stream AI must ensure users trust AI outputs and avoid risks associated with generative AI.
For instance, to write a prompt in a reporting system and get a report out the other end, firms must make sure their data is properly organized and classified rather than scattered in multiple systems.
Data must also be machine-readable and clean, with potential bias removed, making data preparation paramount.
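A minimal sketch of that preparation step might look like the following: scattered records are normalized and classified by sensitivity before any prompt touches them. The field names and classification rule are assumptions for the example, not a prescribed schema.

```python
# Illustrative pre-Stream-AI data preparation: normalize raw records and
# classify them by sensitivity so downstream prompts can exclude restricted
# data. The "salary implies restricted" rule is a placeholder assumption.
def prepare_record(raw: dict) -> dict:
    # Normalize keys and values so data is machine-readable and consistent.
    record = {k.strip().lower(): str(v).strip() for k, v in raw.items()}
    record["classification"] = (
        "restricted" if "salary" in record else "general"
    )
    return record


rows = [{"  Name ": " Ada ", "Salary": 120000}, {"Name": "Grace"}]
prepared = [prepare_record(r) for r in rows]
```

The classification tag is what later allows an AI-generated report to be scoped to "general" data only, rather than relying on the model to know what is sensitive.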
Firms seeking an enterprise analytics report with a single click also want to ensure the machine doesn't expose unintended data; traditionally this is caught in user testing, a step Stream AI eliminates.
There's also the question of how firms going AI-first through Stream AI prevent personal information from being exposed by the application, given the solution spans the entire platform.
To help, the AI system might use “external control” — where APIs are accessed through a cloud container. The container here (say Azure OpenAI Service) encrypts data in transit, filters content, removes personal data, and prevents prompt injections. It also checks compliance, filters hallucinations, and flags potential copyright violations and sensitive data exposure.
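One piece of such a gateway can be sketched as a filter the response passes through before reaching the user. The patterns below are deliberately simplified illustrations of PII redaction and prompt-injection screening, not production-grade detection, and not how any specific cloud service implements its filters.

```python
# Hedged sketch of an "external control" output filter: a model response is
# screened for prompt-injection markers and has simple PII patterns redacted
# before it reaches the user. Patterns here are illustrative only.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
INJECTION_MARKERS = ["ignore previous instructions", "system prompt:"]


def filter_response(text: str) -> str:
    # Block responses that echo injection phrases rather than pass them on.
    for marker in INJECTION_MARKERS:
        if marker in text.lower():
            raise ValueError("possible prompt injection detected")
    # Redact anything matching a known PII pattern.
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text


safe = filter_response("Report ready. Contact: alice@example.com")
```

In practice a managed service would apply far richer classifiers, but the control point is the same: filtering happens outside the model, in the container layer, where it can be audited and enforced uniformly.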
Another problem is security enforcement, with the right access controls to data. AI models must be resilient to threats. To get ahead, firms should make their Stream AI secure by design, establish responsible AI hygiene measures, and integrate generative AI model security with enterprise security protocols.
Stream AI influences all roles
AI-first means restructuring business workflows across the organization to add significant value. AI can be built into these workflows, collapsing many SDLC phases to achieve Stream AI.
This way of working inevitably means jobs will change. For example, a solution architect will still design, implement, and test architecture, but will now focus on how to create software more efficiently, using a range of tools.
Not all phases of the solution architecture will need Stream AI (Figure 4). For example, architectural pipelines will benefit from simple AI augmentation, whereas Stream AI will provision the development platform and validate the architecture, at 30% and 10% increased velocity, respectively, according to working demos of partial flows we’ve developed.
Figure 4. AI-first delivers benefits across the solution architect value flow
Source: Infosys
Similarly, scrum masters in the product-centric delivery model can use Stream AI to manage and prioritize product backlogs, features, and stories.
Stream AI may face slower adoption in some industries (healthcare, travel, energy): software in these industries is often messy, with old systems, undocumented features, and lots of integrations; understanding how to work within these constraints is currently out of scope for most generative AI models.
What is sure is that software engineering will use this powerful new technology to remain competitive. Humans will collaborate with machines to build resilient software systems. This new era is about knowledge disintermediation (removal of intermediaries from the software/knowledge supply chain); knowledge democratization; a shift-left in workflows (tackling tasks earlier in a process's timeline can increase efficiency); new integrated engineering workspaces; and new skill compositions and roles, including prompt engineering and AI-led product managers and business analysts.
Prompt engineers, who now earn salaries of more than $300,000, design and refine inputs on generative AI platforms to gain optimal results. Companies including Nestlé and KPMG are hiring hundreds of prompt engineers. Engineer roles will not go away, as some fear, but will become more specialized, requiring upskilling and continuous learning along with softer skills. They will work on product-based teams, where solving problems toward better business outcomes is more important than just engineering outputs.
Firms should act now: create AI-led learning paths for application developers, optimize SDLC through Stream AI, implement responsible AI practices, and witness rapid benefits that positively impact their bottom line.