Insights
- Approximately 43% of core banking systems in the US are built on COBOL.
- Core banking, payment, lending, and fraud operations are all fundamental to the delivery of banking services. However, for most US banks, these functions are delivered by a few dominant off-the-shelf software providers.
- As off-the-shelf, highly regulated products, they are by necessity rigid, and banks that want to customize them are limited by the speed at which the vendor can bring them on the journey.
- Infosys has worked with several banks, significantly reducing costs and increasing agility and digital capability. The solution is to innovate around the core. Infosys has used the following four design patterns to enable banks to be less restricted by their core systems:
- Replicate the data and synchronize with the core.
- Replicate a lighter version of the process for low-value and high-volume transactions.
- Reduce noncore activities in the core system.
- Optimize and automate processes around the core.
Core systems are a convoluted web
Financial institutions have long been held back by their core systems and processes. Many banks in the US still rely on mainframes and legacy systems for their core systems. According to recent data, approximately 43% of core banking systems in the US are built on COBOL. Additionally, 61% of global infrastructure hardware decision-makers reported that their organizations use mainframes, while more than 70% of the Fortune 500 companies operate their core businesses on mainframes, according to research by Forbes.
Core banking, payment, lending, and fraud operations are all fundamental to the delivery of banking services. However, for most US banks, these functions are delivered by a few dominant off-the-shelf software providers.
The providers, such as FIS for core banking, Fiserv for payments, and FICO for fraud, form the backbone of the US banking industry’s highly regulated and secure systems of record. Each has been modernizing and moving to the cloud, providing more digital functionality and value.
A coreless strategy
However, as off-the-shelf, highly regulated products, they are by necessity rigid, and banks that want to customize them are limited by the speed at which the vendor can bring them on the journey. But what if you build a coreless bank strategy? Infosys has worked with several banks, significantly reducing costs and increasing agility and digital capability.
The solution is to innovate around the core. Infosys has used the following four design patterns to enable banks to be less restricted by their core systems:
- Replicate the data and synchronize with the core.
- Replicate a lighter version of the process for low-value and high-volume transactions.
- Reduce noncore activities in the core system.
- Optimize and automate processes around the core.
In all these implementations, value is delivered without impacting the core, ensuring compliance, while enabling faster innovation at a lower cost and supporting the ability to scale.
The sections below illustrate each of these design patterns with technical details and the benefits realized.
Design pattern 1: Replicate the data and synchronize with the core
Many banks run legacy core banking solutions on the mainframe. Mainframe costs are determined by the amount of computing power needed to perform tasks, priced between $1,000 and $4,000 per million instructions per second (MIPS); a 2,000 MIPS mainframe system costs more than $4 million on average. Customer transactions are processed on the mainframe, and the relevant data is returned to be shown on the channel touchpoints: online banking, branch, ATM, and customer support.
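The cost arithmetic above can be checked with a quick back-of-envelope calculation; the midpoint figure is an illustrative assumption within the quoted per-MIPS range.

```python
# Back-of-envelope mainframe cost estimate based on the figures above.
# The $1,000-$4,000 per-MIPS range and the 2,000 MIPS capacity come from
# the text; the midpoint is an illustrative assumption.

def mainframe_cost(mips, low=1_000, high=4_000):
    """Return (low, high, midpoint) total cost for a given MIPS capacity."""
    return mips * low, mips * high, mips * (low + high) / 2

lo, hi, mid = mainframe_cost(2_000)
print(f"2,000 MIPS: ${lo:,} to ${hi:,} (midpoint ${mid:,.0f})")
```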
Given that banks are required to provide a more innovative and personal digital experience, they need real-time customer transaction data to be available outside of the core banking solution. Also, to support growing customer bases, they need a highly distributed solution on the cloud, which provides scale without hitting the core banking system running on the mainframe.
Banks often use M&A as a way to scale their customer base. This results in the merged entity running multiple core banking solutions. But with a coreless strategy, multiple cores can converge to provide each channel touchpoint with the required personalized experience.
Infosys worked with a leading regional US bank to implement this solution, supporting the bank’s coreless banking strategy. In this model, all inquiries coming from the various channels to the core banking system are routed instead to a scalable cloud-native solution: an operational datastore holding a copy of the core banking transaction data, synchronized with the mainframe in real time. This datastore is fronted with application programming interfaces (APIs) for the channels and events for other enterprise systems that rely on the data. In the first year, the bank reduced costs on its deposit core system by $1.1 million.
Figure 1. Implementation for design pattern 1
Note: VSAM - Virtual Storage Access Method
Source: Infosys
Infosys worked with a partner solution (the change data capture, or CDC, engine shown in Figure 1) that detects and captures changes in the mainframe core system in near real time as log streams. These are routed as events to topics on a highly scalable, distributed messaging platform such as Kafka. The data ingestion layer is designed as a set of microservices that group the events belonging to a transaction (e.g., an address change, debit, or credit) and ingest them into a cloud-based datastore such as MongoDB, which holds a canonical representation of the core domain data model, agnostic of the core banking product’s data model.
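A highly simplified sketch of the ingestion layer's grouping logic might look like the following, with plain dictionaries standing in for the messaging platform and the datastore; the event and document shapes are illustrative assumptions, not the real solution's schemas.

```python
# Minimal sketch of the ingestion layer in this pattern: group CDC events
# by logical transaction and upsert a canonical document. An in-memory dict
# stands in for the cloud datastore (MongoDB in the actual implementation).
from collections import defaultdict

# Change events as a CDC engine might emit them from mainframe log streams.
cdc_events = [
    {"txn_id": "T1", "type": "debit", "account": "A100", "amount": 250.00},
    {"txn_id": "T2", "type": "address_change", "account": "A200",
     "field": "city", "value": "Austin"},
    {"txn_id": "T1", "type": "fee", "account": "A100", "amount": 1.50},
]

def ingest(events, store):
    """Group events belonging to one transaction and write a canonical
    document, agnostic of the core banking product's own data model."""
    grouped = defaultdict(list)
    for ev in events:
        grouped[ev["txn_id"]].append(ev)
    for txn_id, evs in grouped.items():
        store[txn_id] = {
            "txn_id": txn_id,
            "account": evs[0]["account"],
            "entries": [{k: v for k, v in ev.items() if k != "txn_id"}
                        for ev in evs],
        }
    return store

datastore = ingest(cdc_events, {})
print(datastore["T1"]["entries"])  # two entries grouped under one transaction
```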
This solution provides multiple benefits. First, it offloads all the inquiry traffic from the mainframe and provides direct cost savings on MIPS reduction. Second, as the data is available in a datastore, the bank can carry out further functions such as real-time fraud detection, serving personalized notifications, and mining transaction data for analytics, which in turn offers cross-sell and upsell opportunities to customers. The benefits can be expanded further with a personal finance solution that can be offered to its customers to guide their spending and savings patterns.
This solution offers another interesting benefit for banks that run multiple core banking solutions due to mergers and acquisitions. In such a scenario, it simply converges the core banking data from each core into the around-the-core system and provides a converged, single-core view to all the channels.
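For the multi-core convergence case, the idea can be sketched as merging records from each core into one canonical view, with every entry tagged by its source core; the core names and data shapes below are hypothetical.

```python
# Illustrative sketch of converging two core banking feeds (e.g., after a
# merger) into one view keyed by customer. Field names are assumptions,
# not any vendor's data model.
core_a = [{"customer": "C1", "balance": 1200.0}]
core_b = [{"customer": "C1", "balance": 300.0},
          {"customer": "C2", "balance": 80.0}]

def converge(**cores):
    """Merge per-core records into a single canonical view,
    tagging each account with the core it came from."""
    view = {}
    for core_name, records in cores.items():
        for rec in records:
            entry = view.setdefault(
                rec["customer"], {"customer": rec["customer"], "accounts": []})
            entry["accounts"].append(
                {"core": core_name, "balance": rec["balance"]})
    return view

merged = converge(legacy_core=core_a, acquired_core=core_b)
print(merged["C1"])  # one customer record with accounts from both cores
```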
Design pattern 2: Replicate a lighter version of the process for low-value and high-volume transactions
With the rise of banking through mobile phones and instant payment options, especially in countries like India, banks have prioritized scaling the core payment systems.
However, core payment systems are mostly built in legacy technologies and have been hardened over decades by the product vendors to comply with ever-evolving regulatory standards. Scaling these core systems for population-scale use cases is technically very challenging, as they cannot meet the transactions per second (TPS) requirements at the necessary scale. It would also be financially nonviable to execute these transactions for small sums, such as transferring INR50 ($0.58) from a customer in a small shop to the retailer, or making a bill payment of INR200 ($2.31) to a service provider.
The best approach here is to build a mini version of a parallel system with a cloud-native distributed architecture that can scale. For example, for a real-time low-value money transfer, a risk threshold is defined by the type of customer and transaction. The new parallel system can handle the credit and debit without going to the core and settle with the core systems periodically — say every few minutes — to stay in sync.
Likewise, for a low-value bill payment to a service provider such as a mobile network, the transaction between the customer and the provider is handled by this parallel system by crediting the amount to the provider account in the new system without involving the core payment system. One single settlement transaction is taken to the core system for the total amount for the service provider.
This is achieved by a custom bank payment orchestrator component that intelligently routes traffic to the new parallel system instead of sending it to the core. Settlements with the core payment system are done periodically in batches, mirroring the transactions (aggregated or at the individual level), which also provides a throttling mechanism. The transaction value threshold at which traffic is routed to the new component, taking the load off the core system, is configurable based on the bank’s risk tolerance limits.
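As a rough illustration of this routing and settlement logic, the sketch below sends transactions below a configurable value threshold to an in-memory stand-in for the parallel thin ledger and settles them with the core as one aggregated transaction per payee. The threshold, field names, and data shapes are assumptions for illustration, not details of any production system.

```python
# Sketch of the orchestrator routing in this pattern: low-value traffic goes
# to the parallel thin ledger, everything else to the core; thin-ledger
# entries are later settled with the core as one aggregated batch per payee.
from collections import defaultdict

LOW_VALUE_THRESHOLD = 500.0  # illustrative; calibrated to risk tolerance

thin_ledger, core_queue = [], []

def route(txn):
    """Route a transaction based on its value relative to the threshold."""
    target = thin_ledger if txn["amount"] <= LOW_VALUE_THRESHOLD else core_queue
    target.append(txn)

def settle():
    """Produce one settlement total per payee for all thin-ledger entries,
    then clear the ledger (the periodic sync with the core)."""
    totals = defaultdict(float)
    for txn in thin_ledger:
        totals[txn["payee"]] += txn["amount"]
    thin_ledger.clear()
    return dict(totals)

for txn in [{"payee": "telco", "amount": 200.0},
            {"payee": "telco", "amount": 50.0},
            {"payee": "dealer", "amount": 90_000.0}]:
    route(txn)

settled = settle()
print(settled)          # one aggregated core transaction per payee
print(len(core_queue))  # the high-value transaction went straight to the core
```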
Infosys is implementing this solution using its core banking system, Finacle, for one of the largest private banks in India. The cloud-native parallel system, Finacle Thin Ledger, is being built to handle high-volume, low-value, low-risk transactions. In an initial benchmark, Finacle Thin Ledger supported over 2,250 transactions per second, and this is expected to scale up multifold, driven primarily by the surging growth of UPI, with country-level transactions expected to reach a billion per day within the next few years. This throughput is achieved through a distributed transaction processing design that leverages a cloud-native, microservices-based, asynchronous, event-driven architecture.
Figure 2 portrays a design being built to handle the ever-increasing scale since the introduction of UPI, an instant payments system developed by the National Payments Corporation of India (NPCI) in 2016.
Figure 2. Implementation for design pattern 2
Note: NPCI – National Payments Corporation of India
Source: Infosys
UPI allows users to make instant payments between bank accounts using a mobile app. It provides online transaction routing, processing, and settlement services to members participating in UPI, enabling even very low-value transactions to reach a bank from anywhere in the country.
This solution removes the need for core payment systems to handle the higher TPS requirement by offloading the heavy low-value traffic to a distributed system, built outside the core, that scales very well. This pattern is therefore recommended for use cases where transaction values are low but volumes are high. The risk of allowing these transactions must be weighed when calibrating the value threshold configured in the new system.
Design pattern 3: Reduce noncore activities in the core system
In financial services, some services are offered by software as a service (SaaS) providers as a subscription platform to enterprises and banks with a fee-per-transaction model and a wide range of capabilities.
These platforms offer numerous benefits:
- Efficiency: Streamlined processes for tasks, reducing manual work.
- Compliance: Built-in features to ensure adherence to regulatory requirements.
- Data insights: Access to comprehensive data for better decision-making and reporting.
- Customer service: Improved customer experience through online access.
While banking enterprises have widely adopted this pattern, they have realized that these platform offerings are expensive due to the fee-per-transaction charging model. Additionally, banks and financial institutions have tended to become locked into SaaS providers for even noncore capabilities, which are often provided by platform providers to differentiate in the marketplace.
A critical look at the capabilities offered by these platforms reveals a real need to distinguish the capabilities that are core, and can only be realized on the offered platform, from:
- Scenario 1: Those capabilities that are independent and can be moved out to a new custom-built solution.
- Scenario 2: Those capabilities that are inquiry transactions hitting the SaaS platform for read-only data.
- Scenario 3: Those capabilities that can be moved to a new platform but have a hard dependency on the data in the vendor-provided core platform (requiring very low-latency reads and writes).
Here, scenario 1 is straightforward: Move those capabilities to a parallel system built from scratch. On its own, however, it may not justify the change, as the benefits may not offset the cost of running two systems, so the expensive SaaS platform remains in use. When combined with scenarios 2 and 3, it becomes a powerful business case for business and technology leaders to reduce the volume of transactions hitting the core SaaS platform.
Infosys has implemented this pattern for a leading mortgage servicing client in the US. The approach taken for scenario 2 is similar to design pattern 1: replicate the data to a parallel datastore and synchronize it with the core. Because this new parallel datastore holds all the required data, the cost of serving inquiries falls to a fraction of the per-transaction fee otherwise paid to the SaaS platform provider. Similarly, to implement scenario 3, the standard command query responsibility segregation (CQRS) pattern can be leveraged in the around-the-core system.
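As a minimal sketch of the CQRS split described for scenario 3, the code below sends commands (writes) to a stub standing in for the vendor platform and serves queries from a local read model kept in sync by events. All names and data shapes are illustrative assumptions.

```python
# CQRS sketch: commands update the system of record (the SaaS platform,
# stubbed as a dict here), while queries are served from the local read
# model that an event handler keeps in sync.

saas_platform = {}  # stand-in for the vendor core platform (write side)
read_model = {}     # around-the-core datastore (query side)

def handle_command(loan_id, field, value):
    """Write path: update the system of record, then emit a sync event."""
    saas_platform.setdefault(loan_id, {})[field] = value
    on_event({"loan_id": loan_id, "field": field, "value": value})

def on_event(event):
    """Event handler keeping the read model in sync with the write side."""
    read_model.setdefault(event["loan_id"], {})[event["field"]] = event["value"]

def query(loan_id):
    """Read path: served entirely from the local datastore, no SaaS call."""
    return read_model.get(loan_id, {})

handle_command("LN-1", "status", "current")
print(query("LN-1"))  # {'status': 'current'}
```

Inquiries hit only the local read model, so no per-transaction fee is incurred on the read path.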
Figure 3. Implementation for design pattern 3
Source: Infosys
Given that the cloud and API ecosystem is now so mature, building a highly scalable parallel system is a real possibility. SaaS platform providers expose APIs and events that deliver data in real time to keep the local datastore in sync.
In the implementation of design pattern 3, all the inquiry transactions coming from customers are now serviced by the around-the-core system, which has the loan-servicing data synchronized through an event-driven architecture. All downstream systems are decoupled from the original SaaS solution by integrating through this around-the-core system instead. Having the entire loan-servicing data in this system provides further benefits, as discussed in design pattern 1, including AI- and ML-driven analytics to drive personalized servicing for customers.
However, a point to note is that this option involves two platforms with two different user experiences for the servicing team. Care must be taken to ensure that end users’ and agents’ experience-related expectations are clearly discussed and agreed upon with stakeholders before going down this route.
Design pattern 4: Optimize and automate processes around the core
Financial institutions are required by law to implement and maintain a suspicious activity reporting (SAR) program to comply with anti-money laundering regulations. For this, they use automated systems to scan transactions for patterns that might indicate suspicious activity, such as large cash deposits, unusual wire transfers, or transactions with high-risk geographic locations. Specific criteria, like large, unexplained transactions, complex transactions with no apparent economic purpose, or activity linked to known criminals, can trigger the need to file an SAR.
The actual filing happens on vendor-provided financial crime risk management (FCRM) platforms. Based on our experience, the operations team spends considerable effort pulling the required data points to create the report on FCRM platforms, with a lot of back and forth to finish the filing process. This can mean missed deadlines, as well as more people and higher costs. Right-sizing the operations team is often challenging.
Compliance is nonnegotiable for any financial institution, and it has a positive effect on the institution’s brand from investors’ and customers’ perspectives. At the same time, containing costs and keeping time-to-market short have become equally important for the C-suite.
The solution is to automate data collection from disparate sources and make the data available in a dedicated datastore. The operations team can then pull the data into the FCRM platform in one shot and proceed with filing the SAR report. At present, creating a single SAR manually takes between two and four hours.
Infosys is building this solution for one of its largest US regional banking clients, which sees around 2,000 SAR filings each month. The potential to streamline this process is clear. When a transaction is flagged for SAR reporting, the new cloud-native solution responds to the trigger and pulls and collates all the required inputs from various sources using APIs. This helps complete SAR filings within the regulatory timelines, reduces filing errors, decreases manual effort, and enables the operations team to focus on more productive, value-added work.
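The collation step can be sketched as follows, with stub functions standing in for the source-system APIs the real solution would call; the source names and data shapes are assumptions for illustration.

```python
# Sketch of the SAR pre-filing automation: when a case is flagged, pull the
# required data points from each source system (stubbed here) and collate
# them into one landing-zone record ready to push to the FCRM platform.

def fetch_customer_profile(case_id):
    return {"name": "ACME LLC", "risk": "high"}  # stub for a profile API

def fetch_transactions(case_id):
    return [{"amount": 12_000, "type": "wire"}]  # stub for a transactions API

def fetch_prior_filings(case_id):
    return []  # stub for a filing-history API

SOURCES = {
    "profile": fetch_customer_profile,
    "transactions": fetch_transactions,
    "prior_filings": fetch_prior_filings,
}

def collate_sar_inputs(case_id):
    """Pull all inputs for a flagged case into one record, one call per source."""
    record = {"case_id": case_id}
    for section, fetch in SOURCES.items():
        record[section] = fetch(case_id)
    return record

filing = collate_sar_inputs("SAR-2024-0042")
print(sorted(filing))  # ['case_id', 'prior_filings', 'profile', 'transactions']
```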
Figure 4. Implementation for design pattern 4
Source: Infosys
The case for implementing this is clear, as the value outweighs the cost of building and maintaining this solution (Figure 4). The datastore can be cleaned up periodically as it’s only a landing zone for the data collection for the filing process that happens within the FCRM platform.
Bypassing the core for modernization
These four patterns provide effective solutions for banks to address the challenges posed by legacy core systems. By replicating data and synchronizing with the core, creating a parallel system for low-value transactions, reducing noncore activities in the core system, and optimizing and automating processes around the core, banks can foster innovation, reduce costs, and enhance digital capabilities without disrupting their core systems.