Strategic Use of Open Source for Population Scale
India is a model nation for creating sophisticated technology solutions for its billion-plus population. Ten years ago, the Indian government began creating a digital ID system that would interface with other state systems. The government assembled a team of the best talent from Indian public and private sectors to begin working on the ID system, later named Aadhaar (Hindi, meaning “foundation” or “base”). This team designed the architecture so the ID system was scalable to the population and loosely coupled, adaptable enough to evolve as other technologies evolved. Today, most adults in India have Aadhaar IDs, and the system keeps up with generating new unique IDs for the growing population.
Nandan M. Nilekani, co-founder of Infosys and current Non-Executive Member of the Board, led the Aadhaar team. He discussed using open source with carefully planned architecture for Aadhaar, as well as in his work on other nationwide programs for instant payments and streamlined taxes. He is following a similar model of open source and strategic architecture in his current venture, EkStep, an education learning platform.
National IDs
The national ID project began in 2009, in the early days of open source and before cloud computing was widespread. Along with his team, Nilekani used open source software to create a very low-cost, highly scalable ID system.
Nilekani and his team quickly realized that conventional computing solutions couldn’t handle a project of this scale, volume, and technical sophistication. Specifically, each of the 1.2 billion citizens of India needed a unique biometric ID. To ensure that each ID was unique, the team needed to “deduplicate” the IDs: each person’s biometric signature had to be compared against every other record in the database. This was a huge and complex process. Once the database had scaled to 500 million IDs, with 1 million new Aadhaar enrollees added every day, the system was running 500 trillion matches a day to establish the uniqueness of IDs.
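The scale of that figure follows directly from the arithmetic; a quick back-of-the-envelope check in Python, using only the numbers above, reproduces it:

```python
# Back-of-the-envelope check of the deduplication workload described above.
existing_ids = 500_000_000    # records already in the database
daily_enrollees = 1_000_000   # new Aadhaar enrollees per day

# Each new enrollee's biometrics must be compared against every
# existing record to guarantee the new ID is unique.
daily_matches = existing_ids * daily_enrollees
print(f"{daily_matches:,} matches per day")  # 500,000,000,000,000 = 500 trillion
```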
At the time, open source software existed but was not nearly as prevalent as it is today. Nilekani and his team used open source software for many aspects of building Aadhaar, but the biometric software was proprietary. Moreover, no software existed to do what the team needed: deduplicate complex, aggregated biometric information at population scale.
Their creative solution was to design the system architecture so that it could bear the load of deduplication at scale. They used an application programming interface (API) to connect biometric engines made by three different vendors so that “all the engines talked to each other in the same way.” This interface enabled biometric packets to be passed among the engines, each checking for duplicated information, in a process that decreased error rates.
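As a rough illustration of that pattern (with hypothetical class and function names, not Aadhaar’s actual code), the sketch below puts a single interface in front of several vendor engines so a deduplication routine can fan each probe out to all of them:

```python
# Minimal sketch: one common interface so biometric engines from
# different vendors all "talk the same way." Illustrative only.
from abc import ABC, abstractmethod

class BiometricEngine(ABC):
    """The common API every vendor engine is wrapped to implement."""
    @abstractmethod
    def match_score(self, probe: bytes, record_id: str) -> float:
        """Similarity between a probe and a stored biometric record."""

class VendorAEngine(BiometricEngine):
    """Stand-in adapter for one vendor's proprietary matcher."""
    def match_score(self, probe: bytes, record_id: str) -> float:
        return 0.1  # a real adapter would call the vendor's engine here

def is_duplicate(probe: bytes, record_ids, engines, threshold=0.9) -> bool:
    # Fan the biometric packet out to every engine; cross-checking
    # independent matchers is what drove error rates down.
    for rid in record_ids:
        scores = [engine.match_score(probe, rid) for engine in engines]
        if sum(scores) / len(scores) >= threshold:
            return True
    return False

engines = [VendorAEngine(), VendorAEngine(), VendorAEngine()]  # three vendors
print(is_duplicate(b"probe-template", ["id-001", "id-002"], engines))
```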
This solution worked, at a cost of around US$1 per person per month, with each match performed in real time, in microseconds.
Payment interface
Next, Nilekani worked on a demonetization-era project: a streamlined payment system that worked regardless of which bank a person used. He worked on this project in his capacity as the technology and innovation advisor to the National Payments Corporation of India (NPCI), a nonprofit organization owned by the banks.
Again, Nilekani and his team, some of whom had also worked with him on Aadhaar, realized that the software behind traditional payment systems could not manage a project of this volume and scale. And again, they used open source software and an adaptable architecture. The resulting Unified Payments Interface (UPI) was a next-generation system that could handle high volumes, was highly scalable, and had low transaction costs.
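The interoperability at UPI’s core can be sketched in a few lines. This toy model (with made-up banks, addresses, and a transfer helper, not NPCI’s actual design) shows how a central switch can resolve bank-agnostic payment addresses and move money between any two banks:

```python
# Toy sketch of a bank-agnostic payment switch. Illustrative only.
BANKS = {"bankA": {"alice": 1000}, "bankB": {"bob": 50}}           # ledgers
DIRECTORY = {"alice@pay": ("bankA", "alice"), "bob@pay": ("bankB", "bob")}

def transfer(payer_addr: str, payee_addr: str, amount: int) -> None:
    """Resolve both addresses, then debit and credit the right banks."""
    payer_bank, payer = DIRECTORY[payer_addr]
    payee_bank, payee = DIRECTORY[payee_addr]
    if BANKS[payer_bank][payer] < amount:
        raise ValueError("insufficient funds")
    BANKS[payer_bank][payer] -= amount
    BANKS[payee_bank][payee] += amount

transfer("alice@pay", "bob@pay", 200)   # works across different banks
print(BANKS)  # {'bankA': {'alice': 800}, 'bankB': {'bob': 250}}
```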
UPI launched in May 2016 and went from facilitating nearly 100,000 transactions per month to 800 million, now totaling some $25 billion worth of transactions every month. Eighty percent of the transactions are person-to-person payments, and approximately 20 percent are person-to-merchant.
Open source for other centralized platforms
Nilekani’s next project was 100 percent open source and took on designing the back-end of India’s Goods and Services Tax (GST) reform. GST brought India from “having one central excise tax, and every state having its own value-added tax, to one single goods and services tax,” Nilekani described.
Infosys and Nilekani created the Goods and Services Tax Network (GSTN), a central platform that consolidates taxes and information from every state in India. This platform today handles all of India’s indirect taxes for 11 million businesses, organizing data at the individual level for taxpayers and at the invoice level for businesses.
The system works effectively, especially from the perspective of open networking, Nilekani said. Consumers, businesses, and the government experience seamless transactions. The system handles very high volume, is very low-cost, and is built entirely on open source software.
Currently Nilekani is using open source for EkStep, the education learning platform referenced above. He and his team are using knowledge graphs and a wide variety of open source products “to create learning infrastructure, which can be rolled out to millions of children in a highly scalable way.”
“Open source is more than ready for prime time”
Far from wondering whether open source is viable, Nilekani believes it may be the only way to build carrier-grade, high-volume, low-cost systems at population scale. He urged people to be forward-thinking and flexible about using open source. Architecture should be designed to scale linearly and to adapt and evolve with new technology; it should allow for future upgrades, swapping out software, plugging in and unplugging, without the user even knowing. In Nilekani’s work, the user is any one of India’s 1.2 billion people. It turns out that open source software may be the one thing that can match Nilekani’s vision and scale – and the citizens of India have surely benefited from this powerful partnership.
The Internet Infrastructure Transformation and the Indian Software Industry
Costs are increasing for internet infrastructure operators, but the average revenue per user (ARPU) is flat or falling. Operators are feeling pressure to transform – the way they’ve been building infrastructure and services for the past 40 or 50 years is no longer sustainable.
Internet traffic is growing very rapidly because of smartphones, video, and the internet of things (IoT). Operators have had to constantly add network capacity to keep up, so their capital and operating expenditures (capex and opex) have grown in direct proportion to the traffic increases. Their ARPU, however, is not growing proportionally; it is relatively flat for some companies and declining for others. Open networking is enabling internet infrastructure to transform on both the technical side and the business side, addressing these imbalances.
Internet infrastructure transformation: Push and pull
Guru Parulkar, executive director at Open Networking Foundation (ONF) and executive director of Stanford Platform Labs, discussed the ways in which technical innovations and business realities are driving transformation, having a push-pull effect. On the push side, technological and network architecture innovations are enabling internet infrastructure transformation. On the pull side, business realities are forcing the need for transformation and creating the motivation for both operators and vendors to embrace open source and new ways of building network infrastructure.
The push: Disruptive innovations
In the current model, original equipment manufacturers (OEMs) build highly specialized and complicated devices, taking requirements from all possible customers and maximizing device functionality to serve as many customers as possible with one box. The operators, however, cannot customize features because the OEMs’ systems are closed and not programmable; only the OEM can make changes. Operators are thus constrained when it comes to innovating and bringing new services and capabilities to the network. Operators want to change this so they can take back control of their networks, innovate, and move more nimbly.
The pull: Operator business realities
In the current business model, OEMs have essentially become the decision makers for how networks are built and how they behave. OEMs assemble all the components, from silicon to software, to build complete solutions: complicated, highly integrated chassis systems with lots of complex functionality. Operators have grown dependent on these systems. As a result, this has become a high-margin business in which OEMs dictate pricing and thereby determine the network’s capex and opex. The mobile and wireline operators are at the mercy of the OEMs.
The new era of disaggregation, white boxes, and open source makes it possible for multiple companies to participate in building an operator’s network – each vendor providing what it does best, be it silicon, systems, software, or system integration services. Operators are learning from the cloud providers and are building solutions based on simple hardware components and open software. Companies that historically have been low in the supply chain can now have direct relationships with operators and become part of new disaggregated solutions.
Open source is the foundation of the new supply chain that is emerging. The supply chain begins with silicon providers, then progresses to white box vendors, software companies, and then mobile and wireline operators. One recent influence on the supply chain, according to Parulkar, is that “in the past 5 to 7 years, the merchant silicons have matured and are also starting to play an important role in networking.”
The value of software-defined networking (SDN)
Parulkar explained that the underlying technology making much of this transformation possible is SDN, which separates network packet forwarding from the control plane. An open control plane then creates a global network view, which enables anyone to write network applications.
The main benefit SDN offers these operators, Parulkar explained, is that it can help reduce capex and opex, because SDN makes it possible for operators to use simple forwarding devices. These devices can be based on merchant silicon and provided by original design manufacturers (ODMs) or white box suppliers. From there, the control plane and applications do not have to be part of an OEM’s proprietary solution; they can be based on open source software from third parties.
The freedom to use open source for control applications then enables more innovation and accelerates new revenue-generating services, Parulkar explained. Network operators have more control in this open source model, because they can write their own applications, implement their own customizations, or choose to work with a third-party software developer. In all cases, the operator is in control of its network.
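A self-contained toy can make the split concrete. In the sketch below (illustrative only, not the API of ONOS or any real controller), the “switches” hold nothing but match-to-port tables, while a controller application uses the global view to compute a path and install forwarding rules:

```python
# Toy SDN: forwarding tables on "switches," path logic in the controller.
from collections import deque

# The controller's global network view: switch -> {neighbor: output port}.
TOPOLOGY = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s3": 2},
    "s3": {"s1": 1, "s2": 2},
}

# The data plane: each switch only maps a flow ID to an output port.
flow_tables = {switch: {} for switch in TOPOLOGY}

def shortest_path(src: str, dst: str):
    """Breadth-first search over the controller's global view."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in TOPOLOGY[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])

def install_route(flow_id: str, src: str, dst: str) -> None:
    """A 'network application': compute a path, push one rule per hop."""
    path = shortest_path(src, dst)
    for here, nxt in zip(path, path[1:]):
        flow_tables[here][flow_id] = TOPOLOGY[here][nxt]

install_route("flow-a", "s1", "s3")
print(flow_tables)  # s1 now forwards flow-a out port 2, toward s3
```

Because the application layer is just software behind an open interface, an operator can swap in its own routing policy without touching the boxes below, which is the kind of control Parulkar describes.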
ONF’s approach
ONF is an operator-led, nonprofit consortium helping operators transform their access and edge networks with disaggregation, white boxes, and open source. Its work both yields substantial cost savings (capex and opex) and enables operators to deliver innovative new services – all while giving operators control of their networks. “ONF has been at the forefront of this transformation,” Parulkar explained, and would be a good partner for software companies working in this area.
Open source is the new way of building software. Parulkar argued that India’s software industry, with the most highly skilled software workers in the world, is well positioned to lead this transformation.
Technical Deep Dive 1: Virtualized Broadband Access
The first speaker at the Technical Deep Dive session was Saurav Das, who was part of the original team at Stanford that made SDN popular and is currently Vice President of Engineering at ONF.
Das discussed virtualizing broadband access, and in particular SEBA (SDN-Enabled Broadband Access), the project he leads at ONF. SEBA is an open source platform that virtualizes broadband networks. It aims to support a multitude of virtualized access technologies at the edge of the carrier network, including passive optical networks (PON), G.fast, and DOCSIS. SEBA extracts and “cloudifies” functions that traditionally ran in complicated chassis-based PON systems, enabling operators to leverage white-box, single rack-unit PON systems from multiple vendors while running common control software.
Virtualizing broadband access
In traditional residential access, the optical line terminal (OLT) in the central office connects through a fiber distribution network to optical network units (ONUs) at the residence. Upstream, several OLTs connect to an Ethernet aggregation switch, and beyond that to a broadband network gateway (BNG) – basically a specialized router that connects to the actual backbone network.
Das and his team, working with the open source community, disaggregated residential access by virtualizing it with open software. The Open Compute Project contributed the open hardware for the project, and ONF created VOLTHA (Virtual OLT Hardware Abstraction) so the control software could run outside the box. VOLTHA abstracts the OLT and ONUs in a PON and makes them appear as an OpenFlow switch to the SDN controller (ONOS). VOLTHA essentially hides the low-level details of running a PON, such as the OMCI stack, the GEM ports, T-CONTs, and schedulers used to enable QoS, and the dynamic bandwidth allocation algorithm used in PONs. The OMCI stack is implemented in the VOLTHA core, and southbound adapters, such as the OpenOLT and OpenONU adapters, “talk” to the physical devices.
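The essence of that abstraction can be shown in toy form (with hypothetical class names, not VOLTHA’s actual code): the controller sees only a logical switch with numbered ports, while the PON-specific configuration stays behind the adapter:

```python
# Toy version of the VOLTHA idea: present a PON as a plain logical switch.
class PonDevice:
    """Stand-in for a physical OLT and its attached ONUs."""
    def __init__(self, olt_id: str, onu_ids: list):
        self.olt_id = olt_id
        self.onu_ids = onu_ids

class LogicalSwitch:
    """What the SDN controller sees: numbered ports and flow rules."""
    def __init__(self, pon: PonDevice):
        # Port 0 is the OLT uplink; ports 1..N map to individual ONUs.
        self.ports = {0: f"uplink:{pon.olt_id}"}
        self.ports.update(
            {i + 1: f"onu:{onu}" for i, onu in enumerate(pon.onu_ids)}
        )
        self.flows = []

    def add_flow(self, match: dict, out_port: int) -> None:
        # A real adapter would translate this into OMCI, GEM port, and
        # T-CONT configuration on the hardware; here we just record it.
        self.flows.append((match, out_port))

pon = PonDevice("olt-1", ["onu-a", "onu-b"])
switch = LogicalSwitch(pon)
switch.add_flow({"subscriber_vlan": 100}, out_port=0)  # send upstream
print(switch.ports)  # {0: 'uplink:olt-1', 1: 'onu:onu-a', 2: 'onu:onu-b'}
```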
SEBA
SEBA contains VOLTHA, ONOS (an SDN controller), the network edge mediator (NEM), and Trellis. The NEM is the layer of software that orchestrates the operation of the entire pod: it takes care of the software and hardware life cycle inside the pod and collects FCAPS (fault, configuration, accounting, performance, and security) and inventory information. ONF provides its own NEM, which handles orchestration, modeling, and FCAPS collection; however, other NEMs can be brought into a SEBA pod as long as they perform the same functions.
Trellis is an open source leaf-spine fabric with a lot of functionality and features when used as a full fabric. In SEBA, Trellis is used in its simplest form – one or two leaf switches – but the underlying SDN concept is the same as for the PON. “White box” describes the open hardware, which comes from the Open Compute Project and runs open source software from ONF.
A SEBA pod is modular and can be customized for different operators’ needs and back-end systems. A vendor can deploy its OLT without worrying about compatibility with other components in the system, as long as the OLT is compatible with VOLTHA. This openness allows other hardware and software to be used in a residential access ecosystem.
All ONF projects are similar in that they are modular and can be mixed and matched flexibly. CORD is ONF’s umbrella project, and SEBA falls under CORD. Both have healthy, active communities, including smaller working groups that make this innovation possible. Das discussed several other interesting aspects of ONF work that tie into SEBA and CORD, such as mobile infrastructure, BNGs, and a simulator called BBSim. By the time the session closed, the participants had received a thorough review of open source virtualized broadband access.
Technical Deep Dive 2: Multi-Access Edge Cloud
The second speaker at the Technical Deep Dive session was Oguz Sunay, chief architect of mobile networking at the Open Networking Foundation, where he drives the transformation of wireless network infrastructure and carrier business models toward 5G. His session focused on the multi-access edge cloud, which converges the mobile and broadband network edge to enable operators to deploy the next generation of solutions.
“5G is here,” Sunay said. Yet despite promising much better data rates, data volumes, mobility, connected-device density, energy efficiency, and end-to-end latency, the initial stage of 5G “has been a little bit disappointing.” In early deployments, the data rate difference between LTE and 5G is negligibly small, so the industry should see 5G as an enabler of telecom’s long overdue transformation, rather than as the driver of that transformation.
Sunay emphasized that 5G is going to drive a situation similar to the one Saurav Das described in the earlier Deep Dive: ARPU will decline steeply as soon as 5G is implemented. He traced the evolution from disaggregation to virtualization, that is, network function virtualization (NFV) for telecom. NFV decouples vertically integrated elements: separating software from hardware lets the software run on commodity hardware. This transformation has already taken place at a number of operators. Beyond NFV, the next layers to be disaggregated are radio resource control (RRC), packet data convergence protocol (PDCP), radio link control (RLC), and the medium access control (MAC) layer.
COMAC
Sunay discussed how these layers could be disaggregated to show why edge functionality is needed and how 5G might enable a network cloud. He described additional access technologies that most telecoms can actively use for their subscribers. One of these technologies is broadband access, which leads to the converged multi-access and core, or COMAC.
COMAC is an open source platform, optimized for 5G, that builds on OMEC, an open source, production-grade Evolved Packet Core (EPC). Sunay described how COMAC, built on a microservices architecture leveraging SDN and cloud principles, will enable greater infrastructure efficiencies as well as common subscriber authentication and service delivery capabilities, allowing access, edge, core, and public clouds to work in a centralized and coordinated manner.
Re-Architecting MME
Sunay’s major project now is re-architecting the mobility management entity (MME) and testing the functionality of this new MME. “In 2019, under the leadership of Infosys, we are going to continue hardening the MME, and this is going to be complete by the end of the year.” Sunay noted that he and his team are starting to engage with SEBA. Part of this process includes authenticating an optical network unit and then assigning it an IP address so that it can reach the intranet. Focused steps like this are part of Sunay’s transformation work at the Open Networking Foundation and bring multi-access to the edge.
Operationalizing Open Networking Solutions
This panel was moderated by Anurag Vardhan Sinha, who heads the Telecom, Media & Entertainment business for Infosys and has been working with some of the world’s leading communication service providers and media companies to leverage technology and partnerships to create sustainable differentiation. Anurag firmly believes that in the 5G era, telecom service providers will create sustainable value by focusing on software-defined networking (SDN) and network function virtualization while creating next-generation, innovative platforms and ecosystems. The distinguished panelists shared their perspectives on operationalizing innovations to benefit consumers and businesses.
Satish Jamadagni of TSDSI talked about spectrum costs and the additional cost of deploying services, emphasizing that the cost of the network must be kept as low as possible. He also discussed the importance of open source as a common denominator upon which providers can innovate and begin to virtualize services.
Dr. Inder S. Gopal of the Indian Institute of Science discussed launching the first open source networking group, OpenDaylight, at the emergence of SDN. In Dr. Gopal’s words, OpenDaylight took the approach that “it’s not enough just to make an architectural change; what we really need to do is also transform in some sense the way networking is delivered and consumed, and move from a closed proprietary system to an open source system.” The objectives of OpenDaylight were to reduce cost, make it easier for the industry to build new products, and open up the network. He also talked about the proliferation of open source projects and the related need to make them coherent, similar to the curated open source model Guru Parulkar described earlier in the day. At the same time, he said, the evolution of open source must be accelerated so that it becomes more like a standardized, Linux-type platform.
Next, Aseem Parikh of ONF added perspectives from operators leading the adoption of open networking. He explained how important it is for these operators to create incremental solutions that will be part of a 50-year evolution, and he urged sharing resources and R&D and following Guru Parulkar’s “curated open source model.”
Sumeet Chadda of Axiata Group addressed how stakeholders in this ecosystem collaborate to accelerate progress. He explained that network equipment builders are not interested in open source, believing that they already have systems that meet network KPIs. One of his biggest concerns was how quickly innovations can be upstreamed into the open community and then downstreamed to become standards; he thinks this process would be accelerated if the open source community were to agree on a set of standards to follow.

An earlier keynote speaker, Rajan Mathews of the Cellular Operators Association of India, was another panelist. He noted that the open source community tended “to be a free-thinking, free-acting group of folks and we balk at constraints like policy and regulation.” When the great innovations from open source are brought to the telco world, they still have to work with established policies, regulations, and testing.
Jamadagni agreed that policy concerns exist, but noted that from day one, everyone knows the rules and the system in which they are operating. He pointed to a counterpoint in companies like Facebook, which rely on users’ personal data as a central part of their business; in those environments, the rules are nonexistent and constantly changing, posing different challenges for open source. Jamadagni also identified an open source security challenge: because the code is available to anyone, any loopholes become widely known. He talked about the importance of customization and diligence in making systems and networks secure.
Aseem Parikh interjected at this point to say that he thinks it’s better for “everyone to know of a flaw because then anyone can fix it.” Jamadagni countered that there won’t always be time to get a flaw fixed; one breach is one too many. Dr. Gopal echoed Parikh’s points, saying that “open source is in fact a lot more secure” and “in the long term, a lot more reliable and scalable than many proprietary assessments.”
The conversation then turned to the skill set of India’s tech workers and how India can best position itself regarding open source. Mathews said that along with outsourcing its skilled workers, India has also outsourced intelligence, and it now needs to bring that intelligence back in network design, maintenance, and operation. These are very different skill sets, he argued, and ones India needs to develop. Such skills are hard to develop for the telco environment, though, because most R&D is happening in the OEM world.
Dr. Gopal highlighted that open source provides a way of executing collaborative development and can help India’s workers gain these skills. Jamadagni, for his part, disagreed with the suggestion that innovation is driven by the OEMs.
The final topic was the most strategic use cases for open source. Jamadagni said that open source not only saves money and creates software opportunities for the Indian people, it also puts people in a collaboration mindset, an essential approach in today’s organizational environment. The intrinsic nature of the operator’s business is changing, he said, toward disaggregation and open source. “Open source is simply the way to go,” he closed.
Mathews said that in India, open source has to be socially oriented, especially regarding 5G and education, agriculture, health, smart cities, and waste management. Dr. Gopal agreed and said that his IUDX project, a data exchange for smart cities, was built with open source. Chadda said that he and his group look forward to using open standards to ease capex and opex pressures through capabilities like zero-touch operations and DevOps, ultimately lowering costs and making networks easier to maintain.
Anurag closed the panel by thanking everyone for sharing their perspectives, saying that it was a phenomenal opportunity for people to participate and embrace the open network journey.