AI/Automation
AI Trustworthiness in Aerospace, Fintech, Automotive, and Health Care
Artificial intelligence (AI) initiatives are challenging even for the most sophisticated companies. But not all AI projects are equal. In highly regulated industries, such as aerospace, fintech, automotive, and health care, the barriers to adoption can be formidable.
Regulatory structures are rapidly evolving to address concerns about AI’s impact on data security, privacy, and customer safety. Understanding these new rules can make or break a company’s potentially transformative AI strategy.
AI spending is expected to grow at a compound annual growth rate of 46% over the next five years. Much of AI’s value will derive from human-machine augmentation, powered by response-based systems. Companies will also be able to reduce costs and manage technology risk, which in turn helps ensure that AI applications deliver real business value and align with both global and geography-specific ecosystems.
However, few of those benefits are likely to materialize for companies that can’t navigate shifting regulatory systems. Government bodies and industry consortiums worldwide are coming together to determine the rules for AI. With that in mind, we assessed where these four industries stand on the critical elements of AI trustworthiness. Below are highlights from an Infosys whitepaper detailing trends in the aerospace, fintech, automotive, and health care industries.
AI trustworthiness in the aerospace industry
AI trustworthiness in the aerospace industry is primarily driven by two perspectives: safety and explainability.
The European Union Aviation Safety Agency (EASA) has created building blocks for AI. These include learning assurance, AI explainability, AI safety risk mitigation, and a foundation layer for AI adoption in the aerospace industry. Within this framework, the key elements involve technical robustness and safety, accountability, privacy and data governance, and societal and environmental well-being. On the business process side, where human- and machine-oriented processes will converge, various competencies are in development, including assessing how relevant AI is to a given process, road maps to implementation, technology stacks, and safety.
For AI adoption to be affordable, key elements of the technology must be both predictable and standardized. Data privacy and governance mandates are evolving so that captured data is assessed from both fairness and non-discrimination perspectives. Explainable AI (XAI) is needed in each workflow.
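To make the explainability requirement concrete, the sketch below uses permutation importance from scikit-learn to rank which inputs drive a model’s predictions. The synthetic maintenance-style features and the choice of method are illustrative assumptions, standing in for whatever XAI technique a given workflow or regulator actually requires.

```python
# Minimal sketch: surfacing feature attributions so a model's decision can be
# reviewed in an approval workflow. Dataset and feature names are hypothetical;
# permutation importance stands in for whichever XAI method is mandated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["vibration", "temperature", "cycles_since_overhaul", "sensor_drift"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>24s}: {score:.3f}")
```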
AI trustworthiness in fintech
Fintech companies were among AI’s early adopters, an outgrowth of the industry’s embrace of digitization. AI has been used for data processing and synthesis, along with rules-based business processes lower in the value chain.
Because privacy is paramount, encryption layers and policies are needed at every stage of the technology stack. AI platforms also play important fraud-prevention roles in business portfolios that include credit data and loyalty programs. With banking, trading, and commercial units all having customer-facing attributes, a structured transition to AI is needed, with a road map that moves from cognitive to self-managed to self-resilient systems. A careful balance of governance, ethical awareness, financial data monitoring, and fair decision-making must be the cornerstone of all fintech initiatives.
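As a rough illustration of that fraud-prevention role, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest. The transaction features, thresholds, and the idea of routing flagged items to review are hypothetical; a real platform would layer encryption and audit logging around such a model, as noted above.

```python
# Minimal sketch: anomaly-based fraud screening on transaction features.
# Feature columns and the contamination setting are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: amount (USD), seconds since last transaction, merchant risk score
normal = np.column_stack([rng.lognormal(3.5, 0.6, 5000),
                          rng.exponential(3600, 5000),
                          rng.uniform(0, 0.3, 5000)])
suspicious = np.array([[9500.0, 12.0, 0.90],   # large amount, rapid-fire, risky merchant
                       [7200.0, 30.0, 0.85]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = detector.decision_function(suspicious)   # lower score = more anomalous
flags = detector.predict(suspicious)              # -1 = flagged for review
print(list(zip(scores.round(3), flags)))
```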
As such, fintech consortiums are working together to ensure that end users accept this AI-driven revolution, with rules governed by geography-specific institutions, regulatory bodies, and business organizations.
AI trustworthiness in autonomous vehicles
AI in autonomous transportation has matured; it’s now used in connected vehicles for compliance, safety testing, secure communications, and platform services (Tesla is a prominent example). Control and vision systems are widely deployed, and safe adoption has been accelerated through encrypted batch streaming combined with data analytics. For such critical systems, redundancy must also be built in to ensure human safety, while the use of cloud API-based automation and intelligent analytics increases each system’s viability.
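One simple way to picture that built-in redundancy is a voting scheme across independent sensors, as in the sketch below. The sensor names, agreement tolerance, and fallback behavior are hypothetical, chosen only to show the pattern of degrading to a safe state when redundant readings disagree.

```python
# Minimal sketch: 2-out-of-3 voting across redundant range sensors, with a
# conservative fallback when the readings disagree. Sensor names, tolerance,
# and the fallback action are hypothetical placeholders.
from statistics import median

def fused_range(readings: dict[str, float], tolerance_m: float = 0.5):
    """Return a fused distance if at least two sensors agree; otherwise None."""
    values = list(readings.values())
    mid = median(values)
    agreeing = [v for v in values if abs(v - mid) <= tolerance_m]
    return sum(agreeing) / len(agreeing) if len(agreeing) >= 2 else None

readings = {"lidar": 24.8, "radar": 25.1, "camera_depth": 39.0}  # camera disagrees
distance = fused_range(readings)
action = "proceed" if distance is not None else "degrade_to_safe_stop"
print(distance, action)
```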
Implementation is made easier through automated decision response, since removing human reaction delays from routine driving decisions can make the vehicle safer. In this paradigm, data flows seamlessly through all elements of the system and adheres to strict policies. AI trustworthiness is certified by ensuring the unmanned system can navigate many different scenarios and environments without failure.
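That scenario-based view of certification is often operationalized as a sweep over simulated conditions. The sketch below shows the shape of such a check with a stubbed planner; the scenario dimensions, the planner logic, and the set of valid actions are all hypothetical placeholders for a real simulation harness.

```python
# Minimal sketch: verify a (stubbed) planner returns a valid decision in every
# combination of conditions. All scenario names and logic are hypothetical.
from itertools import product

WEATHER = ["clear", "rain", "fog", "snow"]
LIGHTING = ["day", "dusk", "night"]
EVENTS = ["none", "pedestrian_crossing", "sudden_braking_ahead"]
VALID_ACTIONS = {"proceed", "slow_down", "stop"}

def plan(weather: str, lighting: str, event: str) -> str:
    """Stubbed decision policy; a real stack would call the driving planner."""
    if event != "none":
        return "stop"
    if weather in {"fog", "snow"}:
        return "slow_down"
    return "proceed"

failures = [(w, l, e) for w, l, e in product(WEATHER, LIGHTING, EVENTS)
            if plan(w, l, e) not in VALID_ACTIONS]
print(f"checked {len(WEATHER) * len(LIGHTING) * len(EVENTS)} scenarios, "
      f"{len(failures)} failures")
```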
There are currently no federal regulations specifically related to autonomous vehicle technology, although U.S. agencies are examining the issue.
AI trustworthiness in health care
The health care sector is taking a product and platform approach to AI. The technology is used to assess historical data, authenticate data, and speed up real-time decision systems. However, oversight by doctors is still critical to ensure ethical governance. Adoption is going full steam ahead, with descriptive, predictive, and prescriptive AI embedded across the value chain. Big data is used to provide pattern recognition and simulate surgical outcomes.
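One way to picture pattern recognition with a doctor still in the loop is a risk model whose uncertain outputs are routed to a clinician rather than acted on automatically, as in the minimal sketch below. The features, thresholds, and synthetic data are illustrative assumptions, not a clinical model.

```python
# Minimal sketch: a pattern-recognition model whose output is a probability that
# is escalated to a clinician when uncertain, keeping a doctor in the loop.
# Features, thresholds, and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Columns: age (scaled), biomarker level, prior-condition flag
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(scale=0.7, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X[:5])[:, 1]   # risk estimates for five example patients

for i, prob in enumerate(p):
    if 0.3 < prob < 0.7:               # uncertain: escalate to human review
        print(f"patient {i}: risk {prob:.2f} -> refer to clinician")
    else:
        print(f"patient {i}: risk {prob:.2f} -> routine pathway")
```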
Successful ethical guidelines that will be accepted by the public require many attributes, including data safety, privacy, XAI, consent, liability, data integrity, and algorithmic accountability. Adopting natural language processing, deep learning, and knowledge representation alongside AI will be necessary to meet these requirements.
Currently, there are no national regulations specific to AI in health care. However, many different policy groups, task forces, practitioners, regulatory bodies, researchers, and councils are forming consortiums specific to different areas of health care, with most currently drafting guidelines.
Moving forward with AI trustworthiness
AI is a prominent technology in each of these industries and is used to improve efficiency, enhance the customer experience, and create autonomous decision systems (while keeping humans in the loop). AI allows businesses of all types to sprint toward Industry 4.0 and Society 5.0. As highlighted, all ecosystem partners must work closely with regulators and professional bodies to develop standards and best practices for this critical technology. Infosys is working hard in this area, contributing to SAE International standards development as part of technical committees. In time, this work will give companies the confidence to accelerate their AI adoption strategies, unleashing the full potential of humans and leading to comprehensive and sustainable growth.