How to secure your business against deepfakes: The role of AI and zero trust

Insights

  • Deepfakes, synthetic media produced and altered to deceive, threaten businesses by casting doubt on communication authenticity, causing delays and financial losses.
  • Personalized disinformation targets small groups, creating discord and damaging reputations.
  • To fight back, firms use AI-driven techniques such as photoplethysmography-based detection and data triangulation.
  • But evolving deepfake AI may soon outstrip AI defenses.
  • Firms should proactively use C2PA standards to secure data, with training and a zero-trust approach for defense.

Deepfake technology, capable of spreading mis- and disinformation, is the topic of the moment. More than 1,400 security experts recently warned the World Economic Forum that deepfake risks could be more catastrophic than inflation, extreme weather, and even war.

So what are deepfakes? Put simply, deepfakes — videos, audio, or still images digitally manipulated to deceive a group or a specific individual — sow doubt about the authenticity of any communication. This makes it harder to verify emails, video calls, and voice messages, leading to confusion and delays in decision-making.

But what does this mean for businesses?

Our AI experts at Infosys warn that the effectiveness and scale of deepfake attacks on enterprises will only increase. As Amber Boyle, an ex-FBI cybersecurity expert at Infosys Consulting, says, “It’s all about aggregating data. Attackers now have a scary amount of knowledge on potential victims and can use deepfake technology to weaponize audio and video to make a significant dent into the financial and reputational health of the organization.”

Insider threat is still the most frequent source of serious system compromises. A well-crafted and targeted deepfake video could potentially trick or blackmail employees, contractors, or partners into committing or facilitating a system breach.

This facilitates financial fraud and fosters mistrust.

Beyond the enterprise perimeter, this abuse of trust can significantly impact businesses. Using AI tools, attackers can create personalized disinformation generated to appeal to small customer groups (think soccer moms or dads in a specific town). They can also create deepfakes that microtarget specific individuals with disinformation, based on knowledge of their preferences, biases, and concerns.

These sophisticated and evolving deepfake threats raise serious concerns for all firms, big and small.

“For corporations, it’s about the financial and reputational consequences. And letting very important data out of the door,” says Amber Boyle.

Modern tools as defense against deepfakes

AI deepfakes are a clear threat, but AI also serves as a defense. For example, advanced pattern recognition can triangulate data sources to verify identities. SymphonyAI, an enterprise software provider, acquired NetReveal, a fraud detection platform, from UK-based BAE Systems Intelligence in 2022. NetReveal uses deep learning to uncover social engineering attacks, mapping connections between known entities and deepfakes through phone numbers, emails, and other identity markers.
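
To make the idea of data triangulation concrete, here is a deliberately simplified sketch — not NetReveal’s implementation — of how shared identity markers such as phone numbers and email addresses can link an incoming request to known fraud clusters. All entity names and data are hypothetical.

```python
# Illustrative sketch only: a simplified view of identity triangulation,
# not NetReveal's actual implementation. Entity names and data are hypothetical.
from collections import defaultdict

def build_marker_index(known_fraud_entities):
    """Index known fraudulent entities by their identity markers
    (phone numbers, email addresses, device IDs, etc.)."""
    index = defaultdict(set)
    for entity in known_fraud_entities:
        for marker in entity["markers"]:
            index[marker].add(entity["id"])
    return index

def triangulate(request_markers, marker_index, threshold=2):
    """Flag a request whose markers overlap with a known fraud cluster."""
    hits = defaultdict(int)
    for marker in request_markers:
        for entity_id in marker_index.get(marker, ()):
            hits[entity_id] += 1
    return {eid: count for eid, count in hits.items() if count >= threshold}

known_fraud_entities = [
    {"id": "fraud-ring-A", "markers": {"+1-555-0100", "ceo-clone@example.com"}},
]
index = build_marker_index(known_fraud_entities)
print(triangulate({"+1-555-0100", "ceo-clone@example.com"}, index))  # {'fraud-ring-A': 2}
```

The more independent markers a request shares with a known bad actor, the stronger the triangulated signal — which is why aggregating data works for defenders as well as attackers.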

With attackers now able to replicate a person’s voice from just three seconds of audio, and AI face-swap video technology becoming increasingly accessible as generative AI tools proliferate, the stakes are higher than ever. This is especially concerning when attackers can easily build a detailed profile of their target from social media and exploit potential vulnerabilities.

To defend against these attacks, AI can detect subtle alterations made to pictures and videos and employ techniques such as photoplethysmography (PPG), which infers blood flow from subtle color changes in the skin, to identify AI-generated images or videos. In one study, blood flow frequency was used as the main feature of a deep learning model. By analyzing blood flow frequency in the face and neck regions of a video, the researchers found that real and fake videos exhibit significantly different frequency distributions. This shows that blood flow is a useful signal for sifting deepfakes from real footage. Intel’s FakeCatcher, which uses PPG, reports a real-versus-fake detection accuracy of 96%.
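
As an illustration of that frequency analysis — and not Intel’s FakeCatcher itself — the sketch below assumes a per-frame mean skin-color signal has already been extracted from the face region and checks how much of its spectral power falls in the plausible human heart-rate band. The threshold is purely illustrative.

```python
# Minimal sketch of PPG-style frequency analysis, not FakeCatcher itself.
# Assumes `signal` is the mean green-channel value of facial skin pixels
# per video frame (extracting that signal from video is out of scope here).
import numpy as np

def heart_rate_band_power(signal, fps):
    """Return the share of spectral power inside the plausible
    human heart-rate band (~0.7-4 Hz, i.e. roughly 42-240 bpm)."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2     # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    total = spectrum[1:].sum()                      # skip the zero-frequency bin
    return spectrum[band].sum() / total if total > 0 else 0.0

# A real face tends to show a strong periodic component in the heart-rate band;
# many synthetic faces do not. The threshold below is purely illustrative.
def looks_real(signal, fps, threshold=0.4):
    return heart_rate_band_power(signal, fps) > threshold
```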

Other techniques include working out whether the media is fake by looking closely at its digital footprint. A generative model creates a deepfake pixel by pixel, leaving traces of its process. AI can detect these traces, or digital footprints, even in deepfakes that are convincing to the human eye (Figure 1).

Figure 1. AI can detect fake images by finding alterations in light, placement, and texture

Source: Infosys

In this solution, a neural network-based algorithm scrutinizes images down to the individual pixel. Trained on thousands of deepfake and genuine images, it learns the distinguishing features of fakes. By analyzing signals such as image resizing and swapped-in features (including light conditions, contours, and inconsistent skin textures), the technology can differentiate between authentic and manipulated content.
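
For readers who want to see the shape of such a detector, here is a minimal PyTorch sketch of a binary real-versus-fake image classifier. It illustrates the general approach rather than the specific model described above; the architecture, data, and training step are assumptions.

```python
# Minimal PyTorch sketch of a binary real-vs-fake image classifier.
# Illustrative only: the architecture, data, and training step are assumptions,
# not the model described in the article.
import torch
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)   # single logit: fake vs. real

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DeepfakeClassifier()
loss_fn = nn.BCEWithLogitsLoss()              # binary cross-entropy on the logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random batch standing in for
# labeled real (0) and fake (1) face crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```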

However, many experts believe deepfake detection algorithms will soon struggle to differentiate between real and fake.

“As deepfake technology improves, including AI faceswap video, it will become harder to distinguish real from fake,” says Sangamesh Shivaputrappa, an information security expert at Infosys. “Security controls that rely solely on identifying deepfakes might become obsolete.” Indeed, of 23 AI experts who attended the NeurIPS AI conference in New Orleans in December 2023, 17 thought AI-generated media would eventually become undetectable. Only one believed reliable detection would remain possible.

Experts predict that AI tools that produce deepfakes for business scams (through the use of advanced generative adversarial networks) will become so sophisticated that firms will have to watermark and fingerprint all important data proactively. This involves maintaining a database of manifests that indicate whether an image, video, or other media has been tampered with. Every manifest includes assertions (modifications made to the asset), a claim (a declaration of provenance by a particular owner), and a claim signature (a digital signature on the claim). Whether an asset is synthetic can itself be embedded as an assertion.
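
The sketch below illustrates that manifest structure in simplified form. It is not the actual C2PA format, which uses binary structures and X.509 certificate chains; the HMAC here stands in for a proper digital signature, and all names and keys are hypothetical.

```python
# Simplified illustration of a provenance manifest in the spirit of C2PA:
# assertions, a claim, and a claim signature. The real C2PA specification uses
# binary structures and certificate-based signatures; the HMAC below is only
# a stand-in, and all names and keys are hypothetical.
import hashlib, hmac, json

def build_manifest(asset_bytes, owner, modifications, is_synthetic, signing_key):
    assertions = [
        {"label": "actions", "data": {"actions": modifications}},
        {"label": "synthetic_media", "data": {"synthetic": is_synthetic}},
    ]
    claim = {
        "owner": owner,                                          # provenance declaration
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),   # binds claim to the asset
        "assertions": assertions,
    }
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(signing_key, claim_bytes, hashlib.sha256).hexdigest()
    return {"claim": claim, "claim_signature": signature}

def verify_manifest(asset_bytes, manifest, signing_key):
    claim = manifest["claim"]
    claim_bytes = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(signing_key, claim_bytes, hashlib.sha256).hexdigest()
    untampered = hashlib.sha256(asset_bytes).hexdigest() == claim["asset_hash"]
    return untampered and hmac.compare_digest(expected, manifest["claim_signature"])

key = b"example-signing-key"
media = b"...image bytes..."
manifest = build_manifest(media, "Acme Corp", ["resized", "color-corrected"], False, key)
print(verify_manifest(media, manifest, key))   # True unless the asset or claim changed
```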

In support, big players such as Google, TikTok, Sony, Leica, and Qualcomm are backing initiatives such as the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA).

Going beyond AI: Training, awareness, and privileged access

A key challenge in responding to these threats is enhancing employee skills and awareness, so that employees can recognize potential attacks and build products that are more resilient to them. Firms must deploy synthetic media awareness training and embed a responsible-by-design approach across their operations.

“I think social engineering is the biggest deepfake threat vector, because there’s not a lot of training and awareness out there,” says Boyle. “Many employees and even CEOs we talk to are astonished at how real this is. If you do a demo, people are just shocked.”

Firms will also have to make clear that no one is above such training, nor are they above following security protocols: “Even if you are the CEO, if you’re stepping outside of company protocols and business policies, you’re putting everyone in jeopardy, along with the crown jewels of the company,” Boyle adds.

To counter this issue, firms should think carefully about privileged access, says Boyle, and should consider a zero-trust approach as part of their defense. “When you’re authorizing escalation of privileges, you should be locking that down from both a ransomware and malware perspective,” she says. “This sort of surgical authentication means there’s no bypass, no matter who you are in the firm. Assets should also be tagged in terms of importance, so that if data X or Y is compromised, it will cost firms this much money.”
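
What might such “surgical” authorization look like in practice? The minimal sketch below assumes assets are tagged by business impact and that escalations touching crown-jewel assets require out-of-band confirmation; the policy values and function names are hypothetical.

```python
# Minimal sketch of "surgical" privilege escalation checks with asset tagging.
# The policy values, asset names, and verification hooks are hypothetical.
ASSET_TAGS = {
    "payroll-db":    {"impact_usd": 5_000_000, "tier": "crown-jewel"},
    "marketing-cms": {"impact_usd": 50_000,    "tier": "standard"},
}

def authorize_escalation(user, asset, mfa_passed, device_compliant, out_of_band_ok):
    """No bypass, regardless of role: every escalation re-verifies the requester,
    and crown-jewel assets additionally require out-of-band confirmation."""
    tag = ASSET_TAGS.get(asset, {"tier": "standard"})
    if not (mfa_passed and device_compliant):
        return False
    if tag["tier"] == "crown-jewel" and not out_of_band_ok:
        return False   # e.g. a callback on a known number, not a reply to the request
    return True

# Even the CEO's request fails without out-of-band confirmation on a crown-jewel asset.
print(authorize_escalation("ceo", "payroll-db", True, True, False))   # False
```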

Zero trust: Security teams to become a bigger force in business

Even before deepfakes and other AI-generated content, social engineering was a significant threat. Workers are now hyperconnected, and enterprises increasingly depend on video conferencing and collaboration tools. This contributed to nearly four million financial frauds of $1,000 or less in the US alone in 2022. The global cost of cybercrime is projected to reach $10.5 trillion annually by 2025.

No wonder then that in 2023, 88% of senior security executives said adopting a zero-trust approach was “very important” or “important”. Newer techniques such as AI-versus-AI detection, platforms like NetReveal, PPG-based analysis, and tamper-evident asset manifests through C2PA sit alongside established approaches such as zero trust in the defense stack against deepfakes and other AI-driven cyberthreats.

With deepfakes permeating through enterprise borders, a zero-trust posture means you not only don’t trust the bad guys, but you don’t immediately trust the good guys either.

This “never trust, always verify” approach secures identities, endpoints, applications, data, infrastructure, and networks, while providing visibility, automation, and orchestration.

A zero-trust enterprise architecture for fighting deepfakes requires a proactive, multilayered approach that combines prevention, verification, and continuous adaptation, building on what we at Infosys describe as our Live Enterprise approach. This means continuously adjusting security policies as incidents occur, and adopting technologies, ways of working, and policies that support business agility while enhancing security.

Building and maintaining a zero-trust model requires investment in technology, training, and human resources, across the five areas of identity, devices, network, apps, and data (Figure 2).

Figure 2. Zero trust as a holistic defense strategy

Source: Infosys

Deepfakes are a significant part of the threat landscape. Their fast-evolving nature (through ever more powerful generative adversarial networks and large language models) and success as attacks mean businesses must adopt a range of techniques to counter them and other threats.

Identity is the area where most novel attacks, including social engineering, happen. The average staff member now has 30 digital identities. Meanwhile, 52% of firms don’t protect identities linked to business-critical applications, and nearly half lack identity security controls around cloud infrastructure and workloads. Going further, security leaders say credential theft is their number one area of risk. Enterprises should use zero trust to build security hygiene measures, including strong authentication mechanisms such as MFA, continuous verification, and endpoint privilege management, to block credential theft attempts and limit access to sensitive data.
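
As a minimal sketch of that “never trust, always verify” access decision, the example below re-evaluates every request against identity, device, and data-sensitivity signals. All signal names and thresholds are hypothetical illustrations, not a prescribed policy.

```python
# Minimal "never trust, always verify" sketch: every request is re-evaluated
# against identity, device, and data-sensitivity signals. All names and
# thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class RequestContext:
    mfa_verified: bool          # strong authentication on this session
    device_managed: bool        # endpoint enrolled and compliant
    last_verified_minutes: int  # time since the identity was last re-verified
    data_sensitivity: str       # "public", "internal", or "restricted"

def allow(ctx: RequestContext) -> bool:
    if not (ctx.mfa_verified and ctx.device_managed):
        return False
    # Continuous verification: more sensitive data demands a fresher check.
    max_age = {"public": 480, "internal": 60, "restricted": 10}[ctx.data_sensitivity]
    return ctx.last_verified_minutes <= max_age

print(allow(RequestContext(True, True, 5, "restricted")))    # True
print(allow(RequestContext(True, True, 30, "restricted")))   # False: re-verify first
```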

This holistic defense should form a major part of any responsible AI strategy, which provides a robust approach to deepfake threats and ensures that the organization’s data, IP, and reputation are not compromised. As deepfake threats constantly evolve, enterprises must relentlessly adapt their defense strategies and tools to stay ahead.
