Knowledge Institute Podcasts
-
AI Interrogator: The Peril and Promise of AI Cybersecurity with Amber Boyle
November 13, 2023
Insights
- Cybersecurity needs to embrace AI for proactive defense. Leveraging AI for real-time threat detection and response is crucial in the ever-evolving landscape of cyber threats. AI empowers security teams to identify, contain, and mitigate incidents more effectively, closing the gap between intrusion and detection.
- AI, when used responsibly, is a force for good. Despite the potential for AI to be used maliciously, the majority of individuals and organizations are harnessing AI for positive purposes. AI's ability to solve real-world problems and drive advancements in various fields is a reason for optimism, even in the face of cybersecurity challenges.
Kate: Hello, and welcome to this episode of the Infosys Knowledge Institute Podcast on all things AI, the AI Interrogator. I'm Kate Bevan, and my guest today is Amber Boyle, a cybersecurity expert with Infosys Consulting. The first question I want to come in on is: how can AI help cybersecurity professionals do their jobs better?
Amber: Well, thanks, Kate. I really appreciate chatting with you. It's always great to talk to you, so thank you for having me. One of the things that, as cybersecurity professionals, we always need to be mindful of is what we call tactics, techniques, and procedures. You can't be a good cyber practitioner if you're not keeping up to date with technology, and to me, AI is one of those things you cannot afford not to know about in this day and age if you're trying to keep pace with cybersecurity threats.
And so when we talk about being cybersecurity professionals defending networks for organizations, we have to address the ways defenders can use artificial intelligence. The automation capabilities, the learning capabilities, and the ability to do smart executions are what AI does best, and they are things the cybersecurity professional must leverage.
Kate: What do you mean by doing executions better?
Amber: In terms of executions, we're talking about enhancing security. Some of the things we typically want to do are threat detection and vulnerability detection, and we want to deploy patches, security countermeasures, and controls more effectively. Executing those tasks more quickly and efficiently, and automating some of them, is really a strong suit of AI, and something that should be leveraged.
Kate: I did suddenly think, "Oh my goodness, what if she's talking about actually executing the criminals who get inside?" That didn't seem quite right.
Amber: You're going to bring me back to my old days with the FBI.
Kate: Thinking about this, how aware are businesses of the potential of AI: first of all, what it can do for them, but also what risks it poses to them?
Amber: So, just as we said cybersecurity professionals should be using it for defense, the adversaries, the financially motivated cybercriminals and the nation states, are using it to their advantage. One of the major ways they're doing that is by making their malware, their exploits, more sophisticated with little to no effort. They're writing more sophisticated code at a faster pace, with capabilities it never had before. When we talk about polymorphic malware, it can assess how antivirus and anti-malware tools are being used, and it can rewrite its own code to evade detection and keep changing to remain undetectable.
Kate: Are businesses really aware of this threat?
Amber: I think businesses are absolutely aware of the headline of AI and automation. I don't know how you could be on LinkedIn or any other social media and not hear about how AI is threatening the world, or saving the planet, or producing new breakthroughs in medical technology. So, absolutely, corporations are trying to figure out how to leverage it for the bottom line, gaining insights into the market and consumers. But I don't think companies really understand how to leverage it in a security capacity.
Kate: So, when you go into a company, what do you say to them when it comes to AI? Are you going in softly, softly, saying, "Look, AI can really make things better," or are you actually looking to worry them so that they act?
Amber: One of the biggest challenges as a cybersecurity professional is that companies have so many things they need to get right all the time. The criminals, the adversaries, are looking to penetrate the organization through every means possible. If you're a large corporation, you're under constant attack by numerous adversaries: they're constantly scanning your networks, constantly looking for vulnerabilities in your access controls. So, when we go into a company, we're looking at the fundamentals. Do you have good asset management? Do you know where your data is stored? Are you controlling access to your data with good access controls? And then data protection: are you protecting that data, assuming, in a zero-trust environment, that your network is already penetrated?
We still go in from a fundamentals perspective: asset management, data protection, access controls; be brilliant at the basics. But then we say, you know what? With human error, it's impossible to withstand all of this all the time. So if you are using AI and machine learning, use them to enable your people on the front line, the cybersecurity professionals who are inundated by the sheer volume of logs, all of the incidents and events that happen day in, day out. Why not leverage AI and ML to recognize patterns more thoughtfully and more precisely identify the logs that are meaningful, the ones that need a human to intervene?
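The log-triage approach Amber describes, using machine learning to surface only the events that merit a human analyst's attention, can be sketched as a simple statistical anomaly filter. This is an illustrative example only, not something discussed in the podcast; the single feature (bytes sent per session) and the threshold are hypothetical.

```python
# Hypothetical sketch of AI-assisted log triage: flag only the events
# that deviate sharply from normal behavior, so analysts review a
# handful of alerts instead of the full log volume.
import statistics

def flag_outliers(values, threshold=3.5):
    """Return indices of events whose modified z-score exceeds the
    threshold. Uses the median and median absolute deviation, which
    are robust to the very outliers we are hunting."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Simulated bytes-out per session: routine traffic plus one large transfer
bytes_out = [480, 510, 495, 530, 470, 505, 490, 515, 9800]
flagged = flag_outliers(bytes_out)
print("events needing analyst review:", flagged)  # → [8]
```

Real deployments use far richer, multivariate models (the endpoint detection and response vendors Amber mentions later train on many features at once), but the principle is the same: reduce thousands of daily events to the few worth human intervention.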
Kate: Is that actually happening now, or are we still at the stage of talking about it?
Amber: I think we're transitioning from the talking stage into the implementation stage, probably most prominently with the larger organizations doing endpoint detection and response. We have to talk about the Mandiants and the CrowdStrikes of the world that are starting to incorporate it, but this is going to become mainstream very quickly. No one is going to select a tool or solution five years from now that doesn't rely heavily on AI or machine learning to improve and evolve.
And this goes back, Kate, to where we began our conversation, which is the threat adversaries are using it, and so you can't afford not to use it yourself. It's a cat and mouse game, but you can't afford to be behind the learning curve here.
Kate: So, the adversaries are already using it, and as you said, they're using it for code generation and to build polymorphic threats. What else are they using it for?
Amber: There's a lot of data that can be aggregated with AI and ML, so you can take disparate datasets that the average attacker maybe wouldn't have put together before and do some behavioral analytics. Take phishing campaigns, which are super prolific and target every single company in the world. When you can take everybody's Facebook page, all their social media accounts, the data on public-facing websites about who's who in an organization, even the organization's profit and loss statements, you can build a scarily accurate picture of exactly who every person in the company is.
Then couple that with using AI and ML to create deepfakes, to replicate someone's voice, to know where the water cooler is down the hall and exactly the context behind a major deal, for example. You can imagine how much more effective those precision attacks will be, even in phishing campaigns alone.
Kate: Crikey, you're actually scaring me with that. We rely on signals in phishing, like the name being wrong or the language being off: yes, that's been written by somebody in, say, Dokshitz, Belarus, and I can spot the dodgy English. But that's not going to be the case anymore. So, given that humans are the key endpoint, how can companies train their people to be aware of this, when it's hard enough to train them to spot traditional phishing and spear-phishing attacks?
Amber: It's an excellent question. I don't think anybody has really cracked the code on the human factor in companies. Sadly, for all the training and education we give every year, the mandatory "don't click on this" modules, studies have shown that the click-through rate is still substantially high. When you couple that with AI, I'm not sure I have a good answer for you, Kate, other than to say the training needs an AI component. I also think you're going to have to add compensating controls, for example at mortgage companies, which are frequently the target of phishing campaigns.
Kate: What would that training look like, do you think? I'm envisaging a company doing the training going in and actually themselves building a deep fake of, I don't know, the chief executive saying, "Yes, yes, you must release this money to this bank account." That kind of thing. Is that what it's going to look like?
Amber: I think it's the same way companies do simulated phishing attacks now, and it's just that awareness. If you haven't seen it before, you wouldn't think it was at all in the realm of possibility. Then you see it for the first time, and it allows you to be aware and not just trust; we want people to verify. So I think it's about seeing it and believing it, because it's incredible technology.
Kate: It really is.
Amber: It's incredibly, incredibly convincing technology.
Kate: It really is. I saw something, I think on Twitter, over the weekend where somebody had built a deepfake video of himself saying the same phrase in five different languages. This person did not speak those languages, and he posted it on Twitter, and everyone was like, this is really terrifying.
Amber: Absolutely terrifying.
Kate: And we're thinking about enterprise cybersecurity here, but actually, when you're dealing with the humans, you're dealing with consumer cybersecurity too.
Amber: 100%. If you look at all the major attacks that have happened to corporations over time, it has generally always come back to one click or one compromised credential. It only takes that one opportunity, that one access point, for sophisticated cybercriminals to navigate their way in, increase their foothold, and ultimately exploit the company.
Kate: Yeah, it's quite something for CISOs to think about, isn't it? And CTOs. What would be your message to people managing cybersecurity and also particularly to the C-suite?
Amber: To the C-suite, I would say you cannot afford not to leverage AI and machine learning in your cybersecurity. It's a game changer. When we talk about leveraging tools, technology, processes, and people, you're putting yourself at a disadvantage if you're not applying technology that gives your people on the front line, your network defenders, an even footing with adversaries who are using it against you.
Kate: So how then can we leverage AI into cybersecurity offerings, into cybersecurity products?
Amber: A lot of this space is still evolving, but consider real-time threat detection, and all of those logs and event indicators that may not mean much on their own. You have your intrusion detection systems and your intrusion prevention systems, but applying AI to them, to aggregate that data in a more sophisticated, precise manner and then act on it more quickly, gets you closer to a real-time detection capability.
The average time to intrusion is shrinking every day, to the point where it's now recognized as being under 10 minutes before someone has gained a relatively strong foothold within your network. So enabling real-time threat detection makes the difference between the impact and the ability to identify and contain that incident. Super important.
Kate: That's actually quite scary. We've talked about the threats, so I'm going to finish up by asking the big question. Do you think the AI is going to kill us all?
Amber: Absolutely not. Now, everything that can be used for good can also be used for evil. But maybe it's the Girl Scout in me, or the ex-FBI in me, that I tend to think the best of humanity and of our ability to leverage the good of technology. While cybercrime is very prevalent, those threat actors represent a very small percentage of the overall population. And we have fantastic people working within Infosys and around the globe in this space, and they are applying it for the betterment of humanity and the betterment of all.
And so I absolutely do not think that AI is going to kill us all. I think it'll be our saving grace, in fact, but only in the right hands. And I think there's more people out there that are trying to use it for good purposes. And so I think we can still sleep at night. I still sleep at night, so I think we'll be okay.
Kate: That's really good to know. There's all the hype around it, the tech bros talking it up, I think partly because they want to be in on designing the guardrails and also, perhaps more cynically, designing the products that are going to keep everybody safe. But it feels to me like we need to approach AI this way in all areas, not just cybersecurity: thinking about what it can do for us, what the threats from it are, and how we manage that safely.
Amber: And there's always going to be conspiracy theorists, right?
Kate: Conspiracy theorists are part of the threat landscape.
Amber: Absolutely right. I'm excited. AI continues to excite me, both in ordinary everyday applications and in professional development, and in the challenges it's helping us overcome.
Kate: I think that's a great place to end it. For somebody who sees all the threats, I'm glad you're optimistic about it. Amber, thank you so much for your time today.
Amber: Thank you so much, Kate. Appreciate it.
Kate: This is part of the Infosys Knowledge Institute's AI podcast series, the AI Interrogator. Be sure to follow us wherever you get your podcasts, and visit us on infosys.com/IKI. The podcast was produced by Catherine Burdette and Christine Calhoun. I'm Kate Bevan of the Infosys Knowledge Institute. Keep learning, keep sharing.
About Amber Boyle
Amber Boyle is a distinguished expert in cybersecurity and national security with more than 17 years of experience. She currently serves as a Senior Principal at Infosys Consulting, where she specializes in orchestrating enterprise privacy programs, security audit remediation, risk assessments, and strategic IT planning. She is renowned for protecting organizations from cyber threats and aligning technology initiatives with corporate objectives.
Amber's impressive career includes roles as an Assistant Special Agent in Charge at the Federal Bureau of Investigation (FBI), where she demonstrated exceptional leadership and operational expertise in overseeing national security programs, encompassing Cyber, Counterintelligence, and International Terrorism. Her tenure with the FBI provided her with invaluable insights into protecting the nation's security and combating cyber threats.
In addition to her professional career, Amber is a distinguished U.S. Army veteran with extensive experience in National Security Investigations, force protection, and intelligence collection. She served with distinction in multiple campaigns, including Operation Noble Eagle, Operation Enduring Freedom, and Operation Iraqi Freedom.
Amber's blend of cyber skills, national security expertise, and military background offers a unique perspective on the use of AI in Cybersecurity.
Connect with Amber Boyle
- On LinkedIn
Mentioned in the podcast
- ‘Cybersecurity: The Guardian of Digital Transformation’ Infosys Knowledge Institute