The Use of AI as a Cybersecurity Weapon
June 7, 2023, by Katrina Thompson
We all knew it would happen, and now it has – only not in the way we thought. And, at the same time, in all the ways we thought. The AI-driven cyber threat hype may be old news, but the danger is not.
AI-driven threats fall into two categories: the nefarious, mustache-twirling variety, and the nondescript blunder. We’ll dig into both here.
What we thought AI-driven threats would look like
To set the record straight, these AI-based threats are still very much on the horizon. The stuff of AI-cyber-nightmares includes:
– AI-powered malicious reconnaissance including pentesting and zero-day vulnerability scoping
– AI-powered ransomware that learns enough to mimic usual behaviors and remain undetected
– Using AI to spin out malware in droves
Phishing emails crafted by AI appear to get higher click rates, and AI can generate constantly changing (“emerging”) exploits that evade detection by traditional security models.
Ultimately, the words of Mark Driver, a research vice president at Gartner, ring true. When we consider realizing AI’s full cyber-offensive capabilities, “it’s terrifying.”
What they really look like
Well, get ready to be terrified – or at least informed.
In addition to all the threats listed above, there is one that has the potential to cause more havoc than all of them combined (and we wouldn’t even know it): ChatGPT.
Technically, we should say ‘generative AI’, but ChatGPT has effectively become the ‘Band-Aid’ or ‘Kleenex’ of the category. Either way, the latent criminal potential of this absolutely viral AI tool is embedded in its very banality. It’s a joke, it’s a tool, it’s a threat to our jobs, and it not-so-secretly takes in every ounce of information you feed it (secret or not). And then it promptly spits it out to whoever asks the right questions.
We have to be careful what we share with it. Larger companies like Apple, Goldman Sachs and Samsung won’t let employees touch it. Research by cybersecurity firm Cyberhaven reveals that within five months of ChatGPT adoption, employees were triggering “literally thousands” of data egress events weekly. They note, “Because the underlying models driving chatbots like ChatGPT grow by incorporating more information into their dataset, there is a real risk of sensitive data provided to ChatGPT becoming queryable or unintentionally exposed to other users.”
Part of the reason generative AI is so potentially poisonous to security is that it can be accidentally leveraged by those who don’t even know they’re putting the company at risk. Baddies using AI for nefarious purposes? We expect that. An innocent intern, CFO, or system administrator uploading some files and then asking Bard to draw up a report based on what they just did? Bad, bad, bad. Bard, ChatGPT, Microsoft Bing AI and their counterparts are not closed systems – they are open to the broader internet and the world at large, so what they ingest in one place is fair game in the next.
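That “open system” problem is why many security teams now screen what leaves the building before it ever reaches a chatbot. As a rough illustration only, here is a minimal sketch of the kind of outbound-prompt check a DLP-style control might perform; the patterns and the submit_prompt helper are hypothetical stand-ins, not any vendor’s actual API.

```python
import re

# Hypothetical patterns a DLP-style filter might flag before a prompt
# leaves the organization. Real tools use far richer detection logic.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),
}

def find_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def submit_prompt(text: str) -> None:
    """Hypothetical gate: block the prompt instead of sending it to a chatbot."""
    hits = find_sensitive(text)
    if hits:
        print(f"Blocked: prompt appears to contain {', '.join(hits)}")
        return
    print("Prompt allowed (would be forwarded to the generative AI service here)")

if __name__ == "__main__":
    submit_prompt("Summarize Q3 numbers. Our AWS key is AKIAIOSFODNN7EXAMPLE12")
    submit_prompt("Draft a polite reply to a customer asking about shipping times")
```

A filter like this won’t catch everything, but it turns the “innocent intern” scenario from an invisible leak into a visible, loggable event.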
Is there anything we can do?
Besides wring our hands and forsake generative AI completely? Yes, of course there is.
We need to fight fire with fire. At this point, there may be no other choice. “If AI caused these problems, AI can solve them,” as the saying might go. And it’s necessary.
There’s no other way to respond at scale to the volume, sophistication, and subtlety of AI-engendered attacks than to keep up with powerful AI security tools of our own. As Bernd Hansen, Branch Head of Cyberspace at NATO Allied Command Transformation, put it: “In the AI experiment, it’s basically a two-way street. It’s recognizing AI that is used by opponents and it’s on the other hand, exploring how AI may support our own operations.”
Using behavior-based tools like XDR not only ensures we’re catching sneaky, hidden AI exploits at scale, but that we have the force to combat them. Automation is a key element in any modern security strategy, and every team is a “small team” in the face of unprecedented AI attack volumes. We all need to level up where capacity is concerned, eliminate time wasted on false positives, and use playbooks and automated response capabilities to cover the parts we would otherwise miss – or to hunt down and block the threats our SOCs don’t have time to chase.
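To make that less abstract, here is a bare-bones sketch of the kind of triage-and-respond loop a playbook automates. The alert fields, the risk threshold, and the block_indicator helper are hypothetical placeholders rather than any particular XDR product’s API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    risk_score: float   # 0.0 - 1.0, as assigned by a behavioral model
    seen_before: bool   # whether this indicator matched a known false positive

def block_indicator(ip: str) -> None:
    """Placeholder for a real response action, e.g. a firewall or EDR API call."""
    print(f"[playbook] blocking traffic from {ip}")

def triage(alerts: list[Alert], threshold: float = 0.8) -> list[Alert]:
    """Auto-close likely false positives, auto-block high-confidence threats,
    and return only the alerts that still need a human analyst."""
    needs_human = []
    for alert in alerts:
        if alert.seen_before and alert.risk_score < 0.3:
            continue  # auto-close: repeat of a known false positive
        if alert.risk_score >= threshold:
            block_indicator(alert.source_ip)  # automated response, no analyst needed
        else:
            needs_human.append(alert)  # genuinely ambiguous: escalate to the SOC
    return needs_human

if __name__ == "__main__":
    queue = [
        Alert("203.0.113.7", 0.92, False),
        Alert("198.51.100.4", 0.15, True),
        Alert("192.0.2.33", 0.55, False),
    ]
    remaining = triage(queue)
    print(f"{len(remaining)} alert(s) escalated to analysts")
```

The point isn’t the specific thresholds; it’s that the repetitive decisions get made by the machine, so the humans only see what’s left.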
According to the Orca 2022 Cloud Security Alert Fatigue Report, 79% of organizations have more than 500 security alerts open per day – and that’s just in the cloud. According to that same research, 43% of respondents say that more than 40% of their alerts are false positives.
Streamlining and maximizing our security capabilities is how we can fight AI-driven cybersecurity threats. As attacks get more prolific, we need to double down on the basics: chasing real threats, finding behavioral evidence of AI-driven attacks, learning how (and how not) to use generative AI, and using current technology to the best of our abilities. Attackers certainly are.
An ardent believer in personal data privacy and the technology behind it, Katrina Thompson is a freelance writer leaning into encryption, data privacy legislation and the intersection of information technology and human rights. She has written for Bora, Venafi, Tripwire and many other sites.