March 4, 2025 | 2 min read
Welcome to the AI arms race
Cybersecurity pros and hackers are duking it out, and enterprises need to be on high alert.
Howard Rabinowitz, Workflow Contributor
On the bustling show floors, across the main halls, and in the meeting rooms of this year’s biggest cybersecurity and IT conferences, generative AI has been the term on everyone’s lips.

The hallways hum with a mix of excitement and anxiety. Ever since OpenAI launched ChatGPT in November 2022, malicious actors have been racing to use chatbots built on large language models to exploit security vulnerabilities.

Out of the gate, the hackers seemed to have the edge. Within weeks of ChatGPT’s launch, Check Point Research detected scammers trading notes on the dark web about how to circumvent its content filters and use cutting-edge techniques to penetrate security firewalls. Just a few months later, the security firm Darktrace reported that it had logged a 135% increase in what it called “novel social engineering attacks” among its clients, tied directly to ChatGPT—in other words, phishing on steroids.

But cybersecurity pros are recognizing that generative AI offers substantial security rewards alongside its risks. “Its capabilities are increasing at an exponential rate,” says Randy Lariar, practice director for big data and analytics at Optiv Security. “Downstream of that will create opportunities both for defenders as well as attackers.”

For SecOps teams facing off with hackers, it’s the age-old game of cat and mouse. The hope is that generative AI will help defenders build better mousetraps.
Just add AI

Tech companies are quickly recognizing the opportunity for AI to outsmart cybercriminals. At the RSA Conference, heavy hitters in cybersecurity announced an array of security tools powered by generative AI. Google unveiled its Cloud Security AI Workbench suite and Microsoft launched its Security Copilot, while SentinelOne introduced Purple AI and CrowdStrike debuted Charlotte AI. They all use the large language model (LLM) technology that fuels ChatGPT, creating AI trained on massive sets of public and proprietary data, including the latest research on software vulnerabilities, malware, phishing, and threat actor behavior. 

While each company’s AI is unique, all promise to bolster the capabilities of security operations center (SOC) teams. Think of them as AI assistants that give security teams easy-to-understand answers to complex questions in real time. That’s a huge asset for time-pressed SecOps staff, especially less experienced employees.

“It’s like everyone has gotten a promotion,” says Lariar. “It makes it a whole lot easier to hire less experienced people and equip them with knowledge and accelerators that are going to make them much more likely to catch a real breach as it's happening.” 
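What might that look like in practice? Here is a minimal sketch of the pattern these copilots share, using the OpenAI Python SDK purely as a stand-in. The system prompt, alert format, and model choice are illustrative assumptions, not any vendor’s actual interface.

```python
# Minimal sketch of an LLM-backed SecOps assistant: a junior analyst pastes a
# raw alert and asks a plain-English question. Illustrative only; products
# like Security Copilot or Purple AI expose their own interfaces.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a security operations assistant. Given a raw alert, explain in "
    "plain language what it likely indicates, how severe it is, and what the "
    "analyst should check next."
)

# Hypothetical alert text, made up for this example
alert = """\
2025-03-04T09:12:44Z host=fin-ws-114 rule=T1059.001
powershell.exe -enc SQBFAFgA... parent=WINWORD.EXE user=jsmith
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any capable model works
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"What does this alert mean?\n\n{alert}"},
    ],
)
print(response.choices[0].message.content)
```

The specific API matters less than the workflow it enables: a free-text question about a cryptic alert comes back as an actionable explanation in seconds, rather than after an hour of manual lookups.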

These AI copilots have the potential to act as a force multiplier in a world facing a dire shortage of 3.4 million cybersecurity workers, according to a recent report by global nonprofit research organization ISC2.


The AI cavalry couldn’t have arrived at a better time, says Ric Smith, chief technology officer at SentinelOne. Effective threat hunting depends on managing data “on the scale of petabytes,” he notes. “There’s a massive shortage of talent needed to do it. AI is the key to filling the gap and leveling the playing field.”

For that reason, generative AI won’t automate away cybersecurity jobs—far from it, according to ServiceNow Chief Information Security Officer Ben de Bont. “It will give people an opportunity to do more and be more effective,” he says. “It's going to open opportunities for people to work on and solve harder problems.”

In April 2023, for example, Recorded Future, a global cyber threat intelligence cloud platform and ServiceNow partner, launched an LLM-based product called Recorded Future AI. It generates clear, concise threat intelligence reports within seconds.

Armed with more timely reports, companies can respond to threats more quickly. Jerry Hodge, senior product manager at Recorded Future, recalls recently being on-site with a customer, one of the largest banks in Europe, whose analysts reported that Recorded Future AI was saving them two to three hours of research each day. 

“It’s uplifting every analyst by saving them this time,” says Hodge. “It’s not just a way of cost cutting. It’s a way of empowering the analyst to detect threats at a faster rate.”

[Chart: Risk laggards are more likely to say digitization will heighten security risks over the next two years]
The new phishing battlefront

No tactic used by threat actors is more common or costly than phishing. It’s the top form of cybercrime, according to the FBI, with 300,479 criminal complaints of phishing attacks reported in 2022. (In 2018, the number was a mere 26,379.) 

For cyberattackers, generative AI, with its ability to craft persuasive content, is an ideal tool for phishing, in particular spear phishing: email tailored to a specific persona, job role, or industry. As an exercise to test how easily generative AI could be manipulated, Check Point Research tricked ChatGPT into writing an effective spear-phishing email by claiming it needed a sample to train employees to recognize phishing techniques.

Here, too, generative AI tools are emerging to help companies foil phishing attacks. In February, cybersecurity startup SlashNext introduced Generative HumanAI, software that the company claims can detect phishing emails with 99.9% accuracy. And NVIDIA has unveiled a similar product called Morpheus that it says is 90% effective at detecting spear-phishing emails tailored to individuals. But in the realm of social engineering, where hacking tactics exploit human error, even the sharpest tools may fall short, cautions George Westerman, senior lecturer at MIT Sloan School of Management.
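None of these vendors publish their internals, but the basic pattern they build on can be sketched generically: hand a suspicious message to a language model and ask for a verdict with reasons. The sketch below uses the OpenAI Python SDK and a made-up email purely for illustration; it is not SlashNext’s or NVIDIA’s method, and a production detector would add URL analysis, sender reputation, and far more.

```python
# Generic sketch of LLM-assisted phishing triage. This is NOT how Generative
# HumanAI or Morpheus work internally; it only illustrates the underlying
# idea of letting a language model score a suspicious message.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fabricated example message; the URL is deliberately defanged
email_body = """\
Subject: Urgent: payroll update required
Hi, this is Dana from HR. Please confirm your bank details today at
hxxp://payroll-update.example.com or your next deposit will be delayed.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a phishing detector. Reply with a verdict "
                "(PHISHING or BENIGN), a confidence from 0 to 1, and the "
                "specific cues behind your verdict."
            ),
        },
        {"role": "user", "content": email_body},
    ],
)
print(response.choices[0].message.content)
```

A classifier like this catches linguistic cues (urgency, credential requests, lookalike domains), which is exactly the territory where, as Westerman notes, determined social engineers will keep probing for gaps.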

 
