How AI can stop cybercrime

Most security breaches are caused by human error. Luckily, there’s an algorithm for that

By Lee Bruno

Security breaches over the past few years have exposed a staggering amount of personal and financial consumer information while damaging the reputations of companies that suffer them.

The economic losses are significant. The average cost of a corporate breach was $11.7 million in 2017, up 23% from the previous year, according to a recent Accenture study. Meanwhile, Juniper Research estimates that cybercrime will cost businesses up to $8 trillion over the next four years in direct damages and disruptions to business.

Here’s the really bad news: 95% of all cybercrime results from human error, according to a 2014 IBM study. Despite the advanced security technologies available today—including nascent AI applications that can take matters out of human hands—most major hacks target vulnerabilities rooted in human behavior, not just those in systems and networks.

No wonder, then, that 35% of IT professionals now consider themselves to be the biggest internal security risk to their own networks, according to research from security tech firm Balabit. “Attackers didn’t need to break down a wall of ones and zeros, or sabotage a piece of sophisticated hardware,” says Andrew Blau, vice president at behavioral design firm ideas42. “They simply needed to take advantage of predictably poor user behavior.”

So, what are these destructive behavior patterns in security, and how can technology—in particular, AI—help CISOs devise effective countermeasures? Behavioral psychology offers some clues that can help organizations identify and modify destructive human behavior patterns. AI applications, meanwhile, promise longer‑term defenses against cybercrime that can free up security personnel to focus on higher‑level tasks that require uniquely human cognitive skills. Booz Allen Hamilton, for instance, is already tapping into first‑generation AI tools to help triage risks and assign security personnel to the right tasks.

Here are some typical human behaviors that play into the hands of cybercriminals, with tech solutions that organizations can deploy to strengthen their defenses.


Habituation

Research has shown that waves of security warnings and the constancy of threats actually make employees less likely to respond to them. In psychology, this pattern is known as habituation. For decades, Blau says, therapists have been using habituation to treat phobias. Researchers have discovered that neural responses to a warning drop dramatically after the second exposure and continue to decrease with each subsequent one, making users less likely to view each new warning as a significant threat.

To counter that pattern, researchers have had some success with polymorphic warnings—graphic alerts that jiggle, zoom, or twirl on screen. They are far more resistant to habituation than regular warnings because they trigger visceral reactions that prompt people to respond. Polymorphic warnings can also change intensity, cadence, or form to maximize their effect on the target user.
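The rotation idea can be sketched in a few lines of Python. This is an illustrative mock-up, not code from any of the studies: each time a warning fires, the system picks a presentation the user did not just see, so consecutive exposures never look identical.

```python
import random

# Illustrative variants; real warnings would carry full styling and copy.
VARIANTS = [
    {"animation": "jiggle", "color": "red",    "duration_ms": 800},
    {"animation": "zoom",   "color": "orange", "duration_ms": 600},
    {"animation": "twirl",  "color": "yellow", "duration_ms": 1000},
]

def next_warning(history):
    """Pick a variant that differs from the one shown last time."""
    last = history[-1] if history else None
    variant = random.choice([v for v in VARIANTS if v != last])
    history.append(variant)
    return variant
```

Because the pool excludes whatever appeared last, two back-to-back warnings are guaranteed to differ in form, which is the property the habituation research relies on.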

Misplaced fear

A recent Pew Research study found that Americans view foreign cyber attacks as the second‑most serious security threat facing the U.S., behind ISIS. That perception carries over into how many companies manage cyber risk. In the wake of every high‑profile global attack, security pros generally rush to prevent the same thing from happening within their organizations—while often ignoring known threats such as critical patch upgrades. This is the result of availability bias: people tend to overestimate the likelihood of something happening again based on how easily they can remember it happening before.

This institutional tendency to keep fighting the last war can become a double‑edged sword, says Brian Lord, Managing Director of cybersecurity firm PGI, because it “creates complacency and inaction in areas that should be reacting to the more measured and proportionate reality.”

Default bias

Most people never change the default security settings on their computers and don’t opt into extra security features such as simple encryption, even when they know it will protect their data from being stolen. This pattern has given IT departments headaches for decades.

Even employees who handle highly sensitive information often don’t turn on extra security features. It follows, then, that security pros should raise the bar on default standards to include basics such as two‑factor authentication.
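As a concrete illustration of what a stronger default involves, the one-time codes behind most two-factor authenticator apps can be computed with nothing but Python's standard library. This is a minimal sketch of the RFC 6238 TOTP algorithm (HMAC-SHA1, 30-second time steps), not any particular vendor's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a shared secret plus the current time window, a stolen password alone is no longer enough—the attacker must also hold the second factor.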

Peer enforcement

Employees tend to model peer behavior. This phenomenon, called social proof, can significantly influence behavior, especially when trying to get users to embrace security hygiene practices that appear more abstract than real.

Data security training programs may increase employee knowledge, but they rarely change behavior. However, the chances of success rise sharply when training becomes a constant feedback system for users. According to Blau, constant feedback can help organizations address phishing attacks.

One promising peer‑based solution is for companies to publish the vulnerability data of every business unit from HR to accounting, and reveal the rankings of all departments. “The basic premise is no one wants to be relegated to last place,” says Sean Convery, Vice President and General Manager of the Security Business Unit at ServiceNow.
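A league table like the one Convery describes reduces to a simple scoring-and-sorting exercise. In the sketch below, the department names, vulnerability counts, and severity weights are all invented for illustration:

```python
# Hypothetical open-vulnerability counts per business unit.
findings = {
    "HR":          {"critical": 4, "high": 9},
    "Accounting":  {"critical": 1, "high": 3},
    "Engineering": {"critical": 2, "high": 12},
}

def score(f):
    # Criticals weighted 10x highs; the weighting is purely illustrative.
    return f["critical"] * 10 + f["high"]

# Lowest risk score first; the department in last place is on display.
ranking = sorted(findings, key=lambda dept: score(findings[dept]))
```

Publishing `ranking` company-wide is what turns an abstract hygiene metric into the social pressure the quote describes: no department wants to sit at the bottom of the list.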

The promise of AI

Cybersecurity is plagued by a shortage of human talent. Nearly 1.5 million security jobs will go unfilled over the next two years because of a lack of required skills, according to research from Juniper. Nearly half of all IT organizations report a shortage of cybersecurity skills today.

That is one reason why many are pinning their hopes on AI to help manage risk in concert with human intelligence. For example, MIT’s Computer Science and Artificial Intelligence Lab has developed an “adaptive cybersecurity platform” called AI2 that adapts and improves performance over time by combining machine learning tools with human security analysts.

AI2 sifts through tens of millions of log lines each day, flagging anything deemed suspicious. Analysts confirm or adjust the results and tag legitimate threats. Over time, AI2’s algorithms fine‑tune their monitoring, learn from mistakes, and get better at detecting breaches and reducing false positives. In early trials at MIT, AI2 correctly predicted 85% of cyber attacks.
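The feedback loop AI2 exemplifies can be reduced to a toy sketch: a scorer flags events above a threshold, analysts confirm or reject each flag, and the threshold adapts to their verdicts. None of this is MIT's actual algorithm—the scores, step size, and update rule below are stand-ins:

```python
def flag(events, threshold):
    """Return the events the machine considers suspicious."""
    return [e for e in events if e["score"] >= threshold]

def adapt(threshold, labeled, step=0.05):
    """Nudge the threshold: down on confirmed threats (catch more),
    up on rejected flags (cut false positives)."""
    for _event, is_threat in labeled:
        threshold += -step if is_threat else step
    return threshold

events = [{"id": 1, "score": 0.4}, {"id": 2, "score": 0.9}, {"id": 3, "score": 0.7}]
suspicious = flag(events, 0.5)                              # ids 2 and 3
verdicts = [(suspicious[0], True), (suspicious[1], False)]  # analyst labels
new_threshold = adapt(0.5, verdicts)                        # one confirm, one rejection
```

The division of labor mirrors the article's point: the machine does the high-volume triage, while the human judgment that resolves uncertain cases is fed back in as training signal rather than discarded.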

Longtime security expert and author Bruce Schneier believes the strongest argument for AI is that it focuses on improving systems, not behavior. “We’ve designed computer systems’ security so badly that we demand the user do all of these counterintuitive things,” says Schneier. “Why can’t users choose easy‑to‑remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing myriad viruses? Why are we trying to fix the user instead of solving the underlying security problem?”

Truly effective solutions will come from platforms like AI2 that blend human and machine intelligence. “You can only automate what you're certain about, and there is still an enormous amount of uncertainty in cybersecurity,” says Schneier. “Automation has its place, but the focus needs to be on making people effective, not on replacing them.”

Longtime business and technology journalist Lee Bruno has worked as a staff writer, editor and freelancer for The Guardian, MIT Technology Review, Red Herring, Scientific American, and The Economist.