The growth of cybercrime-as-a-service means even those without technical prowess of their own can easily access the tools they need to mount a damaging attack. The ‘rewards’ are huge and growing: the Global Anti Scam Alliance estimates that scammers’ activities accounted for nearly $1 trillion in losses in 2024 alone. In fact, according to one security expert, if cybercrime were an industry, it would be the third-largest in the world.
With the rise of agentic AI, those possibilities are set to increase. Gartner, for example, predicts that by 2027 AI agents will halve the time it takes to effect an account takeover (ATO), one of the most common attack vectors.
The ability of AI agents to learn and make decisions autonomously makes them a powerful ally in the quest for higher productivity and business reinvention. But those same capabilities create an equally formidable foe. Malign agents could, for example, combine information about a company and its industry, identify a senior spokesperson and mimic them to gain access to other senior figures. This isn’t just a possibility; it has already happened. One corporation in Hong Kong was persuaded to make a significant payment when an AI-generated imitation of its CFO, along with other employees, appeared on a group video call and requested the transaction.
The World Economic Forum’s (WEF) latest Cybersecurity Outlook reports that while 66% of organisations believe AI will have the greatest impact on cybersecurity in the coming year, just 37% of respondents have processes in place to assess the security of AI tools before deployment.
Anna Mazzone, ServiceNow’s Risk Leader in EMEA, explains that this is a challenge companies need to solve, and one that’s especially acute for AI embedded in tools and solutions provided by third parties.
Because AI touches all corners of the organisation, the governance and controls around its use have to be similarly pervasive. The role of Chief AI Officer is becoming increasingly common, and these individuals are typically charged with developing and implementing AI strategy across the enterprise. But the array of stakeholders that need to engage with the potential risks of AI should be much broader, ranging from the CISO to the CTO and CIO, heads of risk and compliance, HR, legal and the CFO. They all have a stake in maximising the trust and security with which AI operates. Without that, the business will struggle to realise AI’s full promise.
Leaders across every enterprise therefore have to be aware of the risks associated with their use of AI. But AI tools themselves also have a clear role to play in bolstering security. Threat detection can be made orders of magnitude more efficient. And automation can make responses both faster and more effective, which should be music to the ears of CFOs as they look to boost the CISO’s capabilities without adding to headcount. What’s more, because AI agents can orchestrate much of the routine processing, documentation and reporting of cybersecurity activities, the scarce and hugely in-demand security talent an organisation employs can focus on higher-value, more complex work.
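As a rough illustration of that division of labour, the sketch below shows how an agent-style triage step might auto-close noise, document its reasoning, and reserve human analysts for the genuinely ambiguous cases. The field names, categories and thresholds here are hypothetical, not drawn from any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str          # e.g. "SIEM", "EDR" (illustrative sources)
    category: str        # e.g. "phishing", "ato_attempt"
    risk_score: float    # 0.0-1.0, assumed output of an upstream detection model
    notes: list[str] = field(default_factory=list)

def triage(alert: Alert, auto_close_below: float = 0.2,
           escalate_above: float = 0.7) -> str:
    """Route an alert: auto-close noise, automate the routine middle,
    and escalate only high-risk cases to a human analyst."""
    if alert.risk_score < auto_close_below:
        alert.notes.append("Auto-closed: below noise threshold.")
        return "closed"
    if alert.risk_score > escalate_above:
        alert.notes.append("Escalated: high risk, needs analyst review.")
        return "analyst_queue"
    alert.notes.append("Queued for automated enrichment and re-scoring.")
    return "automation_queue"

# Example: an account-takeover alert scored highly by the upstream model
alert = Alert(source="SIEM", category="ato_attempt", risk_score=0.85)
print(triage(alert), alert.notes)  # analyst_queue [...]
```

The point of the pattern is that the routine documentation happens as a side effect of triage, so every automated decision leaves an audit trail without an analyst writing it up.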
And there’s another dimension where the power of AI could come into its own. Because it can process, rationalise and standardise vast amounts of information with astonishing levels of efficiency and accuracy, AI could transform governance, risk and compliance processes.
This would be a significant benefit, because the overheads currently associated with compliance programmes are substantial. For example, more than 50% of financial services businesses in the UK spent in excess of €1 million on complying with the Digital Operational Resilience Act (DORA).
That transformation could be achieved by standardising content and taxonomies across the entire organisation to support one control framework. With those standards in place, it becomes significantly easier to respond to regulatory demands, because there’s a common reference framework used across the global organisation. What’s more, as Anna Mazzone says, this would “make it far easier than today to issue new policies in response to regulatory changes: a common taxonomy integrated into the business architecture means controls and rules can be pushed out into a workflow with relative ease and a great deal less complexity.”
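A minimal sketch of what such a common reference framework might look like in data terms, assuming each internal control is defined once and mapped to every regulation it satisfies. The control IDs and article references below are illustrative, not a real taxonomy.

```python
# One control framework, many regulatory mappings (illustrative data only).
controls = {
    "CTRL-ACCESS-01": {
        "description": "Privileged access is reviewed quarterly.",
        "maps_to": {"DORA": ["Art. 9"], "ISO 27001": ["A.9.2"]},
    },
    "CTRL-INCIDENT-02": {
        "description": "Major ICT incidents are reported within 24 hours.",
        "maps_to": {"DORA": ["Art. 19"]},
    },
}

def controls_for(regulation: str) -> list[str]:
    """Answer a regulator's question directly from the shared taxonomy."""
    return [cid for cid, c in controls.items() if regulation in c["maps_to"]]

# A new regulation often reuses existing controls: just attach a mapping,
# rather than designing and rolling out a fresh control from scratch.
controls["CTRL-ACCESS-01"]["maps_to"]["NIS2"] = ["Art. 21"]

print(controls_for("DORA"))  # ['CTRL-ACCESS-01', 'CTRL-INCIDENT-02']
```

Because every business unit references the same control IDs, responding to a regulatory change becomes a lookup-and-attach exercise rather than a parallel effort in each region.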
Achieving that, though, requires a (likely significant) effort to create a consistent, shared data landscape for the enterprise. Many organisations struggle with data that’s fragmented across a global footprint, so cleaning and standardising data has to be a priority for any organisation seeking to deploy the new approaches to risk, governance and control that AI promises. The investment will be worth it. With the right data foundation in place, businesses can start to see the transformational outcomes agentic AI can deliver: a more responsive, agile business that’s one step ahead of the risks it faces and, crucially, the opportunities they create.
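In miniature, that clean-up work looks like the sketch below: risk records from two regional systems with incompatible, invented formats are normalised into one shared schema, the kind of foundation the common taxonomy above depends on.

```python
# Two regions record the same risk in incompatible shapes (invented formats).
emea_record = {"risk_id": "R-104", "severity": "HIGH", "owner": "j.smith"}
apac_record = {"id": 104, "sev": 3, "owner_email": "a.lee@example.com"}

SEVERITY_MAP = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

def normalise(record: dict) -> dict:
    """Map region-specific fields onto one shared schema."""
    return {
        "risk_id": str(record.get("risk_id") or f"R-{record['id']}"),
        "severity": (SEVERITY_MAP[record["severity"]]
                     if "severity" in record else record["sev"]),
        "owner": record.get("owner") or record["owner_email"].split("@")[0],
    }

unified = [normalise(r) for r in (emea_record, apac_record)]
print(unified)
# [{'risk_id': 'R-104', 'severity': 3, 'owner': 'j.smith'},
#  {'risk_id': 'R-104', 'severity': 3, 'owner': 'a.lee'}]
```

Trivial at this scale, but across thousands of systems this mapping layer is exactly the “(likely significant) effort” the paragraph above describes.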