Agents of change: how agentic AI is remaking the landscape for cybersecurity, risk and governance
Exploring how AI is reshaping both cybersecurity threats and enterprise defences
Businesses are not the only ones harnessing the power of AI to transform how they operate. A number of cyberattacks against high-profile UK targets recently made front-page news for days, and they’re far from the last cybercrime headlines we’re likely to see. Hackers’ determination to penetrate organisations’ defences continues to grow alongside the sophistication of the tools they use, because cybercriminals are also exploiting the possibilities that AI offers. And access to these technologies has never been more open. LLMs – such as WormGPT – are easily available online and impose no safeguards on what their users can create. They appear to be expressly designed to enable the creation of hyper-authentic-looking and -sounding communications that can be readily weaponised.

Cybercrime as a service

The growth of cybercrime-as-a-service means even those without technical prowess of their own can easily access the tools they need to mount a damaging attack. The ‘rewards’ from their activities are huge and growing. The Global Anti-Scam Alliance estimates that scammers’ activities accounted for nearly $1 trillion in 2024 alone. In fact, according to one security expert, if cybercrime were an industry, it would be the third largest in the world.

With the rise of agentic AI, those possibilities are set to increase. Gartner, for example, predicts that by 2027 AI agents will halve the time it takes to effect an account takeover (ATO) – one of the most common vectors for an attack.

New threats, new capabilities

The ability of AI agents to learn and take decisions autonomously makes them a powerful ally in the quest for higher productivity and business reinvention. But those same capabilities create an equally formidable foe. Malign agents could, for example, combine information about a company and the industry it operates in, identify a senior spokesperson and mimic them to gain access to other senior figures. This isn’t just a possibility; it’s already happened. One corporation in Hong Kong was persuaded to make a significant payment when an AI-generated imitation of the CFO, along with other employees, appeared on a group video call and requested the transaction.

The World Economic Forum’s (WEF) latest Cybersecurity Outlook reports that while 66% of organisations believe AI will have the greatest impact on cybersecurity in the coming year, just 37% of respondents have the tools in place to assess the security of their AI tools before deployment.

Trusted third parties?

Anna Mazzone, ServiceNow’s Risk Leader in EMEA, explains that it’s a challenge companies need to solve, and it’s especially acute for AI embedded in tools and solutions provided by third parties.

“The fact that people don’t have a real process for evaluation is somewhat understandable in such a nascent field. But the third-party element is going to be the biggest challenge for companies. They need to understand how a third party has developed the agentic AI in their solution. What’s their vision and philosophy, and are they ethical in their approach?”
Anna Mazzone, EMEA AVP, Operational Resilience, Risk & Security Leader
As the push from the C-suite to adopt AI becomes increasingly intense, businesses have to make sure that they have the governance and controls in place to understand their expanding AI portfolios and, especially, the risks associated with this technology. And while the CISO may be charged with defending the business against attacks, responsibility for how AI is implemented – and the risks it may create – needs to extend right across the enterprise. That should include all AI assets, whether developed in-house or from third parties, and needs to be continuously updated as new solutions and releases come on stream.

AI security is everyone’s job

Because AI touches all corners of the organisation, the governance and controls around its use have to be similarly pervasive. The role of Chief AI Officer is becoming increasingly common, and we typically see these individuals charged with developing and implementing AI strategy across the enterprise. But the array of stakeholders that need to engage with the potential risks of AI should be much broader, ranging from the CISO to the CTO and CIO, heads of risk and compliance, HR, legal and the CFO. They all have a stake in maximising the trust and security with which AI operates. Without that, the business will struggle to realise its full promise.

Leaders across every enterprise therefore have to be aware of the risks associated with their use of AI. But AI tools themselves also have a clear role to play in bolstering security. Threat detection can be made orders of magnitude more efficient. And automation can create both more efficient and more effective responses, which should be music to the ears of CFOs as they look to boost the CISO’s capabilities without having to add to headcount. What’s more, the ability of AI agents to orchestrate much of the routine processing, documentation and reporting of cybersecurity activities means the scarce and hugely in-demand security talent that an organisation employs can focus on higher-value and more complex work.

Changing the rules of the risk game

And there’s another dimension where the power of AI could come into its own. Because it can process, rationalise and standardise vast amounts of information with astonishing levels of efficiency and accuracy, AI could transform governance, risk and compliance processes.

Getting the rewards from risk

This would be a significant benefit. The overheads currently associated with compliance programmes are substantial. For example, more than 50% of financial services businesses in the UK spent in excess of €1 million on complying with the Digital Operational Resilience Act (DORA).

“Rather than seeing risk, governance and compliance as a costly overhead, AI could help those functions become revenue drivers, advising the business about potential operational vulnerabilities and in doing so enabling the enterprise to avoid a crisis. Ultimately, the organisation’s leadership begins to develop confidence that they will be able to manage a crisis event efficiently. This then gives them the confidence to make bold moves in terms of revenue growth and investment.”
Anna Mazzone, EMEA AVP, Operational Resilience, Risk & Security Leader

That could be achieved by enabling standardisation of content and taxonomies across the entire organisation to support one control framework. With those standards in place, it becomes significantly easier to respond to regulatory demands, because there’s a common reference framework used across the global organisation. What’s more, this would, as Anna Mazzone says, “make it far easier than today to issue new policies in response to regulatory changes: a common taxonomy integrated into the business architecture means controls and rules can be pushed out into a workflow with relative ease and a great deal less complexity than today.”

Achieving that, though, requires a (likely significant) effort to create a consistent, shared data landscape for the enterprise. Many organisations struggle with data that’s fragmented across their global operations. So cleaning and standardising data has to be a priority for organisations seeking to deploy the new approaches to risk, governance and control that AI promises. The investment will be worth it. With the right data foundation in place, businesses can start to see the transformational outcomes agentic AI can deliver: a more responsive, agile business that’s one step ahead of the risks it faces and, crucially, the opportunities they create.
