Expert outlook: Risk in the age of AI


Every business leader is thinking about AI these days. It has the potential to be a transformational, business-changing tool. Some organizations are moving too slowly to adopt AI, while others, for a variety of reasons, are moving too quickly.

In either case, there’s a risk threshold around AI that needs to be taken into account. I spoke with risk experts from Edgile, EY, KPMG, and NewRocket on the Innovation Today podcast to discuss risk in the age of AI. I discovered AI can be beneficial to an overall risk management strategy.

The risk threshold of AI

Melissa Cohoe, global practice strategist for security, risk, and resilience at NewRocket, sees risk as a business enabler. “If you make a business change and you go after a piece of business without analyzing the likelihood and impact of something going wrong, you’re more likely to fail and have a huge negative impact to your business,” she explains.

It’s better not to rush in, but to take time to analyze the possibilities, Cohoe says. She encourages business leaders to ask what could go wrong before they act.
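The likelihood-and-impact analysis Cohoe describes is often formalized as a simple risk matrix. A minimal sketch in Python, where the 1–5 scales, band thresholds, and initiative names are illustrative assumptions rather than anything from the article:

```python
# Minimal risk-matrix sketch: score = likelihood x impact, both on a 1-5 scale.
# Thresholds and initiative names are illustrative, not prescriptive.

def risk_score(likelihood: int, impact: int) -> int:
    """Return a simple likelihood-times-impact risk score (1-25)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a score to a coarse band used to decide whether to proceed."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical AI initiatives scored before committing to them
initiatives = {
    "customer-facing AI chatbot": (4, 5),
    "internal document search": (2, 2),
}

for name, (likelihood, impact) in initiatives.items():
    score = risk_score(likelihood, impact)
    print(f"{name}: score={score}, band={risk_band(score)}")
```

Scoring each initiative this way before launch is one concrete form of "moving with care": a high band signals the change needs mitigation before the business goes after it.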

“It’s not about moving slowly. It’s about moving with care,” she adds. “It’s an incredibly important part of doing business, especially in this day and age, where if you do something wrong, it’s exposed immediately.”

AI is a great opportunity to drive more of the trend side of resilience to really analyze the services you’re providing. -Andrew VanWagoner, EMEA ServiceNow Platform Lead, KPMG

The million-dollar AI question

Dan Prior, a partner in risk technology at EY, is asked repeatedly, “Where do we start with AI?” Organizations that have done all their traditional risk work are unsure how to move forward to engage in an AI-first world while keeping their risk numbers manageable.

“No one's boiling the ocean,” he says. “They're picking a priority area and saying, ‘Let's go after that.’ It's looking and making sure you have an understanding of what the current state looks like—not the current state in terms of maturity or processes, but the underlying technology.”

EY is seeing clients identify what’s practical and what has tangible return on investment and starting there, he adds. But the firm also sees organizations that are looking for true step change. “Where can I use AI and really augment or do something different?” Prior says. “How you go about that is what a lot of clients are trying to think through because they want to do it responsibly, but they also want to do it at scale from a competitive advantage.”

How AI applies to risk

Geoff Hauge, financial services leader at Edgile, believes AI offers significant opportunities in the risk and security space. “We’re just now starting to see some of the implications,” he says of AI, “how it will enable us, as well as what potential harm it could do to us.”

AI tools “provide a tremendous interpretation of certain threats we’re dealing with,” he adds. “They can interpret scripts. They can interpret a number of different attacks in a context that will give our first-, second-, and third-line support a lot better insight into the options available.”

That could start with humans in the loop and eventually progress to automation owning the decision process, from threat incident to resolution, Hauge says.

The role of governance

“How do you govern AI?” is another question EY’s Prior gets a lot. Governance is the first thing Cohoe from NewRocket recommends to organizations. “Before you really start to go into AI, you really should be looking at what are the potential threats that you could be exposing yourself to by implementing AI,” she says.

“I talk a lot about responsible innovation as being a really critical thing,” she adds. “It used to be that you could just go and experiment, and you would try something out, and everything would be OK. Now that we’re talking about technology, your failures can be exposed very quickly by an enterprising bad actor.”

Part of governance involves identifying the data and if it’s the right data, as well as how the data will be used. “Number one is having the right data,” Prior stresses.

Before you really start to go into AI, you really should be looking at what are the potential threats that you could be exposing yourself to by implementing AI. -Melissa Cohoe, Global Practice Strategist for Security, Risk, and Resilience, NewRocket

Where to start with AI and operational resilience

“AI is a great opportunity to drive more of the trend side of resilience to really analyze the services you’re providing,” says Andrew VanWagoner, ServiceNow platform lead across Europe, the Middle East, and Africa at KPMG and a ServiceNow Certified Master Architect. Getting started with AI in operational resilience requires first understanding your current risk posture, he adds.

“Having a really robust definition of services and really understanding everything you provide to customers end to end is key,” he explains. There are different ways to do that: you can map out your services in an automated fashion, or build up an understanding of the people and technology involved in each service.
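The service mapping VanWagoner describes can be represented as a simple dependency map from each service to the people and technology it relies on. A minimal sketch, where all service and dependency names are hypothetical:

```python
# Illustrative end-to-end service map: each service lists the people and
# technology it depends on. All names here are hypothetical examples.

services = {
    "online payments": {
        "people": ["payments ops team"],
        "technology": ["payment gateway", "fraud-detection model"],
    },
    "customer support": {
        "people": ["support agents"],
        "technology": ["ticketing system", "knowledge base"],
    },
}

def dependencies(service: str) -> list[str]:
    """Return every person and technology a service depends on, end to end."""
    entry = services[service]
    return entry["people"] + entry["technology"]

for name in services:
    print(f"{name} depends on: {', '.join(dependencies(name))}")
```

Even a coarse map like this makes the "robust definition of services" concrete: if any listed dependency fails, you know which customer-facing offering is affected.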

“The biggest benefit you get out of driving resilience is knowing more about what your offerings are at the end,” VanWagoner adds. “And then AI is obviously a maturity step that will help you with better automation in that definition cycle.”

In the rapidly evolving landscape of AI, balancing innovation with risk is crucial. As highlighted by the experts, the path forward lies not in rushing, but in taking calculated steps that align with both business goals and responsible governance.

AI offers transformative opportunities, yet it demands careful consideration of potential pitfalls and a robust risk management strategy. By integrating AI thoughtfully and strategically, organizations can enhance resilience and gain a competitive edge in an AI-first world. The key is moving forward with informed caution—empowering innovation while safeguarding against unforeseen challenges.

Gain more insights in our Enterprise AI Maturity Index.