May 5, 2026 · 4 min
Stop asking if you’re secure. Start asking if you’re in control.
Agentic AI poses new cybersecurity challenges that require a new approach
Ethics and Governance · Thought Leadership
Ben de Bont Chief Information Security Officer, ServiceNow
Amanda Grady VP and GM, AI Platform Security, ServiceNow

Are we secure? It’s a pressing question, and one that cybersecurity leaders know all too well has never had a clear answer. That ambiguity isn't new; it's always been the honest reality of the role. What has changed is what we're being asked to secure.  

For decades, the industry built layered defenses: perimeter controls, patch management, zero-trust architectures, defense in depth. Each model added sophistication, but the underlying assumption held. Security was about protecting systems and governing the humans who used them. 

That assumption no longer holds. AI agents approve transactions, trigger workflows, and take actions without a human in the loop. The systems running our businesses are no longer just tools people use; they’re actors making decisions with real business consequences. The question that matters now is fundamentally different: Can we trust what's running our business? 

If an AI agent has excessive permissions, flawed logic, or compromised inputs, the impact is immediate. Multiply that across thousands of agents operating simultaneously, and the scale of exposure becomes clear. 

We’re no longer securing systems people use. We’re securing systems that act. 


Why the old model breaks  

Security governance has never stood still. The industry moved from perimeter defense to zero trust, from on-premises to cloud, from static controls to identity-centric models.

What breaks now is governance designed around human cadence. Controls were reactive. Reviews were periodic. Approvals assumed a person would be in the loop before consequential decisions were made. AI agents make decisions with real business consequences at a speed and volume that human-paced governance was never meant to handle.  

The threat landscape is also changing just as quickly. We’re now seeing models used to automate exploitation chains and accelerate the discovery of weaknesses in software and infrastructure. Capabilities once limited to highly skilled operators are becoming faster, cheaper, and more scalable. Models are also improving at reverse engineering poor-quality code into exploitable vulnerabilities. 

The attack surface has expanded in every direction. Identities now include machines and AI agents alongside people. Endpoints include connected devices traditional security programs were never designed to manage. The volume of decisions being made without human involvement continues to grow. 

You can’t govern autonomous systems with reactive controls. 

The control gap 

Organizations have invested heavily in visibility. Most can see more of their environment than ever before: assets, identities, configurations, data flows, and agent activity. But seeing risk and controlling risk are two different things.  

In many cases, action is slowed by manual processes, siloed tools, and disconnected teams. The gap between what organizations know and what they can act on is where trust breaks down. 

Continuous control means reducing risk as work happens, not after the fact. When abnormal behavior is detected, access can be restricted immediately. When posture drifts, remediation can happen automatically. Most organizations are still early in that journey. 
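The continuous-control loop described above can be sketched in a few lines. This is a minimal illustration, not a real ServiceNow API: the event fields, risk threshold, and callback names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgentEvent:
    agent_id: str
    action: str
    risk_score: float    # 0.0 (normal) to 1.0 (highly anomalous) -- illustrative
    posture_drift: bool  # configuration has drifted from the approved baseline

def continuous_control(event: AgentEvent, restrict, remediate) -> list[str]:
    """Reduce risk as work happens, not after the fact."""
    actions_taken = []
    if event.risk_score > 0.8:       # abnormal behavior detected
        restrict(event.agent_id)     # restrict access immediately
        actions_taken.append("access_restricted")
    if event.posture_drift:          # posture drifted from baseline
        remediate(event.agent_id)    # remediate automatically
        actions_taken.append("posture_remediated")
    return actions_taken
```

The design point is that `restrict` and `remediate` run inline with the event stream, rather than being queued for a periodic review cycle.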

Trust in autonomous systems cannot be earned through periodic audits or static policies. It has to be built into day-to-day operations through three foundational capabilities: 

  1. Real-time visibility 
  2. Identity as a primitive 
  3. Continuous risk reduction 

Organizations need a current, accurate understanding of what’s happening across systems, identities, assets, and AI agents as conditions change. Without that visibility, risk is often discovered too late.  

Going beyond seeing 

Visibility alone is not enough. Seeing a problem doesn’t mean you can control it. That’s why the second requirement is treating identity as a foundational primitive for AI, not an afterthought borrowed from how we manage people.  

Identity for an AI agent is much more than a username and a set of permissions; it has to carry context about what the agent was built to do, who’s accountable for it, and what it’s been authorized to do at that moment. Permissions tell you what an actor could do. They don't tell you whether a given action should happen. 
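The could/should distinction can be made concrete with a sketch of an agent identity record. The fields and function below are hypothetical, assumed only for illustration: the point is that authorization consults the agent's purpose-scoped, moment-in-time grant, not just its standing permissions.

```python
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    purpose: str             # what the agent was built to do
    accountable_owner: str   # who is accountable for it
    permissions: set[str]    # what the agent *could* do
    task_grant: set[str]     # what it is authorized to do at this moment

def should_act(identity: AgentIdentity, action: str) -> bool:
    # Permissions answer "could this happen?"; the task-scoped
    # grant answers "should this happen right now?"
    return action in identity.permissions and action in identity.task_grant
```

In this framing, an action an agent is permitted but not currently granted to take is denied, even though a pure permissions check would allow it.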

Once organizations can see what’s happening and control who or what can act, the next step is reducing risk in real time. The third requirement is continuous risk reduction. Controls need to be enforced and issues remediated as work happens, rather than waiting for the next review cycle or until after an incident occurs. 

The goal isn’t faster response; it’s less need to respond at all. 


A control layer for the AI era  

In this new era, where AI is moving from a productivity aid to a workforce participant, organizations need something structurally different from the security architectures of the last 20 years. The answer is a neutral control layer for their entire AI footprint: a unified system that governs how AI agents, workflows, and digital identities operate across the enterprise in real time. 

Most AI governance tools are bolt-ons, analytics dashboards watching AI from the outside. What makes ServiceNow AI Control Tower such a game changer for our customers is that we’ve created a control layer that’s woven into the same operational fabric that defines, approves, executes, and audits work.  

Every AI system, model, agent, and Model Context Protocol (MCP) server connection is registered, assessed, and monitored as part of a living operational inventory. Policies are encoded as rules that execute automatically, applied consistently across every interaction and transaction. 
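To make "policies encoded as rules that execute automatically" concrete, here is a minimal sketch of enforcement against a living inventory. The inventory shape, field names, and limit are invented for illustration; they do not represent the actual AI Control Tower data model.

```python
# Hypothetical living operational inventory: every agent is
# registered and assessed before it can transact.
inventory = {
    "invoice-agent": {"assessed": True, "approved": True, "max_amount": 10_000},
}

def enforce(agent_id: str, amount: float) -> str:
    """Encoded rules run on every transaction, applied consistently."""
    record = inventory.get(agent_id)
    if record is None:
        return "block: unregistered agent"        # not in the inventory
    if not (record["assessed"] and record["approved"]):
        return "block: not assessed and approved"
    if amount > record["max_amount"]:
        return "block: exceeds policy limit"
    return "allow"
```

Because the check runs inside the transaction path, an unregistered or unapproved agent is blocked automatically rather than merely flagged on a dashboard.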

This is fundamentally different from passive monitoring. It’s an active, governed system of action that manages cross-platform agentic workflows and enforces agent-specific policies, whether the AI was built internally or deployed by a third party. When a new agent is introduced, the system assesses it, routes it through approval workflows, and applies your governance criteria before it ever acts on your behalf. 

Identity in the agentic age 

One thing is becoming clear from the work we’re doing with customers: Identity for AI is going to look different from identity for humans. A human user's identity is relatively stable: a name, a role, a set of entitlements that change a few times a year.  

An AI agent's identity is dynamic. Its purpose, scope, the data it touches, the systems it acts on, and the human accountable for it can all shift between one task and the next. Treating that as a permissions problem alone underestimates what's needed.  

The organizations getting this right are starting to think about agent identity the way they think about operational state—something that has to be current, contextual, and continuously verified, not just provisioned once and trusted thereafter. 

The shift from insight to action follows a clear path. Visibility produces understanding. Identity produces control. Workflows produce action. Together, they produce continuous, automated risk reduction across the enterprise. 

Agentic governance at the board level  

Security leaders are increasingly called upon to brief their boards of directors on how risks related to agentic AI are being managed. In these conversations, boards aren’t asking for more security details. They’re asking whether the business is in control: 

  • Can we scale AI without creating unmanaged risk?  
  • Do we understand what these systems are doing on our behalf?  
  • If something goes wrong, can we contain it quickly and explain it clearly?  
  • Can we demonstrate that the right controls were in place? 

Those questions are becoming more urgent as AI is used not only to improve operations, but also to accelerate attacks. When exploitation becomes faster and more automated, resilience and response speed become business issues, not just technical ones. 

That’s the shift. Security is no longer measured only by prevention, but by resilience, accountability, and confidence to keep moving. 

If leadership is unsure whether AI systems are governed, decisions slow down. Deployments stall. Opportunities are delayed. 

Trust becomes the gating factor for growth in the AI era. 


How trust unlocks scale 

The systems running our businesses are increasingly autonomous. They make decisions, take actions, and interact with other systems faster than humans can realistically supervise. The security models built for a human-driven enterprise cannot keep up.  

What’s required is a new operating model that embeds visibility, identity governance, and continuous risk reduction into how the business runs—one that treats AI governance as a control layer woven into every workflow, every agent interaction, every digital identity. 

The organizations that solve this first will move faster than everyone else. The ones that don’t will spend their time managing risk they never intended to create. 

If you can’t control AI, you can’t trust it. And if you can’t trust it, you can’t scale it. 

Find out how ServiceNow helps put autonomous security to work for people
