ARTICLE | January 27, 2026 | VOICES

Who's in charge of AI agents?

AI governance will determine which organizations thrive with AI agents and which ones don’t

By Dave Wright, Chief Innovation Officer


It’s a common misconception among enterprise leaders that securing artificial intelligence (AI) is fundamentally a technology problem.

It isn't. It's a strategy and a people problem.

Consider this anecdote from a product manager I recently spoke to. Excited about generative AI's summarization capabilities, he uploaded his employer's entire product roadmap to his personal ChatGPT account to generate an executive summary. The problem? That proprietary roadmap may now become part of the model's training data, effectively putting confidential information within reach of anyone who asks the AI the right questions.

The company's solution to prevent this from happening again wasn't to ban employees from using AI. Instead, it worked with multiple AI vendors to create secure, enterprise-grade instances of every major language model. This allowed employees to use the AI model of their choice while keeping data safe inside corporate boundaries.

The insight: You can't govern what you don't provide.

That was one employee with one prompt. Now add thousands of AI agents operating autonomously across your enterprise, each capable of accessing systems, making decisions, and interacting with other agents—all at machine speed.

In many organizations, AI security failures won’t happen because of technology. Rather, they will occur because no one has decided who's in charge—and of what—until it’s too late.


Risk and security in the AI era

A report by ServiceNow and research partner ThoughtLab spells out the risks. A majority of the 1,000 global senior executives surveyed across industries reported increasing technology and AI risks over the past three years, and they expect those risks to keep rising over the next 12 months.

When asked about specific AI agent risks—malicious use, loss of human oversight, and security vulnerabilities from AI applications—42% of respondents reported only low to moderate confidence in their organization’s ability to mitigate these growing risks.

Companies are deploying agentic AI before they have established mature governance policies. We're already seeing the consequences. Only 1% of enterprises view their AI strategies as mature, according to McKinsey. Meanwhile, IT security provider SailPoint reports that 80% of organizations have seen AI agents act outside intended boundaries, including unauthorized access (39%), restricted information handling (33%), and phishing-related movements (16%).

The pattern is clear: Organizations risk losing control over agentic AI if they don’t have a mature framework to proactively identify and fix the issues.

Before deploying AI agents at scale, organizations must determine risk exposure and oversight accountability. At first, these may seem like technical decisions; however, they're inseparable from business strategy and require the input, oversight, and support of all parts of the enterprise.

Think of this as the “ACID” test for agentic AI governance (a simplified sketch of how the four checks fit together follows the list):

Autonomy: Set clear boundaries specifying which actions agents can execute independently, which require human approval, and which are prohibited entirely. Certain capabilities—particularly elevated access privileges—should remain completely off-limits. Agents capable of escalating privileges are prime targets for attackers because compromising one can unlock full system access. Gartner predicts that “loss of control—where AI agents pursue misaligned goals or act outside constraints—will be the top concern for 40% of Fortune 1000 companies by 2028.”1

Classification: Not every agent request carries the same weight. Governance frameworks should direct routine operations automatically while escalating high-risk requests to human decision-makers. These frameworks should be built into the architecture from the beginning as a key development step, rather than bolted on after the fact.

Identity: When an ADP AI agent requests information from a ServiceNow AI agent, how does the receiving system confirm the request is legitimate? Just as you wouldn't give a new hire access to every system on day 1, agents need proper credentialing, foolproof verification, and appropriate access limits.


Detection: Organizations must establish behavioral baselines so that anomalies trigger alerts. Unexpected API call volumes, unusual data access patterns, or suspicious prompt length changes could signal a data compromise. Security shifts its focus from simply defending perimeters to monitoring and recognizing intrusions within the enterprise.
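To make the four checks concrete, here is a minimal Python sketch of how a single policy gate might wire them together. It is an illustration under stated assumptions, not an implementation: the POLICY table, the TRUSTED_AGENTS credential store, the risk threshold, and the anomaly test are all hypothetical stand-ins for the enterprise identity providers, risk classifiers, and monitoring platforms a real deployment would rely on.

```python
# Minimal, illustrative sketch only: a toy policy gate composing the four
# ACID checks. Every name here (POLICY, TRUSTED_AGENTS, AgentRequest, and so
# on) is hypothetical, not any vendor's API.
from dataclasses import dataclass, field
from statistics import mean, stdev

# Autonomy: which actions run unattended, which need a human, which are banned
POLICY = {
    "summarize_ticket": "autonomous",
    "issue_refund": "human_approval",
    "grant_admin_role": "prohibited",  # elevated privileges stay off-limits
}

# Identity: only credentialed agents may call in (stand-in for real credentialing)
TRUSTED_AGENTS = {"hr-agent-001": "secret-token-abc"}


@dataclass
class AgentRequest:
    agent_id: str
    token: str
    action: str
    risk_score: float  # assumed to come from an upstream risk classifier


@dataclass
class Baseline:
    """Detection: rolling history of an agent's API call volumes."""
    calls: list = field(default_factory=list)

    def is_anomalous(self, count: int) -> bool:
        if len(self.calls) < 5:
            return False  # not enough history to judge yet
        mu, sigma = mean(self.calls), stdev(self.calls)
        return abs(count - mu) > 3 * max(sigma, 1.0)


def handle(req: AgentRequest, baseline: Baseline, recent_calls: int) -> str:
    # Identity: reject requests from unverified agents outright
    if TRUSTED_AGENTS.get(req.agent_id) != req.token:
        return "rejected: unverified agent"
    # Autonomy: prohibited actions never execute, no matter who asks
    ruling = POLICY.get(req.action, "human_approval")  # unknown actions default to a human
    if ruling == "prohibited":
        return "rejected: action is off-limits"
    # Classification: high-risk requests escalate to a human decision-maker
    if ruling == "human_approval" or req.risk_score > 0.7:
        return "escalated: waiting for human approval"
    # Detection: unusual call volume raises an alert instead of silently executing
    if baseline.is_anomalous(recent_calls):
        return "flagged: anomalous activity, alert raised"
    baseline.calls.append(recent_calls)
    return "executed autonomously"


if __name__ == "__main__":
    req = AgentRequest(agent_id="hr-agent-001", token="secret-token-abc",
                       action="issue_refund", risk_score=0.2)
    print(handle(req, Baseline(), recent_calls=12))  # escalated: waiting for human approval
```

The ordering is the point: identity is verified before anything else, prohibited actions never execute, risk determines whether a human is pulled in, and anomaly detection runs even on requests that clear every other gate.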

What agents are permitted to do is a business decision. How organizations enforce those boundaries is a technical one. The two are inseparable.

Who defines these restrictions, and who enforces them?

The answer isn't a single executive or a technical committee with advisory status. The scope of these decisions—spanning security, operations, compliance, and strategy—cannot rest within any single function.

Research from ServiceNow and Oxford Economics' AI Maturity Index reveals that organizations achieving measurable AI benefits have established cross-functional governance councils with genuine executive authority.

To be successful, these councils require the input of three separate enterprise perspectives: business units defining what AI should accomplish, C-suite leadership setting strategic direction, and technical teams implementing safeguards. The key distinction is authority. Advisory committees produce recommendations; governance councils make binding decisions.

Securing agentic AI must become a core principle for the entire enterprise. Organizations with mature, responsible AI frameworks already achieve 42% efficiency gains compared to less mature peers, according to McKinsey. Moving forward, the gap between mature AI-empowered companies and those at earlier stages will only widen.

 

Gartner predicts that “more than 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value or inadequate risk controls.”2 Organizations that build governance structures will deploy ambitious use cases with confidence. Those that don’t will fall further and further behind.

The business promise of AI cannot be fully realized without an accompanying commitment to its safe and secure deployment. Leaders who dump this responsibility onto IT’s desk risk ending up with incomplete security solutions. Enterprises that align technology, strategy, and people across the organization will be best positioned to manage and secure AI agents effectively.


1 Gartner, “AI’s Next Frontier Demands a New Approach to Ethics, Governance and Compliance,” Nov. 10, 2025.

2 Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027,” June 25, 2025.


Author

Dave Wright, Chief Innovation Officer

Dave Wright is the chief innovation officer at ServiceNow.
