It’s a common misconception among enterprise leaders that securing AI is fundamentally a technology problem. It isn't. It's a strategy and a people problem.
Consider an anecdote from a product manager I recently spoke with. Excited about generative AI's summarization capabilities, he uploaded his employer's entire product roadmap to his personal ChatGPT account to generate an executive summary. The problem? That proprietary roadmap may now become part of the model's training data, potentially exposing confidential information to anyone who asks the AI the right questions.
The company's solution to prevent this from happening again wasn't to ban employees from using AI. Instead, it worked with multiple AI vendors to create secure, enterprise-grade instances of every major language model. This allowed employees to use the AI model of their choice while keeping data safe inside corporate boundaries.
The insight: You can't govern what you don't provide.
That was one employee with one prompt. Now add thousands of AI agents operating autonomously across your enterprise, each capable of accessing systems, making decisions, and interacting with other agents—all at machine speed.
In many organizations, AI security failures won’t happen because of technology. Rather, they'll occur because no one has decided who's in charge—and of what—until it’s too late.
A report by ServiceNow and research partner ThoughtLab spells out the risks. A majority of the 1,000 senior global executives surveyed across industries reported increasing technology and AI risks over the last three years, and they expect these risks to keep rising over the next 12 months.
When asked about specific AI agent risks—malicious use, loss of human oversight, and security vulnerabilities from AI applications—nearly half of respondents (47%) reported only low to moderate confidence in their organization's ability to mitigate them.
Companies are deploying agentic AI before they have established mature governance policies. We're already seeing the consequences. Only 1% of enterprises view their AI strategies as mature, according to McKinsey. Meanwhile, IT security provider SailPoint reports that 80% of organizations have seen AI agents act outside intended boundaries, including unauthorized access (39%), restricted information handling (33%), and phishing-related movements (16%).
The pattern is clear: Organizations risk losing control over agentic AI if they don’t have a mature framework to proactively identify and fix the issues.
Before deploying AI agents at scale, organizations must determine risk exposure and oversight accountability. At first, these may seem like technical decisions; however, they're inseparable from business strategy and require the input, oversight, and support of all parts of the enterprise.
Think of this as the “ACID” test for agentic AI governance:
Set clear boundaries specifying which actions agents can execute independently, which require human approval, and which are prohibited entirely. Certain capabilities—particularly elevated access privileges—should remain completely off limits. Privilege escalation bots are prime targets for attackers because compromising one can unlock full system access.
Gartner predicts that “loss of control—where AI agents pursue misaligned goals or act outside constraints—will be the top concern for 40% of Fortune 1000 companies by 2028.”1
Not every agent request carries the same weight. Governance frameworks should direct routine operations automatically while escalating high-risk requests to human decision-makers. These frameworks should be built into the architecture from the beginning as a key development step, rather than bolted on after the fact.
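What this tiered routing might look like in practice can be sketched in a few lines. The action names and risk tiers below are purely illustrative assumptions, not part of any specific product or framework:

```python
# Illustrative sketch of a tiered action policy for AI agents.
# All action names and tier assignments here are hypothetical examples.

PROHIBITED = {"escalate_privileges", "modify_iam_roles", "delete_audit_logs"}
REQUIRES_HUMAN_APPROVAL = {"export_customer_data", "change_production_config"}

def route_action(action: str) -> str:
    """Decide how a requested agent action should be handled."""
    if action in PROHIBITED:
        return "deny"
    if action in REQUIRES_HUMAN_APPROVAL:
        return "escalate_to_human"
    return "auto_execute"
```

The point of encoding the policy this way is that it is evaluated before any action runs, which is what "built into the architecture from the beginning" means in code: the check is part of the request path, not a report generated afterward.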
When an ADP AI agent requests information from a ServiceNow AI agent, how does the receiving system confirm the request is legitimate? Just as you wouldn't give a new hire access to every system on day one, agents need proper credentialing, strong identity verification, and appropriate access limits.
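One common way a receiving system can confirm a request is legitimate is a shared-secret signature check, shown here as a minimal sketch using Python's standard `hmac` module. This is one generic pattern, not how ADP or ServiceNow actually authenticate agents:

```python
import hashlib
import hmac

def sign_request(secret: bytes, agent_id: str, payload: str) -> str:
    """Sign an agent's request with a shared secret issued at credentialing time."""
    message = f"{agent_id}:{payload}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_request(secret: bytes, agent_id: str, payload: str, signature: str) -> bool:
    """Receiving system recomputes the signature and compares in constant time."""
    expected = sign_request(secret, agent_id, payload)
    return hmac.compare_digest(expected, signature)
```

A tampered payload or an unknown agent ID fails verification, which is the machine-speed equivalent of checking a new hire's badge at the door. Production systems typically layer short-lived tokens and scoped permissions on top of a check like this.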
Organizations must establish behavioral baselines so that anomalies trigger alerts. Unexpected API call volumes, unusual data access patterns, or suspicious prompt length changes could signal a data compromise. Security shifts its focus from simply defending perimeters to monitoring and recognizing intrusions within the enterprise.
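A behavioral baseline can be as simple as flagging activity that falls far outside an agent's historical pattern. The sketch below uses a basic z-score test on hourly API call counts; the threshold and metric are illustrative assumptions, and real monitoring systems use richer models:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag the current hourly API call count if it sits far outside
    the agent's historical baseline (a simple z-score check)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold
```

For an agent that normally makes roughly 100 calls an hour, a sudden burst of 500 would trip the alert while ordinary fluctuation would not. The same pattern applies to data access volumes or prompt lengths.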
What agents are permitted to do is a business decision. How organizations enforce those boundaries is a technical one. The two are inseparable.
Who defines these restrictions and who enforces them? The answer isn't a single executive or a technical committee with advisory status. The scope of these decisions—spanning security, operations, compliance, and strategy—cannot rest within any single function.
Research from the ServiceNow Enterprise AI Maturity Index reveals that organizations achieving measurable AI benefits have established cross-functional governance councils with genuine executive authority.
To be successful, these councils require the input of three separate enterprise perspectives: business units defining what AI should accomplish, C-suite leadership setting strategic direction, and technical teams implementing safeguards. The key distinction is authority. Advisory committees produce recommendations; governance councils make binding decisions.
Gartner predicts that “more than 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value or inadequate risk controls.”2 Organizations that build governance structures will deploy ambitious use cases with confidence. Those that don’t will fall further and further behind.
The business promise and potential of AI cannot be fully realized without an accompanying commitment to safe and secure deployment. Leaders who offload this responsibility onto IT alone risk incomplete security solutions.
Enterprises that align and involve technology, strategy, and people throughout the organization will be best positioned to effectively manage and secure AI agents.
Find out how ServiceNow can help you put AI agents to work responsibly.
1 Gartner, “AI’s Next Frontier Demands a New Approach to Ethics, Governance and Compliance,” Nov. 10, 2025
2 Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027,” June 25, 2025