May 4, 2026 | 5 min read

If AI agents are autonomous, do they still need human managers?

Everyone needs a boss to set strategy, clarify priorities, and provide oversight. AI agents are no exception.

Ethics and Governance | Thought Leadership
Sarah Struble, Sr Mgr, Research Strategy, ServiceNow
Thomas McKinlay, Founder and CEO, Science Says
Top takeaways
  • In the era of agentic AI, what managers need to manage is changing to include systems that make decisions on their own.
  • Without governance, AI agents create decision blind spots, lower accountability, and increase the risk of biased recommendations.
  • To succeed, managers must know how to design governance systems where AI agents operate independently and effectively.

This article is grounded in the latest scientific insights on agentic AI oversight, human-AI psychology, and the impact of AI on leadership. Findings and recommendations are based on research from institutions including MIT, The Wharton School, University of Michigan, University of California, and New York University.

When a manager gives an AI agent an objective, it's rarely a simple task on a clean slate. But the agent sees it that way. Every goal carries strategic context that's difficult to fully communicate: priorities still forming, decisions built on background knowledge the agent doesn’t have, directions that haven't been announced yet. And when objectives change, the agent doesn't know.

This creates a persistent disconnect between what a manager intends and how an AI agent acts on it. According to the ServiceNow Enterprise AI Maturity Index 2026, 51% of employees say they find it difficult to understand how autonomous AI agents arrive at their decisions.

Managers cannot correct judgment they cannot see. That makes proper governance crucial: systems should flag decisions that fall outside their context and boundaries, rather than depending on managers to spot errors after the fact.

“AI agents are missing the context and judgment to operate without human managers,” says Clare Snyder, an assistant professor at New York University Stern School of Business. “Our goals are not static, and AI agents do not necessarily understand that or share our priorities.”

Consider this scenario: A retailer uses an AI agent to help manage some of its marketing campaigns. The agent keeps suggesting the retailer run promotions to counter competitors’ aggressive pricing strategies.

On the surface, it sounds logical. But the agent can't know that leadership has spent months repositioning the brand as premium, and promotions would go against that. That context lives in strategy meetings the agent never attended, not in a dataset. A manager would have caught it immediately.

The solution is not limiting the agent’s autonomy but giving it defined goals, constraints, and strategic context before deployment. This is where management’s focus is shifting.

AI agents don’t eliminate the need for management. They relocate it. While human work relies on shared experience, AI requires explicit instruction. Paul Leonardi Professor, Tech Mgmt, UCSB

The accountability gap

As AI makes more decisions, people naturally stop seeing themselves as the decision-makers. Responsibility moves from "I decided" to "AI decided," not explicitly, but gradually.

Research shows this makes employees less likely to check AI's work, catch errors, or question biased outputs. They begin treating AI as an independent co-worker rather than a system they own.

In a global survey of 2,102 leaders across 21 industries, MIT found that one of the top challenges in AI adoption isn't technical; it’s knowing what to automate and what still needs a human owner.

“Even when agents perform well, responsibility cannot be delegated,” says Paul Leonardi, professor of technology management at the University of California, Santa Barbara, referring to his research.

“Someone must own the outcome. The organizations that struggle are the ones where people start treating the system as the decision-maker rather than as an input into decision-making.”

This tension creates a real management challenge: deciding what to fully delegate to AI and what to keep under human ownership. The ServiceNow Enterprise AI Maturity Index 2026 found that 53% of employees already worry that AI agents are making them lose control over their decisions.

That's not a people problem but a design one, and it signals that accountability wasn't built into the system before AI agents were deployed.

AI can misunderstand goals and priorities. Managers need to oversee the work and make sure it satisfies the goals of the organization and its role within it. Karen Feigh Professor, Georgia Institute of Technology

When pressure turns into over-reliance

The need for governance becomes most visible when decisions are high stakes. Under time pressure, people default to AI recommendations without questioning them.

For example, in lab experiments at New York University and the University of Michigan, algorithm reliance jumped from 39% to 48% under high pressure, driven by speed, not trust. The same study found that as workloads increase, people become more likely to default to AI advice without scrutiny. The moments that demand the most judgment are exactly when employees are least likely to exercise it.

This is precisely where governance is most valuable. Managers could define in advance when autonomy applies and when escalation is required, so human attention is allocated where it genuinely matters.

“AI agents improve performance when they are used less like tools of optimization and more like tools of exploration,” Leonardi says. “When they are used to speed up execution, they often reinforce existing blind spots.”

Why it keeps getting harder

These gaps aren’t static. Without proper governance, risks compound as technology improves. As AI agents make fewer errors, people trust them more and monitor them less.

This is already the case for some organizations. For example, 52% of employees say they trust the accuracy and reliability of AI agents' outputs, according to ServiceNow’s Enterprise AI Maturity Index. But they also report not understanding the logic behind the decisions.

This increases risks. Research from the Wharton School of the University of Pennsylvania shows that without the right governance in place, the cycle is only interrupted after something goes wrong. 

59% of organizations are using agentic AI. But AI agents don't just need human managers. They need organizations that have prepared their people to manage them. Brian Solis VP, Head of Global Innovation, ServiceNow

Recommendations

The organizations that avoid this will be the ones whose managers make the right design decisions before deployment and build systems that stay governed as AI capabilities grow. We recommend three ways to do this:

1. Redesign workflows around AI

As agents take on more tasks, workflows must be redesigned to reflect a new division of labor between humans and systems. Organizations should redefine roles and responsibilities, redesign decision flowcharts, and build processes around autonomous decision-making. 

“Managers must make sure agents and humans work well together,” says Samantha Keppler, assistant professor of technology and operations at the University of Michigan Stephen M. Ross School of Business.

“They must carefully consider when the agent can add value, and where they could be obstacles instead. The downside of putting AI agents in the wrong positions tends to be much, much bigger than the upside of putting AI agents in the right positions.”


2. Make an agent’s decision logic always visible

The person accountable for an AI agent’s decisions needs to see the reasoning of the agent and the information the agent based its decisions on. Oversight and governance cannot be a one-time setup.

Managers need real-time visibility (e.g. live dashboards) into how AI agents make decisions, not just their outputs. For example, a manager overseeing a pricing agent should be able to question how demand, competition, and margins were weighted. 
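One way to make an agent's reasoning inspectable is to have it emit a structured record alongside each recommendation, capturing the factor weights and inputs it used. The sketch below is purely illustrative; the class name, fields, and weights are hypothetical, not any specific product's API.

```python
from dataclasses import dataclass

# Hypothetical sketch: a record a pricing agent could emit with each
# recommendation, so a manager can question how demand, competition,
# and margin were weighted. All names and values are illustrative.

@dataclass
class DecisionRecord:
    recommendation: str
    factor_weights: dict   # e.g. {"demand": 0.5, "competition": 0.3, "margin": 0.2}
    inputs: dict           # the data the agent based the decision on

    def explain(self) -> str:
        # rank factors by weight, highest first, for a human-readable summary
        ranked = sorted(self.factor_weights.items(), key=lambda kv: -kv[1])
        factors = ", ".join(f"{name} ({w:.0%})" for name, w in ranked)
        return f"{self.recommendation}: weighted by {factors}"

record = DecisionRecord(
    recommendation="Lower price by 5%",
    factor_weights={"demand": 0.5, "competition": 0.3, "margin": 0.2},
    inputs={"demand_index": 0.82, "competitor_price": 19.99, "margin_pct": 0.31},
)
print(record.explain())
# → Lower price by 5%: weighted by demand (50%), competition (30%), margin (20%)
```

Surfacing records like this in a live dashboard gives the accountable manager the weighting behind each decision, not just the output.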

“It’s like working with a new or very junior employee,” says Karen Feigh, a professor at Georgia Institute of Technology. “So you need to provide context that may seem obvious to you and build in checkpoints to make sure that the AI is performing as intended.”

3. Build governance in before deployment

If work is being automated, then oversight needs to be automated as well. Otherwise, managers risk having to constantly monitor agents to ensure they’re operating properly, and any efficiency gains immediately evaporate. To set up appropriate guardrails, they need to define: 

  • What decisions can be automated, and under what conditions
  • What permissions and constraints apply
  • When escalation to a human is required
  • How decisions can be audited after the fact
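The four guardrail questions above can be expressed as a simple policy object that an agent runtime checks before acting. This is a minimal sketch under assumed names (`AgentPolicy`, `Decision`, and the thresholds are all hypothetical), not a real governance API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of pre-deployment guardrails: which actions are
# automated, what constraints apply, when to escalate to a human, and
# an audit trail for after-the-fact review. Names are illustrative.

@dataclass
class Decision:
    action: str          # e.g. "apply_discount"
    amount: float        # monetary impact of the decision
    confidence: float    # agent's self-reported confidence, 0.0-1.0

@dataclass
class AgentPolicy:
    allowed_actions: set          # what decisions can be automated
    max_amount: float             # permission/constraint: spend ceiling
    min_confidence: float         # below this, escalate to a human
    audit_log: list = field(default_factory=list)

    def evaluate(self, decision: Decision) -> str:
        """Return 'auto' when the decision is in bounds, else 'escalate'."""
        outcome = "auto"
        if (decision.action not in self.allowed_actions
                or decision.amount > self.max_amount
                or decision.confidence < self.min_confidence):
            outcome = "escalate"
        # every decision is recorded so it can be audited after the fact
        self.audit_log.append((decision, outcome))
        return outcome

policy = AgentPolicy(
    allowed_actions={"apply_discount", "adjust_price"},
    max_amount=500.0,
    min_confidence=0.8,
)
print(policy.evaluate(Decision("apply_discount", 200.0, 0.9)))   # → auto
print(policy.evaluate(Decision("issue_refund", 200.0, 0.95)))    # → escalate
```

Because the oversight check runs on every decision, monitoring is automated along with the work, rather than depending on a manager to watch the agent continuously.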

“Design agents like employees you’d like to hire and manage,” recommends Brian Solis, vice president and head of global innovation at ServiceNow. “Give them a job description, permissions, goals, escalation paths, and measurable outcomes. Onboard them like you would a high-performance candidate.”

Find out how ServiceNow can help you put responsible AI to work for people.
