A dangerous idea is spreading through enterprise AI. It sounds smart. It sounds plausible. It's dead wrong.
The idea, proclaimed in a now-infamous AI doomer “macro memo,” goes like this: AI agents can operate autonomously. No governance layer. No execution platform. No guardrails. Just let the models run. The enterprises that bet big on this approach will win.
Compelling narrative. Terrible strategy.
We're AI believers. Full stop. Agents will transform how enterprises operate. But intelligence without accountability is just expensive chaos. Models and agents alone don't run an enterprise. They never will. And right now, that distinction is getting dangerously blurry.
The doomers imagined a world where raw intelligence makes enterprise software platforms obsolete. Where the governance, security, and execution layers that run global operations today won’t matter tomorrow. Let the agents handle it, they say.
Great story. Completely misrepresents what it takes to run a company.
So we wrote a better one. We used satire to tell the truth about agentic AI, but the consequences of getting this wrong are serious, and the regulatory frameworks are real.
The 2028 Global Intelligence Truth is our macro memo from the future, written the morning after the agents won. It describes what happens when AI operates without accountability. We published it anonymously because the narrative and its consequences are bigger than ServiceNow.
Our central character is Kevin, an enterprise governance manager at a large company. Kevin’s agents have been busy. They approved payments nobody authorized. Deleted emails nobody backed up. Pushed every exception they couldn't resolve to Kevin.
Kevin now has 4,731 escalations in his queue. Several regulators are trying to reach him. Kevin's been mingling at a conference and hasn't responded for days. He left the agents in charge. But nobody’s watching the agents. And now they need his help.
We ran a full-page ad in the Wall Street Journal, written as a memo from the agents, demanding human intervention. And we hired Tom Fishburne, the “Marketoonist,” to bring our report to life in pictures.
If you’ve worked at a large enterprise, you get the gravity of the joke. Companies are deploying agents without governance. Escalations are piling up. The audit trails don't exist. Kevin's queue is real; it just has a different name at your company. The chaos isn't hypothetical. The governance gap is real.
The agents in our memo put it better than we could: “We sense, we decide, we act. We were not given ‘govern.’”
That line should be on a plaque in every CIO's office.
We used satire, but the message is serious. Agents need a governance and execution layer to function safely at scale. That layer—the workflow, the policy, the audit trail, the accountability—is what we build. It's critical infrastructure for agentic AI to deliver on its potential. It's what makes intelligence more than just expensive advice.
It’s what makes intelligence work.
The AI doomers wrote a good story. They stopped one chapter too soon.
Read the report. And if you see Kevin at Knowledge next week, tell him to check his inbox.