
Note from the Author:
This article is the full-text reprint of Deitsch’s DREAM – Design Reference for Enterprise AI Maturity, originally published at: https://david.deitsch.org/p/deitschs-dream. It’s shared here to make the framework easily accessible to the ServiceNow Community and to support deeper conversations around orchestrated AI maturity in enterprise environments.
To access the official PDF—and to follow future publications in the DREAM series, including Before the Agent Acts—please visit david.deitsch.org and subscribe for updates.
Deitsch's DREAM v1.0
Design Reference for Enterprise AI Maturity
By David Deitsch
Technology Workflows Architect
Executive Summary
While AI is no longer theoretical, many organizations are still struggling to realize its full value. The technology is available, but the outcomes often fall short—not because of a lack of access, but because of how the work is structured. Too many teams rush toward automation without recognizing the most critical middle layer: reliable suggestion. In many cases, they aren’t even aware they’re skipping it. The absence of structured suggestion is invisible—until execution fails or trust breaks down.

Most enterprise AI adoption to date has focused narrowly on conversation—summarization, rewriting, and retrieval—repeated across a surprisingly limited set of use cases. That’s only one-third of the model. Until organizations begin to treat suggestion and execution as distinct areas of design, they’ll continue building shallow wins on top of untapped potential.

This white paper introduces Deitsch's DREAM (Design Reference for Enterprise AI Maturity), a framework that re-centers the conversation around how real organizations—today—should build AI systems that scale. It also presents three design principles that help organizations move beyond conversation: how to set trust boundaries between suggestion and execution, how to architect AI for reusability, and how to rethink where workflows need to begin.
Defining the DREAM Model
Conversation → Suggestion → Execution
We’ve entered a golden age of conversational AI, thanks to foundation models like GPT-4. These leading large language models (LLMs) didn’t just improve conversational AI—they completed its first major milestone. For the first time, enterprise users could engage in natural dialogue with a system that understands nuance, context, tone, and intent. That leap is what made this model possible: it showed us what happens when a stage reaches maturity.

But suggestion—the ability to interpret complex input and consistently propose actionable next steps—is where most AI efforts stall. Without rock-solid suggestion, execution isn’t just risky—it’s largely out of reach. The kinds of autonomous changes leaders often imagine, like automatically deactivating users in a production identity system, remain years away for most organizations. The systems aren’t ready—and neither is the trust.

This model reframes AI maturity in three stages:
- Conversation: Natural language interaction and context awareness
- Suggestion: Reliable, repeatable recommendations
- Execution: Delegation of tasks with confidence
Until suggestion becomes enterprise-grade, execution at scale remains out of reach—not just technically, but operationally. The trust isn’t there. The architecture isn’t there. The leap too many are imagining still depends on a bridge that hasn’t been built.
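One way to make the staging concrete is to treat maturity as an explicit gate in code. The sketch below is not part of the original framework; the `Workflow` type and the example workflow are invented for illustration. It simply lets a workflow operate only at or below the stage it has demonstrably earned:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    CONVERSATION = 1  # natural-language interaction and context awareness
    SUGGESTION = 2    # reliable, repeatable recommendations
    EXECUTION = 3     # delegation of tasks with confidence

@dataclass
class Workflow:
    name: str
    mature_through: Stage  # the last stage this workflow has earned trust for

def allowed(workflow: Workflow, requested: Stage) -> bool:
    """A workflow may operate only at or below its demonstrated maturity."""
    return requested.value <= workflow.mature_through.value

# Hypothetical example: suggestions are solid, but execution is not yet trusted.
ad_permissions = Workflow("AD permission grant", mature_through=Stage.SUGGESTION)
print(allowed(ad_permissions, Stage.SUGGESTION))  # True: propose the change
print(allowed(ad_permissions, Stage.EXECUTION))   # False: a human still executes
```

The point of the sketch is the gate itself: execution is not a feature to switch on, but a stage a workflow graduates into.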
Three Design Principles for Enterprise AI Value
A Note on Gartner’s AI Maturity Model
The Gartner AI Maturity Model is widely used to assess where organizations stand in their adoption journey—from awareness to operationalization. Deitsch's DREAM doesn’t compete with that—it complements it. If Gartner’s model tells you where you are, Deitsch’s DREAM framework tells you whether you’re moving. And more importantly, how to stop wasting time if you’re not.
The ideal end state of AI execution sounds magical: tell the system what you want, and it figures out the rest. It knows what information you need. It understands how to get it. It gains access, takes action, and reports back. When that day arrives, architecture may not even be visible—it will just work. But we’re not there yet. And until we are, what matters is how we architect using the tools we have.

The model ends there—but the work doesn’t. To make AI maturity actionable, we need principles to guide how we build. What follows are three design principles that help organizations move from theory to architecture—and from potential to value:
Principle 1: Granularity
Granularity is the design bridge between suggestion and execution. Any time the AI system cannot be trusted to decide what comes next, a new boundary must be introduced. Pause the flow. Reset context. Start a new agent.
This approach doesn’t slow us down—it keeps us aligned with how enterprise trust is earned. Execution can’t leap over uncertainty. Granular architecture identifies that uncertainty—and builds a safe handoff around it.
Example: Active Directory Permission Request
Imagine an operator asks the AI to grant a user specific permissions in Active Directory. In a naive design, the AI might propose a method—and immediately execute it.
If that workflow isn’t reliable, we need to locate a Granularity Inflection Point.
That’s where we insert a checkpoint: after Suggestion, before Execution. This pause allows a human to review, confirm, or override the action. Suggestion and Execution are no longer fused.
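As a rough sketch of that checkpoint (the function names, the `Proposal` type, and the console approval are hypothetical stand-ins, not real Active Directory or ServiceNow APIs), the inflection point can be as simple as returning the proposal for review instead of executing it in the same call:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    user: str
    permission: str
    method: str  # the AI's suggested way to grant the permission

def suggest_ad_change(request: str) -> Proposal:
    # Stand-in for the Suggestion stage; a real system would call the model here.
    return Proposal(user="jsmith", permission="GroupWrite",
                    method="add jsmith to the 'FileShare-RW' group")

def apply_ad_change(proposal: Proposal) -> None:
    # Stand-in for the Execution stage, the part we are not yet trusting the AI with.
    print(f"Executing: {proposal.method}")

def handle(request: str) -> None:
    proposal = suggest_ad_change(request)  # Suggestion ends here.
    print(f"Proposed: {proposal.method}")
    # Granularity Inflection Point: a human confirms before anything changes.
    if input("Approve this change? [y/N] ").strip().lower() == "y":
        apply_ad_change(proposal)
    else:
        print("Held for review; nothing was changed.")
```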
And here’s where granularity gives us even more power:
If we know the AI struggles to make the right suggestion, we don’t throw out the workflow—we move the trust boundary earlier. We guide the AI with scripts, templates, or curated tools. That way, even if it can’t improvise yet, it can still suggest the right method—every time.
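A minimal sketch of moving that trust boundary earlier, assuming a curated catalog of vetted scripts (the catalog entries below are invented): the model's Suggestion stage may only pick from the catalog, so even a model that cannot improvise safely still lands on a right answer.

```python
# Vetted, pre-approved methods the model is allowed to suggest.
# The catalog entries are invented for illustration.
CURATED_METHODS = {
    "grant_group_membership": "Vetted script: add a user to an existing AD group",
    "grant_share_access": "Vetted script: add a user to a file-share ACL",
}

def accept_suggestion(llm_choice: str) -> str:
    """Accept the model's pick only if it names a method from the curated catalog."""
    if llm_choice not in CURATED_METHODS:
        raise ValueError(f"{llm_choice!r} is not in the curated catalog")
    return CURATED_METHODS[llm_choice]
```

The space of wrong answers has been designed away rather than guarded against.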
Granularity doesn’t just protect execution. It defines how we scale AI while trust is still catching up.
Principle 2: Reusability
The more often a task appears across workflows, the more valuable it becomes to isolate that task into its own agent. To increase reusability, broaden the applicability, narrow the scope, and reduce required inputs. A reusable agent does one thing well—and does it often.
Consider this example: an operator asks, “Where is the device with MAC address 48:e1:5c:ad:3d:a2?”
The task is routed through a series of agents. One of them takes a network device name (like dave_ofc_sw11) and converts it into a human-readable location (like “David’s House”).
That’s reusability in action.
The second agent, the one that maps network gear to a physical location, isn’t specific to MAC addresses—it’s a small, focused task that adds value across dozens of use cases. That’s what makes it reusable: it’s not designed around the task that triggered it, but around the value it can provide to others.
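A minimal sketch of such an agent, under the assumption of a site-prefix naming convention (everything here is invented for illustration; a production version would query a CMDB or network inventory instead):

```python
# A small, reusable agent: one narrow task, one input, one output.
LOCATION_NAMES = {"dave_ofc": "David's House", "hq_fl3": "Headquarters, floor 3"}

def device_to_location(device_name: str) -> str:
    """Map a device name like 'dave_ofc_sw11' to a human-readable location."""
    site = "_".join(device_name.split("_")[:-1])  # drop the device suffix ('sw11')
    return LOCATION_NAMES.get(site, f"Unknown site for {device_name}")

# Reusable across triggers: MAC lookups, incident enrichment, asset audits.
print(device_to_location("dave_ofc_sw11"))  # -> David's House
```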
Reusability isn’t a bonus—it’s a design objective. The more reusable components we create, the faster future architectures will come together. Reusability is how we escape one-off thinking and build toward orchestration at scale.
For a full walk-through of this MAC address orchestration flow, see Watch: Agentic AI Executes Real Work in ServiceNow.
Principle 3: Applicability
Most enterprise workflows begin where the system expects structured input. But what if that’s not where the real work begins?
Conversational AI excels at interpreting messy, incomplete, or misformatted requests. That’s not a limitation—it’s an opportunity. In the MAC address use case, AI doesn’t just help answer the question; it helps reinterpret how the question is asked. A user might enter an address in the wrong format or not understand what they’re even looking for. Traditional systems would reject the input. A language model adapts. It recognizes the intent, corrects the format, and makes the workflow viable.
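For the structured tail end of that interpretation, a deterministic sketch helps show what "corrects the format" can mean (the language model handles the genuinely messy part, recognizing intent; this regex cleanup is just one illustrative piece):

```python
import re

def normalize_mac(raw: str) -> str | None:
    """Accept messy input ('48e1.5cad.3da2', '48-E1-5C-AD-3D-A2', ...) and
    return the canonical colon-separated form, or None if unrecognizable."""
    digits = re.sub(r"[^0-9a-fA-F]", "", raw)  # strip separators and noise
    if len(digits) != 12:
        return None  # not a MAC address; fall back to conversational clarification
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()

print(normalize_mac("48e1.5cad.3da2"))  # -> 48:e1:5c:ad:3d:a2
```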
That’s what makes applicability so important. The more we rethink where workflows begin, the more value we can unlock from AI’s strongest capabilities. If conversation is where AI is already mature, we should move our problems closer to it—not ask AI to stretch into areas where it isn’t ready. That’s how we uncover value we’ve been stepping over.
Proving the Model: A Week Inside the System
In just one week, the framework wasn’t just produced—it was demonstrated. Within a few days, Deitsch's DREAM framework was developed, tested, and refined through sustained interaction with GPT-4. The speed wasn’t the story—it was how the work happened. The process was not about automation, but collaboration. Not about outsourcing thought, but about accelerating it.
When AI supports communication—its clearest strength—its suggestions are accepted with fluency and confidence. Iteration shrinks. Convergence accelerates. What emerges isn’t just text—it’s aligned thought. Structured, resonant, and faster than either party could achieve alone.
Contrast that with AI applied to tasks like scripting, debugging, or configuration. There, the dynamic shifts. Testing, troubleshooting, and trial-and-error dominate. Suggestion falters. Execution, without mature suggestion, slows to a crawl.
That contrast is the real insight. This paper came together so quickly because the process stayed inside AI’s current zone of strength: high-quality suggestion in a domain where suggestion is enough. This wasn’t automation. It was augmentation. GPT-4 didn’t create this model. I did. But I created it through the system—not by chance, but by design.
Looking back through this paper, I can see that none of the original thought came from AI. Every core idea—Conversation, Suggestion, and Execution, and the principles of Granularity, Reusability, and Applicability—originated with me. What ChatGPT gave me was acceleration: the ability, through conversation, to clarify, structure, and express those ideas in hours, not weeks. That’s the shift. And it’s why the work speaks louder than the timeline.
Origin Story: The Confidence to Publish
This framework didn’t come from a workshop, a lab, or a directive. The catalyst came in January, when ServiceNow expanded its Pro Plus offerings and brought domain-specific generative AI to real workflows across the platform—from ITSM to HRSD to App Engine. That signaled the shift from theory to application. In my role, that shift wasn’t abstract—it was personal. I had to make the technology real for customers. So I started building a language in my head to explain what I was just beginning to understand.
But I never expected to express it clearly—let alone publish it. What changed? I did the work, with help. My sessions, with all types of LLMs, weren’t just for productivity. They became part of a wider creative process: a consumer dispute, a book in progress, a public-facing brand. Those weren’t distractions—they were reps. They provided the clarity and momentum to return to my own domain and publish something that would speak directly to my peers. Something that could carry my name.
Across these activities, ChatGPT didn’t give me permission or create new thoughts. It gave me rhythm, structure, and a mirror. It helped me test, refine, and validate every line. That kind of back-and-forth is something even the best colleagues don’t always have time to offer. And without it, this paper probably wouldn’t exist.
Still, I recognize that I’m placing a great deal of trust in an AI system. And somehow, my willingness to do so coincided with the moment I knew the framework was real. Not because a system told me so, but because I could finally explain it to someone else. Cleanly. Clearly. On my own terms. And with a signal strong enough to stand on its own.
That was enough to stop waiting. Enough to publish. Enough to know this wasn’t just any old idea—this was the one I needed to share.
Originally published at david.deitsch.org. Visit for updates, future publications, or the downloadable PDF.
Around here, however, we don't look backwards for very long. We keep moving forward, opening up new doors and doing new things, because we're curious...and curiosity keeps leading us down new paths. - Walt Disney (attributed)