Most enterprise teams that license AI do not use it.
That is not a complaint. It is the most interesting problem in the field right now.
The gap between "we have AI" and "AI is working for us" is not a technology gap. It is a decision gap. And once you see it that way, the path forward becomes surprisingly clear.
Welcome back to our series from the AI Center of Excellence (CoE) team at ServiceNow.
I'm Vyoma Gajjar, Senior Principal AI Architect on the PACE team.
My work centers on AI activation strategy: helping enterprise customers close the gap between licensing and realized value. Through hands-on engagements and advisory work, our team has built a clear picture of what separates AI investments that deliver from those that stall.
What you will walk away with:
- A concrete picture of what AI Agents do inside production environments: real workflows, not theory
- Proof points with specific numbers from enterprises already seeing results
- The three blockers that stall most activations and exactly what to do about each one
- A direct action you can take this week with what you have already licensed
AI Agents Are Not Robots. They Are Delegation at Scale.
There is a useful mental model here. Think about what actually happens when an employee submits an IT request today. It lands in a queue. A human reads it. That human categorizes it, assigns it, routes it. Often that person needs more information. The ticket bounces back. Eventually it resolves.
Notice what each of those steps actually is. It is cognitive work following a predictable pattern. Reading comprehension. Classification. Lookup. Decision-making against known criteria. These are not creative acts. They are pattern-matching acts performed thousands of times per month by skilled people who could be doing higher-order work.
This is the core insight: the service desk queue is not a workflow problem.
It is a delegation problem.
The work is delegatable, but until now, there was nothing capable enough to delegate it to.
An AI Agent in the ServiceNow context receives a request, understands what it means, determines the right steps, coordinates across workflows, and completes the task without requiring a human handoff at every stage. It reads. It classifies. It checks for known resolutions. It applies a fix when one exists. When a human is needed, it routes with full context already assembled. No half-formed handoff. No missing fields. The human gets a complete picture, not a ticket that says "laptop issue."
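In spirit, that delegation loop can be sketched in a few lines of code. Everything below is a hypothetical illustration: the function names, categories, and the toy keyword classifier are invented for this sketch and are not ServiceNow APIs. The point is the shape of the loop: read, classify, check for a known fix, and either resolve or route with full context already assembled.

```python
# Illustrative sketch of the AI Agent triage loop described above.
# All names are hypothetical; a real agent replaces the toy keyword
# classifier with LLM-based intent recognition.

KNOWN_FIXES = {
    "password_reset": "Trigger self-service credential reset",
    "vpn_client": "Push reinstall of the VPN client package",
}

def classify(description: str) -> str:
    """Toy stand-in for intent recognition."""
    text = description.lower()
    if "password" in text:
        return "password_reset"
    if "vpn" in text:
        return "vpn_client"
    return "unknown"

def triage(ticket: dict) -> dict:
    """Resolve autonomously when a known fix exists; otherwise route
    to a human with context (summary, missing fields) pre-assembled."""
    category = classify(ticket["description"])
    if category in KNOWN_FIXES:
        return {"status": "resolved", "category": category,
                "action": KNOWN_FIXES[category]}
    return {"status": "escalated", "category": category,
            "context": {"summary": ticket["description"],
                        "missing_fields": [k for k in ("asset_id", "urgency")
                                           if k not in ticket]}}

print(triage({"description": "I forgot my password again"}))
print(triage({"description": "Laptop fan is grinding", "urgency": "high"}))
```

Note the escalation branch: even when the agent cannot resolve, it hands off a complete picture rather than a ticket that says "laptop issue."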
This is not theoretical. This is running in production today. And the results are worth examining carefully, because they reveal something about the economics of this shift that is easy to miss.
The Results That Changed How I Think About Scale
The proof points that have stayed with me are not the flashy ones. They are the operational ones. The reason is that operational numbers compound. A 15-minute improvement on a single incident is noise. A 15-minute improvement across thousands of monthly incidents is a structural change in how your organization works. That distinction matters.
🔹 Large enterprise, ITSM deployment, thousands of incidents per month. After activating AI Agents: 33% reduction in mean time to resolve and 18% fewer incidents requiring specialist escalation. Per major incident, the team recovered 15 to 25 minutes of resolution time. The important thing to notice here is what that 18% escalation reduction actually means. It means specialist engineers are spending less time on tickets that did not need them in the first place. That is not an efficiency gain at the margins. It is a reallocation of your most expensive and scarcest resource toward the problems that actually need it.
🔹 Technology services organization, Now Assist + AI Agents across CSM and ITSM. Outcomes: greater than 47% decrease in mean time to resolution, 46% reduction in workflow bottlenecks, and 20% improvement in self-service rates. That last number deserves a closer look, because it reveals the economic model of AI Agents working correctly. Every self-served request resolves faster for the employee and costs the business nothing beyond the initial investment. The marginal cost of resolution approaches zero for categories the agent can handle. As self-service rates climb, total cost of service delivery drops while employee experience improves. These are not competing objectives when AI handles the routine work.
🔹 ServiceNow's own deployment, running across ITSM, CSM, HR, Finance, and Supply Chain. Published outcomes: 90% of IT support requests self-served, 7x faster incident closure, 20x faster resolution of HR inquiries, and $355M+ in value driven by AI Agents across the enterprise. These are production outcomes, not projections. They represent what happens when an organization moves through the decision gap and actually activates what it has licensed. The 90% self-service number is worth sitting with. It means the default mode of IT support shifted from "a human processes your request" to "the system resolves your request and a human handles the exceptions."
Which Products Are Doing Which Work
One of the most common questions I get is "where does Now Assist end and AI Agents begin?" It is worth being precise about this, because the answer affects how you plan your activation and what you should turn on first.
Now Assist for ITSM generates AI-assisted responses, summarizes incident history, and uses AI Search to surface answers from your existing knowledge base at the point of need. Think of it as augmentation: the human is still in the loop, but the blank-page problem disappears. An agent no longer stares at a 47-message incident thread trying to reconstruct what happened. The AI reads the thread and produces a summary. The agent verifies and acts. It installs on top of workflows already in place, though it requires Now Assist licensing separate from a base ITSM subscription.
Now Assist for CSM gives customer-facing agents AI-generated case summaries, drafted response suggestions, and generated resolution notes. Every draft requires agent review before it goes out. This is a deliberate design choice: keeping a human accountable for customer-facing communication while removing the cognitive load of composing from scratch under volume pressure. The agent's job shifts from "write the response" to "verify and send the response." That is a meaningfully different task with meaningfully lower cognitive cost.
AI Agents complete tasks autonomously within defined parameters. This is the shift from augmentation to delegation. A concrete example: the account unlock workflow. The platform receives the request, verifies identity through your directory integration (Active Directory, Okta, Azure AD), executes the reset, confirms with the employee, and closes the ticket. No human touches it. This requires Virtual Agent for the conversation layer and an IntegrationHub spoke connecting to your directory service. The important architectural point is that the AI Agent is not just answering a question. It is executing a multi-step workflow with integrations, verifications, and confirmations. It is doing the work, not describing the work.
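The account-unlock flow above can be made concrete with a short sketch. This is a hypothetical illustration, not ServiceNow code: the in-memory `DIRECTORY` and the function names stand in for a real directory integration reached through an IntegrationHub spoke. What matters is the sequence: verify, execute, confirm, close, and escalate only when a step fails.

```python
# Hypothetical sketch of the multi-step account-unlock workflow.
# DIRECTORY and all function names are illustrative stand-ins for a
# real directory service (Active Directory, Okta, Azure AD).

DIRECTORY = {"jdoe": {"verified": True, "locked": True},
             "guest": {"verified": False, "locked": True}}

def verify_identity(user: str) -> bool:
    """Stand-in for an identity check against the directory."""
    return DIRECTORY.get(user, {}).get("verified", False)

def execute_unlock(user: str) -> None:
    """Stand-in for the integration call that unlocks the account."""
    DIRECTORY[user]["locked"] = False

def handle_unlock_request(user: str) -> str:
    """Verify, execute, confirm, close -- escalate only on failure."""
    if not verify_identity(user):
        return "escalated: identity not verified, routed with context"
    execute_unlock(user)
    if DIRECTORY[user]["locked"]:          # confirm before closing
        return "escalated: unlock did not take effect"
    return "closed: account unlocked and confirmed to employee"

print(handle_unlock_request("jdoe"))
print(handle_unlock_request("guest"))
```

Even in this toy form, the architectural point holds: the happy path closes the ticket with no human touch, and every failure path escalates with a reason attached.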
Now Assist in Virtual Agent provides the LLM-powered front end for natural language interaction. This is where the practical significance of the shift from rules-based NLU to LLM-based intent recognition becomes visible. Traditional NLU-based Virtual Agent required manually configured intents and utterances. Every new use case meant new training data, new testing, new maintenance. The LLM assistant determines intent from a plain-language description. An employee who types "my laptop is running slow and I have a presentation in an hour" gets triaged correctly, not sent to a generic hardware category. The system understands urgency, context, and specificity without being explicitly trained on that exact phrasing. Organizations migrating from the NLU model are replacing it entirely rather than layering on top. That is a signal worth paying attention to.
Now Assist for SPM and Process Mining extend into portfolio management and workflow analysis. I want to call out something that gets overlooked here: Process Mining lets you identify where operational friction actually lives before you apply AI on top of it. This sequencing matters more than most teams realize. If you automate a broken process, you get faster broken outcomes. If you map friction first, then apply AI to the right intervention points, you get compounding returns. The advice is simple: know where your friction is before you automate.
Three Things That Block Activation (and What to Do About Each)
Here is what I have observed across dozens of enterprise engagements. The blockers are remarkably consistent across industries, company sizes, and maturity levels. And none of them are technical. They are all organizational. That is both the bad news (you cannot solve them with a configuration change) and the good news (you can solve them with a conversation and a decision).
Blocker 1: Data Confidence
This is the most common one. Teams assume AI requires a major data transformation project before it can deliver value. They picture months of ETL work, data lake migrations, and schema normalization before they can even start.
That assumption is mostly wrong.
AI Agents in ITSM operate on the structured workflow data already in your ServiceNow instance. No external ETL pipeline is required. The data is already there. It lives in your incident records, your knowledge articles, your resolution notes, your categorization fields.
What does matter is the quality of that existing data.
Specifically two things: knowledge base hygiene and incident record completeness.
Resolution notes that say only "fixed" give the AI nothing to learn from. Stale KB articles that reference retired systems give the AI bad information to reason on. But the good news is that a focused data readiness review takes days, not months. It is a fundamentally different scale of effort than a data transformation project. You are not building new data infrastructure. You are improving the quality of data you are already capturing.
👉 Your move: Pull 50 recent incident resolutions. How many have meaningful resolution notes versus "fixed" or "resolved"? That ratio tells you exactly how ready your data is. If it is above 60%, you are in better shape than most organizations I work with. If it is below 40%, you have a clear, bounded improvement project to complete before activation.
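If you want to run that check mechanically, it is a few lines of scripting. This is a minimal sketch under stated assumptions: the sample notes are invented, and the list of low-value phrases is my own starting point, not an official threshold. The 60%/40% cutoffs are the ones described above.

```python
# Minimal sketch of the data readiness check: score a sample of
# resolution notes. Sample data and LOW_VALUE_NOTES are illustrative.

LOW_VALUE_NOTES = {"", "fixed", "resolved", "done", "closed"}

def readiness_score(resolution_notes):
    """Fraction of notes that carry more signal than a one-word close."""
    meaningful = [n for n in resolution_notes
                  if n.strip().lower() not in LOW_VALUE_NOTES]
    return len(meaningful) / len(resolution_notes)

notes = ["fixed"] * 18 + ["resolved"] * 7 + \
        ["Reimaged laptop after confirming failed SSD via diagnostics"] * 25

score = readiness_score(notes)
print(f"{score:.0%} meaningful")  # 25 of 50 -> prints "50% meaningful"
if score > 0.60:
    print("Better shape than most: proceed to activation scoping")
elif score < 0.40:
    print("Run a bounded note-quality improvement project first")
else:
    print("Borderline: tighten resolution-note practice while scoping")
```

In practice you would export the sample from your incident table instead of hard-coding it; the scoring logic stays the same.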
Blocker 2: Capacity
IT teams are stretched. Everyone knows this. And activating AI feels like another project competing for the same constrained resources.
The pattern that works is counterintuitive: go smaller than you think you should. Scope the initial activation to a single high-volume workflow with clear resolution patterns. One workflow. Not a platform-wide rollout. Not a multi-department initiative. One workflow, activated correctly, produces a measurable result within weeks. That result becomes the internal business case for the next expansion. This is how the organizations that have succeeded have actually done it. They did not try to boil the ocean. They found the smallest possible proof point and let the data make the argument for the next step.
👉 Your move: Pull your top 10 ticket categories by volume. Find the one with the highest percentage of consistent, repeatable resolutions. That is your starting point: high volume, clear patterns, fastest path to a measurable outcome.
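The selection logic above is simple enough to script. This sketch assumes you have exported tickets as (category, was-the-resolution-repeatable) pairs; the sample data and the "repeatable" flag are invented for illustration, and how you judge repeatability in your own data is a decision the script cannot make for you.

```python
# Sketch of starting-point selection: rank categories by volume, then
# pick the top-10 category with the best repeatable-resolution ratio.
# Sample data is invented for illustration.

from collections import Counter

tickets = (
    [("password reset", True)] * 110 + [("password reset", False)] * 10 +
    [("vpn access", True)] * 60 + [("vpn access", False)] * 20 +
    [("hardware failure", True)] * 10 + [("hardware failure", False)] * 50 +
    [("software install", True)] * 30 + [("software install", False)] * 10
)

volume = Counter(cat for cat, _ in tickets)
repeatable = Counter(cat for cat, ok in tickets if ok)

# Among the top categories by volume, choose the best repeatability ratio.
top = [cat for cat, _ in volume.most_common(10)]
best = max(top, key=lambda c: repeatable[c] / volume[c])
print(best, f"{repeatable[best] / volume[best]:.0%} repeatable")
# prints: password reset 92% repeatable
```

Note that the hardware category loses despite decent volume: inconsistent resolutions make it a poor first activation even when the ticket count looks attractive.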
Blocker 3: Organizational Readiness
This one is subtle and it is the one most teams skip. Activation is not a pure configuration exercise. It involves communicating to service desk staff what the AI is and is not authorized to do. I have seen this pattern repeatedly: teams that have explicit conversations about AI scope before go-live have consistently smoother rollouts than teams that treat it as a pure technical deployment.
The reason is straightforward. Service desk staff are professionals with deep domain knowledge. When a new system starts handling work that used to be theirs, they need to understand the boundaries. Not because they will resist the change, but because they need to know when to trust the AI's output and when to intervene. That clarity is what transforms skepticism into adoption. Without it, you get shadow processes where agents redo the AI's work "just to be safe," and your utilization numbers collapse.
👉 Your move: Before you activate anything, schedule a 30-minute session with your service desk leads. Cover three things: what the AI Agent will handle autonomously, what it will recommend for human approval, and what it will never touch. That single conversation prevents the majority of adoption friction we see in the field.
None of these are showstoppers. What makes them blockers is not their difficulty but the fact that they require someone to make a decision and act on it. The technology is not waiting. The organization is.
Start with What You Have Already Licensed
If you are a ServiceNow practitioner, the question worth asking this week is direct: which features have you licensed that are not yet active?
For most enterprise accounts, the answer is several: Now Assist features, AI Agent capabilities, or Virtual Agent enhancements that are included in the contract but not yet turned on. Every quarter those capabilities sit inactive is a quarter where value did not materialize. This is not an abstract cost. It is a measurable one: the delta between what your service desk is doing manually today and what it could be doing with tools you already own.
The activation path does not have to be complex. Block 45 minutes with your platform owner. Bring your ticket volume data and your licensing summary. Identify one workflow with high volume and clear resolution patterns. That is your activation starting point. From there, your ServiceNow account team and the PACE team can help scope what the first activation looks like in practice.
The technology is ready. The results are documented. The blockers are solvable. The one thing left to close is the decision.
Your Action This Week: Pick One
1️⃣ The License Audit
Open your ServiceNow entitlement summary.
List every Now Assist and AI Agent capability you have licensed.
Flag the ones that are not active.
That gap is your opportunity.
2️⃣ The Data Readiness Check
Pull 50 recent incident resolutions.
Count how many have meaningful resolution notes versus "fixed" or blank.
That percentage is your AI-readiness score for ITSM.
3️⃣ The Activation Conversation
Schedule 45 minutes with your platform owner and your top service desk lead.
Identify the single highest-volume workflow with the most consistent resolution pattern.
That is your starting point.
Whichever you choose, you will have a concrete answer by end of week.
Not a plan to make a plan, but an actual data point to act on.
This article is part of a series from the AI Center of Excellence (CoE) team at ServiceNow. We work directly with enterprise customers to close the gap between AI licensing and realized value. If your team is navigating activation, reach out or drop a comment below with your biggest blocker. We read every one.
Found this useful? Hit the "Helpful" button so others find it too. 👍