
You have been there

The post-incident review is two hours in. Someone asks why the implementation diverged from the approved design. The architect pulls up what passes for the CAB record: a demand, an email thread, a Teams message nobody saved.

I've been in that room more times than I'd like. Governance doesn't fail because teams don't care — it fails because the process depends on tribal knowledge, sequential reviews, and manual checks applied under pressure. By the time something reaches production, the original intent has been eroded by undocumented decisions, and the architects who understood them have moved on.

AI can make this structural rather than personal. But only if the foundations are right, and most organizations skip them.

 

You could change this picture. But there is some pre-work 🙂

Three prerequisites. Skip them and AI amplifies problems instead of solving them — this section matters more than anything that follows.

 

Knowledge Management must be curated. RAG is only as good as what it retrieves. Most ServiceNow KM implementations are graveyards — articles written once, never reviewed, never retired. In my experience it's the most underestimated blocker. Audit, tag, and maintain that content before connecting a Retriever. It's not optional.
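If you're not sure where that audit starts, a minimal background-script sketch like the one below surfaces candidates for review or retirement. The kb_knowledge table and its workflow_state and sys_updated_on fields are standard; the 18-month threshold is an assumption to tune for your content lifecycle.

```javascript
// Flag published KB articles untouched for roughly 18 months (assumed threshold).
var staleBefore = new GlideDateTime();
staleBefore.addDaysUTC(-548);

var kb = new GlideRecord('kb_knowledge');
kb.addQuery('workflow_state', 'published');
kb.addQuery('sys_updated_on', '<', staleBefore);
kb.query();
while (kb.next()) {
    gs.info('Stale candidate: ' + kb.getValue('number') + ' | ' + kb.getValue('short_description'));
}
```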

 

SPM must be mature enough to carry metadata. Risk classification, dependency mapping, platform impact scope: none of this exists unless SPM has been configured to capture it and the business has been trained to populate it. If SPM is a ticket queue today, AI surfaces that data-quality problem faster; it won't fix it. For domain-separated or multi-instance environments, also factor in NASK skill scope: a skill in one application scope can't access governance records in another without cross-scope configuration.
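As a rough maturity check before you connect anything, a sketch along these lines counts how many active demands actually carry the metadata. dmn_demand is the standard SPM Demand table; the two fields tested are hypothetical stand-ins for whatever your configuration captures.

```javascript
// Count active demands missing governance metadata.
// u_risk_classification and u_platform_impact are assumed custom fields.
var total = 0;
var missing = 0;
var demand = new GlideRecord('dmn_demand');
demand.addActiveQuery();
demand.query();
while (demand.next()) {
    total++;
    if (!demand.getValue('u_risk_classification') || !demand.getValue('u_platform_impact')) {
        missing++;
    }
}
gs.info(missing + ' of ' + total + ' active demands are missing governance metadata');
```

If that ratio is high, fix the intake process before building the skill layer.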

 

NASK skills must be built by someone who understands governance and prompt engineering. A poorly constructed skill appears to work — surfaces plausible constraints, passes review — while quietly hallucinating policy references and building false confidence in structurally unsound requirements. Use NASK's built-in prompt evaluation tooling to test against known scenarios before deploying. Treat every AI suggestion as a draft requiring expert verification, not an output requiring approval.
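What "test against known scenarios" can look like: a golden set of inputs paired with the citations and constraints a sound skill must produce, plus a case where it should stay quiet. The shape below is illustrative, not the NASK evaluation API, and coversExpectedConstraints is a hypothetical helper.

```javascript
// Golden set: pair each scenario with what a trustworthy skill must cite and flag.
var goldenSet = [
    {
        input: 'Demand: expose an unauthenticated REST endpoint for partner data sync',
        mustCite: ['Integration Security Policy'],   // must ground in a real KM article
        mustFlag: ['authentication', 'data egress']  // constraints it must surface
    },
    {
        input: 'Demand: cosmetic label change on an existing form',
        mustCite: [], // a sound skill should NOT invent policy references here
        mustFlag: []
    }
];

// Hypothetical check: every expected constraint appears in the skill's draft.
function coversExpectedConstraints(skillOutput, scenario) {
    return scenario.mustFlag.every(function (term) {
        return skillOutput.toLowerCase().indexOf(term) > -1;
    });
}
```

The second scenario matters as much as the first: a skill that cites policy for a label change is hallucinating confidence.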

 

With those foundations in place, three capabilities address where governance breaks down.

 

At intake, NASK lets you build custom governance skills for the first pass on every demand — surfacing gaps, suggesting constraint clauses, drafting acceptance criteria before a human opens the record. A Retriever grounds suggestions in your curated KM content at runtime. Fewer clarification cycles for the architect, a structured demand on arrival for the consultant. One boundary worth knowing: the MCP Server exposes Now Assist Skills and NASK skills as tools — Flows, Script Includes, and REST APIs aren't yet first-class MCP tools. If your governance logic lives in a flow, wrap it in a NASK skill first.
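To make "a structured demand on arrival" concrete, here is a sketch of the kind of Script Include a skill's pre-processing could call to assemble prompt context. The class name and the dmn_demand fields read are illustrative assumptions.

```javascript
// Assembles structured demand context for a governance skill's prompt,
// consumed alongside whatever the Retriever grounds from curated KM.
var DemandGovernanceContext = Class.create();
DemandGovernanceContext.prototype = {
    initialize: function () {},

    buildContext: function (demandSysId) {
        var gr = new GlideRecord('dmn_demand');
        if (!gr.get(demandSysId)) {
            return null;
        }
        return JSON.stringify({
            short_description: gr.getValue('short_description'),
            business_case: gr.getValue('business_case'),
            category: gr.getDisplayValue('category'),
            submitted_by: gr.getDisplayValue('submitted_by')
        });
    },

    type: 'DemandGovernanceContext'
};
```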

 

At acceptance, the native MCP Server — included in every Now Assist and AI Native SKU — lets architects and consultants in Claude or Cursor query governance policies, check for conflicts, and push validated records to SPM without switching tools. A2A (Zurich Patch 4+, Now Assist AI Agents 6.0.x) lets a governance agent coordinate with external agents in parallel before sign-off. Two honest caveats: A2A is at v0.3 and external agents must be built, not just connected; and the gate only protects you if the reviewer can catch hallucinated references, missed scope boundaries, and false-negative guardrail checks.
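For orientation, a client-side configuration for that workflow might look like the sketch below. Treat it as an assumption-laden example: the shape follows the common Claude Desktop mcpServers format, mcp-remote is one way to bridge a remote MCP server into a stdio client, and the instance URL path is a placeholder rather than the documented ServiceNow endpoint.

```json
{
  "mcpServers": {
    "servicenow-governance": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<instance>.service-now.com/mcp"]
    }
  }
}
```

Authentication setup and the exact endpoint depend on your instance and client versions; check the current documentation before relying on this shape.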

 

At build, Now Assist for Creator keeps governance context live in App Engine Studio, flagging anti-patterns and generating test scripts from the demand metadata captured upstream. GRC maps controls to demands at intake and enriches CAB submissions with AI-generated risk context; which GRC module applies and how the integration is designed depend on your instance configuration, so don't assume it's plug-and-play. Drift gets caught in the moment, not at audit.
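As one example of what mapping controls to demands could look like under the hood, the sketch below writes a simple relationship record. sn_compliance_control is the standard Policy and Compliance table; u_demand_control is an assumed custom m2m table, since the out-of-box linkage depends on which GRC modules you run.

```javascript
// Associate a GRC control with a demand at intake so downstream CAB
// submissions inherit its risk context. u_demand_control is assumed custom.
function linkControlToDemand(demandSysId, controlSysId) {
    var control = new GlideRecord('sn_compliance_control');
    if (!control.get(controlSysId)) {
        return; // unknown control, nothing to link
    }
    var link = new GlideRecord('u_demand_control');
    link.initialize();
    link.setValue('u_demand', demandSysId);
    link.setValue('u_control', controlSysId);
    link.insert();
}
```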

Two principles span all three phases. Treat every AI-generated governance output as a draft — LLMs don't signal uncertainty, so a skill retrieving a partially relevant policy generates a confident constraint clause with no indication it's guessing. Active skepticism at every review gate is the architecture. And NASK skills need a named owner; they become stale as the platform evolves, and stale AI guidance is documentation rot with more confidence behind it.


| Solution | Persona | Outcome |
|---|---|---|
| NASK | Platform Architect | Reclaims hours spent on intake review |
| NASK | Technical Consultant | Arrives with a structured demand, not a blank page |
| RAG / Retriever | Platform Architect | Fewer clarification cycles, less rework from missed constraints |
| SPM | Platform Owner | Traceable decision record, no context lost across handoffs |
| MCP Server | Architect / Consultant | Full governance context in working tool, no switching |
| A2A protocol | Platform Architect | Acceptance pre-validated across multiple dimensions |
| Guardrail checks | Technical Consultant | Remediation context on violations, faster path to approval |
| Now Assist for Creator | Technical Consultant | Drift caught in the moment, not at audit |
| Now Assist for Creator | Platform Owner | What goes live matches what was approved |
| GRC integration | Platform Owner | Fewer emergency changes, traceable audit trail |

Where to start:

Platform Architects — start with one NASK skill for the demand type causing the most rework. Test it using prompt evaluation before deploying and assign a named owner from day one. One well-governed skill beats ten ungoverned ones.

 

Technical Consultants — connect Claude or Cursor via the MCP Server. Pull the last ten demand records and identify the governance gaps that reached SPM unchallenged; that's your skill backlog. Audit KM content before connecting the Retriever, or the grounded suggestions will mislead more than they help.
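If you want the manual equivalent of that exercise, a background script along these lines lists the last ten demands and flags the empty fields. The specific fields checked are assumptions to swap for your own.

```javascript
// List the ten most recent demands and flag missing governance fields.
var gr = new GlideRecord('dmn_demand');
gr.orderByDesc('sys_created_on');
gr.setLimit(10);
gr.query();
while (gr.next()) {
    var gaps = [];
    if (!gr.getValue('business_case')) gaps.push('business_case');
    if (!gr.getValue('u_risk_classification')) gaps.push('u_risk_classification'); // assumed custom field
    gs.info(gr.getValue('number') + (gaps.length ? ' | gaps: ' + gaps.join(', ') : ' | complete'));
}
```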

 

Platform Owners — assess SPM maturity before anything else. If demand records don't carry structured metadata today, that's the first investment, not the AI layer. The AI amplifies what's already there — make sure it's worth amplifying.

 

The platforms that get this right build something genuinely compounding. The ones that skip the foundations — or treat AI output as fact — end up automating their governance debt, which is harder to unpick than the original problem.

 

 

Disclaimer: All views expressed in this post are my own and reflect my understanding of ServiceNow Technical Governance & AI Solutions.