
Harsimrat Kohli
ServiceNow Employee


 



The Problem It Solves

In most GRC programs, risk managers identify risks by having managers manually search risk libraries, consult business owners, and review external sources periodically — a slow, inconsistent, and expertise-dependent process.

The Risk Suggestion AI Agent (part of the com.sn_grc_sharegenai plugin) automates this end-to-end. Give it an entity's sys_id and it does the rest: it retrieves entity details, asks clarifying questions to fill in context, pulls relevant risks from three distinct intelligence sources, deduplicates them against what already exists, collects your feedback, and creates new triage risk records — all in a single conversational session.

[Flow diagram]

 


Three Sources of Intelligence

What makes this agent different from a simple risk library search is that it triangulates across three independent sources, each tagged so you always know where a suggestion came from:

 

Source | Mechanism | Category Tag
Internal risk library | Semantic search over sn_risk_definition (leaf-level risk statements only) | Internal
LLM reasoning | NowAssist Skill using Azure OpenAI / AWS Claude / Google Gemini / NowLLM | Model
External public web | Real-time web search + LLM structured extraction | External

 

This triangulation catches risks your library has, risks your library should have (inferred by AI), and risks emerging in the world right now (from public sources).


The 13-Step Workflow

The agent runs as a ReAct (Reasoning + Acting) agent on the nap_and_va channel. Here is what happens from the moment you invoke it:

  1. Entity retrieval — Fetches name, description, class, location, department, and existing linked/triage risks from sn_grc_profile.
  2. Risk focus — Asks you to prioritize domains (IT, ESG, third-party, compliance, operational, etc.).
  3. Additional context — After you select the risk focus, the agent evaluates whether your response, combined with the entity details, is enough to determine risk category, severity, scope, regulatory exposure, and likelihood. If a gap remains, it asks one targeted follow-up question, then proceeds. By design, the agent asks at most two questions before showing results: enough to improve signal quality without over-questioning you.
  4. Internal search — Runs a semantic search against sn_risk_definition with a similarity threshold of 0.7, returning up to 10 leaf-level risk statements. Already-linked risks are excluded via encoded query.

    Similarity threshold: a score between 0 and 1 that controls how closely a result must match the query before it is returned. A threshold of 0.7 means only results with at least 70% semantic similarity to the query are included.

  5. LLM inference — A NowAssist Skill sends entity context and the internal results to the LLM, which reasons about additional risks from industry knowledge.
  6. Display — Both Internal and Model risks are shown to you together.
  7. External search permission — The agent asks before going to external sources (opt-in).
  8. External extraction — If approved, raw web content is passed to an LLM with a strict extraction prompt that returns 3–10 structured risks. No hallucination — only what the source material explicitly states.
  9. Deduplication & categorization — All risks (Internal + Model + External) are semantically compared against sn_risk_risk and sn_risk_risk_triage at a threshold of 0.8. Risks missing a grc_category get one assigned via a semantic match against sn_grc_choice.
  10. Consolidated display — Only the unique, categorized risks are shown.
  11. Your feedback — Rename, edit descriptions, or remove any suggestions. The loop continues until you approve.
  12. Triage creation — Approved risks are written to sn_risk_risk_triage via GlideRecord INSERT. A JSON repair algorithm handles any malformed LLM output automatically.
  13. Summary — The agent reports the total count of triage records created.
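Laid end to end, the steps above can be sketched as one pipeline. Every helper name on `agent` below is hypothetical; the real agent is a ReAct loop that chooses each tool call from the previous result rather than running a fixed script:

```javascript
// Conceptual sketch of the 13 steps (hypothetical helper names).
function suggestRisks(entitySysId, agent) {
  const entity   = agent.getEntityInfo(entitySysId);        // 1. Entity retrieval
  const focus    = agent.askRiskFocus();                    // 2. Risk focus
  const context  = agent.gatherContext(entity, focus);      // 3. Clarifying questions
  const internal = agent.searchInternal(context, 0.7);      // 4. Internal search
  const model    = agent.suggestViaLlm(context, internal);  // 5. LLM inference
  agent.display([...internal, ...model]);                   // 6. Display
  let external = [];
  if (agent.confirmExternalSearch()) {                      // 7. Opt-in prompt
    const raw = agent.searchExternal(context);              //    AIA Web Search
    external  = agent.transformToJson(raw);                 // 8. Structured extraction
  }
  let risks = agent.dedupeAndCategorize(                    // 9. Dedup at 0.8
    [...internal, ...model, ...external], 0.8);
  agent.display(risks);                                     // 10. Consolidated display
  risks = agent.collectFeedback(risks);                     // 11. Feedback loop
  const created = agent.createTriageRecords(risks);         // 12. Triage creation
  return { createdCount: created.length };                  // 13. Summary
}
```

The point of the sketch is the data flow: each step's output feeds the next, which is why the agent can reason between tool calls instead of batching them.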

Under the Hood: Key Tools

 

Tool 1 — Get Entity Information

A GlideRecord script that queries sn_grc_profile by entity_sys_id and returns entity attributes, linked risks, triage risks, and available GRC categories as a structured JSON object. This object seeds every subsequent tool call.
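As an illustration only, Tool 1's output might look like the object below; the field names and values are assumptions inferred from this description, not the plugin's actual schema:

```javascript
// Illustrative Tool 1 output (field names are assumptions, values are fake).
const entityContext = {
  entity: {
    sys_id: "0f3d9a1c2b4e6d8f0a1b2c3d4e5f6071",  // fake sys_id for illustration
    name: "Payments Platform",
    description: "Customer-facing payment processing service",
    entity_class: "Business Service",
    location: "EU West",
    department: "Finance IT",
  },
  linked_risks: [
    { sys_id: "aaaa1111bbbb2222cccc3333dddd4444",
      name: "Unpatched critical vulnerabilities" },
  ],
  triage_risks: [],
  grc_categories: ["IT", "Operational", "Compliance", "ESG", "Third-party"],
};
```

Because this one object seeds every later tool call, everything downstream (retrieval queries, LLM prompts, dedup comparisons) stays anchored to the same entity context.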

Tool 2 — Search Relevant Risk Statements

Uses the AIS RAG Retrieval API with the sn_grc_sharegenai_leaf_risk_statements search profile. The E5FT embedding model converts entity context into a vector and finds the 10 closest leaf-level risk statements at a similarity threshold of 0.7. Results carry the sys_id of the matched sn_risk_definition record — so the link back to your library is preserved all the way into the triage record.
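A minimal sketch of that retrieval step, assuming toy 3-d vectors in place of real E5FT embeddings and an in-memory set standing in for the encoded-query exclusion:

```javascript
// Sketch: score candidates against the entity-context embedding, drop
// already-linked risks, keep the top 10 above the 0.7 threshold.
function cosineSim(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieveRiskStatements(queryVec, candidates, linkedSysIds, threshold = 0.7) {
  const linked = new Set(linkedSysIds);  // mirrors the encoded-query exclusion
  return candidates
    .filter(c => !linked.has(c.sys_id))
    .map(c => ({ sys_id: c.sys_id, name: c.name, score: cosineSim(queryVec, c.vector) }))
    .filter(c => c.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, 10);
}
```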

Tool 3 — Suggest Risks via LLM

Invokes a NowAssist Skill capability backed by the sys_one_extend framework. The capability sends entity context plus internal results to one of the LLMs enabled at the instance level (Azure OpenAI primary, or AWS Claude, Google Gemini, NowLLM) and receives additional risks that your library may not cover. These carry category = "Model" and a null grc_category until Step 9 enriches them.

Tool 4 — Search External Risks (AIA Web Search)

This is the platform's built-in AIA Web Search tool (sys_id: 576cf8eaffd922101dbaffffffffff66), aliased in the agent as "Search external risks". It is not a custom script — it wraps web search and page scraping and returns a synthesized prose answer to a search query.

The agent calls it with entity-specific queries (entity name + risk domain + context) to retrieve real-time public intelligence. The output is raw prose, not structured data, which is why it is always immediately followed by Tool 5.

Two-step external risk pipeline:

Tool 4: AIA Web Search       query + entity context  →  raw prose
Tool 5: Transform to JSON    raw prose  →  structured risk array

 

Tool 5 — Transform External Risks to JSON

A NowAssist Skill that takes the raw data from Tool 4 and uses an LLM to extract 3–10 distinct, structured risk objects. The prompt explicitly prohibits the model from generalizing, merging, or inventing risks beyond what the source says, with temperature = 0.2.

Temperature: a value between 0 and 1 that controls how predictable the LLM's output is. All configs here use 0.2, low enough for consistent, structured JSON output, with a small allowance for natural phrasing variation.

 

Tool 6 — De-duplicate and Categorize

The most technically interesting tool. It runs the combined risk array through two AIS search profiles simultaneously:

  • sn_grc_sharegenai_risks — compares against sn_risk_risk (linked risks)
  • sn_grc_sharegenai_triage_risks — compares against sn_risk_risk_triage (existing triage risks)

Similarity threshold is 0.8 — deliberately higher than retrieval (0.7) to avoid false-positive deduplication. For risks without a category, it semantically matches sn_grc_choice records to assign the best-fit GRC category.
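A sketch of the dedupe-then-categorize pass. Jaccard word overlap stands in for the real E5FT semantic comparison, and a fixed fallback category stands in for the semantic match against sn_grc_choice:

```javascript
// Jaccard word overlap: a crude stand-in for semantic similarity.
function jaccard(a, b) {
  const A = new Set(a.toLowerCase().split(/\s+/));
  const B = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...A].filter(w => B.has(w)).length;
  const union = new Set([...A, ...B]).size;
  return union === 0 ? 0 : inter / union;
}

function dedupeAndCategorize(suggested, existingNames, fallbackCategorySysId, threshold = 0.8) {
  return suggested
    // drop anything too similar to a linked or triage risk
    .filter(s => !existingNames.some(e => jaccard(s.risk_name, e) >= threshold))
    // assign a category where one is missing (the real tool semantically
    // matches sn_grc_choice; a fixed fallback stands in here)
    .map(s => ({ ...s, grc_category: s.grc_category ?? fallbackCategorySysId }));
}
```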

Tool 7 — Create Triage Risk Records

GlideRecord INSERT into sn_risk_risk_triage. Before parsing, a JSON repair algorithm runs these steps in sequence to recover from truncated or malformed output:

  1. Strip markdown code fences
  2. Extract the first JSON block from the surrounding prose
  3. Remove trailing commas before ] or }
  4. Close unclosed brackets
  5. Filter records missing risk_name or risk_description

This makes the triage step robust even when an LLM response is truncated or slightly malformed.
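One plausible implementation of those five repair steps (a sketch, not the plugin's actual code; the bracket counting is naive and ignores brackets inside quoted strings):

```javascript
// Repair truncated or slightly malformed LLM JSON, then filter records.
function repairLlmJson(raw) {
  let s = raw;
  // 1. Strip markdown code fences
  s = s.replace(/`{3}(?:json)?/g, "");
  // 2. Extract the first JSON block from the surrounding prose
  const start = s.search(/[\[{]/);
  if (start === -1) return [];
  s = s.slice(start);
  // 3. Remove trailing commas before ] or } (and a dangling comma at the
  //    very end of truncated output)
  s = s.replace(/,\s*([\]}])/g, "$1").replace(/,\s*$/, "");
  // 4. Close unclosed brackets
  const opens = [];
  for (const ch of s) {
    if (ch === "{" || ch === "[") opens.push(ch);
    else if (ch === "}" || ch === "]") opens.pop();
  }
  while (opens.length) s += opens.pop() === "{" ? "}" : "]";
  let records;
  try { records = JSON.parse(s); } catch (err) { return []; }
  if (!Array.isArray(records)) records = [records];
  // 5. Filter records missing risk_name or risk_description
  return records.filter(r => r && r.risk_name && r.risk_description);
}
```

Note that step 5 doubles as a quality gate: even a successfully parsed record is dropped if truncation cut off a required field.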


Semantic Search Architecture

The agent maintains four AIS search profiles, all using the E5FT embedding model with the Words semantic index:

Profile | Target Table | Used For | Threshold
leaf_risk_statements | sn_risk_definition | Retrieve internal risks | 0.7
risks | sn_risk_risk | Dedup vs linked risks | 0.8
triage_risks | sn_risk_risk_triage | Dedup vs triage risks | 0.8
grc_categories | sn_grc_choice | Auto-assign categories

 

The 0.1 gap between retrieval (0.7) and deduplication (0.8) thresholds is deliberate: cast a wider net when finding new risks, be more conservative when deciding something is already covered.


The Risk Object Schema

Every risk flows through the entire pipeline in the same JSON shape:

{
    "risk_name":        "string — max 30 characters",
    "risk_description": "string — max 100 characters",
    "risk_statement":   "sys_id from sn_risk_definition, or null",
    "category":         "Internal | Model | External",
    "grc_category":     "sys_id from sn_grc_choice, or null"
}

risk_statement is only populated for Internal risks — it's the direct link back to your risk library. grc_category starts null for Model and External risks and is assigned during Step 9.
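A small validator for this shape (hypothetical helper, not part of the plugin) makes the constraints concrete:

```javascript
// Validate a risk object against the pipeline schema; returns error strings.
const CATEGORIES = ["Internal", "Model", "External"];

function validateRisk(r) {
  const errors = [];
  if (!r.risk_name || r.risk_name.length > 30)
    errors.push("risk_name missing or over 30 characters");
  if (!r.risk_description || r.risk_description.length > 100)
    errors.push("risk_description missing or over 100 characters");
  if (!CATEGORIES.includes(r.category))
    errors.push("category must be Internal, Model, or External");
  if (r.category !== "Internal" && r.risk_statement != null)
    errors.push("risk_statement should be null for non-Internal risks");
  return errors;
}
```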


What Gets Created

At the end of the workflow, each approved risk becomes a record in sn_risk_risk_triage with these field mappings:

Risk Object Field | Triage Table Field
risk_name | short_description
risk_description | description
risk_statement | risk_statement (ref to sn_risk_definition)
grc_category | category (ref to sn_grc_choice)
(implicit) | profile = entity_sys_id
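The mapping can be sketched as a plain function (hypothetical helper; the real tool performs the GlideRecord INSERT server-side):

```javascript
// Map a pipeline risk object onto sn_risk_risk_triage field names.
function toTriageRecord(risk, entitySysId) {
  return {
    short_description: risk.risk_name,
    description:       risk.risk_description,
    risk_statement:    risk.risk_statement,  // reference to sn_risk_definition
    category:          risk.grc_category,    // reference to sn_grc_choice
    profile:           entitySysId,          // the implicit mapping
  };
}
```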

Key Takeaways for Builders

  • ReAct strategy means the agent reasons step-by-step before acting — it doesn't call all tools at once. Each tool result informs the next decision.
  • Threshold design matters: the 0.7 / 0.8 split between retrieval and dedup is a deliberate trade-off between recall and precision.
  • JSON repair is production-necessary: LLM outputs at higher token counts will occasionally truncate. The repair pipeline is what makes triage creation reliable at scale.
  • The external source step is opt-in: the agent always asks before reaching out to public sources, respecting organizational policies around external data usage.