GouthamAnumolu
ServiceNow Employee

Overview

 

The Optimize GRC Issue Resolution use case works out of the box, but every organization has different data volumes, resolution patterns, and process expectations. This article covers every supported customization — from tuning which past issues are used as reference data, to adjusting how the action plan and remediation tasks are generated, and modifying agent behavior and messaging.

 

The use case, its two agents, and their skills are accessed from:

  • Use case: All → Now Assist AI Agents → AI Use Cases
  • Agents: All → Now Assist AI Agents → AI Agents
  • Skills: All → Now Assist Skill Kit → Skill
  • Indexed Sources: All → Indexed Sources

Quick Reference

| What you want to change | How |
| --- | --- |
| Which resolved issues are used as reference data | Edit the Issue closed filter condition in the GRC issues indexed source |
| How similar a past issue must be to be included | Edit `SCORE_THRESHOLD` in the GetSimilarRecords tool of the Issue Action Plan Generation skill |
| How many past issues are retrieved for comparison | Edit `MATCH_COUNT` in the GetSimilarRecords tool of the Issue Action Plan Generation skill |
| Agent messages, flow behavior, and display format | Edit the instructions on the Issue Action Plan AI Agent and the Remediation Tasks AI Agent |


1. Adjusting Which Issues Are Used as Reference Data

Action plan generation draws only on closed issues: by default, issues in the Closed Complete or Closed Incomplete state. You can change this filter to expand or narrow the pool of reference issues.

Navigate to

All → Indexed Sources → search for GRC issues → open → find the Issue closed filter condition → edit.

Current filter

state = 3 (Closed Complete) OR state = 4 (Closed Incomplete)

What you can safely change

  • Add additional closed states if your organization uses custom issue states
  • Add further conditions to narrow the pool — for example, restrict to issues from a specific category or assigned group

Tradeoff: A broader filter increases the pool of reference issues and can improve plan quality, but only if those issues have meaningful action plans and remediation tasks. Issues without action plans or linked tasks are automatically excluded at runtime even if they pass the filter, so adding more states with poor data quality will not help. A narrower filter may leave too few relevant matches for sparse datasets.
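As an illustration, the default condition corresponds to an encoded query along the lines of `state=3^ORstate=4`. A narrowed version might restrict the pool to a single issue category — for example (the `category` field and the `audit` value are placeholders, not fields guaranteed to exist on your issue table; use whatever exists in your instance):

```
stateIN3,4^category=audit
```

Here `stateIN3,4` keeps both closed states in one condition, so the appended category condition applies to all of them.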

2. Adjusting the Similarity Threshold

The similarity threshold controls how closely a past issue must match the current one before it is included in the comparison set. The default is 80%. A past issue that scores below this threshold is excluded regardless of how many results are available.

Navigate to

All → Now Assist Skill Kit → Skill → search for Issue Action Plan Generation → open → Tools → open GetSimilarRecords tool → edit the script.

Example snippet

var SCORE_THRESHOLD = 0.80;  // default — lower to include more results, raise for stricter matching

| Value | Effect |
| --- | --- |
| 0.70 | More past issues included; useful when data is sparse, but may introduce loosely related patterns |
| 0.80 (default) | Balanced for most environments |
| 0.90 | Only very closely matched issues included; higher precision, but may return nothing on new issue types |

Tradeoff: Lowering the threshold increases the number of reference issues but risks introducing irrelevant patterns into the generated plan. Raising it improves precision but may result in no plan being generated if your historical data is limited. Test against real issues in your instance before deploying.
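To make the effect concrete, the filtering step inside the tool script behaves conceptually like the sketch below. This is a simplified illustration, not the shipped script: the `results` array and its `score` property stand in for whatever the retrieval step actually returns.

```javascript
var SCORE_THRESHOLD = 0.80;

// Illustrative retrieval results with similarity scores
var results = [
    { number: 'ISS0001001', score: 0.92 },
    { number: 'ISS0001002', score: 0.81 },
    { number: 'ISS0001003', score: 0.74 }  // below threshold: excluded
];

// Keep only issues at or above the threshold, regardless of how many were retrieved
var similarIssues = results.filter(function (r) {
    return r.score >= SCORE_THRESHOLD;
});

// similarIssues now contains ISS0001001 and ISS0001002 only
```

Lowering `SCORE_THRESHOLD` to 0.70 in this example would let the third issue through as well.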

3. Adjusting the Number of Past Issues Retrieved

By default, up to 20 past issues are retrieved and passed to the AI model for synthesis. All retrieved issues must also meet the similarity threshold — so the actual number sent to the model may be lower.

Navigate to

All → Now Assist Skill Kit → Skill → search for Issue Action Plan Generation → open → Tools → open GetSimilarRecords tool → edit the script.

Example snippet

var MATCH_COUNT = 20;  // default — increase for richer synthesis, decrease to reduce latency

Tradeoff: A higher count gives the AI model more patterns to synthesize from, which can improve plan quality when your historical data is rich and varied. However, each retrieved issue includes its full action plan and all linked remediation tasks, so a higher count increases the size of the prompt sent to the model and can slow response time. Keep the value within a reasonable range for your data volume.
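Conceptually, `MATCH_COUNT` caps the candidate set after the similarity threshold has been applied. The sketch below is illustrative only; the `candidates` array and its fields are placeholders for what the real retrieval step returns.

```javascript
var MATCH_COUNT = 20;
var SCORE_THRESHOLD = 0.80;

// Illustrative candidate list: 25 retrieved issues, all comfortably above the threshold
var candidates = [];
for (var i = 0; i < 25; i++) {
    candidates.push({ number: 'ISS' + (1000 + i), score: 0.95 - i * 0.001 });
}

// The threshold is applied first; the survivors are then capped at MATCH_COUNT
// before being packed into the prompt sent to the model.
var selected = candidates
    .filter(function (c) { return c.score >= SCORE_THRESHOLD; })
    .slice(0, MATCH_COUNT);

// selected holds 20 issues even though 25 passed the threshold
```

Raising `MATCH_COUNT` above the number of issues that clear the threshold has no effect, which is why the actual count sent to the model may be lower than the setting.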

4. Editing Agent Instructions

Each agent has its own instruction set that controls its conversational behavior — the messages it displays, how it presents options to the user, and how it handles each user choice. These are separate from the skill prompts and can be updated independently.

Issue Action Plan AI Agent

Navigate to:
All → Now Assist AI Agents → AI Agents → search for Issue action plan AI agent → open → edit the Instructions field.

| Part | What you can change |
| --- | --- |
| Accept / Edit / Dismiss prompt | The wording of the options presented to the user after the action plan is shown |
| Acceptance message | What the agent says when the user accepts, including the note about the plan being saved to the issue record |
| Dismissal message | What the agent says when the user dismisses; currently tells the user to populate the plan manually |
| Edit refinement behavior | How the agent asks for and incorporates feedback when the user chooses to edit |
| Error messages | What the agent says when the issue record cannot be found or the plan cannot be generated |

Remediation Tasks AI Agent

Navigate to:
All → Now Assist AI Agents → AI Agents → search for Remediation tasks AI agent → open → edit the Instructions field.

| Part | What you can change |
| --- | --- |
| Accept / Edit / Dismiss prompt | The wording of the options presented after task suggestions are shown |
| Task display format | How suggested tasks are displayed when the user edits; currently uses a Name / Description format |
| Acceptance message | The confirmation message after tasks are created; currently tells the user to assign owners manually |
| Dismissal message | What the agent says when the user dismisses; currently tells the user to create tasks manually from the issue page |
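For example, an organization that wants a more specific dismissal message for the Remediation Tasks AI Agent might replace the relevant instruction line with wording like the following. This is illustrative text only, not the shipped instruction:

```
If the user dismisses the suggested tasks, respond: "No tasks were created.
You can create remediation tasks manually from the issue page, or restart
this conversation to generate a new set of suggestions."
```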

Tradeoff: The LLM is sensitive to instruction wording. Changes to how options are presented or how feedback is collected can affect agent behavior in non-obvious ways. Always test with a range of real issue scenarios before saving changes to production.

Tradeoffs at a Glance

| Customization | Benefit | Risk |
| --- | --- | --- |
| Broaden issue index filter | More reference data available for plan generation | Issues without action plans are excluded at runtime; adding low-quality data doesn't help |
| Lower similarity threshold | More past issues included; helpful on sparse datasets | Loosely related patterns may dilute the generated plan |
| Raise similarity threshold | Only closely matched issues inform the plan | May return no results for new or uncommon issue types |
| Increase match count | Richer synthesis from more historical context | Larger prompt sent to LLM; slower response time |
| Edit agent instructions | Tailor messages and flow to your org's tone and process | LLM is sensitive to wording; test thoroughly before deploying |


⚠ Important: Testing and Upgrade Compatibility

All customizations described in this article modify components that are part of the out-of-the-box use case delivery. Before deploying any change to production, test thoroughly in a sub-production instance using a representative set of real issue records — including issues with rich action plans, sparse action plans, and no historical matches.

When ServiceNow ships upgrades to this use case — including changes to skill prompts, agent instructions, datasource scripts, or indexed source configuration — those updates will reflect the out-of-the-box defaults. Any customizations you have made will need to be reviewed, reconciled, and re-applied against the upgraded version. This includes:

  • Filter changes on the GRC issues indexed source
  • Threshold and match count changes in the GetSimilarRecords tool script
  • Instruction changes in either agent

Maintain clear documentation of every customization your organization applies, including the original out-of-the-box value and the reason for the change, to make upgrade reconciliation straightforward.
