Overview
The Report a GRC Issue agent is ready to use out of the box, but most organizations have unique terminology, governance structures, and data models. This article covers every supported customization — from updating a definition in a table to adjusting search behavior and mapping custom fields to the issue record.
Quick Reference
| What you want to change | How |
|---|---|
| GRC term and issue definitions | Update the Definition field on `sn_grc_context_definition` records |
| Conversational behavior (messages, retries, displayed fields) | Edit agent instructions |
| Fields extracted from the description | Edit the GRC Issue Validator skill prompt |
| Additional fields mapped to the issue record | Update skill prompt + agent instructions + tool script + script include |
| Similarity threshold | Override `documentMatchThreshold` in the relevant fetch methods |
| Number of suggestions | Override the `limit` in the relevant fetch methods |
| New object type in suggestions | Add fetch method + search profile + indexed source + surface in agent |
1. Updating the Issue Definition
The agent reads your organization's GRC term definitions at runtime from the sn_grc_context_definition table before every call to the AI model. The issue definition directly controls whether a user's description is accepted as a valid GRC issue.
Navigate to
Filter Navigator → type sn_grc_context_definition.list
Five terms are available to update:
| Term | What It Influences |
|---|---|
| Issue | Whether the description qualifies as a GRC issue |
| Entity | How the agent identifies affected entities in the description |
| Control | Provides context to help the LLM accurately classify control-related issue types |
| Policy | Provides context to help the LLM accurately classify policy-related issue types |
| Regulation | Provides context to help the LLM accurately classify regulation-related issue types |
Update the Definition field on the relevant record. No code changes are needed — the updated definition is injected into the AI prompt automatically on the next submission.
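To make the injection concrete, here is a minimal sketch of how a runtime-read definition could be interpolated into a validation prompt. The function and parameter names (`buildValidationPrompt`, `definitions`) are illustrative assumptions, not the platform's actual implementation:

```javascript
// Hypothetical sketch: compose organization-specific definitions into
// the prompt sent to the AI model. Names are illustrative only.
function buildValidationPrompt(description, definitions) {
  // Render each term as "term: definition", one per line
  var definitionBlock = Object.keys(definitions)
    .map(function (term) {
      return term + ': ' + definitions[term];
    })
    .join('\n');

  return 'Use these organization-specific definitions:\n' +
    definitionBlock +
    '\n\nDecide whether the following description is a valid GRC issue:\n' +
    description;
}

// Example: a narrower issue definition changes what the model accepts
var prompt = buildValidationPrompt(
  'Vendor X missed its quarterly access review.',
  { issue: 'Any event that violates a control, policy, or regulation.' }
);
```

Because the definition text is injected verbatim, tightening or loosening the Definition field directly shifts the model's accept/reject boundary without any code change.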
2. Editing Agent Instructions
The agent's conversational behavior — its messages, retry logic, and the fields it displays to the user — is driven by its instruction set.
Navigate to
All → Now Assist AI Agents → AI Agents → search for Report a GRC issue → open → edit the Instructions field.
The parts you can safely change:
| Part | What to Change | Example |
|---|---|---|
| Welcome message | Opening text shown to the user | Add org-specific tone or guidance |
| Retry logic | How many times the agent asks for clarification if input doesn't qualify | Change from 1 retry to 2 |
| Display template | Which fields appear in the structured summary | Hide fields not relevant to your org |
| Edit behavior | How the agent presents suggestions when a user edits a field | Change suggestion phrasing |
| Cancellation message | Text shown when a user cancels | Point to an alternate intake channel |
3. Editing the GRC Issue Validator Skill Prompt
The GRC Issue Validator skill is responsible for extracting structured fields from the user's free-text description. Its prompt defines exactly what the AI model is asked to extract and how.
Navigate to
All → Now Assist Skill Kit → Skill → search for GRC Issue Validator → open → edit the Prompt field.
The current prompt instructs the model to extract seven fields: isIssue, issueTitle, problemStatement, issueType, dateOfOccurrence, rootCause, and potentialControls.
What you can safely change
- Extraction instructions for any existing field (e.g., make `rootCause` more or less strict)
- The `issueType` classification guidance
- Date handling behavior (e.g., how to treat ambiguous formats)
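As an illustration of a safe prompt edit, a stricter `rootCause` instruction might read as follows. The numbering and exact wording here are hypothetical, not the shipped prompt text:

```
5. rootCause
- Extract the root cause ONLY if the user explicitly states why the issue occurred.
- Do not infer a cause from symptoms alone.
- Return empty string ("") if no explicit cause is given.
```

Keep the output key name (`rootCause`) unchanged when editing instructions like this; renaming a key cascades into the agent instructions and tool script, which is the end-to-end change covered in Section 4.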
4. Extracting Additional Fields from Description and Mapping to the Issue Record
If your organization needs to capture additional information from the user's description — such as affected department, severity, or business impact — you can extend the extraction pipeline end-to-end.
The full chain is:
Skill Prompt (extract) → Agent Instructions (surface + pass) → Create an Issue Tool (receive + forward) → CommonGenAiUtils (map to record)
Step 1 — Add the new field to the skill prompt
Navigate to the GRC Issue Validator skill prompt (see Section 3). Add a new numbered instruction block and include the field in the OUTPUT FORMAT section.
Example snippet — skill prompt addition:
```
8. affectedDepartment
- Extract the name of the department or business unit most affected by this issue.
- Return empty string ("") if not found.

OUTPUT FORMAT:
{
  "isIssue": "",
  "issueTitle": "",
  "problemStatement": "",
  "issueType": "",
  "dateOfOccurrence": "",
  "rootCause": "",
  "potentialControls": "",
  "affectedDepartment": ""
}
```
Step 2 — Surface the field in agent instructions
In the agent's Instructions field (Section 2), add affected_department to:
- The structured summary display template so users can review it before submitting
- The inputs passed to the Create an Issue tool
Step 3 — Add the field to the Create an Issue tool
Navigate to:
All → Now Assist AI Agents → AI Agents → open Report a GRC issue → open the Create an issue tool → edit the Script field.
Example snippet — Create an Issue tool script:
```javascript
// Add to input_schema (JSON array):
{
    "name": "affected_department",
    "description": "Department affected by the issue, identified from description.",
    "mandatory": false
}

// Add to the fields object in the script body:
var fields = {
    problemStatement: inputs.problem_statement,
    name: inputs.issue_title,
    description: inputs.issue_description,
    issueType: inputs.issue_type,
    entity: inputs.actual_entity,
    control: inputs.actual_control,
    controlObjective: inputs.actual_control_objective,
    policy: inputs.actual_policy,
    regulation: inputs.actual_regulation,
    rootCause: inputs.root_cause,
    date: inputs.date_of_occurrence,
    affectedDepartment: inputs.affected_department // new field
};
```
Without the mapping step — what happens by default
If you stop here and skip Step 4, the extracted value is not lost. Because the field won't match any known column on the issue record, it gets added to the unmatchedInputs list and automatically appended to the issue description in a readable format. Your triage team can still see it when they open the record.
This is a safe default — no data is dropped. You only need Step 4 if you want the value mapped to a dedicated field on the issue table rather than written to the description.
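A minimal sketch of that fallback behavior, using illustrative names (`applyInputs`, `knownColumns`) rather than the actual platform code:

```javascript
// Hypothetical sketch of the default: inputs that match a known column
// are mapped directly; everything else is appended to the description
// so no data is dropped. Names are illustrative only.
function applyInputs(inputs, knownColumns) {
  var record = { description: inputs.issue_description || '' };
  var unmatchedInputs = [];

  Object.keys(inputs).forEach(function (key) {
    if (knownColumns.indexOf(key) !== -1) {
      record[key] = inputs[key]; // matched: map to the column
    } else if (inputs[key]) {
      unmatchedInputs.push(key + ': ' + inputs[key]); // unmatched: collect
    }
  });

  // Append unmatched values to the description in a readable format
  if (unmatchedInputs.length > 0) {
    record.description += '\n\nAdditional details:\n' + unmatchedInputs.join('\n');
  }
  return record;
}
```

In this sketch, `affected_department` would land in the description text until Step 4 gives it a dedicated column, which mirrors the behavior described above.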
Step 4 — Map the field in CommonGenAiUtils
Navigate to:
All → System Definition → Script Includes → search CommonGenAiUtils → open.
In the _createIssueRecord method, add your field mapping after the existing field assignments. The mapping should handle both target tables.
Example snippet — CommonGenAiUtils field mapping:
```javascript
// Read the new field from the fields object
var affectedDepartment = fields.affectedDepartment || '';

// Map to a dedicated column if it exists on the target table.
// Both branches write the same custom column here; adjust the column
// name per table if your schemas differ.
if (affectedDepartment) {
    if (targetTable == this.ISSUE_TRIAGE_TABLE_NAME) {
        // For sn_grc_advanced_issue_triage
        issueGr.u_affected_department = affectedDepartment;
    } else {
        // For sn_grc_issue
        issueGr.u_affected_department = affectedDepartment;
    }
}
```
5. Adjusting the Similarity Threshold
The similarity threshold controls how closely a record must match the issue description to be included in suggestions. The default is 0.7 (70%).
Navigate to
All → System Definition → Script Includes → search CommonGenAiUtils → open.
The threshold is set via the documentMatchThreshold parameter in _getRAGSearchResponse. You can override individual fetch methods to apply different thresholds per object type.
Example snippet — entity search with a custom threshold:
```javascript
var ragResponse = this._getRAGSearchResponse({
    tableName: this.ENTITY_TABLE_NAME,
    searchProfile: "sn_grc_now_assist_search_for_grc_entities",
    inputQuery: description,
    indexedFields: ['name', 'description'],
    documentMatchThreshold: 0.6 // lowered from default 0.7
});
```
| Value | Effect |
|---|---|
| 0.6 | More suggestions returned, lower precision |
| 0.7 (default) | Balanced for most environments |
| 0.8 | Fewer but more precise suggestions |
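The effect of the threshold can be sketched in plain JavaScript. This is an illustration of score filtering only, not the platform's search internals; the function name and result shape are assumptions:

```javascript
// Hypothetical sketch: keep only candidates whose match score meets
// the threshold. A lower threshold admits weaker matches.
function filterByThreshold(results, documentMatchThreshold) {
  return results.filter(function (r) {
    return r.score >= documentMatchThreshold;
  });
}

var candidates = [
  { name: 'Payment Processing Entity', score: 0.82 },
  { name: 'Vendor Onboarding Entity', score: 0.65 }
];
```

With a threshold of 0.7 only the first candidate survives; lowering it to 0.6 admits both, which is exactly the precision/recall tradeoff in the table above.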
6. Adjusting the Number of Suggestions
By default the agent returns up to 5 results per object type. These suggestions surface during the edit flow — when a user reviews the structured summary and chooses to edit a field, the agent presents the available alternatives retrieved during the search. Increasing this limit gives users more options to pick from; decreasing it keeps the edit flow focused.
Navigate to
All → System Definition → Script Includes → search CommonGenAiUtils → open.
Override the relevant fetch method calls to pass a different limit value.
Example snippet — entity search with a custom suggestion limit:
```javascript
var ragResponse = this._getRAGSearchResponse({
    tableName: this.ENTITY_TABLE_NAME,
    searchProfile: "sn_grc_now_assist_search_for_grc_entities",
    inputQuery: description,
    indexedFields: ['name', 'description'],
    limit: 8 // increased from default 5
});
```
7. Adding a New Object Type to Suggestions
If your organization wants the agent to suggest a custom object type — such as a risk record or a vendor — you can extend the suggestion pipeline.
Step 1 — Create an indexed source and search profile
Navigate to:
All → Indexed Sources → click New to create an indexed source for your custom table. Set the appropriate filter conditions to control which records are included. Once the indexed source is created, associate it with a search profile.
For detailed guidance on setting up indexed sources and search profiles, refer to the AI Search: Indexed Sources documentation.
Step 2 — Add a fetch method in CommonGenAiUtils
Navigate to:
All → System Definition → Script Includes → search CommonGenAiUtils → open.
Example snippet — custom fetch method:
```javascript
_fetchCustomObjectData: function(issueDescription, result) {
    var results = this._getRAGSearchResponse({
        tableName: 'your_custom_table',
        searchProfile: 'your_search_profile_name',
        inputQuery: issueDescription,
        indexedFields: ['name', 'description'],
        indexNames: ['body']
    });
    if (results.length > 0) {
        result["actual_custom_object"] = results[0].columns.name;
        result["suggested_custom_objects"] = this.formatResult(results);
    }
},
```
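The `formatResult` helper called above belongs to the script include, and its exact output is not shown in this article. As a purely hypothetical example, a formatter that turns search hits into a numbered display string might look like:

```javascript
// Illustrative stand-in for formatResult; the real method's output
// format may differ. Assumes each hit exposes columns.name, matching
// the fetch method above.
function formatResult(results) {
  return results.map(function (r, i) {
    return (i + 1) + '. ' + r.columns.name;
  }).join('\n');
}
```

Whatever the real format is, the agent displays the `suggested_custom_objects` string to the user during the edit flow, so keep it short and human-readable.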
Step 3 — Call it from fetchRelatedObjectInformation
Example snippet — adding to the result object and calling the new fetch method:
```javascript
// Add defaults to the result object alongside existing ones
result["actual_custom_object"] = "";
result["suggested_custom_objects"] = "";

// Call your fetch method at the appropriate point in the decision tree
this._fetchCustomObjectData(description, result);
```
Step 4 — Surface the result in agent instructions
In the agent's display template (Step 4 of the agent instructions), add your new field:
Example snippet — display template addition:
```
**Custom Object**: {actual_custom_object}
```
And handle it in the edit flow (Step 6 of the agent instructions) so users can select from suggested_custom_objects when editing a field.
Tradeoffs at a Glance
| Customization | Benefit | Risk |
|---|---|---|
| Update issue definition | Agent validates against your org's language | Too broad accepts noise; too narrow rejects valid input |
| Edit agent instructions | Tailored conversational experience | LLM is sensitive to wording changes; test thoroughly |
| Edit skill prompt | Extract exactly the fields your org needs | Output schema changes cascade to tool and script |
| Extract + map custom fields | Richer issue records with org-specific data | Four-point change; any mismatch causes silent failure |
| Lower similarity threshold | More suggestions surfaced | More irrelevant suggestions; user confusion |
| Raise similarity threshold | Cleaner, more precise suggestions | Sparse data returns no suggestions |
| Increase suggestion limit | More alternatives available to users in edit flow | Slower response time; potential noise |
| Add custom object type | Agent surfaces org-specific records | Added latency; requires indexing setup |
⚠ Important: Testing and Upgrade Compatibility
All customizations described in this article modify components that are part of the out-of-the-box agent delivery. Before deploying any change to production, ensure it is thoroughly tested in a sub-production instance using a representative set of real issue descriptions — including edge cases, partial inputs, and varied issue types.
When ServiceNow ships upgrades to the Report a GRC Issue agent — including changes to the skill prompt, agent instructions, tool scripts, or the CommonGenAiUtils script include — those updates will reflect the out-of-the-box configuration. Any customizations you have made will need to be reviewed, reconciled, and re-applied against the upgraded version on your end. This includes:
- Prompt changes in the GRC Issue Validator skill
- Instruction changes in the agent
- Custom field mappings in the tool script and script include
- Modified search thresholds or limits
- Any added fetch methods or object types
We recommend maintaining clear documentation of every customization your organization applies, including the original out-of-the-box value and the reason for the change, to make upgrade reconciliation straightforward.
