07-31-2025 07:33 AM - last edited a month ago
Hi,
Is there a way to recalculate only the Risk Rating fields for the Remediation Task tables?
We've followed the documentation on how to update Risk Score Weights (sn_sec_cmn_risk_score_weight).
Additionally, we've added a few new Risk Score Weights and corresponding choices to the Risk Rating fields of several tables (Vulnerable Item, Remediation Task, and the Remediation Task table used by Workspaces).
We're now facing the issue of having to update the Risk Rating fields of both Remediation Task tables.
Reapplying the Remediation Task Rules seems like a very unattractive option because of the number of records that would be recreated and the subsequent noise (assignments, emails, etc.) it would generate.
Is there another one-shot functionality that recalculates only the Risk Rating fields?
Solved!
a month ago
I wrote a Fix Script to do the job in the end.
Perhaps it's useful to someone faced with the same task.
The goal was to make the script as efficient as possible, so I used these techniques:
- Use as narrow a query as possible, to let the database do most of the work.
- Work with chooseWindow to chunk the data (credits to user SlightlyLoony and his community post).
- Use setWorkflow(false), so as not to trigger notifications or other adjacent processes.
Script:
(function VR_RecalculateRiskRating() {
    var logString = 'VR log:\n';
    var updateCount = 0;
    // Upper bounds of the risk score bands, from lowest to highest.
    var weights = [0, 19, 39, 69, 79, 89, 100];
    // One entry per rating: the risk_score range to query and the
    // risk_rating choice value it should map to.
    var queryMaps = [
        { queryValue: [0, 0],    targetValue: 7 }, // none
        { queryValue: [1, 19],   targetValue: 6 }, // very low
        { queryValue: [20, 39],  targetValue: 5 }, // low
        { queryValue: [40, 69],  targetValue: 4 }, // medium
        { queryValue: [70, 79],  targetValue: 3 }, // high
        { queryValue: [80, 89],  targetValue: 2 }, // very high
        { queryValue: [90, 100], targetValue: 1 }  // critical
    ];

    // Convenience in case we need to rerun this for more or fewer tables in the future.
    var targetTables = gs.getProperty('sn_vul.tables_to_process', '').split(',');
    if (targetTables.length && targetTables[0]) { // ''.split(',') yields [''], so check the first entry
        for (var table = 0; table < targetTables.length; table++) {
            logString += 'current table: ' + targetTables[table] + '\n';
            for (var map = 0; map < queryMaps.length; map++) {
                var glideRecord = new GlideRecord(targetTables[table]);
                // Explicit lower and upper bound; skip records that already have the correct rating.
                glideRecord.addQuery('risk_score', '>=', queryMaps[map].queryValue[0].toString());
                glideRecord.addQuery('risk_score', '<=', queryMaps[map].queryValue[1].toString());
                glideRecord.addQuery('risk_rating', '!=', queryMaps[map].targetValue.toString());
                var getNextChunk = _glideRecordChunker(glideRecord, 10000);
                while (getNextChunk()) {
                    while (glideRecord.next()) {
                        var riskScore = Number(glideRecord.risk_score.toString());
                        var riskRating = Number(glideRecord.risk_rating.toString());
                        for (var i = 0; i < weights.length; i++) {
                            // The first band whose upper bound covers the score wins.
                            var currentRating = weights.length - i;
                            if (riskScore <= weights[i] && riskRating != currentRating) {
                                glideRecord.risk_rating = currentRating.toString();
                                glideRecord.setWorkflow(false); // no business rules, notifications, etc.
                                glideRecord.update();
                                updateCount++;
                                break;
                            }
                        }
                    }
                }
            }
        }
    }
    logString += 'updateCount -> ' + updateCount + '\n';
    gs.info(logString);
})();
function _glideRecordChunker(glideRecord, windowSize, totalLimit) {
    // totalLimit is optional: a hard cap on the number of records to
    // attempt, mainly useful for unit tests.
    var processedCount = 0;
    return nextChunk;

    // Always re-query the same window (0, windowSize): every record the
    // caller updates drops out of the query's result set (the query
    // excludes the target risk_rating), so the first window always
    // holds the next unprocessed chunk.
    function nextChunk() {
        if (totalLimit && processedCount >= totalLimit) {
            return false;
        }
        glideRecord.chooseWindow(0, windowSize);
        glideRecord.setNoCount(); // skip the expensive total row count
        glideRecord.query();
        processedCount += windowSize;
        return glideRecord.hasNext();
    }
}
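If you want to sanity-check the band mapping before touching millions of records, here is a minimal standalone version of the same lookup (runnable as a background script; the sample scores are just illustrative):
// Mirrors the Fix Script's lookup: the first band whose upper bound
// covers the score determines the rating (7 = none ... 1 = critical).
function ratingForScore(riskScore) {
    var weights = [0, 19, 39, 69, 79, 89, 100];
    for (var i = 0; i < weights.length; i++) {
        if (riskScore <= weights[i]) {
            return weights.length - i;
        }
    }
    return -1; // score outside the expected 0-100 range
}
// Expected: 0 -> 7 (none), 25 -> 5 (low), 75 -> 3 (high), 95 -> 1 (critical)
var samples = [0, 25, 75, 95];
for (var s = 0; s < samples.length; s++) {
    gs.info(samples[s] + ' -> ' + ratingForScore(samples[s]));
}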
Results:
We had a runtime of roughly 3 hours 40 minutes for about 2.5 million records.
We considered this an acceptable runtime for a one-time data correction of this size.
Some further notes:
- I considered other options such as .setUseEngines(false), the undocumented GlideUpdateMultiple, etc. In the end they either didn't add anything in terms of speed or simply weren't available in scoped scripts.
- I also would have liked to query the Risk Score Weights directly, but the corresponding table lives in the Security Commons scope and didn't seem to yield any values when queried from the Vulnerability Response scope.
- The chunking function has a "totalLimit" parameter, which caps the overall number of records to process, for unit tests (see the sketch below).
- The Fix Script had "Record for rollback" enabled. It might run even faster with that option disabled.
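For illustration, a capped test invocation might look like this (the table and band are just examples; note that the query must exclude the target rating, so each processed record drops out of the result set):
var gr = new GlideRecord('sn_vul_vulnerable_item'); // example table
gr.addQuery('risk_score', '>=', '90');
gr.addQuery('risk_score', '<=', '100');
gr.addQuery('risk_rating', '!=', '1');
var nextChunk = _glideRecordChunker(gr, 10, 50); // windowSize 10, stop after ~50 records
while (nextChunk()) {
    while (gr.next()) {
        gr.risk_rating = '1';
        gr.setWorkflow(false);
        gr.update();
    }
}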
Hope this helps someone in the future 🙂
07-31-2025 07:38 AM
You can recalculate only the Risk Rating fields on Remediation Task records by executing the "Rollup vulnerable item values to vulnerability and group" scheduled job (or by invoking its underlying RollupCalculatorUtil script include) to refresh the risk_score. That refresh then triggers the "Update Risk Rating from Risk Score" business rule, without reapplying the full Remediation Task Rules or recreating any records.
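For reference, a minimal sketch of kicking that job off on demand from a background script (assuming the job exists under this exact name in sysauto_script; SncTriggerSynchronizer is a global, not officially documented API):
var job = new GlideRecord('sysauto_script');
if (job.get('name', 'Rollup vulnerable item values to vulnerability and group')) {
    SncTriggerSynchronizer.executeNow(job); // queue the job for immediate execution
} else {
    gs.info('Scheduled job not found');
}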
Kaushal Kumar Jha - ServiceNow Consultant - Let's connect on LinkedIn: https://www.linkedin.com/in/kaushalkrjha/
07-31-2025 07:49 AM
Thank you for the suggestion.
As far as I know, the BR "Update Risk Rating from Risk Score" only runs when the Risk Score changes.
I suspect that running the Scheduled Job would not trigger the Business Rule, because the Risk Scores themselves aren't changing, right?
We thought of changing the rollup weights (and then changing them back) as a workaround, but that's not a nice solution either.
The last resort would be a Fix Script that targets the Risk Rating directly.