Sarah Wood
Administrator

Performance

 

Overview

Performance considerations are important for larger volume tables and integrations in ServiceNow, primarily around data processing and user experience.

 

VR Performance Angles.png

 

With Vulnerability Response, it is recommended to review the key Support KB Articles as a primary starting point. These articles shed light on notable optimizations for handling large data volume implementations, primarily around data ingestion and processing, as well as user experience and reporting.



Contents

The sections below provide detailed guidance related to the following topics, as well as additional resources:

1) Data On-Boarding

2) Data Management

3) Data Normalization for Improved Performance

4) Vulnerability Response Reporting

5) Monitoring Vulnerability Response Performance

 



 

1) Data On-Boarding

As data is loaded from 3rd party integrations, consider filtering what is brought into ServiceNow so the focus stays on actionable data (e.g. start with selected vulnerability detections based on Severity, then iterate forward from there).

Where certain 3rd party integrations for Vulnerability Response do not support robust filtering via their API, the "Vulnerability Response Exclusion Rules" feature can help further optimize which Detection records become Vulnerable Items.
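The severity-based filtering idea can be sketched in plain JavaScript. This is an illustration only: in the platform, this is handled by the integration's API filters or by Exclusion Rules, and the field names below are assumptions, not the actual import payload schema.

```javascript
// Sketch: keep only detections at or above a chosen severity before
// import, mirroring what Exclusion Rules achieve in-platform.
// Assumption: lower number = more severe (1 = Critical, 5 = Informational).
function filterActionableDetections(detections, minSeverity) {
  return detections.filter(function (d) {
    return d.severity <= minSeverity;
  });
}

var incoming = [
  { id: 'DET001', severity: 1 }, // Critical
  { id: 'DET002', severity: 4 }, // Low
  { id: 'DET003', severity: 2 }  // High
];

// Start with Critical/High only, then iterate forward to broader severities.
var actionable = filterActionableDetections(incoming, 2);
// actionable contains DET001 and DET003
```

Starting narrow and widening the filter over time keeps early table volumes manageable while the remediation process matures.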

 


2) Data Management

 

VR Data Management.png

Over time, larger volume Vulnerability Response tables can be optimized in two key areas:

 

I) Handling Stale Active Records

  • The use of Auto-Close Rules ensures that records such as Detections, and in turn Vulnerable Items, reflect current exposures that are actionable.
  • Most commonly, Active Vulnerable Items are closed out when the 3rd party vulnerability scanner reports that a previously identified exposure has been confirmed as "Fixed/Remediated/Closed".
  • Imported Vulnerable Items may relate to assets that were decommissioned or destroyed, or to configurations that changed, where the 3rd party vulnerability scanner has no mechanism to synchronize those updates to ServiceNow via its API.
  • The result is that records such as Vulnerable Items appear "Active" even though they have not been "seen again" recently.
  • Using Auto-Close Rules, "Active" records can be better aligned to accurately reflect current exposures. Where needed, Vulnerable Item records that have not been reported within a specified duration can be closed as "stale", on the inference that they no longer depict current exposures.
  • Auto-Close Rules are especially handy for ephemeral or non-persistent hosts such as Virtual Machines, because different thresholds can be configured per rule. For example, one rule auto-closes VITs for short-lived Virtual Machines that have not been reported to ServiceNow (i.e. last seen) in 7 days, while another rule applies an extended 30-day threshold for traditional network gear and servers, which are longer lived and scanned via scheduled network-based scans rather than more frequent agent-based scans.
  • Maintaining hygiene of Active records on large volume tables is beneficial: most list queries, reports, and scheduled operations focus primarily on records where "Active = True", so this improves performance over time.

Resource: https://www.servicenow.com/docs/bundle/yokohama-security-management/page/product/vulnerability-respo...
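The per-asset-class threshold logic described above can be sketched as follows. This is an illustration only: in the platform, Auto-Close Rules are configured declaratively, and the asset-class names and date fields here are assumptions.

```javascript
// Sketch: different staleness windows per asset class, as with
// Auto-Close Rules (7 days for short-lived VMs, 30 for servers/network gear).
var STALE_THRESHOLD_DAYS = {
  virtual_machine: 7,   // short-lived, frequently re-imaged hosts
  server: 30,           // longer-lived, scanned on a network schedule
  network_gear: 30
};

// A record is stale if its "last found" timestamp is older than the
// threshold for its asset class (default 30 days if unmapped).
function isStale(assetClass, lastFoundMs, nowMs) {
  var thresholdDays = STALE_THRESHOLD_DAYS[assetClass] || 30;
  var ageDays = (nowMs - lastFoundMs) / (1000 * 60 * 60 * 24);
  return ageDays > thresholdDays;
}

var now = Date.parse('2024-06-30');
var lastSeen = Date.parse('2024-06-20'); // 10 days ago
isStale('virtual_machine', lastSeen, now); // true  (10 > 7)
isStale('server', lastSeen, now);          // false (10 <= 30)
```

The same "seen X days ago" signal thus yields different outcomes per asset class, which is the point of defining multiple rules.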

 

II) Removing Older Closed Records

  • Over time, records on larger tables like Vulnerable Items will have been Closed for an extended duration (e.g. Closed for more than a year, more than 2 years, and so forth).
  • Based on data retention needs, these Closed records can be removed from the primary large volume table once they are no longer required.
  • Managing Closed records can be done in two forms today, via configurable thresholds, to meet varying data retention needs:
    • Archiving (with Archiving Rules, Archive Destroy Rules)
    • Auto-Delete (with Table Cleaner)

Resource: https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0999117
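The retention decision behind both mechanisms can be sketched as a simple predicate. This is an illustration only: in the platform, Archive Rules and the Table Cleaner apply these thresholds declaratively, and the field names here are assumptions.

```javascript
// Sketch: a closed record becomes eligible for archiving or deletion
// once it has been closed longer than the configured retention window.
function eligibleForCleanup(record, retentionDays, nowMs) {
  if (record.active) return false; // only closed records are candidates
  var closedAgeDays = (nowMs - record.closedAtMs) / (1000 * 60 * 60 * 24);
  return closedAgeDays > retentionDays;
}

var now = Date.parse('2024-06-30');
var oldVit = { active: false, closedAtMs: Date.parse('2022-01-15') };
eligibleForCleanup(oldVit, 365, now); // true: closed well over a year ago
```

Whether eligible records are archived (retained elsewhere) or deleted outright is then a data-retention policy choice, not a technical one.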

 


3) Data Normalization for Improved Performance

  • Consider the illustrated data volume of the core tables involved with Vulnerability Response, for a sense of comparative magnitude.

 

Data Volume of Core Tables.png

 

  • Certain core configurations, such as Assignment Rules and Risk Scoring Calculators, are evaluated against each record on larger VR tables such as Vulnerable Items.
  • Inefficient queries such as "Vulnerability summary > Contains Java" can be expensive on larger volume tables, especially when run against each record.
  • Where possible, use Vulnerability Response Classification Rules to normalize data and avoid repeating inefficient queries.
  • This allows the expensive query to be computed one time (e.g. on the Vulnerability Entry, Third-Party Entry, or Discovered Item table) and used many times after in core configurations, as well as in list queries, reports, watch topics, and so forth.

Classification Rules can be applied to each Vulnerability Entry to normalize it for repeatable queries, reports, and configurations like Assignment Rules.

 

Data Volume of Core Tables 2.png

 

When configuring Assignment Rules for Vulnerable Items, the Classification values from the Vulnerability Entries can be used as an input. This way, Assignment Rules are not crafted with inefficient queries (e.g. Vulnerability > Summary CONTAINS <value>) that are evaluated over and over again for each Vulnerable Item.

 

Rather, the logic is executed once per Vulnerability and then reused across ad-hoc queries, reports, and configurations that run on every Vulnerable Item, such as Assignment Rules.
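The compute-once, reuse-many pattern can be sketched as follows. This is an illustration only: in the platform, Classification Rules store the value on the entry record, and the group names and fields below are hypothetical.

```javascript
// Sketch: the expensive text match runs once per Vulnerability Entry,
// not once per Vulnerable Item, and the result is stored for reuse.
function classifyVulnerability(entry) {
  if (/java/i.test(entry.summary)) return 'Java';
  if (/openssl/i.test(entry.summary)) return 'OpenSSL';
  return 'Other';
}

var entry = { id: 'CVE-2024-0001', summary: 'Remote code execution in Java runtime' };
entry.classification = classifyVulnerability(entry); // computed one time

// Downstream assignment logic keys off the cheap precomputed value
// instead of re-running a CONTAINS query on every Vulnerable Item.
function assignmentGroup(vulnerableItem) {
  return vulnerableItem.entry.classification === 'Java'
    ? 'Java Remediation Team'    // hypothetical group name
    : 'General Remediation';     // hypothetical group name
}

assignmentGroup({ entry: entry }); // 'Java Remediation Team'
```

Every Vulnerable Item referencing this entry now gets routed with a simple equality check rather than a string scan.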

 

 

Data Normalization for Improved Performance.png

Resource: https://www.servicenow.com/docs/bundle/yokohama-security-management/page/product/vulnerability-respo... 

 


4) Vulnerability Response Reporting

Reporting on larger volume tables may have poor response times depending on the query and report operation. Where data visualizations are needed on larger volume tables, cached reporting can be leveraged to improve user experience.

 

Resources: 

 


5) Monitoring Vulnerability Response Performance

It is highly recommended to leverage the SecOps Vulnerability Response Health Dashboard within the ServiceNow instance. This provides insights into several areas of performance and overall configuration:

  • Integrations (e.g. Failed vs. Successful runs)
  • Slow queries and slow scripts
  • Optimized configurations (e.g. CI Matching, Assignment Rules)
  • Technical debt (e.g. insights into high impact customizations that can present upgrade challenges and possibly performance issues)

Resource: https://www.servicenow.com/docs/bundle/yokohama-security-management/page/product/vulnerability-respo...

 


Additional Resources

 

 
