kspaeth
Kilo Contributor

Automating the Process of "Closing" VIs on Non-Running Red Hat Kernels in VR with the Qualys Integration

This post is for anyone using the Qualys integration with Vulnerability Response who has had issues with VIs for the "Red Hat Update for kernel" QIDs from Qualys not closing on non-running kernels. There are many questions about this posted on these forums, but I have never seen an answer or solution posted.

Note: This post is not from a ServiceNow developer. I am a Security admin of VR at my organization, so I do not have direct control over the back end implementation (scripts, API changes, VR updates, etc.) and work with a dedicated SN developer to implement those pieces.

Root Cause of the Problem:

When a Red Hat kernel is updated, the previous kernel is no longer running but is still stored on the server. When scanning, Qualys will still flag vulnerabilities on the non-running kernel. This, however, poses minimal risk (depending on your definition of risk), and there is nothing within best practices that the server owner can do to remediate the VI created.

Problem:

  1. Qualys scans and finds vulnerabilities in Red Hat kernels regardless of their running state.
  2. Qualys notes whether each detection affects the running kernel (if you are using the more recent Qualys integration).
  3. These detections are passed to ServiceNow, where the "Affects running kernel" field reflects that status.
  4. By default, the VIs for these detections remain in the Open state regardless of the Affects running kernel value.
  5. If a VI is closed, it will reopen on the next integration run when the detection is updated.

Potential solutions we (and others on the forum) tried that do not work:

  1. Filter out non-running kernels in the API - this falls short when a VI was imported while "Affects running kernel" was Yes: once the value turns to No, the detection is no longer imported by the API and you are stuck with a VI in an Open state. This can be worked around using stale functionality or other methods, but those are either manual or inaccurate.
  2. Set up a script to close Red Hat kernel VIs with Affects Running Kernel (ARK) = No - this falls short because the VI simply reopens each time the detection is updated (a rough sketch of such a script follows this list).
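
For reference, here is a minimal sketch of the kind of close script described in #2. Table, column, and state values are assumptions based on our instance (sn_vul_vulnerable_item is the standard VI table, but the affects_running_kernel column name and the numeric state values may differ in yours):

```javascript
// Hypothetical scheduled script: close Red Hat kernel VIs where
// Affects Running Kernel = No. Column and state values below are
// assumptions - verify them against your own instance.
var vi = new GlideRecord('sn_vul_vulnerable_item');
vi.addQuery('vulnerability.summary', 'CONTAINS', 'Red Hat Update for kernel');
vi.addQuery('affects_running_kernel', false); // assumed column name
vi.addQuery('state', '!=', 3);                // 3 = Closed (assumed value)
vi.query();
while (vi.next()) {
    vi.state = 3; // Closed
    vi.update();
}
// The flaw: the next integration run updates the detection and
// flips these VIs right back to Open, so the closure never sticks.
```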

Below is the solution we came up with to get around the problem. While still a "hack", it does automate the process.

Prerequisites:

  1. Updated Qualys integration that supports the Affects Running Kernel field
  2. Updated VR that supports Exception Rules
  3. Ability to update logic that allows Vulnerability Groups/Remediation Tasks to close
  4. No API filters for red hat kernel updates in your Qualys Integration
  5. Your company does not utilize all Deferral reasons available in VR and can select one that will not and cannot be used for any other deferral outside of this scenario. This is key, as this solution relies on allowing VR to close VGs based on a specific Deferred reason/state combination.

Solution:

We essentially used Exception Rules to lock VIs for Red Hat kernel updates on non-running kernels in a Deferred state. We then modified the logic that VGs use to close so it accounts for this unique Deferred reason/state combo, allowing any VGs that hold these VIs to close.

  1. Referencing prerequisite #5 above, determine which Deferral reason you want to use for your exception rule. Again, this cannot be used anywhere else. In our case, we only allow VGs to be deferred with Risk Accepted or False Positive, so we chose Mitigating Control in Place for this exception rule.
  2. In the logic that allows VGs to close, add the state and reason combination you selected in the previous step. In our case, we added the ability for VGs to close in the Deferred - Mitigating Control in Place state along with the already existing Closed - Fixed and Closed - Stale states (a rough sketch of this change follows these steps).
  3. Implement the Exception Rules function of VR if it is not already in place. We only use one approver level.
  4. Create a new exception rule that meets these criteria:
    1. Vulnerability Summary contains "Red Hat Update for kernel"
    2. Affects running kernel is No
    3. Select execute on existing data
    4. Select an assignment group this will go to. The VG this rule creates will essentially be a black hole for VIs for non-running Red Hat kernel updates. The choice is up to your organization.
    5. Wait for the rule to populate all VIs that meet the criteria.
  5. At this point, any VIs that meet these criteria will be deferred and locked in that state. Any new VIs will follow the exception rule into that black-hole VG. Any open VGs containing those specially Deferred VIs will now close just as they would if the VIs were Closed - Fixed.
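
To make step 2 more concrete, here is a rough sketch of the shape of that close-logic change. Where this logic lives varies by VR version (in our case our SN developer modified it on the back end), and the function name, state, and substate values here are hypothetical - map them to whatever your instance actually uses:

```javascript
// Hypothetical helper used when evaluating whether a VG can close:
// decide whether a single VI counts as "done". State/substate values
// are assumptions from our instance.
function viCountsAsClosed(vi) {
    var CLOSED = '3';   // assumed state value
    var DEFERRED = '4'; // assumed state value
    // Assumed substate values: 1 = Fixed, 2 = Stale,
    // 5 = Mitigating Control in Place (our reserved deferral reason).
    if (vi.state == CLOSED && (vi.substate == '1' || vi.substate == '2'))
        return true;
    // The addition: treat our reserved Deferred reason as closeable so
    // VGs holding only non-running-kernel VIs are allowed to close.
    if (vi.state == DEFERRED && vi.substate == '5')
        return true;
    return false;
}
```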

 

This is obviously not a perfect solution, as it relies on how your organization uses VR, but this has been a multi-year frustration for our organization and this was the creative solution we came up with. It ensures you do not need to wait for these VIs to close as stale, manually track them, modify the API, etc.

 

Comments
Howard7
ServiceNow Employee

Another option, if it works for your organization, would be to keep vulnerabilities open for non-running kernel VITs but to downgrade the risk rating (to low or informational).

kspaeth
Kilo Contributor

Wouldn't the issue still exist that they are Open VITs on Open VULs assigned to end users with in-progress SLAs?

When these are first assigned, they are valid vulnerabilities (on a running kernel), but when the end user fixes them (the kernel is now non-running), we need to acknowledge their remediation efforts so they are not still held to an SLA for something they fixed.

Eugene8
Mega Contributor

Thank you for the article. We have just created a scheduled job that goes through VITs with "Affects running kernel" = "NO" and defers them.
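
A minimal sketch of such a scheduled script, for anyone curious (the table is the standard VI table, but the column name and state/substate values are assumptions from our instance - verify against yours):

```javascript
// Hypothetical scheduled job: defer VITs whose detection no longer
// affects the running kernel. Column and state/substate values are
// assumptions - check your instance's choice lists.
var vit = new GlideRecord('sn_vul_vulnerable_item');
vit.addQuery('affects_running_kernel', false); // assumed column name
vit.addQuery('state', '!=', 4);                // 4 = Deferred (assumed)
vit.query();
while (vit.next()) {
    vit.state = 4;    // Deferred (assumed value)
    vit.substate = 5; // deferral reason (assumed value)
    vit.update();
}
```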

Two comments:

- Exception rules for vulnerability deferral do not work in this scenario because they are not triggered when "Affects running kernel" changes from "YES" to "NO" on an existing VIT.

- The same approach can be used for the two other categories - "Affects running service" and "Affects exploitable config". Note, however, that for these it is the "YES" values that have to be deferred. The reason is that there was an error in the Qualys API and the values 0 and 1 are inverted. Qualys updated their API document just a few days ago, and ServiceNow needs to update their documentation to match. I will create a ticket to inform ServiceNow about this.

New Qualys API doc: Qualys API (VM, PC) XML/DTD Reference

 
