Separating VI by Proof - any feedback on experiences?

Aaron Molenaar
Mega Guru

ServiceNow recently added the ability to separate out detections by proof into individual VI (https://docs.servicenow.com/bundle/vancouver-security-management/page/product/vulnerability-response...).

 

A classic example would be log4j, where there might be multiple detections of the same vulnerability in different directories (different applications) on a server. These different directories need to be broken out and assigned to individual application teams to address. OOB, these are all consolidated into a single VI. Separating by proof would give you multiple VI to route to multiple teams for clear responsibility/remediation.

 

To get multiple VI by proof, the solution requires you to list the specific vulnerabilities you want separated by proof in a dedicated list. For log4j, this might be achievable, as there might be a dozen vulnerabilities that you need to list.

 

However, another example is Java installs, where there might be a handful of different installs in different directories on a given device, supporting different applications, and where vulnerabilities need to be sent to different application support teams for remediation. There are many hundreds of unique Java vulnerabilities, with up to dozens of new ones appearing per month. Manually maintaining this in a list is far from ideal and prone to mistakes.

 

Has anyone implemented this solution to break apart VI by proof? If so, how broadly are you doing it? Any thoughts on how to do it holistically [ideally with low/no maintenance] for all vulns with multiple detections? Our team was really looking for this to be a holistic all-or-nothing kind of switch.

 

We are a Rapid7 shop.

 

Input appreciated, thanks!


andy_ojha
ServiceNow Employee

Hey Aaron,

 

Great call out here.

At this time, the ability to use "Proof" as a unique key for Vulnerable Items is limited to Rapid7 - and it does require maintaining the list of vulnerabilities individually, as you pointed out.

It is still a great step forward from where we were before, when there was no real way to carve out certain flavors of vulnerabilities into their own granular Vulnerable Items (e.g. Log4j, as you called out).

The use case you present is totally fair: wanting the ability to do this at a larger scale across several "flavors" or "families" of vulnerabilities (perhaps condition- or rule-based), and potentially even holistically across all records being brought in (a simple on/off, like the Port granularity config).

----------------------------------------------------------------------------------------------------

 

If you have some time, would you mind creating an Idea Submission to get this tracked as an Enhancement opportunity?
   - This would really help out others in the SecOps Community as well
   - From the Community site here, under Resources -> Idea Portal

 

----------------------------------------------------------------------------------------------------

 

That said, I probably would not recommend it - but thinking out loud:


In order to create Vulnerable Items with Proof factored in as a unique key (split by proof), we'd have to look at overriding the "_viLookupKeys" object in the 'DetectionBase' Script Include to incorporate the Proof value...
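
Conceptually, something like the sketch below is what I mean - purely illustrative and untested. The sn_vul scope, the extension pattern, and the actual shape of "_viLookupKeys" (shown here as a simple array of detection field names) are all assumptions that would need to be verified against the DetectionBase Script Include in your instance:

    // SKETCH ONLY - untested, and not something I'd recommend (see above).
    // ASSUMPTIONS: the sn_vul scope, the extension pattern, and the shape of
    // _viLookupKeys (shown here as an array of detection field names) must all
    // be verified against the real DetectionBase Script Include.
    var DetectionRapid7Custom = Class.create();
    DetectionRapid7Custom.prototype = Object.extendsObject(sn_vul.DetectionBase, {
        // Appending 'proof' to the lookup keys is the conceptual change that
        // would make Proof part of VI uniqueness (field names are illustrative)
        _viLookupKeys: ['vulnerability', 'ci', 'proof'],

        type: 'DetectionRapid7Custom'
    });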

 

Doing so on a fresh install is one thing, but doing this on an existing installation with Detections and Vulnerable Items already created would be tricky - essentially, the existing Vulnerable Items would need to be migrated to the new unique key combination (i.e. retrofitted) ...
   -- Check out the "External ID" column on a Vulnerable Item (quick script to peek at these below)
   -- That hash value in the "External ID" would need to be recomputed properly to account for the VIs being split up with Proof as another key
   -- Not doing so would present problems with maintaining States appropriately (VIs not closing when they should, etc.)
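
For reference, a harmless read-only background script to eyeball those External ID hashes before touching anything - 'sn_vul_vulnerable_item' and 'external_id' are the standard VR table/field names, but confirm them in your instance:

    // Read-only peek at the External ID key hash on a few active VIs.
    // Table and field names are the standard VR ones ('sn_vul_vulnerable_item',
    // 'external_id'), but verify them in your instance before relying on this.
    var vi = new GlideRecord('sn_vul_vulnerable_item');
    vi.addActiveQuery();
    vi.setLimit(5);
    vi.query();
    while (vi.next()) {
        gs.info(vi.getValue('number') + ' -> External ID: ' + vi.getValue('external_id'));
    }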

 

Unfortunately, the retrofit is not a trivial path when factoring in the holistic approach (on/off --> Use Proof as a VI Key).

I requested one of our team members with HI access to submit the idea (wish that wasn't behind HI).

 

Having discussed and dug into this a bit more, I have another question. If we want to stick with the OOB functionality, since the underlying list is simply a table, we could load the table initially with the 2600+ unique log4j and Java vulnerabilities we want to break apart today (and perhaps even maintain it automatically as new vulnerabilities that match our criteria come into the system - see the rough sketch below).
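
Something like this scheduled script is what I'm imagining for the automatic maintenance piece - a rough, untested sketch where 'sn_vul_proof_split_list' and its 'vulnerability' field are placeholders for whatever table/field actually back the proof-separation list, and the name-matching criteria are illustrative only:

    // ROUGH SKETCH - untested, placeholder names.
    // sn_vul_third_party_entry is the standard VR third-party vulnerability
    // table; 'sn_vul_proof_split_list' and its 'vulnerability' field are
    // PLACEHOLDERS for the table/field backing the proof list in your
    // instance, and the matching criteria below are illustrative only.
    var vul = new GlideRecord('sn_vul_third_party_entry');
    var qc = vul.addQuery('name', 'CONTAINS', 'log4j');
    qc.addOrCondition('name', 'CONTAINS', 'java');
    vul.query();
    while (vul.next()) {
        var entry = new GlideRecord('sn_vul_proof_split_list'); // placeholder
        entry.addQuery('vulnerability', vul.getUniqueValue());
        entry.query();
        if (!entry.hasNext()) {            // only add vulns not already listed
            entry.initialize();
            entry.setValue('vulnerability', vul.getUniqueValue());
            entry.insert();
        }
    }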

 

However, what script would need to be run on these to break apart legacy existing VI and what would be the easiest way to trigger it holistically on initial table load and also on new table entries (if we are automatically populating the table on new vulnerability import with select vulns)?

 

Appreciate any thoughts/input on this approach as an interim.

Hey there - not sure if you made forward movement on this, but figured I'd share a thought.

I think there might be a way to tackle this retrofit (break apart the legacy/existing data) - but I would strongly recommend opening a NOW Support Case for a better set of eyes/guidance with the retrofit of existing data, and perhaps doing this in a Sandbox instance first (perhaps a throw-away instance cloned from your PROD or SUBPROD)...
 --> This is especially recommended if you were a Rapid7 Data Warehouse (DWH) shop and used the "De-dup" migration property to move to Rapid7 InsightVM (IVM) and still have those legacy DWH records in play...

When the Detection Key Config feature came out, there was the concept of having to retrofit the Detections and Vuln Items to re-align to the adjusted Keys (for fresh installs and net-new data imports it worked fine, but older data had to be accounted for with the Detection Keys changing)...

I think that the code and logic for that retrofit may help you.

Check out these two components:

  • Scheduled Job: Fixing the detections for updated key for Rapid7
  • Script Include: DetectionsConfigurableKeyFixUtil

Would be worth considering - perhaps after you make the update to call out the specific vulns, running that "retrofit" may potentially align your existing DETs and VIs.

Even if it does not do that cleanly out of the gate, a bunch of the logic in there would be what you want to repurpose vs. rebuilding on your own.
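
For kicking that scheduled job off on demand (e.g. right after loading your vulnerability list), something like this from a background script might work - again, sandbox first, and treat the pattern and the exact job name as assumptions to verify:

    // Kick off the OOB Rapid7 detection-key fix job on demand - SANDBOX FIRST.
    // SncTriggerSynchronizer.executeNow() is a common global-scope pattern for
    // running a scheduled script job immediately; treat its availability (and
    // the exact job name) as assumptions to verify in your instance.
    var job = new GlideRecord('sysauto_script');
    job.addQuery('name', 'Fixing the detections for updated key for Rapid7');
    job.query();
    if (job.next()) {
        SncTriggerSynchronizer.executeNow(job);
    } else {
        gs.info('Job not found - check the exact name in sysauto_script.');
    }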

 

--------------------------------------------------------------------------

 

Reference -- See the VR v14.0 Release Notes

 

https://docs.servicenow.com/bundle/sandiego-release-notes/page/release-notes/security-operations/sec...

Changes to Detection key configuration for third-party integrations:

Prior to v14.0, you might avoid problems with unmatched CIs, duplicate CIs, and vulnerabilities not closed but reported ‘Fixed’ by a scanner by modifying the keys on detections from third-party scanners. See the Dynamically changing Keys on detections [KB0859692] article in the HI Knowledge Base for more information.

If you modify these detection keys, the detection counts will not match between third-party scanner(s) and your instance. You have to clean up any duplicate records created by the change. To delete these duplicate detections and clean up your detection data, you need to create a fix script and run it as described in the previous KB article.

Starting with v14.0, detection data is cleaned up automatically by an integration-specific scheduled job that is triggered post-upgrade. For customers with existing detection data, this job might take time to run.

If you have previously implemented the cleanup fix using the previous KB article, this scheduled job is not needed and is not triggered to update the data.

The job names for the detection keys are specific to each integration (Qualys, Rapid7, and Tenable):

  • Fixing the detections for updated key for Rapid7
  • Fixing the detections for updated key for Qualys
  • Fixing the detections for updated key for Tenable

See the Dynamically changing Keys on detections [KB0859692] article in the HI Knowledge Base and Vulnerability Response vulnerable item detections from third-party integrations for more information.