‎10-16-2020 07:32 AM
Our daily Qualys host detection job is completing with a Substate status of Failed. For the past couple of days, the job has not been able to ingest all of the Qualys data into the ServiceNow Vulnerability Response module. The failed Qualys import source substate comes back with the message under Notes: "At least one import queue entry is in error. No more data to process at this time". Drilling down into the Qualys import source and looking at the VINTPXXXXX record, the Notes message states "Some import queue entries are in error". Drilling down further into the VINTPXXXXX record, the Qualys Vulnerability Import has status Error, with the processing notes message "Job exceeded processing time and was forced to complete status".

This issue started happening in our prod instance without any recent changes to it. Can someone advise what could be causing some of the Qualys imports to fail? I have also attached a screenshot for reference.
Labels: Vulnerability Response

‎10-16-2020 01:50 PM
Hey there,
Typically when this happens, you are bumping up against the preset 3600 s (60 minute) timeout threshold on the import queue, i.e. the time allowed to process each individual paginated payload file sent to ServiceNow.
Currently, this threshold is hardcoded in the core script include "VulnerabilityDSAttachmentManager" and is not configurable without customizing that script include.
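To illustrate the failure mode: a processing job with a hardcoded time budget checks elapsed time between payload files and force-completes once the budget is exhausted, leaving the remaining entries in error. This is only a simplified sketch of that pattern, not the actual VulnerabilityDSAttachmentManager code; the function and field names are made up for illustration.

```javascript
// Illustrative sketch only (NOT actual ServiceNow internals): a queue
// processor that force-completes once a hardcoded time budget is spent.
const MAX_PROCESSING_MS = 3600 * 1000; // the 3600 s (60 min) threshold

function processQueue(entries, processEntry, now = Date.now) {
  const start = now();
  const result = { processed: 0, forcedComplete: false };
  for (const entry of entries) {
    if (now() - start >= MAX_PROCESSING_MS) {
      // Corresponds to the "Job exceeded processing time and was
      // forced to complete status" message in the processing notes.
      result.forcedComplete = true;
      break;
    }
    processEntry(entry);
    result.processed++;
  }
  return result;
}
```

The key point is that the budget applies per payload file batch, so unusually large files (lots of detections per host) can trip it even when the overall integration is healthy.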
By chance, was this your very first import from Qualys (for detection data)? Or was it a delta data load (Import Since set to yesterday), or a backfill data load (where you set Import Since to days or weeks ago)?
In some cases these files can grow to be very large (especially if you are using Qualys Agents), where a given host from Qualys is returned with a large amount of vulnerability detection data to be processed.
Your best bet is to open an HI support ticket. They can review the files returned from Qualys to assess whether they are too large to be processed in the 60 minute window, and provide guidance on tuning for your specific situation so you can move forward.

‎10-16-2020 08:31 AM
Hi,
I have seen this caused by:
1. A large backlog of jobs in ServiceNow
2. An issue on the Qualys side
Take a look at System Diagnostics > Diagnostics Page > Scheduler queue length (it should be a small number, or rapidly decreasing). Also consider whether other intensive processes are running at the same time.
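The "small number or rapidly decreasing" heuristic above can be expressed as a simple check over a series of queue-length samples. This is an illustrative sketch, not a ServiceNow API; the threshold of 50 is an assumed cutoff for "a small number", not an official value.

```javascript
// Illustrative sketch (not a ServiceNow API): judge scheduler queue
// health from length samples taken over successive observations.
const SMALL_QUEUE_THRESHOLD = 50; // assumed cutoff for "a small number"

function queueLooksHealthy(samples) {
  if (samples.length === 0) return true;
  const latest = samples[samples.length - 1];
  if (latest <= SMALL_QUEUE_THRESHOLD) return true; // small backlog is fine
  // A large backlog is acceptable only if it is shrinking sample-over-sample.
  for (let i = 1; i < samples.length; i++) {
    if (samples[i] >= samples[i - 1]) return false; // growing or stuck
  }
  return true; // large but decreasing
}
```

A persistently large or growing queue suggests the import job is starved of worker time by other scheduled jobs, which can push payload processing past the timeout.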
‎10-20-2020 07:50 AM
Thank you Chris, our support admin has a support ticket open. I will keep you in the loop on the fix.

‎10-20-2020 07:55 AM
This is not our first import; it had been running successfully until now. Our admin team has a support ticket open, and support suggested turning off setWorkflow(true). This was done in our UAT instance and it was successful in reducing substate failures, but it may have side effects. I will keep the thread updated with the fix. Thanks