NVD integration failing consistently

superhumanben
Tera Contributor

Since rolling out VR late last year, we've never seen a fully clean pull from NVD. Recently, with the change from API 1.0 to 2.0, we haven't been able to pull anything at all; the initial errors are either NVD:0 or 404 errors. We're on a supported application version, and we've signed up for and implemented an API key, but we're still getting errors. Any thoughts on how to troubleshoot this further?
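
For reference, this is the kind of quick sanity check that can be run from outside the platform to see whether the 404s are coming from the retired 1.0 endpoint versus the current 2.0 one. The URLs and the apiKey header below come from NVD's public documentation; this is just an illustrative sketch, not our actual integration code.

# Illustrative only: compare the retired 1.0 CVE endpoint with the current 2.0 one.
import requests

API_KEY = "YOUR-NVD-API-KEY"  # placeholder - use the key registered with NVD

endpoints = {
    "1.0 (retired)": "https://services.nvd.nist.gov/rest/json/cves/1.0",
    "2.0 (current)": "https://services.nvd.nist.gov/rest/json/cves/2.0",
}

for label, url in endpoints.items():
    resp = requests.get(
        url,
        headers={"apiKey": API_KEY},
        params={"resultsPerPage": 1},  # keep the test payload tiny
        timeout=30,
    )
    print(f"{label}: HTTP {resp.status_code}")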


I did find how to adjust that in a different, Qualys-related article before posting my question; however, that article also notes that there may be performance issues from increasing the payload size and that pcrs_page_size could be used to reduce the payload. I can't find that value anywhere, so perhaps it's a Qualys-only setting?

 

I've had to adjust the max_integration_payload_size value upwards 3 times now - currently at twice the original - as the size of the payload from NVD keeps going up.

 

Is there any page/record limitation that can be applied to the NVD calls to reduce the payload size, versus just continually increasing the max? This is what I thought the "Modify the integration instance parameters to ensure that the payload attachment size is within the specified limit." portion was referring to, but I can't find any info to that effect.
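
In case it helps frame the question: the NVD 2.0 API itself supports paging via resultsPerPage and startIndex, so something like the rough sketch below is what I was hoping the integration instance parameters exposed. The endpoint and parameter names are from NVD's public docs; the page size is just an example, and this is not a ServiceNow setting.

# Illustrative only: pull CVEs in small pages so each response stays small.
import time
import requests

API_KEY = "YOUR-NVD-API-KEY"  # placeholder
URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
PAGE_SIZE = 500  # smaller pages -> smaller individual payloads

start_index = 0
while True:
    resp = requests.get(
        URL,
        headers={"apiKey": API_KEY},
        params={"resultsPerPage": PAGE_SIZE, "startIndex": start_index},
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    print(f"fetched {len(data.get('vulnerabilities', []))} of {data['totalResults']}")

    start_index += PAGE_SIZE
    if start_index >= data["totalResults"]:
        break
    time.sleep(1)  # stay well inside NVD's documented rate limits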

superhumanben
Tera Contributor

Interestingly enough, today I decided to run the CVE and CPE integrations individually rather than relying on the combined "NIST National Vulnerability Database Integration - API (CVE and CPE)".

CVE completed quickly and without a single error.
CPE was also failing with NVD:0 and 503 errors at first, but then started picking up on retry. So for NOW... we have CVE and CPE data from the individual integrations.

Does anyone know why this would be happening? And would we want to run the unmapped CPE integration as well?
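
For what it's worth, the retry behavior that eventually got CPE through looks roughly like this when reproduced outside the platform. The CPE 2.0 endpoint and apiKey header are from NVD's docs; the backoff values are purely illustrative.

# Illustrative only: retry transient 503s from the CPE endpoint with backoff.
import time
import requests

API_KEY = "YOUR-NVD-API-KEY"  # placeholder
CPE_URL = "https://services.nvd.nist.gov/rest/json/cpes/2.0"

def fetch_with_retry(params, max_attempts=5):
    """Retry transient 5xx responses with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(CPE_URL, headers={"apiKey": API_KEY},
                            params=params, timeout=60)
        if resp.status_code < 500:
            resp.raise_for_status()  # surface 4xx (e.g. bad key) immediately
            return resp.json()
        wait = 2 ** attempt  # 2s, 4s, 8s, ...
        print(f"attempt {attempt}: HTTP {resp.status_code}, retrying in {wait}s")
        time.sleep(wait)
    raise RuntimeError("NVD CPE API still failing after retries")

products = fetch_with_retry({"resultsPerPage": 100, "startIndex": 0})
print(products["totalResults"], "CPE records available")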

Randy Ritzer
Tera Expert

Getting the same 404 errors consistently with the API key entered. The API key was tested outside of ServiceNow and is working.
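
For anyone wanting to repeat that outside-of-ServiceNow test, a minimal sketch (public NVD 2.0 CVE endpoint and apiKey header per NVD's docs; the key value is a placeholder):

# Illustrative only: confirm the key works against the 2.0 endpoint directly.
import requests

API_KEY = "YOUR-NVD-API-KEY"  # placeholder

resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    headers={"apiKey": API_KEY},
    params={"resultsPerPage": 1},
    timeout=30,
)
print("HTTP", resp.status_code)  # 200 here means the key and endpoint are fine
if resp.ok:
    print("totalResults:", resp.json().get("totalResults"))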