Tenable.io compliance integration job
3 weeks ago
We have a Tenable VR integration running ~15–20 hrs (same behavior even before Zurich).
ServiceNow confirmed the delay is due to queue time. We have already increased:
- Data sources (4 → 20)
- Import jobs (4 → 20)
- Nodes in PROD
Still seeing high runtime.
Any suggestions to reduce queue backlog or improve performance?
3 weeks ago
Try these:
Business rules & async processing
The import itself may be fast, but transform maps and async business rules firing on sn_vul_vulnerable_item (or related tables) during each insert/update can massively inflate total time. Check whether any custom business rules, flows, or notifications run on insert/update of the vulnerability tables. Disable or defer anything non-critical during the import window, or gate them with a condition that skips execution during bulk loads.
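One way to implement that gate, sketched outside the platform: the business rule's condition reads a flag the import job sets before the run and clears afterwards. The property name x_vr.bulk_import_active is hypothetical, and the stub below stands in for the platform's gs.getProperty() call.

```javascript
// Sketch of a bulk-load gate for a non-critical business rule.
// Property name 'x_vr.bulk_import_active' is an assumption, not a
// platform default; in ServiceNow this check would sit in the rule's
// Condition field via gs.getProperty().
function shouldRunBusinessRule(getProperty) {
  // Run only when no bulk import is in flight.
  return getProperty('x_vr.bulk_import_active') !== 'true';
}

// Stand-in for gs.getProperty() so the sketch runs outside the platform:
var props = { 'x_vr.bulk_import_active': 'true' };
var getProperty = function (name) { return props[name]; };

console.log(shouldRunBusinessRule(getProperty)); // false: skip during bulk load
```

The import job flips the flag to 'true' at the start of the run and back afterwards, so the rule's behavior is unchanged outside the import window.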
Coalesce / Matching logic
VR import uses coalesce fields to match incoming records to existing ones. If coalesce is hitting unindexed fields or doing expensive lookups, each row gets slow at scale. Verify that all coalesce fields on the transform map have database indexes. You can check sys_db_index or use the table index inspector.
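To see why this matters at scale, here is a conceptual sketch (plain JavaScript, not platform code, field name qid illustrative): without an index, every incoming row forces a full scan of the existing records; with one, each coalesce lookup is constant time.

```javascript
// Unindexed coalesce: O(n) scan of existing records per incoming row.
function coalesceUnindexed(existing, key, value) {
  return existing.find(function (r) { return r[key] === value; }) || null;
}

// Indexed coalesce: one pass to build the index, then O(1) per row.
function buildIndex(existing, key) {
  var idx = new Map();
  for (var i = 0; i < existing.length; i++) {
    idx.set(existing[i][key], existing[i]);
  }
  return idx;
}

var existing = [
  { qid: 'QID-1', state: 'Open' },
  { qid: 'QID-2', state: 'Closed' },
];
var byQid = buildIndex(existing, 'qid');
console.log(coalesceUnindexed(existing, 'qid', 'QID-2') === byQid.get('QID-2')); // true
```

At a few hundred rows the difference is invisible; at hundreds of thousands of vulnerable items, the per-row scan dominates the run.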
Scheduled Job stacking
If your import overlaps with other heavy scheduled jobs (CMDB Health, Discovery, PA aggregations, mid-server queues), they compete for the same semaphore and worker threads. Stagger your VR import to a low-activity window, and check sys_trigger for concurrent heavy jobs during that window.
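A quick way to sanity-check a candidate window: treat each heavy job as a time interval and verify the planned VR import window collides with none of them. Job names and times below are illustrative, not pulled from any instance.

```javascript
// Two windows overlap when each starts before the other ends.
// Times are minutes from midnight, values illustrative.
function overlaps(a, b) {
  return a.start < b.end && b.start < a.end;
}

var heavyJobs = [
  { name: 'Discovery', start: 0, end: 240 },     // 00:00-04:00
  { name: 'CMDB Health', start: 180, end: 360 }, // 03:00-06:00
];
var vrImport = { start: 420, end: 900 };         // 07:00-15:00

var collisions = heavyJobs.filter(function (j) { return overlaps(j, vrImport); });
console.log(collisions.length); // 0: window is clear
```

Feed it the actual windows you see in sys_trigger to find a genuinely quiet slot.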
Import Set table bloat
If old import set rows aren't being cleaned up, the staging tables grow and slow down each subsequent run. Confirm the "Clean import set table" scheduled job is active and running frequently enough. Also check the row count on your VR import set tables directly.
MID Server tuning
Since Tenable pulls often go through a MID server, check whether the MID is the actual bottleneck rather than the instance. Look at MID server memory/CPU during the run, max.threads property on the MID, and whether the MID is shared with Discovery or other heavy integrations. A dedicated MID for VR can help significantly.
Chunking & API pagination
On the Tenable side, check how the integration is pulling data. If it's pulling the full export every run rather than doing delta/incremental pulls, you're reimporting the entire vulnerability dataset each time. Confirm last_run_datetime or the equivalent delta filter is working correctly so you're only pulling net-new or updated findings.
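The delta-pull idea in miniature (function names and the since parameter are illustrative, not the actual connector API): request only findings modified after a stored watermark, and advance the watermark only after a successful run.

```javascript
// No watermark yet -> full export; otherwise a delta filter.
function buildExportFilter(lastRunEpoch) {
  return lastRunEpoch == null ? {} : { since: lastRunEpoch };
}

// Advance the watermark only after a successful run, and to the time the
// run STARTED, so findings modified mid-run are picked up next time.
function advanceWatermark(state, runStartEpoch) {
  return Object.assign({}, state, { lastRun: runStartEpoch });
}

var state = { lastRun: null };
console.log(buildExportFilter(state.lastRun)); // {} -> full export first time
state = advanceWatermark(state, 1700000000);
console.log(buildExportFilter(state.lastRun)); // { since: 1700000000 }
```

If the watermark never advances (a common failure mode after errors or timezone mismatches), every run silently becomes a full export, which matches a 15-20 hour runtime far better than a true delta would.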
Semaphore and worker thread limits
Even with 20 import jobs configured, the instance may not actually run 20 in parallel if semaphore limits are lower. Check these properties:
- glide.scheduled_worker.threads: controls parallel worker capacity
- glide.db.pooler.connections: database connection pool
- import.max.workers: max concurrent import workers
If these are still at defaults, your 20 import jobs may be queuing behind a 4–6 thread ceiling anyway.
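The ceiling effect in one line (thread counts illustrative): actual parallelism is the minimum of configured import jobs and available worker threads, so raising the job count alone changes nothing.

```javascript
// Configured jobs beyond the worker-thread ceiling just sit in the queue.
function effectiveParallelism(configuredJobs, workerThreads) {
  return Math.min(configuredJobs, workerThreads);
}

console.log(effectiveParallelism(20, 4)); // 4: the other 16 jobs wait
```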
Quick diagnostic steps
- Pull sys_import_set_run records for a recent run and look at the gap between the "queued" and "started" timestamps to confirm it is genuinely queue wait rather than transform time.
- Check syslog_transaction for the slowest operations during the import window; this tells you whether the import, the transforms, or downstream async processing is eating the clock.
- Run a test import with business rules disabled on the target table (in sub-prod first) to isolate whether the raw import is fast and the overhead is in post-processing.
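The first diagnostic can be reduced to simple arithmetic on exported run records. Field names below (sys_created_on, started, completed) are illustrative stand-ins for whatever your export of sys_import_set_run actually contains.

```javascript
// Split total elapsed time into queue wait (created -> started) and
// processing time (started -> completed). Timestamps are ISO strings.
function analyzeRun(run) {
  var queued = Date.parse(run.sys_created_on);
  var started = Date.parse(run.started);
  var completed = Date.parse(run.completed);
  return {
    queueWaitMs: started - queued,
    processingMs: completed - started,
  };
}

var sample = {
  sys_created_on: '2025-01-10T01:00:00Z',
  started: '2025-01-10T09:30:00Z',   // 8.5 h sitting in the queue
  completed: '2025-01-10T11:00:00Z', // 1.5 h actually transforming
};
console.log(analyzeRun(sample));
```

If queueWaitMs dominates (as in the sample), the bottleneck is scheduler capacity, not transform speed, and the semaphore/worker properties above are where to look.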
If ServiceNow support confirmed it's queue time specifically, the semaphore/worker thread limits are the most likely culprit — your parallelism config says 20 but the engine may still be throttling to a much lower number.
