
Tenable.io compliance integration job

usmanalisye
Tera Contributor

We have a Tenable VR integration whose runs take ~15–20 hours (the behavior was the same even before the Zurich upgrade).

ServiceNow confirmed the delay is due to queue time. We have already increased:

  • Data sources (4 → 20)
  • Import jobs (4 → 20)
  • Nodes in PROD

We are still seeing high runtimes despite these changes.

Any suggestions to reduce the queue backlog or improve performance?


Naveen20
ServiceNow Employee

Try these:

Business rule & async overhead

The import itself may be fast, but transform maps and async business rules firing on sn_vul_vulnerable_item (or related tables) during each insert/update can massively inflate total time. Check whether any custom business rules, flows, or notifications run on insert/update of the vulnerability tables. Disable or defer anything non-critical during the import window, or gate them with a condition that skips execution during bulk loads, as in the sketch below.
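A minimal sketch of the gating pattern, assuming a custom system property as the flag (the property name x_tenable.vr.bulk_load_active is hypothetical; use one in your own scope):

```javascript
// In the job that kicks off the bulk import: raise the flag first,
// clear it when the import finishes. Note that writing sys_properties
// flushes the instance cache, so toggle once per run, never per row.
gs.setProperty('x_tenable.vr.bulk_load_active', 'true');

// ... bulk import runs here ...

gs.setProperty('x_tenable.vr.bulk_load_active', 'false');
```

Then put a condition like this on each non-critical business rule so it skips execution while the flag is up:

```javascript
gs.getProperty('x_tenable.vr.bulk_load_active', 'false') !== 'true'
```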

Coalesce / Matching logic

VR import uses coalesce fields to match incoming records to existing ones. If the coalesce is hitting unindexed fields or doing expensive lookups, the per-row cost balloons at scale. Verify that all coalesce fields on the transform map have database indexes. You can check sys_db_index or use the table index inspector.
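To get a feel for the per-row cost, you can time a single coalesce-style lookup from a background script. A sketch, assuming vulnerability and cmdb_ci are among your coalesce fields (substitute the actual fields and sys_ids from your transform map):

```javascript
// Time one lookup that mirrors the transform map's coalesce query.
// Multiply the result by your row count: a few ms per row across
// hundreds of thousands of rows adds up to hours.
var start = new GlideDateTime().getNumericValue();

var gr = new GlideRecord('sn_vul_vulnerable_item');
gr.addQuery('vulnerability', 'PUT_A_REAL_SYS_ID_HERE'); // example coalesce field
gr.addQuery('cmdb_ci', 'PUT_A_REAL_SYS_ID_HERE');       // example coalesce field
gr.setLimit(1);
gr.query();

var elapsedMs = new GlideDateTime().getNumericValue() - start;
gs.info('Coalesce-style lookup took ' + elapsedMs + ' ms; matched=' + gr.hasNext());
```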

Scheduled Job stacking

If your import overlaps with other heavy scheduled jobs (CMDB Health, Discovery, PA aggregations, mid-server queues), they compete for the same semaphore and worker threads. Stagger your VR import to a low-activity window, and check sys_trigger for concurrent heavy jobs during that window.
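A quick way to see what else is due in your window, from a background script (adjust the boundaries to your actual run times):

```javascript
// List scheduled jobs due during the import window so you can spot
// heavy neighbors (Discovery schedules, PA collection jobs, etc.).
var gr = new GlideRecord('sys_trigger');
gr.addQuery('next_action', '>=', '2026-02-01 02:00:00'); // example window start
gr.addQuery('next_action', '<=', '2026-02-01 06:00:00'); // example window end
gr.orderBy('next_action');
gr.query();
while (gr.next()) {
    gs.info(gr.getValue('next_action') + '  ' + gr.getValue('name'));
}
```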

Import Set table bloat

If old import set rows aren't being cleaned up, the staging tables grow and slow down each subsequent run. Confirm the "Clean import set table" scheduled job is active and running frequently enough. Also check the row count on your VR import set tables directly.
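Checking the row counts is a short background script. Since every staging table extends sys_import_set_row, a grouped aggregate shows the bloat per table:

```javascript
// Row count per import set staging table (sys_class_name is the
// concrete staging table each row belongs to).
var ga = new GlideAggregate('sys_import_set_row');
ga.addAggregate('COUNT');
ga.groupBy('sys_class_name');
ga.query();
while (ga.next()) {
    gs.info(ga.getValue('sys_class_name') + ': ' + ga.getAggregate('COUNT'));
}
```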

MID Server tuning

Since Tenable pulls often go through a MID server, check whether the MID is the actual bottleneck rather than the instance. Look at MID server memory/CPU during the run, the threads.max parameter on the MID, and whether the MID is shared with Discovery or other heavy integrations. A dedicated MID for VR can help significantly.
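One quick signal from the instance side is the ECC queue backlog for that MID. A sketch, replacing TENABLE_MID with your MID server's name:

```javascript
// Count ECC output messages still waiting for the MID to pick up.
// A persistently large 'ready' backlog points at the MID, not the instance.
var ga = new GlideAggregate('ecc_queue');
ga.addQuery('agent', 'mid.server.TENABLE_MID'); // your MID's agent name
ga.addQuery('queue', 'output');                 // instance -> MID direction
ga.addQuery('state', 'ready');
ga.addAggregate('COUNT');
ga.query();
if (ga.next()) {
    gs.info('ECC messages waiting on this MID: ' + ga.getAggregate('COUNT'));
}
```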

Chunking & API pagination

On the Tenable side, check how the integration is pulling data. If it's pulling the full export every run rather than doing delta/incremental pulls, you're reimporting the entire vulnerability dataset each time. Confirm last_run_datetime or the equivalent delta filter is working correctly so you're only pulling net-new or updated findings.
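Where that watermark lives is specific to the integration version, so this is only a sketch of the check, assuming the marker is readable as a property (the property name below is hypothetical; substitute wherever your integration stores its last-run value):

```javascript
// Confirm the delta watermark exists and advances between runs.
// If it's empty or static, every run is likely a full export.
var last = gs.getProperty('x_tenable.vr.last_run_datetime', '');
if (!last) {
    gs.warn('No delta watermark found; the integration may be pulling full exports every run.');
} else {
    gs.info('Current delta watermark: ' + last);
}
```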

Semaphore and worker thread limits

Even with 20 import jobs configured, the instance may not actually run 20 in parallel if semaphore limits are lower. Check these properties:

  • glide.scheduled_worker.threads — controls parallel worker capacity
  • glide.db.pooler.connections — database connection pool
  • import.max.workers — max concurrent import workers

If these are still at defaults, your 20 import jobs may be queuing behind a 4–6 thread ceiling anyway.
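A quick way to read the effective values from a background script (a '<default>' result means the property record doesn't exist on the instance and an internal default applies):

```javascript
// Print current values of the parallelism-related properties above.
var props = [
    'glide.scheduled_worker.threads',
    'glide.db.pooler.connections',
    'import.max.workers'
];
for (var i = 0; i < props.length; i++) {
    gs.info(props[i] + ' = ' + gs.getProperty(props[i], '<default>'));
}
```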

Quick diagnostic steps

  1. Pull sys_import_set_run records for a recent run — look at the gap between "queued" and "started" timestamps to confirm it's genuinely queue wait time vs. transform time.
  2. Check syslog_transaction for the slowest operations during the import window — this tells you whether it's the import, transforms, or downstream async processing eating the clock (see the sketch after this list).
  3. Run a test import with business rules disabled on the target table (in sub-prod first) to isolate whether the raw import is fast and the overhead is in post-processing.
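For step 2, a background-script sketch (verify the field names against your instance's transaction log, and adjust the window to your run times):

```javascript
// Ten slowest transactions during the import window.
var gr = new GlideRecord('syslog_transaction');
gr.addQuery('sys_created_on', '>=', '2026-02-01 02:00:00'); // example window start
gr.addQuery('sys_created_on', '<=', '2026-02-01 06:00:00'); // example window end
gr.orderByDesc('response_time');
gr.setLimit(10);
gr.query();
while (gr.next()) {
    gs.info(gr.getValue('response_time') + ' ms  ' + gr.getValue('url'));
}
```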

If ServiceNow support confirmed it's queue time specifically, the semaphore/worker thread limits are the most likely culprit — your parallelism config says 20 but the engine may still be throttling to a much lower number.