
Creating Software Installation Records

RahulY00040
Tera Contributor

Hello everyone,

 

I’ve come across an interesting integration scenario and would like your input. We are receiving device details along with associated software installation data from a source system. Each device can have 100+ software installations. This data is being ingested into a staging table via the integration.

Our objective is to:

  1. Identify or match the CI using device attributes such as Serial Number, FQDN, and related identifiers.
  2. Once the CI is successfully identified, create the corresponding Software Installation records in the cmdb_sam_sw_install table.


Since we will have one row per software installation, I want to avoid repeated IRE calls to retrieve the device sys_id for each record. Does anyone have suggestions on how to handle this more efficiently?

My current approach is as follows:

  1. Configure the Import Set to run asynchronously.
  2. Execute a Scheduled Job first to:
    • Group records based on device identifiers such as Serial Number and FQDN.
    • Make a single IRE call per device to retrieve the corresponding sys_id.
    • Update that sys_id on all related rows in the staging table.
  3. Once the above step is completed, trigger the Transform Job to map and insert the data into the target table.
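The grouping in step 2 can be sketched outside the platform in plain JavaScript, with simple row objects standing in for staging-table records (field names `serial` and `fqdn` are illustrative, not the real staging columns). The point is that one IRE call per group replaces one per row:

```javascript
// Group staging rows by device identity so the IRE is called once per
// unique device rather than once per software-installation row.
function groupByDevice(rows) {
    var groups = {};
    rows.forEach(function (row) {
        // Composite key from the device identifiers available on the row.
        var key = (row.serial || '') + '|' + (row.fqdn || '');
        if (!groups[key]) {
            groups[key] = [];
        }
        groups[key].push(row);
    });
    return groups;
}

var rows = [
    { serial: 'SN1', fqdn: 'a.example.com', sw: 'Chrome' },
    { serial: 'SN1', fqdn: 'a.example.com', sw: 'Excel' },
    { serial: 'SN2', fqdn: 'b.example.com', sw: 'Chrome' }
];

var groups = groupByDevice(rows);
// One IRE lookup per key in `groups`: 2 lookups instead of 3 rows.
console.log(Object.keys(groups).length); // 2
```

In the real scheduled job, each key would drive a single IRE (or GlideRecord) lookup, and the resulting sys_id would be written back to every row in that key's group.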

I’d appreciate your review and any feedback or alternative suggestions.

 

Thanks in advance,

R


Naveen20
ServiceNow Employee

Try this approach

Pre-Resolve CIs → Then Transform

  1. Import data into the staging table with async transform disabled (don't auto-transform).

  2. Run a pre-processing Script (Scheduled Job or Fix Script) that:

    • Groups staging rows by device identifier (Serial Number / FQDN).
    • Performs one GlideRecord lookup per unique device against cmdb_ci_computer (use IRE only if you need CI auto-creation).
    • Stamps the resolved sys_id into a u_resolved_ci field on all matching staging rows.
    • Marks each row's u_resolution_status as resolved or not_found.
  3. Trigger the Transform Map which:

    • Filters on u_resolution_status = resolved only.
    • Maps u_resolved_ci to installed_on in cmdb_sam_sw_install directly — no lookups needed during transform.
    • Rows marked not_found are skipped and available for review/retry.

 If a device has 150 software rows, you make 1 lookup instead of 150. The transform becomes a pure insert operation with zero redundant CI resolution, and unmatched records are cleanly separated for troubleshooting.
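The stamp-and-filter step can be illustrated outside the platform in plain JavaScript. Here `lookupCi` is a placeholder for the real per-device GlideRecord/IRE lookup, and the `u_resolved_ci` / `u_resolution_status` fields are the staging columns proposed above:

```javascript
// Stamp each staging row with the resolved CI sys_id (or mark it not_found)
// so the transform can filter on status and map the field directly.
function resolveRows(rows, lookupCi) {
    var cache = {}; // one lookup per unique device key
    rows.forEach(function (row) {
        var key = row.serial || row.fqdn || '';
        if (!(key in cache)) {
            cache[key] = lookupCi(row.serial, row.fqdn); // sys_id or null
        }
        row.u_resolved_ci = cache[key];
        row.u_resolution_status = cache[key] ? 'resolved' : 'not_found';
    });
    return rows;
}

// Fake lookup for the example: only SN1 exists in the CMDB.
var lookups = 0;
function fakeLookup(serial) {
    lookups++;
    return serial === 'SN1' ? 'abc123sysid' : null;
}

var rows = resolveRows(
    [{ serial: 'SN1' }, { serial: 'SN1' }, { serial: 'SN2' }],
    fakeLookup
);
console.log(lookups);                     // 2 lookups for 3 rows
console.log(rows[0].u_resolution_status); // resolved
console.log(rows[2].u_resolution_status); // not_found
```

The transform then only needs the `u_resolution_status = resolved` filter plus a direct field map; no per-row query remains.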

Thanks, Naveen, for your response. This aligns well with what I had in mind.

I’m currently using the InsertMultiple method of the Import Set APIs, which creates asynchronous import sets. However, I’ve noticed that the import sets created through this method are transformed within a few seconds. After digging deeper, I found that these import sets are automatically grouped under multi_sets, and because of that, they bypass the asynchronous settings and are processed almost immediately after creation.

As a result, my approach of controlling the transformation using a scheduled job isn’t working as expected. Could you please take a look and share your thoughts on this?

Thanks,
Rahul

 

Naveen20
ServiceNow Employee

This is a known behavior — insertMultiple groups rows under sys_import_set_multi, which auto-triggers the transform almost immediately, making it impossible to inject a pre-processing step between import and transform.

Fix: Move CI resolution INTO the transform using map.scratchpad as an in-memory cache.

Instead of fighting the multi-set auto-transform, work with it. Use the transform map's onBefore script to resolve CIs once per device and cache the result for all subsequent rows:

// onBefore Transform Script
(function runTransformScript(source, target, map, log, isUpdate) {

    var serial = source.u_serial_number.toString();
    var fqdn = source.u_fqdn.toString();
    var cacheKey = serial || fqdn;

    // Skip rows that carry no usable device identifier at all
    if (!cacheKey) {
        ignore = true;
        return;
    }

    // Initialize cache on first row
    if (!map.scratchpad.deviceCache) {
        map.scratchpad.deviceCache = {};
    }

    // Check cache first
    if (map.scratchpad.deviceCache.hasOwnProperty(cacheKey)) {
        var cachedId = map.scratchpad.deviceCache[cacheKey];
        if (cachedId) {
            target.installed_on = cachedId;
        } else {
            ignore = true; // device was looked up before but not found
        }
        return;
    }

    // First time seeing this device — resolve once
    var ciSysId = _resolveCI(serial, fqdn);
    map.scratchpad.deviceCache[cacheKey] = ciSysId;

    if (ciSysId) {
        target.installed_on = ciSysId;
    } else {
        log.warn('CI not found for serial: ' + serial + ', fqdn: ' + fqdn);
        ignore = true; // skip row, no orphaned sw_install records
    }

    function _resolveCI(serial, fqdn) {
        if (serial) {
            var ci = new GlideRecord('cmdb_ci_computer');
            ci.addQuery('serial_number', serial);
            ci.setLimit(1);
            ci.query();
            if (ci.next()) return ci.getUniqueValue();
        }
        if (fqdn) {
            var ciFqdn = new GlideRecord('cmdb_ci_computer');
            ciFqdn.addQuery('fqdn', fqdn);
            ciFqdn.setLimit(1);
            ciFqdn.query();
            if (ciFqdn.next()) return ciFqdn.getUniqueValue();
        }
        return null;
    }

})(source, target, map, log, isUpdate);

Why this works: map.scratchpad persists across all rows within a single transform run. So if a device has 150 software rows, the GlideRecord lookup happens only on the first row — the remaining 149 hit the in-memory cache. Unresolved devices are cached as null so even the "not found" path avoids repeated queries. Rows with no CI match are skipped via ignore = true.
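The cache behavior described above can be checked outside the platform with a small mock, where a plain object stands in for map.scratchpad and a counter stands in for the GlideRecord query:

```javascript
// Mock of the scratchpad cache: one object persists across all "rows",
// and the expensive lookup runs only on a cache miss.
var scratchpad = { deviceCache: {} };
var queries = 0;

function resolve(cacheKey, doLookup) {
    if (scratchpad.deviceCache.hasOwnProperty(cacheKey)) {
        return scratchpad.deviceCache[cacheKey]; // cache hit, no query
    }
    queries++;
    var sysId = doLookup(cacheKey); // sys_id string, or null if not found
    scratchpad.deviceCache[cacheKey] = sysId; // null is cached too
    return sysId;
}

// 150 rows for one known device, 10 rows for an unknown one.
function fakeLookup(key) { return key === 'SN1' ? 'abc123' : null; }
for (var i = 0; i < 150; i++) resolve('SN1', fakeLookup);
for (var j = 0; j < 10; j++) resolve('UNKNOWN', fakeLookup);

console.log(queries); // 2 — one per unique device, even on the not-found path
```

Note that the cache only grows by one small entry per unique device, so memory stays proportional to the device count, not the row count.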

Thanks for your quick and thorough response. I will review and go through this approach. Meanwhile, do we see any impact on the platform due to this caching? We are expecting to run this transform on 15,000 devices × 100 SW installations each, so I am going to check whether there is any platform-wide impact.

 

Many thanks Naveen,

-Rahul