ram_mandadapu
ServiceNow Employee

The Duplicate Row Processor identifies data that has not changed since prior imports, limiting the number of rows written to the import set table. The processor computes a configurable hash for each import row, then uses that hash to identify unchanged rows in future imports. This helps when the source APIs cannot report which records have changed since the last import.

Service Graph Connectors provide a step in guided setup to turn on the Duplicate Row Processor. Customers can also provide a comma-delimited list of fields to ignore; ignored fields are excluded when calculating the hash of an import set row.

Important notes:

  • Each row should include columns that uniquely identify it, for example device ID, software name, version, and publisher.
  • If the calculated hash of every row exactly matches the previous import, the current run won’t insert any records into the import set table.
  • Customers can turn off the Duplicate Row Processor for any Service Graph connector through guided setup or the sn_cmdb_int_util_duplicate_row_rule table.
  • The Duplicate Row Processor won’t work with the nested payload/data in a single column feature.

Example Rule:

[Screenshot: example duplicate row rule record]


Example script to use duplicate row processor:

// Stage each source row; the processor skips rows whose hash matched the prior import.
var inserter = new sn_cmdb_int_util.DuplicateRowProcessor("<data source sys_id>", import_set_table);
while (getRow()) {
    inserter.insert(row);
}
inserter.close();

