3 weeks ago
Hi All,
I am working on configuring the Service Graph Connector for Cisco Meraki in ServiceNow and need some guidance regarding duplicate records being created in the Meraki Devices table after scheduled job execution.
Import Set:
✅ Configuration Setup Completed
1. Created two API Key Credentials (one for each dashboard)
2. Created two HTTP Connections (Global and China endpoints)
3. Configured System Property with Meraki Organization IDs (comma separated)
4. Configured Credential & Connection Alias
5. Activated Scheduled Jobs:
• SG-Meraki Credential Affinity Sync
• SG-Meraki Devices (Data Import Job)
-> Observations During Testing
After the first job run:
MerakiDevices:
• Table: x_caci_sg_meraki_merakidevices
• Count: 3988
After the second job run:
MerakiDevices:
• Table: x_caci_sg_meraki_merakidevices
• Count: 7956
Note: Duplicates are created only in the MerakiDevices table; the other tables (cmdb_ci, network, organization, Meraki Custom CI Fields) remain the same.
Thanks in advance for your help and guidance.
3 weeks ago
Hi @Aman_07 ,
The 7-day retention period for staging table data applies to all tables that extend sys_import_set_row. That means it covers connector staging tables like this one, not only the standard Import Set tables.
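For reference, you can confirm which tables fall under this retention by listing the tables that extend sys_import_set_row. A quick background-script sketch (ServiceNow server-side script; sys_db_object and super_class are standard platform fields):

```javascript
// List tables that directly extend sys_import_set_row and therefore fall
// under the import set staging-table retention/cleanup behavior.
// Note: this matches direct children only, not deeper hierarchy levels.
var gr = new GlideRecord('sys_db_object');
gr.addQuery('super_class.name', 'sys_import_set_row');
gr.query();
while (gr.next()) {
    gs.info(gr.getValue('name')); // e.g. x_caci_sg_meraki_merakidevices
}
```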
If this answers your question, please mark it as helpful and accept the solution for better community visibility.
Thanks,
Vishnu
3 weeks ago
Hi @Aman_07 ,
The MerakiDevices table (x_caci_sg_meraki_merakidevices) is a staging table, so the presence of duplicate records is expected behavior and not a concern.
A staging table is an intermediate holding area where data is first collected from an external source (in this case, Meraki) before it is validated, transformed, reconciled, or moved into the target tables such as CMDB or custom production tables.
Because staging tables:
• Store raw data
• Can receive multiple imports or syncs
duplicates can naturally occur. These duplicates are later handled during downstream processes such as:
• Data transformation
• Identification and Reconciliation
So, duplicates in this table are by design and do not indicate a data issue.
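As a quick way to see this on the instance, here is a background-script sketch that counts how many staging rows share the same serial number (the column name "serial" is an assumption; check the actual columns on the staging table):

```javascript
// Count rows per serial number in the Meraki staging table, to confirm
// the duplicates are one-copy-per-import rather than a data problem.
var ga = new GlideAggregate('x_caci_sg_meraki_merakidevices');
ga.addAggregate('COUNT');
ga.groupBy('serial'); // assumed column name for the device serial
ga.query();
while (ga.next()) {
    gs.info(ga.getValue('serial') + ': ' + ga.getAggregate('COUNT'));
}
```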
Hope this helps. Please mark it as helpful and accept the solution.
Thanks,
Vishnu
3 weeks ago
Hi @Vishnu-K
Thank you for the clarification regarding the MerakiDevices (x_caci_sg_meraki_merakidevices) table and its behavior as a staging table.
Understood that duplicates are expected in staging since the data is raw and gets validated or reconciled downstream.
I had one more question:
What are the recommended and best practice approaches in ServiceNow to handle such duplication during the downstream processes (e.g., while transforming or reconciling the data)?
Thanks again for the guidance.
Regards,
Aman
3 weeks ago
Hi @Aman_07 ,
When transforming data from a staging table into the CMDB, there are two common approaches that can be used: using a Transform Map or using ETL (Extract, Transform, and Load).
ETL (Extract, Transform, and Load):
In an ETL-based approach, data is loaded from the staging table directly into the target CMDB class. Identification is handled using the Identification Rules defined for that class.
For example, if data is being inserted into the cmdb_ci_computer table, the Computer class inherits out-of-the-box Identification Rules from its parent class, cmdb_ci_hardware. These rules define the identifiers such as serial number, name, etc., which are used to uniquely identify a CI.
If 10 records are received in the staging table and each record has a unique serial number, the identification rules will treat each record as a unique CI. As a result, 10 new records will be created in the cmdb_ci_computer table.
In this approach, identification happens implicitly based on the class-level identification rules. There is no explicit call made to the Identification and Reconciliation Engine (IRE). Deduplication and updates rely purely on the configured identification rules.
Transform Map approach using IRE:
When using a Transform Map, the CMDB best practice is to explicitly invoke the Identification and Reconciliation Engine (IRE) during the transform process.
This is done by calling the IRE API using IdentificationAndReconciliationScriptableApi.createOrUpdateCI(payload, discovery_source).
Here, the payload contains the CI attributes and identifier values, and the discovery source specifies the origin of the data. IRE evaluates the incoming data against the configured Identification Rules and based on the evaluation, it either creates a new CI or updates an existing one.
If the same 10 records with unique serial numbers are processed through the Transform Map, IRE will identify each record as a distinct CI and again 10 records will be created in the cmdb_ci_computer table.
The key difference is that with Transform Maps, identification and reconciliation are explicitly enforced, and reconciliation rules and data source precedence are fully respected. This is the recommended and supported approach for inserting or updating CMDB data in ServiceNow, as it provides better governance and long-term CMDB data quality.
If you want to know more about IRE, go through the ServiceNow product documentation.
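As a minimal sketch of the Transform Map + IRE approach described above (the class name, discovery source name, and attribute values are hypothetical; recent releases expose this call as sn_cmdb.IdentificationEngine.createOrUpdateCI, which takes the discovery source and a JSON payload, so adjust to your instance version):

```javascript
// Build an IRE input payload: an "items" array where each item names the
// target CMDB class and the attribute values used for identification.
var payload = {
    items: [{
        className: 'cmdb_ci_computer',        // hypothetical target class
        values: {
            name: 'meraki-sw-01',             // hypothetical device name
            serial_number: 'Q2XX-XXXX-XXXX'   // identifier per the hardware rule
        }
    }]
};

// In a ServiceNow server-side script (e.g. a Transform Map onAfter script),
// hand the payload to IRE for the given discovery source. The guard keeps
// this sketch harmless outside the platform.
if (typeof sn_cmdb !== 'undefined') {
    var output = sn_cmdb.IdentificationEngine.createOrUpdateCI('SG-Meraki', JSON.stringify(payload));
    gs.info(output); // JSON result describing the created/updated CIs
}
```

Because the identifier values live in the payload, IRE can match the incoming record against existing CIs and update instead of insert, which is what prevents duplicates in the CMDB itself.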
Hope this helps. Please mark it as helpful and accept the solution.
Thanks,
Vishnu
3 weeks ago - last edited 3 weeks ago
Hi @Vishnu-K
What about a Scheduled Script Execution?
Approach: create a scheduled job that deletes staging records older than 30 days:
var days = 30; // retention period in days
var gr = new GlideRecord('x_caci_sg_meraki_merakidevices');
gr.addQuery('sys_created_on', '<', gs.daysAgoStart(days));
gr.query();
gs.info('Deleting old Meraki staging records...');
while (gr.next()) {
    gr.deleteRecord();
}
My issue is only about duplicates in the staging records; no duplicates are being created in the CMDB.
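As a side note, the same cleanup can be written without iterating row by row, using GlideRecord.deleteMultiple() (a sketch, assuming the default 7-day platform cleanup is not sufficient for this table):

```javascript
// Bulk-delete staging rows older than the retention period in one call,
// instead of looping with query()/next()/deleteRecord().
var days = 30; // retention period in days
var gr = new GlideRecord('x_caci_sg_meraki_merakidevices');
gr.addQuery('sys_created_on', '<', gs.daysAgoStart(days));
gr.deleteMultiple();
```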

