Deletion of 3.0M records from resource table

VenkataSaiM
Tera Contributor

I'm working on cleaning up 2.5M records from a CMDB table and have tried multiple approaches such as a background script, Data Manager policy, the table cleanup option, and direct delete from the table list view, but each has performance impacts or takes too long. I also tested using an archive rule, but it runs on a 1-hour interval; I tried to change the interval, but it impacts other archive rules. So my question is: what is the most efficient and safe way to delete such a large volume (3.0M) of CMDB records without impacting instance performance or other scheduled jobs, within a short time?

1 ACCEPTED SOLUTION

AJ-TechTrek
Giga Sage

Hi @VenkataSaiM,

 

Below are some best practices that might help you.

 

1 – Key Principles Before Deletion
* Never bulk-delete CMDB data directly in one go on a production instance.
* Always delete in batches to avoid locking DB threads and causing node slowdowns.
* Maintain referential integrity (relationships, child tables, history).
* Consider archiving instead of hard deleting if regulatory/audit requirements exist.

 

2 – Best Practice Approach


Step 1 – Identify and Isolate the Dataset
* Use a saved filter or scripted query to target only the CIs you truly want to delete.
* Ensure they do not have active relationships (check cmdb_rel_ci table).
* Check for dependent records like:
* Tasks (task.cmdb_ci)
* Discovery status records
* Child CI classes
* Use cmdb_ci_delete API or CMDB Util methods for safer cleanup.
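
Before deleting anything, it helps to quantify the target set and its relationships. The minimal pre-check sketch below assumes the same cmdb_ci_computer class and install_status = 7 filter used in the delete examples later in this answer; substitute your own class and condition.

var targetCount = new GlideAggregate('cmdb_ci_computer'); // assumed example class
targetCount.addQuery('install_status', '7'); // assumed filter: retired CIs
targetCount.addAggregate('COUNT');
targetCount.query();
if (targetCount.next()) {
    gs.info('CIs matching the delete filter: ' + targetCount.getAggregate('COUNT'));
}

// Relationships where a target CI is parent or child; review or remove these first
var rel = new GlideRecord('cmdb_rel_ci');
rel.addQuery('parent.install_status', '7').addOrCondition('child.install_status', '7');
rel.query();
gs.info('Relationship records touching target CIs: ' + rel.getRowCount());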

 

Step 2 – Use Chunked Deletes (Scripted)

Instead of deleting millions in one transaction, batch them in chunks of, say, 5,000–10,000 per execution.
Example background script approach:

 

var gr = new GlideRecord('cmdb_ci_computer'); // Example class
gr.addQuery('install_status', '7'); // Retired or custom condition
gr.setLimit(5000); // Delete at most one chunk per execution
gr.query();

while (gr.next()) {
    gr.deleteRecord(); // Per-record delete runs business rules and cascade deletes
}

 

Tips:
* Run in off-peak hours.
* Repeat until all records are gone.
* Monitor node performance via Node Log Stats.
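
A repeatable version of the chunked script is sketched below; the chunk size and the number of chunks per run are illustrative assumptions and should be tuned in sub-production first (same assumed class and filter as above).

var CHUNK_SIZE = 5000;        // records per chunk (illustrative)
var MAX_CHUNKS_PER_RUN = 10;  // caps one run at roughly 50k deletes (illustrative)

for (var i = 0; i < MAX_CHUNKS_PER_RUN; i++) {
    var gr = new GlideRecord('cmdb_ci_computer'); // same assumed class as above
    gr.addQuery('install_status', '7');           // same assumed filter as above
    gr.setLimit(CHUNK_SIZE);
    gr.query();

    if (!gr.hasNext()) {
        gs.info('No more matching records to delete.');
        break;
    }

    var deleted = 0;
    while (gr.next()) {
        gr.deleteRecord();
        deleted++;
    }
    gs.info('Chunk ' + (i + 1) + ': deleted ' + deleted + ' records.');
}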

 

Step 3 – Use the deleteMultiple() API
If you can safely build a GlideRecord query:

 

var gr = new GlideRecord('cmdb_ci_computer');
gr.addQuery('install_status', '7');
gr.deleteMultiple();

 

This is faster than looping through deleteRecord() because it’s a bulk SQL operation, but be careful — it bypasses some business rules and triggers, so test first.
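
If a single deleteMultiple() across millions of rows is still too heavy for one transaction, one option (a sketch, not a prescription) is to window the query so each call only removes a slice, for example by creation date. The class, filter, and date range below are assumptions.

var gr = new GlideRecord('cmdb_ci_computer');               // same assumed class
gr.addQuery('install_status', '7');                         // same assumed filter
gr.addQuery('sys_created_on', '>=', '2019-01-01 00:00:00'); // assumed window start
gr.addQuery('sys_created_on', '<', '2019-02-01 00:00:00');  // assumed window end
gr.deleteMultiple();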

 

Step 4 – CMDB Data Manager Cleanup Policies
* If possible, use CMDB Data Manager to set retirement and cleanup rules.
* Set batch size and execution window.
* Schedule over multiple days to avoid instance slowdowns.

 

Step 5 – Offload to Sub-Production First
* Clone Prod → run the delete in sub-prod → measure runtime and performance impact.
* Adjust batch sizes based on results before doing in Prod.

 

Step 6 – If Archiving Is an Option
* Create a custom archive rule just for this table.
* Run in parallel with other archive rules by isolating it into a separate schedule.
* This preserves historical data but removes load from the CMDB.

 

3 – Additional Tips for Safe High-Volume Deletes
* Disable Index Updates Temporarily (if deleting a massive subset) — re-enable after.
* Disable auditing on the table during cleanup (saves DB writes).
* Use Maintenance Window to avoid conflicting with Discovery or IRE jobs.
* Delete relationships first, then the CI records — prevents orphans.
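
For the last point, a rough sketch of clearing relationships ahead of the CIs is below; it reuses the same assumed install_status = 7 filter and chunking idea from the earlier examples.

var rel = new GlideRecord('cmdb_rel_ci');
// Relationships whose parent or child matches the assumed delete filter
rel.addQuery('parent.install_status', '7').addOrCondition('child.install_status', '7');
rel.setLimit(5000); // chunk relationship deletes as well
rel.query();
while (rel.next()) {
    rel.deleteRecord();
}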

 

4 – Recommended Sequence
1. Identify CIs → confirm no dependencies
2. Delete related records from cmdb_rel_ci
3. Batch-delete CI records via chunked script or deleteMultiple()
4. Rebuild indexes after completion
5. Run CMDB Health to confirm no orphaned references
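
As a quick sanity check after the cleanup (complementing CMDB Health, not replacing it), a small sampling script like the sketch below can flag relationship records whose parent or child CI no longer exists; the 1,000-record sample size is an illustrative assumption.

var orphans = 0;
var rel = new GlideRecord('cmdb_rel_ci');
rel.setLimit(1000); // sample size (illustrative); widen once runtime is acceptable
rel.query();
while (rel.next()) {
    var parentExists = new GlideRecord('cmdb_ci').get(rel.getValue('parent'));
    var childExists = new GlideRecord('cmdb_ci').get(rel.getValue('child'));
    if (!parentExists || !childExists) {
        orphans++;
    }
}
gs.info('Orphaned relationship records in sample: ' + orphans);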

 

Please appreciate the efforts of community contributors by marking the appropriate response as Helpful or accepting it as the solution; this may help other community users find the correct solution in the future.
 

Thank You
AJ - TechTrek with AJ - ITOM Trainer
LinkedIn:- https://www.linkedin.com/in/ajay-kumar-66a91385/
YouTube:- https://www.youtube.com/@learnitomwithaj
Topmate:- https://topmate.io/aj_techtrekwithaj (Connect for 1-1 Session)
ServiceNow Community MVP 2025

 


14 REPLIES

JackieZhang
Tera Contributor

Try running fix scripts in the background.

Thanks for your response. I tried running the fix script in the background, but it is impacting the performance of the instance.

Hi @VenkataSaiM ,

 

Have you checked my answer about the batch approach?

 

Regards,

Nikhil Bajaj

Please appreciate my efforts, help, and support extended to you by clicking the "Accept as Solution" button under my answer. It will motivate me to help others as well.

Thanks for the suggestion, Nikhil. I tried using a similar approach, but at a safe batch size it takes too long for 3M records. Looking for a faster yet low impact method.