Deletion of 3.0M records from resource table

VenkataSaiM
Tera Contributor

I'm working on cleaning up roughly 3M records from a CMDB table and have tried multiple approaches (background script, Data Manager policy, the table cleanup option, direct deletion from the list view), but each has performance impacts or takes too long. I also tested an archive rule, but it runs on a one-hour interval, and changing that interval impacts the other archive rules. So my question is: what is the most efficient and safe way to delete such a large volume (3M) of CMDB records in a short time, without impacting instance performance or other scheduled jobs?

1 ACCEPTED SOLUTION

AJ-TechTrek
Giga Sage

Hi @VenkataSaiM .

 

Below are some best practices that should help you.

 

1 – Key Principles Before Deletion
* Never bulk-delete CMDB data directly in one go on a production instance.
* Always delete in batches to avoid locking DB threads and causing node slowdowns.
* Maintain referential integrity (relationships, child tables, history).
* Consider archiving instead of hard deleting if regulatory/audit requirements exist.

 

2 – Best Practice Approach


Step 1 – Identify and Isolate the Dataset
* Use a saved filter or scripted query to target only the CIs you truly want to delete.
* Ensure they do not have active relationships (check cmdb_rel_ci table).
* Check for dependent records like:
* Tasks (task.cmdb_ci)
* Discovery status records
* Child CI classes
* Use cmdb_ci_delete API or CMDB Util methods for safer cleanup.
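Before deleting anything, it can help to quantify how many target CIs are still referenced. As a minimal sketch (the class `cmdb_ci_computer` and the condition `install_status = 7` are illustrative; substitute your own dataset criteria):

```javascript
// Sketch: count relationship records that still point at the CIs you plan
// to delete, so you know how much cmdb_rel_ci cleanup is needed first.
var relCount = new GlideAggregate('cmdb_rel_ci');
relCount.addQuery('parent.sys_class_name', 'cmdb_ci_computer'); // illustrative class
relCount.addQuery('parent.install_status', '7');                // illustrative condition
relCount.addAggregate('COUNT');
relCount.query();
if (relCount.next()) {
    gs.info('Target CIs still referenced in cmdb_rel_ci (as parent): ' +
        relCount.getAggregate('COUNT'));
}
```

Run the same kind of count against `task` (on `cmdb_ci`) and any other dependent tables before starting the delete.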

 

Step 2 – Use Chunked Deletes (Scripted)

Instead of deleting millions of records in one transaction, batch them in chunks of, e.g., 5,000–10,000 per execution.
Example background script approach:

 

var gr = new GlideRecord('cmdb_ci_computer'); // Example class
gr.addQuery('install_status', '7');           // Retired, or your own condition
gr.setLimit(5000);                            // Batch size; tune per instance
gr.query();

while (gr.next()) {
    gr.deleteRecord(); // Runs business rules and cascade deletes per record
}

 

Tips:
* Run in off-peak hours.
* Repeat until all records are gone.
* Monitor node performance via Node Log Stats.
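To make the repetition safer, a time-boxed variant of the batch script can stop before it overruns a maintenance window, regardless of how far through the batch it got. A hedged sketch (same illustrative class and condition as above; the five-minute budget is an assumption you should tune):

```javascript
// Sketch: time-boxed batch delete for a Scheduled Script Execution.
// Stops after ~5 minutes even mid-batch, so the job never overruns
// its off-peak window; the next scheduled run picks up where it left off.
var DEADLINE_MS = 5 * 60 * 1000;          // assumed time budget per run
var start = new Date().getTime();

var gr = new GlideRecord('cmdb_ci_computer'); // illustrative class
gr.addQuery('install_status', '7');           // illustrative condition
gr.setLimit(5000);
gr.query();

var deleted = 0;
while (gr.next() && (new Date().getTime() - start) < DEADLINE_MS) {
    gr.deleteRecord();
    deleted++;
}
gs.info('CMDB cleanup run finished: deleted ' + deleted + ' records');
```

Once the query returns no rows, the job becomes a no-op and can be deactivated.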

 

Step 3 – Use the deleteMultiple() API
If you can safely build a GlideRecord query:

 

var gr = new GlideRecord('cmdb_ci_computer');
gr.addQuery('install_status', '7'); // Retired, or your own condition
gr.deleteMultiple();                // Bulk delete; test on sub-production first

 

This is faster than looping through deleteRecord() because the platform can issue bulk SQL operations, but be careful: it can bypass some business rules and engines, so test it on a sub-production instance first.

 

Step 4 – CMDB Data Manager Cleanup Policies
* If possible, use CMDB Data Manager to set retirement and cleanup rules.
* Set batch size and execution window.
* Schedule over multiple days to avoid instance slowdowns.

 

Step 5 – Offload to Sub-Production First
* Clone Prod → run the delete in sub-prod → measure runtime and performance impact.
* Adjust batch sizes based on results before doing in Prod.

 

Step 6 – If Archiving Is an Option
* Create a custom archive rule just for this table.
* Run in parallel with other archive rules by isolating it into a separate schedule.
* This preserves historical data but removes load from the CMDB.

 

3 – Additional Tips for Safe High-Volume Deletes
* Disable Index Updates Temporarily (if deleting a massive subset) — re-enable after.
* Disable auditing on the table during cleanup (saves DB writes).
* Use Maintenance Window to avoid conflicting with Discovery or IRE jobs.
* Delete relationships first, then the CI records — prevents orphans.
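The "relationships first" tip can be scripted the same chunked way. A minimal sketch, assuming the doomed CIs are identified by `install_status = 7` (adjust to your own criteria):

```javascript
// Sketch: batch-delete cmdb_rel_ci rows where either end of the
// relationship is a CI that is about to be deleted, so no orphaned
// relationship records are left behind.
var rel = new GlideRecord('cmdb_rel_ci');
rel.addQuery('parent.install_status', '7')            // illustrative condition
   .addOrCondition('child.install_status', '7');
rel.setLimit(10000); // batch size; repeat the run until no rows remain
rel.query();

while (rel.next()) {
    rel.deleteRecord();
}
```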

 

4 – Recommended Sequence
1. Identify CIs → confirm no dependencies
2. Delete related records from cmdb_rel_ci
3. Batch-delete CI records via chunked script or deleteMultiple()
4. Rebuild indexes after completion
5. Run CMDB Health to confirm no orphaned references

 

Please appreciate the efforts of community contributors by marking the appropriate response as Helpful or Accept Solution; this may help other community users find the correct solution in the future.
 

Thank You
AJ - TechTrek with AJ - ITOM Trainer
LinkedIn:- https://www.linkedin.com/in/ajay-kumar-66a91385/
YouTube:- https://www.youtube.com/@learnitomwithaj
Topmate:- https://topmate.io/aj_techtrekwithaj (Connect for 1-1 Session)
ServiceNow Community MVP 2025

 


14 REPLIES

Bhuvan
Kilo Patron

@VenkataSaiM 

 

Read this KB article for mass-deletion best practices:

 

https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0717791

 

Thanks,

Bhuvan

Nikhil Bajaj9
Tera Sage

Hi @VenkataSaiM,

 

A scheduled job is a good option, but don't try to delete all records in one go; create batches and delete them instead. Schedule the job to run periodically until all batches have been deleted. Since the data is deleted in chunks, it will not impact system performance. Here is some sample code: put this script in a scheduled job and tweak it for your environment.

 

var batchSize = 1000; // Define batch size
var gr = new GlideRecord('your_table_name');
gr.addEncodedQuery('your_encoded_query'); // Define your deletion criteria
gr.setLimit(batchSize);                   // Fetch only one batch per run
gr.query();

while (gr.next()) {
    gr.deleteRecord(); // Deletes the current record and triggers business rules
}

 

If my answer helped you in any way, please mark it as the accepted solution.

 

Regards,

Nikhil Bajaj

Please appreciate my efforts, help, and support extended to you by clicking the "Accept as Solution" button under my answer. It will motivate me to help others as well.

Bhuvan
Kilo Patron

@VenkataSaiM 

 

Also, have a look at the CMDB Data Manager policy option for deletion: when more records are marked for deletion, it automatically creates batches of at most 10,000 records, and it is an optimized way of cleaning up bulk data.

 

https://www.servicenow.com/docs/bundle/yokohama-servicenow-platform/page/product/configuration-manag...

 

Thanks,

Bhuvan

TamoghnaB
Tera Expert

@VenkataSaiM ,

 

The ServiceNow-recommended approach is to use a low-code/no-code method.

You should be able to achieve this via a Data Management delete job.

Mass data deletion in ServiceNow using Data Management involves creating and executing a delete job. This process allows for the removal of a large number of records without scripting, offering features like previewing changes and scheduling.

Please follow the steps here:

https://www.servicenow.com/docs/bundle/yokohama-platform-administration/page/administer/managing-dat...

 

Please try in lower envs first and ensure proper impact analysis.

 

Please mark reply as Helpful/Correct, if applicable. Thanks!



 

Thank you!

Dr Atul G- LNG
Tera Patron

Hi @VenkataSaiM 

Deleting 3M records is not easy, and it will almost certainly impact performance. It may also affect any records where these entries are referenced. You can use ServiceNow's Data Management feature, a background script, or a flow, but run the deletion in batches. Don't delete all 3M records in one go; break the work down by table, record type, or condition, and then proceed accordingly.

*************************************************************************************************************
If my response proves useful, please indicate its helpfulness by selecting "Accept as Solution" and "Helpful." This action benefits both the community and me.

Regards
Dr. Atul G. - Learn N Grow Together
ServiceNow Techno - Functional Trainer
LinkedIn: https://www.linkedin.com/in/dratulgrover
YouTube: https://www.youtube.com/@LearnNGrowTogetherwithAtulG
Topmate: https://topmate.io/atul_grover_lng [ Connect for 1-1 Session]

****************************************************************************************************************