A Deep Dive into Record Deletion Performance
07-30-2025 02:00 AM - edited 07-30-2025 03:07 AM
While experimenting with large-scale data import following this excellent guide: How to import 4 million records in 3 hours, I faced a practical challenge: how to efficiently delete millions of records for repeated testing.
This led me to explore a community post discussing undocumented improvements to the Table Cleaner job: Undocumented Table cleaner - OOTB improvements. To evaluate its effectiveness, I conducted a benchmark comparing three methods for deleting 1,000,000 records from a custom user table. The table is simple, containing only four fields: first_name, last_name, email, and id, with no references to other tables. Below is a breakdown of each method, how I configured or executed it, and what I observed.
Background Script
My first approach was using a background script with deleteMultiple():
// Query the whole custom table and delete everything it returns
var grFakeUser = new GlideRecord("u_fk_user");
grFakeUser.query();
grFakeUser.deleteMultiple(); // single call, but the entire delete runs in one transaction
This method is fast to implement, but on a Personal Developer Instance (PDI), the transaction was terminated by the system. Before it was stopped, it had already spent 164 seconds deleting just 18,842 records. This showed that while background scripts are convenient, they’re not suitable for large datasets.
Pros
- Easy to implement
- Immediate execution
- Near-limitless flexibility
- Can bypass business rules, workflows, and other engines
- Records can be rolled back
Cons
- Can hang the instance if the dataset is large
- No built-in batching (see the sketch below)
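One way around the lack of batching is to delete in limited chunks so that no single query has to load the entire table. The following is only a sketch I have not benchmarked at the full million records; deleteRecord() per row is slower than deleteMultiple(), but it keeps each query small and skips the business rule engines explicitly:
// Delete in chunks of 10,000 so each query stays small
var BATCH_SIZE = 10000;
var deleted = 0;
var more = true;
while (more) {
    var gr = new GlideRecord('u_fk_user');
    gr.setWorkflow(false); // skip business rules and workflows for speed
    gr.setLimit(BATCH_SIZE); // only fetch one batch per query
    gr.query();
    more = gr.hasNext();
    while (gr.next()) {
        gr.deleteRecord();
        deleted++;
    }
}
gs.info('Deleted ' + deleted + ' u_fk_user records');
Run from Scripts - Background this still counts as one long transaction, so for very large tables the same loop body (one batch per run) fits better inside a Scheduled Script Execution.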
Delete Job
Next, I tried the built-in Delete Job feature, which has been available since the Tokyo release. It’s more stable and runs in the background. Here’s how I configured it:
- Open the list of records in the custom table
- Right click the first column
- Choose Data Management > Delete All with preview...
- Uncheck the option “Run business rules and engines”. You can click the Preview Cascade related link to see how many cascading records will be deleted.
- Click Execute Now
This method took around 2 hours 17 minutes to delete 1 million records. It’s slower but safer, and disabling business rules is essential for performance.
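While the job runs in the background, a quick row count from Scripts - Background is a simple way to track its progress (a read-only check, so it can be run as often as you like):
// Count the remaining rows in the custom table
var ga = new GlideAggregate('u_fk_user');
ga.addAggregate('COUNT');
ga.query();
if (ga.next()) {
    gs.info('Remaining u_fk_user records: ' + ga.getAggregate('COUNT'));
}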
Pros:
- Safer and more controlled
- Runs in background
- Can preview unexpected records before deletion
- Records can be rolled back
Cons:
- Still slow for large datasets
- Requires manual setup
Table Cleanup Policy
Finally, I explored an undocumented improvement to the Table Cleaner job, as discussed in this post.
Here’s how I configured it:
- Create an Auto Flush record in sys_auto_flush
- Tablename: u_fk_user
- Matchfield: sys_created_on
- Age in seconds: set to 0 to delete all records
- Go to the Today's Scheduled Jobs module, find DMScheduler, and adjust its Next action time so the job runs immediately
- Monitor execution in sys_dm_run by filtering records where Run details starts with the sys_id of your Auto Flush record
This method deleted 1 million records in just 78 seconds, with minimal impact on system performance.
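For repeated benchmark runs it gets tedious to fill in the Auto Flush form by hand, so the record can also be created from a script. This is only a sketch under the assumption that the sys_auto_flush fields are named tablename, matchfield, and age; verify the field names against your instance's dictionary first:
// Create the Auto Flush (Table Cleaner) record from a script
var flush = new GlideRecord('sys_auto_flush');
flush.initialize();
flush.setValue('tablename', 'u_fk_user');        // table to clean (assumed field name)
flush.setValue('matchfield', 'sys_created_on');  // date field the age is measured against (assumed field name)
flush.setValue('age', 0);                        // age in seconds; 0 deletes every record (assumed field name)
var flushId = flush.insert();
gs.info('Auto Flush record created: ' + flushId);
The DMScheduler job still has to fire (or be nudged via its Next action) before anything is deleted, and the returned sys_id is the same value to look for when filtering sys_dm_run.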
Pros:
- Very fast for large-scale deletion
- Minimal system impact
- Uses platform-native cleanup logic
- Set-it-and-forget-it: runs in the background on a schedule
- Flexible configuration options
- Will not trigger business rules/workflows (unless the table has the iterativeDelete attribute)
- Respects the reference cascade rule
Cons:
- Undocumented and less intuitive
- Requires understanding of internal job structure
- Monitoring requires manual inspection
- Records cannot be rolled back
- Cascade records cannot be previewed
- Designed for maintenance, not ideal for one-time deletions (refer to KB0717791)
Benchmark Summary
| Method | Time to delete | Notes |
| --- | --- | --- |
| Background Script | 2 hours 25 minutes (estimated) | Transaction terminated by the system; instance at risk of hanging |
| Delete Job | 2 hours 17 minutes | Safest for deleting data |
| Table Cleanup | 78 seconds | Fastest, minimal impact on performance |
Conclusion
Each method has its strengths and trade-offs. For small datasets, background scripts may suffice. For safer execution, Delete Jobs are reliable. But for raw large-scale deletion speed, the Table Cleanup policy combined with the new scheduled job is the clear winner.
Have you tried any of these methods? I’d love to hear your experience or improvements.