
A Deep Dive into Record Deletion Performance

Khoi Bui Quoc

While experimenting with large-scale data import following this excellent guide: How to import 4 million records in 3 hours, I faced a practical challenge: how to efficiently delete millions of records for repeated testing.

 

This led me to explore a community post discussing undocumented improvements to the Table Cleaner job: Undocumented Table cleaner - OOTB improvements. To evaluate its effectiveness, I conducted a benchmark comparing three methods for deleting 1,000,000 records from a custom user table. The table is simple, containing only four fields (first_name, last_name, email, and id) and no references to other tables. Below is a breakdown of each method, how I configured or executed it, and what I observed.

 

Background Script

My first approach was using a background script with deleteMultiple():

var grFakeUser = new GlideRecord("u_fk_user");
grFakeUser.query();          // query all records in the custom table
grFakeUser.deleteMultiple(); // delete every record returned by the query

This method is fast to implement, but on a Personal Developer Instance (PDI), the transaction was terminated by the system. Before it was stopped, it had already spent 164 seconds deleting just 18,842 records. This showed that while background scripts are convenient, they’re not suitable for large datasets.
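If you do stay with a background script for smaller datasets, a common workaround for the lack of built-in batching is to delete in fixed-size chunks so that no single transaction runs too long. Below is a minimal sketch; the batch size of 1,000 and the use of setWorkflow(false) to skip business rules are my own assumptions, and on a PDI even this pattern can still hit transaction quotas.

// Hedged sketch: delete in small batches so each loop iteration stays short.
// The batch size is an arbitrary assumption; tune it for your instance.
var BATCH_SIZE = 1000;
var more = true;
while (more) {
    var grBatch = new GlideRecord("u_fk_user");
    grBatch.setWorkflow(false); // skip business rules and engines for speed
    grBatch.setLimit(BATCH_SIZE);
    grBatch.query();
    more = grBatch.hasNext();   // stop once the table is empty
    while (grBatch.next()) {
        grBatch.deleteRecord(); // delete one record at a time
    }
}

Deleting row by row is slower per record than deleteMultiple(), but it keeps each chunk of work small and predictable.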

 



Pros:

  • Easy to implement
  • Immediate execution
  • Near-limitless flexibility
  • Can bypass business rules, workflows, and other engines
  • Records can be rolled back

Cons:

  • Can hang the instance if the dataset is large
  • No built-in batching

Delete Job

Next, I tried the built-in Delete Job feature, which has been available since the Tokyo release. It's more stable and runs in the background. Here's how I configured it:

  1. Open the list of records in the custom table
  2. Right-click the first column header
  3. Choose Data Management > Delete All with preview...
  4. Uncheck the option "Run business rules and engines". You can click the Preview Cascade related link to preview how many cascade records will be deleted.
  5. Click Execute Now

This method took around 2 hours 17 minutes to delete 1 million records. It’s slower but safer, and disabling business rules is essential for performance.


Pros:

  • Safer and more controlled
  • Runs in background
  • Unexpected records can be previewed before deletion
  • Records can be rolled back

Cons:

  • Still slow for large datasets
  • Requires manual setup

Table Cleanup Policy

Finally, I explored an undocumented improvement to the Table Cleaner job, as discussed in this post.

Here’s how I configured it:

  1. Create an Auto Flush record in sys_auto_flush (a script version is sketched after these steps)
    • Tablename: u_fk_user
    • Matchfield: sys_created_on
    • Age in seconds: set to 0 to delete all records
  2. Go to the Today's Scheduled Jobs module, find the DMScheduler job, and adjust its Next action time to trigger the job
  3. Monitor execution in sys_dm_run by filtering for records where Run details starts with the sys_id of your Auto Flush record
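For repeated benchmark runs, step 1 can also be done from a script instead of the form. The sketch below assumes the sys_auto_flush column names behind the labels above are tablename, matchfield, and age; verify them on your instance before relying on this.

// Hedged sketch: create the Auto Flush (Table Cleaner) record from a script.
// Column names (tablename, matchfield, age) are assumptions based on the
// field labels above; check sys_auto_flush on your instance to confirm them.
var flush = new GlideRecord("sys_auto_flush");
flush.initialize();
flush.tablename = "u_fk_user";       // table to clean
flush.matchfield = "sys_created_on"; // field compared against the age
flush.age = 0;                       // age in seconds; 0 deletes all records
var flushSysId = flush.insert();
gs.info("Created Auto Flush record: " + flushSysId);

The returned sys_id is the value to look for in the Run details field of sys_dm_run when monitoring the job in step 3.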

This method deleted 1 million records in just 78 seconds, with minimal impact on system performance.


 

Pros:

  • Very fast for large-scale deletion
  • Minimal system impact
  • Uses platform-native cleanup logic
  • Set-it-and-forget-it: runs in the background on a schedule
  • Flexible configuration options
  • Will not trigger business rules/workflows (unless the table has the iterativeDelete attribute)
  • Respects the reference cascade rule

Cons:

  • Undocumented and less intuitive
  • Requires understanding of internal job structure
  • Monitoring requires manual inspection
  • Records cannot be rolled back
  • Cascade records cannot be previewed
  • Designed for maintenance, not ideal for one-time deletions (refer to KB0717791)

 

Benchmark Summary

Method            | Time to delete                 | Notes
Background Script | 2 hours 25 minutes (estimated) | Instance hangs, transaction stopped
Delete Job        | 2 hours 17 minutes             | Safest for deleting data
Table Cleanup     | 78 seconds                     | Fastest, minimal impact on performance

 

Conclusion

Each method has its strengths and trade-offs. For small datasets, background scripts might suffice. For safer execution, Delete Jobs are reliable. But for large-scale deletion speed, the Table Cleaner with its new scheduled job is the clear winner.

Have you tried any of these methods? I’d love to hear your experience or improvements.
