
on 04-20-2024 09:38 PM
Hi All
Just thought to bring many points together in one place for quick reference, even though all of this is available in the docs here and there. Recently I was working with a customer where data had not been archived in a decade, resulting in slow performance and high DB growth, which can translate into cost. The DB subscription space was exceeded, which may add some infrastructure cost too.
NOTE: KEEP DATA DEBT OPTIMAL
There are a few approaches which we thought through and carefully tested.
(1) Direct Deletion Using Background Script - This approach is not supported/approved by ServiceNow. With consent from the customer you can proceed, but this is not a certified way from a SaaS perspective.
Here is a sample script which you can alter as per your needs.
=================================================================
// Define the date range for the deletion
var startDate = '2024-04-01'; // Adjust the start date as per requirement
var endDate = '2024-04-03';   // Adjust the end date as per requirement

// Query for incidents within the date range
var incGr = new GlideRecord('incident');
incGr.addQuery('sys_created_on', '>=', startDate);
incGr.addQuery('sys_created_on', '<=', endDate);
incGr.setWorkflow(false); // Skip business rules, workflows, and notifications during deletion
incGr.query();

while (incGr.next()) {
    var incidentSysId = incGr.sys_id.toString();

    // Delete attachments associated with the incident
    var attGr = new GlideRecord('sys_attachment');
    attGr.addQuery('table_sys_id', incidentSysId);
    attGr.deleteMultiple();

    // Delete SLA records associated with the incident
    var slaGr = new GlideRecord('task_sla');
    slaGr.addQuery('task', incidentSysId);
    slaGr.deleteMultiple();

    // Delete journal entries (comments, work notes, etc.) associated with the incident
    var jnGr = new GlideRecord('sys_journal_field');
    jnGr.addQuery('element_id', incidentSysId);
    // Optionally filter by element for specific journal types, e.g., work_notes
    //jnGr.addQuery('element', 'work_notes');
    jnGr.deleteMultiple();

    gs.print('Incident ' + incGr.number + ' has been deleted');

    // Finally, delete the incident itself
    incGr.deleteRecord();
}
** Note - You can add more child-record tables of Incident, such as task_sla, sys_audit, etc. Add them as per your environment's needs so that no orphan records are left.
=================================================================
Since deletion is audited, this script will create entries in the sys_audit_delete and sys_audit_relation tables. This needs to be suppressed before you fire the delete script.
Use this property to suppress it - glide.db.audit.ignore.delete
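A minimal sketch of toggling that property from a background script around the purge (the property name is the one referenced above; confirm its behaviour on a sub-production instance first):
=================================================================
// Remember the current value, suppress delete auditing, run the purge, then restore.
// Property name as referenced above - validate on a sub-production instance first.
var previousValue = gs.getProperty('glide.db.audit.ignore.delete', '');
gs.setProperty('glide.db.audit.ignore.delete', 'true');

// ... run the deletion script here ...

// Restore the original value once the purge is complete
gs.setProperty('glide.db.audit.ignore.delete', previousValue);
=================================================================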
Also make sure to turn off the 'Cancel after 4 hours' flag and keep the rollback flag on, as this will help you in gathering stats.
Because rollback contexts contain a significant amount of data, they're deleted after 10 days by default.
The Clean Expired Rollback Contexts scheduled job runs daily to delete expired records in the Rollback Context [sys_rollback_context] table. If you need to retain a rollback context for more than 10 days, you can do so by adding a system property.
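If you want to check which rollback contexts are still retained before they age out, here is a minimal sketch (plain GlideRecord usage, nothing instance-specific assumed):
=================================================================
// List rollback contexts created within the default 10-day retention window
var rbGr = new GlideRecord('sys_rollback_context');
rbGr.addQuery('sys_created_on', '>=', gs.daysAgoStart(10));
rbGr.orderByDesc('sys_created_on');
rbGr.query();
while (rbGr.next()) {
    gs.print(rbGr.getDisplayValue() + ' - created ' + rbGr.sys_created_on);
}
=================================================================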
Snapshot of the rollback segment, for clarity.
(2) Table Cleaner - Table Cleaner can also be used with cascade delete, but it has a few limitations, so we avoided this option. It is primarily suited for ongoing maintenance rather than a one-time bulk destroy; see the quick rule-review sketch after the list of limitations below.
- If a Table Cleaner rule has a query that takes longer than 30 seconds to complete, the entire Table Cleaner job is stopped.
- For a particular table, the job will delete records for 1,200 seconds on a given run; once it crosses 1,200 seconds, the job stops deleting from that table and resumes it in the next run (the local host logs record a message when the 1,200-second threshold is reached).
- Table Cleaner does not work for tables that are part of an extension hierarchy; for cleaning extended tables, the Data Archiving plugin should be used, and the Email Retention plugin will archive and eventually destroy email messages that are no longer needed.
- Though there are properties that can control this, speed still depends on data size.
- Do not move Table Cleaner rules with a slow query (60+ seconds) into production.
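Here is the quick rule-review sketch mentioned above, assuming the Table Cleaner rules live in the sys_auto_flush table with 'tablename' and 'age' (seconds) columns; verify the table and field names on your instance:
=================================================================
// Review existing Table Cleaner rules before relying on (or adding) any.
// Table and field names here are assumptions - confirm on your instance.
var tcGr = new GlideRecord('sys_auto_flush');
tcGr.orderBy('tablename');
tcGr.query();
while (tcGr.next()) {
    gs.print(tcGr.getValue('tablename') + ' - age (seconds): ' + tcGr.getValue('age') +
        ', cascade delete: ' + tcGr.getValue('cascade_delete'));
}
=================================================================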
(3) Archive & Destroy - As of now, this option is best suited for our scenario. The Archive rule archives the data and the Destroy rule destroys it. Destroy is very fast, and it is suggested that Destroy only be done in alignment with the customer's data needs (contractual and regulatory).
Based on research and observation, you can fine-tune archival jobs using the parameters below.
glide.db.archiving.max_consumer_workers = 4 to 8
glide.db.archive.chunk_size = 1000 to 2000
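A minimal sketch for applying these from a background script; the values below are simply the upper end of the ranges above, so tune them for your own instance and test in sub-production first:
=================================================================
// Tune archiver throughput - example values only, adjust per your environment
gs.setProperty('glide.db.archiving.max_consumer_workers', '8');
gs.setProperty('glide.db.archive.chunk_size', '2000');

// Confirm the values took effect
gs.print('Workers: ' + gs.getProperty('glide.db.archiving.max_consumer_workers'));
gs.print('Chunk size: ' + gs.getProperty('glide.db.archive.chunk_size'));
=================================================================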
(4) DB Space Reclaim - After archiving, space in the main table gets freed and data is shifted to the ar_ tables. When Destroy is also run in conjunction with Archive, data is gone from both the main table and the ar_ tables.
In Utah or later releases, ServiceNow published the DB free-space reclaim job. You can find it under Scheduled Jobs.
Table free-space information can be seen per table from the 'Telemetry - Table Growth' dashboard by clicking into a table, or from /sys_physical_table_stats_list.do, field 'Reclaim Estimate in GB'.
A table REBUILD compacts the table and frees up space in the DB.
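If you prefer a scripted view of the same information, here is a rough sketch that scans sys_physical_table_stats for tables with a large reclaim estimate. The column names used below are assumptions (the list view only shows labels such as 'Reclaim Estimate in GB'), so check the actual names on /sys_physical_table_stats_list.do first:
=================================================================
// Sketch: list tables with a large reclaim estimate.
// Column names are assumptions - confirm them on your instance before running.
var TABLE_FIELD = 'table_name';            // label: Table Name (assumed)
var SIZE_FIELD = 'table_size_gb';          // label: Table Size in GB (assumed)
var RECLAIM_FIELD = 'reclaim_estimate_gb'; // label: Reclaim Estimate in GB (assumed)

var statsGr = new GlideRecord('sys_physical_table_stats');
statsGr.query();
while (statsGr.next()) {
    var sizeGb = parseFloat(statsGr.getValue(SIZE_FIELD)) || 0;
    var reclaimGb = parseFloat(statsGr.getValue(RECLAIM_FIELD)) || 0;
    if (reclaimGb > 10) {
        gs.print(statsGr.getValue(TABLE_FIELD) + ': ' + reclaimGb + ' GB reclaimable of ' + sizeGb + ' GB');
    }
}
=================================================================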
- Requirements for table rebuilds:
- The instance is on the Utah or a newer release.
- The table has a Table Cleaner rule (see 'Rebuilding tables that do not have a Table Cleaner rule but meet all other requirements' for more details).
- The table has more than 10 GB for 'Reclaim Estimate in GB'.
- To be eligible for compaction, the table must have over 50% free space, as indicated by the 'Reclaim Estimate in GB' value in the related sys_physical_table_stats record. For example, if the 'Table Size in GB' is 100 GB, then the 'Reclaim Estimate in GB' must exceed 50 GB (i.e., more than 50% of 100 GB) for the table to be considered eligible for compaction.
- This requirement can be adjusted from 50% to 30% by adding the property:
- Property: glide.db.compaction.criteria.reclaim_percentage
- Type: integer
- Value: 30
- The table has less than 50 million rows.
- Job: DB Compaction > Executes daily; this job triggers the rebuild of tables that meet the requirements for compaction. It does not rely on the data in sys_physical_table_stats and pulls the table data independently.
- Table: Review /sys_schema_change_list.do for the table record with the 'alter_type' field set to 'compact_table' (a small query sketch follows this list).
- Field: reclaim_size_estimate > Estimated reclaim size before compaction.
- Field: table_size_before > Table size before compaction.
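To see what the DB Compaction job has actually queued or rebuilt, here is a minimal sketch using the table and field names listed above:
=================================================================
// Review compaction entries in sys_schema_change (alter_type = compact_table)
var scGr = new GlideRecord('sys_schema_change');
scGr.addQuery('alter_type', 'compact_table');
scGr.orderByDesc('sys_created_on');
scGr.query();
while (scGr.next()) {
    gs.print(scGr.getDisplayValue() +
        ' - estimated reclaim before compaction: ' + scGr.getValue('reclaim_size_estimate') +
        ', table size before compaction: ' + scGr.getValue('table_size_before'));
}
=================================================================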
In our case we altered/added the reclaim property to 30.
glide.db.compaction.criteria.reclaim_percentage=30
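If the property does not exist yet, it can be created as an integer as described above; a minimal sketch:
=================================================================
// Create the compaction threshold property as an integer, or update it if it already exists
if (!gs.getProperty('glide.db.compaction.criteria.reclaim_percentage')) {
    var propGr = new GlideRecord('sys_properties');
    propGr.initialize();
    propGr.name = 'glide.db.compaction.criteria.reclaim_percentage';
    propGr.type = 'integer';
    propGr.value = '30';
    propGr.insert();
} else {
    gs.setProperty('glide.db.compaction.criteria.reclaim_percentage', '30');
}
gs.print('Reclaim percentage: ' + gs.getProperty('glide.db.compaction.criteria.reclaim_percentage'));
=================================================================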
Learning from this - plan your archival policy well in advance, configure it, and use it. Otherwise, doing this in one go is very cumbersome, as the jobs need to be triggered during off-business hours. So far we have destroyed 30 million records, which adds up to just one year of data. We have a few more years to go.
!! HAPPY LEARNING !!
Regards
RP
This really helped. Thanks for sharing.
Very Helpful article.
Very interesting and informative article. Thanks Rahul.
What are the things we should do so that we don't reach this position?

Talk to your customer about the data retention they need - as per contractual obligations or regulatory requirements.
Accordingly, make your archive & destroy decision.
Regards
RP
Quite informative. Thanks for sharing this.