Posted on 09-21-2022 05:00 AM
The problem
How many times have you been tasked with deleting or updating a large amount of data in ServiceNow?
Deleting or updating a large amount of data is often achieved via scripting - either background scripts or fix scripts are used.
This always brings some uncertainty about what exactly is going to be deleted or updated, as well as about the cascading deletions/updates triggered as a consequence.
Of course, one can (actually must) use log messages to verify the data before executing the delete/update statements. But this is still inconvenient and might lead to data loss if not done right.
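To illustrate the pattern, here is a minimal background/fix-script sketch of that verify-then-delete approach; the table name and encoded query are hypothetical placeholders, not taken from this post:

// Log-first pattern: confirm what matches before deleting anything.
// 'u_legacy_import_data' and the query are placeholders - adjust to your case.
var gr = new GlideRecord('u_legacy_import_data');
gr.addEncodedQuery('sys_created_on<javascript:gs.daysAgoStart(365)');
gr.query();
gs.info('Records matching the condition: ' + gr.getRowCount());

// Destructive part - uncomment and re-run only after the logged count
// (and a few sample records) have been verified:
// gr.setWorkflow(false);   // optionally skip business rules and engines
// gr.deleteMultiple();

Even with the logging in place, there is no rollback once deleteMultiple() has run - which is exactly the gap the new modules address.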
System Data Management in Tokyo
Starting with the Tokyo release, ServiceNow introduces a new application menu - System Data Management.
It consists of two modules:
- Delete Jobs
- Update Jobs

Since the two are more or less the same, apart from the operation they perform, I will give examples with Delete Jobs, but they apply to Update Jobs as well.

To create a new job, one first selects a table and then applies some filtering if needed. There is also a Run business rules and engines checkbox (gr.setWorkflow(false), right...).
A Preview button shows the total count of records that fulfil the condition, as well as a link to them.
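For comparison, a rough script equivalent of that preview count might look like the sketch below (again with a placeholder table and filter):

// Count the records a job with this table and filter would touch.
var ga = new GlideAggregate('u_legacy_import_data');                   // hypothetical table
ga.addEncodedQuery('sys_created_on<javascript:gs.daysAgoStart(365)');  // hypothetical filter
ga.addAggregate('COUNT');
ga.query();
if (ga.next()) {
    gs.info('Records that would be affected: ' + ga.getAggregate('COUNT'));
}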

Clicking Continue gives you the possibility to select a time to run the deletion (Run at), as well as two related links - one to preview the cascade and one to execute the job immediately.
Clicking Preview cascade populates the related lists with all cascading records that will also be deleted, along with their count (and the actual count of deleted records, once the job is executed):
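As a side note, if you are curious where cascading records may come from, you can list the reference fields on other tables that point to your target table - a quick sketch, assuming the standard sys_dictionary columns ('incident' is just an example target):

// List fields on other tables that reference the target table.
// The actual cascade behaviour depends on each reference field's cascade rule,
// which the Preview cascade link resolves for you.
var dict = new GlideRecord('sys_dictionary');
dict.addQuery('internal_type', 'reference');
dict.addQuery('reference', 'incident');
dict.query();
while (dict.next()) {
    gs.info(dict.getValue('name') + '.' + dict.getValue('element') + ' references incident');
}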

Clicking Execute now prompts us to Proceed or Cancel (along with a warning message). If we proceed, we will see a screen similar to the Data import one, with information about the execution:

If we go to Check execution results, we are presented with information about the deleted records (both direct and cascaded).

But, more importantly, we are presented with a related link to Rollback the transaction if needed.

Conclusion
In my opinion, this functionality has been needed for ages, as scripting was the only way to manage large amounts of data. I am really happy that we have it now in Tokyo and I want to share it with as many people as I can.
Please mark Helpful, Bookmark and feel free to share with your network. Thanks in advance!
Martin Ivanov
Community Rising Star 2022
Hi Martin,
is it possible to schedule these jobs, for example to make them run at a certain time every day?

What is the solution for deleting data with Delete Jobs on a daily schedule, e.g. before office hours?

Hi @lmundere,
Yes, we can schedule these jobs. See the original post: https://www.servicenow.com/community/now-platform-articles/update-or-delete-bulk-records-without-scr...

Hi @Martin Ivanov, I think the images are broken - would you consider re-adding them for easier understanding?

The problem is that the planning can only be set for a single day. We need the Delete Jobs to be scheduled to start at the same time once a day, 365 days a year, or once a week.
How do you configure this? It is not described in the original post.

Hi all,
the cleanup of the CMDB is a never-ending story, so the deletion of old, unused CIs has to be done periodically. At the moment a Delete Job can only be run once - you can run it now or set a single scheduled time - so we have to create a new job each time we want to clean up the CMDB.
After the job has completed successfully, it is deleted and cannot be re-run.
So we created one Delete Job, but we never run it directly; instead we always do an Insert and Stay to duplicate it and then run the duplicate.
But this is time consuming and always needs a manual step.
Is there a way to automate this and set a period, similar to the Scheduled Jobs functionality?

Marc Zbinden - have you tried setting up an Archive Rule? Once you set up an Archive Rule, you can then create an Archive Destroy Rule to get rid of archived records.

I have a similar question to others who have posted here - I am wondering if anyone has used a scripted scheduled job to generate a delete job with the parameters (table, conditions, etc.) and, once saved, execute it? I don't know enough scripting to do it myself, but was wondering if anyone had done this previously. I have looked at table cleanup policies and archiving, but they don't meet my needs.