Fix Script doesn't update all the records
‎01-10-2020 03:53 AM
Hi~
I use the fix script below to update the status value on the cmdb_ci table. I expect it to update all 500,000+ records, but it only updates a few records each time it runs. For example, the first run updates 28 rows, the second run updates 133 rows, and so on.
If I add setLimit(20000), it updates 20,000 records.
Does anyone know why it behaves this way? Is there anything wrong with my code? I expect it to update all 500,000+ records in a single run, without setLimit().
var gr = new GlideRecord('cmdb_ci');
gr.addQuery("operational_status", "5");
//gr.setLimit(20000);
gr.query();
while (gr.next()) {
    gr.operational_status = 20;
    gr.update();
}
If I use the fix script below to count how many records should be updated, it shows "*** Script: Fix Script:Map Old status to new value - Number of records processed is 554837".
var count = 0;
var gr = new GlideRecord('cmdb_ci');
gr.addQuery("operational_status", "5");
gr.query();
while (gr.next()) {
    gr.operational_status = 20;
    count = count + 1;
}
gs.log("Fix Script:Map Old status to new value - Number of records processed is " + count);
Labels: Scripting and Coding

‎01-10-2020 12:12 PM
Hi
I'd like to make a further suggestion for running scripts that update, delete, or insert large numbers of records. If you don't want to use the setLimit() method, I recommend having the logic sleep for a second between iterations. When these run as a single huge MySQL statement (e.g. UPDATE X WHERE A = B) in the database instead of thousands of tiny ones, they are prone to blocking other operations, which can then queue up against the table being worked on. I usually use a script that breaks the operation into chunks and uses a two-step process for the actual update/deletion: the first step pulls a list of sys_ids, and the second step runs the multiple update using those sys_ids as the delimiter. The reason to query by sys_id in the multiple update is that, because it is the primary key of the table, you mitigate the risk of hitting gap locks (see InnoDB's gap locks - Percona Database Performance Blog).
So something like this:
- Large loop that continues as long as there are records to be updated/deleted
- GlideRecord query against the target result set
- Set a limit on the query of X records at a time (X should be less than or equal to 50,000)
- While loop
- Put each sys_id into an array, listOfSysIDs
- End while loop
- If listOfSysIDs is empty, we are done; exit the outer loop
- Else, run the multiple update/delete using addQuery("sys_id", listOfSysIDs)
- Start the next loop
Here is an example of a script that uses this principle in this post: Trying to delete multiple duplicate records on a table. A minimal sketch of the same pattern follows below.
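A minimal sketch of that chunking pattern, reusing the cmdb_ci/operational_status query from the question; the chunk size, the one-second pause, and the log message are placeholders to adjust for your instance:
var CHUNK_SIZE = 10000; // stay well below 50,000 per iteration
var moreToProcess = true;
while (moreToProcess) {
    // Step 1: pull a limited list of sys_ids that still need the update
    var listOfSysIDs = [];
    var finder = new GlideRecord('cmdb_ci');
    finder.addQuery('operational_status', '5');
    finder.setLimit(CHUNK_SIZE);
    finder.query();
    while (finder.next()) {
        listOfSysIDs.push(finder.getUniqueValue());
    }
    if (listOfSysIDs.length == 0) {
        // Nothing left to update - exit the outer loop
        moreToProcess = false;
    } else {
        // Step 2: update just this chunk, keyed on the primary key (sys_id)
        var updater = new GlideRecord('cmdb_ci');
        updater.addQuery('sys_id', 'IN', listOfSysIDs.join(','));
        updater.setValue('operational_status', 20);
        updater.updateMultiple();
        gs.log('Fix Script:Map Old status to new value - updated chunk of ' + listOfSysIDs.length);
        // Brief pause between chunks so other operations against the table are not blocked
        gs.sleep(1000);
    }
}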
Hit Helpful or Mark correct based on the impact
Abdul Azeez

‎01-10-2020 12:33 PM
Interesting. What makes you say that it runs as one huge SQL statement, when there is the possibility of different business logic running against each record you update and causing other fields to differ?
The only time I have had issues with large blocks of updates is when they have run longer than 24 hours, at which point the temp file the DB uses gets deleted and the script crashes.

‎01-10-2020 12:36 PM
var gr = new GlideRecord("cmdb_ci");
gr.addQuery("operational_status", "5");
gr.setValue("operational_status", 20);
gr.updateMultiple();
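One note on this pattern: setValue() plus updateMultiple() asks the platform to apply the change to every record matching the query, rather than looping over 500,000+ records and calling update() on each. If you also want to skip business rules and other engines during the mass update (an assumption about the requirement, not something asked in this thread), you can call gr.setWorkflow(false) on the GlideRecord before updateMultiple().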