

on 06-30-2023 10:37 AM
Understanding Scheduled Job Workers
For scheduled jobs within the #ServiceNow platform, did you know that there are 3 types of workers that process the jobs each node claims? Each node has its own "Background Scheduler" which claims up to 20 jobs at a time and puts them in the "Scheduler Queue".
The 3 types of workers that work that queue are:
- Scheduler Worker - each node has 8 of these to process jobs within the Scheduler Queue (scheduler workers are also threads and are numbered 0-7)
- Burst Worker - This is a special 9th worker thread that is only used in specific scenarios to ensure the most critical jobs do not get delayed. For a job to run on the Burst Worker, it must have a priority of 25 or lower and have been queued for at least 60 seconds (meaning it has been delayed and is sitting in the queue)
- Progress Worker - These run on scheduler threads and are designed to handle long-running jobs where we want to display the progress/percentage in the UI (use cases such as upgrades, plugins, and update sets). By functioning as a "wrapper" around the job, the progress worker can report status updates on the activities it's conducting back to the UI to keep you informed.
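To make the Burst Worker criteria concrete, here is a minimal sketch of the eligibility rule in plain JavaScript. The helper name is hypothetical for illustration only, not an actual platform API:

```javascript
// Hypothetical helper illustrating the burst-worker eligibility rule
// described above: a job qualifies only when its priority is 25 or
// lower AND it has already waited in the queue for 60+ seconds.
function qualifiesForBurstWorker(priority, queuedSeconds) {
    return priority <= 25 && queuedSeconds >= 60;
}

console.log(qualifiesForBurstWorker(25, 60));   // high priority, delayed -> true
console.log(qualifiesForBurstWorker(25, 10));   // high priority, no delay -> false
console.log(qualifiesForBurstWorker(100, 120)); // low priority -> false
```

Both conditions must hold at once: a critical job that has not yet been delayed stays in the normal scheduler queue.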
You can see the current details of the workers for the node you're on by visiting the "stats.do" page and scrolling down a bit to the worker information.
Do you have any interesting facts or tips you want to share related to ServiceNow scheduled job workers? Feel free to comment below and let us know your thoughts!
If you enjoy ServiceNow content like this, please visit and consider subscribing to my ServiceNow focused YouTube channel: Allenovation!
Hello @Allen Andreas ,
Can you please let me know how we can enable the "Burst" worker for a scheduled job? I have a long-running job that can take 15+ hours, so I was searching for a way to execute a scheduled job on different nodes instead of just one. Then I found this interesting concept called burst schedulers.
Thanks in advance,
Mohith Devatte
I recommend posting your question to the community rather than as a comment on this article; that way, whoever answers gets credit. In any case, I've seen your question and I'll try to answer it. I think you are asking for a way to redesign your long-running job into a few smaller jobs and have them run on separate nodes. Burst workers won't help with that. Burst workers are automatically assigned by the platform when all the scheduler threads on a node are busy and a high-priority job needs to run. Burst workers are not configurable, and you can't control when one is spun up or spun down. It is automatic.
What I think you are looking for is a way to:
1. Have multiple jobs all doing the same thing
2. Have them run on different nodes
There are many ways to accomplish this. I'll briefly outline two:
1. Once you create your scheduled job, a record is created in the sys_trigger table. Open that record. There is a field named "System ID" that is blank when the job is not running and contains the ID of the node running the job while it runs. If you change the value of the System ID field to "ACTIVE NODES" and click "Save" or "Update", the platform makes a copy of that job for every node where sys_cluster_state.schedulers is set to "any" - in other words, every active node that can take scheduled jobs. That's it: if you have two nodes, you now have 3 copies of your job - one parent copy and two child copies, each "pinned" to a particular node via the system_id field.
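For reference, the same field change can be scripted instead of made through the form. This is a sketch under the assumptions above; "My Long Running Job" is a placeholder for your own job's name, and it must run server-side (e.g., as a background script) since it uses the Glide API:

// Sketch: pin copies of a scheduled job to all active nodes by setting
// the System ID field to "ACTIVE NODES" (same effect as editing the
// sys_trigger record in the UI and clicking Update).
var trig = new GlideRecord("sys_trigger");
if (trig.get("name", "My Long Running Job")) { // placeholder job name
    trig.setValue("system_id", "ACTIVE NODES");
    trig.update(); // platform creates one child copy per schedulable node
}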
2. Write a repeating script execution job that will spawn other jobs. The script of this repeating job will look something like this:
var nodes = new GlideRecord("sys_cluster_state");
nodes.addQuery("schedulers", "any");
nodes.query();
var nodesArr = [];
while (nodes._next()) {
    nodesArr.push(nodes.getValue("system_id"));
}
gs.print(nodesArr);
var script = "script here";
var nowGDT = new GlideDateTime();
// create 10 jobs that will run 1 second apart, starting now, spread evenly across the nodes
for (var ia = 0; ia < 10; ia++) {
    nowGDT.add(1000);
    var job = new GlideRecord("sys_trigger");
    job.initialize();
    job.setValue("trigger_type", 2); // 0 is run once, 2 is on demand
    job.setValue("script", script);
    job.setValue("next_action", nowGDT);
    job.setValue("system_id", nodesArr[ia % nodesArr.length]);
    job.setValue("name", "testName");
    job.insert();
}
Performance Warning: Instead of one job that takes 15+ hours, you will now have X simultaneous jobs that each take roughly 15/X hours. That means higher concurrency, which puts more pressure on resources, and more pressure on resources means potential performance degradation if they are strained. Be careful not to overtax system resources: the number of worker threads (there are only 8 per node), memory usage, slow-query bottlenecks, etc.
Multi-threading Warning: These strategies both result in multi-threading. Logical problems like race conditions can result in data inconsistency. If you aren't well versed in programming for multi-threading, it might be best to keep your job as a single job.
Please ✅ Correct if this solves your issue and/or 👍 if Helpful
"Simplicity does not precede complexity, but follows it"
@Mwatkins This was a nice explanation. Thanks for the knowledge.
Just a quick question. Let's say my job is trying to update 100k records: is the work divided between all the active nodes, or will they all be doing the same job? That is, if 25k records are updated on one node, will those updated records not be picked up by the other nodes, so each node processes only the remaining untouched records?
@Mohith Devatte All depends on how you code your job, doesn't it? 😉
Happy coding!
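To expand on that point: one common pattern (a plain-JavaScript sketch with a hypothetical helper, not a platform API) is to partition the record set deterministically so that each parallel job touches a disjoint slice, for example by index modulo the number of jobs:

```javascript
// Hypothetical sketch of deterministic work partitioning: each of
// jobCount parallel jobs processes only the records whose position
// modulo jobCount equals its own slot, so no record is touched twice.
function recordsForJob(recordIds, jobIndex, jobCount) {
    return recordIds.filter(function (id, i) {
        return i % jobCount === jobIndex;
    });
}

var ids = ["a", "b", "c", "d", "e"];
console.log(recordsForJob(ids, 0, 2)); // ["a", "c", "e"]
console.log(recordsForJob(ids, 1, 2)); // ["b", "d"]
```

Each spawned job would then query only its own slice (e.g., via an encoded query on sys_id or another stable key), so the nodes never compete for the same records.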
@Mwatkins I think I got the answer! Awesome this is. Thanks for the help 🤩

Hi @Mwatkins,
I have a scheduled job (sys_trigger) set to Active Nodes, which executes GlideEventManager (GEM) to process my events in a custom queue.
However, I can't seem to achieve multi-threading: the events always appear to be claimed by one node, and as such they are processed sequentially by the GEM.
In contrast to what @Mohith Devatte asked, (I think) I'm already handling the processing of records by creating the events with different batches.
So, how can I influence the system to delegate this and use the rest of the nodes?
Is there a class or method name or a technique I can use that tells the system to run these concurrently?
I'd appreciate if you could point me in the right direction with this. Also, please let me know if I'm misunderstanding the concept(s).
Best regards,
Lori