Configuring scheduled jobs following a clone
04-12-2017 08:50 AM
We are doing an on-premise clone. The instructions include running a script clone_snc_db.sh that runs a statement:
$MYSQL -e "update $DBNAME.sys_trigger set system_id = NULL where system_id is not null;"
This clears the node assignment from any scheduled jobs that are set to run on a specific node.
After cloning, is it necessary to re-schedule the jobs to run on the nodes in the new environment? In the cloned instance we don't see the application server nodes in the System ID list of values (LoV) on the scheduled job form. How can this be rectified?
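For what it's worth, a background script along these lines (a rough sketch using the standard GlideRecord API; read-only, so safe on source or clone) lists the jobs that carry a node affinity and would therefore be touched by that update statement:

var trig = new GlideRecord('sys_trigger');
trig.addNotNullQuery('system_id');   // only jobs with a node affinity set
trig.orderBy('name');
trig.query();
while (trig.next()) {
    // each hit is a job pinned to a node, or marked ALL NODES / ACTIVE NODES
    gs.print(trig.getValue('name') + ' -> ' + trig.getValue('system_id'));
}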
Labels: Instance Configuration
04-12-2017 10:27 AM
There are a couple of types of sys_trigger records that might have the system_id field filled in.
1. Someone might have manually set the system_id field so that the job runs on a specific node (not recommended; not out-of-box behavior).
2. There are special values for the system_id field, "ALL NODES" and "ACTIVE NODES". These types of scheduled jobs were introduced in later versions of ServiceNow. When were the on-premise clone instructions that you are using last verified by ServiceNow?
"ALL NODES" works by automatically creating a duplicate of the job for each node in the cluster - including nodes on the secondary side of the High Availability cluster (i.e. the passive/standby nodes). An example of an "ALL NODES" job is "Reduce System Resources". This job needs to run all the time on every node to free up resources that may not have been released by the Java Virtual Machine, avoiding critical performance problems.
"ACTIVE NODES" works the same way, except that the duplicates run only on nodes on the active (primary) side of the cluster.
The original "ALL NODES" or "ACTIVE NODES" job acts as a kind of controller for the duplicates. If you update the controller job, a business rule named "Propagate run all nodes changes" fires and makes sure all the duplicates are set up correctly. However, this propagation happens only in the business logic layer, so it will not run when you update the table directly with a MySQL command.
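You can see the controller/duplicate pattern for yourself with a quick read-only background script - a sketch, using the "Reduce System Resources" job as the example (adjust the name to match your instance):

var job = new GlideRecord('sys_trigger');
job.addQuery('name', 'Reduce System Resources');
job.query();
while (job.next()) {
    // expect one row with system_id = 'ALL NODES' (the controller)
    // plus one row per node, each carrying that node's system ID
    gs.print(job.getValue('sys_id') + ' system_id=' + job.getValue('system_id'));
}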
04-18-2017 02:13 AM
Many thanks for the info.
It's not clear to me what I need to do with these jobs after I start up my cloned instance. I have seven instances of each of the following jobs:
- Clean logs
- Clean Temp Files
- System - reduce resources
- UA Aggregated Analytics Timed Uploader
- UA Platform Analytics Timed Uploader
- UsageAnalytics App Persistor
On the source instance, one copy of each of these jobs was set to ALL NODES and the other six were assigned one to each of the six nodes. On the clone, I have four nodes. So do I delete all but one instance of each of these jobs and set the remaining instance to "ALL NODES"?
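In case it is useful to anyone else, here is a rough GlideAggregate sketch (standard API, run as a background script) that counts the copies of each job name; names with more than one copy are the duplicated ones:

var ga = new GlideAggregate('sys_trigger');
ga.addAggregate('COUNT');
ga.groupBy('name');
ga.query();
while (ga.next()) {
    var copies = parseInt(ga.getAggregate('COUNT'), 10);
    if (copies > 1) {   // only job names with duplicates, e.g. "Clean logs: 7"
        gs.print(ga.getValue('name') + ': ' + copies);
    }
}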
04-18-2017 08:31 AM
If you change the one that is marked "ALL NODES" through the UI, the business rule will delete the remaining duplicates and create new ones on the appropriate nodes of the cloned instance.
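The same fix can be scripted if the UI route is awkward - a sketch only, not verified against the conditions on that business rule, so test it on a sub-production copy first. It re-marks one copy of a job as the "ALL NODES" controller through the application layer so the propagation rule fires ("Reduce System Resources" stands in for whichever job you are repairing):

var job = new GlideRecord('sys_trigger');
job.addQuery('name', 'Reduce System Resources');
job.query();
if (job.next()) {                 // take the first copy as the new controller
    job.setValue('system_id', 'ALL NODES');
    job.setWorkflow(true);        // make sure business rules run (the default)
    job.update();                 // application-layer update, unlike the MySQL statement
}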
04-18-2017 10:39 AM
How do you tell which one had previously been marked as "ALL NODES", given that the system_id was cleared during the cloning process?