Configuring scheduled jobs following a clone
04-12-2017 08:50 AM
We are doing an on-premises clone. The instructions include running a script, clone_snc_db.sh, that runs this statement:
$MYSQL -e "update $DBNAME.sys_trigger set system_id = NULL where system_id is not null;"
This clears the node assignment from scheduled jobs that are set to run on a specific node.
After cloning, is it necessary to re-schedule the jobs to run on the nodes in the new environment? In the cloned ServiceNow instance we don't see the application server nodes in the System Id LoV on the scheduled job page. How can this be rectified?
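For reference, a background script along these lines (a rough sketch, assuming the standard sys_trigger table and GlideRecord API) can list the jobs whose node assignment the clone script cleared:

// list scheduled jobs whose System ID was nulled by the clone script
var job = new GlideRecord('sys_trigger');
job.addNullQuery('system_id');
job.query();
while (job.next()) {
    gs.print(job.name + ' (next action: ' + job.next_action + ')');
}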
Labels: Instance Configuration
04-18-2017 10:46 AM
You could log in to the source instance.
04-18-2017 05:28 PM
Adam, what is the status of the System ID field on the following jobs on the target instance after your clone finishes?
UA Platform Analytics Timed Uploader
UsageAnalytics App Persistor
Clean logs
Clean Temp Files
System - reduce resources
If they are marked "ALL NODES", with one duplicate for each of the nodes, then there is nothing more to do; you are good to go.
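If it helps, here is a quick background-script sketch (hypothetical, just using the job names listed above) to check that field:

// print the System ID and parent of each of the jobs listed above
var names = ['UA Platform Analytics Timed Uploader', 'UsageAnalytics App Persistor',
             'Clean logs', 'Clean Temp Files', 'System - reduce resources'];
var job = new GlideRecord('sys_trigger');
job.addQuery('name', 'IN', names.join(','));
job.query();
while (job.next()) {
    gs.print(job.name + ' | system_id: ' + job.system_id + ' | parent: ' + job.parent);
}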
04-19-2017 01:39 AM
Hi,
I can find the master instance of each of these jobs by finding the one without a parent, and I then set its System Id to ALL NODES. However, when the business rule fires, it creates another child job, but one set to run on the database server rather than on the application server nodes. In fact, the application server nodes do not appear in the LoV for System_id. It also doesn't delete the child jobs (copied from the source instance) that don't have a system_id.
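For reference, this is roughly the kind of background script I mean (a sketch only, with "Clean logs" standing in for any of the jobs):

// find the parent copy of a job (no parent reference) and mark it ALL NODES
var parentJob = new GlideRecord('sys_trigger');
parentJob.addQuery('name', 'Clean logs');   // example job name
parentJob.addNullQuery('parent');
parentJob.query();
if (parentJob.next()) {
    parentJob.system_id = 'ALL NODES';
    parentJob.update();
}

// clean up orphaned child copies brought over from the source instance
var orphan = new GlideRecord('sys_trigger');
orphan.addQuery('name', 'Clean logs');
orphan.addNotNullQuery('parent');
orphan.addNullQuery('system_id');
orphan.query();
while (orphan.next()) {
    orphan.deleteRecord();
}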
Any thoughts?
04-19-2017 09:53 AM
Hm... the logic for the "Propagate run all nodes changes" Business Rule is in the "Scheduler" Script Include. The logic goes like this:
- A change happens to a sys_trigger record that either currently or previously had the values "ALL NODES" or "ACTIVE NODES" in the System ID field (i.e. a "parent" job).
- Get all records from the sys_cluster_state table where "status" is "online". (i.e. all nodes, both active and standby).
- If the parent job was/is for "ACTIVE NODES", only get sys_cluster_state records where "schedulers" is "any" (i.e. only nodes on the active side of the cluster)
- For each node that was found, do the following loop:
- Check whether there is already a child job for that node and, if so, delete it
- Insert a new copy of the parent sys_trigger record
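In script terms, the loop above looks roughly like this (a sketch only, not the actual Scheduler Script Include code; parentJob is assumed to be the GlideRecord of the "ALL NODES" / "ACTIVE NODES" job):

// rough sketch of the propagate loop, not the real implementation
var nodes = new GlideRecord('sys_cluster_state');
nodes.addQuery('status', 'online');
if (parentJob.system_id == 'ACTIVE NODES')
    nodes.addQuery('schedulers', 'any');   // active side of the cluster only
nodes.query();
while (nodes.next()) {
    // delete any existing child job for this node...
    var child = new GlideRecord('sys_trigger');
    child.addQuery('parent', parentJob.sys_id);
    child.addQuery('system_id', nodes.system_id);
    child.deleteMultiple();
    // ...then insert a fresh copy of the parent pointed at this node
    var copy = new GlideRecord('sys_trigger');
    copy.initialize();
    // (field values copied from parentJob here)
    copy.parent = parentJob.sys_id;
    copy.system_id = nodes.system_id;
    copy.insert();
}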
So... is your DB listed in the sys_cluster_state table? In our hosted instances we only have app nodes in that table.
There are also two Business Rules for the sys_cluster_state table that may be coming into play, "Propagate Run All Nodes on status change" and "Propagate Run All Nodes on delete". They work in reverse of the above logic. They fire when there is a change to sys_cluster_state and run the same propagate loop for every sys_trigger that is "ALL NODES" or "ACTIVE NODES".
One more thing here: if a parent job is deleted, a cascade delete rule on the sys_trigger.parent field will delete all of its children.
I think we may need to update the clone process for our on-prem customers to handle these "ALL NODES" and "ACTIVE NODES" cases. Can you please open an incident in HI? I will make sure it gets to the right team. Thanks!
04-20-2017 02:53 AM
Right, I am beginning to believe the issue is my app server node configuration. The glide.cluster.node_name parameter was missing from the glide.properties file on the app server. When I put this in and restart the nodes, I see my two nodes in the sys_cluster_state table. Hurrah. However, they both appear as being on "localhost" rather than under the actual app server names.
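For anyone else hitting this, the entry looks roughly like the following on each node (the value shown is a made-up example; use whatever unique name identifies that application server):

# glide.properties on each application server node
glide.cluster.node_name = appserver01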