
09-15-2022 10:47 PM
Hi All,
We've been working with ServiceNow since Geneva, back in 2016, and have run about 90% of our upgrades and patching without the support of a service partner. Over the years we've worked on cutting down the timeframe to patch and upgrade, and we generally work to a two to three week window from upgrading development to upgrading production.
We have three instance environments (DEV, UAT and Production), and the DEV/UAT portion of our upgrade plan looks something like this:
Day 1
- (Morning / Afternoon) Pre-clone activities, comms etc.
- (5pm) Lock out testers and developers. (Upgrade administrator access only)
- (5pm) Back up all update sets in development (a rough export sketch is at the end of this post).
Day 2
- (3am): Scheduled clone of both DEV and UAT.
- (5am): Scheduled upgrade of both DEV and UAT.
- (8am): Post-clone/upgrade smoke tests.
- (9am): Process skipped updates.
Once the skipped updates have been processed and we're happy the instances are in a good state, we'll open the environments to testers and developers.
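To give a sense of what "process skipped updates" looks like for us, here's a minimal sketch of the kind of check we run before reopening the instances: it pulls upgrade history log entries over the REST Table API and keeps the ones flagged as Skipped. Treat the table and field names (sys_upgrade_history_log, disposition, resolution_status), the URL and the credentials as assumptions to verify against your own instance rather than a finished tool.

```python
import requests

# A rough sketch, not our actual tooling. Instance URL and credentials are
# placeholders; the table and field names (sys_upgrade_history_log, disposition,
# resolution_status) should be checked against your own instance's dictionary.
INSTANCE = "https://dev-instance.service-now.com"
AUTH = ("upgrade.admin", "change-me")

def skipped_records():
    """Pull upgrade history log entries and keep the ones flagged as Skipped."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/sys_upgrade_history_log",
        auth=AUTH,
        params={
            "sysparm_display_value": "true",  # choice labels instead of raw codes
            "sysparm_fields": "file_name,type,disposition,resolution_status",
            "sysparm_limit": "10000",
        },
        headers={"Accept": "application/json"},
        timeout=120,
    )
    resp.raise_for_status()
    rows = resp.json().get("result", [])
    # In practice we'd also filter to the latest upgrade run; omitted here.
    return [r for r in rows if r.get("disposition") == "Skipped"]

if __name__ == "__main__":
    remaining = skipped_records()
    print(f"{len(remaining)} skipped records still to review")
    for rec in remaining:
        print(f'  {rec.get("file_name")}: {rec.get("resolution_status") or "unresolved"}')
```

We still review and resolve each skipped record in the platform UI; this just gives us a quick count of what's left before we open the doors again.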
We're shifting this activity to our service partner so our core team can focus on other work, but they're of the opinion that the clones and upgrades for each environment should be done on different days, as there is a risk in cloning and upgrading both environments on the same day. That would extend this portion of the process by three or four days.
What risk is there in doing the clones and upgrades on the same day?
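For context on the Day 1 "back up all update sets" step above: it doesn't need to be fancy. Roughly, it amounts to something like the sketch below, which lists in-progress update sets via the REST Table API and writes each set's sys_update_xml payloads to a file. The state value, field names, URL and credentials are assumptions to check against your own instance, and the output is a rough snapshot rather than the same format as the platform's own Export to XML.

```python
import os
import requests

# Placeholder connection details; the state value and field names below are
# assumptions to verify on your own instance.
INSTANCE = "https://dev-instance.service-now.com"
AUTH = ("upgrade.admin", "change-me")
OUT_DIR = "update_set_backup"

def table_get(table, query, fields):
    """Minimal Table API helper: return result rows for a query."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/{table}",
        auth=AUTH,
        params={
            "sysparm_query": query,
            "sysparm_fields": fields,
            "sysparm_limit": "10000",
        },
        headers={"Accept": "application/json"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json().get("result", [])

def backup_in_progress_update_sets():
    os.makedirs(OUT_DIR, exist_ok=True)
    # 'in progress' is the stored state value we see on sys_update_set;
    # confirm it on your instance before relying on it.
    for us in table_get("sys_update_set", "state=in progress", "sys_id,name"):
        updates = table_get("sys_update_xml", f"update_set={us['sys_id']}", "payload")
        safe_name = us["name"].replace("/", "_")
        path = os.path.join(OUT_DIR, f"{safe_name}.xml")
        with open(path, "w", encoding="utf-8") as fh:
            fh.write("<unload>\n")
            for u in updates:
                fh.write((u.get("payload") or "") + "\n")
            fh.write("</unload>\n")
        print(f"Saved {len(updates)} update records from '{us['name']}' to {path}")

if __name__ == "__main__":
    backup_in_progress_update_sets()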

09-16-2022 03:09 PM
If you've been able to follow a similar cadence since Geneva for 90% of your upgrades, you've got a pretty good sense of where things can go wrong with your upgrade.
It also suggests to me that your instances are small enough to be cloned and upgraded comparatively quickly, and that you've managed to keep customization to a minimum, making the skip list process go fairly smoothly.
As instances get larger, and customizations (and integrations) get more complex, there are more opportunities for failure, both in the clone and the upgrade. If your organization has the ability to absorb those atypical events when they happen, I would personally recommend against *planning* to fail by pre-emptively moving clones and upgrades to different days. However, it is important to be resilient in case one of those actions does fail, and to learn and adapt for "next time."
When an instance clone by itself takes one to three days, and it fails on the third day requiring a restart that takes another three days, and this has happened on each of your last two clones, then definitely plan more time next time. The same consideration applies to the upgrades.
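If you want to put numbers behind that judgement, your clone request history is queryable. A sketch along these lines reports roughly how long recent clones ran; it assumes clone requests live in the clone_instance table with a state field, and uses the created/updated timestamps as a proxy for start and finish, so verify the table and field names in your dictionary before trusting the output.

```python
from datetime import datetime
import requests

# Assumptions: clone requests live in the clone_instance table and have a
# 'state' field; sys_created_on / sys_updated_on (present on every table) are
# used here as a rough proxy for request-start and last-activity times.
INSTANCE = "https://prod-instance.service-now.com"
AUTH = ("upgrade.admin", "change-me")
GLIDE_FMT = "%Y-%m-%d %H:%M:%S"  # stored glide_date_time format (UTC)

def recent_clone_durations(limit=10):
    resp = requests.get(
        f"{INSTANCE}/api/now/table/clone_instance",
        auth=AUTH,
        params={
            "sysparm_fields": "state,sys_created_on,sys_updated_on",
            "sysparm_query": "ORDERBYDESCsys_created_on",
            "sysparm_limit": str(limit),
        },
        headers={"Accept": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()
    for req in resp.json().get("result", []):
        start = datetime.strptime(req["sys_created_on"], GLIDE_FMT)
        end = datetime.strptime(req["sys_updated_on"], GLIDE_FMT)
        hours = (end - start).total_seconds() / 3600
        print(f'state={req.get("state")}  ~{hours:.1f}h from request to last update')

if __name__ == "__main__":
    recent_clone_durations()
```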

09-22-2022 06:15 PM
Thanks for the update, Terri. Very helpful!
09-16-2022 02:34 PM
I usually block out three or four weeks for major release upgrades twice a year. We could probably do it in two weeks if we focused, but sometimes there are too many BAU distractions.
We never clone dev and UAT at the same time for our major release upgrades. We keep the UAT instance at the same level as our production instance while we run regression tests on the dev instance. We do this in case there are production issues to replicate (and fix) during this period. And if there are, we fix in UAT, push to prod, and then backport to dev on a case-by-case basis.
We then clone and upgrade the UAT instance after our two-to-three-week regression testing is complete. This clone/upgrade of UAT is a dry run of the production upgrade, following a fixed checklist we've developed that includes the bug-fix update sets, plugin updates, etc. performed on the dev instance.
So my upgrade path is:
Step 1 - Clone prod over dev. Upgrade dev to major release build/patch. Update any plugins or store apps. Regression test on dev for two or three weeks. Capture regression bug fixes in update sets. Test/evaluate/document new features that we are exposing with the upgrade.
Step 2 - Run a test upgrade on our UAT instance: Clone prod over UAT. Upgrade UAT to the major release build/patch. Update plugins and store apps (see the version-check sketch at the end of this post). Apply any regression-test fixes. Run a smoke test on key areas, including any areas with regression issues.
Step 3 - Upgrade prod instance after a week of UAT testing. Clone prod back over dev and UAT (and our sandbox).
I've followed this same path since Aspen->Berlin and it's worked well for me.
BTW I've never needed a partner for upgrade testing. I've worked in "smaller" companies with one prod instance and a small team, so we can handle the upgrade ourselves.
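For the plugin/store app piece of Step 2, the kind of check I have in mind looks roughly like the sketch below: pull installed store app versions from two instances over the REST Table API and print anything that differs. The sys_store_app table and its version field are what I see on my instances, but treat the URLs, credentials, and field list as assumptions to adapt rather than a finished tool.

```python
import requests

# Placeholder URLs/credentials; sys_store_app and its 'scope'/'version' fields
# are what I see on my instances, but verify against your own before relying on this.
AUTH = ("upgrade.admin", "change-me")

def app_versions(instance_url):
    """Return {scope: (name, version)} for installed store apps on one instance."""
    resp = requests.get(
        f"{instance_url}/api/now/table/sys_store_app",
        auth=AUTH,
        params={"sysparm_fields": "scope,name,version", "sysparm_limit": "10000"},
        headers={"Accept": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()
    return {r["scope"]: (r["name"], r["version"]) for r in resp.json().get("result", [])}

def diff_versions(dev_url, prod_url):
    """Print any store app that is missing or at a different version between instances."""
    dev, prod = app_versions(dev_url), app_versions(prod_url)
    for scope in sorted(set(dev) | set(prod)):
        if dev.get(scope) != prod.get(scope):
            print(f"{scope}: dev={dev.get(scope)} prod={prod.get(scope)}")

if __name__ == "__main__":
    diff_versions("https://dev-instance.service-now.com",
                  "https://prod-instance.service-now.com")
```

The same diff idea works for plugins if your version exposes a queryable plugins table; the point is just to catch anything the dev upgrade changed that the UAT dry run would otherwise miss.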

09-18-2022 02:49 PM
Hi Paul,
This is great: you've made me visualise the process in another way and given me something to take back to the team. Thank you for the reply!


09-18-2022 02:59 PM
Hi Eric,
I think you're bang on: our instances are small and "un-customised" enough that we can probably afford to take a calculated risk. But as you say, learning and adapting is important, which is actually what I'm doing here; posting this question was an action from our retrospective. 🙂
Our clones take just under 2 hours. A clone taking 1-3 days blows my mind!
We did have a problem with our UAT instance during our San Diego upgrade, where a developer committed a handful of update sets in error. We made the choice to re-clone and re-upgrade, which was done overnight and didn't impact our production timeline.
Thanks for your reply!