kevinanderson
Giga Guru

So, I have been developing on ServiceNow for a little over a year now, and one of the biggest pain points for me has been deploying code. Update sets are ugly, in my opinion. As a web developer, I am used to pushing a batch of files up to a server with a single sync from an FTP client and then moving on. Deploying code via update sets in the ServiceNow UI is painful: it takes lots of clicks to get code migrated, it's slow, and it's really awful.

In my current role, we often have large deployments, sometimes hundreds of update sets. That means lots of clicks, often lots of errors and warnings, and long evenings on go-live day working through all of it.

I went to Knowledge16 and picked quite a few brains on how this could be improved. One good idea shown to me was the concept of overwriting update sets after they have been deployed, which helps keep our update set count down. Got an error moving an update set to QA? Go back to DEV, re-open the update set, capture the fix, go to QA, delete the received update set, then pull the fixed update set back in and re-apply. Nice.

OK, so using this approach we are down to tens of update sets instead of hundreds... but that is still a LOT of clicking to get 20 or 30 update sets previewed and committed. At the Knowledge16 hackathon I decided to start working on a way to automate the deployment process.

My approach would be:

1. Pull in remote update sets via a script

2. Auto-preview the update sets

3. Produce a preview report for ALL retrieved and previewed update sets (sketched below)

4. Auto-commit all update sets with zero errors
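
To give a feel for what step 3 looks like in script, here is a minimal sketch. This is NOT the attached script include; the report format and the errors-only policy are just illustrative, though the tables and fields are standard:

```javascript
// Minimal sketch: report preview results for every retrieved-and-previewed
// update set and flag zero-error sets as auto-commit candidates.
var rus = new GlideRecord('sys_remote_update_set');
rus.addQuery('state', 'previewed');
rus.orderBy('sys_created_on');
rus.query();
while (rus.next()) {
    var agg = new GlideAggregate('sys_update_preview_problem');
    agg.addQuery('remote_update_set', rus.getUniqueValue());
    agg.addQuery('type', 'error'); // count errors only; warnings don't block
    agg.addAggregate('COUNT');
    agg.query();
    var errors = agg.next() ? parseInt(agg.getAggregate('COUNT'), 10) : 0;
    gs.info('Preview report: "' + rus.getValue('name') + '" - ' + errors + ' error(s)');
    // errors === 0 --> eligible for step 4 (auto-commit)
}
```

The actual commit in step 4 goes through update set internals I won't sketch here; see the attached update set for the real thing.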

Working with a few of my teammates, we hacked together a script include to achieve the above goals a few weeks after Knowledge16. I have since spent the last two months creating a UI page that interacts with that script include so that anyone can deploy update sets quickly. I would like to present the code here for you to take a look at. Please play with it; feedback is appreciated. This is version 1 of this code. It has not been battle tested yet, so use at your own risk. I DO NOT recommend using it in production at this point.

A few of the features:

  • filter remote update sets to deploy by date created, release date, or keyword matching (see the sketch after this list)
  • uses the native "remote instances" module in ServiceNow to fetch update sets from the remote system
  • filter the preview results by keyword, or show only clean update sets or only update sets with errors
  • print preview results
  • commit log displayed for all committed update sets
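
As a taste of the first feature, the date/keyword retrieval filter boils down to something like this (the cutoff date and keyword are placeholders):

```javascript
// Minimal sketch of the retrieval filter: pick loaded remote update sets
// by creation date and a keyword in the name. Values are placeholders.
var cutoff = '2016-06-01 00:00:00';
var keyword = 'hotfix';
var rus = new GlideRecord('sys_remote_update_set');
rus.addQuery('state', 'loaded');              // retrieved, not yet previewed
rus.addQuery('sys_created_on', '>=', cutoff);
rus.addQuery('name', 'CONTAINS', keyword);
rus.orderBy('sys_created_on');                // keep install order
rus.query();
while (rus.next()) {
    gs.info('Queued for preview: ' + rus.getValue('name'));
}
```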

Update Set (attached)

auto-deploy_update_sets_v1.0_ka_sys_remote_update_set_.xml

u_client_templates (u_descriptionLIKEupdate set autodeployer).xml

Screen Shots:

Capture_updateset_autodeployer.JPG

Capture_updateset_autodeployer_print_preview.JPG

Capture_updateset_autodeployer_preview_filter.JPG

Capture_updateset_autodeployer_commit_log.JPG

35 Comments
Foster Hardie
Tera Expert

It looks like the script include has everything I need. Thank you!

Norm Link
Tera Contributor

Hi Kevin,


I have submitted this as a Knowledge presentation twice, and both times it has been declined.  I will be greatly disappointed if anyone takes this information and presents it without giving me credit.


It's been a long time (5 years?) since anybody posted to this thread!  Glad you’re still interested but sad to see you’re still searching for a solution.  So, please allow me to indulge in a long-winded explanation of my (almost) fully automated process.  This is not trivial stuff.


Your proposal is on the right track.  I've been using the automated process below for years now.  And while it works, manual intervention is occasionally required, and deployments to production are not 100% automated.  But I find that manually previewing and committing a single batch update set for a release is quite painless.  I haven’t found a way to make it bullet (idiot) proof either.  Most of the time we do not get preview errors, but s**t happens.  Our success is due to how we manage change in DEV and the rest of the update set pipeline.  We have 3 sub-prod instances – DEV, TEST, & UAT.  Update set migration issues get weeded out along the way, so by the time the update sets are in UAT they are usually in a state where they won't produce preview errors when migrating to production.


Update sets can be a nightmare to manage unless you truly understand the development environment and how updates and update sets work.  Once you have that understanding, then you can automate the pipeline from DEV to PROD and Migration Life will be Good.

I’m going to describe what we have in place.  No, I don’t have an update set for this.  No, I’m not including all the nitty-gritty details.  No, I’m not telling you how to do it.  I’m only going to describe what we do and why.  I hope you find it useful.  And it is not simple.

GENERAL PIPELINE STRATEGY

The migration of update sets through the pipeline must be sequential, and automation must process update sets in the same order they were installed in an instance.  An update set may not migrate while an older update to any file it touches still sits in another update set that is not yet complete.

Oh, by the way, OOB ServiceNow leaves an update set in the Complete state after it is committed in an instance.  Those need to be set back to In progress once committed (a sketch of this follows).  An automated pipeline also means automating update set retrieval, and you don’t want to retrieve update sets that aren’t ready to be migrated.
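
To make that concrete, here is a rough sketch of the fix (not our production code; matching the local set by name is a simplification, and you would want a stronger link in practice):

```javascript
// Rough sketch: after-update business rule on sys_remote_update_set,
// firing when state changes to "committed". Committing creates a local
// sys_update_set in state "complete"; flip it back to "in progress" so
// downstream auto-retrieval won't grab it before its story is ready.
(function flipCommittedSetBack() {
    if (current.getValue('state') != 'committed')
        return;
    var local = new GlideRecord('sys_update_set');
    local.addQuery('name', current.getValue('name')); // simplification
    local.addQuery('state', 'complete');
    local.query();
    if (local.next()) {
        local.setValue('state', 'in progress');
        local.update();
    }
})();
```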

  • In DEV, all update sets must be associated with a story, and they are scanned, completed, and migrated to TEST individually by the developer.  It is the developer’s responsibility to monitor the migration progress and resolve any preview issues in TEST.  The story is required to facilitate the next step.
  • When migrating from TEST to UAT, we are migrating a Story (a group of update sets).  Here, when a Story has passed testing, we mark it as Complete which in turn scans and completes its update sets.  Again, it is the developer’s/tester’s responsibility to monitor the progress and status of the migration and resolve issues that may arise.
    • It is possible that a story will have update dependencies on another story that has not yet passed testing.  So, all update sets for a given story are scanned for issues and completed  in the same order they were installed in TEST.  If any uncompleted update set for a different story contains an older update, then none of the given story’s update sets may be completed/migrated.  It must wait for the offending story to pass testing.
    • We use a scheduled job to look for completed Stories.  It runs every 10 minutes on the 7’s and attempts to scan and complete the update sets for those Stories (a sketch of this job follows the list).
    • UAT runs another scheduled job (runs every 10 minutes on the 5’s) to retrieve and process completed update sets from TEST.
  • When migrating from UAT to PROD, we are migrating a Release (a group of stories). 
    • Again, all update sets must be scanned in the same order in which they were installed.  The same rules apply as they did when migrating from TEST to UAT.
    • If a Story’s update set cannot be completed, then the Story and its update sets are removed from the Release (set to a later release).
    • All completed update sets for the Release are added to a batch update set.  The batch update set is then migrated to PROD where it will be previewed and committed.
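
The shape of the “on the 7’s” job is roughly this.  A rough sketch only: u_story is an assumed custom reference field on sys_update_set, the rm_story state value is illustrative, and the dependency checks are simplified away:

```javascript
// Rough sketch of the TEST job: complete the update sets of completed
// stories in install order. u_story is an assumed custom field; the
// rm_story "Complete" state value varies by configuration.
var story = new GlideRecord('rm_story');
story.addQuery('state', 3); // assumed choice value for "Complete"
story.query();
while (story.next()) {
    var us = new GlideRecord('sys_update_set');
    us.addQuery('u_story', story.getUniqueValue()); // assumed custom field
    us.addQuery('state', 'in progress');
    us.orderBy('sys_created_on'); // proxy for install order
    us.query();
    while (us.next()) {
        // the Local Update Set Preview scan runs here first (one of its
        // checks is sketched further down); bail out and revert the story's
        // sets if an older update sits in another incomplete update set
        us.setValue('state', 'complete');
        us.update();
    }
}
```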

IT ALL STARTS IN DEV…

ServiceNow development with update sets is a serial paradigm.  Akin to developing ‘on trunk’.  There is no branching and merging.  If you follow the rules below, you will avoid the Update Set Migration Hell that arises from people making multiple changes to the same object in different update sets before making them complete and trying to migrate them.  There are a lot of rules here, but they are all enforceable by implementing business rules in the right places.

  • Don’t make changes in the Default update set.  If you need to try out an idea, use a sandbox instance.  You can get away with creating something in DEV because it doesn’t exist anywhere else.  But if you mess with something that is in the pipeline, you’re asking for Trouble.
  • Story based development – All update sets must be related to a story.  This greatly reduces the chance that 2 people will need to change the same thing at the same time.  It is also required because we need to be able to identify the group of update sets that must migrate together from TEST to a target instance.
  • Only 1 update set per story/per scope may be In progress at the same time.  You can have multiple In Progress sets for the same story, but they must be in different application scopes.
  • Do not allow an application file to be saved if there is an update in another update set that is still In progress.  Different stories don’t usually involve common application files.  We have 40+ developers all working in a single dev instance, and we rarely have the case where one developer is holding up the work of another. 
  • Do not allow an application file to be saved if it is an OOB file most recently updated by a patch/upgrade unless the Story’s release is set to the same or later release than the upgrade’s (we associate patches/upgrades with a Release).  This ensures a story modifying an OOB file won’t get to production prematurely.
  • Don’t feel like you need to make all the changes for a story in a single update set.  Use multiple update sets, completing and migrating them frequently as manageable units that will still pass automated testing.  Make incremental change sets that won’t ‘break the build’.
  • Before completing an update set, run a scan (we call this a Local Update Set Preview) to look for the following issues that will cause migration automation failure; one of these checks is sketched after this list.  Developers must resolve these things before being allowed to complete the update set.  Unfortunately, I haven’t figured out a way to detect missing reference problems; those have to be dealt with in the target instance.  But you can detect and resolve most migration issues in DEV, which gives your pipeline automation a 95+% chance of succeeding.
    • Multiple Versions – if an update set contains multiple updates for the same application file, invariably, the oldest update will be the one committed in the target instance.  If you find multiple versions, either delete the older update records or move them to a Salvage Yard update set.
    • Migration Issue - A newer update for a file is in the Default set.  Usually caused by people making a last minute change but forgetting to make sure they were in the right update set.  The update set being migrated does not contain the current version of the file which could lead to failures in TEST (or even PROD) even though it passes tests in DEV.
    • Inclusion Issue - An older update for a file is in the Default set.  Usually caused by someone hacking around ‘just to see’.  Your update contains (is based on) somebody’s hack that was not intended to be migrated.  This can leak the hack into the pipeline and lead to test/production failures.
    • Update Collision – a newer update is in a different update set that has already been completed and migrated.  This shouldn’t happen according to our rules, but we still see it occasionally for some inexplicable reason.  Delete the update or move it to the Salvage Yard.
    • Release Collision – we don’t require a release for a story until it’s ready to go to UAT.  But some stories for a special effort are targeted for a specific release date.  If the update set has no release specified and it has an update for a file that is targeted for a specific release, then we flag that as an error and have the developer set the release of their story to that specific release.  This keeps the change from being released before it should be.
    • Scope Mismatch – updates whose application scope does not match the update set’s scope.  Not sure how this happens, but it does.  Move the update to another update set of the correct scope.
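
As an example of what the scan does, here is a rough sketch of the Multiple Versions check.  It uses only standard tables; what to do with the duplicates (delete the older records or move them to a Salvage Yard set) is left to the developer:

```javascript
// Rough sketch of the "Multiple Versions" check: find application files
// with more than one update record inside a single update set.
function findMultipleVersions(updateSetId) {
    var dupes = [];
    var agg = new GlideAggregate('sys_update_xml');
    agg.addQuery('update_set', updateSetId);
    agg.groupBy('name');              // "name" identifies the application file
    agg.addAggregate('COUNT');
    agg.addHaving('COUNT', '>', '1'); // keep files updated more than once
    agg.query();
    while (agg.next()) {
        dupes.push(agg.getValue('name'));
    }
    return dupes; // resolve each before the set may be completed
}
```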

MIGRATING FROM DEV TO TEST

Make completing an update set a UI Action, and do not manually set the state of an update set to Complete.  Make the State field read-only (but give yourself a way to change it for housekeeping purposes).  The UI Action will:

  • Look for additional issues that could cause problems (we don’t show the Complete button unless these checks pass):
    • Make sure the above scan was completed and that there are no outstanding issues
    • Older updates in update sets for other stories that are still In progress. See the pipeline strategy above.
    • Collisions with a Patch or Upgrade.  We associate patches and upgrades with a Release.  Lower environments are upgraded before production.  We won’t migrate the story to UAT in this case until the story’s release is changed to a release that is >= the upgrade release date.
    • Other conditions that may be unique to your environment.  (we have some that are probably unique to us)
  • Make a REST call to TEST to automate the retrieval, preview, and commit of the update set
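
The REST call in that last step looks roughly like this.  The /api/x_acme/pipeline/migrate endpoint is a made-up stand-in for the Scripted REST API you would build on TEST, and credential handling is simplified:

```javascript
// Rough sketch of the UI Action's final step: tell TEST to retrieve,
// preview, and commit this update set. Endpoint is hypothetical.
try {
    var r = new sn_ws.RESTMessageV2();
    r.setHttpMethod('post');
    r.setEndpoint('https://test-instance.service-now.com/api/x_acme/pipeline/migrate');
    r.setBasicAuth('pipeline.user', '********'); // use a credential store in practice
    r.setRequestHeader('Content-Type', 'application/json');
    r.setRequestBody(JSON.stringify({ update_set: current.getUniqueValue() }));
    var resp = r.execute();
    if (resp.getStatusCode() != 200)
        gs.error('Pipeline trigger failed: ' + resp.getBody());
} catch (ex) {
    gs.error('Pipeline trigger failed: ' + ex.message);
}
```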

MIGRATING FROM TEST TO UAT

Same thing as migrating from DEV to TEST, except that the Story is set to Complete.  A scheduled job that runs every 10 minutes on the 7’s finds the completed story’s update sets and previews and completes them in the same order they were installed in the TEST instance.  If dependency issues are found, any update sets that were completed for the story are set back to In Progress.  See the Pipeline Strategy above.

Every 10 minutes on the 5’s, a scheduled job in UAT previews and completes any retrieved update sets.

PACKAGING THE RELEASE

We use the Agile 2.0 module, so we have Release (rm_release_scrum) records to relate our stories and update sets to a release.  The Package & Stage process is a UI Action on the Release record:

  1. Stories are supposed to be signed off by the stakeholder before the release date.  If a story is not signed off in time, then the story is moved to the next routine release.
  2. Scan each story in the release for dependencies on stories that are in a later release.  If there are any, move the story (and its update sets) to the later release.  If there are none, set the story to Complete.
  3. For the Completed stories, preview the local update sets in the same order in which they were installed (same rules as migrating from DEV->TEST->UAT).  Halt the process immediately if any issues are found.  Once the issues are resolved, the packaging process is re-started to pick up where it left off.
  4. Once all update sets are complete, create a batch update set and associate all the child update sets with it.  Mark the batch update set Complete.
  5. Make a REST call to the Production and other target instances to retrieve (Stage) the completed update sets.
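
Steps 4 and 5 in script form look roughly like this.  A rough sketch under assumptions: update set batching uses the “parent” field on sys_update_set (London and later), and the batch name and child filter are illustrative:

```javascript
// Rough sketch of step 4: create the release's batch parent and attach
// the completed child update sets to it via the "parent" field.
var batch = new GlideRecord('sys_update_set');
batch.initialize();
batch.setValue('name', 'Release batch - weekly'); // illustrative name
batch.setValue('description', 'Batch parent for this release');
var batchId = batch.insert();

var child = new GlideRecord('sys_update_set');
child.addEncodedQuery('state=complete^parentISEMPTY'); // illustrative filter
child.query();
while (child.next()) {
    child.setValue('parent', batchId);
    child.update();
}

batch.setValue('state', 'complete'); // ready for the targets to retrieve
batch.update();
```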

DEPLOYMENT TO PRODUCTION

Since we are dealing with a single batch update set, this is quite simple.  Use the OOB Preview and Commit UI Actions on the retrieved batch update set.  You can try to automate this like the rest of the pipeline, but you’ll still need to monitor for and resolve preview issues that might crop up.  In my experience, after doing a few hundred weekly releases involving 30-100+ update sets, I’d say about 5% throw preview issues.  And there are only a handful at most.  Most of our issues related to deployments have to do with story dependencies which require us to pull a story from the release and move it to another.

HANDLING PREVIEW ERRORS WITH AUTOMATION

I use an approach similar to your #2.  I have an after update business rule on sys_remote_update_set that fires when the state changes from previewing to previewed.  It calls a script include method to handle preview errors.  If there are any unresolved errors after that method is finished, then I halt the process.

The method queries the sys_update_preview_problem table for problems with the update set and loops through them, handling each type of error based on the problem description (using a regex).  Here’s how we handle them.  And if you can think of other solutions, I’m all ears.

  • /^found a local update/i - Collision with a local update that is newer – 9.5 times out of 10, the collision is with an update in the local Default update set.  Query the sys_update_set table where sys_id=problem.missing_item_update_set.  If it is the Default update set, then we ignore/accept the error.  Otherwise it should probably be skipped – human decision here.
  • /^could not find/i && !/this update requires/i - Missing Reference -
    • If description also contains /another uncommitted/, then you can ignore/accept the problem because you'll be committing the other update set too.
    • If the description matches /ecc_agent/, then this is caused by a MID server reference.  You can ignore/accept these, but you'll need to make sure you get the reference fixed in the target instance (by hand or by code)
    • (hack to overcome a ServiceNow bug) If the remote update's name matches /sc_cat_item/ and the problem description matches /wf_workflow/, then you can query the sys_update_xml table (remote_update_setISNOTNULL^name=problem.missing_item) to see if there any updates in another update set that will be committed.  If there are, ignore/accept the problem.  ServiceNow doesn’t always detect that the missing reference is in another uncommitted update set.
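
Put together, the handler is shaped like this.  A rough sketch, not our production script include: I’m assuming that setting a problem’s status to “ignored” mirrors the OOB Accept remote update action, and the regexes above are abbreviated:

```javascript
// Rough sketch of the preview-problem handler, called from the after-update
// business rule on sys_remote_update_set (previewing -> previewed).
var PreviewProblemHandler = Class.create();
PreviewProblemHandler.prototype = {
    initialize: function () {},

    // returns true when every problem was auto-resolved; false halts the pipeline
    handleProblems: function (remoteUpdateSetId) {
        var unresolved = 0;
        var p = new GlideRecord('sys_update_preview_problem');
        p.addQuery('remote_update_set', remoteUpdateSetId);
        p.addNullQuery('status'); // only problems nobody has touched yet
        p.query();
        while (p.next()) {
            var desc = p.getValue('description') || '';
            if (/^found a local update/i.test(desc) &&
                this._isDefaultSet(p.getValue('missing_item_update_set'))) {
                p.setValue('status', 'ignored'); // collision with Default: accept
            } else if (/^could not find/i.test(desc) &&
                       !/this update requires/i.test(desc) &&
                       (/another uncommitted/i.test(desc) || /ecc_agent/i.test(desc))) {
                p.setValue('status', 'ignored'); // benign missing reference: accept
            } else {
                unresolved++; // human decision required
                continue;
            }
            p.update();
        }
        return unresolved === 0;
    },

    _isDefaultSet: function (updateSetId) {
        var us = new GlideRecord('sys_update_set');
        return us.get(updateSetId) && us.getValue('name') == 'Default';
    },

    type: 'PreviewProblemHandler'
};
```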


Saib1
Tera Guru

@kevinanderson - Do you have the latest version of this automated process?

GG_Indirakumar
Tera Contributor

Hi Kevin, is it possible to push update sets from a lower instance to a higher instance using the same feature you provided?