
Maybe an alternative title would be "Evening ServiceNow Musings With Steve" or some such?
Anyway, I was reviewing our ServiceNow development process for the umpteenth time, and decided I would share what we have ended up with over the last couple of years.
Caveats:
1. It is imperfect. Still a work in progress. Not written in stone.
2. It is a hybrid. A mish-mash. Did I mention it is imperfect?
3. Sadly we do not currently use ServiceNow's SDLC. We have legacy systems for that: Rally and Quality Center.
Now with that nonsense out of the way I feel better about sharing.
Our company was an early adopter of ServiceNow (December 2007). For the next four years there was very little in the way of development control in the environment (read: none). In 2011 I was brought in by our CIO to help constitute a development team for ServiceNow in our company. Part of his mandate was to create a development process complete with standards documentation. This turned out to be a bit of a task. It would require adapting Agile development methodologies to a PaaS environment.
Our company has five instances. Three were already set up for the common promotion environments (Development, QA, Production). After some discussion we activated a fourth instance to be our Sandbox (for playing with new releases of ServiceNow, plugins, or evaluating 3rd party vendor products), and the final one to be our MAINT instance, ostensibly a parallel development environment. It turned out that the latter would have been ideal for our UAT environment (and may still end up being that).
Ideally I would have liked to have had four instances just for promotion, but was unable to justify the money for a sixth instance. UAT is the only thing missing from my recommended model. I cry about this nightly.
IDEAL:
Development -> Test (QA) -> Staging (UAT) -> Production
OURS:
Development -> Test (QA) -> Production
So, now that I have described our environment, what do we do about development? The process as we have it written:
The Constraints:
1. All development is done ONLY in the development environment. This INCLUDES ALL changes identified as being needed during QA testing. More on this later.
2. All of our QA and UAT will be done on the QA instance. Sigh.
3. Production is sacrosanct. It can only be modified by Admins via management sanctioned, fully tested, fully reviewed update sets. Control! Control! Control! If I sound rabid about this it is because I am! EXCEPTION: Data changes, database views, reports, etc. I am talking about development here after all.
4. We use SVN as a code repository. All code must be checked in separately to SVN. Currently this is manual and tedious. This allows for baselining, code comparison, and revision history with finer granularity than that allowed in ServiceNow. Besides, it was a mandate by management (and auditing).
5. Audit requires creation of SDLC documentation. In other words, an analysis and design document of some sort.
6. The Business Owners create the User Stories. Spike stories will be identified and created by Technical Management.
7. Business Analyst(s) will ferret out the User Story requirements.
8. Project Management oversees the processes.
9. We conduct two-week iterations with a release at the end of each.
10. We developed a cloning strategy to make sure everything stays in synch.
11. We turned on auditing for several table structures. This reduced the "who changed what" pain by a large degree.
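The manual SVN check-in in constraint 4 can be partially scripted. Here is a minimal sketch (the instance name, table, and sys_id below are hypothetical) that builds the per-record XML export URL ServiceNow exposes on every table, which is what we save to disk and commit:

```python
def export_url(instance, table, sys_id):
    """Build the URL that unloads a single record from a table as XML,
    suitable for saving to local disk and checking into SVN."""
    return (f"https://{instance}.service-now.com/{table}.do"
            f"?XML&sysparm_query=sys_id={sys_id}")
```

Fetching each URL and piping the result through `svn add`/`svn commit` gets you most of the way to automating step 4; we have not fully done this yet, hence "manual and tedious."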
Development:
1. We are an Agile-based shop. I have mixed emotions on this. I may share those in a later blog entry. All development will be done based on User Stories. Hopefully with decent requirements. But, then again, I dream a lot.
2. Creation of an Analysis Document. This is just a regurgitation of the requirements with any research the coders and/or Business Analysts might dig up during requirements investigation.
3. Creation of a Design Document. May include screen shots, mock-ups, code snippets. Whatever we can slap on the page to describe the modifications we plan on making. This document WILL be used by the QA team to create their test cases. Each coder is responsible for their own User Story.
4. Design is reviewed by the Technical Manager or Team Lead.
5. Creation of an Update Set, the name of which will contain the User Story number and title. One update set per user story. Yes, I know we coders are lazy and to us it makes sense to put it ALL in one update set, but the auditors are not crazy about losing traceability. Oh, and I might mention that it has the very nice added benefit of making it easier to roll back a single user story so as not to hold up an iteration release. 🙂
6. Unit testing during development. This is a must. As a technical manager I want to know that the code has been exercised and is functioning as desired. Here Sandbox could be used to test update sets (I recommend this process).
7. All of our developers must communicate. Given the nature of developers being anti-social I understand this can be a challenge, but persevere! Actually we were stepping on each other in some cases with our development. The coordination will save a lot of headaches.
8. Integration testing. We smoke test the overall functionality when we have several update sets shipping at once.
9. The developer then downloads all code modifications and, recently, all form mods to their local disk as XML. These are then checked into SVN. We originally checked in everything as XML and PDF; that became way too tedious. We also renamed the files prior to checkin, but that became a mess, so we now leave them as exported (with the sys_id in the name).
10. The developer then sends the team a code review request naming the SVN checkin number, the user story, and what modifications were made.
11. Assuming the developer does not get beaten up too badly, all recommended modifications to the code are made, tested, and re-checked in.
12. Developer notifies the Admin by email that the Update Set is ready for promotion to QA.
13. The Admin coordinates with the QA group as to when to apply the new update set.
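The one-update-set-per-story naming rule in step 5 is easy to enforce with a tiny helper. A sketch, assuming story numbers use a STRY prefix (yours may differ):

```python
import re

def update_set_name(story_number, title):
    """Compose the update set name: '<story number> - <title>'."""
    return f"{story_number} - {title}"

def is_valid_update_set_name(name):
    """True if the name follows the '<STRY#> - <title>' convention
    (the STRY prefix is an assumption; adjust to your numbering scheme)."""
    return bool(re.match(r"^STRY\d+ - \S.*$", name))
```

A check like this in a review checklist (or a before-insert business rule on sys_update_set) keeps the traceability the auditors want.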
Quality Assurance Testing:
1. After the update set has been promoted to QA the developer is responsible for support.
2. QA proceeds to test the new features. We record all defects in Quality Center.
3. The developer is responsible for resolving each defect. Defects are not to be worked until QA has completed the round of testing.
4. A new Update Set is created with the name {previous update set name} + "-2" etc. to contain the changes/fixes. This is a headache. We used to do update set scrubbing, but that was even more of a headache. Either way, rollbacks are still a headache. We use a lot of ibuprofen should we have a bad release.
5. After successful QA testing, QA notifies the business owner to begin UAT. The same process for defect discovery and resolution applies.
6. After successful UAT the QA team notifies Project Management that the User Story is ready for release to production.
7. Project Management is responsible for creation of the release plan, the rollback plan, and identification of support personnel. The developer is brought in to fill in any blanks.
8. At this point, or earlier, the Project Manager cuts a change request for the entire release and sends it through the approval process. PIR person is designated in the CR.
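The "-2" fix-round convention in step 4 can also be computed rather than typed. A sketch that assumes a trailing "-&lt;n&gt;" only ever comes from a previous fix round:

```python
import re

def next_fix_name(name):
    """Return the update set name for the next QA fix round:
    'X' -> 'X-2', 'X-2' -> 'X-3', and so on.
    Assumes a trailing '-<n>' always denotes a fix round."""
    m = re.match(r"^(.*)-(\d+)$", name)
    if m:
        return f"{m.group(1)}-{int(m.group(2)) + 1}"
    return f"{name}-2"
```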
Production Release:
1. The developer is responsible for assisting the Admin if necessary. Unless something goes very, very badly we don't usually roll back, but instead make it work in production via hotfix pushes.
2. Should a hot-fix be required for a post-production defect, it goes through the same process as a normal release.
3. Should a rollout to production fail it may be necessary to roll it back. We have yet to do this, but it is written down in our process and makes the auditors happy. 🙂
4. If you followed the process of one Update Set per User Story then rolling back a defective release actually gets pretty easy. You simply roll back the bad User Story.
5. Data updates are a bit trickier. You might want to back up any tables to be modified locally before applying a data import (update transform).
6. Admin smoke-tests the application.
7. If everything looks good then Admin updates the change request.
8. PIR verifies changes were implemented and updates change request.
9. Project Manager announces to concerned parties that the release has been completed successfully with a list of features pushed.
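Step 5's pre-import table backup is also worth scripting. A minimal sketch (instance and table names are hypothetical) that pairs ServiceNow's list-export CSV URL with a timestamped local filename:

```python
def backup_plan(instance, table, stamp):
    """Return (export_url, local_filename) for backing up a table
    before running a data import/transform against it."""
    url = f"https://{instance}.service-now.com/{table}_list.do?CSV"
    filename = f"{table}-backup-{stamp}.csv"
    return url, filename
```

Download the URL to the filename before the transform runs; if the import goes sideways, you at least have the prior state on disk.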
Post Release:
1. Depending on the size of the iteration release we do a clone of DEV and QA from Production at the end of each release.
End of process.
Side notes:
1. We have a fifth instance (MAINT) that we attempted to do parallel development on next to our DEV instance. This turned into a major nightmare when merging packages for release to QA and integration testing. We tried stepping the release from DEV and MAINT to QA (i.e. move one out to production first, then the other), but this also became an oversight nightmare (we went through several bottles of generic ibuprofen on that one). It was finally decided that it was simpler to just do everything on the DEV instance and deal with the minor coordination and "being stepped on" headaches. Things are a bit calmer now.
2. With Aspen we attempted to use the ServiceNow Chat available in the SocialIT plugin. While fun, we didn't find it to be a necessity for each production release (we have Microsoft Lync and softphones, so it is really duplicated functionality). I do recommend checking Chat out; you might find it works well for you. It has one big benefit: it keeps a record in ServiceNow of everything accomplished for the release to production.
3. During development we create development Knowledgebase articles to capture any new techniques or coding breakthroughs. This has proven very useful and keeps us from having to reinvent the wheel when covering the same or similar ground at a later time. Part of the analysis step is to go to the KB first (and record the usage in our analysis document).
4. We use Microsoft OneNote for our Analysis/Design document. We like the collaboration ability. Evernote or something like it would work just as well.
5. I use SoapUI to unit test my ServiceNow Web Services.
6. We started with Agile iteration, then moved to Kanban. After several weeks of Kanban we are back to iteration releases. Management was having difficulty seeing what was actually being done and were more comfortable with iteration releases. With Kanban we rolled each story out as it was completed. I have mixed emotions about both. Currently, if a story is incomplete for one iteration it is simply moved to the next. So we are doing Kanban under the guise of iteration. 🙂
7. If you have yet to decide on a product for either Agile Project tracking, or defect tracking then look at adopting ServiceNow's SDLC plugin.
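On side note 5: if SoapUI is not handy, the same direct web service calls can be exercised from a script. A sketch that builds a getRecords SOAP envelope (the table and query below are examples; the per-table namespace pattern follows ServiceNow's direct web services):

```python
def get_records_envelope(table, query):
    """Build a SOAP envelope for the getRecords operation of a ServiceNow
    direct web service. POST it to https://<instance>.service-now.com/<table>.do?SOAP
    with basic auth to retrieve matching records."""
    return f"""<soapenv:Envelope
  xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:tns="http://www.service-now.com/{table}">
  <soapenv:Body>
    <tns:getRecords>
      <sysparm_query>{query}</sysparm_query>
    </tns:getRecords>
  </soapenv:Body>
</soapenv:Envelope>"""
```

Asserting on the response fields gives you a repeatable unit test for each web service, which is essentially what we use SoapUI for.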
And that is our current process. Hope it helps you all in developing and/or fine-tuning your own processes.
Enjoy!
Steven Bell
Santander Consumer USA, Inc.
Dallas, TX