Legacy Workflow to Flow - Testing Using a Data Mining Approach
Continuing the migration effort started here: What do you do with more than 500 legacy ServiceNo... - ServiceNow Community, I would like to share some details about our unit testing process.
For the ServiceNow legacy workflow migration project we have been working on since this July, one of the bigger pain points is regression testing the new flows to ensure they match the myriad of logic pathways found in our legacy workflows. A common pattern is a single request form that contains a dropdown such as request type, which lets one form act as an intake portal for all the functions of a department or group. This often results in a workflow with multiple combinations of approvals, catalog tasks, process or integration scripts, and notifications coming off a switch statement or a series of if-else activities. The related forms are often rather technical in nature or contain extensive required fields for submission, leading to long and arduous manual testing via the service portal, submitting request after request to verify flow behavior.
These legacy workflows have been active in production for quite some time, many going back five to ten years. Some are attached to dynamic business processes, where the forms and related workflows are updated annually, while others haven't changed in years. All of the requests targeted for migration are submitted in high volume each year, so significant data exists representing how customers actually use these forms and workflows in the wild.
This long history of request data represented a pathway to automation, and several existing components could help speed up this testing. The first step is getting the data out. A custom library used in one of our integrations proved very helpful here: it takes a reference to a requested item and returns a JSON blob of the parent requested item and all of its associated variables in a name-value pair format. A minor fork of the library adjusted the output to add a display value to each name-value pair in the JSON package, along with the names of any reference tables the variables point to.
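To give a feel for the shape of that extract, here is a minimal sketch of such a library. The class name, output structure, and field selection are illustrative assumptions, not the actual script include from our integration; it simply walks the requested item's variables and dereferences reference-type values for a display value.

```javascript
// Sketch only: returns an RITM and its variables as name/value pairs,
// including a display value and the reference table where one applies.
var RitmVariableExtractor = Class.create();
RitmVariableExtractor.prototype = {
    initialize: function () {},

    extract: function (ritmSysId) {
        var result = { ritm: {}, variables: [] };

        var ritm = new GlideRecord('sc_req_item');
        if (!ritm.get(ritmSysId))
            return result;

        result.ritm = {
            number: ritm.getValue('number'),
            cat_item: ritm.getValue('cat_item'),
            opened_by: ritm.getDisplayValue('opened_by')
        };

        // Walk the m2m table that ties variables to the requested item
        var m2m = new GlideRecord('sc_item_option_mtom');
        m2m.addQuery('request_item', ritmSysId);
        m2m.query();
        while (m2m.next()) {
            var name = m2m.sc_item_option.item_option_new.name + '';
            var value = m2m.sc_item_option.value + '';
            var refTable = m2m.sc_item_option.item_option_new.reference + '';

            // For reference variables, resolve the stored sys_id to a display value
            var display = value;
            if (refTable && value) {
                var refGr = new GlideRecord(refTable);
                if (refGr.get(value))
                    display = refGr.getDisplayValue();
            }

            result.variables.push({
                name: name,
                value: value,
                display_value: display,
                reference_table: refTable
            });
        }
        return result;
    },

    type: 'RitmVariableExtractor'
};
```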
Extracting this data adds a bit of complexity. We can't run an ad hoc script like this in production, and our current production clone settings remove 99% of the data from our sub-prod environments. To resolve this, the clone exclusion and preservation rules were temporarily updated to produce a clone in a lab environment containing 90 days of catalog request and variable data, along with intact workflow execution contexts.
Working with GitHub Copilot and the GPT-4.1 model, a data extract script was created. The script has several parts to allow quick reuse across multiple catalog items. For a given catalog item, a list of target variables is provided. These fields are identified from the workflow audit, as workflow logic is tied directly to their values in switches, if-else activities, or script activities. The data mining script first queries the request data for the given catalog item and samples across the 90-day data set, pulling records whose form values cover all the possible combinations of the target fields. The selected records are shuffled for pseudo-randomness, and a PII cleanup process masks user data in the extract. The script then outputs the data set in two parts: first, a list of requested item ticket numbers, each with a prod URL (used for manual comparison); second, a JSON blob of the selected records.
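The sketch below shows the general shape of that sampling step, under a few assumptions: the RitmVariableExtractor class is the illustrative one from the earlier sketch, the target variable names, per-combination cap, instance URL, and masking are placeholders, and the real PII cleanup is considerably broader than the single-field mask shown here.

```javascript
// Background-script sketch of the data mining / sampling step (illustrative).
(function () {
    var CATALOG_ITEM = 'PUT_CAT_ITEM_SYS_ID_HERE';
    var TARGET_VARS = ['request_type', 'department']; // identified from the workflow audit
    var PROD_URL = 'https://YOUR_PROD_INSTANCE.service-now.com/';

    var extractor = new RitmVariableExtractor();
    var byCombo = {}; // one bucket per unique combination of target values

    var ritm = new GlideRecord('sc_req_item');
    ritm.addQuery('cat_item', CATALOG_ITEM);
    ritm.addQuery('sys_created_on', '>=', gs.daysAgoStart(90));
    ritm.query();
    while (ritm.next()) {
        var data = extractor.extract(ritm.getUniqueValue());
        var combo = TARGET_VARS.map(function (name) {
            var hit = data.variables.filter(function (v) { return v.name === name; })[0];
            return hit ? hit.value : '';
        }).join('|');
        if (!byCombo[combo])
            byCombo[combo] = [];
        if (byCombo[combo].length < 3) // cap samples per combination
            byCombo[combo].push(data);
    }

    // Flatten, crude shuffle for pseudo-randomness, then mask user data
    var sample = [];
    Object.keys(byCombo).forEach(function (k) { sample = sample.concat(byCombo[k]); });
    sample.sort(function () { return Math.random() - 0.5; });
    sample.forEach(function (s) {
        s.ritm.opened_by = 'test.user'; // the real cleanup also masks user-referencing variables
    });

    // Output 1: ticket numbers with prod URLs for manual comparison
    sample.forEach(function (s) {
        gs.info(s.ritm.number + ' ' + PROD_URL + 'sc_req_item.do?sysparm_query=number=' + s.ritm.number);
    });
    // Output 2: the JSON blob consumed by the recreation script
    gs.info(JSON.stringify(sample));
})();
```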
With the JSON blob in hand, the next step is recreating these tickets in our development environment, where the workflow-to-flow refactor efforts are underway. ServiceNow has a utility that is an exact match for this testing effort: CartJS provides an API to create requested items directly from a script. In the true vibe-coding spirit of the times, a few example JSON blob extracts from our data sample, the ServiceNow CartJS API details, and an example we had from a few years back were thrown at GitHub Copilot (GPT-4.1 again). Very quickly that produced a working script include: we send it the catalog item sys_id, the JSON blob, and a few flags (random sample, ticket count, target record), and it generates a new requested item matching the prod record exactly, aside from the user data scrub (real users replaced with test accounts).
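A hedged sketch of the core of that recreation step is below. The class name and method are illustrative assumptions; sn_sc.CartJS and its orderNow() call are the platform API, which takes the catalog item sys_id, a quantity, and a map of variable values and submits the order.

```javascript
// Sketch only: recreate one sampled prod ticket via CartJS.
var ProdTicketReplayer = Class.create();
ProdTicketReplayer.prototype = {
    initialize: function () {},

    // blob is the array produced by the extract script; index picks the record.
    replayOne: function (catItemSysId, blob, index) {
        var source = blob[index];

        // Rebuild the variable map exactly as captured in prod
        var variables = {};
        source.variables.forEach(function (v) {
            variables[v.name] = v.value;
        });

        // Submit a single-item order; orderNow returns the request number and sys_id
        var cart = new sn_sc.CartJS();
        return cart.orderNow({
            sysparm_id: catItemSysId,
            sysparm_quantity: '1',
            variables: variables
        });
    },

    type: 'ProdTicketReplayer'
};
```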
The final step is creating a fix script that bundles the extracted production data blob, the catalog item sys_id, the ticket generation settings (random sampling, generation count, or reproduction of a single target record), and a call to our CartJS-based script include. The result is a reusable package that can be executed over and over to generate one or more of the sampled prod tickets against the new flow, giving quick validation of behavior for the catalog item currently undergoing migration. We also write the production ticket number to a text field on the new RITM so the unit tester or QA individual using the fix script can easily reference the source data.
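For illustration, the fix script might take roughly the shape below. The ProdTicketReplayer class and the u_source_prod_ticket field are the assumed names from the earlier sketches, the settings are placeholders, and the embedded data array is truncated here; the real script carries the full extract.

```javascript
// Fix-script sketch (illustrative): replay sampled prod tickets against the new flow.
(function () {
    var CAT_ITEM_SYS_ID = 'PUT_CAT_ITEM_SYS_ID_HERE';
    var PROD_DATA = [ /* extracted JSON blob pasted here */ ];

    // Generation settings: random sample, fixed count, or a single target record
    var RANDOM_SAMPLE = true;
    var TICKET_COUNT = 5;
    var TARGET_PROD_NUMBER = ''; // e.g. a specific RITM number to reproduce one ticket

    var indexes = [];
    if (TARGET_PROD_NUMBER) {
        PROD_DATA.forEach(function (rec, i) {
            if (rec.ritm.number === TARGET_PROD_NUMBER)
                indexes.push(i);
        });
    } else {
        for (var i = 0; i < PROD_DATA.length; i++)
            indexes.push(i);
        if (RANDOM_SAMPLE)
            indexes.sort(function () { return Math.random() - 0.5; });
        indexes = indexes.slice(0, TICKET_COUNT);
    }

    var replayer = new ProdTicketReplayer();
    indexes.forEach(function (idx) {
        var result = replayer.replayOne(CAT_ITEM_SYS_ID, PROD_DATA, idx);

        // Stamp the source prod number onto the new RITM for the tester
        var ritm = new GlideRecord('sc_req_item');
        ritm.addQuery('request', result.request_id);
        ritm.query();
        while (ritm.next()) {
            ritm.setValue('u_source_prod_ticket', PROD_DATA[idx].ritm.number); // assumed custom field
            ritm.update();
        }
        gs.info('Created ' + result.request_number + ' from ' + PROD_DATA[idx].ritm.number);
    });
})();
```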
This end-to-end automation pipeline, from data mining in a dedicated production clone to bulk RITM creation via CartJS, has significantly sped up unit testing to verify that migrated flow behavior exactly matches the source legacy workflow. Unit testing is now a rapid, repeatable, and auditable process that delivers high-accuracy regression coverage in a much shorter timeframe. Developers can quickly validate their migrated flows against a selection of scenarios covering most, if not all, pathways in the process, allowing them to find and fix defects in a fraction of the time. This capability has been foundational to the success of the workflow modernization project, ensuring every migrated flow is delivered with a high degree of confidence and quality.
Labels: Service Catalog
This is part 3 of an ongoing journal.
Part 1 can be found here: What do you do with more than 500 legacy ServiceNo... - ServiceNow Community
Part 2 can be found here: Modernizing a Complex Legacy ServiceNow Workflow w... - ServiceNow Community
