
Legacy Workflow to Flow - Process updates and vibe coded apps

kevinanderson
Giga Guru

This project began with the following posts from 2025, when we kicked off our legacy workflow to flow migration last July with the mighty task of moving more than 500 legacy catalog workflows over to Flow Designer.

 

Part 1

What do you do with more than 500 legacy ServiceNo... - ServiceNow Community

 

Part 2

Modernizing a Complex Legacy ServiceNow Workflow w... - ServiceNow Community

 

Part 3

Legacy Workflow to Flow - Testing Using a Data Min... - ServiceNow Community

 

 

I have wanted to provide an update for some time, but the process had grown a bit stale: the same rinse-and-repeat 25-step process for each target catalog item workflow to get it converted to flow and documented sufficiently for QA team handoff and review.  Since those last posts, two things have fundamentally changed: tooling maturity (for example, VS Code agents) and ever-improving AI models.

 

After opening the process up to additional team members, some helpful feedback came in regarding the workflow analysis and the data mining and RITM ticket replay testing process. Additionally, the capabilities of the AI models we have been using since July 2025 have improved drastically.  At the start of the project, the output from an AI interaction was verbose but required considerable human refinement to get either solid working code or a good document.  That has changed significantly in the last 8 months.  Current top-tier models consistently produce accurate, fully functional code.  The AI can now author hundreds to thousands of lines of code in a single prompt that often work the first time they are executed.  Documentation produced by the AI is now robust, thorough, and very accurate when sufficient inputs are provided to the LLM.

 

This rapid evolution of the AI models we are using in this project has opened up some unique opportunities.  Ideas for micro apps that might previously have taken a week to produce can now be prototyped to a sufficient MVP in a day or less.  Output produced by the AI for different steps of the current process moves through the gates faster now, as it needs much less human review and editing to be completed.  While review is still needed, the amount of required human rework has dropped significantly.  Agent mode in VS Code is accelerating these steps even further.   Where before there was a healthy amount of copy-paste of information back and forth between final documents and the AI chatbot, we can now assemble inputs into a file folder structure, and with enough example documents provided, a few simple prompts will get the agent producing multiple documents at a time.  The addition of these models directly into MS Word has also helped accelerate document authoring.

 

All of this together has accelerated each individual catalog item workflow migration.  What was once a week of work is compressing to 3 or 4 days.  What was once a day of work has also shrunk significantly in the last few months, to maybe a few hours now.  This is opening up time for exploration where it did not exist before.

 

What have we done with this extra time?  Improved documentation.  Continued buildout of process amendment documentation.  All of this is added to the LLM primer document that is used to start each AI chat or agent interaction. This document is the entire project bible, with in-depth migration instructions and examples for the 25-step process.  Additionally, the script-heavy data mining and replay steps have been consolidated into a single application for ease of use.  Going a little classic, a UI page (Jelly syntax!) and a script include were used to quickly mock out a user interface for both the data mining and RITM replay scripts.  This relatively simple app comes in at about 6,500 lines of code and was built out in 2-3 days with VS Code agent mode using Claude Haiku 4.5.  It has been a fantastic time savings, as it opens up the data mining and replay to developers at any skill level, and the new app allows us to extract and replay a single targeted RITM ticket for quick validation of QA findings.  This was a critical upgrade, as it helped the project stop being so script-driven for this important testing step.
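The app's source isn't included here, but the replay idea can be sketched in plain JavaScript. All names below are hypothetical; the real app is a Jelly UI page plus a script include that queries the RITM and variable tables through platform APIs. The sketch just shows the core step of turning the variables extracted from an existing RITM into a payload that could be resubmitted against the flow-backed catalog item:

```javascript
// Hypothetical sketch of the RITM replay idea (not the actual app code).
// extractedRitm is assumed to look like what a data mining pass might return:
// { number, catItemSysId, variables: [{ name, value }, ...] }
function buildReplayPayload(extractedRitm) {
  var payload = {
    sysparm_id: extractedRitm.catItemSysId, // target catalog item
    variables: {}
  };
  extractedRitm.variables.forEach(function (v) {
    // Skip empty values so defaults on the migrated item still apply
    if (v.value !== '' && v.value !== null) {
      payload.variables[v.name] = v.value;
    }
  });
  return payload;
}

// Example: replay a single targeted RITM for quick validation of a QA finding
var payload = buildReplayPayload({
  number: 'RITM0010001',
  catItemSysId: 'abc123',
  variables: [
    { name: 'requested_for', value: 'user1' },
    { name: 'comments', value: '' }
  ]
});
```

In the actual instance, the payload would then be submitted through whatever ordering mechanism the app uses, producing a new RITM whose flow execution can be compared against the legacy workflow's result.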

 

Another observation from team members who participated in these migrations was that the legacy workflow analysis is somewhat tedious. The original process made use of the (new at the time) multi-modal capability of the models available to us in-org.  While image analysis with the AI, alongside AI analysis of update sets containing the legacy workflow, was innovative at the time, the process was still quite manual and required considerable review and correction of the first-pass AI analysis of the image.  After a recent water cooler discussion, an attempt was made to "one-shot" prompt this problem with the top-tier model we have access to: GPT-5.2 Codex.  Amazingly, within about 30-45 minutes of work, the AI in agent mode produced a roughly 700-line script that could take a legacy workflow version sys_id and produce a document with a workflow-to-text breakdown, a business process summary, and a script appendix, very similar to the semi-hand-built document we have been creating for each migration thus far.  This is a huge time savings: it reduces translation errors and gives us an automated input for workflow decomposition for the analysis and flow buildout documents.
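The generated script itself isn't shown here, but the workflow-to-text idea can be sketched. In the instance, a script like this would query the wf_activity and wf_transition tables for the given workflow version sys_id; this hedged plain-JavaScript sketch takes the graph as plain objects instead, and just walks activities and their transitions into an ordered text breakdown:

```javascript
// Illustrative sketch only: render a legacy workflow graph as text.
// activities: [{ sys_id, name, type }], transitions: [{ from, to, condition }]
function workflowToText(activities, transitions, startId) {
  var byId = {};
  activities.forEach(function (a) { byId[a.sys_id] = a; });

  // Group outgoing transitions by source activity
  var outgoing = {};
  transitions.forEach(function (t) {
    (outgoing[t.from] = outgoing[t.from] || []).push(t);
  });

  // Breadth-first walk from the start activity, emitting one line per
  // activity and one indented line per outgoing transition
  var lines = [];
  var visited = {};
  var queue = [startId];
  while (queue.length) {
    var id = queue.shift();
    if (visited[id]) continue;
    visited[id] = true;
    var act = byId[id];
    lines.push(act.name + ' [' + act.type + ']');
    (outgoing[id] || []).forEach(function (t) {
      lines.push('  -> on "' + t.condition + '" go to ' + byId[t.to].name);
      queue.push(t.to);
    });
  }
  return lines.join('\n');
}

// Example: Begin -> Approval -> End
var text = workflowToText(
  [
    { sys_id: 'a1', name: 'Begin', type: 'start' },
    { sys_id: 'a2', name: 'Approval - User', type: 'approval_user' },
    { sys_id: 'a3', name: 'End', type: 'end' }
  ],
  [
    { from: 'a1', to: 'a2', condition: 'Always' },
    { from: 'a2', to: 'a3', condition: 'Approved' }
  ],
  'a1'
);
```

The business process summary and script appendix in the real document would come from additional passes (activity conditions, embedded scripts), but the same graph walk is the backbone of the decomposition.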

 

Things are moving faster.  The next step is to tackle the QA gate, which has been a fully manual process throughout the project.  Hopefully we can use ATF Jasmine scripts to bring some automation to that step as well.  Stay tuned for more updates!
