
Flows have become the default for just about any piece of processing logic executing in ServiceNow. With the heavy push to make ServiceNow the leader in low-code / no-code development platforms, what I'm seeing in the real world is a trend of reaching for this same tool in our toolbelt over and over again. As more and more OOB flows are used, as we step back from standard business rules, and as we keep defaulting to our go-to solution during implementations, we can actually create performance issues for ourselves if we aren't careful with how we design in ServiceNow.
As the number of flows grows across the platform, your scheduled worker queues may have a harder time keeping up with the load, especially if your flows tend to be inefficient or are called at a large scale. Here's a real-world example I witnessed that brought a customer's instance to its knees. They had a large table of sold products (several million records) which was fed from an external source of truth via an import set API. Some additional processing was required after each update to one of these records, so we instinctively went for our trusty solution to all things in ServiceNow...a flow. The flow itself was efficient and quick to execute. That wasn't the problem. What we failed to account for was the scale at which this flow COULD be called. In a normal day-to-day scenario, it was only being called a few dozen times per day...no issues. However, when the customer undertook a data quality effort and needed to clean up those millions of Sold Product records by slamming that API with work, our flow started firing back to back millions of times in a row, clogging up every scheduled worker queue to the point where no other flows were executing in the instance. Imagine going to the DMV and every window had a line a mile long. That's what we created.
So, what's an alternative? Event-based design patterns.
Event-based processing does not impact the scheduled worker queues. When you register an event, you also have the option to establish a dedicated event queue to process that particular event, which helps when you're working at an extremely large scale. Your standard coding and design best practices still apply. Keep your code small and efficient. It is better to perform small amounts of processing, calling that code many times, than to process a large amount all at once. Make sure the columns you query are indexed. Write code defensively: use setLimit() on queries and check that returned values and event parms are valid before you use them in additional queries.
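As a rough illustration of that defensive style, here's a minimal sketch; the table, columns, and sys_id value (u_sold_product, u_account, u_model) are made up for the example.

```javascript
// Illustrative only: table, column, and sys_id values are hypothetical.
var accountSysId = '46d44a5dc0a8010e0000b1b9bf5d7d1d'; // e.g. passed in via event.parm1
var gr = new GlideRecord('u_sold_product');
gr.addQuery('u_account', accountSysId); // query against an indexed column
gr.setLimit(100);                       // defensively cap the result set
gr.query();
while (gr.next()) {
    var modelId = gr.getValue('u_model'); // validate values before reusing them
    if (!modelId) {
        continue; // skip incomplete records instead of issuing bad follow-up queries
    }
    // ...small, efficient unit of processing on this record...
}
```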
Here's a picture of what this looks like.
With just a little bit of setup, you're off and running:
- Create a new event in Event Registry
- (Optional) Create a dedicated event queue
- Write your code in a Script Action (see the sketch below)
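To make that last step concrete, here's a minimal sketch of a Script Action body. It assumes the event was fired with the record's sys_id in parm1; the event itself, the table, and the column names are hypothetical.

```javascript
// Script Action sketch: runs once per fired event.
// In a Script Action, 'event' exposes parm1/parm2 and 'current' is the record
// that was passed to gs.eventQueue().
(function processSoldProductEvent() {
    var soldProductId = event.parm1; // hypothetical: sys_id passed as parm1
    if (!soldProductId) {
        return; // defensive: nothing to process
    }

    var gr = new GlideRecord('u_sold_product'); // hypothetical table
    if (gr.get(soldProductId)) {
        gr.setValue('u_needs_review', true);    // hypothetical column
        gr.setWorkflow(false);                  // optionally avoid re-triggering other logic
        gr.update();
    }
})();
```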
The only thing left is to figure out when and how your event will be triggered. It could come from a business rule, an AI Action, or any other place where server-side code executes. Notice I left flows off that list, since they're kinda what we're trying to get away from in this case.
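For example, an async business rule on the source table could fire the event with gs.eventQueue(); the event and queue names below are hypothetical, and the optional fifth argument only matters if you created a dedicated event queue in step 2.

```javascript
// Async business rule on the Sold Product table (hypothetical names).
// gs.eventQueue(eventName, record, parm1, parm2, queueName)
(function executeRule(current, previous /*null when async*/) {
    gs.eventQueue(
        'x_acme.sold_product.updated',  // hypothetical event registered in Event Registry
        current,                        // available as 'current' in the Script Action
        current.getUniqueValue(),       // parm1: sys_id of the record
        gs.getUserID(),                 // parm2: user who made the change
        'sold_product_events'           // optional: dedicated event queue name
    );
})(current, previous);
```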
When your server-side code fires the event, an entry is created in the Event Log table and is processed by calling the associated Script Action. No scheduled workers are touched in the process, leaving the hundreds or thousands of other flows in the instance to continue executing like normal. The DMV lines stay short, and you're a happy ServiceNow architect.