Creating tasks in series such that completion of one triggers creation of another
‎08-24-2017 08:29 AM
Hey all, I've been beating my head up against the wall in trying to implement this for several days, and I'm convinced that it's because I'm trying to go up against a best practice. I'm hoping I can get some thoughts on the best way to implement this.
We have a need to create a process workflow around decommissioning servers from our datacenter. The process has around 15 steps that have to be executed in a specific sequence.
For example, the process starts off with a request to our enterprise monitoring team to disable monitoring on a server that is being decommissioned. The next step is for the network team to take the server out of the load balancer. Those steps have to happen in that specific order, because if the server is taken out of the load balancer prior to the monitoring team taking it out of monitoring, they'll get alerts that transactions aren't being processed by the server that is being decommissioned.
We already have several catalog items that create requested item tickets to the respective group that needs to handle each part of the process. What I need is an overarching workflow to handle the sequence that these requested items need to be created in, along with change requests that need to be submitted as the process reaches certain phases, notifications that need to be sent out, and so on.
But I'm hitting a brick wall in how to implement this.
Our first inclination was to just create request tasks. The problem with that is that as I mentioned above, most of these steps already exist as requested items that are created by catalog items. It seems extraordinarily inefficient to me to have, for example, seven requested items submitted via the catalog item to disable monitoring, and four request tasks submitted via the decommission process. Reporting would quickly become a nightmare, and teams would undoubtedly get confused over why they're having to process these requests as two different ticket types.
Okay, so my next idea was that the server decommission process should be stored as a request record (not requested item). The request should have a workflow attached to it that drives creation of the requested items for each team as they're needed. I cobbled together a record producer to create the request. I don't want to use an order guide because, like I said above, there are around 15 different things that need to happen. I don't want a user to have to sit there and tab through 15 catalog item screens when about the only thing that's actually required is a server name, location, and IP address (which can almost always be pulled from the CI data, but I digress...). Also, I don't see any way for an order guide to implement the sequential nature of the request that we need. You can't, for example, have an order guide wait to create the request to run a full system backup until after the request to stop the services is complete.
I definitely want the tasks to be created sequentially. If I create them all up front, I run into three issues: 1) teams will get confused because there will be requests sitting in their queue that they're not supposed to work on yet, 2) reporting becomes a nightmare because a lot of reports are based on when tasks are created, and 3) it seems a bit crazy to create all the tasks up front when, if something changes along the way and the decommission request has to be cancelled, all of the pre-generated tasks would have to be cancelled as well.
So anyway, I created a record producer and modified the out-of-the-box request workflow to kick off a sub-workflow that I had planned on creating the individual requested item tasks and managing what gets kicked off and when. But then I read this in the documentation:
Note: If a workflow contains a Create Task activity that has executed on the current record, additional Create Task activities in the workflow or in subflows cannot create new task records. To create additional tasks in this situation, add a Run Script workflow activity that creates a task with a script.
@#$*!, seriously?
Okay, so I created some Run Script activities to create the requested items. The problem is that, unlike the Create Task activity, I can't get them to run serially this way. Once the task is created by the script, the workflow just keeps blowing right on through.
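For reference, here's roughly what one of those Run Script activities looks like. The GlideRecord stub at the top is only there so the snippet runs outside an instance (delete it when pasting into a Run Script activity, where the real server-side API exists); the catalog item sys_id and variable names are made up.

```javascript
// Minimal stand-in for ServiceNow's server-side GlideRecord, included only
// so this sketch is self-contained. On an instance, the real API is used.
function GlideRecord(table) { this.table = table; this.fields = {}; }
GlideRecord.prototype.initialize = function () { this.fields = {}; };
GlideRecord.prototype.setValue = function (name, value) { this.fields[name] = value; };
GlideRecord.prototype.insert = function () { return 'ritm_sys_id'; };

// Stand-in for the workflow's `current` record (the decomm request).
var current = { request: 'REQ0010001', variables: { server_name: 'srv-web-01' } };

// --- Run Script activity body (illustrative field and variable names) ---
var ritm = new GlideRecord('sc_req_item');
ritm.initialize();
ritm.setValue('cat_item', 'sys_id_of_remove_from_monitoring'); // hypothetical sys_id
ritm.setValue('request', current.request); // keep the child on the same REQ
ritm.setValue('short_description',
  'Remove ' + current.variables.server_name + ' from monitoring');
var childId = ritm.insert();
```

The catch, as described above, is that insert() returns immediately; nothing here blocks the workflow until the child item closes.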
I've tried a couple of options to make one task wait for another to complete before it's created. My first thought was to insert a Wait for WF Event activity and create a business rule on the requested item table that posts an "I'm done!" event back to the workflow. That works, but the problem is that if I have two requested items running simultaneously (such as a request item to disable a network switch port and a request item to deprovision storage, which aren't dependent on each other), how do I know which task completed? The Wait for WF Event activity doesn't take any parameters, so there's no way to differentiate one "I'm done!" message from another without creating a unique event message for each step in the process. Of course, then you run into issues where the business rule that fires when a requested item is closed has to determine which specific message should be sent, probably in a giant switch or if-else-else-... statement, which is less than ideal.
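Here's the shape of the business rule I was experimenting with. One way to avoid the giant switch is to derive the event name from the record itself, so a single generic rule serves every step and two children finishing in parallel post two distinguishable events. The Workflow stub is just so the snippet runs outside an instance (new Workflow().broadcastEvent(contextId, eventName) is the real API), and the way I find the waiting context here is hand-waved:

```javascript
// Stand-in for ServiceNow's Workflow API so this sketch runs standalone;
// on an instance, new Workflow().broadcastEvent(contextId, eventName) is real.
var broadcasts = [];
function Workflow() {}
Workflow.prototype.broadcastEvent = function (contextId, eventName) {
  broadcasts.push({ context: contextId, event: eventName });
};

// Stand-ins for the business-rule globals (after-update rule on sc_req_item).
var current = {
  cat_item: 'cat_item_sys_id_123',         // which catalog item this RITM is
  parent_context: 'wf_context_sys_id_456', // hypothetical lookup of the waiting context
  state: 'closed_complete'
};

// --- business rule body (illustrative) ---
// Encode the completed item in the event name so one rule covers every step.
if (current.state === 'closed_complete') {
  var eventName = 'child.done.' + current.cat_item;
  new Workflow().broadcastEvent(current.parent_context, eventName);
}
```

Each step's Wait for WF Event activity would then need to be configured with the matching per-item event name, which is exactly where this approach gets clunky.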
So then I came up with the idea of storing the running tasks in a variable associated with the workflow. When a task completes, the business rule would check all contexts for a workflow variable that lists that requested item among the running items, and remove it. Then maybe I could use that in a Wait for Condition activity to allow the workflow to proceed. (Maybe? Frankly, I don't even know if that would work, whether the condition would be re-checked when a workflow variable changes, but I'm grasping at straws here.) But then that idea came to a halt because I can't seem to access workflow variables from business rules, and I can't even update them within the....
I'm quickly running out of ideas, and I can't seem to find any documentation on how to implement a serialized task workflow. At this point, I'm contemplating even more esoteric solutions like creating my own workflow activity that duplicates the Create Task functionality, but that leaves off the existing task check. But we're getting into the realm of stuff that's really hard to implement and nigh impossible to maintain.
I can't believe that this is such an odd requirement that someone hasn't run across it and solved it before.
Anyone have any advice or help I can use to make this happen? Am I just missing something obvious, a best practice that I'm trying diligently to work around and should be?
Help!
- Labels: Service Catalog, Workflow
‎08-24-2017 02:46 PM
A lot depends on your process and how you want to work.
We try to keep the system OOB as much as possible, and keep it simple. As a catalog item, the system is built to maintain and work it: you don't have to do any customizations, and upgrades go easier. When you start building custom workflows on the request table and not using tasks, groups have to change how they work depending on the item, which isn't a good thing. They start asking why this one is just a task and that one is a catalog task. Cascaded fields and variables don't work right, and you have to add fields to the request table for one item instead of using variables, etc.
We do not process ANY items as a flat request except things that are one-offs or just can't be part of the catalog because they aren't standardized. Bottom line: if it needs a workflow, it goes in the catalog.
‎08-26-2017 06:34 AM
I still think there's a disconnect here. I definitely want to keep the system OOB as much as possible, but here's where I'm getting tangled up.
Let's say that I take your approach to implement this. I create a new catalog item called "Server Decommission" that creates a requested item. I build a workflow and set it as the workflow attached to the catalog item called "Server Decomm WF". So far, so good.
Okay, the first thing I want that workflow to do is to create another requested item, "Remove Server from Monitoring." That requested item can also be created separately by a different catalog item, because there are times when people want to remove servers from monitoring that have nothing to do with decommissioning anything. That's why I don't want that to be a request task, because then if someone wants to know how many times servers were taken out of monitoring for the month, they'd have to report on requested items (generated by the catalog item) AND requested tasks (generated by the Server Decomm WF), not just requested items. Whether the request comes as a result of a server being decommissioned or because of a normal maintenance window doesn't matter to the Enterprise Monitoring team, and I don't want to have to explain to them that they'll be getting requests to disable monitoring via both RITMs (from catalog items) and TASKs (from decommission requests) assigned to their queue.
The second thing I want to do is create another requested item to take the server out of the load balancers. Again, I can't make this a TASK record because there may be other reasons to take servers out of load that have nothing to do with decommissioning; it needs to be a requested item like all other take-servers-out-of-load requests. So I can't use a Create Task workflow activity to launch either of these tasks and wait, because you can't have two Create Task activities on a workflow to create multiple tasks. (Why, ServiceNow? WHYYYY!!?)
I can't create the tasks in a Run Script activity, because as soon as it creates the task to remove the server from monitoring, the workflow will execute the second script that creates the requested item to take the server out of load, and I need it to wait for the first requested item to complete before proceeding.
My understanding of your suggestion is to create another workflow, called something like "Remove Server from Monitoring Subflow," that does nothing but run a Create Task activity to create the RITM to remove the server from monitoring, and insert it next to the Start activity on the Server Decomm WF workflow; then create a "Remove Server from Load Balancer Subflow" that does nothing but run a Create Task activity to take the server out of load, and insert it after the monitoring subflow.
Am I understanding this correctly? Because if I do that, I'll effectively have to create two workflows associated with every requested item: One that is launched when the RITM is created from the catalog item, and one that exists solely to execute as a subflow in a parent workflow. That seems really broken to me, and not how ServiceNow is intended to work.
I don't see a way around this whether the parent workflow is attached to a REQ, a RITM, or any other record. My issue here isn't so much what type of record is the parent, it's how to drive a serialized workflow without falling back on stuff like creating two workflows for every request. I want to keep this sane and maintainable.
‎08-28-2017 06:28 AM
The issue here is that you are trying to keep each item (remove from monitoring, etc.) on ONE request, which means you can't run it as a child workflow; nor, for that matter, can you run it as its own requested item.
So you have an easy path and a slightly more difficult path. The more difficult path is almost exactly the same, but involves scripting the creation of items INSIDE the decom workflow and then pausing for them to complete. I prefer the order guide method, as it is easier for everyone else to understand.
EASY: Make server decom an order guide. In it, gather the server to be decommed, etc. (If possible, give these variables the EXACT same names they have on the child items.)
I want to collect any variables used on multiple items on the order guide instead of on the individual items where I can, and carry them through.
Now, on each child item, if the variables are NOT named the same, write an onLoad script to make them read-only if not empty. (You can also use an onLoad script here that checks whether order_guide is not empty, I think.)
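The onLoad script would be something like this; the g_form stub is only there so the snippet runs outside an instance (getValue/setReadOnly are the real client API), and the variable name is just an example:

```javascript
// Tiny stand-in for ServiceNow's client-side g_form API so this sketch is
// self-contained; on an instance the real g_form is available and the stub
// goes away.
var g_form = {
  _values: { server_name: 'srv-web-01' }, // pretend the order guide cascaded this in
  _readOnly: {},
  getValue: function (name) { return this._values[name] || ''; },
  setReadOnly: function (name, flag) { this._readOnly[name] = flag; }
};

// --- onLoad catalog client script (illustrative variable name) ---
function onLoad() {
  // Lock any variable the order guide already filled in so the requester
  // can't change it on the child item.
  if (g_form.getValue('server_name') !== '') {
    g_form.setReadOnly('server_name', true);
  }
}
onLoad();
```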
Create a rule to call each individual item used in the order guide (or put in checkboxes for things like "is the server modified").
Create a workflow for the server decom item itself. In that workflow, set stages to cover each item that is a member of it, for example "awaiting DNS removal," etc.
Now comes the tricky part: in each item's workflow, you are going to put in a test for whether current.workflow is blank.
If it is NOT blank, add an If that looks at the stage of the server decom item for this request, gets its stage, and ensures it is at the NEXT step; if so, it proceeds.
At the end of the workflow, just before the End activity, write a script block that updates the decom item's work notes with a short work note (e.g., "DNS removal completed"); this triggers the decom item to re-check its conditions in its workflow. Make sure the stage on this item is Closed Complete.
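That end-of-workflow script block would look roughly like this. The GlideRecord query pattern (addQuery/query/next/update) is the real server-side API, but the stub and the "server_decom" identifier here are stand-ins so the sketch runs anywhere:

```javascript
// Stand-in GlideRecord with just enough behavior for this sketch; on an
// instance the real server-side API is used and the stub is removed.
function GlideRecord(table) {
  this.table = table;
  this.queries = [];
  this._rows = [{ request: 'REQ0010001', cat_item: 'server_decom', work_notes: '' }];
  this._i = -1;
}
GlideRecord.prototype.addQuery = function (f, v) { this.queries.push([f, v]); };
GlideRecord.prototype.query = function () { this._i = -1; };
GlideRecord.prototype.next = function () { return ++this._i < this._rows.length; };
GlideRecord.prototype.setValue = function (f, v) { this._rows[this._i][f] = v; };
GlideRecord.prototype.update = function () { return true; };

var current = { request: 'REQ0010001' }; // the child item whose workflow is ending

// --- script block just before the End activity (illustrative names) ---
// Find the server decom item on the same request and stamp a work note;
// that update is what makes the decom workflow re-evaluate its waits.
var decom = new GlideRecord('sc_req_item');
decom.addQuery('request', current.request);
decom.addQuery('cat_item', 'server_decom'); // hypothetical identifier for the decom item
decom.query();
if (decom.next()) {
  decom.setValue('work_notes', 'DNS removal completed');
  decom.update();
}
```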
Now, in the server decom workflow, you have a series of Ifs and catalog tasks (for the steps that don't fit in any specific item).
So, for example, if your process is:
turn down server --> remove firewall rules --> remove DNS --> physically remove server --> cleanup (no item for this)
your stages for your decom item workflow would be:
- Waiting on approval (value 100)
- Waiting on firewall rule cleanup (value 1000)
- Waiting on DNS cleanup (value 2000)
- Waiting on physical server decom (value 3000)
- Final cleanup stage (value 10000, which allows for extra stages later)
Now, the firewall item looks on the sc_req_item table for the server decom item with the same value for request; if that item's stage is >= 1000, it fires.
Once it is done, it updates that item's workflow with a note, "Firewall removal complete."
In the server decom item, the If is looking for the sc_req_item on the same request for FIREWALL to be in stage Closed Complete or Closed Incomplete.
So once firewall is done, it prompts server decom to check the If where it is waiting on the firewall; the firewall item is now in a complete stage, so the decom workflow goes to the next step. This will be a branch that goes to an If that waits for DNS cleanup to close (same type of logic), and on the other branch a 5-second delay (to let the stage update) and a script to update the DNS cleanup item with a work note that firewall rule cleanup for request number xxxx is complete. This will trigger the DNS item, which was waiting for the decom item stage to be >= 2000, to check its condition; it meets it, so it will proceed.
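The "stage >= 1000" check in each child item can be written as a Wait For Condition script along these lines. The stage values are the made-up ones from the example above, and the decom-item lookup is stubbed so the sketch runs standalone (on an instance it would be a GlideRecord query against sc_req_item):

```javascript
// Map the example's stage names to their numeric values (from the list above).
var STAGE_VALUES = {
  waiting_approval: 100,
  waiting_firewall_cleanup: 1000,
  waiting_dns_cleanup: 2000,
  waiting_physical_decom: 3000,
  final_cleanup: 10000
};

// Stand-in for looking up the server decom item on the same request; on an
// instance this would be a GlideRecord query against sc_req_item.
function getDecomStage(requestId) {
  return 'waiting_dns_cleanup'; // pretend the decom workflow has moved past firewall
}

// --- Wait For Condition script on the firewall child item (illustrative) ---
// The firewall step may run once the decom item has reached at least the
// firewall stage (value 1000).
var decomStage = getDecomStage('REQ0010001');
var answer = STAGE_VALUES[decomStage] >= STAGE_VALUES.waiting_firewall_cleanup;
```

Remember that this condition only gets re-evaluated when the waiting item is updated, which is why the work-note ping described above matters.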
hope that makes sense.
‎08-28-2017 07:20 AM
This is going to take me a while to digest, but I wanted to at least post a reply saying thanks for hanging in with me on this. I'll post an update a bit later after I've processed your reply above and tried some stuff in my personal instance.
--Dennis R
‎08-28-2017 07:27 AM
No problem at all. The trick here is that we want all the sub-items on the SAME request as the decom item, and we want each one of them to pause until the parent item's workflow hits its stage.
The main workflow needs to pause for each item to be complete!
Now, if you could teach them to report on their items AND decom items, it would be a TON easier: no need to pause the main workflow through funky Ifs that update each other, just a need to add the items in as children and feed variables into them from the decom workflow. *grins*
The trick that most people don't get is the update. When a workflow hits a Wait For, it doesn't actually re-check that condition unless the ITEM is updated. So if you put in a funky wait like "wait for record x's stage to be >= 1000," you need something to update that item once the condition is met so that it will re-evaluate the condition and move on!