
One of the things that’s always tough to do is quantify the value added by an analysis tool. When used correctly, tools like ServiceNow’s Performance Analytics can produce savings of anywhere from thousands to millions of dollars, depending on the size of the organization and the process in question. But why is it so hard to quantify?
The problem is that the savings are always realized in those other processes. When you use Performance Analytics to identify systemic issues in your Incident process the result might be a reduction in average resolution times for incidents (open to resolve time), or a reduction in amount of time worked on an incident (actual hands on time). While these things have real cost and real value, the savings shows up in Incident’s bucket. As an organization, it is easy to forget what helped us to identify those issues in the first place, and it can be difficult to quantify the impact of the change, even if we do give credit where it’s due.
Detailed reporting can give us a rough view of that value, and we may even be able to show the cumulative value of Performance Analytics using Performance Analytics itself! In this case, we’ll be taking a look at a good Request Management use case for Performance Analytics:
Request Management
In modern organizations, most of the requests entering the request management process are generated through the Service Catalog. However, with a large and constantly changing number of items to request, it can be hard to prioritize which future catalog item will provide the greatest benefit to the organization.
Text analytics was added to Performance Analytics in the Kingston release, and it is appearing in more and more of the Content Packs that Performance Analytics provides for analyzing and improving process flows. Using text analytics against Request Management gives us a great way to identify what users need. Here’s an example:
For most organizations, the largest number of requests coming in from the Service Catalog will be through the generic “Submit an IT Request” item, because it can be used for anything that doesn’t already exist in the catalog. Users open a request by filling out a free-text field with an explanation and submitting it. Once submitted, a Service Desk user opens it, reads the description, tries to determine exactly what was requested, and figures out who it should go to. It wouldn’t be surprising if it gets bounced between a few different groups before it lands in front of the team that can actually help. Just that first step in the process can take anywhere from a few minutes to a few days -- time which counts against your SLAs -- and it takes up time that Service Desk personnel could spend doing something else. How much money is wasted in those extra steps?
Let’s look at the flow of that generic request:
Using text analytics, we can analyze the unstructured text field that users fill in when submitting these generic IT requests, and hopefully identify the most commonly requested items. By trending the words that appear and comparing day to day, we can determine which terms show up most frequently. Now we can use that information to decide which catalog items to concentrate on creating next:
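To make the trending idea concrete, here’s a minimal sketch of day-to-day term counting in Python. The sample descriptions, the stopword list, and the simple tokenizer are all assumptions for illustration, not Performance Analytics internals:

```python
from collections import Counter
import re

# Hypothetical description fields from generic IT requests, grouped by
# the day they were submitted (made-up sample data).
requests_by_day = {
    "2024-06-03": [
        "Please install Outlook on my laptop",
        "Need VPN access for remote work",
        "Outlook keeps asking for a license",
    ],
    "2024-06-04": [
        "New hire needs Outlook and Citrix",
        "Citrix session won't launch",
    ],
}

# A tiny stopword list to keep filler words out of the trend.
STOPWORDS = {"please", "on", "my", "for", "and", "a", "the", "needs", "need"}

def term_counts(descriptions):
    """Count how often each term appears across a set of descriptions."""
    words = re.findall(r"[a-z]+", " ".join(descriptions).lower())
    return Counter(w for w in words if w not in STOPWORDS)

# Trend the top terms for each day.
for day, descriptions in requests_by_day.items():
    print(day, term_counts(descriptions).most_common(3))
```

A real implementation would also handle stemming and synonyms, which the platform’s text analytics does for you.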
Hovering over the trend line at the bottom makes it easier to see specifics:
In this example, Outlook is one of the most frequent terms appearing in request descriptions, followed closely by Citrix and VPN (which would also be great candidates for automation!). For this example, we’ll focus on the Outlook requests.
Datapoint – Run an ad-hoc basic report against the Requests created from your Generic Request catalog item, reporting on Average Duration:
That’s our baseline for how long these generic requests take to be resolved.
If we dig into a sample of the requests text analytics identified, we’ll find that a majority are asking to have Outlook installed on a company laptop. The obvious next step is to create a new catalog item to collect and properly route these requests. That not only lets us begin with an accurate assignment; it also cuts the initial interpretation time out of the picture and lets us use a specific workflow.
The new process flow with a specific catalog item for these requests is much cleaner:
The new process is much simpler and has a far lower (perhaps zero) chance of landing in front of the wrong person. Assignments will be handled correctly each and every time, and the initial response and interpretation delays are cut out of the process entirely.
Realize that it will take a while to socialize this change -- users who have gotten used to submitting generic requests for Outlook may take time to realize there’s now a better way. After a while, though, you’ll be able to compare the average times for the new method against the baseline we collected earlier.
Now let’s look at how we start to quantify the value of this change using Performance Analytics. Here’s your to-do list for setting up the measurements:
1. Create an Automated Indicator: Create a new indicator to measure the Number of completed Outlook requests (table is Requested Items, field is Item)
2. Create an Automated Indicator: Create a new indicator to measure Summed duration of completed Outlook requests
3. Create a Formula Indicator: Create a new formula indicator to calculate Average time to complete Outlook requests, using Summed duration of completed Outlook requests / Number of completed Outlook requests
4. Add a Target: Configure a Target on the Formula Indicator, that holds the Datapoint you measured above
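The arithmetic behind steps 2 through 4 is straightforward; here’s a hedged sketch with made-up daily scores (the platform computes this per collection period from the two automated indicators):

```python
# Hypothetical daily scores from the two automated indicators.
completed_outlook_requests = 10     # indicator 1: count of completed items
summed_duration_ms = 30_000_000     # indicator 2: total duration, in ms
                                    # (duration fields are stored in ms)

# The formula indicator: average time to complete an Outlook request.
average_duration_ms = summed_duration_ms / completed_outlook_requests

# The target holds the baseline average measured before the new catalog
# item existed (an assumed figure here: 100 minutes).
baseline_avg_ms = 6_000_000

print(f"new average:       {average_duration_ms / 60_000:.0f} min")
print(f"target (baseline): {baseline_avg_ms / 60_000:.0f} min")
print(f"beating target:    {average_duration_ms < baseline_avg_ms}")
```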
Using the target makes it easy to see improvement in the new method, already a compelling argument for the value of analytics.
Assigning cost value
We can take this a step further if you have detailed cost information about the blended hourly rate of a Service Desk team member, or if you are willing to make some educated assumptions.
For those who don’t have that specific cost information, you can use the estimates from Help Desk Institute’s survey (https://www.thinkhdi.com/library/supportworld/2017/metric-of-month-service-desk-cost-per-ticket.aspx). Rather than Cost per Ticket, we’re going to use the Cost per Minute of Handle Time average of $1.60. If you’d like to be more conservative, use the minimum of $0.76. Or the max of $2.50 if you’re in sales (I kid.)
There are more elaborate and technical ways to do this that will give you a ‘cleaner’ number, but for the sake of ease and estimation, you can calculate the cost savings with this Formula Indicator:
Name: Cost Savings thanks to Performance Analytics: Outlook
Formula: (((Numeric value of datapoint measured above in milliseconds - (Summed duration of completed Outlook requests / Number of completed Outlook requests)) * Number of completed Outlook requests) / (1000*60)) * 1.60
Unit: $
That looks like a mess, so I’ll explain it a step at a time. First, we take our original average time spent, converted to milliseconds (duration fields are stored in milliseconds), and subtract our new average time to resolve. The new average is (Summed duration of completed Outlook requests / Number of completed Outlook requests); subtracting it from the baseline gives us the average time savings per request, in milliseconds.
Now we multiply that savings by the total number of Outlook requests (i.e. if we save 50 minutes on average, or 3M milliseconds, and we had 10 requests that day, then we saved a total of 500 minutes that day).
That gives us the total time savings for the day, in milliseconds. Take THAT number and divide it by the product of 1000 (milliseconds in a second) * 60 (seconds in a minute).
Finally, multiply that by our cost per minute rate (1.60 per HDI, but if you already know your company Service Desk’s blended rate, use that instead.) For our example above using 10 requests per day, 50 minutes average savings, the answer is 500 minutes * 1.60 or $800 savings for the day.
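Putting the whole formula together with the worked example (10 requests per day, a 100-minute baseline, and a 50-minute new average; all of these figures are assumptions for illustration):

```python
# A minimal sketch of the cost-savings formula indicator.
# Durations are in milliseconds, as stored on the platform.
MS_PER_MINUTE = 1000 * 60

baseline_avg_ms = 6_000_000           # original average (100 min, assumed)
new_summed_duration_ms = 30_000_000   # summed duration of completed requests
new_request_count = 10                # number of completed requests
cost_per_minute = 1.60                # HDI average; use your blended rate

# Step 1: new average, and the per-request savings against the baseline.
new_avg_ms = new_summed_duration_ms / new_request_count
saved_ms_per_request = baseline_avg_ms - new_avg_ms

# Steps 2-3: total savings for the day, converted to minutes.
total_saved_minutes = saved_ms_per_request * new_request_count / MS_PER_MINUTE

# Step 4: convert minutes to dollars.
savings = total_saved_minutes * cost_per_minute
print(f"Saved {total_saved_minutes:.0f} minutes, worth ${savings:.2f}")
```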
With this in place, we can now use Performance Analytics to apply a time series against it and see total savings for each month, quarter, or year. If the number is consistent, that’s a $24,000 per month savings -- from a single catalog item -- that Performance Analytics told us we should build. Even if we exclude weekends to be more conservative on the rough estimate, that’s still around $16,800 in savings (an average of 21 weekdays per month * $800/day).
That would potentially be an annual savings of $201,600.
For one catalog request.
That we found thanks to Performance Analytics.
I wonder if we could use Orchestration and eliminate the manual request task altogether… how much would that improve the time savings?
Wait… don’t I remember seeing something about Citrix and VPN?