Dan_Kane
ServiceNow Employee

This article is a collection of frequently asked questions about the Performance Analytics data collector. We’ll expand the list over time, so check back frequently. Feel free to post your own questions in the comments section following the article. Keep in mind that this is not a support forum: urgent questions should be directed to ServiceNow Support, while less urgent questions can be created as a new community forum post.

 

Question

Occasionally, we will have a data point or two that is dramatically skewing the historical data (e.g., an old ticket or two that was “lost” is closed, thus skewing MTTR data), and we’ve been asked whether we could delete these errant data points and fix the supporting data and graphs. Is this a good idea? How would we do it?

Answer

I would strongly recommend NOT “fixing” the data and the charts. While I understand the desire not to let an anomaly skew reporting data, the reality is that the anomaly occurred. The late, great service management guru Malcolm Fry used to talk about the concept of the “fault”. The fault could be an incident caused by insufficient training, a failed change due to failing hardware, or a delayed response on an HR request. A fault is anything that occurs outside of the expected parameters of a process or workflow. Even the concept of “expected” parameters can lead to faults, as the customer’s expectations may be very different from those of the service provider.

 

ServiceNow reporting and Performance Analytics are about performance management: measuring the performance of a process or workflow. The whole point of measuring performance is to get better. Every lost or forgotten case is a fault somewhere in the case workflow. Some customer’s experience was degraded, or there was a breakdown in the workflow that may result in a degraded experience. Every fault is an opportunity to figure out how we can get better. Removing anomalous faults from reporting only delays the opportunity to improve.

 

After all that, if there was truly an error in reporting only, you have the option to modify the Scoresheet in Performance Analytics. Using the navigator, go to Performance Analytics > Scoresheet. Select the indicator you wish to change from the drop-down list, and manually make the change on the correct date. The “pa_contributor” role is required to make the update. Again, I stress that this is a LAST RESORT, and should only be done when there was an error in the way a score was calculated. Do not use this option to avoid reporting on one-off records.

 

Question

We have various Parent Groups and Tiers that we use for looking at aggregate data for multiple assignment groups. On occasion, new circumstances arise that cause us to want to change the assignment group mix that we have assigned to a Parent Group and have been collecting data on for a period of time. Is it possible to reconfigure the assignment groups within a Parent Group (or at least achieve the same results) and collect accurate historical data?

Answer

Like the anomaly data point question above, we generally do not recommend this course of action. The reason is that the assignment group parent(s) were accurate at the time the data was originally collected. If the score is for the month of February, I want the assignment group breakdowns to reflect the way assignment groups were defined at that time in February. The record lifecycles ran in the month of February under the old “rules” for assignment groups, and I want the historical trend to reflect the rules that were in place at the time of the actual score. However, this is more of a customer preference. There are use cases where a customer may want the historical trending to reflect the current assignment group parent-child relationships. In that case, I recommend creating a copy (Insert and stay with relations) of the impacted indicators, and then running a historical score collection against the new versions of the indicators. That way you retain the historical artifact of how the scores were distributed under the rules at that time, and you also have new indicators reflecting the same data, but from the perspective of the new assignment group parent-child relationships.

 

Question

As new circumstances arise, we realize the value in aggregating data from multiple assignment groups that we had not previously anticipated grouping together in a Parent Group or Tier. Is there a simple way of aggregating multiple assignment groups and creating something like new Parent Groups for historical data? And can we place the same assignment group in multiple new groups depending on the view we want?

Answer

This is a great question, and the use case makes a lot of sense. The tiering example is a good one. You have assignment groups functioning as usual. But you might also have a need to analyze data by support tiers, security groups vs non-security groups, etc. The key in this scenario is to use different fields for the various data dimensions. For example, a common case is where a customer adds a “Tier” field to the Group table, and then creates a breakdown source and breakdown on that new field. The important thing is to keep the fields used for standard assignment group reporting separate from the fields used for other groupings. Keeping the fields separate avoids double counting the same record.
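To make the double-counting point concrete, here is a minimal sketch of the idea in plain JavaScript. The data and field names (“assignmentGroup”, the group-to-tier mapping) are illustrative assumptions, not actual ServiceNow column names: each record carries one assignment group, and the separate “Tier” value on the group drives a second, independent breakdown.

```javascript
// Illustrative group-to-tier mapping, standing in for a "Tier" field
// added to the Group table (hypothetical values, not platform data).
const groupTier = {
  "Service Desk": "Tier 1",
  "Benefits":     "Tier 2",
  "Payroll":      "Tier 2",
};

const cases = [
  { id: "Case101", assignmentGroup: "Service Desk" },
  { id: "Case102", assignmentGroup: "Benefits" },
  { id: "Case103", assignmentGroup: "Payroll" },
];

// Breakdown 1: standard assignment group reporting.
function countByGroup(records) {
  const counts = {};
  for (const r of records) {
    counts[r.assignmentGroup] = (counts[r.assignmentGroup] || 0) + 1;
  }
  return counts;
}

// Breakdown 2: tier reporting, driven by the separate Tier field.
// Each record is counted exactly once within each breakdown, so the
// two views never double count the same case.
function countByTier(records) {
  const counts = {};
  for (const r of records) {
    const tier = groupTier[r.assignmentGroup];
    counts[tier] = (counts[tier] || 0) + 1;
  }
  return counts;
}
```

Because each breakdown sums to the same total record count, you can slice the same data by group or by tier without inflating the numbers.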

 

Question

Each assignment group is made up of various users. New users join assignment groups and become active users. Others leave the group to move to another assignment group, or leave the company and become inactive. When someone removes an inactive user from the group (using the edit command), when and how might that impact reporting and Performance Analytics?

Answer

The best answer to this scenario depends on how your organization applies “assigned to” and “assignment group” to your task records. In most cases, once an assignment group is applied to a task, that group will stick until someone manually changes the assignment to another group. In that situation, changing which groups a user is assigned to should not make any difference in reporting, whether the context is real-time reports or previously collected PA indicator scores. For PA, all we really care about is the assignment group applied to a task at the time of score collection. This is why it’s important to run score collections as close to the date of the score as possible.

 

As an example, the following shows part of the lifecycle of an HR case.

[Image: timeline.png — Case123 assignment group changes over time]

If I want to accurately reflect the assignment groups at the various points in time, I need to run the data collection jobs immediately following each day. This is why we recommend most PA collections run during the early morning hours. If I run a collection job each day between March 3 and March 11, at 1:00 AM each morning, Case123 will be grouped with the following assignment groups for the scores on each date:

  • March 2 = Service Desk
  • March 3 = Benefits
  • March 4 = Benefits
  • March 5 = Benefits
  • March 6 = Benefits
  • March 7 = Payroll
  • March 8 = Payroll
  • March 9 = Payroll
  • March 10 = Service Desk

 

It doesn’t matter if the “Assigned to” person changes; the Assignment group stays as collected. If the “Assigned to” person belonged to the Service Desk on March 2, and was moved into the Payroll group on March 5, nothing about the Assignment Group daily scores changes at all.
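The snapshot behavior above can be sketched as a small simulation. This is not the actual PA collector, just an illustration of the principle: a daily collection records whatever assignment group is on the record at collection time, using the Case123 change dates from the timeline.

```javascript
// Change history for Case123, taken from the timeline above:
// each entry is the date a new assignment group took effect.
const changes = [
  { from: "2023-03-02", group: "Service Desk" },
  { from: "2023-03-03", group: "Benefits" },
  { from: "2023-03-07", group: "Payroll" },
  { from: "2023-03-10", group: "Service Desk" },
];

// The group in effect on a given date is the latest change made
// on or before that date (ISO date strings compare correctly).
function groupOn(date) {
  let current = null;
  for (const c of changes) {
    if (c.from <= date) current = c.group;
  }
  return current;
}

// A collection run in the early morning after each score date
// snapshots the group as it stood on that date.
function collectDaily(scoreDates) {
  const scores = {};
  for (const d of scoreDates) scores[d] = groupOn(d);
  return scores;
}
```

Running `collectDaily` over March 2–10 reproduces the breakdown list above: Service Desk on March 2, Benefits on March 3–6, Payroll on March 7–9, and Service Desk again on March 10, because each daily snapshot captured the group before the next change overwrote it.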

 

Now let’s say we found an error on March 15 in how indicators were defined, and we want to recollect scores for the earlier dates in March to account for the corrected indicator definitions. If I recollect all scores, including the Assignment group breakdowns, each daily score would now include Case123 in the Service Desk group, since Service Desk was the last change made to the assignment group on the case record. Think carefully about whether you want to recollect all scores for those dates, or exclude the assignment group breakdown when recollecting.
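The recollection pitfall can be sketched the same way. Again, this is an illustrative assumption about the behavior described above, not platform code: a recollection reads the case record as it exists today, so every recollected daily score picks up the record’s current field value rather than the historical one.

```javascript
// Case123 as it exists on March 15: the last change (March 10)
// set the assignment group back to Service Desk.
const case123 = { assignmentGroup: "Service Desk" };

// A recollection consults no change history; every recollected
// daily score uses the record's *current* assignment group.
function recollect(scoreDates, record) {
  const scores = {};
  for (const d of scoreDates) {
    scores[d] = record.assignmentGroup;
  }
  return scores;
}
```

Recollecting March 4 and March 8 now yields Service Desk for both dates, even though the original collections recorded Benefits and Payroll, which is exactly why you may want to exclude the assignment group breakdown when recollecting.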

Version history: last updated 04-06-2023 12:54 PM