Rozmin Parpia
ServiceNow Employee


Process Mining and survey results go very well together. Before we dive in, let’s first understand each concept.

 

Surveys are initiated from a task in ServiceNow, e.g. an incident, case, or request. Their purpose is to seek user or customer feedback on the service delivered. Each survey typically contains a set of questions, and one of them is always about the overall level of user satisfaction on a scale of 1 to 5 (or 10). A high rating indicates a high level of satisfaction with the service.

 

Process Mining analyses your data to provide a visualised flow of work in your processes, highlighting anomalies and bottlenecks and detecting bad patterns that spark ideas for process improvements. It also offers robust machine learning features that analyse root causes and unstructured data, e.g. work notes, short descriptions and much more.

 

Benefits of mining survey results using Process Mining

 

  • You can dig deeper to find out whether bad survey ratings are caused by inefficient process behaviours, such as tickets ping-ponging between groups and driving up wait times.
  • Conversely, you can confirm that good survey ratings are achieved through better process behaviours, such as tickets never being placed on hold and therefore resolving faster.
  • You can compare the two sets of survey results to visualise the differences between them, then improve the process by coaching your teams, creating effective knowledge articles, using Virtual Agent or automating non-value-added steps.
  • You can present the analysis to executive stakeholders for their buy-in on the action plans to improve service delivery and user/customer satisfaction.

 

Point Process Mining to the correct data tables and fields

 

It is important to understand which data tables and fields to target for effective mining.

 

  • Survey results are stored in the ‘Metric Result’ table. The fields to consider on this table are:
    • ‘Category’ is the name of the satisfaction survey that was rolled out. There are likely several such surveys in your organisation, so make sure you select the one you’re interested in mining.
    • ‘Metric’ is the survey question you’re interested in, e.g. the overall level of satisfaction on a rating scale.
    • ‘Actual Value’ is the score provided by the survey respondent, i.e. the answer to the overall satisfaction question.
  • The ‘Metric Result’ table references the ‘Assessment Instance’ table (illustrated in the sketch after the note below).
    • The ‘Task’ field on the ‘Assessment Instance’ table stores the id of the task from which the survey was initiated. A task can be an incident, service request, HR case, customer service case or more.

 

*The baseline tables and fields might have been relabelled, or new ones created, in your instance, so please consult your system administrator to pick the correct fields for mining.
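To make these relationships concrete, here is a minimal background-script sketch that walks from a metric result to its originating task. It assumes the typical baseline names (the ‘asmt_metric_result’ table for Metric Result, its ‘instance’ reference to ‘asmt_assessment_instance’, and that table’s ‘task_id’ reference to the task); ‘Overall satisfaction’ is a placeholder question name. Verify all of these against your instance before relying on them.

```javascript
// Minimal sketch (assumed baseline names): walk from a Metric Result
// record to the task that triggered the survey.
var result = new GlideRecord('asmt_metric_result');     // Metric Result
result.addQuery('metric.name', 'Overall satisfaction'); // placeholder question name
result.setLimit(5);
result.query();
while (result.next()) {
    // 'instance' references Assessment Instance; its 'task_id' references the task
    gs.info('Rating: ' + result.getValue('actual_value') +
            ' | Task: ' + result.instance.task_id.getDisplayValue());
}
```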

 

Now let’s test the hypothesis: customers or users who experience high resolution times or bad process behaviours (e.g. ping-pong between groups, extra steps) rate us badly on the survey.

 

1: Report extraction

 

Go to the reports menu and create a new report.

 

RozminParpia_20-1746194485688.png

 

Point the report to the ‘Metric Result’ table and choose ‘List’ as the report type. Add filters as shown below.

 

RozminParpia_21-1746194485710.png

 

The filters pick survey results that have a bad rating on the 10-point scale used here, i.e. 3 or less.
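As a sketch, that list filter corresponds to an encoded query along these lines (the metric name is a placeholder; right-click the filter breadcrumb and choose ‘Copy query’ to get the exact string from your instance):

```javascript
// Sketch: the report filter above as an encoded query (placeholder metric name).
var bad = new GlideRecord('asmt_metric_result');
bad.addEncodedQuery('metric.name=Overall satisfaction^actual_value<=3');
bad.query();
gs.info('Badly rated survey answers: ' + bad.getRowCount());
```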

 

You will notice that the ‘task’ column is missing from the report. Select ‘choose columns’ and dot-walk to the ‘Instance’ table to add it, as shown below.

 

RozminParpia_22-1746194485717.png

 

The ‘task’ column will show up on the report as displayed below. You can then save and run the report and export it in Excel format. Download the report and copy the ids in the ‘task’ column.

 

RozminParpia_23-1746194485730.png
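If you would rather skip the Excel round-trip, a background script along these lines can print the task numbers as a de-duplicated, comma-separated list, ready to paste into the Process Mining filter in the next step. It makes the same baseline-name assumptions as the earlier sketches:

```javascript
// Sketch: collect task numbers behind badly rated survey results
// into a paste-ready list (assumed baseline table/field names).
var numbers = [];
var gr = new GlideRecord('asmt_metric_result');
gr.addEncodedQuery('metric.name=Overall satisfaction^actual_value<=3');
gr.query();
while (gr.next()) {
    var n = gr.instance.task_id.number.getDisplayValue();
    if (n && numbers.indexOf(n) === -1) {
        numbers.push(n); // one task can have several metric results
    }
}
gs.info(numbers.join(','));
```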

 

2: Mine the data

 

Create a new project in the Process Mining workspace and point it to the table the survey was initiated from, as shown below.

 

RozminParpia_24-1746194485745.png

 

In step 2 of the guided project setup, as shown below, add a filter condition to select only those records where the survey rating was poor. The filter condition is: ‘Number’ is one of, then paste the ids you copied from the report earlier.

 

RozminParpia_25-1746194485781.png
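For what it’s worth, that ‘Number’ is one of condition has the same shape as an encoded query using the IN operator, which you can sanity-check in a background script (the table and numbers below are placeholders):

```javascript
// Sketch: check the 'Number is one of' condition as an encoded query
// ('incident' is a placeholder table; the numbers are placeholders too).
var filter = 'numberIN' + ['INC0010001', 'INC0010002', 'INC0010003'].join(',');
var inc = new GlideRecord('incident');
inc.addEncodedQuery(filter);
inc.query();
gs.info('Records matching the pasted list: ' + inc.getRowCount());
```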

 

Add ‘Activities’ such as ‘state’, ‘assignment group’ or any others relevant to you. Then, under ‘Use Cases’, toggle the SLA Breach button to add it to the analysis. You could add improvements (patterns), if available, at step 3, then proceed to step 4 to mine the data.

 

You can repeat the steps above to mine good survey results and compare them against the bad ones we just mined. The only difference is that, when applying filter conditions for the report, you change ‘actual value’ to 8 or greater. This ensures we pick only the survey results where the satisfaction rating was good, i.e. 8+.

 

3: Review analysis

 

After mining both the good and bad survey result projects, my dataset revealed that records with bad survey results suffer from bad process behaviours (patterns), as displayed below.

 

RozminParpia_26-1746194485796.png

 

In comparison, records with good survey results showed relatively fewer bad process behaviours, and on a lower volume (%) of the records mined (visual below).

 

RozminParpia_27-1746194485816.png

 

The analysis also showed that SLA breaches (%) were much higher for bad survey results (30%) than for good survey results (5%). Refer to the visuals below.

 

RozminParpia_28-1746194485842.png

 

 

 

RozminParpia_29-1746194485862.png

 

 

Records with bad survey results also showed ‘rework’, where records loop back upstream to ‘awaiting info’ and ‘awaiting approval’, leading to higher resolution times (visual below).

RozminParpia_31-1746194982654.png

*The full recording with a demo from the academy session is here.