
Sharon_Barnes
ServiceNow Employee

ServiceNow Assistant Analytics: How Sentiment Analysis Reveals What Users Really Feel


Overview

Understanding how users feel about their assistant interactions is critical to building effective AI experiences. This dashboard goes beyond simple metrics like conversation counts to analyze the emotional signals that reveal how users truly experience your assistant. It tracks frustration, confusion, empathy, and overall satisfaction, while also monitoring transfers to human agents and assessing whether issues are actually resolved. These insights help you understand not just whether your assistant is functioning, but whether it's delivering genuine value and creating positive user experiences.
Think of this as your quality assurance dashboard. It provides the signals you need to evaluate assistant performance from the user's perspective. By measuring emotional responses, resolution rates, and the clarity of guidance provided, you can identify where your assistant excels and where it falls short. The metrics presented here enable you to make data-driven decisions about improving conversation flows, response templates, and overall assistant design to create interactions that leave users satisfied rather than frustrated.
 

Family Release: Zurich Patch 6

Release: Now Assist for Platform version 10.0.3
Roles Required: virtual_agent_admin
 

00:00: This video demonstrates how to use the Assistant Analytics Sentiment page in ServiceNow to evaluate the emotional quality of assistant conversations.

00:08: It highlights key metrics like empathy, frustration, resolution, and customer satisfaction to improve user experience.

00:22: The Assistant Analytics Sentiment page helps teams understand the emotional quality of assistant conversations, not just usage.

00:31: It analyzes trends like frustration, confusion, empathy, resolution, and CSAT.

00:39: It provides a quality-focused view of how well your assistant is serving users and where conversations may be breaking down.

00:50: First, we have the overall sentiment. This is your headline quality metric, and it shows a single score: the average inferred CSAT

01:00: on a scale of zero to five. It tells you whether your users are generally satisfied with their assistant interactions.

01:08: Below that is the percentage increase or decrease compared to the previous period of the same length as the one you're currently filtered to.

01:23: Next, we have conversations analyzed.

01:27: This one is very useful for knowing how many conversations were analyzed during that time period, and it's important context for all the other metrics on this page.

01:37: If you have a low number, your sentiment scores may not be statistically meaningful. So a large sample size gives you more confidence in the trends you're seeing.

01:49: Next, we have the high empathy rate.

01:52: This shows what percent of conversations had assistant responses that demonstrated high empathy, which is calculated as conversations with high empathy markers divided by the total analyzed conversations times 100.

02:06: A high empathy rate suggests your assistant is good at acknowledging user feelings and responding appropriately.

02:21: The next one looks at those insights from the opposite perspective.

02:25: Here, it shows where the system detected frustration or confusion.

02:31: A high percentage here is a red flag, meaning that your users are struggling with your assistant.

02:40: Then we have the average inferred CSAT over time. This chart tracks daily CSAT scores over a date range so you can spot trends, such as satisfaction improving, declining, or holding steady.

02:54: Correlate dips with specific events, perhaps assistant updates, outages, or organizational changes.

03:02: If you see a sudden drop-off or something changes, we recommend investigating.

03:08: Next,

03:09: we have our transfers and escalations over time.

03:14: This tracks how often conversations get handed off to a live agent, and you can hover over the trend line to see the daily counts. Frequent transfers might mean your assistant is out of its depth

03:26: or that users don't trust it to handle their issues.

03:30: Some escalations are normal and expected,

03:33: but if the trend is climbing,

03:35: that's your signal that the assistant needs improvement.

04:09: Next, let's take a look at the assistant recommended next steps.

04:14: This measures how clearly the assistant explained what should happen next

04:18: or what the user needs to do.

04:23: Low means no clear guidance was given, medium means some guidance was provided, and high means the assistant gave clear, complete instructions.

04:35: If users are confused about next steps, they're more likely to get frustrated or give up. So this is a key quality metric.

04:44: Then we have our conversation insights and inferred resolution state.

04:50: Now this categorizes whether user issues were solved.

04:54: Yes means the assistant met the user's needs. No means it did not, and we got a clear signal from the user.

05:02: Unknown means the system couldn't determine a resolution.

05:07: This is arguably the most important metric. If you're not resolving issues,

05:14: nothing else really matters. So focus on driving up the Yes percentage.

05:25: Then we take a look at the empathy level distribution over time.

05:28: This shows empathy as high, medium, or low and its distribution across all conversations during your time period.

05:36: It tracks whether your assistant is becoming more or less emotionally intelligent.

05:42: If you're working on improving response tone, this metric should be moving up.

05:49: Next, and last, we've got our negative emotion feedback over time.

05:53: This tracks frustration and confusion signals in conversations

05:58: over your time period. Use it to identify patterns

06:02: where negative emotions spike

06:05: on certain days or with certain assistants. If you see an upward trend

06:10: in negative emotions, you need to diagnose why.

06:14: Ideally, this line will trend downward as you improve your assistants.

06:44: The video explains how to interpret sentiment metrics, track trends, and identify areas for improvement in assistant interactions to enhance overall conversation quality.

 

Key Terms

 

Effort Score

This measures the time and energy users invested during their interaction. Values are categorized as Low, Medium, or High based on signals including transfers, wait periods, repetition, or difficulty being understood by the assistant.

Resolution

This indicates whether the user's issue was successfully resolved:
  • Yes: The assistant met the user's needs or provided the requested information
  • No: The issue was not resolved, often because the user was transferred or the agent promised follow-up
  • Partial: Only some issues were resolved or the user received part of what they requested

Frustration

This indicates whether the user expressed frustration with the assistant or the quality of assistance. Frustration may manifest as sarcasm, rude language, or expressing that the process is unfair, too difficult, or inefficient.

Confusion

This indicates when the user, agent, or bot misunderstood each other. Examples include the assistant failing to understand user requests or responding in a way that doesn't address what the user was asking.

Transfers and Escalations

This occurs when the user requests escalation or the conversation is transferred to another agent, team, or department. This includes requests to speak with a supervisor or increase the priority of an issue.

Empathy

This measures how empathetic, proactive, and personable the assistant was on a scale of Low, Medium, or High:
  • High empathy: The assistant actively listened, acknowledged concerns, showed genuine care, and communicated in a friendly manner
  • Low empathy: The assistant provided cold, robotic, or inattentive responses

Next Steps

This measures how clearly the assistant explained what happens next or what the user should do, rated as Low, Medium, or High:
  • High: Clear instructions with timelines and follow-up details were provided
  • Low: Little or no information was given about what has been or will be done

Key Visualizations

 

Overall Sentiment

This is your headline quality metric that shows the average inferred CSAT score on a scale of 0 to 5, indicating whether users are generally satisfied with their assistant interactions.
What you'll see:
  1. Review the single average CSAT number to understand overall satisfaction
  2. Check the percentage increase or decrease compared to the previous period
  3. Examine the line graph to track how this score has changed over time
  4. Correlate quality trends with changes you've made to your assistants
Indicator: Average Daily Inferred CSAT (Session)
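The period-over-period comparison described above is a simple calculation. Here is a minimal sketch, assuming you have per-conversation inferred CSAT scores for the current filter window and for the preceding window of equal length (the variable names and sample scores are hypothetical, not from the product):

```python
from statistics import mean

def csat_summary(current_scores, previous_scores):
    """Average inferred CSAT (0-5) for the current period, plus the
    percentage change versus the previous, equal-length period."""
    current_avg = mean(current_scores)
    previous_avg = mean(previous_scores)
    pct_change = (current_avg - previous_avg) / previous_avg * 100
    return round(current_avg, 2), round(pct_change, 1)

# Hypothetical per-conversation inferred CSAT scores
this_period = [4.0, 3.5, 4.5, 5.0, 4.0]
last_period = [3.5, 3.0, 4.0, 4.5, 3.0]
avg, change = csat_summary(this_period, last_period)  # 4.2, +16.7%
```

A positive change means satisfaction improved relative to the prior window; the dashboard surfaces the same delta below the headline score.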

Conversations Analyzed

This shows the count of conversations that were actually analyzed for sentiment, providing important context for all other metrics on this page.
Why this matters:
  1. Verify you have sufficient sample size for statistically meaningful results
  2. Recognize that low counts may reduce confidence in sentiment scores
  3. Ensure you have enough data before drawing conclusions from trends
Indicator: Conversation Insights (Processed)

High Empathy Rate

This displays the percentage of conversations where assistant responses demonstrated high empathy, calculated as (conversations with high empathy / total analyzed conversations) × 100.
How to interpret:
  1. Assess whether your assistant is acknowledging user feelings appropriately
  2. Identify if low empathy rates indicate a need to tune response templates
  3. Evaluate whether conversation flows need to be more human-centered
  4. Recognize that high empathy matters especially when users are frustrated or confused
Indicator: High Empathy Rate - Conversation Insights
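The rate formula above is a straightforward percentage. As a minimal sketch, assuming each analyzed conversation carries an inferred empathy label (the record shape and sample data are hypothetical):

```python
def high_empathy_rate(conversations):
    """Percentage of analyzed conversations whose inferred empathy
    level is High: (high-empathy conversations / total analyzed) * 100."""
    if not conversations:
        return 0.0
    high = sum(1 for c in conversations if c["empathy"] == "High")
    return high / len(conversations) * 100

# Hypothetical analyzed conversations with an inferred empathy level
sample = [
    {"empathy": "High"}, {"empathy": "Medium"},
    {"empathy": "High"}, {"empathy": "Low"},
]
rate = high_empathy_rate(sample)  # 2 of 4 -> 50.0
```

The negative-emotions percentage in the next section follows the same form, with the numerator counting conversations flagged for frustration or confusion instead.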

Conversations with Negative Emotions

This metric shows the percentage of conversations where frustration or confusion was detected, calculated as (conversations with frustration or confusion / total analyzed) × 100.
Action steps:
  1. Treat high percentages as red flags indicating users are struggling
  2. Investigate the root causes of frustration and confusion
  3. Monitor this metric closely to identify deteriorating user experiences
Indicator: Percentage of Conversations with Negative Emotions

Average Inferred CSAT Over Time

This chart tracks daily average CSAT scores across your selected date range, revealing satisfaction trends over time.
How to use this chart:
  1. Identify whether satisfaction is improving, declining, or holding steady
  2. Correlate dips with specific events like assistant updates or outages
  3. Look for correlations with organizational changes
  4. Investigate sudden drops to identify what changed
Indicator: Average Daily Inferred CSAT (Session)

Transfers and Escalations Over Time

This tracks how frequently conversations are handed off to live agents, with daily counts visible when hovering over the trend line.
What to watch for:
  1. Recognize that some escalations are normal and expected
  2. Treat climbing trends as signals that your assistant needs improvement
  3. Consider whether frequent transfers indicate the assistant is out of its depth
  4. Evaluate whether users trust the assistant to handle their issues
Indicator: Conversation Insights (Processed)

Average Inferred CSAT (Virtual Agent)

This is the CSAT score specifically for the Virtual Agent portion of conversations, scored 0 to 5, reflecting satisfaction only with the assistant component.
Best practices:
  1. Use this metric to benchmark assistant performance independently
  2. Compare against Live Agent CSAT to understand relative performance
  3. Evaluate the assistant's effectiveness separate from human agent quality
Indicator: Average Daily Inferred CSAT (Virtual Agent)

Average Inferred CSAT (Live Agent)

This shows the CSAT score for the live agent portion of conversations, reflecting satisfaction only with the human agent when both assistant and human interactions occurred.
Analysis approach:
  1. Compare this to Virtual Agent CSAT to evaluate the impact of escalations
  2. Determine whether escalations are improving or worsening user experience
  3. Assess the quality of handoffs between assistant and live agents
Indicator: Average Daily Inferred CSAT (Live Agent)

Average Inferred CSAT (Session)

This represents the overall CSAT for the entire session, whether handled entirely by the assistant or involving a handoff to a live agent.
Why this matters:
  1. Review the most holistic view of user satisfaction
  2. Understand the complete experience from start to finish
  3. Evaluate the combined effectiveness of both assistant and human interactions
Indicator: Average Daily Inferred CSAT (Session)

Assistant Recommended Next Steps

This measures how clearly the assistant explained what happens next or what the user should do, with ratings of Low, Medium, or High.
Rating definitions:
  1. Low: No clear guidance was provided
  2. Medium: Some guidance was provided
  3. High: Clear, complete instructions were given
Impact:
  1. Recognize that unclear next steps lead to increased frustration
  2. Understand that users may give up without clear guidance
  3. Track this as a key quality metric for assistant effectiveness
Indicator: Conversation Insights (Processed)

Conversation Insight Inferred Resolution State

This categorizes whether the user's issue was actually resolved, with outcomes of Yes, No, or Unknown.
Categories:
  1. Yes: The assistant met the user's needs
  2. No: The issue was not resolved
  3. Unknown: The system couldn't determine resolution
Priority action:
  1. Focus on driving the Yes percentage upward
  2. Recognize this as arguably the most important metric
  3. Understand that without resolution, other metrics matter less
Indicator: Conversation Insights (Processed)
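The Yes/No/Unknown breakdown can be tallied directly. A small sketch, assuming you have a list of inferred resolution states per conversation (the sample data is hypothetical):

```python
from collections import Counter

def resolution_breakdown(states):
    """Tally inferred resolution states (Yes / No / Unknown) and return
    the share of resolved (Yes) conversations as a percentage."""
    counts = Counter(states)
    total = sum(counts.values())
    yes_pct = counts["Yes"] / total * 100 if total else 0.0
    return counts, yes_pct

# Hypothetical inferred resolution states
states = ["Yes", "No", "Yes", "Unknown", "Yes"]
counts, yes_pct = resolution_breakdown(states)  # yes_pct == 60.0
```

A large Unknown share is itself a signal worth investigating, since it means the system couldn't tell whether users got what they needed.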

Empathy Levels Distribution Over Time

This shows how empathy levels (High, Medium, Low) are distributed across conversations over time, tracking the emotional intelligence of your assistant.
How to use this:
  1. Monitor whether your assistant is becoming more or less emotionally intelligent
  2. Evaluate whether changes to response tone are having the desired effect
  3. Track the impact of improvements to conversation design
Indicator: Conversation Insights (Processed)

Negative Emotion Feedback Over Time

This tracks frustration and confusion signals in conversations over time, helping you identify patterns and trends.
Analysis steps:
  1. Identify whether negative emotions spike on certain days
  2. Correlate spikes with specific assistants or changes
  3. Diagnose why upward trends are occurring
  4. Work to drive this line downward through assistant improvements
Indicator: Conversation Insights (Processed)

Conclusion

Congratulations! 🎉 You've completed this tutorial.

This dashboard provides the insights you need to understand and improve the quality of your assistant interactions. By monitoring emotional signals, resolution rates, empathy levels, and satisfaction scores, you can move beyond simply knowing that your assistant is operational to understanding whether it's truly serving user needs effectively. Use these metrics to identify areas for improvement, validate the impact of changes, and ensure your assistant is creating positive experiences that leave users satisfied rather than frustrated. Focus especially on resolution rates and negative emotion trends as leading indicators of assistant effectiveness, and remember that a large sample size gives you the confidence to act on the patterns you observe.

Check out the Next Experience Center of Excellence for more resources