ServiceNow Assistant Analytics: How Sentiment Analysis Reveals What Users Really Feel
Table of Contents
- Key Terms
- Key Visualizations
- Overall Sentiment
- Conversations Analyzed
- High Empathy Rate
- Conversations with Negative Emotions
- Average Inferred CSAT Over Time
- Transfers and Escalations Over Time
- Average Inferred CSAT (Virtual Agent)
- Average Inferred CSAT (Live Agent)
- Average Inferred CSAT (Session)
- Assistant Recommended Next Steps
- Conversation Insight Inferred Resolution State
- Empathy Levels Distribution Over Time
- Negative Emotion Feedback Over Time
Articles Hub
Want to see all of our other articles and blogs related to ServiceNow AI Platform? We'll have more on Assistant Analytics soon.
Overview
Family Release: Zurich Patch 6
00:00: This video demonstrates how to use the Assistant Analytics Sentiment page in ServiceNow to evaluate the emotional quality of assistant conversations.
00:08: It highlights key metrics like empathy, frustration, resolution, and customer satisfaction to improve user experience.
00:22: The Assistant Analytics Sentiment page helps teams understand the emotional quality of assistant conversations, not just usage.
00:31: It analyzes trends like frustration, confusion, empathy, resolution, and CSAT.
00:39: It provides a quality-focused view of how well your assistant is serving users and where conversations may be breaking down.
00:50: First, we have the overall sentiment. This is your headline quality metric, showing a single score: the average inferred CSAT
01:00: on a scale of zero to five. It tells you whether your users are generally satisfied with your assistant interactions.
01:08: Below that is the percent increase or decrease compared with the previous period of the same length as the one you're currently filtered to.
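The period-over-period figure described above can be sketched as follows; the function name and numbers are hypothetical, not ServiceNow internals:

```python
# Minimal sketch (hypothetical data): the percent change shown under the
# overall sentiment score, comparing the current filtered period with the
# previous period of equal length.
def percent_change(current_avg: float, previous_avg: float) -> float:
    """Percent increase/decrease of the current period vs. the previous one."""
    if previous_avg == 0:
        raise ValueError("previous period average must be non-zero")
    return (current_avg - previous_avg) / previous_avg * 100

# Example: inferred CSAT rose from 3.8 to 4.1 between periods.
change = percent_change(4.1, 3.8)
print(f"{change:+.1f}%")  # roughly +7.9%
```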
01:23: Next, we have conversations analyzed.
01:27: This tells you how many conversations were analyzed during the selected time period, which is important context for every other metric on this page.
01:37: If the number is low, your sentiment scores may not be statistically meaningful; a large sample size gives you more confidence in the trends you're seeing.
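One common way to put a number on that confidence is a margin of error for a proportion metric. This is a hedged sketch using the normal approximation, an assumption on my part rather than anything ServiceNow documents:

```python
import math

# Sketch (assumed approach, not a ServiceNow formula): a 95% normal-approximation
# margin of error for a proportion metric such as the high empathy rate, used to
# gauge whether a sample of analyzed conversations is large enough to trust.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

# A 60% rate from 50 conversations vs. 5,000 conversations:
print(f"n=50:   ±{margin_of_error(0.6, 50):.1%}")
print(f"n=5000: ±{margin_of_error(0.6, 5000):.1%}")
```

With 50 conversations the interval is roughly ±14 percentage points, so small day-to-day movements are likely noise.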
01:49: Next, we have the high empathy rate.
01:52: This shows what percent of conversations had assistant responses that demonstrated high empathy, which is calculated as conversations with high empathy markers divided by the total analyzed conversations times 100.
02:06: A high empathy rate suggests your assistant is good at acknowledging user feelings and responding appropriately.
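The formula described above can be sketched directly; the counts are hypothetical:

```python
# Minimal sketch of the high empathy rate formula described above:
# (conversations with high-empathy markers / total analyzed conversations) * 100.
# The counts below are hypothetical.
def high_empathy_rate(high_empathy_count: int, total_analyzed: int) -> float:
    if total_analyzed == 0:
        return 0.0
    return high_empathy_count / total_analyzed * 100

print(f"{high_empathy_rate(340, 1250):.1f}")  # 27.2
```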
02:21: The next metric looks at those insights from the opposite perspective.
02:25: Here, it shows where the system detected frustration or confusion.
02:31: A high percentage here is a red flag, meaning your users are struggling with your assistants.
02:40: Then we have the average inferred CSAT over time. This chart tracks daily CSAT scores across a date range, letting you spot trends such as satisfaction improving, declining, or holding steady.
02:54: Correlate dips with specific events, perhaps assistant updates, outages, or organizational changes.
03:02: If you see a sudden drop-off or something changes, we recommend investigating.
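Building that daily trend amounts to averaging the inferred CSAT per day. A minimal sketch with hypothetical records and field names:

```python
from collections import defaultdict
from statistics import mean

# Sketch (hypothetical records and field names): averaging inferred CSAT per
# day to build the "Average Inferred CSAT Over Time" trend described above.
conversations = [
    {"date": "2025-06-01", "inferred_csat": 4.2},
    {"date": "2025-06-01", "inferred_csat": 3.6},
    {"date": "2025-06-02", "inferred_csat": 2.9},
    {"date": "2025-06-02", "inferred_csat": 4.5},
]

by_day = defaultdict(list)
for c in conversations:
    by_day[c["date"]].append(c["inferred_csat"])

daily_avg = {day: round(mean(scores), 2) for day, scores in sorted(by_day.items())}
print(daily_avg)  # {'2025-06-01': 3.9, '2025-06-02': 3.7}
```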
03:08: Next,
03:09: we have our transfers and escalations over time.
03:14: This tracks how often conversations get handed off to a live agent, and you can hover over the trend line to see the daily counts. Frequent transfers might mean your assistant is out of its depth
03:26: or that users don't trust it to handle their issues.
03:30: Some escalations are normal and expected,
03:33: but if the trend is climbing,
03:35: that's your signal that the assistant needs improvement.
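One simple way to operationalize "the trend is climbing" for transfers and escalations is to compare the second half of the period against the first. This heuristic and its threshold are assumptions for illustration, not how the dashboard computes anything:

```python
from statistics import mean

# Sketch (assumed heuristic, hypothetical counts): flag a climbing escalation
# trend when average daily transfers in the second half of the period exceed
# the first half by more than a chosen factor.
def trend_is_climbing(daily_counts: list[int], factor: float = 1.2) -> bool:
    half = len(daily_counts) // 2
    first, second = daily_counts[:half], daily_counts[half:]
    return mean(second) > mean(first) * factor

print(trend_is_climbing([5, 6, 4, 7, 9, 11, 10, 12]))  # True
```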
04:09: Next, let's take a look at the assistant recommended next steps.
04:14: This measures how clearly the assistant explained what should happen next,
04:18: or what the user needs to do.
04:23: Low means no clear guidance was given, medium means some guidance was provided, and high means the assistant gave clear, complete instructions.
04:35: If users are confused about next steps, they're more likely to get frustrated or give up. So this is a key quality metric.
04:44: Then we have our Conversation Insight inferred resolution state.
04:50: This categorizes whether user issues were solved.
04:54: Yes means the assistant met the user's needs. No means it did not, and we got a clear signal from the user.
05:02: Unknown means the system couldn't determine a resolution.
05:07: This is arguably the most important metric: if you're not resolving issues,
05:14: nothing else really matters. So focus on driving up the yes percentage.
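Tallying the resolution states and the "Yes" percentage you want to drive upward can be sketched like this; the labels below are hypothetical:

```python
from collections import Counter

# Sketch (hypothetical labels): tallying the inferred resolution state and
# computing the "Yes" percentage described above.
states = ["yes", "yes", "no", "unknown", "yes", "no", "yes", "unknown"]

counts = Counter(states)
yes_pct = counts["yes"] / len(states) * 100
print(counts)             # Counter({'yes': 4, 'no': 2, 'unknown': 2})
print(f"{yes_pct:.1f}%")  # 50.0%
```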
05:25: Then we take a look at the empathy level distribution over time.
05:28: This shows empathy as high, medium, or low and its distribution across all conversations during your time period.
05:36: It tracks whether your assistant is becoming more or less emotionally intelligent.
05:42: If you're working on improving response tone, this metric should be moving up.
05:49: Next and last, we've got our negative emotion feedback over time.
05:53: This tracks frustration and confusion signals in conversations
05:58: over your time period. Use it to identify patterns
06:02: where negative emotions spike
06:05: on certain days or with certain assistants. If you see an upward trend
06:10: in negative emotions, you need to diagnose why.
06:14: Ideally, this line will trend downward as you improve your assistants.
06:44: The video explains how to interpret sentiment metrics, track trends, and identify areas for improvement in assistant interactions to enhance overall conversation quality.
Key Terms
Effort Score
Resolution
- Yes: The assistant met the user's needs or provided the requested information
- No: The issue was not resolved, often because the user was transferred or the agent promised follow-up
- Partial: Only some issues were resolved or the user received part of what they requested
Frustration
Confusion
Transfers and Escalations
Empathy
- High empathy: The assistant actively listened, acknowledged concerns, showed genuine care, and communicated in a friendly manner
- Low empathy: The assistant provided cold, robotic, or inattentive responses
Next Steps
- High: Clear instructions with timelines and follow-up details were provided
- Low: Little or no information was given about what has been or will be done
Key Visualizations
Overall Sentiment
- Review the single average CSAT number to understand overall satisfaction
- Check the percentage increase or decrease compared to the previous period
- Examine the line graph to track how this score has changed over time
- Correlate quality trends with changes you've made to your assistants
Conversations Analyzed
- Verify you have sufficient sample size for statistically meaningful results
- Recognize that low counts may reduce confidence in sentiment scores
- Ensure you have enough data before drawing conclusions from trends
High Empathy Rate
- Assess whether your assistant is acknowledging user feelings appropriately
- Identify if low empathy rates indicate a need to tune response templates
- Evaluate whether conversation flows need to be more human-centered
- Recognize that high empathy matters especially when users are frustrated or confused
Conversations with Negative Emotions
- Treat high percentages as red flags indicating users are struggling
- Investigate the root causes of frustration and confusion
- Monitor this metric closely to identify deteriorating user experiences
Average Inferred CSAT Over Time
- Identify whether satisfaction is improving, declining, or holding steady
- Correlate dips with specific events like assistant updates or outages
- Look for correlations with organizational changes
- Investigate sudden drops to identify what changed
Transfers and Escalations Over Time
- Recognize that some escalations are normal and expected
- Treat climbing trends as signals that your assistant needs improvement
- Consider whether frequent transfers indicate the assistant is out of its depth
- Evaluate whether users trust the assistant to handle their issues
Average Inferred CSAT (Virtual Agent)
- Use this metric to benchmark assistant performance independently
- Compare against Live Agent CSAT to understand relative performance
- Evaluate the assistant's effectiveness separate from human agent quality
Average Inferred CSAT (Live Agent)
- Compare this to Virtual Agent CSAT to evaluate the impact of escalations
- Determine whether escalations are improving or worsening user experience
- Assess the quality of handoffs between assistant and live agents
Average Inferred CSAT (Session)
- Review the most holistic view of user satisfaction
- Understand the complete experience from start to finish
- Evaluate the combined effectiveness of both assistant and human interactions
Assistant Recommended Next Steps
- Low: No clear guidance was provided
- Medium: Some guidance was provided
- High: Clear, complete instructions were given
- Recognize that unclear next steps lead to increased frustration
- Understand that users may give up without clear guidance
- Track this as a key quality metric for assistant effectiveness
Conversation Insight Inferred Resolution State
- Yes: The assistant met the user's needs
- No: The issue was not resolved
- Unknown: The system couldn't determine resolution
- Focus on driving the Yes percentage upward
- Recognize this as arguably the most important metric
- Understand that without resolution, other metrics matter less
Empathy Levels Distribution Over Time
- Monitor whether your assistant is becoming more or less emotionally intelligent
- Evaluate whether changes to response tone are having the desired effect
- Track the impact of improvements to conversation design
Negative Emotion Feedback Over Time
- Identify whether negative emotions spike on certain days
- Correlate spikes with specific assistants or changes
- Diagnose why upward trends are occurring
- Work to drive this line downward through assistant improvements
Conclusion
Congratulations! 🎉 You've completed this tutorial.
Check out the Next Experience Center of Excellence for more resources
