Sharon_Barnes
ServiceNow Employee

Overview

If you're investing in AI assistants, there's one brutal question you need to answer: Are users solving problems on their own—or just getting stuck before escalating to support?

The Assistant Analytics Self-Solve Performance Page exists to answer exactly that. It's not just a dashboard—it's your reality check. It tells you whether your assistant is actually deflecting tickets, helping users resolve issues end-to-end, or quietly pushing users toward human agents anyway.

If your goal is to scale support without scaling headcount, this page is where you validate that strategy. Most teams track AI performance at a surface level, but the teams that win track outcomes, investigate trends, and optimize for effort alongside resolution. This dashboard gives you everything you need—but only if you read between the lines.

 

Family Release: Zurich Patch 6

Release: Now Assist for Platform version 10.0.3
Roles Required: virtual_agent_admin
 

00:00: This video demonstrates how to analyze the self-solved performance tab within Assistant

00:03: Analytics.

00:05: It highlights key metrics that measure AI assistants' effectiveness in helping users

00:10: resolve issues independently without human intervention.

00:14: Now on this page, you will see how effective all of your AI agent assistants are at

00:19: helping users solve their own problems without needing to speak to a human agent.

00:29: That means it's covering your Now Assist panel, virtual agent, and voice agents, and

00:33: how effectively they help users deflect issues.

00:40: Our first report is total deflection events.

00:44: This is the total number of self-solve events recorded, meaning all of the

00:49: opportunities where a user was trying to resolve something on their own.

00:54: It's your baseline metric for understanding

00:56: how many self-serve interactions are happening.


01:01: Next, we have total deflections.

01:04: This counts the number of times a user actually resolved their issues through

01:08: self-service.

01:09: These are your total wins.

01:12: Next, we have total live agent transfers.

01:15: This tracks

01:16: how many conversations got escalated to a human agent.

01:19: And you'll also see a percent change from the previous period and whether that's

01:23: trending up or down.

01:28: Next, we have our deflection rate.

01:31: This shows us the interactions where users successfully deflected their problems

01:35: without contacting a live agent.

01:38: It's calculated as (total self-solved events / total events)

01:42: x

01:44: 100. A higher

01:45: deflection rate means your assistants are effective at keeping your users independent.

01:56: Next, we have our deflection rate over time.

01:58: This shows whether the deflection rate is increasing or decreasing.

02:03: You can hover over any point to see the exact deflection rate for that day.

02:08: Use this to identify whether deflection performance is improving or declining, and

02:14: relate those trends to

02:15: any changes

02:16: you may have made in your assistants at that time.

02:25: We can scroll down and see our deflection outcome distribution.

02:30: This explains self-solve outcomes:

02:32: Resolved means an AI asset executed or the user gave positive feedback;

02:36: Response Provided indicates a synthesized answer;

02:39: No Response Provided means no answer;

02:42: Not Resolved shows negative feedback.

02:44: It helps you understand how deflection occurs and where it falls short.

02:49: Next, we have deflection types offered.

02:52: Your AI assets include a variety of deflection methods from catalog items to

02:56: synthesized responses to knowledge articles and so on.

03:00: It helps you understand which deflection mechanisms are being used the most and

03:05: whether you're using the full range of capabilities available.

03:13: And last, we have effort score.

03:17: Effort score tracks

03:18: how much effort users have to put in during the conversations, categorized as high,

03:23: medium, and low.

03:27: So low effort means a smooth, easier self-serve experience,

03:30: whereas high indicates your users are struggling somehow, even when they eventually

03:35: do solve their issues.

03:47: The video covers metrics like deflection events, resolution rates, escalation trends,

03:52: and effort scores to evaluate and improve

03:54: AI assistant self-service performance.

 
 

Understanding Key Metrics

 

Total Deflection Events: Self-Serve Opportunities

This shows the total number of self-solve events recorded, meaning all the times the system attempted to help a user resolve something on their own. It's your baseline volume metric for understanding how many self-service interactions are happening. 

  1. More events mean more chances to deflect tickets.

  2. Low events indicate users aren't engaging with self-service.

Indicator: Deflection Logs


Total Deflections: Your Real Wins

This counts the number of times users actually resolved their issues through self-service.

These are sessions where:

  1. AI agent actions or skill executions worked

  2. Topics executed successfully

  3. Or users explicitly gave positive feedback

These are your wins, where the assistant did its job and kept the user from needing a human. This is the number you want to see growing.

Indicator: Deflection Logs: Resolved


Total Live Agent Transfers

This tracks how many conversations got escalated to a human agent. You'll also see the percentage change from the previous period, which tells you if transfers are trending up or down.

  1. Monitor the trend direction rather than focusing solely on raw numbers

  2. Investigate recent changes to flows or responses if transfers are increasing

  3. Celebrate improvements in coverage and confidence when transfers decrease

Indicator: Total Live Agent Transfers


Deflection Rate: Your ROI Metric

This is the percentage of interactions where users successfully solved their problem without contacting a live agent. It's calculated as (total self-solved events / total events) x 100.

  1. A higher deflection rate means your assistant is effective at keeping users independent.

This is one of your key ROI metrics, as every deflected interaction is a support ticket you didn't have to pay a human to handle.

Indicator: Deflection Rate
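
The formula above is simple enough to sketch directly. This is an illustrative computation, not the dashboard's actual implementation; the function name and the counts are assumptions, and in practice the numbers would come from your Deflection Logs.

```python
def deflection_rate(total_self_solved_events: int, total_events: int) -> float:
    """Deflection rate = (total self-solved events / total events) x 100."""
    if total_events == 0:
        return 0.0  # no recorded events, so no rate to report
    return total_self_solved_events / total_events * 100

# Example: 640 self-solved events out of 800 total events
print(deflection_rate(640, 800))  # → 80.0
```

Guarding the zero-event case matters in practice: a newly launched assistant with no traffic should report 0% rather than raise an error.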


Deflection Rate Over Time: Your Change Detector

This shows your deflection rate as a trend line over the selected date range. Hover over any point to see the exact deflection rate for that day. Use this to identify when deflection performance improved or declined, and correlate those changes with assistant updates, new content, or other factors. If you see a dip, something changed and you need to figure out what.

  1. Use this to correlate with: new assistant releases, knowledge updates, or workflow changes.

  2. Treat a dip as a possible regression and investigate promptly

Indicator: Deflection Rate


Deflection Outcome Distribution: Resolution Quality Analysis

Not all self-service interactions are equal. This breakdown categorizes what actually happened during each session.

Outcome Categories
  1. Resolved means at least one AI asset executed or the user gave positive feedback.

  2. Response Provided means the assistant gave a synthesized answer.

  3. No Response Provided means the assistant had nothing to offer.

  4. Not Resolved means the user gave negative feedback.

Indicator: Deflection Rate
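
The four outcome categories above can be sketched as a simple classifier. The session field names and the precedence between negative feedback and a synthesized answer are assumptions for illustration, not the actual Deflection Logs schema.

```python
def classify_outcome(session: dict) -> str:
    """Map a self-solve session to one of the four outcome categories."""
    # Resolved: at least one AI asset executed, or explicit positive feedback
    if session.get("ai_asset_executed") or session.get("feedback") == "positive":
        return "Resolved"
    # Not Resolved: the user explicitly gave negative feedback
    if session.get("feedback") == "negative":
        return "Not Resolved"
    # Response Provided: a synthesized answer was shown, with no clear signal either way
    if session.get("synthesized_answer"):
        return "Response Provided"
    # No Response Provided: the assistant had nothing to offer
    return "No Response Provided"
```

Walking each session through checks like these is a useful mental model for reading the distribution chart: a large "Response Provided" slice, for instance, means answers are being given but users aren't confirming success.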


Deflection Types Offered: Capability Utilization

Your assistant can deflect through multiple channels: knowledge articles, catalog items, AI-generated responses, and automated actions.

  1. Examine which deflection types are being utilized most frequently

  2. Identify underutilized platform capabilities if one type dominates

  3. Develop strategies to activate unused deflection methods

Indicator: Deflection Rate


Effort Score: The Hidden User Experience Metric

This underrated metric measures how hard users had to work to resolve their issues, even when they succeeded. Use this to identify friction points in the self-solve process and make interactions more efficient.

Effort Categories
  1. Low → smooth experience

  2. Medium → acceptable

  3. High → frustrating

Indicator: Effort Score
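
As a rough illustration of the low/medium/high buckets, the sketch below uses conversation turn count as a stand-in effort signal. The real scoring model and its thresholds are not documented here, so treat both the signal and the cutoffs as assumptions.

```python
def effort_bucket(user_turns: int) -> str:
    """Bucket a conversation into low/medium/high effort by turn count (illustrative)."""
    if user_turns <= 3:
        return "low"     # smooth, quick self-serve experience
    if user_turns <= 7:
        return "medium"  # acceptable, but with some back-and-forth
    return "high"        # the user struggled, even if they eventually solved it
```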


How to Use This Dashboard Effectively

Apply this five-dimensional mental model to analyze your assistant's performance:

  1. Assess Volume through Total Events to determine if users are engaging

  2. Evaluate Success via Total Self-Solved Events (Resolved) to confirm problems are being resolved

  3. Identify Failure points using Transfers to locate where the assistant breaks down

  4. Measure Efficiency with Effort Score to gauge experience smoothness

  5. Calculate ROI using Deflection Rate to validate cost savings
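
The five checks above can be pulled together into a single snapshot, sketched below with hypothetical counts read off the dashboard; the function and field names are illustrative only.

```python
def dashboard_summary(total_events: int, resolved: int,
                      transfers: int, high_effort_sessions: int) -> dict:
    """Combine the five review dimensions into one snapshot (illustrative)."""
    rate = resolved / total_events * 100 if total_events else 0.0
    return {
        "volume": total_events,                     # 1. are users engaging?
        "success": resolved,                        # 2. are problems being resolved?
        "failure": transfers,                       # 3. where does the assistant break down?
        "efficiency": high_effort_sessions,         # 4. how much friction remains?
        "roi_deflection_rate_pct": round(rate, 1),  # 5. validated cost savings
    }
```

Reviewing all five numbers together keeps any single metric from misleading you: a rising deflection rate matters less if effort is climbing alongside it.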

Conclusion

 

The Self-Solve Performance Page provides comprehensive visibility into your AI assistant's true impact. By tracking outcomes instead of just interactions, investigating trends rather than snapshots, and optimizing for effort alongside resolution, you can build a self-service system users actually trust. This dashboard delivers everything you need to validate your support scaling strategy; the question is whether you're ready to read between the lines and act on what the data reveals.

Check out the Assistant Analytics Hub for more resources