Assistant Analytics Usage: Your Guide to Measuring ROI and Adoption of Conversational Interfaces
Articles Hub
Want to see all of our other articles and blogs related to ServiceNow AI Platform? We'll have more on Assistant Analytics soon.
Assistant Analytics Hub
Overview
If you've deployed AI assistants across your organization, you're probably wondering: Are people actually using them? Where are they getting stuck? Which channels are driving the most engagement? The Usage page answers all of these questions. It gives you a real-time view into how your assistants are performing — from conversation volumes to completion rates to channel preferences. Think of it as your command center for understanding assistant adoption and identifying where to focus your optimization efforts.
The assistants that perform best are the ones that get continuously optimized based on real usage data. Rather than guessing what's working, the Usage page provides concrete insights into user behavior, engagement patterns, and potential problem areas. This data-driven approach allows you to make informed decisions about where to invest your time and resources.
Let's walk through the key metrics and how to use them to improve your AI assistant strategy.
00:00: This video demonstrates how to effectively monitor conversational AI assistant usage within ServiceNow.
00:07: It highlights key analytics features that help platform owners understand assistant performance and user engagement.
00:14: Four visualizations, built to answer the questions that matter most to platform owners.
00:20: Now that you're running Conversational AI assistants on ServiceNow, you need to know more than just whether people are using them.
00:27: You need to know how they're being used. That's exactly what the Usage tab in Assistant Analytics gives you.
00:34: Total Conversations by Assistant displays which assistants manage the most interactions and which ones are underperforming.
00:42: That gap is worth investigating.
00:44: Is an assistant underutilized because it isn't addressing the right issues, or because users are unaware of its existence?
00:52: Total Conversations by Channel shows you where conversations are happening such as Teams, Slack, web portal, mobile, or other interfaces.
01:01: This helps you understand where your users like to engage, so you can place your best assistants in the right spots to make the biggest impact.
01:16: Result Types Offered shows what your assistants return, such as knowledge articles, catalog items, and synthesized responses.
01:25: Color intensity indicates frequency at a glance. This is where you catch intent mismatches.
01:31: If users consistently get catalog items when you expected knowledge articles to dominate, your assistant configuration may need a second look.
01:39: And Conversation State Flow is your diagnostic view.
01:43: When Completed trends up, great.
01:46: If you're seeing high Faulted rates, something's broken in your flow. High Canceled rates mean users are giving up.
01:54: Both are signals you want to catch early.
01:58: Together, these four views give platform owners a clear, actionable picture of how conversational AI assistants are performing in the real world and exactly where to focus to make them better.
02:18: The video explains four key visualizations that provide actionable insights into AI assistant conversations, channels, results, and fault diagnostics for better platform management.
Track Conversation Volume by Assistant
What This Metric Shows
The total number of conversations each of your assistants is handling — your first stop for understanding which assistants are getting traction and which ones are sitting idle.
Interpreting Volume Patterns
If one assistant is handling 10x more conversations than the others, that's a signal. It might mean this assistant is solving a critical use case that users love, the other assistants need better visibility or tuning, or you should replicate what's working with the high-volume assistant.
Action Steps
- Identify which assistants are handling the highest conversation volumes
- Compare performance across all deployed assistants to spot outliers
- Prioritize optimization time on high-volume assistants that are working well
- Investigate low-volume assistants to determine if they need better promotion or configuration
- Document what makes your successful assistants effective for replication
Indicator: Assistant Conversations
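As a rough sketch of the analysis behind these steps, assuming you can export conversation records from the Usage page (the field names below are illustrative, not the actual export schema):

```python
from collections import Counter

# Hypothetical conversation records; the "assistant" field name is an
# assumption about your export format, not the actual schema.
conversations = [
    {"assistant": "IT Helpdesk"},
    {"assistant": "IT Helpdesk"},
    {"assistant": "IT Helpdesk"},
    {"assistant": "HR Assistant"},
]

# Total conversations by assistant -- the first visualization.
volume = Counter(c["assistant"] for c in conversations)

# Flag assistants handling far fewer conversations than the leader,
# using the 10x gap mentioned above as an arbitrary threshold.
leader = max(volume.values())
underused = [name for name, n in volume.items() if n * 10 < leader]
```

Any assistant landing in `underused` is a candidate for the promotion-or-configuration investigation described above.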
See Where Users Are Engaging
Understanding Channel Distribution
A breakdown of conversations by channel (Teams, Slack, portal, etc.) tells you where users actually want to interact with your assistants. Not all channels are created equal, and this visualization reveals user preferences.
Reading the Data
Maybe your web portal is crushing it while your mobile app is crickets, or perhaps Teams is the surprise winner. These patterns reveal where users naturally gravitate for assistant support.
Optimization Strategy
- Deploy your best assistants to high-usage channels first
- Evaluate whether low-usage channels need better promotion or should be deprioritized
- Investigate unexpected patterns to understand why channels you've invested in aren't performing
- Meet users where they already are rather than forcing adoption in unpopular channels
- Adjust your deployment strategy based on actual usage patterns
Indicator: Assistant Conversations
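The same kind of breakdown works per channel. A minimal sketch, again assuming an illustrative record format rather than the actual export schema:

```python
from collections import Counter

# Hypothetical records; "channel" is an assumed field name.
conversations = [
    {"channel": "Teams"},
    {"channel": "Teams"},
    {"channel": "Portal"},
    {"channel": "Mobile"},
]

by_channel = Counter(c["channel"] for c in conversations)
total = sum(by_channel.values())

# Each channel's share of all conversations.
share = {ch: n / total for ch, n in by_channel.items()}

# Channels below an arbitrary 10% share might need better promotion
# or deprioritization, per the strategy above.
low_usage = [ch for ch, s in share.items() if s < 0.10]
```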
Understand What Your Assistants Are Actually Delivering
Result Types Visualization
The types of results your assistants are returning (knowledge articles, catalog items, synthesized responses, etc.) show whether your assistant is actually serving up the content users need. Color intensity matters here — darker shades mean high frequency, lighter shades mean occasional use.
Identifying Mismatches
- Compare expected result types against actual usage patterns
- Look for high-frequency result types that might signal unexpected user needs
- Identify barely-used result types that may indicate poor discoverability
- Check if catalog item requests exceed knowledge article requests when the opposite was expected
- Validate that your assistant strategy aligns with real user needs, not assumptions
Indicator: Now Assist in Virtual Agent Results returned
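The mismatch check above can be sketched as a comparison between the result type you expected to dominate and the one that actually does. The result-type labels here are placeholders, not the platform's actual values:

```python
from collections import Counter

# Hypothetical result types returned across conversations.
results = [
    "catalog_item",
    "catalog_item",
    "knowledge_article",
    "catalog_item",
]

freq = Counter(results)

# Assumption: your configuration expected knowledge articles to dominate.
expected_dominant = "knowledge_article"
actual_dominant = freq.most_common(1)[0][0]

# A mismatch here is the signal that your assistant configuration
# may need a second look.
mismatch = actual_dominant != expected_dominant
```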
Diagnose Where Conversations Fail
Conversation State Flow
The flow of conversations through different states (Open, Completed, Faulted, Canceled) serves as your diagnostic dashboard for finding bottlenecks and failure points.
Understanding Each State
- Completed: These conversations reached a successful resolution, the gold standard for assistant performance.
- Faulted: Something broke in your conversation flow. High fault rates mean technical issues that need fixing.
- Canceled: Users gave up mid-conversation. This often means the assistant wasn't helpful or was asking for too much information.
- Open: These conversations are still in progress.
Remediation Steps
- Investigate conversation logs for high-fault-rate assistants to identify technical issues
- Review canceled conversations to understand why users abandoned the interaction
- Implement fixes and monitor whether completion rates improve
Metric tracked: Assistant Conversations
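Turning the state flow into rates makes the early-warning signals concrete. A minimal sketch with made-up state counts:

```python
# Hypothetical final states for ten conversations.
states = ["Completed"] * 7 + ["Faulted"] * 1 + ["Canceled"] * 2

total = len(states)

# Share of conversations in each state; the threshold below is an
# arbitrary example, not a platform recommendation.
rates = {
    s: states.count(s) / total
    for s in ("Completed", "Faulted", "Canceled", "Open")
}

# Flag the early-warning states described above.
alerts = [s for s in ("Faulted", "Canceled") if rates[s] > 0.15]
```

Tracking these rates over time, rather than as a one-off, is what tells you whether your fixes actually moved completion up.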
Putting It All Together
Your Continuous Improvement Workflow
The Usage page isn't just a reporting tool — it's your feedback loop for continuous improvement. A systematic approach to using these metrics ensures you're making data-driven decisions.
- Check conversation volume to see which assistants are getting traction
- Look at channel distribution to ensure you're deployed where users are
- Review result types to validate you're delivering what users need
- Analyze conversation states to find and fix failure points
- Document insights and create an action plan based on the data
- Implement changes and return to measure impact
- Share findings with stakeholders to demonstrate ROI and inform strategy
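The workflow above can be condensed into a single snapshot for stakeholder reporting. This is a sketch under the same assumption as earlier examples: field names are illustrative, not the actual export schema:

```python
from collections import Counter

def usage_snapshot(conversations):
    """Summarize the four Usage views into one report dict.

    Assumes each record has "assistant", "channel", and "state" keys;
    adapt to your actual export format.
    """
    total = len(conversations)
    return {
        "by_assistant": Counter(c["assistant"] for c in conversations),
        "by_channel": Counter(c["channel"] for c in conversations),
        "by_state": Counter(c["state"] for c in conversations),
        "completion_rate": sum(
            c["state"] == "Completed" for c in conversations
        ) / total,
    }

report = usage_snapshot([
    {"assistant": "IT Helpdesk", "channel": "Teams", "state": "Completed"},
    {"assistant": "IT Helpdesk", "channel": "Portal", "state": "Canceled"},
])
```

Re-running a snapshot like this on a regular cadence gives you the before-and-after comparison needed to demonstrate ROI.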
Starting Your Analytics Journey
The key to assistant optimization is treating the Usage page as an ongoing source of insight rather than a one-time report. Regular review of these metrics helps you spot trends early, respond to user needs quickly, and continuously refine your assistant strategy.
Conclusion
Ready to dive into your assistant analytics? Head to the Usage page and start exploring what your users are actually doing. The data is waiting to tell you where your assistants are succeeding, where they're struggling, and most importantly, where your optimization efforts will have the biggest impact. By following the workflow outlined above and regularly monitoring these key metrics, you'll transform your AI assistants from deployed tools into continuously improving resources that genuinely serve your organization's needs. Start with what the data is telling you, and iterate from there.
Check out the Assistant Analytics Hub for more resources