Deflection Tracking Configuration in ServiceNow
12-29-2025 05:23 AM
We need to configure the following controls in our ServiceNow instance to enable accurate and consistent tracking of Virtual Agent deflection metrics.
These controls are specifically aligned with our implementation of LLM-based Virtual Agent topics, where user interactions are handled through free-text input and generative responses.
1. Deflection Tracking When Asking a Question
Given the user asks a question in the Virtual Agent without selecting predefined options (e.g., Report an Issue, Catalog Items, Search KB),
When the system displays relevant results (e.g., knowledge articles),
Then the system must prompt the user: “Did this answer your question?”.
2. Deflection Outcome
If the user confirms the answer resolved their query,
Then mark the interaction as Deflected.
If the user indicates the answer did not resolve their query,
Then mark the interaction as Not Deflected.
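The outcome rule in section 2 can be sketched as a small helper. This is plain JavaScript (the language ServiceNow server scripts use), but the function name and outcome values are illustrative assumptions, not out-of-the-box ServiceNow APIs:

```javascript
// Map the user's reply to the "Did this answer your question?" prompt
// onto a deflection outcome. Names and values here are illustrative only.
function classifyDeflection(userResponse) {
  // Normalize common affirmative/negative phrasings from the chat UI.
  var answer = String(userResponse).trim().toLowerCase();
  if (answer === 'yes' || answer === 'y') {
    return 'deflected';       // user confirmed the answer resolved the query
  }
  if (answer === 'no' || answer === 'n') {
    return 'not_deflected';   // user said the answer did not help
  }
  return 'unknown';           // anything else: do not guess, flag for review
}
```

Keeping an explicit `unknown` bucket avoids silently counting ambiguous replies as deflections, which would inflate the metric.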
3. Non-Deflection Capture
Given the user chooses to escalate to a Live Agent after viewing results,
Then the system must record the interaction as Not Deflected in the designated table (assuming a table exists for non-deflections).
4. Metric Logging
All Deflected and Not Deflected outcomes must be logged for reporting purposes.
If the non-deflection table exists, then ensure entries for escalations to Live Agent are captured there.
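As a sketch of the logging requirement, the following in-memory model shows which fields an entry would need for reporting. In an actual instance this would be a GlideRecord insert into a custom table; the table name, field names, and helper functions below are my own assumptions, not existing ServiceNow objects:

```javascript
// In-memory model of the metric log described above. In ServiceNow this
// would instead be a GlideRecord insert into a custom table (e.g. a
// hypothetical u_va_deflection_log); this just shows the fields to capture.
function createDeflectionLog() {
  var entries = [];
  return {
    record: function (interactionId, outcome, escalatedToLiveAgent) {
      entries.push({
        interaction: interactionId,
        outcome: outcome,                        // 'deflected' | 'not_deflected'
        escalated: Boolean(escalatedToLiveAgent), // Live Agent escalation flag
        loggedAt: new Date().toISOString()
      });
    },
    // Reporting helper: count outcomes for the deflection-rate metric.
    count: function (outcome) {
      return entries.filter(function (e) {
        return e.outcome === outcome;
      }).length;
    }
  };
}
```

Recording the escalation flag separately from the outcome lets the same log serve both section 3 (non-deflection capture) and section 4 (overall metric reporting).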
2 weeks ago
Did you ever find a solution to this?
We had the same requirement in my company. Unfortunately, after reviewing this in detail with ServiceNow HI Support, Technical Consultants from ServiceNow, and even someone from the ServiceNow Now Assist Product Team around September 2025, it seems that it is not technically possible to meet these requirements with Now Assist as it stands today.
The constraint we found is that once the conversation enters the LLM node, it gets "stuck": you cannot force it to return to a specific node where you could ask this validation question. We tried every variation of instructions for the LLM-enabled nodes, and it just does not work.
We are considering implementing it in reverse instead:
At the beginning of the conversation we would ask whether the employee (1) wants a status update or has a question about an existing record, (2) has a new question they need help with, or (3) wants to chat with an agent. That way we capture their original intent. Then, depending on how the conversation ends, we can compare the outcome (e.g. opened a case or not, live chatted or not) with the original intent. It is not a perfect solution, but at least it gives us some data.
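The intent-versus-outcome comparison described above could be sketched like this. The intent labels, outcome flags, and function name are all illustrative assumptions for the sake of the example:

```javascript
// Sketch of the "reverse" approach: capture intent up front, then compare
// it with what actually happened. Labels and fields are illustrative only.
function evaluateIntentVsOutcome(intent, outcome) {
  // intent: 'status_update' | 'new_question' | 'live_agent'
  // outcome: { openedCase: boolean, liveChatted: boolean }
  if (intent === 'live_agent') {
    return 'not_deflected';      // user wanted a live agent from the start
  }
  if (outcome.openedCase || outcome.liveChatted) {
    return 'not_deflected';      // self-service did not resolve the request
  }
  return 'likely_deflected';     // no case, no chat: assume it was resolved
}
```

Note the hedged `likely_deflected` value: without the validation prompt, a conversation that ends with no case and no chat can only be inferred, not confirmed, as a deflection.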

