Why is topic discovery in NAVA harder than it looks?
Getting Now Assist to suggest the right Virtual Agent topic sounds straightforward. In practice, enterprise deployments tell a different story. Broad user language, overlapping business processes, and overlapping catalog items create conditions under which discovery is difficult to predict and hard to tune.
Now Assist in Virtual Agent works on semantic similarity, not keyword matching. That shifts the problem from training utterances to crafting topic descriptions that are differentiated enough to behave consistently.
This article focuses on a specific use case: getting Now Assist in Virtual Agent to surface two distinct topics from the same user prompt.
What is the use case?
When a user submits a prompt indicating they have an issue or problem, Now Assist in Virtual Agent must surface two topic options simultaneously: one to create an incident, and one to connect with a live agent. The user selects which path to follow. Neither topic should fire automatically or suppress the other.
Both topic descriptions are intentionally written to match prompts where the user reports an issue or problem. However, Now Assist in Virtual Agent provides no deterministic mechanism to guarantee that both topics are returned together in every response.
Discovery is driven by semantic-similarity scoring, meaning either topic can be surfaced independently based on how the prompt scores against each description at runtime.
Why is this a challenge?
Similarity thresholds can be adjusted via system properties, but these apply instance-wide and affect topic discovery everywhere, not just this use case. Lowering the threshold to increase the chance of both topics surfacing also increases the risk of unintended matches elsewhere.
Topic descriptions can be tuned to maintain close semantic proximity among themselves while remaining relevant to issue-related prompts, but this is not a reliable control mechanism. Small changes in user phrasing can shift scoring enough to drop one topic from the response.
How do we resolve this puzzle?
The solution was to introduce a parent topic that acts as an entry point for any prompt reporting an issue or problem. This topic owns the discovery responsibility; its description is written to match that intent reliably.
Once triggered, it presents the user with a choice: create an incident or speak to an agent. Based on the selection, the corresponding child topic is invoked directly, bypassing discovery entirely for that second step.
You might be wondering how to invoke a topic from within another topic. Good news: this is covered in the ServiceNow documentation.
Virtual Agent provides two methods to invoke a topic from within another: vaSystem.switchTopicByName() or vaSystem.switchTopicById(sysid).
In this scenario, both need to be called with 'resumeBehavior=skip' to ensure the conversation does not return to the parent topic once the selected topic completes.
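As a minimal sketch of the parent topic's routing step, the branching logic might look like the following. Note the hedges: the topic names (`Create Incident`, `Live Agent Support`) and the options-object shape for `resumeBehavior` are assumptions for this sketch, and `vaSystem` is stubbed here so the snippet runs standalone; on the platform, Virtual Agent provides the real object, so check the documentation for the exact call signature on your release.

```javascript
// Standalone sketch of the parent topic's routing step.
// vaSystem is stubbed so the logic can be exercised outside ServiceNow;
// the stub records the hand-off instead of actually switching topics.
const vaSystem = {
  switchTopicByName(name, options) {
    return { switchedTo: name, resumeBehavior: options && options.resumeBehavior };
  }
};

// userChoice comes from the parent topic's choice control.
// Topic names below are placeholders, not names from the article.
function routeIssue(userChoice) {
  if (userChoice === 'Create an incident') {
    // resumeBehavior 'skip' keeps the conversation from returning
    // to this parent topic after the child topic completes.
    return vaSystem.switchTopicByName('Create Incident', { resumeBehavior: 'skip' });
  }
  return vaSystem.switchTopicByName('Live Agent Support', { resumeBehavior: 'skip' });
}
```

The key design point is that this branch runs after the user's explicit selection, so no similarity scoring is involved in the second step.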
With this solution, discovery happens once, deterministically, through a single well-scoped topic. The routing logic lives in the conversation design, not in the similarity scoring layer.
What needs to be considered?
♦️ A valid concern with this approach is the extra conversational step it introduces; users who already know what they want should not have to navigate a parent topic to get there. To address this, both the incident creation topic and the live agent topic are configured as promoted topics, but not discoverable. They surface directly in the conversation interface and can be selected immediately, without going through the discovery flow. The parent topic handles free-text prompts; promoted topics handle users who already know their intent.
♦️ One additional consideration is field pre-population. When a user reaches the incident creation topic, whether through discovery or by selecting it as a promoted topic, the short description field should be pre-populated with the original user prompt. This avoids asking the user to repeat themselves and keeps the conversation fluid.
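For illustration, a small helper like the one below could normalise the captured prompt before it is written into the short description field. The helper itself and the 160-character cap are assumptions made for this sketch, not platform behaviour; on the platform, the actual assignment would happen in the topic's script step.

```javascript
// Illustrative helper: derive a short description from the original
// user prompt (collapse whitespace, cap the length). The 160-character
// cap is an assumed convention for this sketch, not a platform limit.
function toShortDescription(prompt) {
  const clean = prompt.trim().replace(/\s+/g, ' ');
  return clean.length <= 160 ? clean : clean.slice(0, 157) + '...';
}
```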
♦️ The parent topic becomes a single point of failure for this entire use case. If its description drifts out of alignment with user language over time, or if a platform upgrade affects how it scores, both downstream paths are affected simultaneously. The topic description should be reviewed periodically and updated to reflect any changes in how users report issues; this is not a set-and-forget configuration.
🚀 This approach worked well for this particular use case, but it might not be the only way to solve this problem. If you have tackled it differently, share your solution in the comments; the community will appreciate it!
