Utterances are triggering incorrect topics
04-10-2025 03:44 AM
Hi all,
Context: Utterances are triggering incorrect topics when end users interact with the virtual agent.
Actions Taken:
- Submitted feedback via the NLU Workbench test panel, marking the utterance as not relevant to any model.
- Verified the feedback is present in the ml_labeled_data table.
- Added the utterance to the test set in the build and test module, selecting "not relevant" for the model.
- Reviewed all training utterances and found none similar.
- Trained the model, ran the batch test, and published the model, but the issue persists.
- Adjusted the confidence threshold.
- We are on the Xanadu release.
Question: What additional steps can I take to prevent this issue? I am encountering false positives that are affecting the model's accuracy.
Any guidance would be greatly appreciated.
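As a side note on the confidence-threshold step above, here is a toy sketch (in Python, purely illustrative and not ServiceNow's actual implementation) of how a confidence threshold gates intent matching: predictions below the threshold are discarded, so raising the threshold suppresses weak false-positive matches at the cost of more "no intent matched" outcomes.

```python
def match_intent(predictions, threshold=0.7):
    """Return the best intent at or above the threshold, else None.

    predictions: dict mapping intent name -> confidence score (0..1).
    threshold: minimum confidence required to trigger a topic.
    """
    if not predictions:
        return None
    # Pick the intent with the highest confidence score.
    best_intent, best_score = max(predictions.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return best_intent
    return None  # treated as "not relevant" -> fallback topic


# A weak match that would otherwise trigger the wrong topic is suppressed:
print(match_intent({"reset_password": 0.55, "order_status": 0.40}))  # None
# A strong match still goes through:
print(match_intent({"reset_password": 0.91}))  # reset_password
```

The intent names here are hypothetical; the point is only that threshold tuning trades false positives for fallback responses, which is why it alone may not eliminate the mis-triggered topics.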
1 Reply
04-11-2025 09:39 AM
When this happens to me, after going through the steps you have already taken, I open the intended Intent, add the misfiring utterance as a training utterance there, then train and publish the NLU model to force it to behave as needed.
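The workaround above can be illustrated with a toy classifier (this is not the NLU Workbench algorithm, just a word-overlap sketch with made-up intent names): adding the problem utterance as a training example for the correct intent pulls matching inputs toward that intent after retraining.

```python
def similarity(a, b):
    """Jaccard word-overlap similarity between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)


def classify(utterance, training_data):
    """Return the intent whose training utterances best match the input."""
    best_intent, best_score = None, 0.0
    for intent, examples in training_data.items():
        score = max(similarity(utterance, ex) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent


training = {
    "order_status": ["where is my order", "track my order"],
    "reset_password": ["reset my password", "forgot password"],
}
query = "my badge stopped working"
# Before the fix, the input weakly matches an unrelated intent:
print(classify(query, training))  # order_status (a false positive)
# Workaround: add the utterance to the correct intent and "retrain":
training["badge_access"] = ["my badge stopped working"]
print(classify(query, training))  # badge_access
```

The design point is that explicit training utterances dominate weak partial matches, which is why adding the utterance to the right Intent and republishing forces the desired routing.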