NLU returning very wrong results with 100% confidence.
‎03-28-2023 07:14 AM
We have identified strange results using NLU. Some expressions that are not registered in any intent are matched with 100% confidence. In addition, searches made up of random letters also incorrectly hit some intents. Even after giving negative feedback on these predictions, they keep showing up. We would like to know how to prevent these wrong results from appearing in user searches.
I searched for "llllllllllll" and it matched an intent with 100% confidence, even though none of that intent's utterances contains anything remotely like what I searched for:
‎05-29-2024 05:21 AM
Hi Brian,
Thanks for the details. On your first suggestion about sys_nlu_model_status: the model is listed as tuned as of yesterday, and it does not have the NOINTENT intent associated with it. I can try adding that, but honestly VA seems to work better using plain keywords than spending time on any NLU model. The intent currently being returned contains zero of the words in the phrase I am testing, yet it is still picked with 99% confidence despite having no word match.
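To illustrate the situation Kurt describes (a 99%-confidence intent whose training utterances share no words with the query), one possible safety net is a lexical-overlap guard applied after the NLU prediction. This is only a sketch and not a ServiceNow API: the `prediction` dict shape, the `min_overlap` threshold, and the helper names are all hypothetical.

```python
def token_overlap(utterance, intent_utterances):
    """Fraction of query tokens that appear in any training utterance
    of the predicted intent (hypothetical helper, not a platform API)."""
    query = set(utterance.lower().split())
    corpus = set()
    for u in intent_utterances:
        corpus.update(u.lower().split())
    if not query:
        return 0.0
    return len(query & corpus) / len(query)

def sanity_check(prediction, intent_utterances, min_overlap=0.2):
    """Reject a high-confidence prediction that has no lexical support,
    treating it as 'no intent' instead of trusting the raw score."""
    if token_overlap(prediction["utterance"], intent_utterances) < min_overlap:
        return None
    return prediction
```

With this guard, a gibberish query like "llllllllllll" against an intent trained on phrases such as "reset my password" yields zero overlap and would be suppressed regardless of the model's reported confidence. The threshold of 0.2 is an arbitrary illustration and would need tuning against real traffic.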
On the second suggestion: I did mark the phrase as a mismatch and irrelevant in feedback, then saved, trained, and republished, but I do not see the entry listed in the JSON.
Kurt