NLU returning very wrong results with 100% confidence
03-28-2023 07:14 AM
We have identified strange results when using NLU. Some expressions that are not registered in any intent still match an intent with 100% confidence. In addition, typing random letters also incorrectly matches some intents. Even after we give negative feedback on these predictions, they keep showing up. We would like to know how to prevent these wrong results from appearing in user searches.
I looked up "llllllllllll" and it matched an intent with 100% confidence, even though no utterance in that intent has anything remotely like what I searched for.
03-28-2023 07:17 AM
Hi @Albergaria, can you please send me screenshots of the intents and utterances?
03-28-2023 08:12 AM
Hi, Uday!
There are a lot of utterances in this intent, and none of them are similar to what I typed. The point is that by typing random things like "ccccc" or "yyyyyy" I get sent to several different intents with 100% confidence.

03-28-2023 09:31 AM
Hello Victor Albergar,
Can you please raise a case for this NLU prediction issue with ServiceNow Support and request that "Brian Bakker" work on your case? This will enable me to review your NLU Model and determine the root cause.
Many thanks,
Brian

04-04-2023 09:51 AM - edited 04-04-2023 09:51 AM
Hello Victor Albergar,
Thanks for the Zoom meeting last week, where we reviewed your NLU Model. Here are the changes we made to fix this issue:
1. Created Vocabulary Items for all your business terms and acronyms and provided a synonym for each. For example, you had the term "Concur" in one of your sample utterances; after we added it as a Vocabulary Item with the synonym "Software application", the test utterance "Copo" no longer returned the intent containing "Concur". Once you had created all the required Vocabulary Items, the model stopped returning any intents for "junk" utterances such as "yyyyyy" (a minimal sketch of the underlying idea follows the list below).
For further information, please review the Community Articles "NLU Best Practice - Using Vocabulary & Vocabulary Sources" and "Good practices for making NLU vocabulary updates".
2. You had many duplicate/similar and single-word sample utterances. I asked you to remove the duplicate sample utterances, and you should avoid single-word utterances, because they carry no context and are ambiguous (see the utterance-audit sketch below).
For further information on how to build a good NLU Model, please review our Community Article "In-depth guide to building good NLU Models".
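To illustrate the general idea behind step 1 outside of ServiceNow, here is a minimal, hypothetical Python sketch (this is not ServiceNow code, and the vocabulary mapping is invented) of expanding business terms and acronyms to plain-language synonyms before an utterance reaches a classifier:

```python
import re

# Hypothetical vocabulary: business term/acronym -> plain-language synonym.
# In ServiceNow, this mapping lives in Vocabulary Item records, not in code.
VOCABULARY = {
    "concur": "software application",
    "vpn": "remote network access",
}

def normalize_utterance(text: str) -> str:
    """Lower-case the utterance and expand known vocabulary terms."""
    tokens = re.findall(r"\w+", text.lower())
    return " ".join(VOCABULARY.get(tok, tok) for tok in tokens)

if __name__ == "__main__":
    print(normalize_utterance("How do I log in to Concur?"))
    # -> how do i log in to software application
```

The sketch only shows why substituting a common synonym for a rare business term keeps that single token from dominating a prediction; in your instance the equivalent mapping is maintained as Vocabulary Items rather than in a script.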
I have been testing your NLU Model and it is now returning no intents for these "junk" utterances, and your Brazilian Portuguese NLU Model is now making the expected predictions.
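As a closing note on step 2, here is a minimal, hypothetical Python sketch (again, not ServiceNow code, and the sample list is invented) of the kind of check you can run over exported sample utterances to catch duplicates and single-word entries before training:

```python
from collections import Counter

def audit_utterances(utterances: list[str]) -> dict[str, list[str]]:
    """Flag duplicate and single-word sample utterances for review."""
    normalized = [u.strip().lower() for u in utterances]
    counts = Counter(normalized)
    duplicates = [u for u, n in counts.items() if n > 1]
    single_word = [u for u in counts if len(u.split()) == 1]
    return {"duplicates": duplicates, "single_word": single_word}

if __name__ == "__main__":
    sample = [
        "Reset my password",
        "reset my password",        # duplicate after normalization
        "password",                 # single word, no context
        "I cannot access my email",
    ]
    print(audit_utterances(sample))
    # -> {'duplicates': ['reset my password'], 'single_word': ['password']}
```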
Best regards,
Brian