NLU returning very wrong results with 100% confidence
03-28-2023 07:14 AM
We have identified strange results from NLU. Some expressions that are not registered in any intent still match with 100% confidence. Typing random letters also incorrectly matches some intents. Even after we give negative feedback on these predictions, they keep showing up. We would like to know how to stop these wrong results from appearing in user searches.
For example, I searched "llllllllllll" and it matched an intent with 100% confidence, even though no utterance in that intent contains anything remotely like what I searched for:
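One likely reason for this behaviour (a generic illustration, not ServiceNow's actual implementation) is that many intent classifiers take the argmax of a softmax over per-intent scores, and a softmax always sums to 1. Even a nonsense input can therefore yield near-100% "confidence" if one raw score happens to dominate the others:

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities that always sum to 1."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for a junk utterance like "llllllllllll".
# The model has never seen it, but one intent still scores slightly higher.
logits = [8.0, 1.0, 0.5]
probs = softmax(logits)

# The top intent gets >99% "confidence" even though the input is nonsense,
# because softmax only measures *relative* preference among known intents.
print(max(probs))
```

This is why a separate "no intent" mechanism (discussed later in this thread) is needed: the softmax itself has no way to say "none of the above".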
05-07-2024 05:33 AM
Hi Brian,
The first ticket is the most recent, but I am providing the ones I have open as well as the original one that spawned these tickets. The issues involve NLU, AI Search, and Virtual Agent.
CS7324502 - NLU model returning a 99% rating (Setup Topics model). No idea why, and we are not able to stop it. It does not occur in the prior environment, but I was able to reproduce the 99% in a PDI using the same phrase. Thumbs down does not work, and since the model is OOTB there is no way to modify or deactivate it.
CS7321523 - AI Search vocab sources not syncing (blocking us from publishing the OOTB AI Search model). I think it is related to using a custom URL for our instance, which is odd, as that is a SN feature.
CS7298339 - NLU model giving an MS Teams notification instead of knowledge. This is related to a broken ACL, which also appears in OOTB PDI instances; the broken ACL causes the Show Notification topic to be blocked by security.
CS7262927 - The original issue, which seems to have been caused by the partner who set up the nearly identical models. I had to go through and delete those models to get things working, and I was looking to move the update sets from this work to the TEST environment, but that has now spawned CS7324502.
Please let me know if you have any questions.
Kurt

05-07-2024 09:34 AM
Thanks for the list of cases. Here is my feedback on them -
CS7324502 - I have asked the TSE to set up a Zoom meeting, and I will join to assist. The Setup Topics NLU Model is read-only and designed to be a template: you import its intents into your own custom NLU model, so that you can make changes by adding or removing utterances that conflict with your expected behaviour.
CS7321523 - Custom URLs are supported; we just need to add a public/private URI mapping to the ML Scheduler. We have engaged our internal team to add this mapping, which will fix the issue with syncing Tables vocab sources. It should be done soon.
CS7298339 - This case is with a different support team, but I have added myself to the work notes watchlist so that I can track it and assist where I can.
CS7262927 - This case is closed, but I will assist in the follow-up case CS7324502, where we will schedule a Zoom.
Best regards,
Brian
05-07-2024 09:52 AM
Thanks Brian.

05-14-2024 08:56 AM
Thanks for joining the Zoom call yesterday for CS7324502, where we were able to get the desired behaviour in Virtual Agent.
Here is a quick summary of the changes we made -
1. We changed the custom "Live Agent Support" topic to the type "Setup Topic", removed the link to the NLU model/intent so that the topic uses keywords instead, and published it.
2. We changed the default Chat Experience in the Conversational Interfaces settings for Virtual Agent, and linked the Live Agent chat to the "Live Agent Support" Topic that is only using keywords.
3. We deleted the [sys_cs_topic_language] record linked to the out-of-box read-only VA topic "Live Agent Support." (note the full stop at the end of the name). The [sys_cs_topic_language] table holds the mapping of each published VA topic to an NLU model/intent, per language, so removing the record means the out-of-box "Setup Topics Model" is no longer triggered for Live Agent.
4. We tested it in Virtual Agent Designer using the "Test Active Topics" button and confirmed in [open_nlu_predict_log] that the correct intent is now triggered for the correct utterances. The utterance "call a contact from directory" no longer triggers the Live Agent topic; it now triggers the Fallback Topic, which runs a search and returns the top 3 results, as per the expected behaviour you required.

05-29-2024 03:51 AM
We are still running into issues where topics are picked for incorrect utterances with 99% confidence, and there seems to be no way to stop it. We have tried the thumbs-down option while training.
Answer:
The thumbs-down option on the Test Panel feedback adds records to [ml_labeled_data], but that feedback is not applied to the NLU model until the model is tuned (see the "Last Tuned On" field in the [sys_nlu_model_status] table). If you have never tuned the NLU model, you can use the workaround in KB1318436 - [NLU] Utterance is returning an unexpected intent, when it should not return any intents in a custom NLU Model: add the utterances that should return no intents to the NOINTENT intent, and do NOT link it to any VA topic. Those utterances will then return no VA topic and will trigger the Fallback Topic instead. When testing in the NLU Workbench, these utterances will return the NOINTENT intent.
However, if you have tuned the NLU model, review KB1633901 - [NLU] Active Learning (AL) and Expert Feedback Loop (EFL) - Further insights on the utterance extraction from the Virtual Agent (VA) chat log to enable NLU Admins to provide feedback and further improve their NLU Models. Ensure the NLU model is tuned, so that "NO_INTENT" is added to the model artifact; the NLU Workbench will not show this intent. You can check by downloading the attachment on [ml_model_artifact] for the record with Model ID = "authoring.model.artifact" on the latest trained and active NLU model and opening it in a text editor. You can also run it through a JSON formatter to make it easier to read. It should include the utterances from the "text" field on [ml_labeled_data], marked as "Irrelevant" under the NO_INTENT intent in the model artifact. When testing in the NLU Workbench, these utterances will return "No intents", as expected.
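To make that artifact check easier, here is a minimal sketch of parsing the downloaded artifact JSON and listing the utterances marked as irrelevant. The field names and nesting used below ("labeledData", "text", "label", "Irrelevant") are assumptions for illustration only, based on the description above, not a documented schema; verify them against your actual downloaded file:

```python
import json

def list_no_intent_utterances(artifact_json):
    """Return utterances marked as Irrelevant (NO_INTENT) in a model artifact.

    The artifact layout assumed here is simplified for illustration;
    check the real structure of your downloaded attachment.
    """
    data = json.loads(artifact_json)
    return [
        item["text"]
        for item in data.get("labeledData", [])
        if item.get("label") == "Irrelevant"
    ]

# Hypothetical excerpt of a downloaded artifact, pretty-printed for reading.
sample = json.dumps({
    "labeledData": [
        {"text": "llllllllllll", "label": "Irrelevant"},
        {"text": "call a contact from directory", "label": "Irrelevant"},
        {"text": "reset my password", "label": "reset_password"},
    ]
}, indent=2)

# Prints the two utterances that should return no intent.
print(list_no_intent_utterances(sample))
```

If the thumbs-down utterances do not show up in the "Irrelevant" list after tuning, the feedback has not yet been incorporated into the model.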
I hope this helps with ensuring that specific utterances which currently return an existing intent, but should return no intents, behave as expected; we see many cases raised about how to resolve this behaviour.
Regards,
Brian