‎01-22-2021 07:59 AM
I am currently working on implementing Virtual Agent and NLU, but I am having some issues when it comes to matching the correct intents to the utterances we have listed. I have read and watched all the docs related to NLU and have added vocabulary/synonyms and various entities to try to tune NLU; however, one thing that is never clearly explained is exactly how the confidence threshold is calculated.
An example of one of the issues I'm having:
When entering the utterance/keyword 'error' into the test section, we receive the following intents matched:
- SearchKnowledgeBase - 85% match
- ProvideVirtualAgentFeedback - 85% match
- RaiseIncident - 70% match
However, when we look at the utterances for each of the above, you will notice that only 'RaiseIncident' actually has the keyword 'error' in any of its sentences, so I can't understand how either of the other two intents could come back with a higher % match.
Raise incident utterances:
Provide Virtual Agent Feedback utterances:
Search Knowledge Base utterances:
I have checked and ensured there are no synonyms for 'error' that would cause SearchKnowledgeBase or ProvideVirtualAgentFeedback to match. Can someone explain exactly how the confidence percentages are calculated, so I can understand why the above is happening and rectify it?
Thanks in advance for any help provided.
Rich
- Labels: Service Portal Development
‎02-08-2021 02:10 PM
Hey,
Thanks for reaching out.
Regarding the issue you are facing:
We are aware of some drawbacks in our earlier releases, as our models were based on embedding distances. Sometimes you will see matches that cannot be intuitively explained. In Quebec (the latest release at this time), we introduced neural-network-based algorithms that have significantly improved prediction quality.
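For intuition only, here is a minimal sketch of how embedding-distance matching behaves in general. The vectors and the plain cosine-similarity scoring below are toy illustrations, not ServiceNow's actual model or scores: the point is that a word can sit closer in vector space to an intent whose utterances never contain it literally than to one that does.

```javascript
// Toy sketch of embedding-distance matching (illustrative vectors only;
// NOT ServiceNow's actual model, dimensions, or confidence numbers).
function cosineSimilarity(a, b) {
    var dot = 0, na = 0, nb = 0;
    for (var i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// 'error' can land nearer to an intent whose utterances never contain
// the literal word than to the one that does.
var errorVec = [0.9, 0.1, 0.3];
var searchKb = [0.8, 0.2, 0.4]; // no literal 'error' in its utterances
var raiseInc = [0.5, 0.7, 0.1]; // does contain 'error'

console.log(cosineSimilarity(errorVec, searchKb) > cosineSimilarity(errorVec, raiseInc)); // true
```

This is why keyword overlap alone doesn't predict the ranking: distance in the embedding space, not literal word matches, drives the score.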
How to resolve the issue you are facing?
Using the machine learning algorithms from the Quebec release does not require a Glide upgrade; existing Paris and Orlando instances can use Quebec Machine Learning. A KB article with the details is being prepared and will be released. If you need the information before then, please contact support and we will provide it via a case.

‎02-08-2021 11:00 AM
One-word utterances are not great for inferring intent, as you can imagine. Simply put, when you train it, the "machine" creates an algorithm based on the model provided and subsequently uses that algorithm to make a prediction when a prediction request is sent to it by the Virtual Agent.
Because vocabularies and various "machine" parts are involved, either to create the algorithm or to transform the user sentence into a binary format for the algorithm to make the prediction, the results are not straightforward to explain or understand.
If you have realistic test utterances, the ones you expect your end users to enter to reach one of the intents, and they don't get the right topic, let us continue the conversation. What test utterances are realistic?
A one-word request could work, but you could also tell your users to describe their issue in a short sentence of 4 to 8 words, just as they would when conversing with a real agent.
‎06-04-2021 10:40 AM
I just heard an example of this, arguably slightly worse, from our prod Orlando environment and was able to easily recreate it in our non-prod Quebec environment with pretty much no difference: still getting a 100% match (and then backup 91% and 86% matches) on a one-word utterance that isn't used ANYWHERE in the model.
100% matches should ONLY be utterances that we have hardcoded in the model; there is absolutely no reason any NLU or AI should be 100% confident in anything that isn't programmed directly.
Moving to Quebec with AI Search fallback in particular, I very much want to bump our confidence threshold up, I'm thinking to around 80%, to give AI Search more emphasis, but even that would have done absolutely nothing in this case.
I'm seriously considering putting in text validation to simply disallow one-word and short (<10 characters, maybe?) utterances, but I know that's going to have a huge impact, not all good, considering many of our users are so accustomed to using one-word utterances as shortcuts to their desired topics, a carryover from last year when we still used keywords...
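A pre-check like that could be sketched as below. This is a hypothetical helper, not a real ServiceNow API; the function name and thresholds are illustrative, and where you would hook it in (client script, topic input validation, etc.) is up to you.

```javascript
// Hypothetical pre-check: flag one-word or very short utterances before
// they are sent to NLU, so the topic can ask the user to elaborate.
// Names and thresholds are illustrative, not a ServiceNow API.
function isUtteranceSpecificEnough(utterance, minWords, minChars) {
    var trimmed = (utterance || '').trim();
    var words = trimmed.split(/\s+/).filter(function (w) {
        return w.length > 0;
    });
    return trimmed.length >= minChars && words.length >= minWords;
}

console.log(isUtteranceSpecificEnough('error', 2, 10));                          // false
console.log(isUtteranceSpecificEnough('I get an error when logging in', 2, 10)); // true
```

Rather than rejecting short input outright, a gentler option is to use the same check to prompt for a fuller sentence (the "4 to 8 words" suggestion above) before falling back to the old keyword-style behavior.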