Best practices on NLU utterances

ayman_h
Kilo Sage

Hi all,

We have recently started using Virtual Agent with NLU. We have three separate NLU models, each containing a single intent, and each intent is connected to a different Virtual Agent topic.


For each intent, we have a good number of utterances, around 100 each, with every utterance having three words or more. Most utterances are simple, along the lines of "I need x" or "I got y". We copied some of the out-of-the-box NLU intents and added some common search terms used by our users. We have tried to avoid common pitfalls such as one-word utterances and abbreviations.


The NLU was working quite well until we introduced the 2nd and 3rd Virtual Agent topics. Now we see a 100% match for quite a few phrases, leading to the Virtual Agent responding with "I want to be sure I got this right. What item best describes what you want to do?". We have tried adjusting the NLU intent confidence delta threshold but didn't have any luck.
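For context on why that disambiguation prompt appears, here is a rough conceptual sketch of how a confidence-delta check typically behaves. This is not ServiceNow's actual implementation; the function name, the 0.05 delta, and the intent names are all illustrative. When the top two intent scores land within the delta of each other, the assistant cannot safely pick one and asks the user to choose.

```python
# Conceptual illustration only -- not ServiceNow's implementation.
# If the top two intent confidence scores are within the delta,
# the assistant cannot safely pick a winner and disambiguates.

def pick_intent(scores: dict[str, float], confidence_delta: float = 0.05):
    """Return the winning intent, or None if disambiguation is needed."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (top_intent, top_score), (_, runner_up_score) = ranked[0], ranked[1]
    if top_score - runner_up_score < confidence_delta:
        return None  # ambiguous: prompt "What item best describes..."
    return top_intent

# Three overlapping intents, two of which match a short phrase at ~100%:
print(pick_intent({"order_item": 1.0, "report_issue": 1.0, "reset_password": 0.2}))
# -> None (disambiguation prompt), because the top two scores tie.
```

The takeaway is that raising or lowering the delta only helps when the scores actually differ; if overlapping utterances push several intents to near-identical confidence, the prompt will keep appearing regardless of the threshold.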


Does anyone have an idea how we can improve our NLU approach? Should we have one model with all the intents to make testing easier? Should we consolidate the utterances down to 20-30 per intent instead of adding most of the common search terms?


Regards,

Ayman 

3 REPLIES

Lynda1
Kilo Sage

We duplicated the OOB model and only use that one model we created. I have 103 intents and try to keep the utterances below 100 in each intent. Some intents do get close to 200 utterances; when that happens, I find utterances that can be removed.

Tricia Cornish
ServiceNow Employee

@ayman_h - I think it depends on your intents and your end-user access. I believe the recommendation is one model per entry point (if you have a single portal, use a single model). What are your 3 intents in each of the 3 models?
Without knowing your intents and matching topics, it's hard to make a suggestion.
One item to consider might be how you are using tables as vocabulary sources and entity mapping in an intent.
Another consideration, specific to "I need...": is that an intent for the catalog? There might be a better way to handle catalog requests. The recommended approach for catalog requests and KB articles is to allow fallback search activities to catch those queries rather than building topics around accessing the catalog. Are you using AI Search?
If you're using AI Search as your fallback, it might be better to let the search return the result rather than trying to build a conversation around all the utterances users might present in a topic.
A couple of ideas - if you have more specific details to share, I'll watch for updates.

ayman_h
Kilo Sage

Thanks @Tricia Cornish and @Lynda1. We converted our intents into one model, and that seems to have resolved our issue.