How Does Virtual Agent Intent Recognition Work Under the Hood?

Aaron6
Giga Expert

As a data scientist trying to optimize intent recognition for our virtual agent, I'm having trouble making informed decisions with this product. My understanding is that the following happens each time a user submits an utterance:

 

1. The utterance is run against every 'enabled' NLU model for this VA, across every 'enabled' intent

2. Each model returns its most confident intent, and the VA picks the intent with the highest confidence overall

     - A business user can put their thumb on the scales by adjusting the confidence thresholds of each model. 

     - When two intents are a close call, the user can be asked to disambiguate; this behavior is configurable.

     - If no intent is confidently matched, configured fallback options are used, such as AI Search or default topics (a rough sketch of my mental model of this flow is below)
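To make sure I'm describing steps 1-2 correctly, here is a rough Python sketch of my mental model. The model names, the per-model thresholds, and the fallback handling are all my own assumptions for illustration, not the platform's actual behavior or API.

```python
# A rough sketch of my mental model of steps 1-2 above (my assumptions,
# not the platform's actual implementation).

# Hypothetical enabled models: each fakes an (intent, confidence) prediction
# and has a per-model confidence threshold a business user could tune.
MODELS = {
    "HR Model": {"predict": lambda utterance: ("Time Entry", 0.82), "threshold": 0.70},
    "IT Model": {"predict": lambda utterance: ("Password Reset", 0.55), "threshold": 0.70},
}

def pick_intent(utterance):
    """Run the utterance through every enabled model and keep the best
    prediction that clears that model's confidence threshold."""
    best = None  # (model name, intent, confidence)
    for name, cfg in MODELS.items():
        intent, confidence = cfg["predict"](utterance)
        if confidence < cfg["threshold"]:
            continue  # below this model's threshold, so its vote is ignored
        if best is None or confidence > best[2]:
            best = (name, intent, confidence)
    # None means nothing cleared a threshold -> fall back to AI Search or a
    # default topic; close calls between models could trigger disambiguation.
    return best

print(pick_intent("Enter my time"))  # -> ('HR Model', 'Time Entry', 0.82)
```

If that picture is wrong, please correct me, since my questions below all build on it.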

 

Where I'm getting confused is... how do I optimize intent recognition? I've got some questions for anyone who's experienced with the tool:

 

1. What is the recommended way to obtain an F1 score for the entire virtual assistant mega-model (one single score encompassing multiple models)? I've added a rough sketch below of what I mean by a single score.

2. What gotchas have you encountered with this multi-model approach vs. a single classifier, if any?

3. Is it at all possible to both limit the available intents dynamically at runtime and use the predictions for decision-making? Example below, plus a rough code sketch of the idea after it:

   - User asks a question: "Enter my Time"

   - VA checks against two top-level models: "HR Question", "IT Question"

   - VA decides "HR Question"

   - VA checks against several HR models: "Time Entry", "Retirement", "Benefits"

   - VA decides "Time Entry"

   - VA checks against several "Time Entry" models: "Time Off Inquiry", "Enter my Time", "Time Dispute"

   - VA decides "Enter my Time"

   - VA responds intelligently
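For question 3, this is roughly what I'm imagining, written as a Python sketch. The routing tree, the classify() stand-in, and the route() helper are hypothetical names I made up to illustrate narrowing the candidate intents at each level based on the previous decision; I'm not claiming this is how the product works.

```python
# Hypothetical routing tree: each key is a candidate intent at that level,
# and its value is either a dict of narrower models/intents or None for a
# leaf intent the VA can act on. Made up purely for illustration.
ROUTING_TREE = {
    "HR Question": {
        "Time Entry": {
            "Time Off Inquiry": None,
            "Enter my Time": None,
            "Time Dispute": None,
        },
        "Retirement": None,
        "Benefits": None,
    },
    "IT Question": None,
}

def classify(utterance, candidates):
    """Stand-in for an NLU call that is limited to `candidates`.
    It just fakes the decisions from the worked example above."""
    scripted = ["HR Question", "Time Entry", "Enter my Time"]
    return next(c for c in scripted if c in candidates)

def route(utterance, tree):
    """Walk the tree, limiting the candidate intents at each level to the
    children of the previous decision, until a leaf intent is reached."""
    node, decision = tree, None
    while isinstance(node, dict):
        decision = classify(utterance, list(node.keys()))
        node = node[decision]
    return decision

print(route("Enter my Time", ROUTING_TREE))  # -> "Enter my Time"
```

The key bit is that each classify() call only sees the children of the previous decision, which is the "limiting intents at runtime" part of my question, and the intermediate decisions ("HR Question", "Time Entry") are what I'd like to reuse for decision-making.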
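And to clarify question 1, this is what I mean by one score: pool a labeled test set of utterances across all models, record the intent the whole VA actually picked for each one, and micro-average. The scikit-learn snippet below is just my assumption of how I'd compute it offline; I don't know whether the platform exposes anything equivalent.

```python
# Offline sketch of a single F1 score across the whole VA (my assumption of
# how to pool the results; not a platform feature I know of).
from sklearn.metrics import f1_score

# Labeled test utterances spanning intents from every model.
y_true = ["Enter my Time", "Benefits", "Password Reset", "Time Dispute"]
# Whatever intent the full multi-model VA actually picked for each utterance.
y_pred = ["Enter my Time", "Retirement", "Password Reset", "Time Dispute"]

# Micro-averaging pools every decision into one number, so frequent intents
# carry proportionally more weight than they would with macro-averaging.
overall_f1 = f1_score(y_true, y_pred, average="micro")
print(f"VA-wide micro F1: {overall_f1:.2f}")  # 0.75
```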
