
VA Expert Feedback Loop & irrelevant utterances: help needed

nttd-fcaballero
Tera Contributor

Hello everyone in the community! We need help with our NLU model. It's small, with only three intents initially. We have trained the model to detect these intents, and it does this well. We have also created a test set with many expressions that are irrelevant to the model, so that in tests the bot should fall back to the Fallback topic; however, we haven't been able to use the tools to train the bot to dismiss irrelevant expressions, because these tools don't allow us to select the model. I'm attaching some screenshots.

NLU Model.png

Our NLU Model


NLU Model 2.png

Expert feedback does not allow us to select our model
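
To make the failure mode concrete: as far as we understand it, the model always picks a top intent for any utterance, and the bot only falls back when the top confidence is below the model's threshold, so an irrelevant utterance the model is overconfident about never reaches Fallback. A generic illustration of that routing logic (not ServiceNow code; the threshold value is just an example):

# Generic sketch of threshold-based fallback routing -- NOT ServiceNow's
# actual implementation. It shows why an irrelevant utterance can land in
# a real intent: the model always has a top guess, and fallback only
# triggers when its confidence is below the threshold.

FALLBACK_THRESHOLD = 0.60  # example value; the real threshold is a model setting

def route(predictions):
    """predictions: dict mapping intent name -> confidence score."""
    top_intent, confidence = max(predictions.items(), key=lambda kv: kv[1])
    return top_intent if confidence >= FALLBACK_THRESHOLD else "Fallback"

# Overconfident on an irrelevant utterance -> wrong intent, no fallback:
print(route({"reset_password": 0.72, "order_laptop": 0.11}))  # reset_password
# Low confidence everywhere -> falls back as desired:
print(route({"reset_password": 0.31, "order_laptop": 0.28}))  # Fallback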


What are we doing wrong? What's failing? Can you help us?

Thank you in advance!


Sure thing, @fabi0caballer0!

If I understood it right, the Expert Feedback Loop is there to review the utterances used in real chat conversations and to help you capture the "problematic" ones; so if you don't see any for your model, it means the model is interacting well.

So try actually using the Virtual Agent: don't just train and test, but open the chatbot and give it some good and bad utterances. These should then show up in the Expert Feedback Loop, where you can rework existing utterances or create new ones for better performance.

(it is a long read ⬇️)
https://www.servicenow.com/docs/bundle/yokohama-intelligent-experiences/page/administer/natural-lang...

———
/* If my response wasn’t a total disaster ↙️ drop a Kudos or Accept as Solution ↘️ Cheers! */


We have already identified problematic utterances when testing the bot, so we created a test set with them. When running the tests, we observe that irrelevant utterances are matched to the wrong intents, and the test application reports these errors. We haven't found a way to mark these utterances as irrelevant for the model, which is why we are seeking help.
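
For reference, this is roughly how we triage the exported report outside the testing UI. A rough sketch that assumes a CSV export with utterance, expected, predicted and confidence columns (those column names are our assumption, not the actual export schema):

import csv

def misrouted_irrelevant(report_path):
    """Rows where an utterance expected to be irrelevant
    (expected == "Fallback") was matched to a real intent instead."""
    with open(report_path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if row["expected"] == "Fallback"
                and row["predicted"] != "Fallback"]

for row in misrouted_irrelevant("nlu_test_report.csv"):
    print(f'{row["utterance"]!r} -> {row["predicted"]} ({row["confidence"]})')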

Here is a report from the test application

NLU Model 3.png

In any case, we greatly appreciate your support and guidance. Thanks @GlideFather 

The Expert Feedback Loop (EFL) gathers user phrases from the chat logs. Phrases used in tests never reach the EFL.

The EFL runs once every 30 days. I've made it a habit to work through the EFL once a month.

When testing, you need to manually correct the intent assignments, publish, and test again.
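
If you want to see what kind of phrases will land in the EFL before the 30 days are up, you can pull recent chat-log utterances yourself with the standard Table API. A sketch; the Table API endpoint and the sysparm_* parameters are standard, but the table and field names below are assumptions, so check where your instance stores VA conversation messages:

import requests

INSTANCE = "https://YOUR_INSTANCE.service-now.com"
TABLE = "sys_cs_message"  # assumed table for Virtual Agent messages

resp = requests.get(
    f"{INSTANCE}/api/now/table/{TABLE}",
    params={
        # records created in the last 30 days, matching the EFL cadence
        "sysparm_query": "sys_created_on>=javascript:gs.daysAgoStart(30)",
        "sysparm_fields": "sys_created_on,text",  # 'text' is an assumed field
        "sysparm_limit": "1000",
    },
    auth=("your.user", "your.password"),
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
for record in resp.json()["result"]:
    print(record["sys_created_on"], record.get("text", ""))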

@Lynda1,

thank you for your clarifications. So the EFL and IE are only available when the bot is in production?

Can training the NLU model on expressions that are not relevant to it only be done in production?


Is this correct?


Thanks to all!

@fabi0caballer0 no, it's technically available in all environments; however, it makes less sense there, because in TEST or DEV the VA isn't used like the real thing: you don't simulate the whole process, you leave conversations incomplete, and so on.


The EFL could be relevant in TEST if many testers took it seriously and used the VA frequently, across all scenarios, for some time. More than just a simple check by one person.


Also, the NLU model must be trained and tested in order to be published, so this has to be done several times in each environment, and even a slight change means retraining and retesting before you can republish 😛

———
/* If my response wasn’t a total disaster ↙️ drop a Kudos or Accept as Solution ↘️ Cheers! */