Eliza Gee
Tera Contributor
Hello Community! I am here to share some of the lessons I’ve learned while implementing Virtual Agent and NLU. I previously wrote an article focused on Virtual Agent specifically, so this one will cover the NLU, or Natural Language Understanding, side of things.
 
I’m going to write this based on the assumption that folks reading this have some familiarity with basic Virtual Agent and NLU concepts, so if that’s not you I’d recommend starting in the docs here and coming back later. Please also note that I’m basing this on what is available in Paris.
 
******************************************************************************************************
 
Utterance Entry
TLDR: you can upload a spreadsheet of utterances but it only fully works for Global intents/topics
 
The most obvious ways to enter utterances are manually in Studio or manually in Designer on the Virtual Agent topic associated with your intent. If you want to upload a spreadsheet of utterances to save time, it is possible - but because the utterances table only accepts uploads within the Global scope, it’s best used only when the related intent/topic is also in the Global scope.
 
If your intent and topic are in a different scope, such as ITSM, you can still upload utterances to the Global table and use them successfully with your topic. HOWEVER:
  • You won’t be able to connect entities to these utterances, and
  • You won’t be able to edit these utterances in Studio or in Designer - but you can edit them directly on the utterances table
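If you’re preparing a spreadsheet for upload, a quick cleanup pass (trim whitespace, drop blanks, remove duplicates) can save you a failed or messy import. Here’s a minimal sketch - the cleanUtterances helper and the sample utterances are mine for illustration, not part of the platform:

```javascript
// Hypothetical helper: normalize a list of utterances before
// putting them into an import spreadsheet. Trims whitespace,
// drops empty rows, and removes case-insensitive duplicates.
function cleanUtterances(rows) {
  var seen = {};
  var out = [];
  for (var i = 0; i < rows.length; i++) {
    var u = String(rows[i]).trim();
    if (!u) continue; // skip blank rows
    var key = u.toLowerCase();
    if (seen[key]) continue; // skip duplicates that differ only by case
    seen[key] = true;
    out.push(u);
  }
  return out;
}

// Example usage with made-up utterances:
var cleaned = cleanUtterances([
  ' reset my password ',
  'Reset my password',
  '',
  'I forgot my password'
]);
// cleaned → ['reset my password', 'I forgot my password']
```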
******************************************************************************************************
 
Utterance reporting and tracking
TLDR: the open_nlu_predict_log table is your friend
 
A lot of clients will want to know what users are entering when they first begin their interaction with Virtual Agent. This is extremely useful information that allows administrators to identify potential new utterances to add to the intent, decommission utterances that aren’t ever used, and note which utterances are most often used, among other things.
 
Each of these user entries is captured on the open_nlu_predict_log table. Not only that, but this table also captures the level of confidence your model has in the prediction(s) it made. You can use this information to get a picture of how your model is performing as a whole.
 
There isn’t a filter navigator module for this table that I’m aware of, so get there by entering open_nlu_predict_log.list in the filter navigator.
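Once you have rows from open_nlu_predict_log (for example, exported to CSV and parsed), you can summarize average prediction confidence per intent to spot weak areas of your model. A sketch, assuming each row has an intent name and a numeric confidence - the property names here are my assumptions, so check the actual column names on the table in your instance:

```javascript
// Sketch: average prediction confidence per intent, given rows
// taken from open_nlu_predict_log. The 'intent' and 'confidence'
// property names are assumptions for illustration -- verify the
// real column names on the table in your instance.
function averageConfidenceByIntent(rows) {
  var totals = {};
  for (var i = 0; i < rows.length; i++) {
    var intent = rows[i].intent;
    var conf = Number(rows[i].confidence);
    if (!totals[intent]) totals[intent] = { sum: 0, count: 0 };
    totals[intent].sum += conf;
    totals[intent].count++;
  }
  var result = {};
  for (var name in totals) {
    result[name] = totals[name].sum / totals[name].count;
  }
  return result;
}

// Example with made-up log rows:
var averages = averageConfidenceByIntent([
  { intent: 'Reset Password', confidence: 0.92 },
  { intent: 'Reset Password', confidence: 0.78 },
  { intent: 'Order Laptop', confidence: 0.64 }
]);
// averages['Reset Password'] ≈ 0.85, averages['Order Laptop'] ≈ 0.64
```

Low average confidence on an intent is a good signal that it needs more (or better) utterances.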
 
 
 
******************************************************************************************************
 
Be patient
TLDR: Grab some water between publishing your model and testing it
 
Depending on the size of your model and a few other factors, sometimes your changes won’t be available or fully functional immediately after you publish. To avoid a potential testing headache, I recommend republishing your associated VA topics every time you retrain and republish your NLU model, then waiting a few minutes before getting too deep into testing. Grab some water or coffee and daydream about how awesome what you just built will be!
 
******************************************************************************************************
 
Thanks for reading! If you’ve encountered anything counter to these experiences or have anything you want to add, please let me know in the comments.
Last updated: 03-05-2021 02:08 PM