Initial utterance from Teams

Lynda1
Kilo Sage

I am sure the answer is no, but I am asking anyway, hoping I am wrong!

We have Virtual Agent integrated in Teams; users have to type a word to wake the VA up and start a conversation.

Are the "wake up" words stored anywhere?

1 ACCEPTED SOLUTION

Victor Chen
ServiceNow Employee

You can start typing anything to kick off the Virtual Agent in Teams. 

There are certain "commands" for more actions, such as 'help', 'logout', and 'hi'. Those are shown when you first start.


9 REPLIES


I ask this question because I am noticing users enter words that I have answers for in the bot; however, they type those words as the wake-up message. I found an example:

 

[screenshot of the Teams transcript]

The above is from Teams. I have an answer to "How to order headphones", but the user typed that as the wake-up message and not after the greeting.

I am wondering whether the words "How to order headphones" are stored somewhere. I figured they have to be, since they appear in the transcript.

Here is another example of why I ask:

[12:04] Carina Espinoza: WEB0119127
[12:04] Virtual Agent: Hello, Carina.

I have a topic that provides the status of tickets that start with WEB. Since "WEB0119127" only woke the VA up, the user then picked the wrong choice from the dropdown and ended up in live chat with a question the VA could have answered, if only the wake-up words were stored somewhere.

Maybe an enhancement?

Chris D
Kilo Sage

Besides the "hi" command (which I believe just triggers the greetings topic), wake up words aren't a thing. Whatever you type into VA in Teams is the utterance. So if it matches a topic(s), it should present options/auto-start that topic as you'd expect from keywords/NLU.

The examples you provided seem to just be triggering the greetings topic so something does not seem right there. If you find that user's Interaction record, you should be able to look at the Interaction Log related list and see the Utterance field with that user's initial input provided in the first log entry. If you're using NLU, you can also check the open_nlu_predict_intent_feedback table to see what NLU predicted (or not) with that user utterance.
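To illustrate, the lookup Chris describes could be sketched as a ServiceNow background script. This runs only inside an instance, so treat it as illustrative; the `interaction_log` table, its `interaction` and `utterance` fields, and the `opened_for` reference are my assumptions based on the related list and field Chris mentions, and the user name is hypothetical.

```javascript
// Sketch: find the initial utterance a user typed to "wake up"
// Virtual Agent in Teams (ServiceNow background script).

// 1. Find the user's most recent interaction record.
var interaction = new GlideRecord('interaction');
interaction.addQuery('opened_for.user_name', 'carina.espinoza'); // hypothetical user
interaction.orderByDesc('sys_created_on');
interaction.setLimit(1);
interaction.query();

if (interaction.next()) {
    // 2. Read the first Interaction Log entry for that interaction -
    //    per Chris, its Utterance field holds the user's initial input.
    var log = new GlideRecord('interaction_log');
    log.addQuery('interaction', interaction.getUniqueValue());
    log.orderBy('sys_created_on');
    log.setLimit(1);
    log.query();
    if (log.next()) {
        gs.info('Initial utterance: ' + log.getValue('utterance'));
    }
}
```

If you are using NLU, a similar query against the `open_nlu_predict_intent_feedback` table (filtered by the same utterance text) would show what intent, if any, NLU predicted for it.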