08-06-2025 06:08 PM - edited 08-06-2025 06:15 PM
Hi everyone,
I’m reaching out for some help as I’ve been stuck on an issue that’s keeping me up at night.
I’m new to Virtual Agent, and my company recently purchased Now Assist. I started building an LLM topic with some straightforward instructions:
The flow should begin by asking, “How can I help?”
Then, it should collect more information, check for matching KB articles, and—if the user still wants to proceed—create an incident using the short description and additional details provided.
The problem is that when my topic runs, the “How can I help?” step keeps looping. Instead of simply asking the question, it returns a greeting and displays the user’s full name, company, and role—even though I only want it to say, “How can I help?”
Has anyone seen this before or have suggestions on how to get this working as intended? Any ideas or direction would be really appreciated!
Thanks so much for your time and for reading this.
Below is a sample of this loop behavior. In my test screen, I can see that the focus is still on the first node and has not moved to anything below it, which is quite frustrating.
----------------------------------------------------------------------------------
Hi Daniel, how can I help you today? I have you as Daniel Strowman from Miami, what seems to be the issue?
>>laptop broken
You mentioned your laptop is broken. Can you tell me more about what's happening with it? I have you as I have you as Daniel Strowman from Miami, what seems to be the issue with your laptop?
>>its not working
You mentioned your laptop is broken and not working. Can you tell me more about what's happening with it? I have you as Daniel Strowman from Miami, what seems to be the issue with your laptop?
>>its just not working can I get some help
You mentioned your laptop is not working. Can you please provide more information about what's happening with it? For example, are you getting any error messages or have you noticed anything unusual?
>>nothing as such
You've mentioned that there's nothing specific happening with your laptop. Can you please provide more information about what's happening with it? For example, are you getting any error messages or have you noticed anything unusual?
>>no!
Labels: AI Search, LLM, topic, Virtual Agent
2 weeks ago - last edited 2 weeks ago
Hi @Sam10
I can help. The short answer is that the LLM is executing "disambiguation" in the synthesized response - asking clarifying questions in order to surface the right content to you.
It has not yet determined a source to answer your question, whether that is a KB Article, a catalog item, or an automation topic that you created in the VA Designer.
You're not directly asking to connect to a human, but I can see that's the intent - you may want to ensure that your Live Agent topic is published. We do have sensitivity filters, but it seems that's not yet configured for you - you may want to set that to the side for now.
You do not have to create a topic in the VA Designer in order to work with the Virtual Agent. If that is what you did, it seems you are building a topic that duplicates what the Virtual Agent is already designed to do; I'm guessing that based on your reference to the "LLM Topic Node".
Since you're new to the Virtual Agent, let me give you some context:
The VA conversation structure works like this: you have a "setup topic" (greeting and closing), then you can have LLM VA topics for true automation (reset my password, connect to Live Agent), and you also have content (Knowledge, Catalog). All of that is "discovered" by a skills discovery function, powered by AI Search.
If your user asks something outside your delivered content, the Assistant can direct the user to "Fallback": create a ticket, or end the conversation. There might be content in your KB that addresses your laptop issue, so the VA is trying. If you ask "When was the Declaration of Independence signed?", that should return zero results and trigger your fallback option (you can configure web search to answer that question, but that's another topic to set aside for now).
The key to the new LLM VA experience is the use of AI Search for everything this "Assistant" can do.
This search experience is controlled by the Search Profile in the Assistant configuration (Information Sources). If AI Search is new to you, there is training on SNU and a few academy recordings on AI Search. The search profile has to be published.
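The routing described above can be modeled in a few lines. This is a conceptual sketch only, not ServiceNow's implementation: AI Search checks the discovered sources (topics, Knowledge, Catalog), and anything with zero matches falls through to Fallback. All names and keyword lists here are made up for illustration.

```python
# Conceptual sketch of skills discovery + fallback -- NOT ServiceNow's
# actual implementation. Each source maps to (title, keywords) pairs;
# the first keyword overlap with the utterance "wins", else Fallback.
def route_utterance(utterance, sources):
    """sources: dict mapping source name -> list of (title, keywords)."""
    words = set(utterance.lower().split())
    for source_name, items in sources.items():
        for title, keywords in items:
            if words & set(keywords):
                return (source_name, title)
    return ("fallback", "create a ticket or end the conversation")

sources = {
    "topic":   [("Reset my password", ["password", "reset"]),
                ("Connect to Live Agent", ["human", "agent"])],
    "kb":      [("Laptop troubleshooting", ["laptop", "screen", "battery"])],
    "catalog": [("Request a laptop", ["request", "hardware"])],
}

print(route_utterance("my laptop is broken", sources))
# -> ('kb', 'Laptop troubleshooting')
print(route_utterance("when was the declaration signed", sources))
# -> ('fallback', 'create a ticket or end the conversation')
```

The Declaration of Independence example from above lands in fallback because no source keyword matches, which is exactly the zero-results path the Assistant handles for you.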
Back to your example:
The VA has likely retrieved some laptop troubleshooting information; however, because your replies aren't specific enough, it keeps probing for details. Try responding with some specifics and see whether you get a different response.
To see the above in action:
1. If you have created a VA topic, I would deactivate it so the LLM can't find it.
2. Verify your VA search profile has content (KB and Catalog) that is searchable.
3. Once you've verified your VA search profile is published, pick a KB article and see whether the VA can find it when you ask a question that article would answer. (Sometimes I just ask the title of the KB article.)
4. If it finds the KB article, ask a follow-up question or two; you can ask open-ended questions.
5. Then pick a catalog item that is available from your search profile. (Sometimes I just start with "Request a (catalog item name)".)
6. To see whether your Live Agent topic is published and available to your assistant, type "I want to speak to a human" or "live agent"; that's a delivered topic that is usually available.
7. You can also test the search results in AI Search Preview.
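As a quick sanity check outside the VA, the delivered REST Table API can confirm the knowledge base actually has published, searchable articles. A minimal sketch: the Table API endpoint, the `kb_knowledge` table, and `workflow_state=published` are standard, but the instance name is a placeholder and you should verify query values against your own instance.

```python
# Build a ServiceNow REST Table API GET URL for a quick sanity check on
# KB content. Issue the resulting URL with any HTTP client (authenticated)
# to see which published articles a query would surface.
from urllib.parse import urlencode

def table_query_url(instance, table, query, fields, limit=5):
    """Compose /api/now/table/{table} with the standard sysparm_* params."""
    params = urlencode({
        "sysparm_query": query,
        "sysparm_fields": fields,
        "sysparm_limit": str(limit),
    })
    return f"https://{instance}.service-now.com/api/now/table/{table}?{params}"

# Example: list published KB articles mentioning "laptop".
print(table_query_url(
    "your-instance",   # placeholder instance name
    "kb_knowledge",    # delivered knowledge table
    "workflow_state=published^short_descriptionLIKElaptop",
    "number,short_description",
))
```

If that query returns no records, the VA has nothing to surface for "laptop" questions, and you would expect the fallback path instead of an article.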
Hopefully that clarifies how topics work within the LLM VA. "PW Reset" might be the only VA topic (and it's truly for automation) - all other conversational back and forth is because of AI Search.
Hope that helps!
Tricia
08-11-2025 05:17 AM
Hi guys, bumping this up.