
Broken or irrelevant links in Now Assist Virtual Agent responses for any prompt

sridevirengasam
Tera Contributor

Hi,

 

We are working on deploying Now Assist with Virtual Agent for our customer. There is a major issue where the links showing up in the Virtual Agent response for any prompt are irrelevant to the prompt.

 

For example, if we prompt for 'password reset', the response returns links to knowledge articles that are not relevant to password reset, even though relevant articles are present in the customer's knowledge base.

 

This behaviour is consistent across almost every prompt.
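(As a sanity check, a background-script sketch like the one below, assuming the out-of-box kb_knowledge table and using 'password reset' purely as an example keyword, is one way to confirm that matching published articles really exist; it is only a rough visibility check, not how Now Assist itself retrieves articles.)

// Background-script sketch: list published knowledge articles matching the keyword,
// to confirm relevant content exists before suspecting retrieval.
var keyword = 'password reset'; // example keyword only

var kb = new GlideRecord('kb_knowledge');
kb.addQuery('workflow_state', 'published');            // only live articles
kb.addQuery('short_description', 'CONTAINS', keyword); // simple keyword match
kb.query();

while (kb.next()) {
    gs.info(kb.getValue('number') + ' - ' + kb.getValue('short_description'));
}

If a script like this returns the expected articles but Virtual Agent still links unrelated ones, the gap appears to be in the retrieval/LLM layer rather than in the knowledge base content itself.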

 

Has anyone faced this issue?

 

We have raised a case with support, but there is no fix yet.

 

Regards,

Sridevi


5 REPLIES

Neel Patel
Tera Guru

If the links are not part of the content, it is definitely a product issue that only ServiceNow can resolve.

We went through the same experience this year. With Now Assist, a lot of the behaviour is handled by the LLM and/or ServiceNow logic, with very little control left to admins/developers.

rpriyadarshy
Giga Guru

@sridevirengasam a few pointers:

 

Why am I getting a bad answer from Now Assist?

 

  • All LLMs are known to carry a risk of producing hallucinations.
  • Customers are encouraged to participate in AI data sharing so that our models can be improved to reduce the chance of hallucinations.
  • Users can provide feedback using the "Was this suggestion helpful?" feedback mechanism on the result card.

Try the same prompt with other models and see how the responses compare.

 

Regards

RP

@rpriyadarshy 

 

One of our architects suggested changing the LLM provider. We couldn't replicate the issue with the Google LLM; it seemed to bring the right links in the responses, and the responses were also quite limited compared to Azure OpenAI. The customer is now worried about data control and data sharing with the Google LLM, and is looking for a ServiceNow data regulation legal document on this point before taking a decision.

Hi,

 

You can find the required details in the content linked below.

 

https://www.servicenow.com/community/now-assist-articles/model-provider-flexibility-servicenow-integ...

 

Here's a breakdown of what you'll find in each of our key resources:

  • This is your foundational resource. It provides a high-level yet comprehensive understanding of the architecture behind our AI products, including Now Assist. You'll gain insight into the terms and conditions that apply to their usage, and most importantly, a clear explanation of our overall approach to data handling. This article is your starting point for understanding where your data goes and how it's managed within the Now Assist ecosystem.
  • This FAQ addresses common questions specifically about the processing of your data within our Advanced AI and Data Products. It clarifies aspects such as how data is processed and what security controls and processes are in place to maintain data integrity throughout its lifecycle. This article contains detailed architecture diagrams, data flows, and network infrastructure information.
  • This article clarifies essential questions regarding data handling and AI model development. It outlines how ServiceNow offers optional data sharing programs for model improvement, emphasizing user control through opt-out options and ServiceNow's model development process.
  • This FAQ directly addresses the critical topic of our Responsible AI practices here at ServiceNow. Here, you'll find information on the methodologies and principles we employ to avoid and mitigate bias during the training of our Large Language Models (LLMs) and other AI systems. It outlines our commitment to ethical AI development and the steps we take to ensure fairness and accuracy.

 
Regards
RP