Introduction
Welcome to the AI Center of Excellence team at ServiceNow! We are a team of dedicated AI Strategists and Architects focused on advancing the implementation and adoption of AI solutions for our customers. Through numerous hands-on customer engagements, we have gathered valuable insights and guidance that we are excited to share with the ServiceNow community. This article is the first in a series outlining a subset of common customer problem areas and providing solution considerations based on our experiences. Here, we focus on Conversational Catalog Items and LLM Topics with Now Assist in Virtual Agent. It's not a full guide, but a 'scrappy' small slice of real talk: what works, what doesn't, and what to watch out for.
Common Implementation Challenges
Implementing AI solutions is a complex journey, and many customers encounter similar hurdles along the way. Here are three broad categories of common implementation challenges that we frequently see:
- Understanding Product 'Configurable Knobs': Customers often struggle to fully grasp what ServiceNow products can and cannot do. There are many configurable features and capabilities ('knobs') that customers simply don't know exist.
- Architecture Considerations: Many customers have been on the platform for years and have accumulated technical debt across hundreds of catalog items. They ask for guidance to navigate their existing configurations and make informed decisions on the best approach: for example, when to use a conversational catalog item versus an LLM topic versus keeping it as a form-based catalog item on the platform/portal.
- General Understanding of AI: AI is evolving rapidly, and it can be challenging to keep up. Many of us are accustomed to deterministic workflows and find it difficult to adapt to the probabilistic nature of AI.
Solution Considerations
In this section, we will outline a subset of solution considerations for each of the challenges described above.
1. Understanding Product 'Configurable Knobs'
a. Intent understanding and disambiguation
Challenge: Virtual agent struggles to understand users' intent, leading to incorrect catalog items or LLM topics being displayed.
Solution Considerations:
- Item and LLM Topic Descriptions: The most significant factor in improving intent understanding is the clarity of item and LLM topic names and descriptions. Engage business users or champions to refine these descriptions, as IT may introduce technical jargon that confuses the LLM.
- Example Prompts: Including a few example prompts in descriptions can help the LLM understand when to trigger specific topics. Be careful NOT to enumerate all possible intents as you would in the NLU world prior to GenAI.
- Pre-UAT Planning: Ensure that descriptions are planned and refined before user acceptance testing (UAT) to prevent issues from arising later.
- Choice of LLM: Starting with Xanadu Patch 7, you have a choice of Now LLM or Azure OpenAI. With some of my customers, I have found Azure OpenAI to perform better at intent understanding and disambiguation, but you should run an apples-to-apples comparison in your environment and choose the LLM that works best for you.
Examples:
1. A customer had an LLM topic for 'employee lookup' with a vague description like 'used for employee lookup'. In real life, users would ask the virtual agent, "Who is James Henning?" or "Whom does James report to?" The LLM struggled to match these queries.
A better description would be: 'This topic is used to look up employees or users and provide details such as their name, role, manager, department, location, etc. Sample prompts include: Who is James Henning? What is James' cost-center? Whom does James report to?'
2. A customer had a catalog item for 'Visitor Access'. The catalog description was 'Fill this form to request physical GRE access.' In real life, users would ask the virtual agent, "I want to give building access to an interview candidate." The LLM may struggle to match this query based on the item's description, and it may not know what GRE is.
A better description would be: 'Use this catalog item to request access for visitors, clients, interview candidates, or partners to any of our physical buildings or locations such as GRE.'
3. A customer had a catalog item for 'Add user to DL'. The catalog description was 'DL requests.' In real life, users would ask the virtual agent, "I want to add Sheela to a group." The LLM may struggle to match this query based on the item's description. Additionally, the LLM would not necessarily understand that DL = Distribution List; in this example, it confused it with Data Lake.
A better description would be: 'Use this catalog item to add users to a Distribution List (DL) or a group.'
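If you need to apply this kind of description cleanup across many items, a short background script can help. Here is a minimal sketch, assuming the hypothetical item name from example 3 and that the short description is the field you maintain; sc_cat_item is the standard catalog item table. Test in a sub-production instance first.

```javascript
// Minimal background-script sketch: apply the clearer wording from
// example 3 above to a catalog item's short description.
// The item name 'Add user to DL' is hypothetical.
var item = new GlideRecord('sc_cat_item');
item.addQuery('name', 'Add user to DL');
item.query();
if (item.next()) {
    item.short_description = 'Use this catalog item to add users to a Distribution List (DL) or a group.';
    item.update();
    gs.info('Updated description for: ' + item.name);
}
```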
b. Mid-conversation switching
Challenge: The Virtual Agent is not smart enough to switch context and transition between different requests within a conversation. For example, the Virtual Agent is helping you fill out a new laptop request conversationally, and you ask it a quick question about a mobile phone for the same new hire; this confuses the LLM.
Solution: Mid-topic switching lets you switch between requests using plain language whenever new queries are made within the same Virtual Agent conversation. Two system properties enable mid-topic switching (see the sketch after this list):
- com.glide.cs.gen_ai.enable_mid_topic_ai_search (requires the 'maint' role, so open a HI support case to have it changed)
- sn_nowassist_va.enable_mid_topic_ai_search_catalog_result
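As a quick sanity check, you can read the current values of both properties from a background script. This is a read-only sketch; as noted above, the 'maint'-protected property must be changed through a HI support case.

```javascript
// Read-only sketch: check whether the mid-topic switching properties are set.
// gs.getProperty returns the second argument when the property is unset.
gs.info('Mid-topic AI search: ' +
    gs.getProperty('com.glide.cs.gen_ai.enable_mid_topic_ai_search', 'not set'));
gs.info('Mid-topic catalog results: ' +
    gs.getProperty('sn_nowassist_va.enable_mid_topic_ai_search_catalog_result', 'not set'));
```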
c. Channels (Email / Phone vs Virtual Agent on portal, MS Teams, or Slack)
Challenge:
- How do we get users from traditional email/phone to Virtual Agent?
- Should users access the Virtual Agent on the portal or through existing enterprise channels like MS Teams or Slack?
Solution Considerations:
- Some of my customers have added an automated IVR and email response encouraging users to use their virtual agent. This requires organizational change management (OCM), and the transition happens over time as users start to enjoy the Virtual Agent experience. Stay tuned for a future article on OCM.
- When deciding between Virtual Agent options (portal, MS Teams, Slack), consider the trade-off between meeting users where they are (e.g., MS Teams or Slack) and the user experience. It's important to note that the experience in MS Teams or Slack may not yet match the Virtual Agent on the portal. However, bringing parity among these channels is on our H2 2025 roadmap (safe harbor applies to forward-looking statements).
Recommendation: Instead of making this an either/or decision, offer the Virtual Agent on both the portal and MS Teams or Slack. This allows users to choose the experience they prefer, enhancing overall satisfaction.
2. Architecture Considerations
a. Conversational Catalog Item vs LLM Topic vs Catalog Item Form
Challenge: Customers often have hundreds of catalog items, many of which may not be conversational. How should they plan their journey and decide when to use a catalog item, an LLM topic, or leave it as a form-based catalog item?
Solution Considerations: First, I encourage you to read this excellent guidance on how to make catalog items conversational. That may seem like a lot of effort, and you may be wondering whether you should just build an LLM topic instead. It's important to understand the trade-offs: making existing catalog items conversational reduces ongoing maintenance, since you only need to maintain the catalog item, while LLM topics offer more control to enhance the user experience.
Recommendation: Adopt a use case-based strategy:
- Most Frequent/High Impact (Top 2-3): Use LLM topics for the most frequent, highest-impact catalog items to provide an enhanced user experience (see the sketch after this list for one way to identify them).
- Vast Majority: Make the majority of catalog items conversational to streamline maintenance and improve usability. Remember this is a journey. You don't have to make all catalog items conversational on day 1. Start with the most impactful ones and incrementally build over time.
- Complex Forms and Business Logic: Keep complex forms and business logic as form-based catalog items. It is not necessary to make every catalog item conversational; you can simply provide a link to the catalog form.
By following this strategy, you can effectively balance user experience and maintenance efforts.
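To find your 'top 2-3' candidates for LLM topics, request volume is a reasonable first proxy for impact. Below is a minimal background-script sketch that ranks catalog items by requested-item volume; the 90-day window is an assumption you should adjust to your own data.

```javascript
// Sketch: rank catalog items by request volume over the last 90 days
// as input to the use case-based strategy above.
var agg = new GlideAggregate('sc_req_item');
agg.addQuery('sys_created_on', '>=', gs.daysAgoStart(90));
agg.addAggregate('COUNT');
agg.groupBy('cat_item');
agg.query();

var counts = [];
while (agg.next()) {
    counts.push({
        item: agg.cat_item.getDisplayValue(),
        total: parseInt(agg.getAggregate('COUNT'), 10)
    });
}

// Sort descending and print the top 10 candidates.
counts.sort(function (a, b) { return b.total - a.total; });
counts.slice(0, 10).forEach(function (c) {
    gs.info(c.item + ': ' + c.total);
});
```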
b. Catalog Categorization and Structure
Challenge: Many customers have built exhaustive catalog structures through categorizations. For example, you may have a category for hardware requests with hundreds of different catalog items for each laptop, phone, or other hardware requests. Should we give up this structure that users are accustomed to?
Solution Considerations and Recommendation:
Prioritizing user experience is key, as user preferences can vary. If the catalog is well-organized, let it co-exist with conversational elements. A use-case based approach is recommended.
Example: For one of my customers with hundreds of different catalog items for each laptop type, we retained the existing structure but also built a new LLM topic covering all laptops, which offered eligible laptop choices as a drop-down within a single conversation. You do not need to make all of those hundreds of catalog items conversational; just create one LLM topic for the conversational experience and let it co-exist with your well-organized catalog structure.
c. General leading practices
This section outlines some general leading practices for making catalog items conversational.
- Use Conversational Labels: Rephrase questions using conversational labels to enhance clarity and user understanding. If a variable has a conversational label, the LLM will display that label consistently instead of creating its own.
- UI Policies: Set attributes directly on questions instead of using OnLoad() UI policies to make them mandatory, visible, or read-only (see the sketch after this list). Avoid OnLoad() conditions for uninitialized variables by unchecking the OnLoad box.
- Minimize Variables: Keep the number of variables to a minimum to streamline conversation flow and reduce complexity.
- Turn Off Conversational Mode: For form requests, choose the "Make non-conversational in VA" option to simplify interactions.
- Ensure Clear Context: Use clear names, labels, and tooltips to provide context for items and improve user experience. Spell out the acronyms when possible.
- Use Standard Variable Types: Stick to standard variable types such as Multiple Choice, Select Box, and Single Line Text to maintain consistency and simplicity.
- Limit Scripting: Reduce client-side scripting and use the 'Validation Regex' field for validation to ensure reliability and ease of maintenance.
- Simplify Dependencies: Streamline (or reduce) variable relationships to improve conversation flow and reduce potential issues.
- Test and Improve: Continuously evaluate items in the conversational interface and enhance them based on user feedback. There is always some level of trial and error.
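To illustrate the UI Policies point above, here is a minimal sketch that sets 'mandatory' directly on a catalog variable instead of through an OnLoad() UI policy. The item and variable names are hypothetical; item_option_new is the standard table for catalog variables.

```javascript
// Sketch: set an attribute directly on a catalog variable rather than
// via an OnLoad() UI policy. Item and variable names are hypothetical.
var v = new GlideRecord('item_option_new');
v.addQuery('cat_item.name', 'Visitor Access');
v.addQuery('name', 'visitor_name');
v.query();
if (v.next()) {
    v.mandatory = true; // the attribute now travels with the variable itself
    v.update();
}
```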
3. General Understanding of AI
a. Myth: LLM trains itself with every interaction
Myth: Users believe that providing thumbs-up or thumbs-down feedback at every interaction will help the LLM improve continuously, yet they don't see immediate improvements.
Reality: Now LLM is instruction fine-tuned and cannot train itself on the fly, even if you share your data with ServiceNow. Retraining requires significantly more effort and resources, such as GPU capacity for training.
b. Level set expectation: Evaluating LLM-based apps
Many of us have been navigating the probabilistic world of AI for a while now. We've experienced tools like ChatGPT, Claude, or Gemini in our personal lives, where responses can vary each time. However, it's surprising how often customers expect an unreasonable degree of consistency from these LLMs in business settings. It's important to prioritize value and progress over perfection. While reducing hallucinations is crucial, there needs to be a fundamental change in how we evaluate LLM-based apps.
Traditional deterministic software testing operates on a pass or fail basis. This approach doesn't align well with the nature of LLMs.
LLM-based apps should be tested for relevance in solving the user's objective, often measured by a percentage. This shift in evaluation criteria acknowledges the probabilistic nature of AI and focuses on the practical effectiveness of the solutions provided.
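To make this concrete, here is a minimal sketch of a relevance-rate evaluation using the sample prompts from earlier in this article. The 'relevant' verdicts are hypothetical judge labels; in practice you would run each prompt through Virtual Agent and record whether the response met the user's objective.

```javascript
// Sketch: evaluate an LLM-based app by relevance rate rather than pass/fail.
// The verdicts below are hypothetical human-judge labels.
var results = [
    { prompt: 'Who is James Henning?', relevant: true },
    { prompt: 'I want to give building access to an interview candidate', relevant: true },
    { prompt: 'I want to add Sheela to a group', relevant: false }
];
var relevantCount = results.filter(function (r) { return r.relevant; }).length;
gs.info('Relevance rate: ' + Math.round(100 * relevantCount / results.length) + '%');
```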
Conclusion
I hope this article provided you with practical implementation guidance. This is by no means comprehensive coverage of all implementation aspects, but rather a subset of learnings and insights from our customer engagements. If you have more questions, please feel free to comment, and I will answer them and update this article as needed. If you found this helpful, please share your feedback. I plan to create more articles, with the next one focusing on AI Search with Now Assist in Virtual Agent. Stay tuned!
PS: Views are my own, and do not represent my team, employer, partners, or customers.
Comments
Thank you! Very helpful article, which I've bookmarked and subscribed to. Would love to see a follow-on article going into detail on these points you made, with examples:
- Use Conversational Labels: Rephrase questions using conversational labels to enhance clarity and user understanding. If a variable has a conversational label, LLM will display that label consistently as opposed to creating its own.
- UI Policies: Set attributes directly on questions instead of using OnLoad() UI policies to make them mandatory, visible, or read-only. Avoid OnLoad() conditions for uninitialized variables by unchecking the OnLoad box.
- Minimize Variables: Keep the number of variables to a minimum to streamline conversation flow and reduce complexity.
- Turn Off Conversational Mode: For form requests, choose the "Make non-conversational in VA" option to simplify interactions.
@Ritesh Shah AI Please confirm whether this is similar to 'Mid-topic switching during Now Assist in Virtual Agent conversations', which suggests that one of the two conversations has to be chosen to continue, or whether the solution below is different:
Solution: Mid-topic switching easily lets you switch between requests, using plain language whenever new queries are made in the same Virtual Agent conversation. There are 2 system properties that enable mid-topic switching:
- com.glide.cs.gen_ai.enable_mid_topic_ai_search (requires 'maint' role and so you should open a HI support case to change it.)
- sn_nowassist_va.enable_mid_topic_ai_search_catalog_result
@Ritesh Shah AI It would be great if you could provide an update on my above query, as we are trying to navigate some of these use cases.