Introduction
This article continues our series from the AI Center of Excellence Team at ServiceNow.
It focuses on solving a specific set of customer challenges related to AI Search, based on real-world project experience. Please note: this is not a comprehensive guide, but rather a focused and honest overview — highlighting what tends to work well, common pitfalls, and important aspects to consider.
Common Implementation Challenges
- Understanding existing capabilities
  Many teams struggle to get a clear picture of what AI Search actually offers, how to connect external data sources, and how to make use of AI Search Analytics.
- Desired results not being returned
  A frequent concern is that the search doesn't return the expected results, often pointing to configuration gaps or issues with content quality.
- Lack of early architectural planning
  Decisions around external source integration (e.g., connector configuration) or scaling are sometimes overlooked at the beginning of the project, leading to costly rework later on and loss of user trust.
- Testing & Validation
  Without a consistent testing process, it's difficult to detect regressions or measure improvements. Establishing a Golden Set of queries with expected results provides a clear benchmark for evaluating accuracy and maintaining trust in your search experience.
Let’s dive into each of these in more detail.
Understanding Existing Capabilities
AI Search in ServiceNow is a hybrid model, combining semantic and keyword-based search techniques.
- Semantic search understands the intent and contextual meaning behind user queries, enabling it to return relevant results even if the exact words aren't present.
- Keyword search focuses on direct matches with the user's typed input.
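To make the hybrid idea concrete, here is a minimal, platform-agnostic sketch of blending a semantic score with a keyword score into one ranking score. This is illustrative only: ServiceNow's actual ranking is internal to the platform, and the weighting (`alpha`) and the toy `token_overlap` scorer are assumptions, not platform values.

```python
from typing import Callable

def hybrid_score(
    query: str,
    doc: str,
    semantic_sim: Callable[[str, str], float],  # e.g. cosine similarity of embeddings
    keyword_sim: Callable[[str, str], float],   # e.g. BM25 or simple token overlap
    alpha: float = 0.6,                         # semantic weight -- an assumed value
) -> float:
    """Blend semantic (intent) and keyword (exact-match) relevance into one score."""
    return alpha * semantic_sim(query, doc) + (1 - alpha) * keyword_sim(query, doc)

def token_overlap(query: str, doc: str) -> float:
    """Toy keyword scorer: fraction of query tokens that appear in the document."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q) if q else 0.0
```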
Furthermore, AI Search has expanded its reach by supporting external data sources. This means organisations can now index and retrieve content not only from within ServiceNow but also from platforms such as Atlassian Confluence Cloud and Microsoft SharePoint Online.
This enhancement makes it possible to surface external information in Now Assist Q&A Genius Results, giving users a more comprehensive and integrated knowledge experience.
Now Assist Q&A Genius Results is a feature within ServiceNow’s AI Search that provides users with concise, actionable answers derived from knowledge articles. It understands user intent and delivers relevant content directly — without needing users to click through multiple results.
This improves self-service, increases knowledge utilisation, and reduces the time needed to find information.
ServiceNow also provides robust out-of-the-box analytics, including the User Search Analyzer dashboard, part of Now Assist Analytics. This dashboard helps track key metrics like:
- Total search queries
- Most common search phrases
- Queries that returned no results
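If you export raw search events for your own analysis, the same three metrics are straightforward to derive. A hedged sketch, assuming a simple log export where each event carries the query text and its result count (the field names and sample data are invented for illustration):

```python
from collections import Counter

# Assumed export format: one dict per search event.
search_log = [
    {"query": "reset vpn password", "result_count": 4},
    {"query": "reset vpn password", "result_count": 4},
    {"query": "xyz onboarding form", "result_count": 0},
]

total_queries = len(search_log)                                          # total search queries
common_phrases = Counter(e["query"] for e in search_log).most_common(5)  # most common phrases
no_results = sorted({e["query"] for e in search_log if e["result_count"] == 0})  # zero-result queries

print(total_queries, common_phrases, no_results)
```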
These insights are crucial for identifying knowledge gaps, optimising search relevance, and improving user satisfaction. Once insights are gathered, it's vital to act on them. Incorporate Continual Service Improvement (CSI) practices to evolve your AI Search setup over time:
- Use "no result" queries to identify missing or unclear content.
- Regularly update stop words and synonyms based on actual user behaviour.
- Adjust chunking or boosting rules to match changing usage patterns.
- Review promoted content quarterly to ensure it remains relevant.
By embedding CSI into your operational rhythm, your AI Search implementation becomes a living system — continuously adapting to meet evolving user needs.
Challenge: AI Search Doesn't Return Expected Results
Inconsistent Results Across Portals
If results differ between portals (e.g., Employee Center and Service Portal), ensure their search profiles are configured consistently:
- Same search sources
- Same genius results
- Unified stop words and rules
Desired Results Are Not Being Returned
In the Search Profile configuration, you can define Result Improvement Rules to promote, boost, or block certain results (see the sketch after this list). Actions include:
- Boost: Increases the relevance score of targeted results.
- Promote: Forces specific items to the top, regardless of score.
- Block: Hides unwanted results (e.g., deprecated services or outdated policies).
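The sketch below shows the effect of each action on a ranked result list. It is a conceptual illustration, not ServiceNow's implementation; the rule structure, field names, and sample records are assumptions.

```python
def apply_improvement_rules(results, block=frozenset(), boost=None, promote=frozenset()):
    """results: list of {'id', 'score'} dicts; returns the re-ranked list."""
    boost = boost or {}
    # Block: hide unwanted results entirely.
    kept = [dict(r) for r in results if r["id"] not in block]
    # Boost: scale the relevance score of targeted results.
    for r in kept:
        r["score"] *= boost.get(r["id"], 1.0)
    kept.sort(key=lambda r: r["score"], reverse=True)
    # Promote: force specific items to the top, regardless of score.
    top = [r for r in kept if r["id"] in promote]
    return top + [r for r in kept if r["id"] not in promote]

ranked = [{"id": "KB_OLD", "score": 0.9}, {"id": "KB001", "score": 0.8}, {"id": "KB002", "score": 0.6}]
print(apply_improvement_rules(ranked, block={"KB_OLD"}, boost={"KB002": 1.5}))
# -> KB002 (now 0.9) outranks KB001 (0.8); KB_OLD is hidden.
```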
Also, configure stop words — common terms like “the”, “is”, or company-specific acronyms that add no value to search logic. Removing them sharpens result accuracy and improves processing speed.
Example: If internal terms like “XYZ” are used frequently but don’t help with search intent, they should be excluded as stop words.
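A toy illustration of the effect, with "xyz" standing in for that kind of company-specific acronym (the word list is a sample, not a recommended configuration):

```python
STOP_WORDS = {"the", "is", "a", "an", "xyz"}  # sample only; tune from real query logs

def strip_stop_words(query: str) -> list[str]:
    """Keep only the tokens that carry search intent."""
    return [t for t in query.lower().split() if t not in STOP_WORDS]

print(strip_stop_words("what is the XYZ travel policy"))  # ['what', 'travel', 'policy']
```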
Another key to improving relevancy is using an effective chunking strategy.
Instead of indexing long documents, break them into smaller, semantically meaningful pieces. Choose the "Passage" strategy under the Semantic Index configuration. You can define chunking based on word count, sentence boundaries, or custom logic.
This approach improves retrieval precision and enables Genius Results to deliver more focused, relevant answers.
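In the platform the Passage strategy is configured declaratively rather than coded, but the sketch below mirrors the idea: split text on sentence boundaries, then pack sentences into chunks under a word budget (the 120-word budget is an arbitrary example).

```python
import re

def chunk_by_sentences(text: str, max_words: int = 120) -> list[str]:
    """Split on sentence boundaries, then pack sentences into word-budgeted chunks."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sentence in sentences:
        n = len(sentence.split())
        if current and count + n > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sentence)
        count += n
    if current:
        chunks.append(" ".join(current))
    return chunks
```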
Challenge: Lack of Architectural Planning
A common pitfall in AI Search projects is neglecting architecture early on. Without a solid foundation, teams often face performance issues, unnecessary costs, or security risks. Here are two major areas to focus on:
Indexing Scope & Cost Optimization
Define what data truly needs indexing:
- Apply filters (e.g., only active records, or recent updates)
- Avoid duplication between profiles
- Align search profiles to user roles and business functions
If left unchecked, indexing everything leads to bloated data volumes, longer processing times, and increased storage or licensing costs.
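In the platform these filters are expressed as conditions on the indexed source rather than as code, but as a sketch of the decision itself (field names assumed):

```python
from datetime import datetime, timedelta, timezone

def should_index(record: dict, max_age_days: int = 365) -> bool:
    """Index only active records updated within the last N days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return bool(record["active"]) and record["updated_at"] >= cutoff

print(should_index({"active": True,
                    "updated_at": datetime.now(timezone.utc) - timedelta(days=30)}))  # True
```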
External Content Connectors
If you’re integrating external sources (e.g., SharePoint, Confluence), pay special attention to:
- Security Model: AI Search respects original system permissions. Validate they align with your organisation's access policies.
- Crawling Configuration: Define start points, inclusion/exclusion filters, and update intervals to avoid unnecessary load.
- Volume Planning: One connector can index up to 1 million documents. For larger libraries, split the content across multiple connectors or apply filtering to stay efficient.
Failure to plan external connector use leads to incomplete indexing, mismatched access rights, or performance degradation — all expensive to fix later.
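A back-of-the-envelope helper for that volume planning, using the one-million-document ceiling cited above (the 80% headroom factor is a suggested assumption, not a platform rule):

```python
import math

CONNECTOR_DOC_LIMIT = 1_000_000  # per-connector ceiling cited above

def connectors_needed(document_count: int, headroom: float = 0.8) -> int:
    """How many connectors to provision, leaving headroom for content growth."""
    return math.ceil(document_count / (CONNECTOR_DOC_LIMIT * headroom))

print(connectors_needed(2_500_000))  # -> 4 connectors at 80% utilisation
```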
Data quality is key when enabling intelligent features like Now Assist Q&A Genius Results.
Activating Now Assist Q&A Genius Results can significantly enhance the user experience by delivering concise, intent-aware answers. However, rolling it out across all knowledge articles without assessing content quality can lead to vague, misleading, or outright incorrect responses — which undermines user trust and adoption.
To mitigate this, a best-practice approach is to segment your knowledge sources using Search Sources and enable Now Assist Q&A Genius Results only for high-quality, curated content. This allows you to:
- Limit Q&A functionality to thoroughly reviewed or newly written KB articles
- Exclude older or inconsistent content until it's updated or rewritten
- Test and optimise Q&A responses before deploying them more widely
In parallel, maintaining a well-structured catalog plays a vital role. Following best practices — such as using clear, consistent naming conventions and properly configured metadata — dramatically improves both the searchability and usability of your content. This ensures that users can find what they need quickly, leading to greater efficiency and a more satisfying experience.
As part of your Continual Service Improvement (CSI) efforts, consider gradually expanding your Now Assist Q&A Genius Results coverage:
- Use AI Search Analytics to identify high-impact articles for improvement
- Prioritise knowledge areas with frequent user queries and weak engagement
- Migrate more search sources into the Q&A-enabled group as their quality improves
Testing & Validation
One of the most overlooked parts of a successful AI Search rollout is validation. Without a reliable way to measure accuracy, it's hard to know whether configuration changes are helping or hurting.
That’s where a Golden Set comes in.
A Golden Set is a curated list of realistic user queries, each paired with an expected result — grounded in high-quality knowledge articles or catalog items. These expected answers should be annotated with citations or reference links, so you can ensure the response is being generated from the correct source content or item.
This testing approach gives you a fixed benchmark to run search evaluations against — especially useful when:
- Making changes to stop words, synonyms, or chunking logic
- Adding or updating your content
- Adding or removing external content connectors
- Rolling out platform updates or AI Search enhancements
By comparing actual results to your golden set expectations, you can quantify relevance, detect regressions early, and build stakeholder trust in the system’s quality over time.
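A minimal harness sketch: `run_search` is a placeholder for however you execute a query in your environment (REST call, test framework, UI automation), and the query/expected pairs below are invented examples.

```python
golden_set = [
    {"query": "reset vpn password", "expected_source": "KB0010001"},    # invented IDs
    {"query": "request a new laptop", "expected_source": "CAT0020002"},
]

def evaluate(run_search, top_k: int = 3) -> float:
    """Fraction of golden queries whose expected source appears in the top-K results."""
    hits = 0
    for case in golden_set:
        top_results = run_search(case["query"])[:top_k]  # assumed: list of source IDs
        if case["expected_source"] in top_results:
            hits += 1
    return hits / len(golden_set)
```

Re-running this after every configuration change turns the golden set into the regression signal described above.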
We suggest the following:
- Start small: ~20–50 high-impact queries based on real user behaviour
- Tie each query to specific KB or catalog sources with clear expected output
- Run tests regularly after config changes or updates
- Track trends in result accuracy and answer quality over time
This doesn’t just help with tuning — it becomes a shared quality baseline across your team, allowing faster iteration and safer experimentation.
Conclusion
AI Search in ServiceNow offers powerful capabilities — but like any enterprise feature, it requires strategic planning, ongoing optimisation, and a clear understanding of how users interact with it.
By addressing early architecture, understanding capabilities, and continuously tuning the experience using analytics, organisations can dramatically improve both search quality and user satisfaction.
PS: Views are my own, and do not represent my team, employer, partners, or customers.
Impressive article describing the mechanics behind AI Search 👏
Terms like chunking and boosting are rarely heard, and are hidden from the administrators who configure or debug AI Search.
@NataliaH - You and the entire team have done an outstanding job in bringing all of this together.
Refer to AI Search - Under the Hood at this link https://www.servicenow.com/community/now-assist-articles/now-assist-in-ai-search-nov-2024-release/ta...
Great article @NataliaH, and I'm wondering if there is more information available re: the chunking strategy and how to configure it. The information below is included, but it's not clear to me exactly where to find these settings.
Instead of indexing long documents, break them into smaller, semantically meaningful pieces. Choose the "Passage" strategy under the Semantic Index configuration
@Matt Dodd - The hyperlink below would be helpful in configuring the chunking strategy for an AI Search indexed source