on 05-29-2025 06:26 AM
Note: This article has not been drafted by AI; however, AI was leveraged for formatting purposes.
Most of us have, at some point, used AI Search via the portal, Virtual Agent, or another channel. But have you ever stopped to wonder "how could I search better?" or, going deeper, "how does this search work in the background?" and "what should I do if my queries return no results?"
Questions like these were waggling around in my mind when I started working on AI Search over the past few weeks.
So let's get a little nerdy and explore what happens when we leverage AI Search on the platform.
AI Search: Behind the scenes
The journey begins with the user query (portal, Virtual Agent, etc.). This is the initial input from the user, which could be a question, a command, or a request for information entered through a portal interface, Virtual Agent, or perhaps a Microsoft Teams chat. The clarity and specificity of this query play a crucial role in the subsequent steps.
From this initial point, the LLM typically branches into three potential paths, depending on its ability to interpret the user's intent:
1. Understand the Intent:
This is the ideal scenario, where the AI system successfully deciphers the underlying meaning and purpose of the user's query. This understanding is paramount for retrieving relevant information. Of course, the LLM might also fail to understand the user's query (sounding more human-like, isn't it?).
- Intent not understood: In some cases, the AI system might fail to accurately grasp the user's intent. This could be due to ambiguous language, complex phrasing, or the query falling outside the system's domain of knowledge. When the intent is not understood, the typical outcome is no results; the system acknowledges its inability to process the request and usually informs the user accordingly. Spell checks are performed (as far as possible) and the query is rephrased so the LLM can understand it. But don't get excited and ask random queries like "Could I swim in the Atlantic?" The LLM will never know your swimming skills.
Note: For some use cases, synonyms under AI Search can help, particularly where abbreviations and a narrow set of enterprise-specific phrases are used.
- Intent unclear - ask clarifying questions: Another possibility is that the AI system identifies some keywords or concepts but lacks sufficient context to fully understand the user's intent. In such situations, the system will proactively ask more questions to clarify the user's needs. This interactive approach helps the AI gather the necessary information to refine its understanding and proceed with a more targeted search. By engaging in a dialogue with the user, the system aims to eliminate ambiguity and ultimately provide a relevant response. Let's look at a more specific example: "What is my number?" In this case, the LLM would drill down with "Do you mean your employee number, your contact number, or the number of time-offs for the year?" This kind of probing is more conversational and human-like.
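The three-way branch described above can be sketched as a simple routing function. This is purely illustrative, not the platform's actual logic; the intent label, confidence score, and threshold are hypothetical assumptions.

```python
# Hypothetical sketch of the intent branch: understood, unclear, or not understood.
# The 0.6 confidence threshold is an assumed value, not a platform setting.

def route_query(intent, confidence):
    """Decide how to handle a query based on intent detection."""
    if intent is None:
        return "no_results"               # intent not understood
    if confidence < 0.6:
        return "ask_clarifying_question"  # intent unclear: probe the user
    return "proceed_to_retrieval"         # intent understood

print(route_query(None, 0.0))
print(route_query("get_employee_number", 0.4))
print(route_query("get_employee_number", 0.9))
```

In practice the "confidence" would come from the LLM's own interpretation of the query, but the branching idea is the same.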
2. Re-ranker Algorithm: Once the intent is understood, the system often employs a re-ranker algorithm. This algorithm analyzes a pool of potentially relevant content and prioritizes the most pertinent information based on the identified intent. It goes beyond simple keyword matching, considering semantic relationships and contextual relevance to ensure the top results are highly aligned with the user's need.
2.1 Retrieve Chunks from Knowledge Articles: Following the re-ranking, the system retrieves specific "chunks" or segments of information from its knowledge base, which often consists of a vast collection of knowledge articles, documents, and data. These chunks are the most relevant pieces of information identified by the re-ranker algorithm.
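A toy version of the re-rank-and-retrieve step might look like the sketch below. The real re-ranker uses learned semantic models; this example only counts shared terms, and the sample chunks are invented for illustration.

```python
# Toy re-ranker: score each knowledge-base chunk against the query and
# keep the top-k. A real re-ranker would use semantic similarity, not
# simple term overlap.

def score(query, chunk):
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q)  # fraction of query terms found in the chunk

def rerank(query, chunks, top_k=2):
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:top_k]

chunks = [
    "Submit a time-off request from the HR portal.",
    "Reset your password using the self-service link.",
    "Your employee number appears on your profile page.",
]
best = rerank("what is my employee number", chunks, top_k=1)
print(best)
```

The chunk mentioning "employee number" scores highest and is the one handed to the next step for synthesis.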
To test the flow up to this point, you can also leverage the "Search Preview" functionality under AI Search.
3. LLM Synthesized Result (user-friendly response): Finally, a Large Language Model (LLM) steps in to synthesize the retrieved chunks into a coherent and user-friendly response. The LLM doesn't just present raw data; it processes the information, structures it logically, and formulates a natural language answer that directly addresses the user's query. This step ensures the user receives information in an easily digestible and understandable format.
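The synthesis step is typically implemented by stitching the retrieved chunks into a grounding prompt for the LLM. The template below is a hedged sketch of that idea; the platform's actual prompt is not public, so every string here is an assumption.

```python
# Illustrative prompt assembly for the synthesis step: the retrieved chunks
# become grounding context, and the LLM is asked to answer only from them.

def build_synthesis_prompt(query, chunks):
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Answer the user's question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

prompt = build_synthesis_prompt(
    "what is my employee number",
    ["Your employee number appears on your profile page."],
)
print(prompt)
```

Grounding the model in retrieved chunks like this is what keeps the final answer tied to the knowledge base rather than the model's general training data.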
All of the above steps can be viewed in the Generative AI log table (if you have access to it).
Conclusion:
The AI search flow is a multi-stage process designed to efficiently and accurately address user queries. From the initial understanding of intent to the synthesis of a user-friendly response via LLMs, each step plays a crucial role in delivering a positive user experience. While challenges remain in handling unclear or misunderstood intents, the ability of AI systems to ask clarifying questions represents a significant advancement in creating more intelligent and interactive search functionalities. As AI continues to evolve, we can expect even more sophisticated and nuanced approaches to understanding and responding to the diverse range of user queries.
Note: The views expressed are my own and do not reflect the views of my employer.
If you are interested in reading more about AI Search, I would recommend the article Making AI Search Work: Practical Lessons from the Field.