Gerard Dwan
ServiceNow Employee

 

The AI Search Analytics Dashboard, available with the Advanced AI Search Management Tools app, provides insight into the adoption and quality of search, highlights what may require tuning, and surfaces potential gaps in content.

This article covers the metrics in the context of configuring or tuning AI Search.  

 

Quality

GerardDwan_0-1684780912222.png

Genius Results Triggered vs. Clicked: Shows how often Genius Results were rendered and how frequently they were clicked when rendered. The ‘# of Triggered’ value indicates how often a Genius Result was presented to the end user. The ‘% of Clicked’ value represents how frequently the Genius Result was clicked when it was rendered. For Q&A Genius Results, because the answer is provided as part of the result, expect a smaller number of clicks.

 

Average Click Position: Overall quality can be gauged by reviewing the Average Click Position. An average click position of 3 or less indicates excellent performance for most queries.

 

Self-Solved Rate: How frequently users click on a result in the result list. Over 50% is very good, but a lower rate does not necessarily mean poor quality; it may indicate that the dynamic teaser text or a Q&A Genius Result is providing the information the user needs without a click.
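
If you want to sanity-check these dashboard numbers yourself, the underlying events live in the sys_search_event table (see the supporting-tables note in the comments below). Here is a minimal background-script sketch of the Self-Solved Rate calculation; note that the ‘click_rank’ column name and its “0 means no click” convention are assumptions to verify on your instance, as they can vary by release:

// Hedged sketch: approximate the Self-Solved Rate from raw search events.
// ASSUMPTION: 'click_rank' is the click-position column and 0 means no click;
// verify the actual columns on sys_search_event before relying on this.
var total = new GlideAggregate('sys_search_event');
total.addAggregate('COUNT');
total.query();
var totalQueries = total.next() ? parseInt(total.getAggregate('COUNT'), 10) : 0;

var clicked = new GlideAggregate('sys_search_event');
clicked.addQuery('click_rank', '>', 0); // assumed: rank > 0 means a result was clicked
clicked.addAggregate('COUNT');
clicked.query();
var clickedQueries = clicked.next() ? parseInt(clicked.getAggregate('COUNT'), 10) : 0;

if (totalQueries > 0)
    gs.info('Self-Solved Rate: ' + (100 * clickedQueries / totalQueries).toFixed(1) + '%');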

 

Tuning Possibilities and Content Gaps  

GerardDwan_1-1684780912224.png

Top Queries: The most popular queries for the search application, along with the percentage of total queries each represents. This helps determine what users are looking for most commonly.

On clicking the ‘View all’ link, an additional table is displayed that includes the Average Click Position for all the top queries.

GerardDwan_2-1684780912226.png

Investigate instances where the average click position is 5 or higher. Oftentimes, administrators discover that the users’ query term is not prevalent in the title or body of the content. Work with the knowledge manager to adjust the content; as an actionable tuning measure, add that term or phrase to the meta of the content (a sketch follows below). If there is a pattern or similarity among the terms that have high average click positions, this is a good opportunity to use Boost Rules*. If the content is dense, e.g., the term ‘laptop’ returns over 10 results but there is only one supported laptop per region, a Promote Rule may be more appropriate.
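
Where the remediation is simply adding the missing term, the article’s meta field can be updated directly (see the note on meta below). A minimal sketch, where the sys_id and the term are placeholders for illustration:

// Hedged sketch: append a search term to a knowledge article's meta keywords.
// The sys_id and term below are placeholders, not real values.
var kb = new GlideRecord('kb_knowledge');
if (kb.get('<article_sys_id>')) {
    var meta = kb.getValue('meta') || '';
    var term = 'vpn token';
    if (meta.toLowerCase().indexOf(term) === -1) { // avoid duplicate keywords
        kb.setValue('meta', meta ? meta + ', ' + term : term);
        kb.update();
    }
}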

 

Queries with No Clicks: The most popular queries where the user abandoned the search or did not click on any results. These search terms or phrases do not always require improvement; the answer may be present in the result set itself, as is the case with Q&A Genius Results. Further investigation is needed before direct action can be taken. The recommendation is to test these queries yourself to better understand why the user may not have clicked on a result.
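
To investigate at scale, the no-click queries can be pulled straight from the raw events. A minimal sketch, again assuming ‘search_term’ and ‘click_rank’ are the actual column names on sys_search_event (verify in your instance):

// Hedged sketch: list the most frequent search terms that produced no click.
// ASSUMPTION: 'search_term' and 'click_rank' are the real column names.
var noClick = new GlideAggregate('sys_search_event');
noClick.addQuery('click_rank', 0); // assumed: 0 means no result was clicked
noClick.addAggregate('COUNT');
noClick.groupBy('search_term');
noClick.query();
while (noClick.next()) {
    var hits = parseInt(noClick.getAggregate('COUNT'), 10);
    if (hits >= 5) // only surface terms abandoned repeatedly
        gs.info(noClick.getValue('search_term') + ': ' + hits);
}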

 

Queries with No Results: Terms that returned no results at all. The experience for the end user is that they are asked to submit a different query. There are several causes of a no-result query:

  • The user does not have access to the content.
  • There is a gap of information in the system, such as a missing Knowledge Base article.
  • There is information in the system that matches the intent but not the wording.

If the user does not have access to the content, it would be prudent to understand why and whether the information is sensitive. If it is not sensitive, it may be worthwhile to adjust access settings on the article (a quick ACL check is sketched after the list below). If there is a gap in content or a missing Knowledge Base article, work with the organization’s Knowledge Manager to determine whether the topic belongs in the Knowledge Base. If the information exists but does not match the wording or terms used by users, there are two options:

  1. Create a synonym for the user’s query that aligns it with the content.
    • Notes on synonyms: they carry the same weight and meaning for all queries and content, and they are applied to all queries in the specified language. This is ideal when there are many articles with similar meaning that you would expect users to reach in the same way.
  2. Add the user’s query terms to the meta** field of the desired content.
    • This offers a more precise approach than synonyms, and is ideal when the term should not be universally applied as a synonym.
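
For the access cause above, a quick check can save a lot of guesswork. A minimal sketch using GlideRecord’s canRead(), which evaluates ACLs for the user running the script, so to test another user’s view you would impersonate that user first; the sys_id is a placeholder:

// Hedged sketch: check whether the current (or impersonated) user can read an article.
var kb = new GlideRecord('kb_knowledge');
if (kb.get('<article_sys_id>')) {
    gs.info('User ' + gs.getUserName() +
        (kb.canRead() ? ' CAN read: ' : ' CANNOT read: ') +
        kb.getValue('short_description'));
}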

 

*Boosting Results  

There may be specific fields or attributes in content that identify it as more relevant. For example, if the Policies knowledge base is the definitive destination for relevant information in the organization, you can boost that knowledge base.

Navigate to AI Search > Search Experiences > Search Profiles > your_search_profile 

Create a new Result Improvement Rule from the appropriately named related list.  

  • Give it an identifiable Label, something like ‘Boost Policies’ in our example. 
  • Set the End Date to a time in the distant future.  
  • Check the ‘Activate on all queries’ box.
  • Click the ‘Create Boost Action’ button:

GerardDwan_3-1684780912227.png

 

  • Give it an identifiable Label, something like ‘Boost Policies’ in our example. 
  • Set Boost Type to: Boost by Field Match (static) 
  • Indexed Source: Knowledge Table 
  • When: kb_knowledge_base 
  • Contains: Policies  
  • Boost Weight: 1000 

GerardDwan_4-1684780912229.png

 

  • Click Submit 
  • Click Update, and Publish  

This will now apply a boost to those Policy documents. The Boost Weight can be increased or decreased in increments of 100 for testing purposes.  

 

Some customers may observe that regional or geographic attributes are particularly important for their users. If that is the case, check out Result Improvement Rules for Global Companies.  

 

**A note on meta  

The meta field (table.meta) is a legacy field that heavily influenced which keywords drove relevancy for a specific piece of content, and AI Search also takes advantage of it. While this field does not have the impact of the title (‘short description’ for the Knowledge table or ‘name’ for the Catalog Items table), it still carries greater influence than other fields on the table. This makes it a good candidate for adding phrases that end users expect to appear in the article, if the article cannot be rewritten.

 

For more context on deploying, monitoring, and improving AI Search, check out these articles:

Comments
StefanoZ
Mega Sage

Hi @Gerard Dwan  

Can you clarify what you mean by "adding phrases" to meta? Do you mean adding entire utterances inside the Meta field?

 

I would like a real example, please.
Scenario
 
You have an FAQ article where:
  • Question field: How can I reset my password?
  • Answer field: You have to x, y and z
  • Meta field: password reset
Q&A Genius Results mostly appear only if there's an exact match between the customer input and the Question field.
How can I better leverage the meta field to broaden the spectrum of words/phrases/questions that trigger that specific Q&A Genius Result?
 
ex. "My password is not working", "password not accepted", "can't login using my password"

 

Gerard Dwan
ServiceNow Employee

Hey there StefanoZ, the out-of-the-box Q&A requires an almost exact match to return the snippets, as you would expect. It's meant to keep it very precise.

 

As far as the meta is concerned, you can think of it as adding utterances. Basically, we want to align as closely as possible with what the user is looking for. In your example, the meta field would literally be: 'my password is not working, password change, update password', etc.

NNL
Tera Contributor

Hi @Gerard Dwan ,

 

I have two questions.

 

1. Queries with no clicks: Is there a way to differentiate whether a user is not clicking on anything vs. whether a Genius Result was triggered? Currently the click rank shows as "0" in both cases, so "top queries" also appear in the "queries with no clicks" section.

 

2. Self-Solved Rate: Can you please explain in more detail how this rate is measured? In the ServiceNow documentation it is described as "Metric indicates the percentage of search queries that produced a search result click for the selected application and date range." How long is the date range, e.g. are we speaking of hours or days? Can you use this rate as a "case deflection rate"? If you have knowledge articles, record producers, content items, and tickets in the form of record producers, how can you say that the user didn't open a ticket and solved the issue by himself?

 

Thank you in advance!

 

BethanyMcCool
Tera Contributor

I have the same question as @NNL related to queries with no clicks: how do you differentiate between when a user isn't clicking on anything and when a Genius Result is triggered, since both rank as "0"?

Shamus Mulhall
ServiceNow Employee

Hi @NNL 

 

1. The metric showing "Genius Results (triggered vs. clicked)" provides insight into the percentage of queries that produce a Genius Result. However, as users find their answers right there in the Genius Result, they often will not need to click on any result, which contributes to the queries with no clicks.

 

2. This is a calculation of the click-through rate, and the range is defined by the range selector at the top of the dashboard.

NNL
Tera Contributor

Hi @Shamus Mulhall,

 

thank you for your reply. I still have some remarks regarding:

 

1. I know that Genius Results will be counted as queries with no clicks. However, my question is whether there is a way to differentiate Genius Results from "queries with no clicks". Right now we have search terms, e.g. "leave", that appear in both "top queries" and "queries with no clicks", which is confusing.

 

2. Can you please show me where the range selector at the top of the dashboard is defined? Also, I still do not understand how this metric contributes to case deflection, given the description.

 

Thank you in advance!

NNL
Tera Contributor

Hi @Shamus Mulhall @Gerard Dwan ,

 

Another question from my side: in the "queries with no results" there are some terms that do show search results when I search in the Portal. Is it because I have the admin role, or what else could make this inaccurate?

 

Thank you in advance!

Alex87
Tera Contributor

Dear, is it possible to drill down into the search results data displayed on this dashboard? We are a very big company using ServiceNow in many countries. Right now the dashboard only shows the top searched items, but those come from the country where we have the most employees. To better understand what is being searched in a specific region or country, we need to drill down deeper; currently it just shows everything that is searched globally.

 

Thank you for reply.

matthew_hughes
Kilo Sage

Hi @Gerard Dwan 

 

We would like to be able to see the data behind the AI Search Analytics Dashboard. Currently the dashboard only shows a small number of results, and we would like to view and analyse results further down than what is shown, as this data could also be useful to us.

 

We need to confirm whether it is possible to see the data directly in the relevant tables or, if not, whether there is some sort of report or scheduled job we could build that could export this data for us to view.

Kass3m
Tera Expert

@Gerard Dwan do users who have opted out of being tracked impact the results in this dashboard or skew the results? Is their data counted in the queries on an anonymized basis?

Gerard Dwan
ServiceNow Employee

Hi @matthew_hughes - 

The supporting tables are sys_search_event, as well as the signals tables described in this doc: https://www.servicenow.com/docs/bundle/xanadu-platform-administration/page/administer/search-adminis...
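
For pulling the data out, a list view or report on sys_search_event can be exported directly, or a small script can dump it. A minimal sketch; ‘search_term’ is an assumed column name to verify on your release:

// Hedged sketch: read the most recent raw search events directly.
// ASSUMPTION: 'search_term' is the real column name on sys_search_event.
var ev = new GlideRecord('sys_search_event');
ev.orderByDesc('sys_created_on');
ev.setLimit(50);
ev.query();
while (ev.next()) {
    gs.info(ev.getValue('sys_created_on') + ' | ' + ev.getValue('search_term'));
}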

Gerard Dwan
ServiceNow Employee

Hi @Kass3m -

Generally speaking, yes. You can verify by checking (or having an admin check) the sys_search_event table to ensure that information is being logged there, even if on an anonymized basis.

sachin_namjoshi
Kilo Patron

Hi @Gerard Dwan, is there any documentation available on improving average response time? Based on the article https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB1459876, it's recommended to convert scripted user criteria to non-scripted user criteria, which will not work for us since we are using custom attributes in scripted user criteria.
