Ashley Snyder
ServiceNow Employee

Recording Link:

 

Knowledge Management User Group (KMUG) AI Search and Knowledge Management Best Practices

The presentation has been attached to this blog post as a PDF.

The Q&A has been transcribed into the comments on this post.

 

 

Hello everyone!

 

I'm excited to announce an upcoming KMUG session on AI Search and Knowledge Management best practices on 10/19 at 12 p.m. EST. @Gerard Dwan and @Shamus Mulhall will be joining us to discuss the topics below, alongside my presentation on content authoring best practices. This will be in a meeting format rather than a webinar, so you will have a chance to ask questions at the end of the session. Here's the registration link: https://servicenow.zoom.us/meeting/register/tJwsde6tpzkqHdfMIvZqQvNenoZmRl4lLrF4

 

Agenda Topics:

  • AI Search - Relevancy Cheat Sheet
  • AI Search - Content structure for multi-language and multi-geographical content
  • Now Assist for Search KM Authoring Best Practices
  • End-to-end AI Search demo with Now Assist for Search

 

If you can't make it, that's OK. I will find a way to post this externally, most likely on our ServiceNow Community YouTube channel, as I know this is going to be a big topic for a lot of us going forward. We will also post the content used in this session on the community, along with any Q&A, because we want to make sure you have everything you need as you explore AI Search and Now Assist!

13 Comments
Ian Phillips
Tera Contributor

Looking forward to it!

Ashley Snyder
ServiceNow Employee

Thank you for attending, everyone! We had a great live turnout! I'll work on the video editing and post it to YouTube to link here by the end of next week, and we'll work on getting a text copy of all the Q&A posted as a comment here as well.

Gianluca Roncat
Tera Expert

@Ashley Snyder did you post the video on YouTube?

Ashley Snyder
ServiceNow Employee

Here's a copy of the Q&A transcribed from the session:

 

Q: On the relevancy cheat sheet slide, are the matches in order? 

A: The relevancy cheat sheet slide does show matches in order of importance for relevancy. For example, matches on KB number: if you type in a KB number, we want to make sure you get a match on that knowledge article number first, assuming exact matching is not turned on.

 

Q: Are Keywords back in use for a user to input, or are those Keywords that AI puts in?

A: Keywords are the values associated with the meta field on the article or the tags on the article.

 

Q: Can you discuss how you calculate relevancy for non-KM content? For example, Community or Catalog items.

A: It's similar to knowledge articles. We provide a unified list of results by default in the All tab, which correlates all of the information you have for a specific interface or experience. We use nearly identical relevancy features, except for KB Number and Popularity shown on the Relevancy Cheat Sheet slide. There is an evening out of results because Title and Content are so heavily used for relevancy.

 

Q: On the relevancy sheet, do items within the article body get weighted differently, such as using bold for words or headings? And does the article body also include text in attachments?

A: There is nuance to this question. The article body is the text field of the knowledge article. Formatting within the article body is not accounted for today; for example, a bolded word has the same relevancy as a non-bolded word. Attachments are separate documents/results in AI Search. In Vancouver we created a nested result for attachments: from a relevancy perspective, we set the relevancy of the combined result (i.e., KB article and attachment) to the maximum of the two, so if you have very good matches in an attachment on an article, you'll see that KB article move up to the top. Prior to Vancouver this was a separate result. See the AI Search Vancouver release notes, and the conceptual sketch below.
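A purely conceptual sketch of that max-relevancy behavior (illustrative only, not the platform's actual scoring code):

    // Illustrative only: the nested Vancouver result ranks by the
    // better of the article's and the attachment's relevancy scores.
    function combinedRelevancy(articleScore, attachmentScore) {
        return Math.max(articleScore, attachmentScore);
    }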

 

Q: Out-of-the-box, meta is not indexed by AI Search. Does anyone advise switching this on? Our meta is very heavily influenced by previous use in Zing.

A: There was a time when AI Search did not index meta by default; as of San Diego, we include meta. A good way to check whether that is the case for you is to verify that your index sources do not explicitly exclude meta, which is under the field settings and mappings within your index source for AI Search.

 

Q: If a word in the article body is in bold, is that weighted differently than regular text? Same for headings in articles: are they weighted higher?

A: Currently, relevancy does not account for article body formatting such as bold text or underlining.

 

Q: Is the relevancy cheat sheet relevant for searches on both Employee Center and the regular Service Portal?

A: Yes.

 

Q: If using Article Templates, is the SEO Description Tag field used?

A: By default, we index everything, so you can get a result based on information from any field on the article. We do use this field for matching; we may not use it for relevancy unless it's mapped to meta or another field that carries up to relevancy.

 

Q: Can we segment users based on criteria (e.g. geography) on the AI Search Analytics dashboard? 

A: User-specific context is not part of the aggregated data on the AI Search Analytics dashboard. We have received this request and are working with the broader AI Search team to see how we can include this while adhering to GDPR standards. It is not available today.

 

Q: Search behaves very differently depending on whether you search for one word or more than one (it resubmits the search using 'and' if it doesn't find 'enough' results), but this is very difficult to diagnose when issues happen.

A: We do have and/or re-submission. "Enough" for us is a full page of results, and this is a configurable setting; you can set 10 results or 20 results, for example. The more results you request, the more likely you are to get the and/or re-submission, which is a bit fuzzier. Safe Harbor: in the Washington DC time frame we are making an and/or enhanced setting that doesn't re-submit for two-term queries. The and/or works fairly well for three or more terms, but for two-term queries it gets a bit fuzzy because we're only counting on one term to match, whereas a three-term query relies on two, and so on; we're considering this as well. As we make enhancements going forward, if you encounter issues, try using the Search Preview, which is part of the same plugin/app that includes Search Analytics. A conceptual sketch of the fallback behavior follows.
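Purely illustrative, assuming a hypothetical runSearch(terms, operator) helper and a configured page size (this is not the platform's actual implementation):

    // Illustrative sketch of and/or re-submission.
    // runSearch() is a hypothetical stand-in for the search engine call.
    function searchWithFallback(terms, pageSize) {
        var results = runSearch(terms, 'AND'); // strict pass first
        if (results.length < pageSize) {
            // Fewer than a full page: re-submit with the fuzzier 'or'.
            results = runSearch(terms, 'OR');
        }
        return results;
    }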

 

Q: Can we promote articles based on user attributes? For example, if a person is on maternity leave, show the maternity leave policy at the top.

A: We use out-of-the-box fields on the sys_user table; the user table and the attributes found on it are passed with the search out-of-the-box. Fields like maternity leave may not exist out-of-the-box.

 

Q: After developing search solutions in lower environments, are the solutions migrated to production instances by means of update sets, or by source control? 

A: This is based on your development and migration strategy as a customer: you can use update sets if that is your migration strategy, or source control if that is your strategy. We do not prescribe which method to use from an AI Search perspective, but we do recommend a regular cloning cadence from production to sub-production instances, as the data in production will always be different from what is in sub-production. For example, the synonyms you need in production will be different from those in sub-production, and the click information we aggregate for machine learning relevancy will only be found in production. Your production instance will likely have a different relevancy model than your sub-production instances unless you go through the process of training or cloning downwards.

 

Q: Are your analytics using the Knowledge Searches [ts_query_kb] table? If not, what information does that table provide? (I think I can filter by user search terms with that table) 

A: AI Search queries are stored in the sys_search_event table, which is where the AI Search Analytics data comes from. A minimal query sketch follows.
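As a minimal sketch, you can explore this table from a server-side background script with GlideAggregate. The search_term field name here is an assumption, so confirm the actual column names in your instance's dictionary before relying on it:

    // Sketch: count the most frequent terms logged in sys_search_event.
    // 'search_term' is an assumed field name - verify it in sys_dictionary.
    var ga = new GlideAggregate('sys_search_event');
    ga.addAggregate('COUNT');
    ga.groupBy('search_term');
    ga.query();
    while (ga.next()) {
        gs.info(ga.getValue('search_term') + ': ' + ga.getAggregate('COUNT'));
    }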

 

Q: As well as being able to boost by country, can you boost by office location? I am from a single-country organization, but some guidance would be specific to individual offices.

A: See the community article mentioned on the Multi-Geo Best Practices slide. We use user context attributes from the sys_user table, mostly out-of-the-box fields, so if the office field is available as an out-of-the-box field on the sys_user table, the knowledge article can follow the same process as in that community article to boost the content. A quick way to check for the field is sketched below.
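A minimal sketch for confirming that a candidate boost attribute actually exists on sys_user before building a rule around it ('office' is used here only as an example of a field that may be custom; 'location' is out-of-the-box):

    // Sketch: check whether candidate boost fields exist on sys_user.
    var user = new GlideRecord('sys_user');
    gs.info('location is a valid field: ' + user.isValidField('location'));
    gs.info('office is a valid field: ' + user.isValidField('office'));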

 

Q: Can AI search automatically identify the language of the articles in the search results and add dynamic facets for language filter?

A: AI Search relies on the Language field for Knowledge Base articles. The field is facetable, and in combination with the Vancouver country-to-language mapping, I can see this being a valuable addition. It would require a quick configuration.

 

Q: Does this mean Now Assist works only in English, because genius results are available only in English?

A: Currently, Now Assist in Search will only process content from KB articles. Expanding sources is on the roadmap for Now Assist in Search.

 

Q: This snippetizes from the KB content, but does it do that from attachments?

A: Currently, Now Assist in Search will only process content from KB articles. Expanding sources is on the roadmap for Now Assist in Search.

 

Q: Do you have a roadmap to add generative AI capabilities here so it re-writes and doesn't just take content as it is written?

A: Our current roadmap is focused on creating a draft article based on documentation found in cases and incidents. We are looking at ways to make article authoring more streamlined for agents while ensuring article draft quality.

 

Q: Will Now Assist be able to link follow-up questions to make the responses more relevant, or are the results for each question unique?

A: Safe Harbor (do not make purchasing decisions based on this), but Now Assist in Search will be able to support multi-turn interactions with users, specifically in the VA experience.

 

Q: As an example of this: if I have multiple articles around "How to request a new laptop" that are slightly different based on country, will the geo boost recognize the user's country from their profile and move the relevant article to the top of their search? (OK for this to be answered in the KM discussion group)

A: https://www.servicenow.com/community/ai-intelligence-articles/result-improvement-rules-for-global-co... 

 

Q: Do knowledge articles allow for SCORM files or does ServiceNow have that capability elsewhere? 

A: Allowed file types are controlled by your system administrator. Connect with your system administration team. 

 

Q: Would FAQ-style articles negatively impact their placement in AI Search results?

A: Very long/verbose FAQ articles may seem less relevant for a specific question than articles that are targeted to solve that specific question or problem.

 

Q: When Now Assist is pushed out, will we need to activate it or will it be on by default? 

A: There are licensing requirements for Now Assist; some information is in the community FAQ: https://www.servicenow.com/community/ai-intelligence-articles/now-assist-for-search-faq/ta-p/2686538

 

Q: Should we still be authoring 1 Question 1 Answer formats? 

A: Answer pending

Tania Duncan
Tera Contributor

Thank you so much for sharing this. It was great to share this with my team to help them understand how it will improve our user experience! 🙂

glmarshall
Tera Contributor

Does this cover the Solve Loop for technicians offering support?

 

i.e. the customer makes contact, the technician searches for an article, and if it doesn't exist they report a knowledge gap or create a draft.

 

Can AI cover that? E.g. the customer makes contact, the technician asks if the article exists, and if it doesn't exist, AI creates a draft to finalise later?

 

Kaitlin Huntley
Tera Explorer

I am unable to open the PDF.  

Kaitlin Huntley
Tera Explorer

Hello - I am unable to access the PDF. Is it possible to have it shared another way?

 

Error is: khoros.app.box.com refused to connect.

 

Thanks!

hopenesmith
Tera Contributor

@Ashley Snyder hi, on the Relevancy Cheat Sheet, Tags are mentioned. Can you help me understand how a tag needs to be shared in order for it to be picked up/indexed? By default, tags are only visible to the creator. I've searched and been unable to find any documentation on this.

narramounik
Tera Contributor

@Ashley Snyder 

@Sean Hughes 

How do we achieve the following reporting?

A report that provides insights into AI Search Assist usage data (genius results card):
  • How many knowledge articles were searched via AI Search Assist
  • How many were clicked
  • How many times users clicked on "Solves my issue"

This data is not available OOTB in the AI Search Analytics dashboard or the User Experience Analytics dashboard. How do we configure it with custom mappings, and which tables store the logs to track the metrics above?

Please provide the relevant sources on this.
Thanks