
Anyone else using Knowledge Demand Insights?

Gary Kakazu1
Mega Guru

I'm looking to share my experiences with Knowledge Demand Insights, and learn from others.

One specific question I have is about the configuration. The first step is to configure & train the solution definitions. The clustering definition runs against the kb_task_knowledge_coverage table. Initially this table is empty, so the training won't run. I continued on, and found that the table got populated after I ran the [Knowledge Curation]: Generate Incident Clusters scheduled job. Is that the way it's supposed to work? Is the documentation not correct?

Also, does anyone have a more detailed explanation of the purpose of kb_task_knowledge_coverage? How is it generated? How is it used to find the knowledge gaps?


10 REPLIES

JennyHu
Tera Guru

Hi Gary,

I'm also exploring Knowledge Demand Insights. Like you, I had trouble understanding the order of the setup from the documentation site alone. I got a better understanding after going through a lab and reading the code.

The [Knowledge Curation]: Generate Incident Clusters Scheduled Job calls a Script Include [sn_km_ml.KBCurationProcessor] as seen in the script below: 

// Instantiate the processor with what appears to be the sys_id of a Demand Insights
// configuration record (yours will differ), then run the full curation pipeline
var kc = new sn_km_ml.KBCurationProcessor("e6dac78973d40010f84de1d28bf6a7b6");
kc.process();

The Script Include can be found here https://[your-instance-name].service-now.com/nav_to.do?uri=%2Fsys_script_include.do%3Fsys_id%3D23fc0...

Looking at the code, here is the sequence of operations in the Script Include (a rough sketch of the full flow follows the list):

  1. Initialization. It initializes the job by preparing the config object using the configurations defined in Knowledge Demand Insights > Demand Insights Configurations.
  2. Processing.
    • First, the process() function deletes all [kb_task_knowledge_coverage] records. I'm assuming this is to reset to a clean slate for each scheduled job run.
    • Second, it goes through your task table and checks whether each task record mentions a KB article in its journal fields. If it doesn't, the task is added to batchTaskMap.
    • Third, it filters out the tasks that already have similar KB articles, based on the similarity solution you trained.
    • Fourth, it adds the remaining tasks to the Task Knowledge Coverage [kb_task_knowledge_coverage] table.
    • Lastly, it triggers the clustering solution training on the tasks that need knowledge coverage. The result is that you get to see the clusters in the Demand Insights for Incidents dashboard.
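
To make that flow concrete, here is a rough reconstruction of what process() appears to do, written as simplified Glide script. This is my own sketch, not the shipped code: journalMentionsKB() is a hypothetical helper standing in for the journal-field check, and the last three steps are left as comments because they go through Predictive Intelligence APIs I haven't traced in detail.

// Simplified sketch of KBCurationProcessor.process() -- illustrative only

// 1. Reset: clear kb_task_knowledge_coverage for a clean slate
var coverage = new GlideRecord('kb_task_knowledge_coverage');
coverage.query();
coverage.deleteMultiple();

// 2. Collect tasks whose journal fields don't mention a KB article
var batchTaskMap = {};
var task = new GlideRecord('incident'); // the task table comes from the Demand Insights configuration
task.query();
while (task.next()) {
    if (!journalMentionsKB(task)) { // hypothetical helper for the journal-field check
        batchTaskMap[task.getUniqueValue()] = task.getValue('short_description');
    }
}

// 3. Filter out tasks the trained similarity solution already matches to existing KB articles
// 4. Insert the remaining tasks into the [kb_task_knowledge_coverage] table
// 5. Trigger clustering solution training on those coverage records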

So I believe it's safe to say that you don't need to train the clustering solution manually as a first step, as it looks like the [Knowledge Curation]: Generate Incident Clusters Scheduled Job will do that for you!

Here's the lab that might help:

K20 lab "Accelerate Incident Resolution with Predictive Intelligence and Agent Workspace" on NowLearning.  Section 1.1.8 has a demo on Knowledge Demand Insights.  The lab guide is here: https://developer.servicenow.com/connect.do#!/event/knowledge2020/LAB2995/lab_LAB2995_part_4__experi...

Good luck with your implementation!

Jenny

Hi Jenny,

Thanks for your response. It is very thorough. This will help a lot with my understanding of the process.

I did go through the K20 lab once already. I may go through it again since I have more experience now.

Gary

This is very helpful, thanks! We are setting this up now and I agree the documentation is lacking. 

I'm curious whether either of you found this plugin useful overall. Does it accurately identify gaps in knowledge? Also, we ran the 'Knowledge Similar Articles' similarity job, but the results were all over the place and not helpful for us, at least at first glance. The solution seems to be picking up outdated versions of articles that are not even published, even though we have the conditions configured to only look at published articles (we use KB article versioning). I'm wondering if we are doing something wrong with that job...
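
For reference, here's the kind of filter I expected to keep old versions out. With KB versioning each version is its own kb_knowledge record, so a condition on workflow_state alone may not be enough; the latest=true flag below is my guess at what also needs to be checked (verify the field names on your instance):

// Quick test of which articles a published-only filter actually returns --
// latest=true is an assumption based on how versioning flags the current version
var kb = new GlideRecord('kb_knowledge');
kb.addEncodedQuery('workflow_state=published^latest=true^active=true');
kb.query();
gs.info('Articles matching the filter: ' + kb.getRowCount());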

UPDATE: I ran the Knowledge Demand Insights jobs and the results seem confusing to me. For example, the Pareto chart says we have a huge number of incidents indicating a knowledge gap for password resets, even though we have a good number of password reset KB articles... Curious to hear if either of you got similar results or made adjustments to the solution definitions that improved things.

@Steve Kelly Were you able to train the solution to do better with the gaps flagged for incidents where knowledge already exists, like the password reset example you provided?