a month ago
Dear all,
As part of my learning on Now Assist and custom skills, I am trying to build a skill that scans an incident for good data quality.
For that, I have created a script include as below, which specifies the rules to follow:
var IncidentDataQualityEvaluatorTestCal = Class.create();
IncidentDataQualityEvaluatorTestCal.prototype = {
    initialize: function() {},

    evaluate: function(incident) {
        var config = new IncidentDataQualityConfigTestCal().getRules();
        var result = {
            score: 100,
            flags: [],
            messages: []
        };

        // 1. Short description check
        if (!incident.short_description || incident.short_description.length < config.minShortDescLength) {
            result.score -= 20;
            result.flags.push('UNCLEAR_SUMMARY');
            result.messages.push('Short description is too short or unclear');
        }

        // 2. Description check
        if (!incident.description || incident.description.length < config.minDescriptionLength) {
            result.score -= 25;
            result.flags.push('POOR_DESCRIPTION');
            result.messages.push('Description lacks sufficient details');
        }

        // 3. Vague wording check
        config.vagueWords.forEach(function(word) {
            if (incident.short_description && incident.short_description.toLowerCase().includes(word)) {
                result.score -= 10;
                result.flags.push('VAGUE_LANGUAGE');
            }
        });

        // 4. Assignment check
        if (!incident.assignment_group) {
            result.score -= 15;
            result.flags.push('UNASSIGNED');
            result.messages.push('Incident is not assigned');
        }

        // 5. Required fields
        config.requiredFields.forEach(function(field) {
            if (!incident[field]) {
                result.score -= 10;
                result.flags.push('MISSING_' + field.toUpperCase());
            }
        });

        return result;
    },

    type: 'IncidentDataQualityEvaluatorTestCal'
};
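The evaluator depends on a companion config script include, IncidentDataQualityConfigTestCal, which is not shown in the post. A minimal sketch of what its getRules() might return could look like the following; the thresholds, word list, and required fields are illustrative assumptions, and on the platform it would be declared with Class.create() like the evaluator:

```javascript
// Sketch of the companion config script include (values are assumptions).
// On a ServiceNow instance this would use Class.create(); plain prototype
// syntax is used here only so the sketch is self-contained.
function IncidentDataQualityConfigTestCal() {}
IncidentDataQualityConfigTestCal.prototype.getRules = function() {
    return {
        minShortDescLength: 10,   // minimum characters for short_description
        minDescriptionLength: 30, // minimum characters for description
        vagueWords: ['issue', 'problem', 'not working', 'broken'],
        requiredFields: ['caller_id', 'category', 'priority']
    };
};
```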
Then I defined a starting prompt for my custom skill as below:
## Role
You are an incident management expert. Your task is to suggest incident qualification improvements based on your analysis of the incident fields. You will need to analyze the information and determine the most appropriate improvement for each poorly qualified field.
## Context
The provided descriptions and configuration item:
short description: {{incident.short_description}}
description: {{incident.description}}
## Instructions
1. Use your tools for scanning through incident records
2. Analyze the description, short description, and assignee fields to qualify the data provided
## Output
The output should return the result of the analysis in JSON format, using the following sample formats.
1 - sample for short description length check
{
    "score": 80,
    "flags": ["UNCLEAR_SUMMARY"],
    "messages": ["Your Short description is too short or unclear"]
}
2 - sample for description length check
{
    "score": 75,
    "flags": ["POOR_DESCRIPTION"],
    "messages": ["Description lacks sufficient details"]
}
3 - sample for short description and description length check
{
    "score": 55,
    "flags": ["UNCLEAR_SUMMARY", "POOR_DESCRIPTION"],
    "messages": ["Your Short description is too short or unclear", "Description lacks sufficient details"]
}
The next step in my configuration is to define the tool to be used by the prompt. For that, I select the Add Tool tab, on which I select my script include as seen below.
Then on the next screen, I need to provide an input value to my script as seen below:
The Name field and Data Type have been automatically provided, as my input is the Incident record table.
But what should I add as the input value here?
I guess the context should be the incident, but then what do I need to do so that my script executes correctly as part of the tool used by my prompt?
Thanks for your help,
Regards
4 weeks ago
I'm not a scripting expert, but I think your first two lines need to match: the variable being defined and the call being made differ (IncidentDataQualityEval... vs IncidentDQEval...).
See this example by Eliza using the script tool as well. https://www.youtube.com/watch?v=eFlhntKTnBM
Some other input about your use case, since I created a very similar custom skill for our instance and use: you are using a script tool, but most of this can be handled with instructions directly to the LLM. Here is a snippet view of how I pulled our data directly from the incident record input and then provided instructions via the prompt to evaluate it. There was no need to use a scripting tool if you are able to get to the data and can then instruct the LLM to evaluate that data based on your desired score. Just define the fields and then include instructions directly in the prompt on how to evaluate them. Using the prompt instead of a scripted tool makes this much more straightforward.
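As an illustration of that prompt-only approach, the field references and scoring instructions might be embedded directly in the prompt along these lines; the field names, thresholds, and deductions here are assumptions for illustration, not the actual skill described above:

```
## Context
short description: {{incident.short_description}}
description: {{incident.description}}
assignment group: {{incident.assignment_group}}

## Instructions
Start from a quality score of 100 and apply these deductions:
1. Subtract 20 and flag UNCLEAR_SUMMARY if the short description is under 10 characters.
2. Subtract 25 and flag POOR_DESCRIPTION if the description is under 30 characters.
3. Subtract 15 and flag UNASSIGNED if the assignment group is empty.
Return the final score, the flags, and one improvement message per flag as JSON.
```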
4 weeks ago
We call our skill once the incident is closed and then assign a score to the incident record based on the data in the short description, description, resolution notes/code, and a few other fields that we consider important. The scores of groups/individuals are then averaged to identify areas that need coaching to improve overall data quality.
We also have the OOTB Resolution Notes Generation skill active to help where users only entered "resolved" or, in some instances, "n/a". This skill is triggered automatically and has improved the quality of our incident resolution notes when there are relevant work notes or other info entered that the skill can leverage.
4 weeks ago
Hi @User687957
You can pass the sys_id in the input field, so that your script include can GlideRecord the incident table, get the related information for the current incident, and check the score.
You then need to use the tool in your prompt after this line, getting the sys_id from the inputs; then your custom skill will work:
Use your tools for scanning through incident records
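Building on that sys_id suggestion, here is a minimal sketch of what a tool-facing method could look like, assuming the tool passes the incident sys_id as a string input. The thresholds and field set are illustrative, and the GlideRecord stub at the top exists only so the sketch is self-contained off-platform; on the instance, the real GlideRecord API is used and the stub must be removed. Ideally the method would delegate to the script include's evaluate() logic rather than repeat it.

```javascript
// --- Stub of GlideRecord for local illustration only (remove on-instance) ---
function GlideRecord(table) {
    this.table = table;
    this.record = null;
}
GlideRecord.prototype.get = function(sysId) {
    // Pretend the lookup succeeded with a sample incident record.
    this.record = {
        sys_id: sysId,
        short_description: 'VPN connection drops for remote users after client update',
        description: '',
        assignment_group: ''
    };
    return true;
};
GlideRecord.prototype.getValue = function(field) {
    return this.record ? this.record[field] : '';
};

// --- Tool-facing evaluator: accepts the sys_id supplied as the tool input ---
function evaluateIncidentBySysId(sysId) {
    var gr = new GlideRecord('incident');
    if (!gr.get(sysId)) {
        return { score: 0, flags: ['NOT_FOUND'], messages: ['Incident not found'] };
    }
    var incident = {
        short_description: gr.getValue('short_description'),
        description: gr.getValue('description'),
        assignment_group: gr.getValue('assignment_group')
    };

    // Hardcoded illustrative checks; in practice, delegate to the
    // script include's evaluate() so the rules live in one place.
    var result = { score: 100, flags: [], messages: [] };
    if (!incident.short_description || incident.short_description.length < 10) {
        result.score -= 20;
        result.flags.push('UNCLEAR_SUMMARY');
        result.messages.push('Short description is too short or unclear');
    }
    if (!incident.description || incident.description.length < 30) {
        result.score -= 25;
        result.flags.push('POOR_DESCRIPTION');
        result.messages.push('Description lacks sufficient details');
    }
    if (!incident.assignment_group) {
        result.score -= 15;
        result.flags.push('UNASSIGNED');
        result.messages.push('Incident is not assigned');
    }
    return result;
}
```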
Hope that helps!
