Script seems not to be executed by the LLM from Custom Skill
3 hours ago
Dear all,
As part of my learning on Now Assist and custom skills, I am trying to build a skill that scans an incident for good data quality.
For that, I created a script include, shown below, which specifies the rules to follow:
var IncidentDataQualityEvaluatorTestCal = Class.create();
IncidentDQEvaluatorTestCal.prototype = {
    evaluate: function(incident) {
        var config = new IncidentDataQualityConfigTestCal().getRules();
        var result = {
            score: 100,
            flags: [],
            messages: []
        };

        // 1. Short description check
        if (!incident.short_description || incident.short_description.length < config.minShortDescLength) {
            result.score -= 20;
            result.flags.push('UNCLEAR_SUMMARY');
            result.messages.push('Short description is too short or unclear');
        }

        // 2. Description check
        if (!incident.description || incident.description.length < config.minDescriptionLength) {
            result.score -= 25;
            result.flags.push('POOR_DESCRIPTION');
            result.messages.push('Description lacks sufficient details');
        }

        // 3. Vague wording check
        config.vagueWords.forEach(function(word) {
            if (incident.short_description && incident.short_description.toLowerCase().includes(word)) {
                result.score -= 10;
                result.flags.push('VAGUE_LANGUAGE');
            }
        });

        // 4. Assignment check
        if (!incident.assignment_group) {
            result.score -= 15;
            result.flags.push('UNASSIGNED');
            result.messages.push('Incident is not assigned');
        }

        // 5. Required fields
        config.requiredFields.forEach(function(field) {
            if (!incident[field]) {
                result.score -= 10;
                result.flags.push('MISSING_' + field.toUpperCase());
            }
        });

        return result;
    },
    type: 'IncidentDataQualityEvaluatorTestCal'
};
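The evaluator above calls a companion config script include, `IncidentDataQualityConfigTestCal`, that is not shown in the post. A minimal sketch of the shape `getRules()` would need to return might look like this; the threshold values and word lists are assumptions, not the actual configuration, and it is written as a plain constructor function so it runs outside ServiceNow (on an instance you would typically use `Class.create()`):

```javascript
// Hypothetical sketch of the config script include the evaluator calls.
// All thresholds, vague words, and required fields below are assumed values.
var IncidentDataQualityConfigTestCal = function() {};
IncidentDataQualityConfigTestCal.prototype.getRules = function() {
    return {
        minShortDescLength: 15,   // assumed minimum short description length
        minDescriptionLength: 50, // assumed minimum description length
        vagueWords: ['issue', 'problem', 'not working', 'broken', 'error'],
        requiredFields: ['category', 'subcategory', 'contact_type', 'caller_id']
    };
};
```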
Then I defined a starting prompt for my custom skill, as below:
## Role
You are an incident management expert. Your task is to suggest incident qualification improvements based on your analysis of the incident fields. You will need to analyze the information and determine the most appropriate improvement for each poorly qualified field.
## Context
The provided incident descriptions:
short description: {{incident.short_description}}
description: {{incident.description}}
## Instructions
1. Use your tools for scanning through incident records
2. Analyze the description, short description, and assignee fields to qualify the data provided
## Output
The output should return the result of the analysis in JSON format, using the following sample formats.
1- sample for short description length check
{
    "quality_score": 80,
    "flags": ["UNCLEAR_SUMMARY"],
    "messages": ["Your short description is too short or unclear"]
}
2- sample for description length check
{
    "quality_score": 80,
    "flags": ["POOR_DESCRIPTION"],
    "messages": ["Description lacks sufficient details"]
}
3- sample for short description and description length check
{
    "quality_score": 70,
    "flags": ["UNCLEAR_SUMMARY", "POOR_DESCRIPTION"],
    "messages": ["Your short description is too short or unclear", "Description lacks sufficient details"]
}
The next step in my configuration is to define the tool to be used by the prompt. For that, I select the Add Tool tab, where I select my script include, as seen below.
Then, on the next screen, I need to provide an input value to my script, as seen below:
The Name field and Data Type have been automatically provided, as my input is the Incident record table.
But what should I add as the input value here?
I guess the context should be the incident, but then what do I need to do so that my script executes correctly as part of the tool used by my prompt?
Thanks for your help.
Regards
3 hours ago
I'm not a scripting expert, but I think your first two lines need to match: the variable you define and the name whose prototype you assign are different (IncidentDataQualityEval.... vs IncidentDQEval...).
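The naming fix can be sketched as follows; a stand-in `Class.create` shim is included here only so the pattern runs outside ServiceNow, where `Class` is provided globally, and the body is abbreviated:

```javascript
// Stand-in for ServiceNow's Class.create() so this sketch runs anywhere;
// on an instance, Class.create is already available and this shim is not needed.
var Class = {
    create: function() {
        return function() {};
    }
};

// The variable created by Class.create() and the name whose prototype is
// assigned must be identical; otherwise the prototype is attached to an
// undefined name and the tool's script fails to execute.
var IncidentDataQualityEvaluatorTestCal = Class.create();
IncidentDataQualityEvaluatorTestCal.prototype = {
    evaluate: function(incident) {
        // ...quality checks go here...
        return { score: 100, flags: [], messages: [] };
    },
    type: 'IncidentDataQualityEvaluatorTestCal'
};
```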
See this example by Eliza using the script tool as well. https://www.youtube.com/watch?v=eFlhntKTnBM
Some other input on your use case, since I created a very similar custom skill for our instance: you are using a script tool where most of this can be handled with instructions directly to the LLM. Here is a snippet view of how I pulled the data directly from the incident record input and then provided instructions via the prompt to evaluate it. There is no need for a scripting tool if you can get to the data and can then instruct the LLM to evaluate that data against your desired score. Just define the fields and include instructions directly in the prompt on how to evaluate them. Using the prompt instead of a scripted tool makes this much more straightforward.
2 hours ago
Hello @qprinsloo, thanks for sharing your ideas.
I will go with LLM prompting only, then. But at that point, how can I instruct it, for example, that the summary should have a minimum length and not a fuzzy description (like I try to match in my script)?
Regards
44m ago
User687957, you will need to play around with the prompt, but something like this (see below) should get you started. What you're doing in your script can be expressed as conversational requirements/instructions to the LLM.
Also, some other feedback: you're deducting points for a field not being filled in. Why not control that by making the field mandatory? Instead of trying to control the process with a score, try to control the input method. Assignment group, contact type, category, etc. can all be made mandatory at a certain point in the incident lifecycle.
You are an expert Incident Data Quality Analyst for a ServiceNow environment.
Evaluate the quality of the following incident record and return a structured assessment.
### Context
The closed incident record:
- Short Description: {{incident.short_description}}
- Description: {{incident.description}}
- Assignment Group: {{incident.assignment_group}}
- Category: {{incident.category}}
- Subcategory: {{incident.subcategory}}
- Contact Type: {{incident.contact_type}}
- Caller: {{incident.caller_id}}
### Evaluation Rules:
1. **Short Description**
- Must be present and at least 15 characters long.
- Should be clear and descriptive.
2. **Description**
- Must be present and at least 50 characters long.
- Should contain enough detail for someone to understand the issue.
3. **Vague / Unclear Language**
- Flag if the short description contains any of these vague words/phrases:
"issue", "problem", "not working", "broken", "error", "failed", "help", "urgent", "asap", "don't know"
4. **Assignment**
- The incident must be assigned to an Assignment Group.
5. **Required Fields**
- Check these fields: category, subcategory, contact_type, caller_id.
- Any missing field is a quality issue.
### Output Format:
Return ONLY a valid JSON object with this exact structure (no extra text):
{
"score": number,
"flags": ["FLAG1", "FLAG2", ...],
"messages": ["Human readable message 1", "Message 2", ...],
"summary": "One-sentence overall quality summary"
}
Scoring Guidelines (start at 100):
- Short description missing or too short → -20
- Description missing or too short → -25
- Contains vague language → -10
- Not assigned to a group → -15
- Each missing required field → -10
Score cannot go below 0.
### Incident Record to Evaluate:
{{incident_data}}
Now evaluate the incident according to the rules above.
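For spot-checking what the LLM returns, the scoring guidelines above can be mirrored in a small plain-JavaScript function. This is only a reference sketch: the thresholds and vague-word list copy the prompt, and the incident field names are assumptions.

```javascript
// Hypothetical reference implementation of the prompt's scoring guidelines,
// for sanity-checking LLM output. Field names are assumed, not confirmed.
function scoreIncident(inc) {
    var vagueWords = ['issue', 'problem', 'not working', 'broken', 'error',
                      'failed', 'help', 'urgent', 'asap', "don't know"];
    var requiredFields = ['category', 'subcategory', 'contact_type', 'caller_id'];
    var score = 100;

    // Short description missing or too short -> -20
    if (!inc.short_description || inc.short_description.length < 15) score -= 20;

    // Description missing or too short -> -25
    if (!inc.description || inc.description.length < 50) score -= 25;

    // Vague language in the short description -> -10 (applied once)
    var sd = (inc.short_description || '').toLowerCase();
    var hasVague = vagueWords.some(function(w) { return sd.indexOf(w) !== -1; });
    if (hasVague) score -= 10;

    // Not assigned to a group -> -15
    if (!inc.assignment_group) score -= 15;

    // Each missing required field -> -10
    requiredFields.forEach(function(f) { if (!inc[f]) score -= 10; });

    // Score cannot go below 0
    return Math.max(0, score);
}
```

Running this against the same incident the skill evaluates gives a quick baseline to compare the LLM's score against.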
