on 08-02-2024 12:06 PM - edited 2 weeks ago
Quick Overview
The Now Assist Skill Kit (NASK) lets you build and deploy custom generative AI skills directly within your ServiceNow instance. This FAQ covers what it is, when to use it, how to access it, and how to build, test, and deploy custom skills — including common troubleshooting scenarios.
Key Terms
NASK (Now Assist Skill Kit): The ServiceNow toolset for building and deploying custom generative AI skills on your instance.
OOTB Skills: Out-of-the-box Now Assist skills (e.g., task summarization, code generation) provided by ServiceNow.
NowLLM: ServiceNow's managed large language model service, recommended for most custom skill use cases.
BYOLLM: Bring Your Own LLM — connecting a custom or third-party LLM via the generic LLM connector.
Assist: The unit of consumption tracked when a generative AI capability is invoked within ServiceNow.
In this article:
What is NASK?
When would I use NASK?
Access & prerequisites
Building a custom skill
LLM options
Testing & deployment
Troubleshooting
Additional resources
What is the Now Assist Skill Kit (NASK)?
Now Assist Skill Kit, or NASK, was introduced in the Xanadu release. It allows you to build and deploy custom skills that leverage generative AI directly within your instance.
These skills enable use cases that the current suite of Out of the Box (OOTB) Now Assist skills — such as task summarization and code generation — cannot address today.
NASK outputs a custom skill, which can then be activated from within the Now Assist Admin console.
When would I use NASK?
NASK is designed for those seeking greater flexibility with generative AI capabilities. Common use cases include:
- You have existing workflows that you wish to augment with a generative AI function.
- Your capability requires an external LLM (i.e., a model not managed by ServiceNow).
- This includes cases where the LLM must have domain-specific knowledge, or where particular data handling and security restrictions prevent the use of NowLLM or one of our model flexibility providers.
- Learn more about using NASK to enable the use of external LLMs here.
- You have organization-specific use cases that OOTB skills do not cater to.
- You wish to modify the LLM provider or the underlying prompt of eligible OOTB skills. See the question "How can I use NASK to modify OOTB skills?" for more information.
Important: We generally recommend approaching this feature thoughtfully. ServiceNow cannot monitor or manage custom solutions. We typically recommend that most admins stay within the range of OOTB capabilities. Where OOTB isn't fit for purpose, try the configuration options in the Now Assist Admin console first. If that's still insufficient, NASK may be a good fit.
How can I access the Now Assist Skill Kit?
To access NASK, you must meet the following criteria:
- Have an active license for a Now Assist product.
- Have updated the Now Assist for [x] plugins to the latest versions.
- Have an instance that is on at least the Xanadu release.
Note: You cannot access any Now Assist/generative AI features (and consequently NASK) on personal developer instances (PDIs). If you are a partner looking to develop custom skills, you can find options to access NASK within the Partner Success Center.
Once you have confirmed the above, grant your users access by adding the sn_skill_builder.admin role to those who will be creating and maintaining custom skills.
What do I need to know before beginning to use NASK?
Building custom skills with NASK spans several stages. We recommend getting familiar with the following user journey before you start:
1. Define provider. Understand the benefits and potential downsides of each LLM being considered. Each LLM has different strengths, so investigate which one best fits your use case. You can also run evaluations to compare your skill's output across different LLMs.
2. Build. During the build process you can create multiple prompts within the same skill. These can be used as a method of version control, or for scenarios where different prompts are required in different circumstances: for example, you may want a summary of a record in the closed state to include information from the resolution notes, whereas that is not necessary for a newly created record.
3. Test & Evaluate. NASK provides a testing area for you to test your prompt directly in the editor. To test against a larger set of records, you can use the evaluation tool to auto-evaluate your prompts against an entire dataset. We use LLMs-as-a-judge to score your prompts on metrics such as Faithfulness and Correctness.
4. Deploy. Your custom skill can be deployed to various areas of the product. To learn more, view our guide on Tools and Deployment Options.
Are there limitations on what I can build within Now Assist Skill Kit?
We encourage you to innovate; however, field-of-use restrictions apply: you should only build in areas for which you are licensed.
How many Assists are consumed when using NASK?
For information on Assist consumption, please refer to our overview or contact your account representative. If you receive an error message when calling an LLM, you will not be charged an Assist.
Do custom skills support languages other than English?
Yes, you can leverage the Dynamic Translation component of the Generative AI Controller to enable the use of custom skills for those operating in a language other than English. Certain languages may also have native translation available.
Where can I limit who has access to the deployed custom skill?
You can do so by configuring the deployment vector itself. For example, if you are deploying your custom skill as a UI Action, you can follow this guide to add role-based access.
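As an illustration, a role check in the UI Action's Condition field is one way to add this restriction. `gs.hasRole` is the standard GlideSystem call; the role name below is a placeholder, not a real role:

```javascript
// UI Action "Condition" field: show the button only to users who hold
// this role. "x_custom_skill_user" is a placeholder role name.
gs.hasRole('x_custom_skill_user')
```

This is a config fragment that runs server-side inside the UI Action record, so it cannot be executed outside an instance.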
Is Now Assist Skill Kit supported in GCC or self-hosted environments?
Yes.
Does Now Assist Guardian work with Now Assist Skill Kit?
Yes.
What roles come with Now Assist Skill Kit?
- sn_skill_builder.admin: Grants access to all of the features within NASK
- sn_skill_builder.viewer: Grants read only access to NASK
Where do I find NASK in my instance?
Within your instance, type Now Assist Skill Kit into the filter navigator to display the link. If the link is not visible, ensure your instance is on at least the Xanadu release, you have an active license for a Now Assist for [x] product, and you have at least one Now Assist plugin installed. As a reminder, you cannot access Now Assist features on your PDIs.
How do I build a custom skill using NASK?
Please refer to the Now Assist Use Case Library for lab guides and walkthroughs on building a custom skill, or refer to the product documentation.
Which LLMs can I use in my custom skill?
Your options today are:
- ServiceNow Managed
- Now LLM Service
- ServiceNow Integrated model
- External LLM
- BYOK
- BYOLLM
For more information on this topic, including how to configure BYOK and BYOLLM, please refer to the Using external LLMs with Now Assist article.
Important: If you choose to use an external LLM for your use case, you will be responsible for managing the appropriate license and model configuration.
Can I use multiple LLMs in a single skill?
Relevant resource: Tool & Deployment Method Overview
If you have at least version 3.0.1 of the NASK plugin, you can add other custom skills as inputs to your custom skill within the Tool Editor. This allows you to chain skills together, regardless of which LLM they are using.
Alternatively, if you have a use case where a different LLM is needed for different record types, you can create a prompt for each LLM and then specify when each is used via usage conditions (found using the icon noted in the image below).
What is a skill output?
The response from the LLM is stored in JSON format, with various key:value pairs holding the information retrieved from the LLM.
The skill outputs found in the Skill Contents module in NASK identify the expected keys that will be returned by that LLM — for example, the provider (which LLM you are using), the response (the outcome of your custom skill), and the error (which may be empty if no errors were encountered).
You can use the skill output to help parse the information you want to retrieve. If your deployment method allows for you to manipulate the output via script (such as the Now Assist Context Menu, Flow Actions, or UI Actions), then you have the ability to extract specific output values.
To do so, you can use the below function to retrieve the value within the JSON response by replacing OUTPUT KEY with your desired key from the list of outputs (e.g. provider, response, error):
var output = sn_one_extend.OneExtendUtil.execute(request)['capabilities'][request.executionRequests[0].capabilityId]['OUTPUT KEY'];
Note: You can technically add additional outputs within the tool today; however, there is no in-product functionality to create a mapping between the new skill output and any values.
Important: Each LLM provider may have different output variables. Review the incoming JSON object before scripting to ensure you are parsing the correct key:value pair.
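To make the parsing step concrete, here is a minimal sketch in plain JavaScript. The payload shape (provider, response, and error keys) is an assumption based on the outputs described above; verify the actual keys your provider returns before relying on them:

```javascript
// Hypothetical skill output payload; real keys vary by LLM provider.
var rawOutput = JSON.stringify({
    provider: 'NowLLM',
    response: 'The incident was resolved by restarting the service.',
    error: ''
});

// Parse the JSON and fail loudly if the skill reported an error.
function parseSkillOutput(raw) {
    var parsed = JSON.parse(raw);
    if (parsed.error) {
        throw new Error('Skill returned an error: ' + parsed.error);
    }
    return { provider: parsed.provider, response: parsed.response };
}

var result = parseSkillOutput(rawOutput);
```

In an instance, `raw` would come from the `OneExtendUtil.execute` call shown above rather than a hard-coded string.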
What data can I bring in to use within my prompt?
Relevant resource: Tool & Deployment Method Overview
You have two options to bring in data:
- Skill Input
- Tools
You need to add a skill input to bring in information from a particular record or to input a static variable such as a string or Boolean. To do so, click on the plus icon next to Skill Inputs. From the module that appears, select the type of data you wish to add, and then finish populating the form with the details of the input. You can then use this skill input as an input to a tool or directly within the prompt itself.
Tools are used to retrieve contextual data for your prompt. The list of available tools and a guide to using them can be found here.
To add a tool, you can click on the Add tools tab, then the + icon within the tool editor. A module will appear to guide you through adding a tool of your choosing.
This tool editor also allows you to:
- Modify tools to run in parallel, or in series when you want to use the output from one tool in another.
- Add decision nodes for when you want to use logic in deciding which tools to run.
You can find a demo of how to use the tool editor in our AI Academy session.
How can I use NASK to modify OOTB skills?
As of version 3.0.1 of the plugin, you can use NASK to edit certain OOTB skills.
To do so, navigate to NASK and click on the tab named "ServiceNow skills". Select the skill you wish to edit. Once open, you will be asked to clone the OOTB skill. The cloned version allows you to modify the prompt, the LLM provider, and the usage conditions. If you wish to edit the inputs, outputs, or deployment methods, you will have to create an entirely new custom skill.
Find a demo of this functionality here.
Note: Usage of a modified OOTB skill continues to consume the same number of assists as the unmodified version. You can find out how many assists each skill consumes here.
If you wish to modify the provider of an OOTB skill, note that this is done at the prompt level — after cloning the skill, you will only see the option to select from providers already attached to the skill. If you wish to select a provider from beyond that list, you will have to add a new prompt by clicking the "Clone prompt to edit" or by clicking the + icon. The dialog box that appears will contain a wider list of providers.
How can I identify which OOTB skills can be modified?
Navigate to the sn_nowassist_skill_config table, and find the skills where is_template = true. You can also refer to the table in this article.
How can I build a good prompt?
A "good" prompt is difficult to define — a prompt used in a summarization use case is unlikely to be ideal for a data analysis use case. With that said, we can provide some guidance on methods that have resulted in fit-for-purpose prompts:
Iterate, iterate, iterate
Take the time to fine-tune your prompt, using test results to guide you.
Use the prompt generation feature
Those on version 2.0.1 or later have access to our AI prompt generator feature. It allows you to input a description of what you want the custom skill to do, and generates a prompt that follows best practices.
Be data driven in your approach
Create sizable development datasets to use when testing your prompts. Ensure you include edge cases in this dataset; you may find your prompt returns less than satisfactory results when given an empty input, or text that spans multiple languages.
Prompting quickstart guides are useful, but not prescriptive
Example prompts are a great way to get a feel for what one can achieve, but should be seen as foundations for you to develop your own prompt on top of — not a complete solution.
Remember — the LLM is not human
LLMs are great at mimicking human communication patterns, but they remain artificial. Do not wordsmith your prompt assuming the LLM will comprehend it in the same manner a human would. For example, prefer direct instructions over soft terms like "should".
An example below showcases the need for iteration. We initially asked the LLM to do the following:
First attempt (poor results):
You are an expert in understanding the underlying emotions within text. Review the below survey answers and determine what the overall sentiment of the user is, and answer in one word.
The survey questions and answers are found below: {{GetSurveyResults.survey_comments}}
This had poor results — the LLM returned paragraphs of text. After iterating, we arrived at this improved prompt:
Improved prompt (successful results):
You are an expert in understanding the underlying emotions within text.
Review the below survey answers and determine what the overall sentiment of the user is, and answer in one word.
Use the following categories to provide the overall sentiment:
Negative: If the sentiment is negative in nature
Positive: If the sentiment is positive in nature
Neutral: If the sentiment is neither negative nor positive
The response should only contain the overall sentiment.
The survey questions and answers are found below: {{RetrieveSurveyResults.survey_comments}}
Key Takeaway: The second prompt produced outputs classified as successful much more frequently than the first. Specificity and explicit output constraints make a significant difference.
How can I specify the desired format for the skill's output?
You can do so from within the prompt itself — by adding statements such as:
- Provide the list in bullet points
- Answer in one word
- Expand all acronyms in your response
- Reply with a professional tone
From a technical sense, the output from the LLM is JSON, with a range of key:value pairs one can leverage.
Learn more in the question "What is a skill output?"
What is meant by pre and/or post processors?
When building your skill, you have the option to add pre or postprocessors. These are scripts that run before the prompt leaves your instance (preprocessor) or after the response is returned (postprocessor).
To access the pre/post processors, navigate to Deployment and skill settings > Providers, and select the provider for which you wish to create a pre- or post-processor script.
These can be used if you have data handling restrictions that limit what data can leave your instance, so you can configure a method for masking/unmasking particular information if the OOTB Sensitive Data Handler or Data Privacy solutions are not fit for your needs.
An additional use case for a processor is maintaining a mapping of acronyms specific to your organization. Before the prompt is delivered to the LLM, the preprocessor can expand acronyms so the LLM knows what they represent.
Can I use multiple prompts in a single skill?
Yes, this scenario is handled by the usage conditions feature. Usage conditions specify when each prompt should run, and can be found by clicking the diamond icon identified in the UI below:
You then specify the criteria for when this prompt should be used.
Note: When a custom skill is triggered, it first checks which provider is the default, then searches for that provider's default prompt. This will be the first prompt to have its usage conditions evaluated, and, if true, will be the prompt that gets run.
How do I delete custom skills I no longer need?
We do not have this functionality today. If you wish to stop the custom skill from displaying in NASK, you can navigate to the sn_nowassist_skill_config table and manually delete the record; however, this leaves a number of related metadata records behind.
Tip: To avoid requiring this feature, we recommend creating your custom skills within an update set. This lets you identify all the records that were created as part of your skill.
What is the limit on tokens?
The token limit varies depending on the LLM you are using within the skill. For certain LLM providers, you are able to modify the maximum number of response tokens. You can learn more here.
Note: If you see negative token counts in your testing, you are exceeding the token limit.
What are tokens in the context of custom skills?
Tokens are units of measurement that represent the amount of information in both the request and the response of a large language model (LLM). Each LLM has a maximum token limit, also known as the context window, which it cannot exceed. This limit is set by the model provider and cannot be altered. You can find this information in the "Max Tokens" field within the sys_generative_ai_model_config table. The equation that is used to ensure requests do not exceed this amount is:
Model maximum token limit = Request token amount + Response token amount + Buffer token amount
Request token amount
Each time you make a request to the LLM, you send over a prompt (instructions that tell the LLM what you want it to do) and typically some additional context. This context can be results gathered from a search (Retrieval Augmented Generation, or RAG), results from a web search, or simply information directly from a record. The combination of the prompt and the context is tokenized to determine how many request tokens are being consumed. If your request token amount exceeds the amount determined by the system, information within this request (i.e., some of the prompt or context) will be truncated. The request token amount is not configurable, as it is determined by the system at run time using the equation: model maximum token limit - response token amount - buffer token amount.
Response token amount
The output returned from an LLM is considered the response. When requesting a response, we inform the LLM what the response token limitations are, and the LLM will attempt to provide a response within that range. At present, this value is editable only in the following scenario:
- Building custom skills within Now Assist Skill Kit
The steps to edit this value can be found in the section labeled "Issues with Token Limits."
Buffer token amount
We include a buffer amount of tokens to ensure that all necessary instructions are provided to the LLM. This value can be found in the system property named com.glide.one.extend.token.buffer. This amount is only configurable by support engineers, but it is recommended that the value not be modified.
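The budget arithmetic above can be sketched as a small function. The numbers in the example are illustrative only, not actual model limits:

```javascript
// Request token budget per the equation:
//   model maximum token limit = request + response + buffer
// so the space left for the request is: max - response - buffer.
function requestTokenBudget(modelMaxTokens, responseMaxTokens, bufferTokens) {
    var budget = modelMaxTokens - responseMaxTokens - bufferTokens;
    if (budget <= 0) {
        throw new Error('Response and buffer reservations exceed the model limit');
    }
    return budget;
}

// Illustrative values: a 4096-token context window with 1024 response
// tokens reserved and a 256-token buffer leaves 2816 tokens for the
// prompt plus its context.
var budget = requestTokenBudget(4096, 1024, 256);
```

Raising the reserved response tokens shrinks this budget, which is why increasing the maximum response token limit reduces the tokens available for the request.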
Issues with Token Limits
If you are encountering issues with request or response truncation due to token limits being insufficient for your needs, here are a few suggestions for resolving the issue:
- Reduce the content in the request. Instead of including an entire record with the prompt, include only the most essential field (e.g., just the description).
- Limit the size of the desired output – You can request that the LLM provide a shorter response, such as asking for a concise paragraph instead of a lengthy explanation.
If these adjustments do not resolve the issue, you have the following options:
- Consider switching to a model provider with a higher maximum token limit.
- [Only for custom skills built within Now Assist Skill Kit] Modify the maximum number of response tokens for the prompt.
NowLLM
Navigate to the sys_generative_ai_config table and find the prompt whose maximum number of response tokens you wish to increase. Open that record, and replace the value within the Response Max Tokens field [max_tokens] with your new value. Please note that increasing the maximum response token limit reduces the number of tokens available for the request. This equation is described in the "What are tokens in the context of custom skills?" section above.
Non-NowLLM (E.g. Azure OpenAI, Google Gemini, Anthropic Claude)
To modify the maximum number of response tokens for your model provider, you need to go into your custom skill within Now Assist Skill Kit and open the Configuration options by clicking the cog icon. Within that view, you can expand the section labelled Token limits. Within that section, you can adjust the maximum response tokens field to the value that fits your requirements. Please note that increasing the maximum response token limit reduces the number of tokens available for the request.
Are there any best practices for assessing the number of tokens we're passing to the prompt?
A prompt variable can be either a skill input or a tool output.
Skill inputs have a "truncate" option that automatically manages the number of tokens based on the maximum allowed.
Note: Tool outputs currently lack a built-in truncation feature. If you are using tools, you can build response limits into the subflow, flow action, or script include behind the tool to control its output length.
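If you need a stopgap while tool outputs lack built-in truncation, a rough character-based cut inside the tool's backing script is one option. The ratio of roughly 4 characters per token below is a common heuristic, not an exact tokenizer, so leave headroom:

```javascript
// Rough truncation helper you could adapt inside the subflow, flow
// action, or script include behind a tool. Assumes ~4 characters per
// token, which is a heuristic only.
function truncateToApproxTokens(text, maxTokens) {
    var approxMaxChars = maxTokens * 4;
    if (text.length <= approxMaxChars) {
        return text;
    }
    // Mark the cut so the LLM knows the context was shortened.
    return text.slice(0, approxMaxChars) + '\n[truncated]';
}

var longText = new Array(101).join('ab'); // 200 characters of sample text
var shortened = truncateToApproxTokens(longText, 10);
```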
What does the configuration option "temperature" mean?
Temperature controls the "creativity" of the LLM. If you want more deterministic, repeatable behaviour, set the temperature lower: lower temperatures produce more conservative and focused responses. Higher temperatures yield more diverse, creative, and unpredictable outputs; a typical example is content creation, where greater flexibility may be welcome.
How do I prevent sensitive information from being sent to the LLM?
You can use the Data Privacy solution to mask personally identifiable information (PII) across all generative AI applications, including custom skills.
You can learn more here.
How can I test my custom skill?
We offer an in-product method of testing. To do so, click on Run tests below the prompt editor. You will be presented with the output from the LLM in the Response tab. If you wish to review the data that was added to the prompt from your skill input/tools, then you can click on the Grounded prompt tab.
We also allow you to evaluate your custom skill. This feature, available in the Evaluations tab, allows you to select a dataset of any size to test your custom skill against. You can also have our AI judge the responses for the following:
- Faithfulness: Does the output stay true to the source material?
- Correctness: Does the output correctly respond to each of the input instructions?
You can see a demo of evaluations here.
Note: Testing your skill consumes Assists. For more information please reach out to your account representative.
Where can I deploy my custom skills?
The full list of deployment methods is available here: Tool & Deployment Method Overview
I'm done building my custom skill. What now?
Once your prompt is complete and you have completed testing, you can publish and deploy it.
First, finalize your prompt by clicking the lock button above the prompt editor. This will lock your prompt, meaning no further adjustments can be made. If you wish to refine it at a later date, you will have to create a copy of the prompt and work on that copy.
Then, click Publish in the top right of the screen. You will be asked which of your finalized prompts you wish to make the default, that is, which one should be prioritised when the system is determining which prompt to run.
Once published, click on the Deployment and Skill Settings tab. This will give you the option to configure two things:
- Where in the Now Assist Admin console should the skill be found
- How and where users will trigger your skill.
To give an example of deployment options, we will walk through deploying to a UI Action. Select the UI Action box, choose the record type on which the UI Action should be present (typically whatever you selected as a skill input), and click Save. This automatically generates a UI Action that, when triggered, calls the skill and returns the response in an information message. You can edit how the output is used directly in the UI Action's script.
Can I call my custom skills from within a Flow or a Virtual Agent topic?
If you have at least version 3.0.0 of the Now Assist Skill Kit plugin, you can deploy directly to a flow action or as a module within a Virtual Agent topic.
You can find a demonstration of deploying a custom skill to a flow action in the AI Academy session video.
Can I call the custom skill from within a script?
Yes. See an example script below, and replace the variables with the values for your custom skill. You can find the sys IDs by either navigating to their tables and getting the sys ID for your custom skill record, or by reviewing the contents of the URL when editing your custom skill in NASK.
| Variable | Table | URL parameter |
|---|---|---|
| skillConfigId | sn_nowassist_skill_config | config-id |
| capabilityId | sys_one_extend_capability | skill |
var inputsPayload = {};
// create the payload to deliver input data to the skill
inputsPayload['input name'] = {
tableName: 'table name',
sysId: 'sys_id',
queryString: ''
};
// create the request by combining the capability sys ID and the skill config sys ID
var request = {
executionRequests: [{
payload: inputsPayload,
capabilityId: 'capability sys id',
meta: {
skillConfigId: 'skill config sys id'
}
}],
mode: 'sync'
};
// run the custom skill and get the output in a string format
try {
var output = sn_one_extend.OneExtendUtil.execute(request)['capabilities'][request.executionRequests[0].capabilityId]['response'];
var LLMOutput = JSON.parse(output).model_output;
} catch(e) {
gs.error(e);
gs.addErrorMessage('Something went wrong while executing the skill.');
}
action.setRedirectURL(current);
Can I see a demo using NASK?
Yes, we have plenty! Note that some demos use an older UI; however, the functionality remains the same.
| Use Case | Source |
|---|---|
| Broad range of demos | Tool and Deployment Options |
| Goal generator that uses the KPIs assigned to a user to create goals that will help the user achieve those KPIs | AI Academy: Advanced NASK |
| Goal generator (same as above) | DS Forum |
| IT Knowledge Article Categorizer | Creator Toolbox |
| Expense report approver: Shows how a travel and expense report policy (stored in a Knowledge Base) can be automatically applied to an expense report | AI Academy: Using Retrievers in Now Assist Skill Kit |
| Knowledge Article Coach: Generates a feedback task for a user based on the contents of their knowledge article | Building custom skills with Now Assist Skill Kit |
| Knowledge Article Reviewer: Uses relevant policies to evaluate the quality of a Knowledge Article | AI Academy: Introduction to Now Assist Skill Kit |
| Survey Sentiment Analyzer | Custom Skill Creation and Deployment Walkthrough |
Troubleshooting
When I navigate to NASK, I get an error stating "You do not have permission to access this page"
To access NASK you need to have the sn_skill_builder.admin role. Please ensure your user has that role, then log out and log back in to see if you have access granted.
I'm trying to add a skill input that consists of a script function, but when I select a script include, no functions appear.
Ensure that your script include is accessible from all application scopes.
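For reference, a script include that NASK can call generally follows the standard ServiceNow pattern below. The class, table, and field names here are placeholders for illustration; the key point is the "Accessible from" field on the script include record, which must be set to "All application scopes":

```javascript
// Skeleton of a script include that could back a skill-input function.
// On the script include record, set "Accessible from" to
// "All application scopes" so NASK can list its functions.
// SurveyCommentFetcher and the table/field names are placeholders.
var SurveyCommentFetcher = Class.create();
SurveyCommentFetcher.prototype = {
    initialize: function () {},

    // Return the comments field for a given record sys_id.
    getComments: function (recordSysId) {
        var gr = new GlideRecord('your_table_name');
        if (gr.get(recordSysId)) {
            return gr.getValue('comments') || '';
        }
        return '';
    },

    type: 'SurveyCommentFetcher'
};
```

This fragment relies on instance-side APIs (Class.create, GlideRecord), so it only runs within ServiceNow.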
Why is my custom skill not appearing in Now Assist Admin console?
Ensure that your skill has been published, and a deployment method has been selected.
Why can't I find NASK in my instance?
Check the following:
- Your instance is on the Xanadu release.
- Your license for the relevant Now Assist plugins is up to date.
- You have the Now Assist Skill Kit plugin installed.
To see and access Now Assist Skill Kit, you'll need to grant the sn_skill_builder.admin role to users who will use it. If this role has not been assigned, you'll receive an error message when you try to navigate to Now Assist Skill Kit.
Once you've assigned the sn_skill_builder.admin role, log out and then log back in for the change to take effect.
Still can't access NASK? If you have completed steps 1–3 and are still unable to access NASK, please log a case so we can investigate.
Why is my page not loading?
Try refreshing the page.
Why isn't my custom skill appearing via the deployment method I have selected?
This is likely because your skill hasn't been: (1) Published, or (2) Activated from within the Now Assist Admin console.
Alternatively, certain deployment methods may require additional configuration before allowing you to use your skill. Please refer to the Tools & Deployment Methods guide for more information.
Why am I not getting a response when I run a test?
You may be experiencing issues with your connection to the LLM. If you are using a ServiceNow managed LLM, please raise a case, and we will investigate. If you are using an external LLM, please first verify that the service is running, then check your connection and credentials.
Why can't I add a tool?
You may be running into one of the below known issues:
- Your tool has not been published, or you do not have access rights to it.
- You may have already added a tool with that particular name.
If you still encounter errors, please log a case.
Why am I getting an error message "Could not fetch skill tools" when adding a tool?
You are encountering an issue relating to metadata. Follow the support article here to remedy: KB1702519
Why isn't my subflow running?
Try the following:
- Validate that your subflow returns the correct response by testing it in Flow Designer first.
- Verify that you have published your subflow.
- If you have edited your subflow after having added the subflow as an input, try deleting the tool and re-adding it.
Why does nothing happen when I click the "Create Skill" button?
Please ensure the Now Assist Admin Console plugin is on at least version 4.0.5.
Why can't I see the entire list of LLM providers when cloning an OOTB skill?
When cloning a skill, you will likely see only a small group of LLM providers. This list is derived from the LLM providers currently used in the prompts attached to the skill.
If you wish to modify the LLM for OOTB skills with an LLM that is not provided in that list, follow these steps:
- Click the Clone button on the OOTB skill screen within NASK.
- In the dialog box that appears, leave the provider as is.
- Click Clone.
- In the newly cloned skill, add a new prompt by clicking "Copy prompt to edit" or the "+" icon.
- In the dialog box that appears, you will be able to find the extended list of LLM providers.
Additional Resources
Tool & Deployment Method Overview
Using external LLMs with Now Assist
Now Assist Skill Kit course on Now Learning
AI Academy: Introduction to Now Assist Skill Kit
Hi @Eliza
Is there a list of allowed tables or something that limits the tables that show up as Input Records?
I am trying to build a new skill, but the table I want to use as the input record doesn't show up in the list. If I type its name, it is shown and I can select it, but when I open the skill input record again I can see that Skill Kit stored a different record...
Here is the search for the table using its technical name. I can select it and save the skill input.
But here is what I get when I try to open it again to verify. It's a different table...
Yes, I am also facing the same issue. I can't see most common tables such as incident, kb_knowledge, cmdb_ci, etc.
Also, I have a question: if I have ITSM Pro Plus, is it possible for me to access HR Case tables and create custom skills for HRSD?
Or can only ITSM-specific tables be accessed with ITSM Pro Plus?
Kindly clear my doubt. Thanks!
#Now_Assist #Now Assist
Hi both!
This issue has been resolved in the latest version of the Now Assist Skill Kit plugin - version 1.0.3. To install, you need to get the latest version of the Generative AI Controller plugin, which is available for those on Xanadu Patch 1.
To get the latest version, update the Generative AI Controller plugin by following these steps:
- Go to System Definitions > Plugins.
- Search for Generative AI Controller. Click on the tab named Updates.
- Click on the Generative AI Controller plugin.
- Click on Proceed to update. In the dialog box, ensure that you are upgrading to at least version 7.0.3.
- Update your plugin.
Hi
I have been playing around with NASK and get some nice results 🙂
But I'm struggling a bit with getting "activities" (journal) as an input to a skill. I'm building a custom skill on the incident record where I would like a summary of the reassignment history, and I would very much like to use the information in the "activities" (journal). I think this might be a very basic requirement, but I cannot get it working. How can this be done?
I hope there is a solution.
Best regards
Søren
Hi
Just an update to the above:
I learned how to "add a tool" from the video above, and using that knowledge I managed to add a subflow retrieving the relevant information from the sys_journal_field table.
But now I'm curious about how to use a script as a "tool" in NASK. Is there any documentation about this? How do I define inputs and outputs?
Hi @Eliza, We are attempting to use our custom skill/feature from the now assist panel but are not sure how to configure this. We've selected 'now panel' in both the skill configuration and from the now assist feature section and still only see OOB feature suggestions.
Our Generative AI Controller plugin is on the latest version: 7.0.3 and our test instance is on Xanadu patch 2.
Any info on the 'now assist panel' option would be appreciated since we're unable to find much documentation on how to modify the welcome prompt and the skills used here.
Thanks!
Hi,
I'm trying to find the prompt and response logs from the Skill Kit. Can someone please point me to the table where these would be stored? I looked at the Now Assist QnA logs, but none were found there.
Hi
I have the below queries on the capabilities of the Now Assist panel:
- The welcome message in the Now Assist panel always shows the default message and prompt. Is this configurable? How do I show a new prompt/skill name in the default welcome message once it is ACTIVE?
- Does Now Assist offer ChatGPT-like interaction in the Now Assist panel? Let's say I open a record and ask the following series of questions:
- - What are the key services impacted?
- - Which recently deployed changes have potentially caused this incident?
This post has been updated, you can use customized fields on tables as inputs.
Hi @Jayden4 ,
Custom fields can be used to bring additional information into NASK: if the test record has a value in that field, it should bring the result in.
Is the custom field a work notes/journal field, perhaps? That can also introduce difficulty, as you have to create a subflow to collect each value from that field.
Hi @SørenC ,
Regarding guidance on using the script tool: I hope to write something more comprehensive soon; in the interim:
When you are using a script as an input tool, you will be asked to select a function from within the Script Include table. A common gotcha: you need to provide the name of the script include in both the Name and Resource fields before the list of functions appears. If your script requires any inputs, it will prompt you to provide each input parameter.
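To illustrate the shape of such a function, here is a minimal, hypothetical sketch (the script include name "SkillJournalUtil", the function name, and the data shape are all assumptions for illustration; collecting the journal entries themselves, e.g. via a GlideRecord query on sys_journal_field, is omitted so the sketch stays self-contained):

```javascript
// Hypothetical function body for a Script Include named "SkillJournalUtil".
// In NASK, you would enter "SkillJournalUtil" in both the Name and Resource
// fields, then select this function; "entries" becomes the input parameter
// the tool prompts you to supply.
function formatJournal(entries) {
    // entries: array of { user, value, sys_created_on } objects, e.g.
    // gathered from sys_journal_field elsewhere on the instance.
    return entries.map(function (e) {
        return '[' + e.sys_created_on + '] ' + e.user + ': ' + e.value;
    }).join('\n');
}
```

The function returns a single string, which is a convenient shape for splicing into a prompt as an input variable.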
- You can set automatic system greeting messages from the sys_cs_context_profile_message table.
- The promoted skills in the pills can be promoted from the Virtual Agent Designer, but there is currently no way to change their order.
- Now Assist does offer multi-turn capabilities so you can have a conversation with the Now Assist panel (e.g. complete a series of questions to order a catalog item), and it can also give you additional information on a record.
- Yes, the Now Assist panel can respond in the context of the record in the content pane. For example, you can say "summarize this record".
Hope that helps!
Hi @Eliza , @Dexter Chan
While looking into custom skills, I noticed that they are available in the Virtual Agent. I'm curious to understand how custom skills work in the Virtual Agent. Do they have triggers like topics, or is this something new that enhances the Virtual Agent experience? Could you please explain how it works? Your input would be very helpful.
Hi there,
In Yokohama, has the ability to see how many tokens the input and response consume been removed? This is useful information for scoping out how many 'assists' a skill will use.
I noticed I could see this Xanadu:
But in Yokohama:
Is there a setting somewhere to bring this back?
Hi @Jayden4,
The option to view these details has moved to the test history section. To access, click on the clock icon in the panel on the right, then select the test run you wish to review.
I'm passing this feedback on the change to our product team, so watch this space in the next release 🙂
Thanks,
Eliza
Hi, I'm encountering an issue while developing a custom skill. Despite explicitly configuring the skill to avoid external data sources in the prompt, the test run indicates that it is still accessing external information. Could anyone offer insights or suggestions on how to ensure the skill adheres to the specified restriction?
@FibinP274053114 Because the models are trained on external data, they will always try to reference this data when building a response, if that is what you mean?
The Snow Generic LLM doesn't access external information; it doesn't know anything about the world after 2023.
Hi @Eliza , thanks for the information. Is there any documentation, or are there videos, that I can refer to for custom skills in the Virtual Agent?
Hello @Eliza ,
I would like to use the incident summarization skill to summarize records. However, my client is not using the standard 'description' field as delivered out of the box; instead, they are using a 'description HTML' field.
First, I tried cloning and modifying the Out-of-the-Box (OOB) incident summarization skill, but I received the message: 'This is a clone of ServiceNow skill. Prompts and providers can be customised. Inputs, outputs, tools and deployment settings cannot be edited.' This indicates that if I need to use the 'description HTML' field in the prompt, I must create a custom skill and specify 'description HTML' as the input.
- I have created a custom skill. However, the configuration options available in the OOB incident summarization skill (such as 'Choose Input,' 'Custom Prompt,' etc.) are not visible in my custom skill. How can I enable or access these options in a custom skill? (Image of OOB Incident Summarization skill showing options)
(Image of My Custom Skill not showing options)
- How can I link my custom skill to the 'Summarize' button in the UI, so it can be triggered from both the Core UI and the Configurable Workspace? (Image of the Summarize button)
Hi @JayS56800927307,
There are two approaches you can take with skills in Now Assist Skill Kit:
- Editing OOTB skills
- This lets you clone and modify the provider LLM and the prompt for an OOTB skill. Inputs and the deployment vector (e.g. the UI component mentioned in question 2) cannot be modified.
- You can find a walkthrough of this process here.
- Create and deploy custom skills
- This lets you have full control over inputs, provider LLM, and the deployment vector. However, you do have to create everything from scratch.
With that said, to answer your questions:
1. This difference is because the first image is from an OOTB skill. These options are given so that admins can configure the OOTB skill; however, the options are limited. If you build a custom skill, you are instead asked to define the inputs within Now Assist Skill Kit itself.
2. That Summarize UI component on the workspace is only for OOTB skills. If you wish to reuse it for a custom skill, you will have to build it yourself. We are looking to make this easier later this year; in the interim, we suggest using UI Builder and having it trigger the custom skill via script.
I am looking to display the NASK output in a view similar to the display card for the 'Summarize' feature provided by Now Assist. Are there any steps that I can follow to send my output so it displays in the same style as below? Also, I would love to have the 'Share to work notes' feature.
@DarshanShah You can recreate 'Share to work notes' as a UI action that writes to the journal on the record you're on.
Is there a way to increase the maximum response token count on the Now LLM Generic?
Where is this setting changed? I want larger responses, but the 1,000 tokens / 1 assist cap limits the output. I cannot locate the setting in the Now Assist Admin space.
Being able to change this to 5,000 would be huge, and we could really obtain more detailed insights with a higher response cap.
@Jayden4 We have a support article on Token Limits that should help you!
https://support.servicenow.com/kb?sys_kb_id=a293dc50937866d4e7eef35d6cba10f0&id=kb_article_view
I have a question about custom skills: we have licenses for ITSM and HRSD; can we create custom skills for SAM- and ITOM-related tables?
Thanks in advance
Hi @svani,
You can refer to the field of use guide for NASK, but I would also suggest reaching out to your account representative to confirm.
Hi there,
Is there a setting to have a custom skill run as admin?
The reason is that I have a subflow that collects work notes on an sn_grc_issue ticket, via the sys_journal_field table, and these are captured correctly in the subflow and validated via the subflow test.
However, when this subflow is added as a tool in a custom prompt in NASK, ACLs are applied, blocking the users who test the custom prompt from retrieving the comments.
If a sysadmin runs the test, or if we give specific users read access to the table, this works properly.
Is there any way to have a NASK custom skill run as admin, in the same way you can have a subflow run as 'system'?
Is it fair to assume that while Now Assist skills only use ServiceNow-native LLMs, NASK allows us to use non-native models as well?
@SamirNyra You can bring in your own LLM if you want, Eliza made a how-to video here.
Hello @Eliza
We have created 5 tables extended from the Incident table, so I wanted to ask whether the Now Assist skills, for example Incident Summarisation, work in the same way for these custom tables.
Thank you in advance.
I have been working on custom skills. As part of prompt performance and evaluations, I have human feedback along with reasons, and I would like to use it to refine the LLM responses. Could you please point me to the documentation or article that describes how human feedback is used to refine the skill?
Does anyone know what causes the 'Update Prompt' to sometimes be greyed out?
I have made several skills, all published, and some cannot have the prompt updated while others can and I cannot work out why:
On certain skills, I can clone the prompt via the side menu but it looks like if you publish a default prompt with no spare versions, you lose the ability to update the prompt any further and have to clone the entire skill? Is this a bug?
No clone and no update prompt (the dual square thing):
Another skill that has a spare version of the prompt that can be edited, updated and eventually promoted, while v3 cannot be touched (which makes sense as it's published):
Is it a bug that if you publish a default prompt with no spare versions, you cannot create any new versions of the prompt @Eliza ?
Is testing through prompt performance evaluation free of charge, or do I pay for each run (for example, for 100 records tested, do I pay 100 times)? I cannot find any information on how this is dealt with license-wise.
Hi @Lukasz_B,
Prompt testing & evaluation consumes assists based on how many records you include in the evaluation process. We have a guide available here that details how many assists are consumed: https://www.servicenow.com/content/dam/servicenow-assets/public/en-us/doc-type/legal/sn-assist-overv...
@Ricardo Reis At present, you will have to recreate the summarization skill in Now Assist Skill Kit for each of your custom tables.
@Community Alums
Regarding your question about utilizing human feedback: feedback gathering can only assist in refining the prompt; it is not used to refine the LLM. The data we use to train our models is sourced only from customers who have opted in, and it is anonymized and scrubbed of PII before going into our training centers.
@Jayden4
I can only assume that you may be in a different application scope, or something equivalent, for the Update Prompt button to be greyed out. Could you log a case with support to address this?
I have four main questions
1. How do the usage conditions work exactly?
I see you can define key/value pairs for the conditions to match a prompt, and it seems these conditions can be based on the inputs, but in the OOTB Case Summarization skill, the usage conditions on the prompts don't have a key that is an input, and the values are not what I would expect.
So, for example, the "Record Summarization Dynamic Sections - LLM Generic" prompt has the following usage condition: "version" - "v2".
Version is not an input of the skill, so how would this get evaluated?
The "Case Summarization (Resolved) - LLM Generic" Prompt has the following Usage Condition:
Hi @MichaelR7827468,
1. Your understanding of usage conditions is correct: you define a key/value pair that identifies a value on your input record and, if it is met, that particular prompt will be used.
The examples you give are all OOTB ServiceNow skills and, unfortunately, they aren't the best to use as a template: some of them (including the ones you've identified) use a slightly different architecture from the skills you create within NASK, which is why the inputs don't map directly to what is in the usage conditions.
2. The logic goes:
- Determine which provider: the default provider goes first.
- Determine which prompt: again, the default prompt is evaluated first.
- If none of the usage conditions in the prompts for that provider match the input record, move to the next provider and repeat the process.
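As a rough mental model only, the order described above can be sketched as follows (this is a simplified illustration, not the actual platform implementation; the data structures and exact matching semantics are assumptions):

```javascript
// Simplified sketch of the provider/prompt selection order described above.
// All structures here are illustrative, not actual NASK tables or APIs.
function selectPrompt(providers, record) {
    // Evaluate the default provider first, then the rest in listed order.
    var ordered = providers.slice().sort(function (a, b) {
        return (b.isDefault ? 1 : 0) - (a.isDefault ? 1 : 0);
    });
    for (var i = 0; i < ordered.length; i++) {
        // Within a provider, the default prompt is evaluated first.
        var prompts = ordered[i].prompts.slice().sort(function (a, b) {
            return (b.isDefault ? 1 : 0) - (a.isDefault ? 1 : 0);
        });
        for (var j = 0; j < prompts.length; j++) {
            var cond = prompts[j].conditions || {};
            // A prompt matches when every usage-condition key/value pair
            // matches the corresponding field on the input record.
            var match = Object.keys(cond).every(function (k) {
                return record[k] === cond[k];
            });
            if (match) {
                return { provider: ordered[i].name, prompt: prompts[j].name };
            }
        }
    }
    return null; // no provider had a matching prompt
}
```

In this sketch, an empty conditions object matches any record, which is one plausible way to model an unconditional fallback prompt.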
3. Do you mind sharing what you are referring to when you say "prompt processors" vs "provider processors"? Within NASK we only expose pre/post processors in the section highlighted in the image below. The preprocessor runs just prior to sending the request to the LLM, so the inputs are populated but nothing has been sent to the LLM yet. The postprocessor runs immediately after the response has been retrieved from the LLM, so you have access to the output before it is sent to the caller of the skill.
4. I will need to get back to you on the definitions of each field in the payload in the new year - we are doing some work to improve the visibility and configurability of the payloads, so watch this space.
Hey @Eliza,
Really appreciate the responses! I hope this isn't too many follow-up questions.
1. When there are multiple usage conditions defined, are they handled as an OR or an AND comparison? For example, in this case would both state and short_description need to match, or just one?
Also, is there anything that controls the order in which the prompts' usage conditions are evaluated?
2. Ok so essentially it only uses other providers (LLMs) if no prompt in the default provider has its usage conditions met.
Is there an order for which provider is selected next after the default is checked?
Also, with the OOB record summarization skills, every provider's prompts have the same usage conditions. Would the other non-default providers only be selected if the default fails to connect, or if something unique to that skill's input architecture was used, such as definitionFilters on the OneExtendAPI execute payload?
3. Yeah! I mainly see this in the OOB skills. For example, in the Case Summarization skill, if I expand the settings of a prompt on the Prompt Editor tab, it has a Prompt preprocessor and a Prompt postprocessor.
I have also seen somewhere in the docs (which I can't find now) that if you add your own custom LLM, there are pre/post processors on the Capability Definition (sys_one_extend_capability_definition) record if you click the Advanced checkbox. I'm not sure when, or if, we should use these fields.
Like these:
4. Really appreciate that and look forward to more on those! I know there is a lot of work and reiteration going on here with Now Assist.
5. I actually have one other question that came up. If you are deploying a skill to the Now Assist panel, how do you ensure that the panel provides the right data to the skill inputs when there are multiple inputs? This is pretty straightforward with Flow Action and UI Action deployment.
