Eliza
ServiceNow Employee

 

Eliza_0-1748639783229.png

What is the Now Assist Skill Kit (NASK)?

Now Assist Skill Kit, or NASK, was released in the Xanadu release. This feature allows you to build and deploy custom skills that leverage generative AI directly within your instance. These skills address use cases that the current suite of Out-of-the-Box (OOTB) Now Assist skills, such as task summarization and code generation, does not cover today.

 

NASK outputs a custom skill, which can then be activated from within the Now Assist Admin console.

Eliza_1-1748639933443.png

 

When would I use NASK?

NASK is designed for those seeking greater flexibility with generative AI capabilities. Common scenarios that call for NASK include:

  • You have existing workflows that you wish to augment with a generative AI function.
  • Capabilities require the usage of an external LLM (i.e. non-ServiceNow managed model). 
    • This includes cases where the LLM must have domain-specific knowledge, or where particular data handling and security restrictions prevent the use of a NowLLM. 
    • Learn more about using NASK to leverage external LLMs here.
  • You have organization-specific use cases that OOTB skills do not cater to.
  • **Only available as of the January 2025 release** You wish to modify the LLM provider or underlying prompt of eligible OOTB skills. See the question "How can I use NASK to modify OOTB skills?" for more information.

We generally recommend approaching this feature thoughtfully, as ServiceNow is unable to monitor or manage custom solutions. For that reason, most admins should stay within the confines of our OOTB capabilities. Where OOTB isn’t fit for purpose, experiment with the configuration options provided within the Now Assist Admin console. If that still isn’t sufficient, then NASK may be a good fit.

This question is answered more broadly in our article How to approach building custom generative AI solutions using Now Assist.

 

How can I access the Now Assist Skill Kit?

To access NASK, you must meet the following criteria:

  • Have an active license for a Now Assist for [x] product.
  • Have updated the Now Assist for [x] plugins to the latest versions.
  • Have an instance on at least the Xanadu release.

As a note, you cannot access any Now Assist/generative AI features (and consequently NASK) on personal developer instances (PDIs). If you are a partner looking to develop custom skills, you can find options to access NASK within the Partner Success Center.

 

Once you have confirmed the above, you then need to grant your users access. To do so, add the sn_skill_builder.admin role to those who will be creating and maintaining custom skills.

Eliza_1-1722894461532.png

What do I need to know before beginning to use NASK?

The process of building custom skills with NASK touches a broad range of skills, all of which we recommend becoming familiar with before you start. To summarize, the user journey is documented below:

Eliza_0-1737497481895.png

 

  1. Define provider: This step requires you to understand the benefits and potential downsides of each LLM being considered. Our recommendation is typically to use our generic NowLLM service where possible, but your use case may have particularities that make another LLM preferable.
  2. Build: During the build process, you will be asked to:
    • Define where input data should come from to augment the prompt with the information it needs. At a minimum, this requires an understanding of the architecture of your instance, but it may also require writing a script or building a flow to extract what you need.
    • Develop your prompt: within NASK we provide a text box in which you input your desired prompt. The prompt needs to outline everything the LLM must know to produce the outcome you are seeking, including format, language, action, and references to the data you want it to use.
    • Adjust prompt settings. These settings can require you to write a script (such as when you wish to include a pre- or post-processor to augment the outgoing or incoming request), or simply an understanding of LLM fundamentals, such as knowing that temperature relates to how “creative” an LLM can be.
  3. Test: NASK provides an area for you to test your prompt from the editor itself. Having a rubric that defines success for the outcome of your skill is key.
  4. Deploy: You can currently deploy directly to the Now Assist panel, Now Assist Context Menu, Virtual Agent, a Flow Action, or a UI Action. 

Are there limitations on what I can build within Now Assist Skill Kit?

We encourage you to innovate; however, field-of-use restrictions apply, which means you should only build in areas you are licensed for.

 

How many Assists are consumed when using NASK?

For information on Assist consumption, please refer to our overview or contact your account representative. If you receive an error message when calling an LLM, you will not be charged an Assist.

 

Do custom skills support languages other than English?

Yes, you can leverage the Dynamic Translation component of the Generative AI Controller to enable the use of custom skills for those operating in a language other than English. Certain languages may also have native translation available.

 

Learn more here or in the FAQ.

 

How can I limit who has access to the deployed custom skill?

You can do so by configuring the deployment vector itself. For example, if you are deploying your custom skill as a UI Action, you can follow this guide to add role-based access.

 

Is Now Assist Skill Kit supported in GCC or self-hosted environments?

Yes.

 

Does Now Assist Guardian work with Now Assist Skill Kit?

Yes.

 

What roles come with Now Assist Skill Kit?

  • sn_skill_builder.admin: Grants access to all of the features within NASK
  • sn_skill_builder.viewer: Grants read-only access to NASK

 

Eliza_1-1722623190049.png

Where do I find NASK in my instance?

Within your instance, you can type Now Assist Skill Kit into the filter navigator to display the link. If the link is not visible, ensure your instance is on at least the Xanadu release, you have an active license for a Now Assist for [x] product, and you have at least one Now Assist plugin installed. As a reminder, you cannot access Now Assist features on your PDIs.

 

Eliza_2-1722619645033.png

 

How do I build a custom skill using NASK?

Please refer to the Now Assist Use Case Library for lab guides and walkthroughs on building a custom skill or refer to the product documentation.

 

Which LLMs can I use in my custom skill?
Your options today are:

  • Now LLM Service
  • External LLM
    • Spokes
    • BYOLLM

We typically recommend utilizing the Now LLM service for most use cases. If you are looking for details on the Now LLM service, please refer to the Now LLM Service FAQ, or you can review the model card specific to the model used within NASK here.

 

Those with requirements that prevent the use of a NowLLM can choose to leverage an external LLM. We offer two methods of connecting to external LLMs: via spokes, or BYOLLM.

 

The prebuilt spokes we offer allow you to connect to external LLMs with ease. The list as of August 2025 is:  

  • Azure OpenAI 
  • OpenAI 
  • WatsonX 
  • Amazon Bedrock
  • Google's Vertex AI
  • Google's Gemini AI Studio
  • Aleph Alpha (note: this model is being deprecated, so we do not recommend using it)

Note that although these are spokes, they do not consume Integration Hub transactions; they consume Assists instead. For more information on this topic, please contact your account representative.  

 

Instances on at least the Washington DC release are able to use the generic LLM connector to connect to any external LLM not listed above, i.e. BYOLLM. This process requires a fair amount of technical acumen. To integrate with a non-spoke-supported LLM, you need: 

  • An API key from the provider 
  • Endpoint for the LLM 
  • Access to API documentation for the LLM to assist with writing the transformation script to translate the input and response into an acceptable format 

You can find a demo of connecting to an external LLM using the generic LLM connector within this guide.

 

Regardless of the external LLM you choose to connect to, you will be responsible for managing the appropriate license and model configuration for your use case.  

 

Can I use multiple LLMs in a single skill?
Relevant resource: Tool & Deployment Method Overview

If you have at least version 3.0.1 of the plugin, then you can add other custom skills as inputs to your custom skill within the Tool Editor. This will allow you to chain skills together, regardless of which LLM they are using.

Eliza_0-1737498507625.png

 

What is a skill output?

The response from the LLM is stored in JSON format, with various key:value pairs holding the information retrieved from the LLM. The skill outputs list found within NASK identifies the expected keys that will be returned from that LLM. For example: provider (which LLM you are using), response (the outcome of your custom skill), and error (which may be empty if no errors were encountered).

 

You can use the skill output to parse the information you want from the JSON response by replacing OUTPUT KEY with your desired key from the list of outputs:

```javascript
var output = sn_one_extend.OneExtendUtil.execute(request)['capabilities'][request.executionRequests[0].capabilityId]['OUTPUT KEY'];
```
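
As a hedged illustration of the shape of the data involved (the values below are invented; a real result comes from OneExtendUtil.execute(), and the nested model_output key follows the scripted example later in this article):

```javascript
// Hypothetical skill output, mirroring the keys described above. The values
// are invented; a real result comes from sn_one_extend.OneExtendUtil.execute().
var capabilityResult = {
  provider: 'Now LLM Service',             // which LLM handled the request
  response: '{"model_output":"Positive"}', // the skill's answer, as a JSON string
  error: ''                                // empty when no error occurred
};

// Pull out the key you care about, then parse the nested JSON.
var output = capabilityResult['response'];
var modelOutput = JSON.parse(output).model_output;
```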

 

You are technically able to add additional outputs within the tool today; however, there is no in-product functionality that will create a mapping between the new skill output and any values.

 

What data can I bring in to use within my prompt?

Relevant resource: Tool & Deployment Method Overview

You can bring in data from anywhere you have access to – records, RAG (using the retriever tool), flows, subflows, scripts, integrations, events, and web searches.

As long as the data is stored somewhere in ServiceNow and you have access rights to it, you can configure NASK to pull it in. Note, however, that if the data is any more complex than fields on a record, you will likely have to create a subflow or script to parse it into a string usable within the prompt. You can see an example here.
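
As a sketch of what such a parsing script might produce (the record data below is hard-coded for illustration; in practice it would come from a GlideRecord query or a subflow output):

```javascript
// Hypothetical: flatten related comment records into a single prompt-ready
// string. On an instance, this data would come from a GlideRecord query or a
// subflow output rather than a hard-coded array.
var comments = [
  { author: 'Abel Tuter', text: 'VPN drops every hour.' },
  { author: 'Beth Anglin', text: 'Same issue on the Paris network.' }
];

var promptText = comments.map(function (c) {
  return c.author + ': ' + c.text;
}).join('\n');
// promptText is now a plain string you can reference inside the prompt
```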

If using data external to ServiceNow, you can do so through the use of the following tools:

  • Subflows that make the API call to the resource
  • Web search
  • Scripts 

You need to add a skill input to bring in information from a particular record or to input a static value such as a string or Boolean. To do so, click on the plus icon next to Skill Inputs. From the modal that appears, select the type of data you wish to add, then finish populating the form with the details of the input. You can then use this skill input as an input to a flow or script, or directly within the prompt itself.

Eliza_3-1722619645052.png

 

 

Tools are used to retrieve contextual data for your prompt. To add one, click the plus icon to the right of the Tools section. Within the modal that opens, select which type of workflow you wish to add. If your flow requires a particular input, you can populate it with either a skill input (noted above) or a static value that you provide yourself.

Eliza_4-1722619645100.png


For those on at least version 3.0.1, adding and managing tools is done via the Tool Editor, which allows you to perform the following activities:

  • Add new tools (including RAG retrievers, flows, and even other custom skills)
  • Modify tools to run in parallel, or in series when you want to use the output from one tool in another
  • Add decision nodes (for when you want to use logic in deciding which tools to run)

You can find a demo of how to use the tool editor in our AI Academy session.

Eliza_2-1737499565977.png

 

How can I use NASK to modify OOTB skills?

As of version 3.0.1 of the plugin, one can use NASK to edit certain OOTB skills. To do so, navigate to NASK, and click on the tab named "ServiceNow skills". Select the skill you wish to edit. Once open, you will have to clone the OOTB skill. You will be able to edit this cloned version.

 

As a note, you are only able to modify the prompt and the LLM provider for the skill. If you wish to edit the inputs, outputs, or deployment methods, you will have to create an entirely new custom skill.

 

Find a demo of this functionality here.

 

Usage of a modified OOTB skill continues to consume the same number of assists as the unmodified version. You can find out how many assists each skill consumes here.

 

If you wish to modify the provider of an OOTB skill, please note that this is done at the prompt level - that is, after cloning the skill, you will only see the option to select from providers already attached to the skill. If you wish to select a provider from beyond that list, you will have to add a new prompt by clicking the "Clone prompt to edit" or by clicking the + icon. The dialog box that appears will contain a wider list of providers.

 

Eliza_2-1741813377004.png

 

How can I identify which OOTB skills can be modified?

You can navigate to the sn_nowassist_skill_config table, and find the skills where is_template = true. You can also refer to the table in this article.

 

How can I build a good prompt?

A “good” prompt is difficult to define. For example, a prompt used in a summarization use case is unlikely to be ideal for a data analysis use case. With that said, we can provide some guidance on methods that have resulted in fit-for-purpose prompts:

  • Iterate, iterate, iterate: Take the time to fine-tune your prompt, using results from testing to guide you.
  • Use the prompt generation feature: Those on at least version 2.0.1 can use our prompt generator feature. This allows you to input a description of what you want the custom skill to do, and it generates a prompt that follows best practices.
  • Be data driven in your approach: Create sizable development datasets to use when testing your prompts. Ensure you include edge cases in this dataset – you may find your prompt returns less than satisfactory results when given an empty input, or text that spans multiple languages.
  • Prompting quickstart guides are useful, but not prescriptive: You may see example prompts that somewhat achieve what you are looking for. These are a great way to get a feel for what one can achieve with a prompt but should be seen as the foundations for you to develop your own prompt on top of, rather than a complete solution.
  • Remember – the LLM is not human: LLMs are great at mimicking human communication patterns, but remain artificial. Do not wordsmith your prompt assuming the LLM will comprehend it in the same manner a human would. An example is using the term "should" rather than a direct instruction.

 

An example below showcases the need for iteration, where we initially asked the LLM to do the following:

You are an expert in understanding the underlying emotions within text. Review the below survey answers and determine what the overall sentiment of the user is, and answer in one word.

The survey questions and answers are found below: {{GetSurveyResults.survey_comments}}

 

This had pretty poor results, with the LLM returning paragraphs of text, and thus we iterated on the prompt until we arrived at this prompt:

You are an expert in understanding the underlying emotions within text.
Review the below survey answers and determine what the overall sentiment of the user is, and answer in one word.
Use the following categories to provide the overall sentiment:
Negative: If the sentiment is negative in nature
Positive: If the sentiment is positive in nature
Neutral: If the sentiment is neither negative nor positive
The response should only contain the overall sentiment.
The survey questions and answers are found below: {{RetrieveSurveyResults.survey_comments}}

 

The second prompt provided us with outputs we classed as successful at a much higher frequency than the first.

This example is rather specific to our use case however, so we recommend spending the time to test and iterate on your prompts prior to deployment.

 

How can I dictate my desired format for the output of the skill?

You can do so from within the prompt itself – by adding statements such as:

  • Provide the list in bullet points
  • Answer in one word
  • Expand all acronyms in your response
  • Reply with a professional tone

However, in a technical sense, the output from the LLM is JSON, with a range of key:value pairs one can leverage. Learn more in the question "What is a skill output?"

 

What is meant by pre and/or post processors?

When building your skill, you have the option to add pre- or postprocessors. These are essentially scripts that run before the prompt leaves your instance (preprocessor) or after the response has been returned (postprocessor).

 

These are great to use if you have particular data handling restrictions that limit what data can leave your instance: you can configure a method of masking/unmasking particular information if the OOTB Sensitive Data Handler or Data Privacy solutions are not fit for your needs. Another use case that may require a processor is a mapping of acronyms specific to your organization. Before delivering the request to the LLM, a preprocessor can expand the acronyms so that the LLM knows what they represent.
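
A minimal sketch of the acronym-expansion idea, assuming a hypothetical mapping table (a real preprocessor script would apply this to the outgoing prompt text before it leaves the instance):

```javascript
// Sketch of an acronym-expanding preprocessor. The mapping and function name
// are hypothetical; a real preprocessor would run against the outgoing
// request payload.
var ACRONYMS = {
  MTTR: 'Mean Time To Resolution',
  CAB: 'Change Advisory Board'
};

function expandAcronyms(text) {
  return text.replace(/\b[A-Z]{2,}\b/g, function (match) {
    // Keep the acronym visible and append its expansion for the LLM.
    return ACRONYMS[match] ? match + ' (' + ACRONYMS[match] + ')' : match;
  });
}

var expanded = expandAcronyms('Review MTTR before the CAB meeting.');
```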

 

Eliza_5-1722619645124.png

 

Can I employ the use of multiple prompts in a single skill?

Yes - this scenario is handled by the usage conditions feature. Usage conditions allow you to state when each prompt should run. 

 

How do I delete custom skills I no longer need?

We do not have this functionality today. If you wish to remove the custom skill from displaying in NASK, you can navigate to the sn_nowassist_skill_config table and manually delete it; however, this leaves a number of metadata records. 

 

To avoid requiring this feature, we recommend creating your custom skills within an update set. This lets you identify all the records that were created as part of your skill.

 

What is the limit on tokens?

The token limit varies depending on the LLM you are using within the skill. For the generic Now LLM service, the limit is 16,000, which means that the request and response combined must be at or below 16,000 tokens.

 

If you see negative token counts in your testing, you are exceeding the token limit.

 

Is it possible for us to define the maximum request tokens for a skill?

The max token limit (i.e. the context window) is a fixed property of an LLM and cannot be changed.

Eliza_3-1737500297425.png

The maximum request tokens are calculated as: max token limit - max response tokens.
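
A worked example of this arithmetic, using the 16,000-token limit of the generic Now LLM service and an assumed max response tokens setting:

```javascript
// Worked example of the formula above. 16,000 is the documented limit for the
// generic Now LLM service; maxResponseTokens is an assumed design-time value
// set per prompt in the prompt editor.
var maxTokenLimit = 16000;    // fixed context window of the LLM
var maxResponseTokens = 2000; // assumed value configured for the prompt

var maxRequestTokens = maxTokenLimit - maxResponseTokens;
// tokens remaining for the prompt text, skill inputs, and tool outputs
```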

The max response tokens can be controlled for each prompt at design time within the prompt editor’s right-side panel; this can be done for any LLM configured for the prompt (except NowLLMs currently).

Eliza_0-1741812813323.png

Eliza_4-1734636261505.png

 

If you are running into token limits within the input, then you can experiment with using recursive summarization. It breaks down the requests to the large language models (LLMs) into smaller pieces so that you can maintain the context for generative AI capabilities.

 

Are there any best practices for assessing the number of tokens we’re passing to the prompt (prior to passing it) to ensure the skill is error-proof and provides a proper error response if something isn’t working?

A prompt variable can be either a skill_input or a tool_output.

Skill_inputs have a “truncate” option that automatically manages the number of tokens based on the maximum allowed.

Eliza_1-1741812821376.png

Eliza_2-1734636233236.png

 

Tool_outputs currently lack a built-in truncation feature. For now, if you are using tools, you can build response limits into the tool logic of the subflow, flow action, or script include itself to control the output length from a tool.
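
One way to sketch such a limit inside the tool logic, using a rough four-characters-per-token heuristic (the function name and ratio are assumptions, not an exact tokenizer):

```javascript
// Hypothetical helper to cap a tool's output before it reaches the prompt.
// The four-characters-per-token ratio is a rough heuristic, not a tokenizer.
function truncateToTokenBudget(text, maxTokens) {
  var approxCharBudget = maxTokens * 4;
  if (text.length <= approxCharBudget) {
    return text;
  }
  // Cut on a word boundary and flag the truncation for the LLM.
  var cut = text.slice(0, approxCharBudget);
  return cut.slice(0, cut.lastIndexOf(' ')) + ' [truncated]';
}
```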

 

What does the configuration option "temperature" mean?

Temperature controls the "creativity" of the LLM. For customers looking for a more deterministic or repeatable approach, we suggest setting the temperature lower. Lower temperatures provide more conservative, repetitive, and focused responses. Higher temperatures provide more diverse, creative, and unpredictable outputs; typical examples are content creation use cases, where more flexibility may be welcomed.

 

How do I prevent sensitive information from being sent to the LLM?

You can use the Data Privacy solution to mask personally identifiable information (PII) across all generative AI applications, including custom skills.

You can learn more here.

 

Eliza_2-1722623369592.png

How can I test my custom skill?

We offer an in-product method of testing. To do so, click on Run tests below the prompt editor. You will be presented with the output from the LLM in the Response tab. If you wish to review the data that was added to the prompt from your skill inputs/tools, you can click on the Grounded prompt tab.

Eliza_6-1722619645157.png

 

We also offer the ability to evaluate your custom skill. This feature, available in the tab labeled "Evaluations", allows you to select a data set of any size to test your custom skill against. If desired, you can also have our AI judge the responses for the following:

  • Faithfulness: Does the output stay true to the source material?
  • Correctness: Does the output correctly respond to each of the input instructions?

 

As a note, this evaluation feature leverages the ServiceNow OEM Azure OpenAI (GPT-4o) model. This model is not available in the APAC region as of January 2025. Please refer to this page for updates.

Eliza_4-1737500316278.png

 

Do note that testing your skill consumes Assists. For more information please reach out to your account representative.

 

 

Eliza_3-1722623405049.png

Where can I deploy my custom skills?

Relevant resource: Tool & Deployment Method Overview

As of the 3.0.0 version of Now Assist Skill Kit plugin (March 2025), you can deploy to a UI action, Now Assist panel, Virtual Agent, Flow Actions, or from within a script.

 

I’m done building my custom skill. What now?

Once your prompt is complete, and you have completed testing, you can now look to publish and deploy it.

 

To publish it, click Publish in the top right of the screen. This locks your prompt, meaning that no further adjustments can be made. If you wish to refine it at a later date, you will have to create a copy of the prompt and work on that copy.

Eliza_7-1722619645179.png

 

Once published, click on the Skill Settings tab, then click on Deployment Settings in the left navigation bar. This will give you the option to configure two things:

  1. Where in the Now Assist Admin console the skill should be found.
  2. How and where users will trigger your skill.

 

To give an example of deployment options, we will walk through deploying to a UI Action. Select the UI Action box, determine which record type the UI Action should be present on (typically this is whatever you selected as a skill input), and click Save. This automatically generates a UI Action that, when triggered, calls the skill and returns the response in an information message. You can edit how the output is used from within the script of the UI Action directly.

 

Eliza_8-1722619645187.png

 

Can I call my custom skills from within a flow or a Virtual Agent topic?

For those with at least the 3.0.0 version of the Now Assist Skill Kit plugin, you can deploy directly to a flow action or as a module within a Virtual Agent topic.

You can find a demonstration of deploying a custom skill to a flow action in the below video:

 

Can I call the custom skill from within a script?

Yes. See an example script below, and replace the variables with your data.

 

```javascript
var inputsPayload = {};

// create the payload to deliver input data to the skill
inputsPayload['input name'] = {
    tableName: 'table name',
    sysId: 'sys_id',
    queryString: ''
};

// create the request by combining the capability sys ID and the skill config sys ID
var request = {
    executionRequests: [{
        payload: inputsPayload,
        capabilityId: 'capability sys id',
        meta: {
            skillConfigId: 'skill config sys id'
        }
    }],
    mode: 'sync'
};

// run the custom skill and get the output in a string format
try {
    var output = sn_one_extend.OneExtendUtil.execute(request)['capabilities'][request.executionRequests[0].capabilityId]['response'];
    var LLMOutput = JSON.parse(output).model_output;
} catch (e) {
    gs.error(e);
    gs.addErrorMessage('Something went wrong while executing the skill.');
}
action.setRedirectURL(current); // refresh the record (UI Action context)
```

 

 

 

Can I see a demo using NASK?

You can find one here.

 

Eliza_4-1722625255023.png

When I navigate to NASK, I get an error stating “You do not have permission to access this page”

To access NASK, you need the sn_skill_builder.admin role. Please ensure your user has that role, then log out and log back in to see if access has been granted.

 

I'm trying to add a skill input that consists of a script function, but when I select a script include, no functions appear.

Ensure that your script include is accessible from all application scopes.

Eliza_0-1733787867625.png

 

 

Why is my custom skill not appearing in Now Assist Admin console?

Ensure that your skill has been published, and a deployment method selected.

If you selected “Other” under the deployment settings, you will find your skill in the tab named Available.

Eliza_9-1722619645195.png

 

 

Why can’t I find NASK in my instance?

  1. Check the following:
    a) Your instance is on the Xanadu release.
    b) Your license for relevant Now Assist plugins is up to date.

    c) All relevant Now Assist plugins (Now Assist for ITSM/HRSD/CSM/ITOM, etc.) are up to date. Now Assist Skill Kit comes bundled with the latest version of the plugins, as shown in the screenshot below.

Eliza_2-1722894967786.png

 

  2. To see and access Now Assist Skill Kit, you’ll need to grant the role sn_skill_builder.admin to users who will use it. If this role has not been assigned, you’ll receive an error message when you try to navigate to the Now Assist Skill Kit.

Eliza_3-1722894967786.png

  3. Once you’ve assigned the sn_skill_builder.admin role, log out and then log back in for the change to take effect.

If you have completed steps 1 - 3 and are still unable to access NASK, please log a case and we will investigate.

 

Why is my page not loading?

Try refreshing the page.

 

Why isn't my custom skill appearing on the record I have selected?

This is likely because your skill hasn’t been:

  1. Published
  2. Deployed as a UI Action
  3. Activated from within the Now Assist Admin console

 

Why am I not getting a response when I run a test?

You may be experiencing issues with your connection to the LLM. If you are using the generic Now LLM service, please raise a case, and we will investigate. If you are using an external LLM, please first verify that the service is running, then check your connection and credentials.

 

Why can't I add a tool?

You may be running into one of the below known issues (these have all been remedied in Xanadu Patch 2):

  • Tool names cannot contain a space (e.g. you can name it "retrievePolicies", but not "retrieve policies").
  • The name of the flow action/subflow should not be the same as a skill within the instance.
  • When calling a function from within a script tool, the input parameters of the function cannot be in camelCase. See example below:
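
As a hypothetical illustration of the workaround (the function name and parameters are invented), a script tool function avoiding this issue would use snake_case input parameters:

```javascript
// Hypothetical script include function for use as a script tool. Note the
// snake_case input parameters, which avoid the camelCase issue described
// above (remedied in Xanadu Patch 2). Name and logic are illustrative only.
function summarize_record(record_number, short_description) {
  // A real script include might query the record; here we simply build a
  // prompt-ready string.
  return record_number + ': ' + short_description;
}

var toolOutput = summarize_record('INC0010001', 'VPN connection drops hourly');
```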

Additionally, if you are adding a flow/sub-flow, please verify that they are published and have the correct access rights.

 

Why can't I add a skill input to the prompt?

This is a known issue when working with Now Assist Skill Kit in Firefox in instances operating on Xanadu Patch 1. Please try a different browser or update your instance version to resolve.

 

Why am I getting an error message "Could not fetch skill tools" when adding a tool?

You are encountering an issue relating to metadata. Follow the support article KB1702519 to remedy this.

 

Why am I getting an error message "There was an error while loading the list of skills" when I first open Now Assist Skill Kit?

This is a known issue in Xanadu Patch 1 instances. After you create your first skill, this message will go away.

 

Why is my JSON array skill input not working?

This skill input type is unsupported at present. In the interim, please use a string skill input type.

 

Why isn't my subflow running?

Try the following:

  • Validate that your subflow returns the correct response by testing it in Flow Designer first.
  • Verify that you have published your subflow.
  • If you have edited your subflow after adding it as a tool, try deleting the tool and re-adding it. The current version of the product cannot detect when the inputs or outputs of a subflow are modified.

 

Why does nothing happen when I click the "Create Skill" button?

Please ensure the Now Assist Admin Console plugin is on at least version 4.0.5.

 

Why aren't any responses being returned when I click "Run Test"?

This is a known issue if you have Now Assist Guardian's prompt injection turned on. We have a fix targeted for release soon, but in the interim please turn off this feature while developing custom skills.

 

Why can't I find the table I'm looking for in the drop down options?

A fix has been deployed for this in version 1.0.3 of the Now Assist Skill Kit plugin. To install:

  1. Go to System Definitions > Plugins.
  2. Search for Generative AI Controller. Click on the tab named Updates.
  3. Click on the Generative AI Controller plugin.
  4. Click on Proceed to update. In the dialog box, ensure that you are upgrading to at least version 7.0.3.
  5. Update your plugin.

This version of the Generative AI Controller should update your Now Assist Skill Kit plugin to the required version to remedy this issue.

 

Why can't I see the entire list of LLM providers when cloning an OOTB skill?

When cloning a skill, you will likely only be given the option of a small group of LLMs. This is because the list is derived from the LLM providers currently used in the existing prompts attached to the skill.

 
 

111.png

 

If you wish to modify the LLM for OOTB skills with an LLM that is not provided in that list, you have to do the following:

  1. Click the Clone button on the OOTB skill screen within NASK
  2. In the dialog box that appears (as seen above), leave the provider as is
  3. Click Clone
  4. In the newly cloned skill, add a new prompt by clicking "Copy prompt to edit" or the "+" icon.
  5. In the dialog box that appears, you will be able to find the extended list of LLM providers.

Eliza_3-1741813845983.png

 

 

Additional Resources

Comments
Alexandre Assis
ServiceNow Employee

Hi @Eliza  

Is there a list of allowed tables or something that limits the tables that show up as Input Records?

I am trying to build a new skill, but the table I want to use as the input record doesn't show up in the list. If I type its name, it is shown and I can select it, but if I open the skill input record again, I can see that Skill Kit stored a different record...

 

Here is the search for the table using its technical name. I can select it and save the skill input.

AlexandreAssis_0-1724345319339.png

 

But here is what I get when I try to open it again to verify. It's a different table...

AlexandreAssis_1-1724345417935.png

 

mahajanravish5
Tera Contributor

Yes, I am also facing the same issue. I can't see the most common tables such as incident, kb_knowledge, cmdb_ci, etc.

Also, I have a question: if I have ITSM Pro Plus, is it possible to access HR Case tables and create custom skills for HRSD?

Or can only ITSM-specific tables be accessed with ITSM Pro Plus?

 

Kindly clear my doubt. Thanks! 

#Now_Assist #Now Assist

 

@Eliza @Alexandre Assis 

Eliza
ServiceNow Employee
ServiceNow Employee

Hi both!

 

This issue has been resolved in the latest version of the Now Assist Skill Kit plugin - version 1.0.3. To install, you need to get the latest version of the Generative AI Controller plugin, which is available for those on Xanadu Patch 1. 


To get the latest version, update the Generative AI Controller plugin by following the steps below:

  1. Go to System Definition > Plugins.
  2. Search for Generative AI Controller. Click on the tab named Updates.
  3. Click on the Generative AI Controller plugin.
  4. Click on Proceed to update. In the dialog box, ensure that you are upgrading to at least version 7.0.3.
  5. Update your plugin.

 

@Alexandre Assis @mahajanravish5 

SørenC
Tera Contributor

Hi

I have been playing around with NASK and am getting some nice results 🙂

But I'm struggling a bit with getting "activities" (journal) entries as input to a skill. I'm building a custom skill on the incident record where I would like a summary of the reassignment history, and I would very much like to use the information in the "activities" (journal). I think this might be a very basic requirement, but I cannot get it working. How can this be done?

I hope there is a solution..

Best regards

Søren

SørenC
Tera Contributor

Hi

Just an update to the above

 

I learned how to "add a tool" and watched the above video, and using that knowledge I managed to add a subflow retrieving the relevant information from the sys_journal_field table.

 

But now it got me curious about how to use a script as a "tool" in NASK. Is there any documentation about this? How do I define inputs and outputs?

jsz1234
Tera Explorer

Hi @Eliza, We are attempting to use our custom skill/feature from the Now Assist panel but are not sure how to configure this. We've selected 'now panel' in both the skill configuration and in the Now Assist feature section, and still only see OOB feature suggestions.

Our Generative AI Controller plugin is on the latest version: 7.0.3 and our test instance is on Xanadu patch 2. 

 

Any info on the 'now assist panel' option would be appreciated, since we're unable to find much documentation on how to modify the welcome prompt and the skills used here.

 

Thanks!

 

Screenshot 2024-10-25 at 3.24.33 PM.png
Screenshot 2024-10-25 at 3.25.20 PM.png

aayushshah998
Tera Contributor

Hi, 

 

I'm trying to find the prompt and response logs from the Skill Kit. Can someone please point me to the table these would be stored in? I looked at the Now Assist QnA logs, but none were found there.

Viswanatha Redd
Tera Contributor

Hi

I have the below queries on the capabilities of the Now Assist Panel:

  1. The welcome message in the Now Assist Panel always shows the default message and prompt. Is this configurable? How can a new prompt/skill name be shown in the default welcome message once it is ACTIVE?
  2. Does Now Assist offer ChatGPT-like interaction in the Now Assist Panel? Let's say I opened a record and asked the following series of questions:
    • What are the key services impacted?
    • Which recently deployed changes have potentially caused this incident?
  3. Can Now Assist respond to the above queries in the context of the current incident record?

 

 

Jayden4
Tera Contributor

 

This post has been updated: you can use customized fields on tables as inputs. 

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @Jayden4 ,

 

Custom fields can be used to bring additional information into NASK - if the test record has a value in that field, it should bring the result in.

 

Is the custom field a worknotes/journal field perhaps? That can also introduce difficulty, as you have to actually create a subflow to collect each value from that field.
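To make the subflow requirement concrete, here is a hedged sketch of the kind of logic such a subflow (or a script step inside it) performs. The helper name and entry shape are illustrative, not from this thread; the GlideRecord query that would feed it only runs inside an instance, so it is shown in comments.

```javascript
// Illustrative helper: turns collected journal entries into one text
// block a skill prompt can consume. In an instance, the entries array
// would be built by looping a GlideRecord over sys_journal_field, e.g.:
//   var gr = new GlideRecord('sys_journal_field');
//   gr.addQuery('element_id', current.getUniqueValue());
//   gr.addQuery('element', 'work_notes');
//   gr.orderBy('sys_created_on');
//   gr.query();
function formatJournalEntries(entries) {
  return entries
    .map(function (e) {
      return '[' + e.createdOn + ' - ' + e.createdBy + '] ' + e.value;
    })
    .join('\n');
}

// Example shape of the data the loop above would collect:
var entries = [
  { createdOn: '2024-08-01 10:00:00', createdBy: 'abel.tuter', value: 'Reassigned to Network team' },
  { createdOn: '2024-08-01 11:30:00', createdBy: 'beth.anglin', value: 'Escalated after SLA breach' }
];

console.log(formatJournalEntries(entries));
```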

Jayden4
Tera Contributor

Hi @Eliza , this was user error on my part, thanks for the info. Will update my post to remove any confusion. 

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @SørenC ,

 

Regarding guidance on using the script tool - I hope to write something more comprehensive soon, however, in the interim:

 

When you are using scripts as an input tool, you will be asked to select a function from within the script include table. A common gotcha: you need to provide the name of the script include in both the Name and Resource fields for the list of functions to appear. If your script requires any inputs, it will prompt you to provide that input parameter.

Eliza_0-1741817402444.png
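As a rough illustration, here is a hedged sketch of a classless Script Include (one whose record name matches its function name) that could back such a script tool. All names are hypothetical; in a real instance the placeholder body would be replaced with GlideRecord queries.

```javascript
// Hypothetical classless Script Include named getReassignmentHistory.
// In NASK, enter this name in both the Name and Resource fields so the
// function list appears; the incidentSysId parameter is what NASK will
// prompt you to map as an input.
function getReassignmentHistory(incidentSysId) {
  // Placeholder data: in an instance you would GlideRecord-query
  // sys_audit or sys_journal_field here. Returning a plain string
  // keeps the output easy to splice into the prompt.
  var lines = [];
  lines.push('Reassignment history for ' + incidentSysId + ':');
  lines.push('- Service Desk -> Network (2024-08-01)');
  lines.push('- Network -> Major Incident (2024-08-02)');
  return lines.join('\n');
}

console.log(getReassignmentHistory('example_sys_id'));
```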

 

Dexter Chan
ServiceNow Employee
ServiceNow Employee

Hi Viswanatha Redd,

  1. You can set automatic system greeting messages from the sys_cs_context_profile_message table.
  2. The promoted skills in the pills can be promoted from the Virtual Agent Designer, but there is currently no way to change their order.
  3. Now Assist does offer multi-turn capabilities for you to have a conversation with the Now Assist Panel (e.g. complete a series of questions to order a catalog item), and it can also give you additional information on a record.
  4. Yes, the Now Assist Panel can respond in the context of the record that is in the content pane. For example, you can say "summarize this record".

Hope that helps!

sanjay02
Tera Guru

Hi @Eliza , @Dexter Chan 
While looking into custom skills, I noticed that they are available in the Virtual Agent. I'm curious to understand how custom skills work in the Virtual Agent. Do they have triggers like topics, or is it something new that enhances the Virtual Agent experience? Could you please explain how it works? Your input would be very helpful.


image.png

Jayden4
Tera Contributor

Hi there, 

 

In Yokohama, has the ability to see how many tokens the input and response consume been removed? This is useful information for scoping out how many 'assists' a skill will use. 

 

I noticed I could see this in Xanadu:

Jayden4_0-1744761790586.png

 

But in Yokohama:

Jayden4_2-1744761830502.png

Is there a setting somewhere to bring this back?

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @Jayden4,

 

The option to view these details has moved to the test history section. To access, click on the clock icon in the panel on the right, then select the test run you wish to review.

 

Eliza_0-1745277516496.png

 

I'm passing this feedback on the change to our product team, so watch this space in the next release 🙂

 

Thanks,

Eliza

 

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @sanjay02,

 

When you deploy a custom skill to Virtual Agent, the custom skill is triggered in the same manner a topic would be triggered.

fibinpious
Tera Contributor

Hi, I'm encountering an issue while developing a custom skill. Despite explicitly configuring the skill to avoid external data sources in the prompt, the test run indicates that it is still accessing external information. Could anyone offer insights or suggestions on how to ensure the skill adheres to the specified restriction?

Jayden4
Tera Contributor

@fibinpious Because the models are trained on external data, they will always try to reference this data when building a response, if that is what you mean?

The Now LLM Generic doesn't access external information; it doesn't know anything about the world after 2023. 

sanjay02
Tera Guru

Hi @Eliza , thanks for the information. Is there any documentation or videos I can refer to for custom skills in Virtual Agent?

jay97
Tera Contributor

Hello @Eliza ,

 

I would like to use the incident summarization skill to summarize records. However, my client is not using the standard 'description' field as provided out of the box; instead, they are using a 'description HTML' field.

First, I tried cloning and modifying the Out-of-the-Box (OOB) incident summarization skill, but I received the message: 'This is a clone of ServiceNow skill. Prompts and providers can be customised. Inputs, outputs, tools and deployment settings cannot be edited.' This indicates that if I need to use the 'description HTML' field in the prompt, I must create a custom skill and specify 'description HTML' as the input.

 

input_isssue.PNG

 

 

 

  1. I have created a custom skill. However, the configuration options available in the OOB incident summarization skill (such as 'Choose Input,' 'Custom Prompt,' etc.) are not visible in my custom skill. How can I enable or access these options in a custom skill? (Image of OOB Incident Summarization skill showing options) OOB.PNG


     

    (Image of My Custom Skill not showing options)

     2.PNG

  2. How can I link my custom skill to the 'Summarize' button in the UI, so it can be triggered from both the Core UI and the Configurable Workspace? (Image of the Summarize button)

    3.PNG

 

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @sanjay02,

 

I just posted this video that can help when looking to expose custom skills within Virtual Agent.

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @jay97,

 

There are two approaches you can take with skills in Now Assist Skill Kit:

  • Editing OOTB skills
    • This lets you clone an OOTB skill and modify its provider LLM and prompt. Inputs and the deployment vector (e.g. the UI component seen in question 2) cannot be modified.
    • You can find a walkthrough of this process here.
  • Creating and deploying custom skills
    • This gives you full control over inputs, the provider LLM, and the deployment vector. However, you do have to create everything from scratch.

With that said, to answer your questions:
1. This difference is because the first image is from an OOTB skill. These options are given so that admins can configure the OOTB skill; however, the options are limited. If you build a custom skill, you are asked to define the inputs etc. within Now Assist Skill Kit itself.

2. That Summarize UI component on the workspace is only for OOTB skills. If you wish to reuse it for a custom skill, you will have to build it out yourself. We are looking to make this easier later this year, but in the interim, we suggest using UI Builder and having it trigger the custom skill via script.
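For the scripted-trigger route, the sketch below shows one way such a call could be shaped. The OneExtendUtil API name and payload structure in the comments are assumptions based on the Generative AI Controller pattern and may differ by release, so verify them against your instance; the request-building function itself is plain JavaScript.

```javascript
// Builds a request object for executing a skill from script. The
// capability sys_id and input names below are placeholders, not values
// from this article.
function buildSkillRequest(capabilityId, inputs) {
  return {
    executionRequests: [
      { capabilityId: capabilityId, payload: inputs }
    ]
  };
}

var req = buildSkillRequest('your_capability_sys_id', { input: 'INC0010001' });

// Assumed instance-side call (e.g. from a server script reachable via a
// UI Builder client action) - confirm the exact API for your release:
//   var result = sn_one_extend.OneExtendUtil.execute(req);
```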

DarshanShah
Tera Contributor

I am looking to display the NASK output in a view similar to the display card for the 'Summarize' feature provided by Now Assist. Are there any steps I can follow to send my output so it displays in the same view style as below? I would also love to have the 'Share to work notes' feature.

 

DarshanShah_0-1748281949896.png

 

Jayden4
Tera Contributor

@DarshanShah You can recreate the 'Share to work notes' behaviour as a UI action that writes to the journal on the record you're on. 
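A hedged sketch of the server-side body such a UI Action might use; the field and variable names are illustrative, and outside an instance a plain object stands in for `current`:

```javascript
// Copies a generated summary into a record's work_notes journal field.
// In a real UI Action, `record` would be `current`, and the assignment
// would be followed by current.update() plus any redirect logic.
function shareToWorkNotes(record, summaryText) {
  record.work_notes = 'Now Assist summary:\n' + summaryText;
  return record;
}

var fakeCurrent = {}; // stand-in for `current` outside an instance
shareToWorkNotes(fakeCurrent, 'User unable to connect to VPN; password reset resolved the issue.');
console.log(fakeCurrent.work_notes);
```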

Jayden4
Tera Contributor

Is there a way to increase the maximum response token count on the Now LLM Generic?

Jayden4_0-1748575425713.png



Where is this setting changed? I want larger responses, but 1000 tokens / 1 assist limits the output. I cannot locate the setting in the Now Assist Admin space. 

Being able to change this to 5000 would be huge; we could obtain much more detailed insights with a higher response cap. 

svani
Tera Contributor

Have a question about custom skills: we have licenses for ITSM and HRSD. Can we create custom skills for SAM and ITOM related tables? 

 

Thanks in advance

Eliza
ServiceNow Employee
ServiceNow Employee

Hi @svani,

You can refer to the field of use guide for NASK, but I would also suggest reaching out to your account representative to confirm.

Jayden4
Tera Contributor

Hi there, 

 

Is there a setting to have a custom skill run as admin?

 

The reason is that I have a sub-flow that collects worknotes on an sn_grc_issue ticket, via the sys_journal_field table, and these are captured correctly in the sub-flow and validated via the sub-flow test. 

 

However, when this sub-flow is added as a tool in a Custom Prompt in NASK, ACLs are being applied and blocking the users who test the Custom Prompt from retrieving the comments. 

 

If a sysadmin does the test, or if we give specific users read access to the table, this works properly. 

 

Is there any way to have a NASK Custom Skill run as admin, in the same way you can have a sub-flow run as 'system'?

SamirNyra
Tera Contributor

Is it fair to assume that, while Now Assist skills only use ServiceNow native LLMs, NASK allows us to use non-native models as well?

Jayden4
Tera Contributor

@SamirNyra You can bring in your own LLM if you want; Eliza made a how-to video here

Ricardo Reis
Tera Contributor

Hello @Eliza

We have created 5 extended tables for the Incident table, so I wanted to ask you if the Now Assist skills, for example Incident Summarisation, work in the same way for these custom tables.

 

Thank you in advance.

kalyan vallams1
Tera Contributor

I have been working on custom skills. As part of prompt performance and evaluations, I have human feedback along with reasons, and I would like to use it to refine the LLM responses. Could you please point me to the documentation or article that explains how human feedback is used to refine the skill?

Jayden4
Tera Contributor

Does anyone know what causes the 'Update Prompt' to sometimes be greyed out?

 

I have made several skills, all published, and some cannot have the prompt updated while others can and I cannot work out why:

Jayden4_0-1753753759682.png

 

On certain skills, I can clone the prompt via the side menu but it looks like if you publish a default prompt with no spare versions, you lose the ability to update the prompt any further and have to clone the entire skill? Is this a bug?

No clone and no update prompt (the dual square thing):

Jayden4_1-1753753956075.png

 

Another skill that has a spare version of the prompt that can be edited, updated and eventually promoted, while v3 cannot be touched (which makes sense as it's published):

Jayden4_2-1753753991469.png

 

 

Is it a bug that if you publish a default prompt with no spare versions, you cannot create any new versions of the prompt @Eliza ?


 

Version history
Last update:
08-07-2025 10:39 AM