Eliza
ServiceNow Employee


When using Now Assist products, you may encounter a scenario where you wish to modify the large language model (LLM) that provides generative AI capabilities. This guide walks you through that process, showing how you can set up the connection to an external LLM and what options you have for changing the provider for both out-of-the-box (OOTB) and custom Now Assist skills.

 

Please note that the selection of an LLM depends entirely on your use case and the constraints you face around topics such as data sovereignty and data handling. We cannot provide guidance on which LLM to select.

 

Scenario 1: You want to modify the LLM for an OOTB Now Assist skill

We offer the ability to modify LLMs for certain OOTB Now Assist skills through two methods:

1. Using a ServiceNow integrated model (available on Yokohama Patch 6 and later, or any Zurich release)
Through our model provider flexibility initiative, you can change the provider for all OOTB skills without requiring additional licenses. This configuration is done directly within the Now Assist Admin console, which allows you to select from the following integrated model providers:

  • Now LLM Service
  • Microsoft Azure OpenAI
  • Google Gemini
  • AWS Anthropic Claude

 

2. Using Now Assist Skill Kit

Use this method if you wish to use a provider outside the list of ServiceNow integrated models noted above (i.e., you wish to change the provider to a BYOK/BYO LLM model).

To see which of your skills are eligible for this method, navigate to the sn_nowassist_skill_config table and find the skills where is_template = true (a script sketch for this query follows the table below). As of August 2025 / Yokohama Patch 6, the eligible skills are:

 

Application | Skill Name
Flow Generation | Flow Generation with Images
Global | Incident summarization (copy)
GRC Shared | GenAI Risk Assessment Summarization
Now Assist for Accounts Payable Operations (APO) | Invoice data extraction
Now Assist for Customer Service Management (CSM) | Case summarization
Now Assist for Customer Service Management (CSM) | KB generation
Now Assist for Customer Service Management (CSM) | Resolution notes generation
Now Assist for Field Service Management (FSM) | KB generation
Now Assist for Field Service Management (FSM) | Work Order Task Summarization
Now Assist for FSC Common | Purchase order summarization
Now Assist for HR Service Delivery (HRSD) | KB generation
Now Assist for HR Service Delivery (HRSD) | Persona Assistant
Now Assist for HR Service Delivery (HRSD) | Case summarization
Now Assist for IT Service Management (ITSM) | Resolution notes generation
Now Assist for IT Service Management (ITSM) | Change request risk explanation
Now Assist for IT Service Management (ITSM) | KB generation
Now Assist for IT Service Management (ITSM) | Incident summarization
Now Assist for IT Service Management (ITSM) | Change request summarization
Now Assist for OTSM | OT Incident resolution notes generation
Now Assist for OTSM | OT Incident summarization
Now Assist for Security Incident Response (SIR) | Security operations metrics analysis
Now Assist for Supplier Lifecycle Operations (SLO) | Supplier case summarization
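
If you would rather pull this list directly from your own instance, a minimal background-script sketch is below. The table name and is_template filter are as described above; what gets logged is each record's display value, which depends on your instance's configured display field.

// Background script sketch: list the OOTB skills eligible for provider
// changes via Now Assist Skill Kit, using the table and is_template
// filter described above.
var skill = new GlideRecord('sn_nowassist_skill_config');
skill.addQuery('is_template', true);
skill.query();
while (skill.next()) {
    gs.info('Eligible skill: ' + skill.getDisplayValue());
}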

 

If you wish to modify the provider for a skill not on this list, please let your account representative know.

 

To modify the OOTB skill’s LLM, you can follow the steps in the video below.

 

 

Scenario 2: Selecting an LLM for a custom Now Assist skill

When you are looking to build your own generative AI functionality, you can use Now Assist Skill Kit. During skill creation, you will be asked to select from a list of providers. This list is sourced from the records with external = true on the sys_generative_ai_model_config table.

[Screenshot: provider selection during skill creation in Now Assist Skill Kit]
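
To preview which providers will appear in that list on your instance, you can use a background-script sketch along the same lines as the earlier one; again, the display value logged depends on the table's configured display field.

// Background script sketch: list the model configurations that feed the
// Now Assist Skill Kit provider picker, per the external = true filter
// described above.
var model = new GlideRecord('sys_generative_ai_model_config');
model.addQuery('external', true);
model.query();
while (model.next()) {
    gs.info('Provider option: ' + model.getDisplayValue());
}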

 



How to connect to external LLMs
 

Connecting to an external LLM comes with varying degrees of difficulty, depending on your desired LLM provider. If you are using one of the integrated model providers, you do not need to configure anything; the connection is managed entirely by ServiceNow. The list of providers is below:

Integrated Model Providers

  • Now LLM Service
  • Microsoft Azure OpenAI
  • Google Gemini
  • AWS Anthropic Claude

 

If you are required to use an LLM that is not managed by ServiceNow, you must procure your own license for that particular provider. There are two options here: connecting to an LLM that has a spoke, or bringing your own LLM (BYO LLM). We recommend using an LLM that we have a spoke for where possible, as the connection process is more streamlined and easier to manage.

 

Connecting to an LLM spoke

We provide a number of spokes to external LLMs, including (as of March 2025):

  • Azure OpenAI
  • OpenAI
  • Aleph Alpha
  • IBM's WatsonX
  • Amazon Bedrock
  • Google's Vertex AI
  • Google's Gemini AI Studio

 

To connect to a spoke, you need to procure your own license and provide ServiceNow with the key and any additional credentials that the API may need to verify the connection.

 

For providers that offer multiple LLMs, such as Amazon Bedrock and Vertex AI, please note that you are limited to using only one LLM at this time.

 

To see how to connect to one of our spokes, you can view the recording below:

 

If you are connecting to IBM's WatsonX spoke, you can follow the steps outlined from 8:00 to 13:00 in the AI Academy session below. Do note that the build after the 13-minute mark uses an outdated method of creating generative AI functionality within ServiceNow; we recommend using Now Assist Skill Kit instead.

 

 

Connecting to an LLM that does not have a spoke (Custom LLM)

Instances on the Washington DC release or later can use the generic LLM connector to connect to any external LLM not listed above (i.e., BYO LLM). This process requires a fair amount of technical acumen. To integrate with a non-spoke-supported LLM, you need:

  • An API key from the provider 
  • Endpoint for the LLM 
  • Access to the API documentation for that LLM, to assist with writing the transformation script that translates the request and response into an acceptable format (a sketch follows this list)
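
As a rough illustration of what a transformation script involves, here is a sketch for a chat-completions-style API. The function names, variable names, and payload shape are all assumptions for illustration; the generic LLM connector defines its own script interface, so follow its documentation for the actual contract.

// Illustrative only: shape a Now Assist prompt into a chat-completions
// style request body, then pull the generated text back out of the
// provider's response. All names here are assumptions.
function buildRequestBody(prompt) {
    return JSON.stringify({
        model: 'my-custom-model', // assumption: your provider's model ID
        messages: [{ role: 'user', content: prompt }],
        temperature: 0.2
    });
}

function extractResponseText(responseBody) {
    var parsed = JSON.parse(responseBody);
    // Chat-completions style envelope; adjust to your provider's schema.
    return parsed.choices[0].message.content;
}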

An example demonstrating the process of connecting to an external LLM using the generic LLM connector can be seen in the video below:

Comments
Prakash53
Tera Contributor

Hi, I am trying to create a Custom LLM, but I am getting this error from the skill:

{ "error": { "sourceErrorMessage": "", "sourceErrorResourceId": "", "errorMessage": "Method failed: (/together/v1/chat/completions) with code: 401 - Invalid username/password combo", "sourceErrorFeatureInvocationId": "" }, "status": "ERROR" }

 

Even though I am using the correct API key for this. Can someone help me here?

Eliza
ServiceNow Employee

@Prakash53,

 

I suggest testing your API key and endpoint using Postman (or via a curl request in a terminal if you are on Mac/Linux). If it returns a 200/success code there, then you may have to redo your configuration within ServiceNow.
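
If you also want to test from within the instance (which exercises any MID Server path as well), something like this background-script sketch can help isolate where the 401 originates. The endpoint and model name below are placeholders inferred from your error message; substitute your provider's actual values.

// Outbound REST test from a background script. Endpoint, model, and key
// are placeholders; replace with your provider's real values.
var req = new sn_ws.RESTMessageV2();
req.setEndpoint('https://api.together.xyz/v1/chat/completions'); // placeholder
req.setHttpMethod('post');
req.setRequestHeader('Authorization', 'Bearer YOUR_API_KEY');
req.setRequestHeader('Content-Type', 'application/json');
req.setRequestBody(JSON.stringify({
    model: 'YOUR_MODEL', // placeholder
    messages: [{ role: 'user', content: 'ping' }]
}));
var resp = req.execute();
gs.info(resp.getStatusCode() + ': ' + resp.getBody());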

acraus
Tera Explorer

Hello @Eliza,

 

Thank you for creating these guides.

I am trying to connect an agent created on AWS Bedrock to a ServiceNow instance.
Is it possible to do this? If yes, can you give me some pointers, please?
Or can only models be used?
Thank you.

Eliza
ServiceNow Employee

Hi @acraus,

 

You have a couple of options here (and, as a disclaimer, these are my personal musings, so I cannot vouch for the validity of the solution without actually seeing your instance): you can set up an AWS Bedrock MCP server to reach out to ServiceNow, or use ServiceNow's A2A (Agent to Agent) protocol. I believe we are releasing an A2A connection to Amazon in the near future, so stay tuned if you wish to utilise that protocol!

 

Depending on the use case, you can also potentially just point your agent at an API that reaches into ServiceNow to fetch the information it may need (see the sketch below).
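
For instance, a hypothetical agent-side lookup against the ServiceNow Table API (a documented REST API) might look like the following; the instance URL, credentials, and query are all placeholders.

// Hypothetical agent-side call into the ServiceNow Table API, Node 18+.
// Instance URL, credentials, and query parameters are placeholders.
const res = await fetch(
    'https://YOUR_INSTANCE.service-now.com/api/now/table/incident?sysparm_limit=5',
    {
        headers: {
            'Accept': 'application/json',
            'Authorization': 'Basic ' + Buffer.from('user:password').toString('base64')
        }
    }
);
const data = await res.json();
console.log(data.result);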

Zack Sargent
ServiceNow Employee
ServiceNow Employee

@Eliza :

It seems like only one custom model at a time can be supported this way. Given that we are watching a storm of very capable open source models, this year from China (Deepseek, Qwen, Kimi, and last week, Longcat), I can see ServiceNow customers wanting to use multiple custom LLMs at a time. Especially given the "I'm the best this week!" pace of things. Longcat is apparently really good at "tool use" while Qwen3-Coder and Kimi K2 are on par with Claude for many coding tasks. I don't think we're going to keep up with the "spoke per model" pacing of the market in the coming year, so it would be nice to have a streamlined "New Model" guided setup or similar. Heck, we can scrape most of the pieces for the setup from Huggingface with just a model name, most of the time ... which sounds like future agent work. (ahem)

Chris Yang
Tera Sage

Hi, I followed the steps for BYO LLM but I'm getting an error response; has anyone else seen this?

FYI, I'm able to get a response from the GenAI Controller, and we are going through a MID Server to reach the provider.

 

{
  "error": {
    "sourceErrorMessage": "",
    "sourceErrorResourceId": "",
    "errorMessage": "Persisting an unterminated plan is not supported for plan with id 8fd644809f00ba103ec75d55b5b517c9 and name: Custom LLM",
    "sourceErrorFeatureInvocationId": ""
  },
  "status": "ERROR"
}

 

bhakti_padte
Tera Explorer

Hello, I am also getting the same error, as below:

{
  "error": {
    "sourceErrorMessage": "",
    "sourceErrorResourceId": "",
    "errorMessage": "Persisting an unterminated plan is not supported for plan with id bc1c6e78138cb6503a2720bb2d3514dd and name: Custom LLM",
    "sourceErrorFeatureInvocationId": ""
  },
  "status": "ERROR"
}

@Chris Yang, did you find a solution for this?
