
Anthony JC
Tera Explorer

As organizations accelerate their AI transformation journeys, ServiceNow has introduced a Generative AI strategy that is both highly flexible and deeply integrated into the platform’s core architecture. At the heart of this strategy lies a dual-model framework, designed to help enterprises align the right AI model with the right business need, whether that’s summarizing complex cases, enabling virtual agent conversations, or automating classification tasks at scale.

This architecture supports two primary categories of AI models:

  1. Now LLMs – These are purpose-built, domain-specific large language models developed and hosted by ServiceNow. Optimized for enterprise service management use cases such as ITSM, HRSD, CSM, and IRM, Now LLMs are deployed on trusted infrastructure, including NVIDIA and Hugging Face, ensuring both scalability and contextual accuracy.
  2. OEM LLMs – These general-purpose models are integrated into the platform via licensed partnerships. A key example is Azure OpenAI, which provides access to models like GPT-3.5 and GPT-4 through ServiceNow’s SN OEM model service. Additionally, ServiceNow supports integration with external models from providers such as OpenAI, AWS, Google Cloud, and IBM watsonx, as well as the newly added NVIDIA Apriel family, which offers compact, reasoning-optimized models like Nemotron 15B. These models can be brought into the platform using the Bring Your Own LLM connector as part of a custom GenAI experience.
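To make the dual-model framework concrete, the two categories above can be pictured as a simple model registry with a selection rule: prefer a domain-tuned Now LLM when the use case matches, and fall back to a general-purpose OEM model otherwise. This is an illustrative sketch only; the type names, registry entries, and `selectModel` function are hypothetical and do not represent ServiceNow APIs.

```typescript
// Hypothetical registry modeling the two model categories described above.
type ModelCategory = "now_llm" | "oem_llm";

interface LLMEntry {
  name: string;
  category: ModelCategory;
  provider: string;       // who hosts or serves the model
  useCases: string[];     // workflows the model is optimized for
}

const registry: LLMEntry[] = [
  {
    name: "Now LLM",
    category: "now_llm",
    provider: "ServiceNow (NVIDIA / Hugging Face infrastructure)",
    useCases: ["ITSM", "HRSD", "CSM", "IRM"],
  },
  {
    name: "GPT-4 (Azure OpenAI)",
    category: "oem_llm",
    provider: "Azure OpenAI via SN OEM model service",
    useCases: ["general language understanding"],
  },
  {
    name: "Apriel Nemotron 15B",
    category: "oem_llm",
    provider: "NVIDIA (Bring Your Own LLM connector)",
    useCases: ["compact reasoning"],
  },
];

// Prefer a domain-specific Now LLM when the use case matches;
// otherwise fall back to the first general-purpose OEM model.
function selectModel(useCase: string): LLMEntry {
  const domainMatch = registry.find(
    (m) => m.category === "now_llm" && m.useCases.includes(useCase)
  );
  return domainMatch ?? registry.find((m) => m.category === "oem_llm")!;
}
```

The design choice mirrors the article's framing: domain alignment wins when available, and breadth of language understanding serves as the fallback.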

What distinguishes ServiceNow’s approach is the seamless way in which these models are embedded into platform workflows. Whether leveraging out-of-the-box Now Assist skills or deploying custom GenAI use cases, all AI interactions are routed through the Generative AI Controller. This orchestration layer safeguards data privacy, manages prompt governance, and enforces platform-level controls through Now Assist Guardian.

Depending on the use case, requests are dynamically routed to the appropriate model interface:

  • Now LLM Spoke – Connects to ServiceNow’s native models for domain-aligned, high-precision outcomes.
  • SN OEM Azure Spoke – Integrates with Azure-hosted GPT models for broader language understanding.
  • Third-Party LLM Connectors – Enables customers to connect their own external or proprietary models through the Skill Kit or other approved connectors.
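The controller-and-spokes flow above can be sketched as a small dispatch function: every request first passes a policy check (standing in for Now Assist Guardian), then is routed to the spoke matching its configured model source. All names here (`GenAIRequest`, `guardianAllows`, `routeRequest`, the banned-term list) are illustrative assumptions, not the platform's actual interfaces.

```typescript
// Hypothetical sketch of the routing tiers described above.
type Spoke = "now_llm_spoke" | "sn_oem_azure_spoke" | "third_party_connector";

interface GenAIRequest {
  skill: string;                          // e.g. "case_summarization"
  modelSource: "now" | "azure_oem" | "byo";
  prompt: string;
}

// Stand-in for a Guardian-style policy check: reject prompts that
// contain terms a governance policy has flagged.
function guardianAllows(req: GenAIRequest): boolean {
  const bannedTerms = ["password", "ssn"];
  return !bannedTerms.some((t) => req.prompt.toLowerCase().includes(t));
}

// Controller layer: enforce governance first, then dispatch the request
// to the spoke that matches its configured model source.
function routeRequest(req: GenAIRequest): Spoke {
  if (!guardianAllows(req)) {
    throw new Error("Request blocked by guardrail policy");
  }
  switch (req.modelSource) {
    case "now":
      return "now_llm_spoke";
    case "azure_oem":
      return "sn_oem_azure_spoke";
    case "byo":
      return "third_party_connector";
  }
}
```

The key property the sketch captures is that governance sits in front of every spoke: no request reaches a model, native or external, without first clearing the policy layer.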

Through this tiered and secure architecture, ServiceNow empowers organizations to adopt AI with confidence that every model interaction, whether domain-specific or open, is secure, policy-driven, and seamlessly woven into the platform’s core workflows.
