on 07-31-2025 07:40 AM - edited Tuesday
This article is for customers upgrading to Yokohama Patch 6 and Zurich Patch 1
Today's enterprises are embracing AI to transform operations, but a "one-size-fits-all" strategy no longer works. Different departments have unique AI needs, from IT's automation requirements to Customer Service's need for empathetic conversations. That's why ServiceNow is excited to introduce Model provider flexibility, empowering you to align the ideal AI model provider with the distinct demands of every workflow on the ServiceNow AI Platform.
What's new: Flexible model providers across the ServiceNow AI Platform
We've fundamentally enhanced how you access, configure, and govern AI models on the ServiceNow AI Platform, giving you unprecedented Model provider flexibility. This isn't a single feature, but a powerful set of capabilities woven into key areas you already use:
- Expanded model provider access: You can now seamlessly integrate and choose from a diverse range of leading AI models, including ServiceNow's platform-native Now LLMs, and best-in-class third-party providers like Microsoft Azure OpenAI, Google Gemini, and Anthropic Claude on AWS via Amazon Bedrock. This broad selection ensures you can align the perfect model to every unique workflow.
- Centralized governance in AI Control Tower: Your AI Stewards gain enhanced control, defining enterprise-wide policies for approved model providers, data routing, and fallback logic directly within AI Control Tower. This ensures secure and compliant AI deployment at scale.
- Streamlined configuration in Now Assist Admin console: Administrators can easily assign and configure these approved model providers for specific functional groups or individual Now Assist skills via the Now Assist Admin console. No custom code or external integrations are needed for setup.
- Powering custom skills with integrated model providers: For teams developing custom AI capabilities, the Now Assist Skill Kit now fully supports leveraging our integrated model providers. This means you can build bespoke AI solutions and still benefit from the diverse strengths of the Now LLM Service, Microsoft Azure OpenAI, Google Gemini, or Anthropic Claude on AWS via Amazon Bedrock, without the complexity of managing your own external models.
These integrated capabilities empower AI Stewards, Admins, Developers, and Architects to optimize AI performance, ensure compliance, and accelerate innovation across your organization.
Why It Matters: Unlock smarter outcomes, faster innovation, and greater control
Model provider flexibility isn't just a technical enhancement; it's a strategic advantage. Our hybrid model strategy combines robust platform-native intelligence with seamless access to leading third-party providers, all within the trusted ServiceNow AI Platform, delivering powerful advantages that directly benefit your operational efficiency and accelerate your AI journey.
Here's how:
- Optimize Every AI Workflow for Smarter Outcomes: We know different tasks require distinct AI strengths. With this flexibility, you can precisely align each AI workflow to its ideal model provider—whether for advanced reasoning, high-volume processing, or handling sensitive interactions. This tailored approach eliminates the need to over-provision AI resources or compromise on performance, ensuring your AI is always aligned with your business context. Imagine your IT operations automating complex workflows with greater effectiveness, your HR team delivering nuanced responses with confidence, and your customer service providing empathetic interactions, all powered by the most appropriate model for the job.
- Accelerate Innovation and Achieve Faster Time to Value: The pace of AI innovation is relentless, and your business should move just as quickly. With our ServiceNow AI Platform, your teams can rapidly experiment, test, and update AI skills without the burden of rewriting workflows or managing complex API integrations. This built-in fluidity means you get from a great idea to a tangible business impact faster, turning AI concepts into real value with significantly less effort and a quicker speed to market. Our seamless integrations help you avoid vendor lock-in, enabling you to adapt and innovate with agility. This significantly reduces the need for custom scripting, allowing your developers and admins to focus on creating value rather than maintaining bespoke integrations.
- Deploy with Confidence and Control, at Enterprise Scale: For us, security, compliance, and responsible AI are non-negotiable. Model Provider Flexibility comes with built-in governance that's native to the ServiceNow AI Platform, not an afterthought. Your AI Stewards can define enterprise-wide policies, control data routing, and set fallback logic through AI Control Tower, while administrators configure data privacy in the Now Assist Admin console. This robust, centralized framework gives you the confidence to deploy and scale AI across your entire organization, securely and reliably, knowing that policies are enforced in real time.
- Simplicity at every step: We eliminate the complexity traditionally associated with using multiple AI model providers in an enterprise. Unlike managing separate vendor relationships, API integrations, and security protocols for each AI provider, ServiceNow handles everything for you. Simply choose your preferred AI model provider through the Now Assist Admin console, and ServiceNow manages all contracts, infrastructure, API keys, and version updates behind the scenes. There is no need for custom code, external integrations, or additional technical overhead. The same familiar configuration interfaces you already use become your single control point for deploying and managing AI across your organization. So, whether you are configuring a model for IT workflows, HR cases, or customer service interactions, the process remains consistently straightforward: select, configure, and deploy.
How it works: A hybrid approach to Enterprise AI
ServiceNow’s hybrid approach combines our robust platform-native models with seamless access to leading third-party providers:
- The power of Now LLM Service: Our platform-native Now LLMs are purpose-built and deeply integrated with your ServiceNow workflows, data models, and compliance frameworks. They offer consistent, reliable performance for core automation across ITSM, HRSD, CSM, and Creator skill sets, serving as the trusted foundation for your AI journey.
- Leading integrated model providers: Access best-in-class models from industry leaders like Microsoft Azure OpenAI, Google Gemini, and Anthropic Claude on AWS via Amazon Bedrock directly within Now Assist. ServiceNow manages the contracts, infrastructure, and APIs for you, allowing you to seamlessly leverage unique strengths for advanced reasoning, multimodal inputs, and complex task execution.
This hybrid approach means you’re choosing the right model provider for the right job while maintaining enterprise-grade control and simplicity of management, whether it’s an HR case resolution powered by Anthropic Claude via Amazon Bedrock with clear and empathetic responses, application generation by Google Gemini with deep technical reasoning, agentic orchestration with Microsoft Azure OpenAI, or IT summarization using the Now LLM Service.
Getting started is simple
Activating model providers is a straightforward, three-step process, all managed within the ServiceNow AI Platform: no separate tools or external setup required.
Before you begin, upgrade your instance(s) to Yokohama Patch 6 and update your Now Assist applications, such as Now Assist for ITSM, CSM, and HRSD.
Ensure the Now Assist Admin console and Generative AI Controller applications are up to date as well.
- Govern: Your AI Stewards define enterprise-wide policies for allowed model providers, data routing, and fallback logic in AI Control Tower. This is where you establish fundamental rules for your entire organization, specifying which model providers are approved and how data is routed based on factors like region, regulatory requirements, or specific use cases. You can also configure automated fallback logic, ensuring that if a primary model provider is unavailable or doesn't meet a certain performance threshold, the system automatically redirects to an approved alternative, maintaining continuous service and compliance.
- Deploy: Once policies are set by your AI Stewards, your administrators can go into the Now Assist Admin console to assign and configure the approved model providers for different parts of your business. This process is simple and does not require custom code or external integrations. You have ultimate flexibility in how this is done:
  - An admin can apply a single model provider across the entire instance for maximum consistency for all Now Assist skills.
  - For more targeted needs, they can apply a model provider to a functional group, such as 'Knowledge' or 'Conversational Experiences'. For example, they could choose a model provider with a safer tone specifically for all 'HR' or 'Conversational' skills.
  - For the most precise control, they can assign model providers on an individual skill basis, allowing you to match the absolute best model provider to every single use case.
- Enjoy: Your users immediately benefit from curated AI experiences, as the system automatically routes to the right model provider, tailored to their specific use case, without needing to think about the model provider behind it.
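Conceptually, the scoping and fallback behavior described in these steps can be sketched as "most specific assignment wins, then fall back to approved alternatives." This is a minimal illustrative sketch only: the configuration shape, scope names, and provider labels below are assumptions for the example, not actual ServiceNow APIs or data structures.

```python
# Hypothetical sketch of scope precedence (skill > functional group > instance)
# plus steward-approved fallback routing. Names are illustrative assumptions.

def resolve_provider(skill, group, config):
    """Skill-level assignment beats group-level, which beats the instance default."""
    if skill in config.get("skills", {}):
        return config["skills"][skill]
    if group in config.get("groups", {}):
        return config["groups"][group]
    return config["instance_default"]

def route(skill, group, config, available):
    """Try the resolved primary provider first, then approved fallbacks in order."""
    primary = resolve_provider(skill, group, config)
    for provider in [primary] + config.get("fallbacks", []):
        if provider in available:
            return provider
    raise RuntimeError("no approved model provider is currently available")

config = {
    "instance_default": "Now LLM Service",
    "groups": {"Conversational Experiences": "Anthropic Claude (Bedrock)"},
    "skills": {"app_generation": "Google Gemini"},
    "fallbacks": ["Now LLM Service"],
}

# A skill-level assignment overrides the instance default:
print(route("app_generation", "Creator", config, {"Google Gemini", "Now LLM Service"}))
```

Here, if the group-level provider for a conversational skill were temporarily unavailable, `route` would fall back to the approved alternative rather than failing, mirroring the continuous-service behavior described in the Govern step.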
We manage the complexity, so your teams can focus on impact. Model provider flexibility ensures your AI strategy is trusted, flexible, and governed by design, empowering your business to innovate with confidence and achieve smarter outcomes at scale.
Real-world use cases
Let's look at a couple of real-world scenarios where model provider flexibility can make a tangible difference:
- For an IT Operations Manager (ITSM, ITOM, SecOps): Imagine needing to quickly resolve complex IT incidents and automate proactive responses. With model provider flexibility, you can configure your AI to leverage Microsoft Azure OpenAI's advanced reasoning capabilities and strong instruction-following for agentic workflows and AI Agent orchestration, enabling more accurate problem solving and automation of complex, multi-step IT processes. Anthropic Claude on AWS via Amazon Bedrock’s advanced reasoning and extensive context window enable comprehensive threat analysis and correlation, delivering structured incident response recommendations from vast security data and helping SecOps teams quickly identify threats and vulnerabilities while maintaining compliance standards. For proactive issue detection and root cause analysis in highly dynamic environments, your teams can utilize Google Gemini's deep context understanding and swift, high-volume analysis of diverse IT and security datasets to quickly summarize high-priority alerts, identify potential impact from misconfigurations or vulnerabilities, and recommend mitigations before assets can be exploited. Find out more here:
- For an HR Lead (HRSD): To manage sensitive employee inquiries with nuance and empathy, your HR team can leverage AI models within Now Assist that are known for their balanced safety and tone control, such as Anthropic Claude on AWS, providing a responsible approach for sensitive interactions and detailed policy summarization. For high-volume, routine HR content creation like knowledge article generation for offer letters or standard policy updates, your team can utilize the Now LLM Service for efficient and accurate content generation across diverse data types, purpose-built for ServiceNow workflows. Find more on Now Assist for HRSD here: Now Assist for HR Service Delivery (HRSD)
- For a Customer Service Agent (CSM): When dealing with customer cases, agents need quick insights and recommended actions. Model provider flexibility allows admins to select the Now LLM Service's fast summarization capabilities for chat summarization on handoffs or email reply recommendations to speed up responses. For handling complex customer scenarios, Google Gemini's multimodal understanding and nuanced language comprehension can help agents understand customer emotions and complex queries, ensuring more empathetic and effective interactions and potentially reducing case escalations. Find more on Now Assist for CSM here: Now Assist for Customer Service Management (CSM)
- For a Developer/Architect (Creator Workflows): Developers can accelerate their work with powerful AI-driven capabilities. You can leverage Microsoft Azure OpenAI's highly accurate code generation and complex instruction-following for intelligent code recommendations and app generation. For generating complex application logic, flow generation, playbook generation, or even detailed data visualizations, you can select Google Gemini's deep reasoning and strong performance across various programming languages. Find more on Now Assist for Creator Workflows here: Now Assist for Creator
- For Building Custom Skills with Now Assist Skill Kit: For unique, business-specific AI capabilities not covered by out-of-the-box skills, developers can build custom skills using Now Assist Skill Kit. For example, a developer might build a custom skill to analyze specific contract clauses for compliance, requiring a model with an extensive context window and strong legal reasoning. With model provider flexibility, they can choose an integrated model provider that offers these strengths, directly within their custom skill, without needing to manage external integrations. Find more on Now Assist Skill Kit here: Now Assist Skill Kit
Built-in governance, simplified licensing, and global language support
ServiceNow ensures that governance is native to the platform, not an afterthought. AI Stewards can define enterprise-wide policies in AI Control Tower, while administrators configure data privacy and processing controls in the Now Assist Admin console. This centralized control gives you confidence in deploying AI securely and reliably.
Our licensing model is designed to be simple and predictable: consumption is measured via "assists," and you pay the same rate regardless of which integrated model provider you choose. This eliminates unpredictable costs and encourages innovation.
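The flat-rate metering described above means total spend is a function of assist volume alone. As a tiny worked sketch (the rate below is a made-up number for illustration, not actual ServiceNow pricing):

```python
# Illustrative flat-rate "assists" metering. RATE_PER_ASSIST is a made-up
# example value, not real pricing.
RATE_PER_ASSIST = 0.05

def monthly_cost(assists_by_provider):
    """Cost depends only on the total number of assists consumed,
    not on which integrated model provider served each one."""
    return sum(assists_by_provider.values()) * RATE_PER_ASSIST

# Splitting the same 1,000 assists across providers changes nothing:
mixed = monthly_cost({"Now LLM Service": 800, "Google Gemini": 200})
single = monthly_cost({"Now LLM Service": 1000})
```

Because the two totals are identical, admins can reassign providers per skill without re-forecasting budgets.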
Furthermore, AI needs to speak your users' language globally. Our platform leverages best-in-class native language support for P1 and P2 languages from each model provider and Now Assist Dynamic Translation. If a selected model provider doesn't natively support a specific language, the Now Assist Dynamic Translation service seamlessly steps in, ensuring a consistent, high-quality multilingual experience for all your users worldwide.
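The translation fallback just described follows a simple pattern: generate natively when the provider supports the user's language, otherwise translate in and out around an English-language generation. The sketch below is a conceptual illustration only; the function names and the translation stand-in are assumptions, not the actual Now Assist Dynamic Translation API.

```python
# Conceptual sketch of language fallback: native generation when supported,
# translate-in/translate-out otherwise. All names here are illustrative.

def generate_reply(prompt, language, supported_languages, call_model, translate):
    """Use the provider natively when it supports the language; otherwise
    translate the prompt to English, generate, and translate the reply back."""
    if language in supported_languages:
        return call_model(prompt, language)
    english_prompt = translate(prompt, src=language, dst="en")
    english_reply = call_model(english_prompt, "en")
    return translate(english_reply, src="en", dst=language)

# Stand-in implementations so the sketch runs end to end.
call_model = lambda text, lang: f"[{lang}] reply to: {text}"
translate = lambda text, src, dst: f"<{src}->{dst}> {text}"

# Native path: no translation wrapping occurs.
print(generate_reply("hi", "en", {"en"}, call_model, translate))
```

Either path returns a reply in the user's language, which is why the experience stays consistent regardless of which provider a skill is assigned.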
Model provider strengths at a glance
To help you choose the ideal AI for every workflow, here’s a high-level overview of the strengths offered by our integrated model providers:
- Now LLM Service: Purpose-built and platform-native for the ServiceNow AI Platform, excelling at reasoning and instruction following. Ideal for fast summarization, question answering, content generation, and multi-step agentic workflows deeply integrated into ITSM, HRSD, CSM, and Creator skill sets. Offers stable, consistent performance and comes with built-in Responsible AI governance.
Learn More: Responsible AI at ServiceNow, Now LLM Service model cards
- Microsoft Azure OpenAI: Leverages advanced GPT models (like GPT-4.1-mini and GPT-4.1) for versatile reasoning and language understanding. Excels in following complex instructions, handling multi-turn conversations, and delivering accurate summarization and classification. Boosts developer productivity with AI-assisted code generation and debugging, and supports agentic workflow enablement for ITSM, CSM, and Creator processes.
Learn More: Microsoft: Azure OpenAI in Foundry Models, OpenAI 4.1 system cards
- Google Gemini: Available as Gemini 2.5 Flash and Pro, offering fast, scalable AI with deep reasoning. Key strengths include broad comprehension across diverse data types (text, images, code) for deep context understanding in incidents and alerts. Excels in advanced content and code generation, automating content creation and accelerating development.
Learn More: Google: Introducing Gemini: our largest and most capable AI model, Google AI Blog: We're expanding our Gemini 2.5 family of models, Google for Developers: 7 examples of Gemini’s multimodal capabilities in action
- AWS Anthropic Claude: Delivered through Amazon Bedrock, Claude 3.7 Sonnet is a hybrid reasoning model known for balanced safety and tone control, ideal for sensitive HR, legal, and customer-facing scenarios. Supports advanced reasoning with large context windows for deep understanding of complex documents and multi-step workflows. Robust for orchestrating complex AI agent tasks and policy summarization.
Learn More: Amazon Bedrock: Anthropic's Claude in Amazon Bedrock, AWS TV: Anthropic's Claude: Powerful AI Model now on Amazon Bedrock for Enterprise, Anthropic Claude in Amazon Bedrock, Anthropic Claude 3.7 Sonnet System Card
We’re continuously enhancing the ServiceNow AI Platform to empower your enterprise AI journey. Look out for future updates that will bring even more fine-grained control, deeper integrations with emerging AI models, and expanded language capabilities to further optimize your AI workflows.
Ready to empower your teams with the right AI for every workflow?
- Explore the full capabilities of Model provider flexibility on the ServiceNow AI Platform today.
- Visit our product documentation to get started with configuration and deployment.
- Share your innovative use cases in the comments below!
FAQs
See the following FAQs here: Model Provider Flexibility (ServiceNow-integrated model provider) FAQ
For more information on data handling, security, and responsible AI for our integrated model providers, see this article.
Having the freedom to use model providers is a positive improvement!
Can all native pre-built skills and agents be switched from Now LLM to a trusted provider, or would these require rebuilding by the customer?
Also, when using an integrated model provider such as Azure OpenAI, does this involve an integration back to the customer's own Azure tenant for the processing? Or is it just that instead of ServiceNow's Now LLM an OpenAI LLM is used behind the scenes, but all within the ServiceNow infrastructure?
Note: The final two links for FAQ and more information about the data handling, security and responsible AI are not working, so unable to see if this is already answered - Access Denied (even when logged in).
Hi @David Blasdell once you upgrade to Yokohama Patch 9, you can switch over pre-built/OOTB skills and agentic workflows to a trusted provider.
We do suggest you designate an AI Steward to create a policy in AI Control Tower specifying which providers your organization allows; that policy determines which providers are available for your admins to choose from.
The integrated providers such as Azure OpenAI use a ServiceNow-managed connection to Azure infrastructure, not the customer's tenant, for processing. Thanks for the heads-up on the links; they've been corrected. If you want details on the tenant infrastructure and processing, the data processing link in this article goes into detail: https://www.servicenow.com/community/now-assist-articles/now-assist-responsible-ai-data-handling-amp...