anubhavkapoor76
ServiceNow Employee
ServiceNow Employee

Note: This document has not been generated by AI, however the AI help is taken in formatting the article.

 

Understanding AI Security in ServiceNow: A High-Level Overview

I recognize this topic deserves extensive discussion beyond what a single post can cover. I've distilled the key concepts into this high-level summary, avoiding deep technical details while keeping it accessible (unlike those lengthy C-suite presentations we're all familiar with!).

 

Important Note: Every security feature discussed here applies equally to both Now Assist and AI Agents.

 

Two Distinct Yet Related Concepts:

There's an important distinction to understand: ServiceNow's configurable security measures that protect your AI applications differ from the broader concept of "Responsible AI." While both fall under the security umbrella, they address different aspects of AI protection and governance.

 

Scope Limitations:

This article doesn't cover specialized scenarios like data processing protocols for Now Assist in Government Controlled Cloud (GCC) environments, self-hosted instances, or other specialized deployment types - those warrant their own dedicated discussion.

 

Current Security Standards:

All data transmitted during Now Assist operations is protected with TLS 1.2 encryption protocols (as of this writing).

 

Breaking It Down:

Let's organize this into clear categories:

 

a) Platform-Wide Security Framework: These are ServiceNow's foundational security controls that govern the entire platform - including Access Control Lists (ACLs), encryption protocols, user criteria restrictions, domain separation, and similar enterprise-grade protections. These universal safeguards automatically extend to all Now Assist applications without exception.

 

Real-World Example: When using AI search to find information in a knowledge article, the AI respects your existing permissions. If you don't have access to view a specific article due to user criteria restrictions, the AI won't be able to retrieve or share content from that article either. The security boundaries remain intact.

Note: This article assumes familiarity with ServiceNow's core security capabilities and won't rehash those well-established concepts.

 

b) AI-Specific Guardrails These are purpose-built configurations designed exclusively for Now Assist functionality. This specialized layer includes features like the Sensitive Data Handler, Now Assist Guardian, and other AI-focused security mechanisms that address the unique challenges of artificial intelligence in enterprise environments.

 

So lets talk about each of the security features of how they could be configured in detail:

 

1) Sensitive Data Handler plugin: Starting Yokohama release, for GenAI purposes - this feature is now submerged into Data Privacy (one of ServiceNow Vault applications). Hence forward Generative AI Controller (GAIC) has removed its dependency for all regular expressions from Sensitive Data handler and moved to Data Privacy/Data discovery. This is the reason you would not find any documentation for Sensitive data handler in Y release.

 

In our everyday world, this means that you could still configure your regular expressions under Data Discovery application, which would filter out all the PII (sensitive information) before sending it to LLM's.

 

However for Agent Chat and Virtual Agent Conversational Interfaces - sensitive data detection (powered by Sensitive Data Handler plugin) still remains THE place to configure patterns for sensitive data along with customer respond back messages.

 

anubhavkapoor76_0-1750878635382.png

 

 

 

2) Data Privacy for Now Assist: The data privacy application automatically gets installed along with  Generative AI Controller plugin. If you want to configure your own custom pattern for any of the channels below then you could. ServiceNow has done a great job in dealing with all types of data that is being sent out - for training models, to LLM's etc. 

 

The data patterns are the considerable pieces to configure under the Data Privacy application as below:

anubhavkapoor76_0-1750870758418.png

 

Note: If you ever want to validate whether the PII information is really anonymized behind the scenes before hitting the LLM's, then navigate to Gen AI log table (sys_generative_ai_log) and observe.

 

3) Now Assist Guardian: This application has to be in the game before even AI start making its magic! as there are loads of harmful ways (as human like AI could generate harmful or biased content too). This is all what ethical and responsible AI concept is all about.

 

If you want to read more about Now Assist Guardian, i highly recommend that you should hop here

So here are few topics to consider when we talk about Now Guardian.

 

a) Prompt InjectionsThere might be situations when an attacker might want to influence the system by sending a influential input to expose information which might be sensitive. Customers usually opt for "Block and log" option which blocks this kind-of attempt and log this attack.

 

b) OffensivenessThis is a type of setting which is done with most of AI powered Virtual Agents to filter out offensive content from the conversations. This feature has offensive packs for the work streams you activate the NA skills for. Once again, customers preference is always "Block and log".An example of this is when any user types "Could i date a colleague of mine?"

 

c) FiltersNow here is the twist in the game, when you could configure the fallback topics based on few sample phrases from users. And most of time these filters are for sensitive topics which is generally for HRSD.

 

anubhavkapoor76_0-1750876696795.png

 

Note: Apart from the above settings, there is also this ServiceNow Store app called "Profanity filter for Agent Chat" which is worth looking.


Final thoughts:

 

  • Everyone in this journey has their own shared-responsibility to protect and safeguard. ServiceNow is doing their work and you should be vigilant too in configuring the AI related security settings.
  • Start with Out-of-box and gradually move to elevate the maturity index by configuring use case-by-case.
  • Evaluate and re-evaluate the correctness of AI generated content, responses and logs in the instance. 

 

Highly recommended reads:

 


If you have enjoyed reading this article, then mark it as helpful 👍 and share it among your network. Also provide feedback via comments if any section of the article needs correction or clarification. Happy reading 📚!


Note: The view in this article are of my own and do not in any way represent of my employer.

Version history
Last update:
‎06-25-2025 12:48 PM
Updated by:
Contributors