Anthony JC
Tera Explorer

As organizations accelerate their adoption of AI to improve productivity and service quality, safeguarding sensitive data has become a central priority. ServiceNow addresses this responsibility with a built-in privacy design embedded across the Now Assist framework. This model is anchored by four core principles that together create a secure and trustworthy AI ecosystem:

  1. Zero Persistence
    Prompts and responses are processed directly in memory and are not stored in ServiceNow data centers. Once the interaction concludes, temporary data is discarded to reduce residual risk.
  2. End-to-End Encryption
    Communication between the customer instance and the large language model is encrypted using Transport Layer Security (TLS) 1.2, mitigating risks of data interception.
  3. Customer Data Isolation
    Each instance processes data in a logically isolated environment, preventing cross-tenant exposure or sharing.
  4. Regional Data Residency
    AI data processing is confined to the customer’s region, adhering to local regulations such as GDPR and supporting jurisdiction-specific compliance requirements.

These pillars form the foundation of the Now Assist privacy framework, an approach designed to protect sensitive information without compromising the speed or quality of AI services. Building on this foundation, ServiceNow offers advanced capabilities that dynamically detect, mask, and anonymize sensitive data across conversational interfaces and generative AI workflows. Key among these are the Sensitive Data Handler and Data Privacy for Now Assist.

 

Sensitive Data Handler (SDH)

In scenarios like Virtual Agent or Agent Chat, the Sensitive Data Handler prevents exposure of personally identifiable information (PII) during AI interactions. It detects and obscures sensitive values such as contact numbers, email addresses, and identification details using customizable and preloaded regular expressions.

Functional Overview:

When users input sensitive details, the handler masks these values in real time, before the content interacts with any AI model. This process is confined within the instance environment, guaranteeing that raw data never exits the platform.
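Conceptually, this real-time detection works like a set of labeled regex rules applied to each message before it reaches a model. The patterns and function below are a minimal illustrative sketch, not ServiceNow's preloaded rule definitions:

```python
import re

# Hypothetical detection rules for illustration: each rule pairs a
# label with a regular expression, mirroring the customizable and
# preloaded regex rules described above.
DETECTION_RULES = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_message(text: str, placeholder: str = "****") -> str:
    """Replace every detected sensitive value before the text
    interacts with any AI model."""
    for rule in DETECTION_RULES.values():
        text = rule.sub(placeholder, text)
    return text
```

In the real platform this processing stays inside the instance environment; the sketch only shows the detect-and-substitute idea.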

How to Configure:

Administrators can enable this capability by:

  • Activating the plugin com.glide.sensitive_data_handling
  • Creating detection rules in the Conversational Interfaces settings:
    • From requester to agent
    • From agent to requester
    • From requester to Virtual Agent
  • Customizing regex rules to meet the organization’s data governance policies

Supported Masking Techniques:

  • Partial Obfuscation: This technique hides a portion of the sensitive value while retaining part of it for context or user recognition, e.g., a credit card number like 1234-5678-9012-4321 would appear as XXXX-XXXX-XXXX-4321.
  • Static Replacement: This approach replaces sensitive data with fixed placeholder values that do not resemble the original content but are consistent in format, e.g., a Social Security Number such as 111-22-3333 is replaced with a placeholder like 000-00-0000.
  • Full Redaction: Completely removes or blacks out the data, often using a generic placeholder, e.g., ************ or [REDACTED].
  • Synthetic Anonymization: Replaces real values with realistic, human-readable but fictitious equivalents, e.g., John Doe becomes Alex Carter; jane.doe@email.com becomes emily.smith@demo.com.
  • Randomized Values: Substitutes real values with randomly generated ones that do not correspond to the original in any way, e.g., an ID like 987654321 becomes 473920158.

These methods can be calibrated according to risk profiles or communication needs.
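The masking techniques listed above can be sketched as simple transformations. These helpers are illustrative only, using the same examples as the list; they do not reproduce the Sensitive Data Handler's internals:

```python
import random
import string

def partial_obfuscation(value: str, visible: int = 4) -> str:
    """Keep the last few characters for recognition; mask the rest."""
    masked = "".join("X" if ch.isalnum() else ch for ch in value[:-visible])
    return masked + value[-visible:]

def static_replacement(value: str, placeholder: str = "000-00-0000") -> str:
    """Swap the value for a fixed, format-consistent placeholder."""
    return placeholder

def full_redaction(value: str) -> str:
    """Hide the value entirely behind a generic marker."""
    return "[REDACTED]"

def randomized_value(value: str) -> str:
    """Replace each digit with a random digit unrelated to the original."""
    return "".join(random.choice(string.digits) if ch.isdigit() else ch
                   for ch in value)
```

For instance, `partial_obfuscation("1234-5678-9012-4321")` yields `XXXX-XXXX-XXXX-4321`, matching the credit card example above.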

 

Data Privacy for Now Assist (DPNA)

Tailored for generative AI use cases such as summarizing incidents or drafting resolutions, this capability masks sensitive data in user prompts before any model interaction occurs.

Workflow Description:

When a user prompt includes confidential data, DPNA replaces it with configured placeholders. This anonymized version is sent to the model. After processing, the original values are reinserted within the response, all within the memory of the instance. No unmasked content is transmitted externally.
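The mask-then-reinsert round trip can be sketched as follows. The numbered placeholder scheme and the email-only detection are assumptions for illustration, not the DPNA implementation:

```python
import re

# Hypothetical pattern for illustration; DPNA supports many data types.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def mask_prompt(prompt: str):
    """Replace each sensitive value with a numbered placeholder and
    keep the mapping in memory so the response can be restored."""
    mapping = {}
    def substitute(match):
        token = f"<<PII_{len(mapping)}>>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(substitute, prompt), mapping

def unmask_response(response: str, mapping: dict) -> str:
    """Reinsert the original values after the model responds."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response
```

Only the placeholder version would ever be sent to the model; the mapping never leaves the instance's memory.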

Setup Steps:

To activate DPNA:

  • Enable the following plugins:
    • Data Privacy (Classic) (com.glide.data_privacy)
    • Data Privacy Store App (sn_dp_store_app)
    • Data Discovery (sn_data_discovery)
    • Data Discovery APIs (com.glide.data_discovery)
  • Use Generative AI Controller to define masking templates
  • Create data redaction rules for inputs such as names, emails, IP addresses, and national IDs
  • Assign policies to Now Assist Skills, Virtual Agent, or AI Agents

Masking Options:

  • Synthetic Substitution: Replaces real user information with realistic, readable dummy data that mimics expected formats, e.g., jane.doe@company.com becomes alex.taylor@eg.com.
  • Static Placeholder: Replaces sensitive information with fixed, predetermined values that do not vary across use cases, e.g., any ZIP code is replaced with 123456, or any phone number becomes 000-000-0000.
  • Partial Obfuscation: Masks a portion of the value while keeping part of it visible for context or reference. (See the example mentioned in the SDH section.)
  • Tokenization: Converts sensitive values into unique reference tokens that have no exploitable relationship to the original value, e.g., john.doe@company.com becomes TOKEN_7832_ABC.
  • Random Value Generation: Replaces original data with randomly generated values that do not reflect any part of the source and cannot be traced back, e.g., a user ID like D3345 becomes Z8K90.

Multiple techniques can be combined for stronger privacy controls.
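Of the options above, tokenization is the least intuitive, so here is a minimal sketch of the idea: an in-memory vault maps each value to an opaque token with no exploitable relationship to the original. The class name, token format, and storage are assumptions for illustration:

```python
import secrets

class TokenVault:
    """Illustrative tokenization vault kept inside the instance."""

    def __init__(self):
        self._forward = {}   # original value -> token
        self._reverse = {}   # token -> original value

    def tokenize(self, value: str) -> str:
        # Reuse the token if this value was seen before, so references
        # to the same entity stay consistent within a session.
        if value not in self._forward:
            token = "TOKEN_" + secrets.token_hex(4).upper()
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]
```

Because the token is random, it cannot be reversed without the vault, which is the property that distinguishes tokenization from format-preserving masking.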

 

Beyond Masking: Additional Security Layers

ServiceNow enhances privacy with multiple layers beyond just data anonymization:

  • Inference: AI inference happens live, per request, every time a user interacts with Now Assist; no prompt or response is stored after processing.
  • Role-Based Access Control (RBAC): Governs who can view or edit specific fields or records.
  • Retrieval-Augmented Generation (RAG): Limits what content the model can access. For example, when handling HR requests, the AI references only approved HR knowledge articles.

These additional security layers collectively reduce exposure, restrict access, and improve auditability.
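The RAG scoping idea above can be sketched as a filter applied before any prompt is assembled: candidate articles outside the approved knowledge base never reach the model. The article structure and the approved-source list are assumptions for illustration:

```python
# Hypothetical set of knowledge sources approved for a given workflow.
APPROVED_SOURCES = {"HR Knowledge Base"}

def scope_context(articles, topic):
    """Return only articles from approved sources that match the topic,
    so retrieval never surfaces out-of-scope content to the model."""
    return [a for a in articles
            if a["source"] in APPROVED_SOURCES and a["topic"] == topic]
```

Combined with RBAC, this means the model's context window is bounded both by what the user may see and by what the workflow is allowed to retrieve.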

 


Implementation Recommendations

To establish a privacy-resilient AI framework within ServiceNow, organizations should take a structured, proactive approach that embeds privacy considerations throughout the lifecycle of AI deployment. The following practices help align platform capabilities with regulatory expectations.

  1. Activate privacy-related plugins early in your deployment cycle
  2. Tailor masking rules to your organization’s regulatory environment
  3. Perform regular audits of privacy logs and interactions
  4. Run scenario testing to validate masking behavior
  5. Stay up to date with ServiceNow’s privacy and AI roadmap

 

Final Thoughts

Data privacy is not a secondary concern in ServiceNow’s AI journey; it is foundational. By embedding privacy protections directly into the architecture, ServiceNow enables organizations to adopt generative AI responsibly. Tools like the Sensitive Data Handler and Data Privacy for Now Assist actively mask sensitive information before model interaction. Combined with Role-Based Access Control, Retrieval-Augmented Generation, ServiceNow Vault, and Now Assist Guardian, the platform delivers a layered, defense-in-depth privacy model. This approach empowers enterprises to innovate with AI while maintaining regulatory compliance, user trust, and data integrity, making privacy not just a feature but a strategic advantage.

Comments
Moumita Anthony
Tera Explorer

Well-articulated. A strong reminder that in ServiceNow’s AI journey, privacy isn’t an add-on—it’s a strategic foundation. Excellent clarity on how layered safeguards enable responsible innovation.
