Hi everyone,
I’ve been exploring how AI is being used within ServiceNow workflows (ticket summaries, knowledge suggestions, automation, etc.), and I’m curious about one key challenge:
How are you handling sensitive data when using AI?
In many cases, tickets and records can contain:
- Personal data (PII)
- Internal business information
- Customer-specific details
Sending that directly to external AI models can raise compliance and privacy concerns.
I’m currently exploring an approach where data is anonymized before it’s processed by AI — essentially adding a layer that removes or masks sensitive information while still preserving usefulness.
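To make the idea concrete, here is a minimal sketch of what such a masking layer could look like before a record is handed to an external model. This is purely illustrative, not a ServiceNow API: the pattern names, placeholders, and the `mask_pii` helper are all hypothetical, and real deployments would need far more robust detection (e.g. an NER-based PII scanner) than a few regexes.

```python
import re

# Hypothetical masking layer: replace common PII patterns with placeholder
# tokens before sending text to an external AI model. Patterns and labels
# are illustrative only; production use needs broader, tested detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Return text with matched PII replaced by [LABEL] placeholders."""
    # SSN first, so the looser phone pattern cannot partially consume one.
    for label in ("SSN", "EMAIL", "PHONE"):
        text = PATTERNS[label].sub(f"[{label}]", text)
    return text

ticket = "User jane.doe@example.com called from 555-123-4567 about SSN 123-45-6789."
print(mask_pii(ticket))
# → User [EMAIL] called from [PHONE] about SSN [SSN].
```

The masked text keeps enough structure for summarization or classification while the raw identifiers never leave the instance; a reversible variant could store a placeholder-to-value map so responses can be re-identified internally.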
A few questions for those working with ServiceNow AI:
- Are you using AI features on sensitive records today?
- Do you rely fully on built-in ServiceNow AI, or on external models as well?
- Is data privacy a blocker in your use cases?
Would really appreciate hearing how others are approaching this in real environments.
ServiceNow provides Data Privacy plugins you can use for this.
Please mark any helpful or correct solutions as such. That helps others find their solutions.
Mark
