

The 3 AM Reality Check

You're at lunch with a colleague and launch straight in: "Let me tell you about last Tuesday. Three applications degraded simultaneously. Monitoring showed symptoms everywhere, but which system was actually broken? Twenty minutes into manual correlation, the VP of Operations pinged me on Teams asking for an update."

 

Sound familiar?

 

This isn't a "keeping the lights on" story; it's a "why are the lights still flickering in 2025?" story. And it's exactly why I've been deep-diving into ServiceNow's latest capabilities: Workflow Data Fabric, Agentic AI with AI Control Tower, and RaptorDB.

 

These aren't incremental upgrades. They represent a fundamentally different operating model. But here's what the marketing slides won't tell you: they also require rethinking how your team works.

 

Workflow Data Fabric: When "Real-Time" Actually Means Real-Time

What the docs say: Zero-copy connectors to Snowflake, BigQuery, Databricks. Real-time data access. No duplication.

 

What I learned the hard way: Your data governance had better be rock-solid before you start exposing data across systems.

The Starter Guide makes setup look straightforward, and technically, it is. But here's what it doesn't emphasize enough: once you connect these systems, you're accountable for what happens with that data.

 

What Actually Gets Better

In testing, I connected our CMDB to external monitoring data. Correlation that used to take me 20 minutes manually dropped to seconds. Not because of AI magic, but because data that lives in six different systems is finally accessible in one place.
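To make that concrete, here's a minimal sketch of the kind of correlation this unlocks: grouping external monitoring events by the business application their CI supports. The record shapes and function names are hypothetical stand-ins, not the actual CMDB or Data Fabric schema.

```typescript
// Hypothetical record shapes -- not the actual CMDB or Data Fabric schema.
interface MonitoringEvent {
  ciName: string;   // identifier shared with the CMDB
  metric: string;
  severity: number; // 1 = critical ... 5 = info
}

interface CmdbCi {
  name: string;
  application: string; // business service this CI supports
}

// Group incoming events by the application they impact, so symptoms
// scattered across systems collapse into a view of what's actually broken.
function correlateByApplication(
  events: MonitoringEvent[],
  cis: CmdbCi[]
): Map<string, MonitoringEvent[]> {
  const ciToApp = new Map(cis.map((ci) => [ci.name, ci.application] as const));
  const byApp = new Map<string, MonitoringEvent[]>();
  for (const ev of events) {
    const app = ciToApp.get(ev.ciName) ?? "unmapped";
    byApp.set(app, [...(byApp.get(app) ?? []), ev]);
  }
  return byApp;
}
```

The point isn't the code; it's that this join only becomes trivial once both datasets are reachable from one place.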

 

The performance claims are legit. But you need to think through:

  • Data freshness expectations: Is "real-time" actually seconds? Minutes? What's acceptable for your use cases? (a minimal staleness guard is sketched after this list)
  • Access control: Who should see what? Just because you can surface data doesn't mean everyone should access it
  • Dependency mapping: When external sources go down, what workflows break?
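On the freshness point, here's a minimal sketch of what I mean: give each use case an explicit staleness budget and refuse to act on data older than that. The use-case names and thresholds are illustrative, not recommendations.

```typescript
// Illustrative staleness budgets per use case -- tune these to your needs.
const STALENESS_BUDGET_MS: Record<string, number> = {
  incidentCorrelation: 30_000,  // live triage needs seconds-fresh data
  capacityReporting: 3_600_000, // an hour is fine for trend reports
};

// Returns true only if the source data is fresh enough for this use case.
function isFreshEnough(lastUpdated: Date, useCase: string): boolean {
  if (!(useCase in STALENESS_BUDGET_MS)) {
    throw new Error(`No staleness budget defined for: ${useCase}`);
  }
  return Date.now() - lastUpdated.getTime() <= STALENESS_BUDGET_MS[useCase];
}
```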

The Honest Implementation Path

  1. Start small: Pick one external data source. One workflow. Prove value.
  2. Document dependencies: You're creating a new class of failure modes. Map them.
  3. Test failure scenarios: What happens when Snowflake is slow? When BigQuery times out? (a timeout-and-fallback sketch follows this list)
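For step 3, the pattern I keep reaching for is a timeout with an explicit fallback, so a slow external source degrades a workflow instead of hanging it. This is a generic sketch; fetchExternalData and cachedRows are stand-ins for whatever connector call and fallback data your workflow uses.

```typescript
// Race the real call against a timer; if the timer wins, return the
// fallback instead of hanging the workflow on a slow external source.
async function withTimeout<T>(
  work: Promise<T>,
  ms: number,
  fallback: T
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const onTimeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  try {
    return await Promise.race([work, onTimeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}

// Usage sketch: fall back to cached rows and treat the result as degraded.
// const rows = await withTimeout(fetchExternalData(), 5_000, cachedRows);
```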


Agentic AI: The Part Where We Talk About What Could Go Wrong

Let's get real: Everyone's excited about AI agents that can "reason, orchestrate, and take action." I am too. But after building some test cases, here's what keeps me up at night.

 

The Capability Is Real

I built a proof-of-concept agent for P2 incident triage. It works. It correlates events, checks recent changes, proposes root causes, and even suggests remediation. When it works, it's legitimately impressive.

But here's the thing about AI Agent Studio: it makes building agents easy. Maybe too easy.

 

What the Community Needs to Talk About

Guardrails aren't optional. The AI Control Tower is supposed to help with governance, but it's still on you to define the following (a minimal policy sketch follows this list):

  • When does an agent escalate to a human?
  • What actions require approval vs. auto-execution?
  • How do you handle agent failures gracefully?
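Here's a minimal sketch of what defining those rules can look like: every proposed agent action passes through one explicit, auditable gate before anything executes. The action shape and thresholds are hypothetical; set them to your own risk tolerance.

```typescript
type Disposition = "auto_execute" | "needs_approval" | "escalate_to_human";

// Hypothetical shape for what an agent proposes.
interface ProposedAction {
  name: string;
  confidence: number;  // 0..1, reported by the agent
  reversible: boolean; // can we cleanly undo it?
}

// One explicit, auditable decision point for every proposed action.
function gate(action: ProposedAction): Disposition {
  if (!action.reversible) return "needs_approval"; // never auto-run one-way doors
  if (action.confidence < 0.7) return "escalate_to_human";
  if (action.confidence < 0.95) return "needs_approval";
  return "auto_execute";
}
```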

Agent drift is real. These things learn from outcomes. If your processes are inconsistent, your agents will inherit that chaos. Fix your workflows before automating them with AI.

 

The trust problem. Your team needs to trust these agents. That takes time. I'm starting with low-stakes workflows and building confidence incrementally. Anyone telling you to deploy AI agents straight to production is selling something. Remember, if you get this right, auditors won't be looking over your shoulder for proof of a control anymore; you'll be able to show them it works right from your Control Tower!

 

My Current Approach

  1. Shadow mode first: Let agents suggest actions while humans execute. Build a feedback loop (sketched after this list).
  2. Narrow scope: One use case, thoroughly tested, before expanding
  3. Over-communicate: Your team should know exactly what each agent does and doesn't do
  4. Version control: Track agent changes like you would code. Because that's what it is.
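For shadow mode specifically, the core of the feedback loop can be this simple: record the agent's suggestion next to what the human actually did, and let agreement rate over a meaningful sample be the evidence for widening autonomy. The record shape and thresholds here are illustrative.

```typescript
// One row per incident: what the agent suggested vs. what the human did.
interface ShadowRecord {
  incidentId: string;
  agentSuggestion: string;
  humanAction: string;
}

// Fraction of incidents where the agent's suggestion matched the human's action.
function agreementRate(log: ShadowRecord[]): number {
  if (log.length === 0) return 0;
  const matches = log.filter((r) => r.agentSuggestion === r.humanAction).length;
  return matches / log.length;
}

// Illustrative promotion check: only consider moving past shadow mode once
// agreement is high over a meaningful sample.
function readyToPromote(log: ShadowRecord[]): boolean {
  return log.length >= 50 && agreementRate(log) >= 0.9;
}
```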


 

RaptorDB: The Unsung Hero

I'll be honest: database performance doesn't usually get me excited, but RaptorDB is the exception. The official claims got my attention (there's a video of my excitement about this here).

 

What I’m Testing

ServiceNow claims up to 27× faster analytics and 53% faster transactions. I'm currently benchmarking RaptorDB in our sub-prod environment. Early observations:

  • Complex queries are noticeably faster
  • Dashboard load times have improved
  • Reporting queries seem more responsive

But I'm not publishing specific numbers yet because my baseline measurements need more consistency. My data volume may not be representative, so I want at least 30 days of comparative data.
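For what it's worth, the shape of the timing harness I'm using to make baselines consistent looks like this: repeat the same query, discard warm-up runs, and keep the median rather than a single sample. runQuery is a stand-in for whichever query path you're measuring.

```typescript
// Median latency over repeated runs, skipping warm-up iterations so
// cold caches don't distort the baseline.
async function medianLatencyMs(
  runQuery: () => Promise<unknown>,
  runs = 20,
  warmup = 3
): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs + warmup; i++) {
    const start = performance.now();
    await runQuery();
    const elapsed = performance.now() - start;
    if (i >= warmup) samples.push(elapsed); // keep only post-warm-up runs
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)];
}
```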

 

What Actually Matters

HTAP (Hybrid Transactional/Analytical Processing) means you're not choosing between operational speed and reporting speed anymore. For high-volume ITOM environments, this is legitimately game-changing.

 

Important Note: RaptorDB Pro is a separate SKU, so factor that into your business case. RaptorDB Standard becomes the default for new instances, with existing customers migrating later.

 

The Migration Question

If you're on MariaDB today, migration isn't automatic. There's planning involved. The Community has started sharing experiences, but this is still early days.

 

What Matters More Than My Tests

Every organization’s environment is different. Your data model is different. And how well your current database is already optimized will shape how much improvement you actually see.

 

Before you migrate:

  • Benchmark your current performance (you'll need baselines)
  • Identify your query patterns: time your most critical queries before and after. That's the comparison that matters
  • Test in a non-prod environment first
  • Have a rollback plan


 

The Integration Story: When Everything Connects

Here's where it gets interesting. These three capabilities aren't just features, they're a stack:

RaptorDB provides the performance foundation. Workflow Data Fabric connects external data. Agentic AI acts on unified, real-time information.

 

In theory, you get:

  • Agents that work with complete context
  • Decisions based on actual current state
  • Actions that complete before problems escalate

In practice, you get:

  • New failure modes to plan for
  • More complex dependency chains
  • Higher expectations from stakeholders

A Real-World Scenario (In Progress)

I’m building toward this: When a performance degradation is detected, an agent:

  1. Pulls real-time metrics from our observability platform (via Data Fabric)
  2. Correlates with recent changes in ServiceNow (native data)
  3. Cross-references known issues and patterns (RaptorDB-powered analytics)
  4. Proposes a remediation with confidence score
  5. Auto-executes if confidence > 95%, otherwise escalates with full context

I’m not there yet. I’m currently at step 3. But the path is clear.
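Here's that five-step flow reduced to control logic, as a minimal sketch. Every step function is a hypothetical stand-in for the real integration; only the sequencing and the 95% confidence gate reflect the design I described.

```typescript
interface Remediation {
  description: string;
  confidence: number; // 0..1
}

// Steps 1-5 from the scenario above; each callback is a hypothetical
// stand-in for the real integration point.
async function handleDegradation(
  getMetrics: () => Promise<unknown>,       // step 1: observability via Data Fabric
  getRecentChanges: () => Promise<unknown>, // step 2: native ServiceNow data
  proposeRemediation: (m: unknown, c: unknown) => Promise<Remediation>, // steps 3-4
  execute: (r: Remediation) => Promise<void>,
  escalate: (r: Remediation, context: unknown[]) => Promise<void>
): Promise<void> {
  const metrics = await getMetrics();
  const changes = await getRecentChanges();
  const proposal = await proposeRemediation(metrics, changes);
  if (proposal.confidence > 0.95) {
    await execute(proposal); // step 5a: high confidence, auto-execute
  } else {
    await escalate(proposal, [metrics, changes]); // step 5b: escalate with full context
  }
}
```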

 

What the Community Should Be Asking

Instead of "Should we adopt these?", I think the better questions are:

  1. What workflows are killing your team right now? Start there. Not with the flashy demo use cases.
  2. What's your data governance maturity? Because Data Fabric will expose any gaps... loudly.
  3. How do you measure success? "Faster" and "more autonomous" sound great, but what metrics actually matter to your business?
  4. What skills gaps exist on your team? These capabilities require different thinking. Plan for training and adjustment time.
  5. What's your change management strategy? The technology is the easy part. Getting your team to trust and adopt it? That's the work.

My MVP Assessment

These capabilities are powerful. They're also complex. The vendor demos make everything look seamless. Reality is messier.

What's working:

  • Performance gains are real
  • The vision of unified data + AI is compelling
  • Early results from pilot workflows are promising

What needs work:

  • Documentation is still catching up (community is filling gaps)
  • Best practices are emerging, not established
  • The learning curve is steeper than marketing suggests

What I'm watching:

  • How AI Control Tower evolves (governance will make or break this, and your internal and external auditors will love you for getting it right)
  • Community patterns around Data Fabric security and access control
  • Real-world RaptorDB migration experiences

For IT Teams Considering This Path

Start Here

  1. Audit your current pain points: Where do you spend the most manual effort?
  2. Assess your data landscape: How many sources? How clean? How accessible?
  3. Evaluate team readiness: Skills, time, buy-in
  4. Define success metrics: Be specific. Be measurable.

Then Build

  • Pilot project: One workflow. Small scope. Clear success criteria.
  • Measure everything: Before and after. Quantify the value.
  • Iterate based on feedback: From your team, not just metrics
  • Scale gradually: Prove value before expanding scope


Let's Keep This Conversation Going

I'm documenting my journey with these capabilities transparently: the wins and the struggles. Because I think that's more valuable than another polished success story.

 

What I'd love to hear from you:

  • Are you piloting any of these capabilities? What's working? What isn't?
  • What concerns do you have about Agentic AI in production?
  • How are you thinking about data governance with Data Fabric?
  • What performance gains have you seen with RaptorDB?

Drop your experiences in the comments. Let's learn from each other's implementations and mistakes.

---------------------------------------------------------------------------------------------------------------------------------------

These are my personal observations from hands-on work with these capabilities. Your implementation will differ based on your environment, data, processes, and team. Always test in non-prod first. Always have a rollback plan. And never skip the change management work.

 

Connect: Teresa Purdy | LinkedIn

Tags: #WorkflowDataFabric #AgenticAI #RaptorDB #ITOM #ServiceOps #AIOps #ServiceNowCommunity #MVP #PlatformExperience

