Workflow Data Fabric and GRC Indicators

AlexR2
Tera Expert

We have started hearing about Workflow Data Fabric as a way to use external data for analysis in the platform without having to copy the data into the platform itself, which is our current approach. Has anyone experimented with WDF when it comes to GRC and control indicators? Would love to learn about your experience.

1 ACCEPTED SOLUTION

Abhi Sekhar
ServiceNow Employee

@AlexR2  This is a great question. I have recently completed an end-to-end experiment using Workflow Data Fabric (WDF) to integrate a Snowflake instance with ServiceNow GRC (IRM) Control Indicators.

The short answer is: It works seamlessly. You do not need to make any structural changes to the Control Indicator module; the integration relies entirely on how you configure the Data Fabric table.

Here is a detailed breakdown of the workflow and my findings from the experiment:

1. Connection and Schema Mapping

When you establish a new connection via Workflow Data Fabric, you gain direct visibility into the external source (in my case, Snowflake). You can see all schemas and tables that the connected service account has permission to access. This eliminates the need for traditional ETL or Import Sets.

2. Creating the Virtual Table

To use this data for GRC, you must create a Data Fabric Table within ServiceNow.

  • Metadata Only: It is important to note that ServiceNow saves only the metadata (table structure and field definitions); a quick way to confirm this is sketched after this list.

  • Virtual Presence: The actual raw data remains in Snowflake. However, ServiceNow treats this as a "virtual" table, allowing it to appear in the dictionary and be selected within the GRC application.
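
For completeness, here is a minimal background-script sketch that illustrates the "metadata only" point by listing the dictionary entries ServiceNow holds for the virtual table. The table name x_snowflake_controls is a made-up placeholder for whatever name your Data Fabric table is given; treat this as an illustration rather than a required step.

    // Scripts - Background sketch: list the field definitions ServiceNow stores
    // for the Data Fabric table. 'x_snowflake_controls' is a placeholder name.
    var dict = new GlideRecord('sys_dictionary');
    dict.addQuery('name', 'x_snowflake_controls');   // placeholder: your Data Fabric table's name
    dict.addNotNullQuery('element');                 // skip the collection row, keep the field rows
    dict.query();
    while (dict.next()) {
        // Only field definitions (metadata) are stored locally; no Snowflake rows live in ServiceNow.
        gs.info(dict.getValue('element') + ' : ' + dict.getValue('internal_type'));
    }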

3. Configuring the Control Indicator

Once the Data Fabric table is created, the process for a GRC administrator is identical to using a native table:

  • Table Selection: In the Control Indicator record, you simply select your new Data Fabric table from the list.

  • Logic Application: You can configure the relevant scripts or conditions exactly as you would for an internal ServiceNow table (e.g., Status IS Critical or Amount > 10000); a rough equivalent is sketched below.
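
As a rough illustration of the logic involved, the sketch below expresses that example condition (Status IS Critical OR Amount > 10000) as a GlideRecord query, on the assumption that the Data Fabric table is queryable like any native table, as described above. The table and field names (x_snowflake_controls, status, amount) are placeholders, not real objects from my experiment.

    // Sketch only: the indicator's condition expressed as a standard Glide query.
    var gr = new GlideRecord('x_snowflake_controls');  // placeholder Data Fabric table name
    var qc = gr.addQuery('status', 'Critical');        // Status IS Critical
    qc.addOrCondition('amount', '>', 10000);           // OR Amount > 10000
    gr.query();                                        // WDF resolves this against Snowflake at run time
    while (gr.next()) {
        gs.info('Matched: ' + gr.getDisplayValue());
    }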

4. Execution and Audit Trail

The most critical part of this experiment was verifying the "Supporting Data" for audits.

  • Live Execution: When the indicator runs, WDF queries the Snowflake data in real time.

  • Local Persistence: Although the source data stays in Snowflake, the Indicator Results capture the specific records identified during the run and save that supporting data locally in ServiceNow.

  • Audit Readiness: This ensures that you have a point-in-time snapshot of the evidence required for audit purposes without having to maintain a massive local copy of the entire external database (a simple spot-check is sketched below).
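
If you want to spot-check that the supporting data really is persisted locally, something like the sketch below can help. The table and field names here (sn_grc_indicator_result, the indicator reference, and the indicator name itself) are my assumptions based on the GRC/IRM scoped app's naming and should be verified against your own instance before relying on them.

    // Audit spot-check sketch: count the indicator results captured locally for one indicator.
    var agg = new GlideAggregate('sn_grc_indicator_result');            // assumed results table - verify in your instance
    agg.addQuery('indicator.name', 'Snowflake critical transactions');  // hypothetical indicator name
    agg.addAggregate('COUNT');
    agg.query();
    if (agg.next()) {
        // These result records (and their supporting data) live in ServiceNow, not Snowflake,
        // which is what gives you the point-in-time audit evidence.
        gs.info('Locally persisted results: ' + agg.getAggregate('COUNT'));
    }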


10 REPLIES

AlexR2
Tera Expert

Hoping someone from the product team can answer this question...

Hi @AlexR2 - unfortunately, the engagement of SN staff here on the Community is pretty minimal. 

 

However, this is a great question, and I'd also love to hear a response myself. But because it's cross-functional (WDF meets GRC), there's probably only a handful of people who could help.

 

CC'ing some staff who might be able to help:

@Connor Levien @Rosalind Morvil @Lolita Honkpo @Pankaj Kumar Pa 

Hi @Simon Hendery and @AlexR2 -

I am in the product marketing group, not the actual product development. I am actively trying to find someone who can help, as I myself have limited insights to offer. However, when I don't know something, I look at it as an opportunity to learn and meet new people. I will see what I can do.

Ros

@Rosalind Morvil - you're a legend! I wish more people (both within SN, and in the world generally) had your intellectual curiosity.

 

I mean, honestly, WDF is a newish framework ServiceNow is actively trying to take to market. GRC is a growing product with a huge TAM.

 

So I don't understand why sales, marketing, and technical teams aren't clambering over each other to answer this type of question from clients/prospects! Instead, all we get is crickets.