12-08-2025 09:23 AM
We have started hearing about Workflow Data Fabric as a way to use external data for analysis in the platform without having to copy the data into the platform itself, which is our current approach. Has anyone experimented with WDF for GRC and control indicators? Would love to learn about your experience.
Tuesday
@AlexR2 This is a great question. I have recently completed an end-to-end experiment using Workflow Data Fabric (WDF) to integrate a Snowflake instance with ServiceNow GRC (IRM) Control Indicators.
The short answer is: It works seamlessly. You do not need to make any structural changes to the Control Indicator module; the integration relies entirely on how you configure the Data Fabric table.
Here is a detailed breakdown of the workflow and my findings from the experiment:
1. Connection and Schema Mapping
When you establish a new connection via Workflow Data Fabric, you gain direct visibility into the external source (in my case, Snowflake). You can see all schemas and tables that the connected service account has permission to access. This eliminates the need for traditional ETL or Import Sets.
2. Creating the Virtual Table
To use this data for GRC, you must create a Data Fabric Table within ServiceNow.
- Metadata Only: It is important to note that ServiceNow only saves the metadata (table structure and field definitions).
- Virtual Presence: The actual raw data remains in Snowflake. However, ServiceNow treats this as a "virtual" table, allowing it to appear in the dictionary and be selected within the GRC application. (A quick read-test sketch follows this list.)
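If you want to sanity-check the virtual table before wiring it into GRC, a background script along the lines of the sketch below should do it, assuming the Data Fabric table supports standard GlideRecord reads. The table and field names (u_sf_transactions, status, amount) are placeholders for whatever your own Data Fabric table exposes.

```javascript
// Read-test sketch for a Data Fabric (virtual) table.
// Assumptions: the table supports standard GlideRecord reads, and
// 'u_sf_transactions', 'status', 'amount' are placeholder names for
// the table and columns created by your own Data Fabric configuration.
var gr = new GlideRecord('u_sf_transactions');
gr.setLimit(5);   // pull only a handful of rows for the test
gr.query();       // the read is served from Snowflake via WDF, not a local copy
while (gr.next()) {
    gs.info('Row: status=' + gr.getValue('status') +
            ', amount=' + gr.getValue('amount'));
}
```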
3. Configuring the Control Indicator
Once the Data Fabric table is created, the process for a GRC administrator is identical to using a native table:
- Table Selection: In the Control Indicator record, you simply select your new Data Fabric table from the list.
- Logic Application: You can configure the relevant scripts or conditions exactly as you would for an internal ServiceNow table (e.g., Status IS Critical or Amount > 10000). See the sketch after this list.
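If you use a scripted approach rather than the condition builder, the same filter translates to an encoded query. The sketch below is only an illustration; u_sf_transactions and the field names are placeholders, and the exact script shell depends on which indicator type you choose.

```javascript
// Scripted filter sketch: flag rows where Status is Critical OR Amount > 10000.
// 'u_sf_transactions', 'status', and 'amount' are placeholder names - adjust
// to the table and columns exposed by your Data Fabric configuration.
var gr = new GlideRecord('u_sf_transactions');
gr.addEncodedQuery('status=Critical^ORamount>10000'); // same logic as the condition builder filter
gr.query();
gs.info('Matching rows: ' + gr.getRowCount());
```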
4. Execution and Audit Trail
The most critical part of this experiment was verifying the "Supporting Data" for audits.
- Live Execution: When the indicator runs, WDF queries the Snowflake data in real time.
- Local Persistence: Although the source data stays in Snowflake, the Indicator Results capture the specific records identified during the run and save that supporting data locally in ServiceNow. (A verification sketch follows this list.)
- Audit Readiness: This ensures that you have a point-in-time snapshot of the evidence required for audit purposes without having to maintain a massive local copy of the entire external database.
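To verify the local persistence, I checked the results after a run. The sketch below shows the kind of check I mean; note that the results table name used here (sn_grc_indicator_result) is an assumption on my part and can vary by IRM version, so confirm the actual table in your instance's dictionary first.

```javascript
// Sketch: confirm a recent indicator run persisted supporting data locally.
// ASSUMPTION: 'sn_grc_indicator_result' is used as the results table name for
// illustration only - verify the real table name in your instance's dictionary.
var result = new GlideRecord('sn_grc_indicator_result');
result.orderByDesc('sys_created_on'); // most recent run first
result.setLimit(1);
result.query();
if (result.next()) {
    // The captured supporting data lives locally in ServiceNow, so it remains
    // available as point-in-time audit evidence even though the source rows
    // stay in Snowflake.
    gs.info('Latest indicator result created on: ' + result.getValue('sys_created_on'));
}
```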
Wednesday
Then you’ve got the right of it in understanding an “appropriate” use of pass/fail and the sample option.
For other folks following along at home and wondering what we mean here:
What we wouldn’t want to see is a test where we’re expecting 100% “pass”, setting the sample size to zero to evaluate the entire population, and then realizing you’re collecting the entire table. If memory serves, there is a way to set a maximum baseline collection via a sys_property, but if the table is of sufficient size this functionally doesn’t matter after a few thousand records (the job will likely stall out around 10k in many cases).
In those situations (where you want to prove you’ve set things up properly and you want to show some evidence that things look good), you actually need more than one indicator, or better still, an indicator + evidence collection task (which can be automated).
You’ll want ONE that has a sample size and is set to pass (this is essentially a desired state proof). If any of these indicators are related to CMDB attributes there’s also a much easier desired state audit collection you can use as well!
You’ll then also want some negative/fail cases to show you are on the lookout for common fail states as part of your due diligence, and for these you’d set the sample to zero; typically these are the common failure states you match with your queries, so you evaluate the entire table and collect (hopefully) a smaller number of fails. (See the example queries below.)
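To make the pass/fail split concrete, the two indicators might use queries along these lines; the table and field names are placeholders and the thresholds are only examples.

```javascript
// Sketch of the two-indicator pattern described above.
// 'u_sf_transactions', 'status', and 'amount' are placeholder names.

// 1) Desired-state ("pass") indicator: expects matches, run against a sample.
var pass = new GlideRecord('u_sf_transactions');
pass.addEncodedQuery('status=Healthy'); // the state you expect to see
pass.setLimit(25);                      // sample size instead of the full population
pass.query();
gs.info('Sampled healthy rows: ' + pass.getRowCount());

// 2) Negative ("fail") indicator: expects few or zero matches across the whole table.
var fail = new GlideRecord('u_sf_transactions');
fail.addEncodedQuery('status=Critical^ORamount>10000'); // known bad states
fail.query();
gs.info('Failing rows found (ideally 0): ' + fail.getRowCount());
```

The first proves the control is producing the expected state on a manageable sample; the second scans the whole population for known bad states, which keeps the collected evidence volume small.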
