Data Stream Action – Unable to Retrieve All Records from Nested Arrays in REST Response
3 weeks ago - last edited 3 weeks ago
Hello Experts,
I have created a Data Stream Action in Flow Designer for a REST integration and am able to successfully retrieve data. However, I am encountering an issue while parsing the response due to its structure.
The API returns data in a tabular format containing multiple tables (Table_0, Table_1, Table_2, and Table_3). The actual user information resides in Table_0, specifically within its Rows array.
🔹Sample JSON Response
{
"Tables": [{
"TableName": "Table_0",
"Columns": [{
"ColumnName": "email",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "username",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "name",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "user_id",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "last_used",
"DataType": "DateTime",
"ColumnType": "datetime"
}],
"Rows": [
["Savyasachi.Nemani@example.com", "SNemani2", "Savyasachi Nemani", "6262878", "2026-04-08T06:36:54Z"],
["Scott.Crenshaw2@example.com", "SCrensh2", "Scott Crenshaw", "5859935", "2026-04-08T06:36:54Z"],
["", "SVC_PRD_RANANIX", "SVC_PRD_RANANIX@example.com", "20025198", "2026-04-08T06:36:54Z"],
["Carlos.Martinez576@example.com", "CMartin576", "Carlos Martinez", "11396655", "2026-04-08T06:36:54Z"],
["AmithaPrem.Pothureddy2@example.com", "APothur2", "AmithaPrem Pothureddy", "10394676", "2026-04-08T06:36:54Z"],
["Jeeten.Shrestha@example.com", "JShrest3", "Jeeten Shrestha", "4242429", "2026-04-08T06:36:54Z"]
]
}, {
"TableName": "Table_1",
"Columns": [{
"ColumnName": "Value",
"DataType": "String",
"ColumnType": "string"
}],
"Rows": [
["{\"Visualization\":null,\"Title\":null,\"XColumn\":null,\"Series\":null,\"YColumns\":null,\"AnomalyColumns\":null,\"XTitle\":null,\"YTitle\":null,\"XAxis\":null,\"YAxis\":null,\"Legend\":null,\"YSplit\":null,\"Accumulate\":false,\"IsQuerySorted\":false,\"Kind\":null,\"Ymin\":\"NaN\",\"Ymax\":\"NaN\",\"Xmin\":null,\"Xmax\":null}"]
]
}, {
"TableName": "Table_2",
"Columns": [{
"ColumnName": "Timestamp",
"DataType": "DateTime",
"ColumnType": "datetime"
}, {
"ColumnName": "Severity",
"DataType": "Int32",
"ColumnType": "int"
}, {
"ColumnName": "SeverityName",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "StatusCode",
"DataType": "Int32",
"ColumnType": "int"
}, {
"ColumnName": "StatusDescription",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "Count",
"DataType": "Int32",
"ColumnType": "int"
}, {
"ColumnName": "RequestId",
"DataType": "Guid",
"ColumnType": "guid"
}, {
"ColumnName": "ActivityId",
"DataType": "Guid",
"ColumnType": "guid"
}, {
"ColumnName": "SubActivityId",
"DataType": "Guid",
"ColumnType": "guid"
}, {
"ColumnName": "ClientActivityId",
"DataType": "String",
"ColumnType": "string"
}],
"Rows": [
["2026-04-08T10:00:06.4015472Z", 4, "Info", 0, "Query completed successfully", 2, "2450336c-573f-44c7-90fc-93b3c09c2e8e", "2450336c-573f-44c7-90fc-93b3c09c2e8e", "470bbed5-aa84-445f-b016-29014a337b3c", "unspecified;ea7ca795-280f-4f37-8708-6d7495690106"],
["2026-04-08T10:00:06.4438722Z", 6, "Stats", 0, "{\"QueryHash\":\"1f567da9e29772d\",\"ExecutionTime\":0.0595677,\"resource_usage\":{\"cache\":{\"shards\":{\"hot\":{\"hitbytes\":4770007,\"missbytes\":0,\"retrievebytes\":0},\"cold\":{\"hitbytes\":1201853,\"missbytes\":0,\"retrievebytes\":0},\"bypassbytes\":0}},\"cpu\":{\"user\":\"00:00:00.0156250\",\"kernel\":\"00:00:00\",\"total cpu\":\"00:00:00.0156250\",\"breakdown\":{\"query execution\":\"00:00:00.0156250\",\"query planning\":\"00:00:00\"}},\"memory\":{\"peak_per_node\":2163728},\"network\":{\"inter_cluster_total_bytes\":1660748,\"cross_cluster_total_bytes\":0}},\"input_dataset_statistics\":{\"extents\":{\"total\":302,\"scanned\":152,\"scanned_min_datetime\":\"2025-10-10T12:42:17.1331071Z\",\"scanned_max_datetime\":\"2026-04-08T06:43:29.5661540Z\"},\"rows\":{\"total\":14183572,\"scanned\":7102013},\"rowstores\":{\"scanned_rows\":0,\"scanned_values_size\":0},\"shards\":{\"queries_generic\":1,\"queries_specialized\":0}},\"dataset_statistics\":[{\"table_row_count\":8767,\"table_size\":686026}],\"cross_cluster_resource_usage\":{}}", 1, "2450336c-573f-44c7-90fc-93b3c09c2e8e", "2450336c-573f-44c7-90fc-93b3c09c2e8e", "470bbed5-aa84-445f-b016-29014a337b3c", "unspecified;ea7ca795-280f-4f37-8708-6d7495690106"]
]
}, {
"TableName": "Table_3",
"Columns": [{
"ColumnName": "Ordinal",
"DataType": "Int64",
"ColumnType": "long"
}, {
"ColumnName": "Kind",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "Name",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "Id",
"DataType": "String",
"ColumnType": "string"
}, {
"ColumnName": "PrettyName",
"DataType": "String",
"ColumnType": "string"
}],
"Rows": [
[0, "QueryResult", "PrimaryResult", "81c9ccb7-fee7-4a51-a723-6f4459b1c299", ""],
[1, "QueryProperties", "@ExtendedProperties", "f16456e0-2020-4e7e-ab7d-47508fa5eda0", ""],
[2, "QueryStatus", "QueryStatus", "00000000-0000-0000-0000-000000000000", ""]
]
}]
}
🔹Item Path Configuration
Current Item Path: $.Tables
Attempted Item Path: $.Tables[0].Rows
Issue: Using $.Tables[0].Rows results in an "Item Path is incorrect" error.
When I configure the Item Path as $.Tables, the Data Stream iterates through each table. As a result, only the first record from Table_0 is processed before the stream moves to the next table. Consequently, I receive only one user record instead of all records.
🔹Script Used in the Script Parser Step
(function parse(inputs, outputs) {
var record = JSON.parse(inputs.sourceItem);
var rowObj = {};
// Process only Table_0
if (record.TableName != "Table_0") {
return;
}
// Extract column names
var columns = record.Columns.map(function(col) {
return col.ColumnName;
});
var rows = record.Rows;
// Loop through each row
for (var j = 0; j < rows.length; j++) {
var row = rows[j];
rowObj = {}; // Reset object for each iteration
// Map column names to row values
for (var k = 0; k < columns.length; k++) {
rowObj[columns[k]] = row[k];
}
// Extract required fields
outputs.targetObject.user_id = rowObj.user_id;
outputs.targetObject.email = rowObj.email;
outputs.targetObject.last_activity_at = rowObj.last_used;
gs.info('UserId: ' + outputs.targetObject.user_id +
' | Email: ' + outputs.targetObject.email +
' | Last Used: ' + outputs.targetObject.last_activity_at);
}
})(inputs, outputs);
🔹Issue Description
- The response contains four tables, but only Table_0 holds the required user data.
- Because the Item Path is $.Tables, the Data Stream iterates through each table.
- During execution, only one record from Table_0 is retrieved before moving to the next table.
- Consequently, only a single user record is processed.
🔹Expected Outcome
I would like the Data Stream Action to:
- Process only Table_0.
- Iterate through all records in the Rows array.
- Map the following fields (see the example below):
  - user_id
  - email
  - last_activity_at
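For example, the first row of Table_0 should yield an object like:
{
    "user_id": "6262878",
    "email": "Savyasachi.Nemani@example.com",
    "last_activity_at": "2026-04-08T06:36:54Z"
}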
🔹What I Have Tried
- Verified that the REST step returns the expected response.
- Used $.Tables as the Item Path, which works but retrieves all tables, including metadata.
- Attempted to filter only Table_0, but without success.
- Considered using a Scripted Item Path but am unsure of the correct implementation.
🔹Questions
1. Why does $.Tables[0].Rows return an "Item Path is incorrect" error?
2. Is it possible to configure the Data Stream Action to iterate only through Table_0?
3. Is there a recommended approach to flatten this response before parsing?
4. Should a Scripted Item Path be used instead? If so, what would be the recommended approach?
Any guidance or best practices would be greatly appreciated.
Thank you in advance for your support!
Best regards,
Sneha Karthik
3 weeks ago
The root cause is that ServiceNow's Data Stream Action uses a limited JSONPath implementation that doesn't support array index notation ([0]), which is why $.Tables[0].Rows throws the "Item Path is incorrect" error. With $.Tables as the Item Path, the Data Stream treats each table object as one item, so your Script Parser runs once per table, not once per row. Even though your script loops through all rows internally, each iteration overwrites outputs.targetObject, so only the final row's values survive before the stream moves on to Table_1.
Recommended approach — Pre-transform the response with a Script step before the Data Stream:
Add a Script Action step between your REST step and the Data Stream. This script flattens the nested structure into a simple JSON array that the Data Stream can iterate natively:
(function execute(inputs, outputs) {
var response = JSON.parse(inputs.rest_response); // your REST response body
var tables = response.Tables;
var results = [];
for (var i = 0; i < tables.length; i++) {
if (tables[i].TableName === 'Table_0') {
var columns = [];
for (var c = 0; c < tables[i].Columns.length; c++) {
columns.push(tables[i].Columns[c].ColumnName);
}
var rows = tables[i].Rows;
for (var j = 0; j < rows.length; j++) {
var obj = {};
for (var k = 0; k < columns.length; k++) {
obj[columns[k]] = rows[j][k];
}
results.push(obj);
}
break;
}
}
outputs.flattened_json = JSON.stringify(results);
})(inputs, outputs);
This produces a clean array like:
[
{"email":"Savyasachi.Nemani@example.com","username":"SNemani2","name":"Savyasachi Nemani","user_id":"6262878","last_used":"2026-04-08T06:36:54Z"},
{"email":"Scott.Crenshaw2@example.com","username":"SCrensh2","name":"Scott Crenshaw","user_id":"5859935","last_used":"2026-04-08T06:36:54Z"}
]
Then feed flattened_json into a second REST step (or directly into the Data Stream as a string input). Set Item Path to $ (the root array), and your Script Parser becomes straightforward:
(function parse(inputs, outputs) {
var record = JSON.parse(inputs.sourceItem);
outputs.targetObject.user_id = record.user_id;
outputs.targetObject.email = record.email;
outputs.targetObject.last_activity_at = record.last_used;
})(inputs, outputs);
Alternative — Scripted Item Path (if available on your instance version):
If your Data Stream Action supports the "Scripted Item Path" option, you can skip the pre-transform step and use a script directly in the item path configuration to return the flattened rows array. The logic would be the same extraction, just placed in the item path script rather than a separate step.
Why your current script only yields one record: The Data Stream treats each table as a single "item." Your for loop runs entirely within that one item's processing, overwriting outputs.targetObject on every iteration, so only the last row survives. The Data Stream doesn't know your script produced multiple records — it expects one output per item.
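You can see the overwrite in isolation with a stripped-down snippet (plain JavaScript, data shortened for illustration):
var targetObject = {};
var rows = [
    ["Savyasachi.Nemani@example.com", "6262878"],
    ["Scott.Crenshaw2@example.com", "5859935"]
];
for (var j = 0; j < rows.length; j++) {
    // Each pass replaces the values set by the previous pass
    targetObject.email = rows[j][0];
    targetObject.user_id = rows[j][1];
}
// targetObject now holds only the last row:
// { email: "Scott.Crenshaw2@example.com", user_id: "5859935" }
That single surviving object is all the Data Stream ever sees for the Table_0 item.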
3 weeks ago - last edited 3 weeks ago
Hi @Naveen20 ,
Thank you so much for the detailed explanation and recommendation.
However, in ServiceNow, a Data Stream Action does not provide the option to add a Script Action step between the REST step and the Data Stream Parser. The structure of a Data Stream Action is fixed as follows:
- REST Step (Data Source)
- Splitter (Item Path)
- Script Parser
Due to this architectural constraint, it is not possible to insert an intermediate script within the same Data Stream Action to flatten the JSON response before parsing.
Attempted Alternative: Scripted Item Path
As suggested, I attempted to use a Scripted Item Path to extract and flatten the rows from Table_0. However, this resulted in the error:
"Path cannot be empty."
Below is the script I used:
(function() {
    var responseBody = fd_data._3__rest_step.response_body;
    var jsonObj = JSON.parse(responseBody);
    var path = [];
    var table0 = null;
    // Find the table named "Table_0"
    for (var i = 0; i < jsonObj.Tables.length; i++) {
        if (jsonObj.Tables[i].TableName === "Table_0") {
            table0 = jsonObj.Tables[i];
            break;
        }
    }
    // Proceed only if Table_0 is found
    if (table0) {
        // Extract column names
        var columns = table0.Columns.map(function(col) {
            return col.ColumnName;
        });
        var rows = table0.Rows;
        // Loop through each row and build an object per row
        for (var j = 0; j < rows.length; j++) {
            var row = rows[j];
            var rowObj = {};
            // Map column names to row values
            for (var k = 0; k < columns.length; k++) {
                rowObj[columns[k]] = row[k];
            }
            path.push(rowObj);
        }
    }
    return '$.' + path;
})();
Error Explanation
The error occurs because the Item Path script must return a valid JSONPath string, not a dynamically constructed object or array. The statement:
return '$.' + path;
evaluates to an invalid JSONPath such as:
$.[object Object],[object Object],...
which causes the platform to throw the "Path cannot be empty" error. Additionally, the scripted Item Path is intended to return a path expression, not to transform or flatten the response.
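You can verify the coercion in a background script (illustrative only):
var path = [{ user_id: "6262878" }, { user_id: "5859935" }];
// Array-to-string coercion renders each element as "[object Object]"
gs.info('$.' + path); // logs: $.[object Object],[object Object]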
Could you please advise on the best practice to handle this scenario within ServiceNow? Specifically:
- Is there a supported way to reference Table_0 rows within a Data Stream Action?
- Can Scripted Item Path be used to return an array instead of a JSONPath?
- Would preprocessing the response outside the Data Stream Action be the recommended approach?
Any guidance or recommendations would be greatly appreciated.
Thank you!
3 weeks ago
Why $.Tables[0].Rows fails: ServiceNow's Data Stream Splitter uses a limited JSONPath implementation that does not support array index notation ([0]), filter expressions ([?(@.TableName=='Table_0')]), or recursive descent. It only supports simple dot-notation paths like $.Tables or $.someArray.
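As a quick reference, here is how the paths discussed in this thread fare (a summary based on the limitations described above):
// Supported (simple dot notation):
//   $.Tables                                    -> iterates each table object
// Rejected as "Item Path is incorrect":
//   $.Tables[0].Rows                            -> array index notation
//   $.Tables[?(@.TableName=='Table_0')].Rows    -> filter expression
//   $..Rows                                     -> recursive descent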
To answer your three questions directly:
1. Within the Data Stream Action — GlideRecord workaround in Script Parser
Keep $.Tables as the Item Path. The stream will iterate 4 times (once per table). In your Script Parser, when Table_0 is encountered, don't rely on outputs.targetObject for multiple rows — it only captures the last overwrite. Instead, write directly to your target table using GlideRecord:
(function parse(inputs, outputs) {
var record = JSON.parse(inputs.sourceItem);
// Skip non-Table_0 items
if (record.TableName !== 'Table_0') {
outputs.skip = true; // signal to skip this item
return;
}
var columns = [];
for (var c = 0; c < record.Columns.length; c++) {
columns.push(record.Columns[c].ColumnName);
}
var rows = record.Rows;
for (var j = 0; j < rows.length; j++) {
var rowObj = {};
for (var k = 0; k < columns.length; k++) {
rowObj[columns[k]] = rows[j][k];
}
// Write each row directly to your target table
var gr = new GlideRecord('u_your_target_table');
gr.initialize();
gr.setValue('u_user_id', rowObj.user_id);
gr.setValue('u_email', rowObj.email);
gr.setValue('u_last_activity', rowObj.last_used);
gr.insert();
}
// Set outputs for the last row to satisfy the parser contract
outputs.targetObject.user_id = rows[rows.length - 1][3];
outputs.targetObject.email = rows[rows.length - 1][0];
outputs.targetObject.last_activity_at = rows[rows.length - 1][4];
})(inputs, outputs);
This bypasses the one-output-per-item limitation. The tradeoff is you lose the Data Stream's built-in record mapping, but you get all rows processed.
2. Can Scripted Item Path return an array?
No. The Scripted Item Path must return a valid JSONPath string (e.g., $.Tables). It cannot return transformed data, an array, or a constructed object. It is purely for dynamically building a path expression, not for data transformation.
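For completeness, a Scripted Item Path can contain logic, but its return value must still be a plain path string. A minimal sketch (the flag and the alternate path are hypothetical):
(function() {
    var useTables = true; // hypothetical condition
    // Whatever the logic, the script must end by returning a JSONPath string
    return useTables ? '$.Tables' : '$.SomeOtherArray';
})();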
3. Recommended approach — move the REST call to a regular Subflow (best practice)
If you have flexibility in your design, this is the cleanest path. Instead of a Data Stream Action, use a standard Subflow:
- Step 1 — REST step (same API call)
- Step 2 — Script step to flatten the response into an array of row objects and store it as a JSON string output
- Step 3 — For Each loop over the parsed array, processing each user record individually
This gives you full scripting control between REST and parsing, avoids the Splitter's JSONPath limitations entirely, and each row is processed as its own loop iteration with proper error handling.
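A minimal sketch of the Step 2 flatten script, assuming the REST body arrives as inputs.rest_response and the step exposes a string output named user_rows (both names are placeholders):
(function execute(inputs, outputs) {
    var tables = JSON.parse(inputs.rest_response).Tables;
    // Keep only Table_0 and zip each row with its column names
    var t0 = tables.filter(function(t) { return t.TableName === 'Table_0'; })[0];
    var names = t0.Columns.map(function(c) { return c.ColumnName; });
    outputs.user_rows = JSON.stringify(t0.Rows.map(function(row) {
        var obj = {};
        names.forEach(function(n, i) { obj[n] = row[i]; });
        return obj;
    }));
})(inputs, outputs);
In Step 3, parse user_rows back into an array and let the For Each loop hand each user object to your per-record logic.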
Bottom line: The Data Stream Action's fixed architecture isn't designed for responses where the iterable data is nested multiple levels deep inside a non-standard structure. The GlideRecord approach (#1) works as a tactical fix, but if this is a long-term integration, refactoring into a Subflow (#3) gives you a more maintainable and debuggable solution.
