Custom Data Stream Action Error: [ERROR CODE: -1] Socket closed when retrieving large datasets
an hour ago
I've created a custom Data Stream Action within IntegrationHub to retrieve large datasets, but I consistently encounter an error where the operation stops mid-stream. The action fails with a status of -1 and the response body: [ERROR CODE: -1] Socket closed.
This issue only occurs when fetching a large volume of records, which suggests a timeout or memory limit on the ServiceNow side, such as a stream-processing limit on the instance or MID Server memory pressure from large payloads.
Details of the Issue:
The failure seems to occur consistently around the 13,000 to 14,000 record mark, regardless of the API source.
I am using the Pagination feature within the Data Stream action. Smaller batches are handled successfully, but the overall operation fails before completion.
Case 1: DocuSign User Retrieval
Total Records: Approximately 15,000 users.
Failure Point: The action gets stuck and returns the socket closed error while retrieving records in the 13,000 to 14,000 range.
Case 2: Microsoft Graph API (Teams Room Device)
Total Records: Approximately 30,000+ devices.
Failure Point: The action gets stuck and returns the socket closed error while retrieving records in the 13,000 to 14,000 range.
Troubleshooting Already Attempted:
I have configured Retry Policies for connection timeouts within the action settings, but this has not resolved the issue.
I suspect this may be related to an instance-level or Mid Server property/limit governing the maximum allowed duration or memory for a single, long-running Data Stream operation.
Has anyone encountered this specific [ERROR CODE: -1] Socket closed behavior when processing tens of thousands of records with a custom Data Stream Action? What potential system properties or IntegrationHub limits should I check?
54m ago - last edited 33m ago
The [ERROR CODE: -1] Socket closed at the 13,000-14,000 record mark is caused by the HTTP transaction timeout limit. The consistent failure point indicates a time-based limit (approximately 175 seconds default HTTP timeout), not a record count or memory limit.
Why Your Retry Policy Didn't Work
Retry policies handle connection failures and HTTP error codes (4xx, 5xx), but they don't prevent socket timeouts. The socket closes when the time limit is reached, and each retry attempt then hits the same limit, looping without ever resolving.
The fix is to widen the timeout window, not to retry within the same window.
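As a sanity check on the numbers: with typical pagination settings, a roughly 175-second window runs out right around the observed failure point. A back-of-envelope calculation (the per-page latency here is an assumption; substitute your own measured value):

```javascript
// Rough estimate: why the failure lands near the 13,000-14,000 record mark.
// Assumed numbers: 100 records per page, ~1.3 seconds per paginated
// API round trip (measure your own; these are illustrative only).
var pageSize = 100;
var secondsPerPage = 1.3;
var timeoutSeconds = 175; // approximate default HTTP timeout

var pagesBeforeTimeout = Math.floor(timeoutSeconds / secondsPerPage);
var recordsBeforeTimeout = pagesBeforeTimeout * pageSize;

console.log(pagesBeforeTimeout);   // 134 pages
console.log(recordsBeforeTimeout); // 13,400 records - inside the observed failure range
```

If your measured per-page latency differs, the predicted cut-off shifts accordingly, but any fixed time window explains a consistent record-count failure point.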
Set the HTTP timeout at the request level within your Data Stream Action script.
```javascript
(function execute(inputs, outputs) {
    var request = new sn_ws.RESTMessageV2();
    request.setEndpoint(inputs.endpoint);
    request.setHttpMethod('GET');

    // CRITICAL: Set the timeout for this specific request (milliseconds)
    request.setHttpTimeout(600000); // 10 minutes

    // Pagination parameters
    var pageSize = inputs.page_size || 100;
    var pageNumber = inputs.page_number || 0;
    var offset = pageNumber * pageSize;

    // Add pagination to the request (adjust parameter names for your API)
    request.setQueryParameter('$top', pageSize.toString());
    request.setQueryParameter('$skip', offset.toString());

    // Execute the request
    var response = request.execute();

    // Handle the response
    outputs.status_code = response.getStatusCode();
    outputs.response_body = response.getBody();

    // Log for debugging
    gs.info('Data Stream - Page: ' + pageNumber + ', Status: ' + outputs.status_code);
})(inputs, outputs);
```
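Independent of the timeout, it's also worth double-checking the pagination cut-off so the stream terminates cleanly on the last partial page rather than requesting pages past the end. A minimal sketch of offset-based continuation logic (plain JavaScript; the function name and parameters are illustrative, not a ServiceNow API):

```javascript
// Decide whether another page should be fetched, given the last response.
// Assumes offset/limit ($skip/$top) pagination. totalCount comes from the
// API response when the API reports one; otherwise we stop on a short page.
function hasMorePages(recordsReturned, pageSize, offset, totalCount) {
    if (totalCount !== undefined && totalCount !== null) {
        return offset + recordsReturned < totalCount;
    }
    // Fallback: a short (or empty) page means we've reached the end.
    return recordsReturned === pageSize;
}
```

In a Data Stream action this check belongs in the pagination script, so that the action stops issuing requests as soon as the last page arrives instead of burning more of the timeout window.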
2m ago
Hi @MaxMixali
Could you please confirm whether I can simply increase the "Connection Timeout" field in the Data Stream action instead?
