
Data Source with nested calls.

GeorgeDaveB
Tera Contributor

Hello ServiceNow Community,

I have to perform REST Message calls for a list of IDs from a ServiceNow table, and each ID requires multiple calls because there are hundreds of objects per ID to fetch, so we must use pagination.
The implementation thus far has been done using the ETL module. We use Data Sources that utilize Data Streams to import the data.

A Data Stream has pagination and can perform multiple calls, but the way it's structured, it can only do so for a single ID... what we need is for the Data Stream to fetch all rows for one ID, then move on to the next ID. A nested loop, essentially.

I've thought of using the preprocessing script to look up the table, save the IDs in a list, and then doing a nested loop during the pagination step, so that after it finishes fetching data for one ID and hits the limit, it moves on to the next.

But the pagination step of a Data Stream, to my knowledge, cannot receive input from data pills or from the rest of the action, and it runs from its first line of code after every REST call... It can only access values from its own script or from the response body of our calls.
I'd have to look up the table before every single call, and the performance cost is too high.

Thus, my only current solution is to write the whole Data Stream as a custom script within the Data Source... 


Therefore, I have two questions:

1. Do ETLs or Data Sources have an OOB way to handle a scenario where you have to perform calls and fetch hundreds of rows of data for each ID/value in a list of items? It seems to me like a common scenario for API integrations: you won't always have a convenient endpoint that lets you fetch everything without querying one item at a time.

2. If not, how do I go about writing a custom DATA SOURCE script that will perform this for me? For example, do I have to declare a new import set within the import table inside the script?

Without knowing how to script the actual import of data here, this is roughly what my code would look like:
----------------------------------------------------------------

 
(function loadData(import_set_table, data_source, import_log, last_success_import_time, partition_info) {
    var deviceIds = getAllDeviceIds(); // Here we perform the lookup and get the list of IDs
    for (var d = 0; d < deviceIds.length; d++) {
        var deviceId = deviceIds[d];
        var start = 0;
        var pages = 0;

        // Suppose we already know the number of pages.
        while (pages < MAX_PAGES) {
            var rm;
            if (USE_NAMED_REST) {
                rm = new sn_ws.RESTMessageV2(REST_NAME, REST_METHOD);
            } else {
                rm = new sn_ws.RESTMessageV2();
                rm.setEndpoint(RAW_URL);
                rm.setHttpMethod(RAW_METHOD);
            }

            rm.setRequestHeader('Content-Type', 'application/json');

            var body = {
                deviceId: deviceId,
                start: start,
                limit: PAGE_SIZE
            };

            rm.setRequestBody(JSON.stringify(body));

            var res = rm.execute();

            var data = null;
            try {
                data = JSON.parse(res.getBody());
            } catch (e) {
                import_log.warn('Error ' + e);
            }

            if (!data) break;

            var items = data.items || []; // adjust to the actual response structure

            // Stop if we got fewer than PAGE_SIZE (last page)
            if (items.length < PAGE_SIZE) break;

            start += items.length;
            pages++;
        }
    }
})(import_set_table, data_source, import_log, last_success_import_time, partition_info);



1 Reply

GeorgeDaveB
Tera Contributor

Here's what worked for me in my custom Data Source script.

If you are using a newly created import table without any columns, you use import_set_table.addColumn(column_name, optional_length);

This is what I did:
 

   import_set_table.addColumn('u_id', 100);
   import_set_table.addColumn('u_name', 255);

If these columns already exist in the import set table, they simply won't be created.
The import set is automatically created by the Data Source so you don't need a method for that.

Afterwards, the only method you need to import data into your table is import_set_table.insert(), where you assign a value to each column that you previously created or that already exists. You call it in a loop for each incoming row of data:

    import_set_table.insert((function () {
        var row = {};
        row['u_id'] = variable_1 || '';
        row['u_name'] = variable_2 || '';
        return row;
    })());
Finally, to allow the Test Load 20 Records related link to work, you must use:
    var maxRowValue = import_set_table.getMaximumRows();

This method returns 20 if the related link is clicked, or -1 if it's a regular load.
So all you have to do is add a condition that stops the script once the number of inserts reaches that value.
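For example, a minimal sketch of that condition (assuming an insertedRows counter inside whatever loop builds and inserts the rows):

    var maxRowValue = import_set_table.getMaximumRows(); // 20 for Test Load, -1 for a regular load
    var insertedRows = 0;

    // ...inside the loop that builds and inserts rows:
    if (maxRowValue > 0 && insertedRows >= maxRowValue) {
        return; // stop once the Test Load 20 Records limit is reached
    }
    import_set_table.insert(row);
    insertedRows++;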

The rest of the code used is mostly what I posted above. It is essentially a Data Stream in scripted form.
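Putting the pieces together, a rough sketch of the full Data Source script (reusing the placeholder names from the snippet in my original post, and assuming the response exposes an items array with id and name fields) would look something like this:

(function loadData(import_set_table, data_source, import_log, last_success_import_time, partition_info) {
    // Columns are only created if they don't already exist
    import_set_table.addColumn('u_id', 100);
    import_set_table.addColumn('u_name', 255);

    var maxRowValue = import_set_table.getMaximumRows(); // 20 for Test Load, -1 for a regular load
    var insertedRows = 0;

    var deviceIds = getAllDeviceIds(); // placeholder lookup that returns the list of IDs
    for (var d = 0; d < deviceIds.length; d++) {
        var deviceId = deviceIds[d];
        var start = 0;
        var pages = 0;

        while (pages < MAX_PAGES) {
            var rm = new sn_ws.RESTMessageV2(REST_NAME, REST_METHOD);
            rm.setRequestHeader('Content-Type', 'application/json');
            rm.setRequestBody(JSON.stringify({ deviceId: deviceId, start: start, limit: PAGE_SIZE }));

            var data = null;
            try {
                data = JSON.parse(rm.execute().getBody());
            } catch (e) {
                import_log.warn('Error ' + e);
            }
            if (!data) break;

            var items = data.items || []; // adjust to the actual response structure
            for (var i = 0; i < items.length; i++) {
                if (maxRowValue > 0 && insertedRows >= maxRowValue)
                    return; // honor the Test Load 20 Records limit
                import_set_table.insert({
                    u_id: items[i].id || '',   // assumed field names
                    u_name: items[i].name || ''
                });
                insertedRows++;
            }

            if (items.length < PAGE_SIZE) break; // last page for this ID
            start += items.length;
            pages++;
        }
    }
})(import_set_table, data_source, import_log, last_success_import_time, partition_info);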