
Flow fails to return output when executed via script; the flow contains a for loop

CharanKulukuri
Tera Contributor

Hi all,

I’m trying to call a subflow from a Script Include (triggered via GlideAjax from a Catalog Client Script) and return the output to dynamically populate a select box on a catalog item before submission.

Current flow:

  • User selects a subnet on the Service Portal catalog item
  • onChange Client Script calls Script Include via GlideAjax
  • Script Include calls the subflow using:
var result = sn_fd.FlowAPI.getRunner()
    .subflow('global.subnet_ranges')
    .inForeground()
    .withInputs(inputs)
    .timeout(300000)
    .run();
  • Then tries to get the outputs:
var outputs = result.getOutputs();
var subnet_ranges = outputs.subnet_ranges;

Problem:
I get the error:

The current execution is in the waiting state

If I increase timeout, I instead hit:

An execution timed out with timeout of 30000 MILLISECONDS

Observation:
The subflow includes steps like:

  • Launch Job Template (Ansible)
  • Possibly waits for completion / external processing and passes outputs

So it does not complete immediately.

What I’m trying to achieve:

  • Fetch subnet ranges dynamically based on the subnet field selected on the catalog item in the Service Portal
  • Get the outputs from the subflow and populate them as options in a dropdown field (subnet_ranges) on the catalog item before submission

Questions:

  1. Is it expected that inForeground() fails if the subflow contains wait/async steps?
  2. Is there any supported way to synchronously return outputs from such a subflow?
  3. If not, what is the recommended pattern for this use case (catalog client script → dynamic dropdown)?
  4. Is using inBackground() + storing results in a custom table + polling via GlideAjax the correct approach?

Would appreciate guidance or best practices for this pattern.

Thanks!


bonjarney
Giga Contributor

**TL;DR:** `inForeground()` cannot execute subflows that contain async steps (like Ansible jobs). This is by design — not a bug. The simplest fix: cache the results with a scheduled job and skip the real-time complexity entirely.

---

## Why Your Current Code Fails

`inForeground()` runs the subflow on the calling server thread — that thread blocks, waiting for completion. Your Ansible step requires the flow engine to PAUSE (send request to Ansible, wait for callback). A foreground execution can't release, pause, and resume. The engine detects this and throws the "waiting state" error.

There's also a second timeout you may not have noticed — GlideAjax has its own client-side timeout (typically 30-60 seconds). Even if foreground execution could wait, the AJAX call would die first. Two independent clocks, neither visible to the other.
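To make that concrete: a foreground call is only safe when every step in the subflow is synchronous. Here is a minimal sketch using the same FlowAPI calls and subflow name as your code; the helper name `runSubflowSync` is made up for illustration:

```javascript
// Sketch: foreground execution works ONLY for fully synchronous subflows.
// runSubflowSync is a hypothetical helper; the FlowAPI call chain matches
// the one in the question.
function runSubflowSync(subflowName, inputs) {
    try {
        var result = sn_fd.FlowAPI.getRunner()
            .subflow(subflowName)
            .inForeground()
            .withInputs(inputs)
            .timeout(20000) // keep this BELOW the client-side AJAX timeout
            .run();
        return result.getOutputs();
    } catch (e) {
        // Async steps surface here as "waiting state" or timeout errors
        gs.error('Subflow ' + subflowName + ' did not complete synchronously: ' + e.message);
        return null;
    }
}
```

If this helper returns null for your subflow, the subflow contains a step the engine cannot run synchronously, which is exactly what the Ansible step is.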

**To your questions:**

1. Yes. `inForeground()` only works when the subflow is fully synchronous; failing on wait/async steps is expected behavior, not a defect.
2. Not for this subflow. Worse, FlowAPI via GlideAjax has TWO timeout constraints (client AJAX + server FlowAPI) that don't exist in native flow-to-subflow calls.
3. For subflows with async steps, use `inBackground()` and return results to the client another way.
4. Yes, the custom table approach is viable, and it's the pattern I'd recommend. Full implementation below.

**Quick diagnostic first:** Run your subflow manually in Flow Designer with test inputs. If it succeeds there but fails via FlowAPI, the issue is the execution mode — confirming this diagnosis. If it also fails in Flow Designer, the problem may be simpler (scope, permissions, config).

---

## The Fix — Cache Pattern

Before wiring up a polling mechanism, ask: **how often do your subnet ranges actually change?**

If the answer is "not every time someone opens the catalog item" — and for most network infrastructure data, it doesn't — you can avoid all async complexity by pre-caching the results.

**The pattern:** A scheduled job runs your Ansible subflow on an interval. The subflow writes results to a custom table. Your dropdown reads from that table via GlideAjax — fast, synchronous, no polling, no timeout complexity. The dropdown loads instantly because the data is already there.

### Step 1: Create the cache table

Create `u_subnet_range_cache` with these fields:

| Field | Type | Purpose |
|-------|------|---------|
| u_range_name | String | Subnet range identifier |
| u_range_value | String | Value for your dropdown |
| u_last_refreshed | Date/Time | When this row was last updated |
| u_ttl_expires | Date/Time | When this row is considered stale |
| u_source_job | String | Execution ID of the flow that produced this row |

### Step 2: Modify your subflow

Add a **Run Script** step as the LAST action in your existing subflow. This writes results to the cache table:

```javascript
// Add this as the final step in your subflow (Flow Designer > Run Script action)
// Replace <your range name output> and <your range value output> with your
// actual subflow output variables.

var gr = new GlideRecord('u_subnet_range_cache');
gr.initialize();
gr.setValue('u_range_name', <your range name output>);
gr.setValue('u_range_value', <your range value output>);
gr.setValue('u_last_refreshed', new GlideDateTime());
var ttlExpiry = new GlideDateTime();
ttlExpiry.addSeconds(24 * 3600); // 24 hours — match your scheduled job interval
gr.setValue('u_ttl_expires', ttlExpiry);
gr.setValue('u_source_job', fd_data.flow_execution_id);
gr.insert();
```

**Without this step, the cache table stays empty and your dropdown will never populate.**

### Step 3: Create the scheduled job

Scheduled Script Execution — triggers the subflow and cleans stale rows:

```javascript
(function() {
    var SUBFLOW_NAME = 'global.subnet_ranges'; // Replace with YOUR subflow's internal name
    var CACHE_TABLE = 'u_subnet_range_cache';
    var TTL_HOURS = 24;

    if (TTL_HOURS <= 0) {
        gs.error('Subnet Cache Refresh: TTL_HOURS must be > 0. Aborting.');
        return;
    }

    try {
        sn_fd.FlowAPI.getRunner()
            .subflow(SUBFLOW_NAME)
            .inBackground()
            .run();

        // Cleanup: target rows that expired more than one TTL interval ago
        var staleDate = new GlideDateTime();
        staleDate.addSeconds(-(TTL_HOURS * 3600));

        var gr = new GlideRecord(CACHE_TABLE);
        gr.addQuery('u_ttl_expires', '<', staleDate);
        gr.query();
        var deleteCount = gr.getRowCount();

        // Safety: don't wipe the entire table on a config error
        var totalGr = new GlideRecord(CACHE_TABLE);
        totalGr.query();
        var totalCount = totalGr.getRowCount();

        if (totalCount > 0 && deleteCount >= totalCount) {
            gs.warn('Subnet Cache Refresh: Cleanup would delete ALL ' + totalCount +
                ' rows. Skipping — check TTL_HOURS.');
        } else {
            gr = new GlideRecord(CACHE_TABLE);
            gr.addQuery('u_ttl_expires', '<', staleDate);
            gr.deleteMultiple();
        }

        gs.info('Subnet Cache Refresh: Subflow triggered, stale rows cleaned.');
    } catch (e) {
        gs.error('Subnet Cache Refresh failed: ' + e.message);
    }
})();
```
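After the job's first run, it helps to confirm rows actually landed in the cache. A quick sketch for Scripts - Background, using the table and field names from Step 1:

```javascript
// Counts cache rows that are still within their TTL window.
function countFreshCacheRows(tableName) {
    var gr = new GlideRecord(tableName);
    gr.addQuery('u_ttl_expires', '>', new GlideDateTime());
    gr.query();
    return gr.getRowCount();
}

// Usage in Scripts - Background:
// gs.info('Fresh subnet cache rows: ' + countFreshCacheRows('u_subnet_range_cache'));
```

If this returns 0 right after the job ran, check Flow Designer's execution log first: the subflow may still be waiting on Ansible.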

### Step 4: Script Include (client-callable)

```javascript
var SubnetRangeCacheReader = Class.create();
SubnetRangeCacheReader.prototype = Object.extendsObject(AbstractAjaxProcessor, {

    _ALLOWED_METHODS: ['getSubnetRanges'],

    _isCallerAuthorized: function() {
        if (!gs.hasRole('itil')) {
            gs.warn('SubnetRangeCacheReader: Unauthorized access by ' + gs.getUserName());
            return false;
        }
        return true;
    },

    _isAllowedMethod: function(methodName) {
        return this._ALLOWED_METHODS.indexOf(methodName) !== -1;
    },

    getSubnetRanges: function() {
        if (!this._isAllowedMethod(this.getParameter('sysparm_name'))) {
            return JSON.stringify({ error: 'Method not allowed' });
        }
        if (!this._isCallerAuthorized()) {
            return JSON.stringify({ error: 'Unauthorized' });
        }

        var ranges = [];
        var gr = new GlideRecord('u_subnet_range_cache');
        gr.addQuery('u_ttl_expires', '>', new GlideDateTime());
        gr.orderBy('u_range_name');
        gr.query();

        while (gr.next()) {
            ranges.push({
                name: gr.getValue('u_range_name'),
                value: gr.getValue('u_range_value')
            });
        }

        return JSON.stringify({ ranges: ranges });
    },

    type: 'SubnetRangeCacheReader'
});
```

**Note on security:** Every client-callable Script Include is accessible to any authenticated user. The `_ALLOWED_METHODS` whitelist + role check above is not optional — without it, anyone logged in can query your infrastructure data from their browser console.

### Step 5: Catalog Client Script (onLoad)

```javascript
function onLoad() {
    var ga = new GlideAjax('SubnetRangeCacheReader');
    ga.addParam('sysparm_name', 'getSubnetRanges');
    ga.getXMLAnswer(function(response) {
        try {
            var data = JSON.parse(response);
            if (data.error) {
                g_form.addErrorMessage('Unable to load subnet ranges: ' + data.error);
                return;
            }

            var choices = data.ranges || [];
            if (choices.length === 0) {
                g_form.addInfoMessage('No subnet ranges available. ' +
                    'Data may still be loading — try again in a few minutes.');
                return;
            }

            var fieldName = 'your_dropdown_variable'; // REPLACE THIS
            g_form.clearOptions(fieldName);
            g_form.addOption(fieldName, '', '-- Select a Subnet Range --');
            for (var i = 0; i < choices.length; i++) {
                g_form.addOption(fieldName, choices[i].value, choices[i].name);
            }
        } catch (e) {
            g_form.addErrorMessage('Error parsing subnet range data. Contact your administrator.');
        }
    });
}
```

---

**That's it.** No polling, no timeout alignment across three systems, no phantom executions. The dropdown loads in milliseconds because the data is already cached. The scheduled job handles the Ansible complexity in the background on a schedule you control.

If you genuinely need real-time data (the subnet ranges change between every catalog item open), you'd need a polling pattern with `inBackground()` + `setInterval()` on the client side — but in most infrastructure scenarios, caching at hourly or daily intervals is the right answer.
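For completeness, the client side of that polling pattern would look roughly like this. This is a sketch only: `SubnetRangeResultReader` and its `getResult` method are hypothetical names you would have to build yourself (a client-callable Script Include that checks a results table by execution ID); nothing below ships with the platform.

```javascript
// Hypothetical polling helper for a Catalog Client Script.
// Assumes a client-callable Script Include 'SubnetRangeResultReader' whose
// 'getResult' method returns JSON like {status: 'pending'} or
// {status: 'done', ranges: [...]} (you would need to build that yourself).
function pollForRanges(executionId, attempt, maxAttempts, onDone, onTimeout) {
    if (attempt >= maxAttempts) {
        onTimeout();
        return;
    }
    var ga = new GlideAjax('SubnetRangeResultReader');
    ga.addParam('sysparm_name', 'getResult');
    ga.addParam('sysparm_execution_id', executionId);
    ga.getXMLAnswer(function(answer) {
        var data = JSON.parse(answer);
        if (data.status === 'done') {
            onDone(data.ranges);
            return;
        }
        // Still pending: wait 3 seconds, then poll again
        setTimeout(function() {
            pollForRanges(executionId, attempt + 1, maxAttempts, onDone, onTimeout);
        }, 3000);
    });
}
```

Even then, cap maxAttempts aggressively; users rarely wait more than ~30 seconds on a catalog form, which is exactly why the cache pattern above is usually the better trade.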

*All code is provided as architectural patterns — test on your instance before deploying.*