06-01-2015 12:06 PM
We are experiencing a problem that we're having a hard time tracking down, and I'm hoping someone here can provide some insight into what might be going wrong.
We have a Record Producer (RP) with two fields in it. The RP is associated with a Template that fills in some additional information, and the whole thing creates a new Change Request (CR) ticket.
Then we have a Workflow that triggers when this particular CR ticket is created. It retrieves the values entered into the RP and does stuff with them. Specifically, it gets the sys_id of the CR ticket via current.sys_id, then runs a GlideRecord query against the question_answer table with table_sys_id set to that current.sys_id value.
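Roughly, the retrieval in the WF's Run Script activity looks like this (a simplified sketch; what we do with each answer is illustrative):
var gr = new GlideRecord("question_answer");
gr.addQuery("table_sys_id", current.sys_id);
gr.query();
while (gr.next()) {
    // do stuff with each answer -- e.g. stash it in the workflow scratchpad
    workflow.scratchpad[gr.getValue("question")] = gr.getValue("value");
}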
This setup works perfectly if I go to the RP form and type something into the fields. Whatever I type in is passed correctly to the question_answer table, and is retrieved with no problem by the WF.
In addition to all this, we have a custom Script Include (SI) which allows us to easily invoke a Record Producer via script. It has worked well for us in the past, but now has a problem: when I invoke this particular RP via the SI, the resulting workflow can't get the data out of question_answer. The data is there when I check manually, but a manual check always happens tens of seconds after the workflow has run. In those checks, everything looks perfect and all the data is where it's supposed to be, but for some reason the WF can't get it. Its retrieval is foiled at gr.isValid(), which suggests that the data hasn't hit question_answer by the time the WF runs.
So, I added some waits. I put gs.sleep calls of various lengths (1, 10, and 60 seconds) in the RP SI after it finishes inserting data into question_answer but before it returns the sys_id to the calling code (thus delaying the invocation of the WF); I tried the same thing inside the WF, just before the GlideRecord query of question_answer. Neither made any difference; the process fails to retrieve the data every time the RP SI is called.
I've included the RP SI below for reference.
So, this looks to me like caching. Somehow, the WF instance is getting a cached set of data, which doesn't include the updates to question_answer. But the waits I added should have taken care of that. Anyone have ideas for what else might be going on here? Other things I could test?
var RBA_Base_RecordProducer = Class.create();
RBA_Base_RecordProducer.prototype = {
    initialize: function(producerSysId) {
        this.producerSysId = producerSysId;
        this.producer = this.getProducer(producerSysId);
        this.targetTable = this.producer.table_name;
        this.userVariables = {};
        this.rpVariables = this.prepRecordProducerVariables();
        gs.print("producerSysId: " + producerSysId);
        gs.print("target table: " + this.targetTable);
    },

    getProducer: function(producerSysId) {
        var gr = new GlideRecord("sc_cat_item_producer");
        if (gr.get(producerSysId)) {
            gs.print("Yep, we found the RP: " + gr.name);
            return gr;
        }
        return null;
    },

    setVariable: function(name, value) {
        this.userVariables[name] = value;
    },

    setVariables: function(variableObject) {
        this.userVariables = variableObject;
    },

    submit: function() {
        var targetRecord = new GlideRecord(this.targetTable);
        targetRecord.initialize();
        targetRecord.applyTemplate(this.producer.template.name);
        var v;
        // Set mapped fields on target record
        for (v in this.rpVariables) {
            if (this.rpVariables[v].mapToField == true) {
                targetRecord.setValue(this.rpVariables[v].field, this.userVariables[v] || "");
            }
        }
        var targetSysId = targetRecord.insert();
        // If there's no target sys_id, don't create any question_answer entries --
        // prevents a bug that was slowing down the whole system for some reason
        if (targetSysId) {
            // One more loop - insert variables in question_answer table
            var qa;
            for (v in this.rpVariables) {
                qa = new GlideRecord("question_answer");
                qa.initialize();
                qa.question = this.rpVariables[v].sysId;
                qa.order = this.rpVariables[v].order;
                qa.table_name = this.targetTable;
                qa.table_sys_id = targetSysId;
                qa.value = this.userVariables[v];
                qa.insert();
            }
        }
        return targetSysId;
    },

    prepRecordProducerVariables: function() {
        var variables = {};
        var grItemOption = new GlideRecord("item_option_new");
        grItemOption.addQuery("cat_item", this.producerSysId);
        grItemOption.query();
        while (grItemOption.next()) {
            var obj = {};
            var name = grItemOption.getValue("name");
            obj.field = grItemOption.getValue("field");
            obj.mapToField = grItemOption.getValue("map_to_field");
            obj.sysId = grItemOption.getValue("sys_id");
            obj.order = grItemOption.getValue("order");
            variables[name] = obj;
        }
        return variables;
    },

    type: 'RBA_Base_RecordProducer'
};
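For completeness, the SI gets invoked roughly like this (the sys_id and variable names below are just placeholders):
var rp = new RBA_Base_RecordProducer("<sys_id of the Record Producer>");
rp.setVariable("variable_one", "some value");
rp.setVariable("variable_two", "another value");
var newChangeSysId = rp.submit();
gs.print("Created change_request: " + newChangeSysId);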
06-01-2015 04:56 PM
Miriam, it appears that the Wait for Condition activity can only operate on the same table as the workflow -- in this case, I have the workflow set to work on change_request, and I need the Wait to operate on question_answer. Do you know if that's possible?

06-02-2015 02:04 PM
Hello Ian,
Look into using the workflow scratchpad to store the RP variable values at the start of the workflow. This will help retain the values throughout the workflow's lifecycle.
Using the Workflow Scratchpad - ServiceNow Wiki
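For example, a value set in an early Run Script activity stays available to later activities (the scratchpad key below is just a placeholder):
// Early Run Script activity: capture the value once
workflow.scratchpad.rp_answer = 'value captured from the RP';

// Any later activity in the same workflow can read it back
gs.log('RP answer from scratchpad: ' + workflow.scratchpad.rp_answer);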
Thanks,
David
06-02-2015 04:26 PM
Thanks for the suggestion, David. The code I posted above is attempting to retrieve the values so they can be put into the workflow scratchpad. The issue is that retrieving the information in the first place fails, but only under certain circumstances, and I'm trying to understand why those circumstances make the information unavailable.
Are you suggesting there's a different/better way to retrieve the information? If so, I'm all ears. The workflow triggers off the change_request table (the RP creates a change_request record with certain criteria that trigger the WF) and then attempts to retrieve information that is, as far as I know, stored only in the question_answer table. And to reiterate: it works perfectly if I type the info into the RP form, but fails if I call the RP programmatically.

06-03-2015 06:55 AM
Hello Ian,
Sorry for the misunderstanding... I was more focused on the workflow side. I guess I am trying to see the benefit of kicking off a record producer to create a change request using a program... when you could code the program to directly generate the change request on which the workflow is run. However, if the program kicks off the record producer, and in turn produces a change request with the necessary variable information, then I would make sure the workflow can retrieve the values directly from the change request instead of trying to retrieve them from a different table.
However, I am also wondering about this line...
if (targetSysId)
I wonder if it should instead be the following, since you are wanting to populate the question_answer table if there is no target sys_id:
if (!targetSysId)
Thanks,
David
06-04-2015 01:54 PM
We use Record Producers to generate fairly complex Change Requests which then kick off orchestrations, and it ends up being easier to just trigger the RP programmatically than it would be to construct a Change from scratch. It allows us to abstract the behavior in a useful way.
But, I found the answer! We had conveniently scheduled a working session with SN's ITOM team for yesterday, and they were able to fill me in on what's happening.
If you look at the submit() method of the RBA_Base_RecordProducer code I included in the original post, you'll see that the Change insert() happens before the question_answer data is added. We originally did this to retrieve the sys_id (and, if no sys_id comes back, to avoid adding question_answer data at all -- question_answer records had previously been created with an empty table_sys_id column, and our instance actually went down after we accumulated too many of them).
So, the insert() on change_request inserts, as you would expect, a record into change_request. What we had not considered is that this immediately kicks off the workflow engine and triggers the associated workflow. According to Jesse on the ITOM team, the workflow transaction can be thought of as very greedy, or very high-priority: the workflow transaction thread essentially ignores everything that happens after it was triggered, including the question_answer inserts we were doing immediately afterward.
In light of this, I've modified the RBA_Base_RecordProducer code like so, around what was line 45:
        var v;
        // Set mapped fields on target record
        for (v in this.rpVariables) {
            if (this.rpVariables[v].mapToField == true) {
                targetRecord.setValue(this.rpVariables[v].field, this.userVariables[v] || "");
            }
        }
        // Generate and record the sys_id here, but don't insert the change_request record yet.
        // The insert() call triggers any associated workflows, and once a workflow has been
        // triggered, its transaction thread has a set environment -- one without the
        // question_answer entries we're about to add.
        targetRecord.setNewGuid();
        var targetSysId = targetRecord.getUniqueValue();
        // If there's no target sys_id, don't create any question_answer entries:
        // prevents a bug that was slowing down the whole system for some reason
        if (targetSysId) {
            // One more loop - insert variables in question_answer table
            var qa;
            for (v in this.rpVariables) {
                qa = new GlideRecord("question_answer");
                qa.initialize();
                qa.question = this.rpVariables[v].sysId;
                qa.order = this.rpVariables[v].order;
                qa.table_name = this.targetTable;
                qa.table_sys_id = targetSysId;
                qa.value = this.userVariables[v];
                qa.insert();
                gs.log(this.type + ': sys_id of inserted question_answer data: ' + qa.sys_id);
            }
        } else {
            gs.log(this.type + ': no target sys_id, skipping question_answer inserts');
        }
        // Inserting last means the question_answer rows exist before the workflow is triggered
        targetRecord.insert();
        return targetSysId;
    },
The key changes are that the original insert() call has been replaced by setNewGuid()/getUniqueValue(), which generates the new sys_id without actually performing an insert, and that the insert itself now happens at the bottom, just before the return. This means the question_answer data is populated before the workflow is triggered, which allows the workflow to see that data.
This seems to work, although I haven't had time to thoroughly vet the solution yet.
As for Mark Stanger's message, it's true that there's no native way to easily invoke Record Producers from a script, but that's the point of the RBA_Base_RecordProducer Script Include: it provides easy access to RPs from server-side scripts.