Future Proofing Script Includes (and tons of other scripts)

‎09-27-2017 07:32 AM
This is a request for comments on a coding standard we've adopted. Take a look and let us know where we can make improvements. This is a rather long post but this standard has already reduced our code base by 70% and reduced edits to our core script includes by around 90%. Less code and fewer edits = fewer surprises and easier testing. The attachment is the same as the content of this post.
Future Proofing Script Includes
Contributors
William Busby, Chad Kunz and Todd Wilson
Putting this out there for peer review and assistance in improvements. Features I'd like to add but just haven't found time for are:
- Modifying the 'getRecord' to return all records if no search criteria is passed
- Modifying the 'getRecord' to handle multiple criteria
- Including logic to handle CMDB tables (would need to identify the correct table first based on sys_class_name, etc.)
- Overall solidity to plug holes and ensure a definitive response to success/fail and clear error messages
- Anything else you can think of to improve this approach
Summary
The approach we've documented makes adding new script includes for table modifications a matter of minutes and, most importantly, means the code does not need to change if attributes are added to or removed from a table.
Most of the coding needed to implement new features in ServiceNow consists of manipulating table data. One of the most significant, and difficult to manage, features of ServiceNow is the ease with which tables can be extended and modified to introduce new data or behaviors. For those of us who develop the code behind the scenes, these structural changes can be a nightmare to manage. Once you've been developing for a couple of years, it's a challenge to identify all the places you've referenced the 'incident' table or that custom table added to track whatever.
Fortunately, we decided to try and alleviate these challenges after only writing dozens of script includes instead of hundreds. By the way, if you're not putting most of your common code in script includes you seriously need to rethink your development approach. This approach allows us to put all the code needed to implement CRUD changes in a single place for all tables and makes implementing these features for any new tables something that can be done in seconds. Best of all, you never (literally NEVER) need to revisit the code if new fields are added to the table.
Implementation
For the 'tldr' audience I'm going to show the approach we've implemented and those who want more details on the 'why' can read further. All the scripts below are script include definitions.
Add the following file to your script includes:
u_BASE_Table (Create a script include with this name and replace the entire code block with the text below)
var u_BASE_Table = Class.create();
u_BASE_Table.prototype = {
    initialize: function() {
        // gs.log('u_BASE_Table class instantiated');
        this.tableName = null; // will be defined in the extending class
    },
    updateRecord: function(input) {
        var str = ''; // Temporary string for message construction.
        // gs.log('updateRecord(input) = ' + JSUtil.describeObject(input));
        // Define the return object.
        var result = {};
        result.success = false;
        result.returnCode = null;
        result.message = null;
        result.data = null;
        // Parse the input.
        var request = JSON.parse(input);
        var grRec = new GlideRecord(this.tableName);
        grRec.initialize();
        if (typeof request.sys_id != 'undefined') {
            // This is an update!!!
            grRec.get(request.sys_id);
        }
        // Apply the requested attributes.
        for (var attribute in request) {
            // Exclude attempting to update the sys_id.
            if (attribute != 'sys_id') {
                // gs.log('updateRecord(): Checking validity of field ' + attribute);
                if (grRec.isValidField(attribute)) {
                    grRec[attribute] = request[attribute];
                } else {
                    // Regardless of update success/fail - list the invalid fields on return.
                    str = 'Field not valid ' + attribute;
                    result.message = (result.message == null) ? str : result.message + '; ' + str;
                }
            }
        }
        try {
            var returnCode = grRec.update();
            result.success = true;
            result.returnCode = returnCode.toString();
            str = this.tableName + ' record updated. sys_id: ' + returnCode;
            result.message = (result.message == null) ? str : result.message + '; ' + str;
            // gs.log('updateRecord(): ' + str);
            result.data = result.returnCode;
        } catch (ex) {
            str = this.tableName + ' record not updated.';
            result.message = (result.message == null) ? str : result.message + '; ' + str;
            // gs.log('updateRecord(): ' + str);
            result.data = null;
        }
        return JSON.stringify(result);
    },
    getRecord: function(input) {
        // gs.log('getRecord(input) = ' + JSUtil.describeObject(input));
        // Define the return object.
        var result = {};
        result.success = false;
        result.returnCode = null;
        result.message = null;
        result.data = null;
        // Parse the input.
        var request = JSON.parse(input);
        // gs.log('getRecord parsed input = ' + request);
        // Retrieve the record.
        var record = {};
        var grRec = new GlideRecord(this.tableName);
        grRec.initialize();
        if (grRec.get(request.sys_id)) {
            // gs.log('getRecord(): Getting data for ' + this.tableName + ' ' + grRec.number);
            for (var attribute in grRec) {
                // Declare value holders to keep them in scope.
                var value = '';
                var displayValue = '';
                value = grRec[attribute].toString();
                try {
                    displayValue = grRec[attribute].getDisplayValue();
                } catch (ex) {
                    // Do nothing.
                }
                record[attribute] = value;
                if (value != displayValue) {
                    record[attribute + '_dv'] = displayValue;
                }
            }
            result.success = true;
            result.message = this.tableName + ' details returned in data obj';
            result.data = record;
        } else {
            result.message = this.tableName + ' record could not be retrieved';
        }
        var resultJSON = JSON.stringify(result);
        // gs.log('getRecord(): result = ' + JSUtil.describeObject(resultJSON));
        return resultJSON;
    },
    newRecord: function(input) {
        // Since GlideRecord update() will perform an insert if the record is not present
        // we just pass the input as is (JSON) to the updateRecord function and pass
        // back the return object (JSON) as is with no additional processing.
        return this.updateRecord(input);
    },
    type: 'u_BASE_Table'
};
Now, for each table you want to implement CRUD operations against simply create a new script include and make a small edit to enable calls to the base table. The example below is for the sys_user table. Disclaimer — we don't use the 'new' function on sys_user since we import from LDAP, this is just an example applicable to any table.
var u_USER = Class.create();
u_USER.prototype = Object.extendsObject(u_BASE_Table, {
    initialize: function() {
        this.tableName = 'sys_user';
    },
    newUser: function(input) {
        return new u_USER().updateRecord(input);
    },
    updateUser: function(input) {
        return new u_USER().updateRecord(input);
    },
    getUser: function(input) {
        return new u_USER().getRecord(input);
    },
    deleteUser: function(input) {
        // Nothing actually gets deleted, just set to inactive.
        // Assuming the input (JSON) already has the sys_id of the record to
        // inactivate, so parse it, add the 'active' attribute and call an update.
        var request = JSON.parse(input);
        request.active = false;
        return new u_USER().updateRecord(JSON.stringify(request));
    },
    type: 'u_USER'
});
After creating a new script include for a table, the edit that enables calls to the base table (without having to explicitly reference it) is to change the default structure of:
var <SI Name> = Class.create();
<SI Name>.prototype = {
    initialize: function() {
    },
    yourFunction: function() {
        // your code
    },
    type: '<SI Name>'
};
To (note we use a 'friendly' name in each function so the calling script can seem natural):
var <SI Name> = Class.create();
<SI Name>.prototype = Object.extendsObject(u_BASE_Table, {
    initialize: function() {
        this.tableName = 'sys_user'; // update this with the table to maintain
    },
    newUser: function(input) {
        return new <SI Name>().updateRecord(input);
    },
    updateUser: function(input) {
        return new <SI Name>().updateRecord(input);
    },
    getUser: function(input) {
        return new <SI Name>().getRecord(input);
    },
    deleteUser: function(input) {
        var request = JSON.parse(input);
        request.active = false;
        return new <SI Name>().updateRecord(JSON.stringify(request));
    },
    type: '<SI Name>'
}); // don't forget this closing paren
These three minor edits, along with the functions defined, give you all the functions you'll ever need to maintain any table in ServiceNow, and you'll never have to change the code when the table attributes change. Of course, we usually add additional functions for calculations, etc., but we always use these defined functions for actual table changes or retrievals. Makes everything behave the same.
Calling your script include from any script is as simple as the following example:
var input = {}; //create an object of attributes to modify - sys_id needed to find record
input.sys_id = '6526372b4f22c20078fd97dd0210c765';
input.cost_center = '3312';
input.employee_number = '98765432';
var inputObj = JSON.stringify(input); //convert to JSON
// Call the server side script include to update the record.
var resultObj = new u_USER().updateUser(inputObj);
// Process the result.
var result = JSON.parse(resultObj); // convert from JSON
gs.log('success = ' + result.success);
gs.log('returnCode = ' + result.returnCode);
gs.log('message = ' + result.message);
gs.log('data = ' + result.data);
Notice we always pass and retrieve JSON with our script includes. We do a lot of REST calls and default to JSON for passing data back and forth. This lets us use the REST data natively but it's simple to convert to/from JSON with javascript objects.
The 'get' function is enhanced to retrieve the display value for any reference field so you get both the reference sys_id and the display in two fields. An example might be 'user' returning the sys_id reference to the sys_user table and 'Penelope Pitstop' in the field 'user_dv'.
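The '_dv' convention is easy to see in isolation. The sketch below mimics the loop inside getRecord in plain JavaScript; buildRecord and the sample field data are hypothetical stand-ins for GlideRecord, not part of the script include.

```javascript
// Build a flat record object, adding an '_dv' companion key whenever the
// display value differs from the stored value (stand-in for the loop in getRecord).
function buildRecord(fields) {
  var record = {};
  for (var name in fields) {
    var value = fields[name].value;
    var displayValue = fields[name].displayValue;
    record[name] = value;
    if (value !== displayValue) {
      record[name + '_dv'] = displayValue;
    }
  }
  return record;
}

// 'user' is a reference field: the stored sys_id differs from its display value,
// so it gets a 'user_dv' companion; 'active' does not.
var record = buildRecord({
  user: { value: '5137153cc611227c000bbd1bd8cd2005', displayValue: 'Penelope Pitstop' },
  active: { value: 'true', displayValue: 'true' }
});
```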
Features leveraged
Coding Standards
The key to making something simple to implement is having a concise method that is repeatable without needing to employ 'one-off' criteria. In our case, it means defining a standard method of passing data in and out of script include calls that is flexible enough to handle any foreseeable use case while being simple and easy to adopt. Standards = good. Everybody doing things their own way = bad. If you can't follow standards save yourself some time and stop reading now. One of the benefits realized early on is you can write code to leverage script includes without even having to check the syntax needed to pass data in or out, everything works the same way so everything is predictable.
JavaScript Objects and JSON
If you don't understand them start Googling. If you do understand them embrace them like a long-lost love. Using flexible structures is key to processing data beyond a single attribute — which is 99.9% of the coding in ServiceNow.
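As a minimal illustration of the round trip our standard relies on (plain JavaScript, no ServiceNow APIs; the field values are made up):

```javascript
// The calling script builds a plain object, serializes it to JSON to cross the
// script include boundary, and the script include parses it back into an object.
var input = {};
input.sys_id = '6526372b4f22c20078fd97dd0210c765';
input.cost_center = '3312';

var wire = JSON.stringify(input); // what crosses the boundary
var request = JSON.parse(wire);   // what the script include sees

// Keys and values survive the round trip unchanged.
console.log(request.cost_center); // '3312'
```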
GlideRecord Updates
One of the seemingly lesser known, but really cool, features of GlideRecord as implemented by ServiceNow is that the 'insert()' function is largely superfluous. From the ServiceNow wiki for GlideRecord regarding the 'update()' function, "Updates the GlideRecord with any changes that have been made. If the record does not already exist, it is inserted."
Include Directive (sorta)
If you've coded in C or just about any language following Atari Basic, you've been exposed to this or something similar at the head of a code file:
#include <stdio.h>
Which is just a shortcut for saying "take everything in the 'stdio.h' file and act like I typed it into this file." Using the base table script file is something close to that, which saves us a significant amount of code and reduces errors dramatically.
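Object.extendsObject is ServiceNow-specific, but its effect can be approximated in plain JavaScript with Object.create; the Base and Child names below are purely illustrative:

```javascript
// A rough stand-in for Object.extendsObject: the child prototype chains to the
// base prototype, so base methods are inherited "as if typed into" the child.
var Base = function() {};
Base.prototype.getRecord = function() {
  return 'base getRecord for ' + this.tableName;
};

var Child = function() {
  this.tableName = 'sys_user'; // what initialize() sets in the real pattern
};
Child.prototype = Object.create(Base.prototype);
Child.prototype.getUser = function() {
  return this.getRecord(); // delegates to the inherited base method
};

var result = new Child().getUser(); // 'base getRecord for sys_user'
```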
Javascript Object Keys
This is the meat and potatoes of our implementation. Most script includes I've seen used to interface with ServiceNow tables require the code to make explicit reference to attributes, which requires that you know them up front. Any new ones need to be added while deprecated ones need removal. This is a maintenance nightmare and poses a significant risk over the lifespan of any code base.
By taking the input as an object (JSON in our case, which we convert to a JavaScript object) we can process the input data by iterating through the object and just using the generic scalar 'attribute' to capture the field name of the attribute being modified. It doesn't matter how many attributes are passed or what their names are. We also validate that the field references are valid and return a message listing any found invalid, but still allow the action to proceed by excluding the invalid field references. You might prefer to fail the whole update instead; just change the base script file and everything else behaves the same way without further edits, which is a key feature of our approach.
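Stripped of GlideRecord, the key-iteration idea looks like this; applyInput and the validFields whitelist are hypothetical stand-ins for the isValidField() checks in updateRecord:

```javascript
// Apply only the attributes that exist on the table; collect the rest into a
// message instead of failing the whole update (mirroring updateRecord above).
function applyInput(request, validFields) {
  var applied = {};
  var message = null;
  for (var attribute in request) {
    if (attribute === 'sys_id') { continue; } // never overwrite the sys_id
    if (validFields.indexOf(attribute) !== -1) {
      applied[attribute] = request[attribute];
    } else {
      var str = 'Field not valid ' + attribute;
      message = (message === null) ? str : message + '; ' + str;
    }
  }
  return { applied: applied, message: message };
}

var out = applyInput(
  { sys_id: 'abc', cost_center: '3312', bogus_field: 'x' },
  ['cost_center', 'employee_number'] // stand-in for the table dictionary
);
// out.applied holds only cost_center; out.message flags bogus_field
```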
Coding Standards
Our approach is simple and addresses the naming of script includes, default functions, things being passed into script includes and the data returned from script includes. We also address the need for server and client side script includes. This might seem a bit restrictive to some but it really does pay off in the long run.
Naming Standards
We've chosen to start object names with 'u_' in keeping with the way ServiceNow identifies customer-created objects. This keeps our stuff easy to identify and makes it predictable to run queries against tables of objects to find our stuff without having to question whether something was out of the box (OOB). We've applied this to everything, not just script includes.
We create a script include for every table we interact with. This is critical because we want the same base functions available for each table without having to concern ourselves with conflicts.
Our standard distinguishes between server and client side script includes.
Examples of our script include names are:
u_CHG — server side change request script include
u_CHG_Ajax — client side change request script include
u_CTASK — server side change task script include
u_CTASK_Ajax — client side change task script include
Default Functions
We always implement the base functions to create new records, update existing records, retrieve records and delete (inactivate) records in every script include, without exception. Then we include any custom functions. If the custom functions need to interact with the table, they always call the base functions so we don't introduce anomalies in table interactions. For example, you may want a function 'cancelChange' which just takes a sys_id. Your 'cancelChange' will add the attributes needed to cancel the change, then call the 'u_CHG().updateChange' base function to actually modify the record.
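A rough sketch of that delegation pattern, with a stubbed-out base function standing in for the real updateChange (the object name, stub behavior and the 'state' value are illustrative only):

```javascript
// The custom function only assembles attributes, then delegates the actual
// write to the base function so every table change goes through one path.
var u_CHG_sketch = {
  updateChange: function(input) { // stand-in for the real base call
    return JSON.stringify({ success: true, data: JSON.parse(input) });
  },
  cancelChange: function(sysId) {
    // Add the attributes needed to cancel, then hand off to the base function.
    var input = { sys_id: sysId, state: 'canceled' };
    return this.updateChange(JSON.stringify(input));
  }
};

var result = JSON.parse(u_CHG_sketch.cancelChange('abc123'));
```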
Passing Data
We've adopted the standard that all data passed to a script include is JSON, in whatever structure the function needs, as a single object called 'input'. The end result is always passed out of the script include as a JSON object with the following attributes:
success — Boolean, true/false
returnCode — String, usually used for error codes if success is false
message — String, any text needed to clarify or add additional details from the function
data — String or Object, any additional detail that doesn't really fit the 'message' concept. Typically a sys_id if creating a record or the entire record structure if getting data
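A tiny helper makes the envelope concrete; makeResult is our own illustrative name, not part of the original scripts:

```javascript
// Build the standard return envelope that every script include emits as JSON.
function makeResult(success, returnCode, message, data) {
  return JSON.stringify({
    success: success,       // Boolean
    returnCode: returnCode, // String, usually an error code when success is false
    message: message,       // String, clarifying text
    data: data              // String or Object, e.g. a sys_id or a whole record
  });
}

// A success envelope carrying a record, and a failure envelope with no data.
var ok = JSON.parse(makeResult(true, '0', 'sys_user details returned in data obj', { name: 'Penelope' }));
var fail = JSON.parse(makeResult(false, 'ERR_NOT_FOUND', 'sys_user record could not be retrieved', null));
```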
Code Administration Discipline
We always use the table or a clear abbreviation for the table in the script include name and never mix functions that interact with multiple tables in the same script include. It may seem tedious to call u_CHG to create a change request then grab the returned sys_id to call u_CTASK to add a task to the change but it keeps everything very clear and simplifies the interactions with REST and Orchestration custom activities by keeping everything in distinct black boxes.
To reduce points of administration our client callable script includes simply call the server side code to implement CRUD operations.
‎11-04-2018 11:12 AM
Some thoughts:
- Get away from the creation pattern and Object.extendsObject.
- Make the GlideRecord wrapper a singleton.
- Don't catch exceptions in the wrapper layer; instead, let the business layer/script include using it catch errors and react accordingly. The CRUD layer shouldn't dictate how service layers react to errors. Its function is purely CRUD.
- Provide a mechanism to return precisely what GlideRecord returns for update, insert, delete, etc. If a wrapper is created, it should return the exact behavior the vendor expects, plus whatever customizations are desired.
I tend to like the idea of this:
var GlideInsert = {
    // Utility class for all inserts
    insert: function(options) {
        var record = new GlideRecord(options.table);
        record.initialize();
        return function handlePayload(payload) {
            PayloadHandler.handlePayload(record, payload);
            if (!options.toCustom) {
                return record.insert();
            } else {
                // return custom object
            }
        };
    }
};

// callers
var AddressBAO = {
    insertNewAddress: function(current, addressOptions, addressFieldOptions) {
        var payload = PayloadExtract.getFieldValues(current, addressFieldOptions);
        GlideInsert.insert(addressOptions)(payload);
    }
};

var AddressOptions = (function() {
    /* this is done so that control for database options and operations is a configuration */
    return JSON.parse(gs.getProperty('address_option'));
})();

/* properties */
address_options = {
    "table": "Address",
    "newBusinessAddressFields": ["address1", "address2", "address3", "state", "zip"],
    "newHomeAddressFields": ["address1", "address2", "state", "city", "zip"]
};

/* Business Rule */
AddressBAO.insertNewAddress(current, AddressOptions, AddressOptions.newBusinessAddressFields);
The strategy is to stop having to code multiple scripts, extend others, separate concerns, and drive behavior through configuration.

‎01-31-2019 11:02 AM
Good feedback; I'll be incorporating some of this into my revisions of the approach. I agree with most of your comments, but your code examples don't cover some of the issues I'm trying to solve. Tables change over time, and I've incorporated validity checks to ensure the table being addressed exists and that each attribute of the JSON object is valid.
‎01-31-2019 12:01 PM
I like the idea of the 'divination' game. What can change and how...
Ideally all future table changes should be caught with an ATF, meaning that JSON objects will fail before making it to prod. Yet the question, if I understand it correctly, is: how can notification be sent to the caller (code) that a structural change has happened to the table? The question is of less significance under ATF, as testing must happen before promoting to production, so fool-proofing might not be as necessary in that case.
As part of an ATF run, all table options objects (JSON) would be scanned against the metadata for each table. A MetaDataAudit script include runs to match fields to table and field descriptors, yielding the results of the test.
That same utility class can then be exposed as either an endpoint for web services to consume, or an adapter sitting between incoming queries and the destination. This divides the concerns more cleanly, and the checks can even be passed into your GlideWrapper class as audit strategy functions rather than embedding the functionality.
The goal really is to save cycle time in production by removing code whose sole purpose is redundancy checks. The thought of redundancy checks alone speaks to me more about testing and process than it does about informing the developer at run time.
The separation then allows the GlideRecord API to function precisely as intended with the added behavior you've given it, whether through injecting functions that check for compliance or an adapter sitting between a web service and your GlideRecord.
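One way to sketch that metadata audit in plain JavaScript; auditFields and the descriptor shape are assumptions, since a real check would read the table's dictionary:

```javascript
// Compare the fields referenced by a JSON options object against a table
// descriptor and report anything that no longer exists on the table.
function auditFields(optionFields, tableDescriptor) {
  var missing = [];
  for (var i = 0; i < optionFields.length; i++) {
    if (!tableDescriptor.fields.hasOwnProperty(optionFields[i])) {
      missing.push(optionFields[i]);
    }
  }
  return { pass: missing.length === 0, missing: missing };
}

// Hypothetical descriptor standing in for real table metadata.
var descriptor = {
  table: 'Address',
  fields: { address1: {}, address2: {}, state: {}, zip: {} }
};

// 'city' is referenced by the options but no longer exists on the table.
var audit = auditFields(['address1', 'address2', 'city', 'zip'], descriptor);
```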

‎02-04-2019 11:03 PM
That's an interesting approach to leveraging ATF that I hadn't considered. If I understand your suggestion, you're recommending using ATF to directly validate the structure of a table in addition to the traditional ATF test cases for process, correct? It seems this approach would also allow validation of ACLs, if you have test users for each role that requires access to the tables, before you even run the tests for process validation. Interesting...