on 03-23-2017 12:09 PM
There have been some questions in the forum about the "Use Last Run Datetime" field, so I have attempted here to explain it in a bit more detail than what is in the wiki. The checkbox appears when the type of the data source is JDBC. If you select the checkbox, you will see two additional input fields, Last run datetime and Last run database field, as shown below in the screenshot.
Last run database field is the column in the source table that you want to compare against on an ongoing basis. It is not a column on your import set table but the exact column name in your source database.
So say you have a table that is populated on a daily basis, and you would like to fetch only the delta records since the last time you fetched them:
To do that, point to the table and leave the Last run datetime field empty. Before we proceed, note that there is a business rule, "Format last run datetime", on the Data Source table. You can edit this business rule if the date format of the incoming data source needs to be changed; I would generally leave it unedited.
Once the initial load has run, ServiceNow automatically populates Last run datetime with the date value from that column; you do not have to fill this field in manually. When your next job runs, it executes a SQL statement that compares the load date in your source database against the Last run datetime field.
Something like:
SELECT * FROM tableName WHERE load_date >= [value from Last run datetime field]
Note: since the operator is greater than or equal to, it also pulls rows whose date equals the stored Last run datetime value, in addition to later dates.
If you want the delta to start from a specific date, you can also make a manual entry into the Last run datetime field.
This ensures that we keep fetching data on a delta basis and do not have to load the full table each time. Note that your SQL query cannot contain a WHERE clause or any other condition when you use the Last run datetime field; it should be just SELECT * FROM tableName. If you need filtering, I would suggest creating a database view that contains all the records you need and selecting from that.
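The delta pattern described above can be sketched outside ServiceNow. Below is a minimal illustration in Python with sqlite3; the table name `source_table`, the `load_date` column, and the stored last-run value are hypothetical stand-ins for your source table and the data source record's Last run datetime field, not ServiceNow's actual internals.

```python
import sqlite3

# Hypothetical source table standing in for the JDBC source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_table (id INTEGER, load_date TEXT)")
conn.executemany(
    "INSERT INTO source_table VALUES (?, ?)",
    [
        (1, "2017-03-20 08:00:00"),
        (2, "2017-03-21 08:00:00"),
        (3, "2017-03-22 08:00:00"),
    ],
)

def fetch_delta(conn, last_run_datetime):
    """Mimic the query the platform builds:
    SELECT * FROM tableName WHERE load_date >= [last run datetime]."""
    rows = conn.execute(
        "SELECT * FROM source_table WHERE load_date >= ?",
        (last_run_datetime,),
    ).fetchall()
    # After the import, Last run datetime is updated from the column value,
    # so the next run only fetches newer (or equal) rows.
    new_last_run = max(r[1] for r in rows) if rows else last_run_datetime
    return rows, new_last_run

# A delta run: the stored Last run datetime came from the previous load.
rows, last_run = fetch_delta(conn, "2017-03-21 08:00:00")
print(rows)      # rows 2 and 3: the >= operator includes the boundary row
print(last_run)  # the new stored value, taken from the newest row
```

Note how the boundary row (id 2) is returned again because of the `>=` comparison, which is exactly the behaviour described in the note above.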
Hi Venkat, It sure is a good read. Thank you. Could you however elaborate upon the Last Run Database field's functionality? Thanks. Urmilla
Hi Urmila,
Thank you. Last run database field is the column in your incoming source table that is used to auto-populate Last run datetime. For example, say you have a column called build_date in your source database that changes whenever records are inserted or updated in that table. You can get just the delta changes by listing that column name in Last run database field. When ServiceNow loads the records, it updates the Last run datetime field with the value from this column, and then on an ongoing basis it checks whether the source table's records have a build_date greater than or equal to the Last run datetime field.
Every time you import data, the Last run datetime field is populated with the new value, which keeps the delta feature in place.
Is there any impact if the SQL statement has a GROUP BY?
We are seeing some errors when the query has a GROUP BY. The first load works, but it fails for subsequent runs...
Hi Ganesh,
I wouldn't use GROUP BY, as that clause must come after the WHERE clause, and the WHERE clause in this case gets appended dynamically. It would be better to create a database view and run the SELECT * statement against that view.
Hope it helps.
Thanks
Venkat
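The view workaround Venkat describes can be sketched as follows (Python/sqlite3; the table, view, and column names are hypothetical). The aggregate query lives inside the view, so the dynamically appended WHERE clause lands after a plain SELECT * and remains valid SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (host TEXT, load_date TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", "2017-01-01"), ("a", "2017-01-02"), ("b", "2017-01-02")],
)

# A GROUP BY query cannot take the appended WHERE clause directly,
# because WHERE must come before GROUP BY. Wrap the aggregate in a view.
conn.execute(
    "CREATE VIEW events_summary AS "
    "SELECT host, MAX(load_date) AS load_date FROM events GROUP BY host"
)

# The data source can now use a plain SELECT *, and the clause that
# gets appended (WHERE load_date >= ...) is syntactically valid.
rows = conn.execute(
    "SELECT * FROM events_summary WHERE load_date >= ?", ("2017-01-02",)
).fetchall()
print(rows)
```

The same idea applies to the source database you are actually pulling from: create the view there, then point the data source at the view.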
Thanks Venkat... Is it possible to modify any business rule to make this feature work along with GROUP BY?
Hi Ganesh,
To my knowledge this piece is not exposed anywhere we could change the clause order. The only thing that is exposed is the format of the last run datetime. You might have to create a view at the database level you are pulling the data from to implement this logic.
Alternatively, since the transform runs after the data is imported, an onStart transform script that runs at the start of the transformation could be a place to put this logic.
Dear Venkat,
Thanks for the post, it's very helpful.
I have a question on the last run time: the field in my database captures run time in epoch format, so how do I pass the last run time so that it gets used when running the data load?
Best Regards
Anil
Hi Anil,
I am happy you found it useful.
If the default format is different from yours, edit the time format specified in the "Format last run datetime" business rule to match your date-time string.
Note: although you can work around this problem by deactivating the "Format last run datetime" business rule, in that case the system does not update the Last run datetime value.
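As a rough illustration of what a format adjustment has to accomplish for an epoch column (this is a hedged Python sketch with a hypothetical epoch value; the actual business rule is JavaScript inside ServiceNow, not shown here):

```python
from datetime import datetime, timezone

# A hypothetical epoch value stored in the source column (seconds, UTC).
epoch_value = 1490270940

# Convert it to the kind of datetime string the >= comparison expects.
formatted = datetime.fromtimestamp(epoch_value, tz=timezone.utc).strftime(
    "%Y-%m-%d %H:%M:%S"
)
print(formatted)  # 2017-03-23 12:09:00

# And back the other way, which is what you need when the stored
# Last run datetime must be compared against an epoch column.
back_to_epoch = int(
    datetime.strptime(formatted, "%Y-%m-%d %H:%M:%S")
    .replace(tzinfo=timezone.utc)
    .timestamp()
)
print(back_to_epoch)  # 1490270940
```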
Dear Venkat,
Thanks for the response. I have this line at the end of the query:
with ur for fetch only
So when I set the Last run database field to abc.lastmodifiedtime, it errors out while executing the query. abc is the alias of the table in the query.
MID Server reported error: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=select
case
when co;BEGIN-OF-STATEMENT;<query_expr_body>, DRIVER=3.69.66
at com.ibm.db2.jcc.am.gd.a(gd.java:749)
Hi Anil,
Please use the column name directly, i.e. lastmodifiedtime. Even though the query uses an alias, you can skip the alias here as long as the main table you are fetching from has this column.
Hi Venkat,
Thanks for posting this! Very informative. I was wondering whether the out-of-box code will support multiple data source types? We are currently connected to SQL Server and Oracle databases but may add more later.
Hi Alex, it does. I used it after adding a new data source for PostgreSQL. Since it basically appends a WHERE clause to the SQL, it is independent of the database server, as long as the server supports SQL queries.
Great, thank you Venkat!
How does this impact Web Services integrations?
Hello Curtis,
It helps you in fetching delta records on an ongoing basis.
Hi Alex,
Can you please let us know the change you made in the BR for the epoch format? We also have a similar requirement!
One thing to note is that this option DOES NOT work with sophisticated queries using left/right joins and unions when there are TWO WHERE clauses.
An exception MIGHT be a query with an IN clause and only one WHERE, or one where the WHERE clause is OUTSIDE the IN.

I've got multiple queries running with 5+ joins in them with no major issues with the last run datetime.

I have an issue with the load_date >= value (from the example above). For low-volume tables, my query picks up the last entry repeatedly. For example, I run the data source every minute, and in our pre-prod environment there aren't necessarily new log entries every minute. My last run datetime may be '2019-02-23 11:23:36.75', and if no new log entry has been pulled since then, the internal query's >= (emphasis on the equals sign) grabs that last log entry again. Repeatedly. It would make more sense to use just > on the last run datetime, especially since it is returned down to hundredths of a second.
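The duplicate-row behaviour described above can be reproduced in a small sketch (Python/sqlite3; the table and timestamp values are hypothetical): with >=, the boundary row comes back on every run even when nothing new has arrived, while a strict > would exclude it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT, created TEXT)")
conn.execute("INSERT INTO log VALUES ('last entry', '2019-02-23 11:23:36.75')")

last_run = "2019-02-23 11:23:36.75"  # the stored Last run datetime

# The platform-style >= comparison: the already-imported boundary
# row matches again on every run with no new data.
dupes = conn.execute(
    "SELECT * FROM log WHERE created >= ?", (last_run,)
).fetchall()

# A strict > comparison skips the already-imported row.
clean = conn.execute(
    "SELECT * FROM log WHERE created > ?", (last_run,)
).fetchall()

print(len(dupes), len(clean))  # 1 0
```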
Hi Dant,
I didn't face this issue, but you have a valid point. Feel free to tweak that condition.

Unfortunately, that's part of the SN platform....unless you know how to change that 🙂
Yes that's correct. I was not sure whether they opened it up as I had written this post when we were in Helsinki.
Can we use this field for LDAP Type Data Source?
It does not work for me. After the initial load, the "last run datetime" field was populated once. On any following runs it loaded everything, and the "last run datetime" field no longer changes.
Yes, it doesn't work well in those cases.
I see quotes around the logon time; I don't know if that is causing the issue here. That column should be available in your SQL query response.
I have the same issue! What's the point of using >= when you have a large data set with the same datetime value that has already been loaded?

If they could add a single millisecond to the last run datetime value, that would fix my issue.
Yeah, that's what I did myself on the SQL side, so I wasn't pulling the full set of 5k records with the same date value every time.
I have an idea that might work for this; I'll post another comment if it works.
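One way to do that nudge is to bump the stored value by a millisecond before comparing, which turns the effective >= into a strict >. A hedged sketch in Python (the timestamp value and format are hypothetical; in practice this adjustment would live on the SQL side or in the "Format last run datetime" business rule):

```python
from datetime import datetime, timedelta

last_run = "2019-02-23 11:23:36.750"  # hypothetical stored Last run datetime

# Bump the value by one millisecond so the platform's >= comparison
# no longer matches the already-imported boundary row.
bumped = (
    datetime.strptime(last_run, "%Y-%m-%d %H:%M:%S.%f")
    + timedelta(milliseconds=1)
).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]  # trim microseconds to milliseconds
print(bumped)  # 2019-02-23 11:23:36.751
```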
Hello Venkat,
Do we still have the limitation (no WHERE condition or any other conditions in the SQL query) on recent versions like Paris or Quebec?
If yes, does this limitation apply both when you make a manual entry into Last run datetime (as you mentioned) and when you do not?
Thank you,
Anup Saha
Hello Anup,
I checked just now in my personal instance and I see the same. It is mentioned within the business rule Format last run datetime on Data Sources:
// The filter has the following syntax:
// SELECT * FROM [tableName]
// WHERE [last_run_database_field] >= [last_run_datetime]
//
Is there a way to change the business rule so it compares down to minutes and seconds?