Scheduled Job stopped silently with no log

kozy_f
Tera Expert

Team, we created a scheduled job that is expected to run for a few hours, given the volume of data being processed.

We ran the job manually, but it stopped after a seemingly random duration (first run 3 hours, second run 30 minutes, third run 24 minutes).

We looked into the System Log and the Transaction Log, but no relevant entries were found there.

We re-ran the scheduled job and it started processing data again, so the data itself does not appear to be the cause.

If you have any idea why this happened, please let me know.

1 ACCEPTED SOLUTION

kozy_f
Tera Expert

Looking into the system log, we found that ServiceNow has OOTB behaviour that stops logging once a single transaction writes more than 200,000 log statements. The following warning was found in our instance's system log.

*** WARNING *** Maximum per transaction log statements (200000) reached. Suppressing further logging *** 


3 REPLIES

Tony Chatfield1
Kilo Patron

Hi, with no clear details of your script or your debugging, I don't think the forum can provide much in the way of diagnostics.
Unless there is a recursive component that is causing the code to terminate, or your code consumes so much memory that the node it is running on restarts, the most likely cause is an error at runtime, i.e. an unexpected result / a field or value that doesn't exist / a NULL or similar.
Perhaps you should add some debugging and/or wrap your code in some level of try/catch, as you cannot investigate something unexplained when you have no information or context for reference.
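To make that concrete, here is a minimal sketch of the try/catch plus debug-logging idea, assuming a hypothetical staging table and query; the real job's tables, fields, and per-record processing will differ.

// Illustrative only: 'u_staging_record' and the 'state' query are stand-ins
// for whatever the real scheduled job actually processes.
(function runScheduledJob() {
    var processed = 0;
    try {
        var gr = new GlideRecord('u_staging_record');
        gr.addQuery('state', 'pending');
        gr.query();
        while (gr.next()) {
            // ... per-record processing goes here ...
            processed++;
        }
        gs.info('Scheduled job completed normally; processed ' + processed + ' records');
    } catch (e) {
        // Surface the failure instead of letting the job die silently
        gs.error('Scheduled job failed after ' + processed + ' records: ' + e.message);
    }
})();

With something like this in place, a run that dies partway through at least leaves a final error entry with a record count, which gives you a starting point for investigation.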

 

kozy_f
Tera Expert

Looking into the system log, we found that ServiceNow has OOTB behaviour that stops logging once a single transaction writes more than 200,000 log statements. The following warning was found in our instance's system log.

*** WARNING *** Maximum per transaction log statements (200000) reached. Suppressing further logging *** 
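As a rough illustration of how a long-running job could stay under that 200,000-statement limit, the sketch below only logs progress every 1,000 records; the table name and interval are assumptions, not the original job's code.

// Log progress every LOG_EVERY records rather than once per record, so a
// long run stays well under the 200,000 per-transaction limit.
var LOG_EVERY = 1000;
var count = 0;
var gr = new GlideRecord('u_staging_record'); // hypothetical table
gr.query();
while (gr.next()) {
    // ... per-record processing ...
    count++;
    if (count % LOG_EVERY === 0) {
        gs.info('Progress: ' + count + ' records processed');
    }
}
gs.info('Finished: ' + count + ' records processed in total');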

abhishek_s
Tera Contributor

This KB should help -

https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0656906

 

The limit of 200,000 applies not only to gs.log statements that you may have coded, but also to any node logs the job generates on its own, such as slow SQL statements or gs.print output. So even if you have barely put 2,000 records into the syslog, you may still run into this issue. Rewrite the job to run over a smaller timeframe / batch of records.
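As a sketch of that batching advice (not the original job's code), each execution below processes at most a fixed number of records and flags what it has handled, assuming a hypothetical staging table with a u_processed field, so the next scheduled run picks up where this one left off.

// Process at most BATCH_SIZE records per execution so no single transaction
// runs long enough (or logs enough) to hit the suppression limit.
// 'u_staging_record' and 'u_processed' are hypothetical names.
var BATCH_SIZE = 5000;
var gr = new GlideRecord('u_staging_record');
gr.addQuery('u_processed', false);
gr.setLimit(BATCH_SIZE);
gr.query();
var count = 0;
while (gr.next()) {
    // ... per-record processing ...
    gr.setValue('u_processed', true);
    gr.update();
    count++;
}
gs.info('Batch complete: ' + count + ' records processed this run');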