12-03-2022 08:07 PM
Team, we created a scheduled job that is expected to run for a few hours, given the volume of data it processes.
We ran the job manually, but it stopped after seemingly random durations (first run: 3 hours, second run: 30 minutes, third run: 24 minutes).
We looked into the System Log and the Transaction Log, but found no relevant entries.
When we re-ran the scheduled job it started processing data again, so the data itself does not appear to be the cause.
If you have any idea why this happened, please let me know.
12-03-2022 10:51 PM
Hi, with no clear details of your script or your debugging, I don't think the forum can provide much in the way of diagnostics.
Unless there is a recursive component causing the code to terminate, or your code consumes so much memory that the node it runs on restarts, the most likely cause is an error while running, i.e. an unexpected result, a field or value that doesn't exist, a NULL, or similar.
Perhaps you should add some debugging and/or wrap your code in some level of try/catch, as you cannot investigate something unexplained when you have no information or context for reference.
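For what it's worth, a minimal sketch of that kind of wrapping in a scheduled job script (the table name below is a placeholder, not from your post):

var processed = 0;
try {
    var gr = new GlideRecord('u_staging_table'); // placeholder table name
    gr.query();
    while (gr.next()) {
        // ... your per-record processing here ...
        processed++;
        if (processed % 1000 === 0)
            gs.info('Scheduled job progress: ' + processed + ' records');
    }
    gs.info('Scheduled job finished: ' + processed + ' records');
} catch (e) {
    // An abrupt stop now leaves a trail in the system log.
    gs.error('Scheduled job failed after ' + processed + ' records: ' + e);
}

The periodic progress line also tells you roughly where the job was when it stopped.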
12-13-2022 10:38 PM
Looking into the system log, we found that ServiceNow has OOTB behavior that stops logging once a single transaction writes more than 200,000 log entries. The corresponding message appeared in our instance's system log.
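One practical consequence: per-record gs.log calls inside a long loop will hit that cap quickly. A rough sketch of collecting failures and writing a single summary entry instead (the table and field names are made up):

var skipped = [];
var gr = new GlideRecord('u_import_row'); // hypothetical table
gr.query();
while (gr.next()) {
    if (gr.u_source_id.nil()) {            // hypothetical required field
        skipped.push(gr.getUniqueValue()); // collect instead of logging each row
        continue;
    }
    // ... normal processing ...
}
if (skipped.length > 0)
    gs.warn('Skipped ' + skipped.length + ' rows; first few: ' + skipped.slice(0, 5).join(', '));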
06-16-2024 08:49 PM
This KB should help -
https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0656906
The limit of 200,000 applies not only to gs.log statements you may have coded, but also to any node logs the job generates on its own, such as slow-SQL entries or gs.print statements. So even if you yourself barely put 2,000 records in the syslog, you may still run into this issue. Just rewrite the job to run over a smaller timeframe or batch of records.
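As a rough sketch of that batching idea (the staging table and "processed" flag below are hypothetical, not from the KB):

var BATCH_SIZE = 5000;
var gr = new GlideRecord('u_queue_item'); // hypothetical staging table
gr.addQuery('u_processed', false);        // hypothetical "done" flag
gr.setLimit(BATCH_SIZE);                  // cap the work (and the logs) per run
gr.query();
while (gr.next()) {
    // ... per-record work ...
    gr.u_processed = true;
    gr.update();
}

Each execution then stays well under the 200,000-entries-per-transaction cap, and the schedule picks up the remaining records on the next run.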