System log table has so many records.

03-23-2022 09:34 AM
In our production instance, I was trying to find a log entry for a failure that was sent over to me. Upon opening System Logs > All, I was greeted with a long transaction time and 6.4 million records. This was at 10:40 AM EST. When I checked back at 11:40 AM, it had jumped to 7.7 million records. I know different teams in my organization are writing their own gs.log statements, but what is considered an acceptable number of records on this table? A lot of the logs I was looking through seem so basic, with no valuable information in them.
Labels: Team Development
03-23-2022 12:59 PM
Hi,
I don't know what the acceptable number of records in the logs table is, but I have a way to load the logs quicker: I have a filter saved as a favorite that loads only the last 15 minutes of logs, which loads very quickly. If I need to check something that happened within a particular time frame, I change the created-on time filter and load it again; otherwise it takes a lot of time.
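For reference, the encoded query behind that favorite looks something like this (swap in your own instance name; the 15-minute window is just what I use):
https://<your_instance_name>.service-now.com/syslog_list.do?sysparm_query=sys_created_onRELATIVEGT%40minute%40ago%4015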
Please mark correct/helpful if this helped.
Thanks

03-23-2022 01:41 PM
Are you certain these are log entries that have been created by your teams, and not the result of applications / processes running in PROD? Some applications, like Discovery, can be extremely verbose and processes like SSO can also generate a ton of logs in a busy instance.
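If you want to confirm where the volume is coming from, a quick GlideAggregate in Scripts - Background can break the counts down by the log's source field. A rough sketch (adjust the date window to taste):

```javascript
// Rough sketch: count yesterday's syslog records grouped by source,
// to see which applications or processes generate the most entries.
// Run from Scripts - Background.
var ga = new GlideAggregate('syslog');
ga.addQuery('sys_created_on', '>=', gs.daysAgoStart(1));
ga.addQuery('sys_created_on', '<=', gs.daysAgoEnd(1));
ga.addAggregate('COUNT');
ga.groupBy('source');
ga.query();
while (ga.next()) {
    gs.print(ga.getValue('source') + ': ' + ga.getAggregate('COUNT'));
}
```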
That being said, it's not unusual to see 1-2 million records per day in a well-utilized instance; however, what you are reporting (more than a million in an hour) would be excessive and would indicate a potential issue.
The best practice is always to minimize system logging in scripts where possible. If you can confirm that one or more of your own processes is generating excessive log traffic, it's worth reviewing that process and eliminating gs.log / gs.info calls unless they serve a definable, measurable purpose. A lot of the time, logging is simply left in place after development without any consideration for the impact it can have at scale (a few test transactions vs. thousands). One way to keep the useful statements without the production churn is to gate them behind a system property, as in the sketch below.
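A minimal sketch of that pattern, assuming a hypothetical property named my.team.import.debug and a business rule context where current is available:

```javascript
// Sketch: wrap leftover debug logging in a system property check so it
// can be switched off in production. 'my.team.import.debug' is a
// hypothetical name; create it in sys_properties with a true/false value.
var debugOn = gs.getProperty('my.team.import.debug', 'false') == 'true';
if (debugOn) {
    gs.info('Import step reached for record: ' + current.getUniqueValue());
}
```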
If you do need to put deep-level logging in place to support troubleshooting, there are a few tips that can help minimize log churn.
First, use gs.debug() where you can. It doesn't write to the log table, but its output can be viewed by enabling session debugging. It is a little harder to find what you're looking for, but the trade-off is typically worth it: you're able to leave verbose logging in place (instead of having to add logging when you find an issue), though you won't have any historical records and can only troubleshoot in your current session. For example:
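A quick sketch, assuming a business rule context (the field names are made up):

```javascript
// gs.debug() output appears only when debugging is enabled (e.g. session
// debugging), so statements like this can stay in the script without
// adding rows to syslog during normal operation.
gs.debug('Approval check for {0}: state is {1}',
    current.getUniqueValue(), current.getValue('state'));
```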
Second, in long scripts or processes, I often create an array (or an object) and push log statements to it, and only at the end of the process do I write that array (or the stringified object) to the log. That way you don't have 100 tiny log entries, but rather one log per "run" that you can review. Something like this:
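A minimal sketch of the buffer-then-write pattern (the messages are just placeholders):

```javascript
// Collect messages during the run and write a single syslog entry at the end.
var runLog = [];
function addLog(msg) {
    runLog.push(new GlideDateTime().getDisplayValue() + ' - ' + msg);
}

addLog('Starting cleanup run');
// ... the actual work happens here, calling addLog() at each step ...
addLog('Finished: 250 records processed, 3 skipped');

// One log record per run instead of hundreds of tiny entries.
gs.info('Cleanup run summary:\n' + runLog.join('\n'));
```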
Last, as mentioned, you can mitigate the issue somewhat with a filter that reduces the time-frame for which logs are displayed, if you have a good idea of the time frame. I usually look at the last 5 minutes in a busy instance, personally:
https://<your_instance_name>.service-now.com/syslog_list.do?sysparm_query=sys_created_onRELATIVEGT%40minute%40ago%405&sysparm_view=
You can also load the list with no records and just the filter, so that you can narrow it down even further before you query if you'd like:
https://<your_instance_name>.service-now.com/syslog_list.do?sysparm_filter_only=true
I hope this helps!
If this was helpful, or correct, please be kind and mark the answer appropriately.
Michael D. Jones
Proud member of the GlideFast Consulting Team!

03-24-2022 06:58 AM
You are correct about SSO logs being turned on under properties. However, looking at yesterday's system logs, there was a grand total of 15.5 million records. It just doesn't make sense to me to have SSO debug logs running unless we're actively trying to debug an issue.
03-24-2022 07:17 AM
I would absolutely agree that the "Turn on debug logging for SAML 2.0 Authentication" property should NOT be enabled in Production, unless you are actively troubleshooting an issue! If that is on, I'd raise the suggestion to your teams that it be disabled! That adds log entries far above and beyond the "normal" churn you would see for SSO.
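If you want to verify exactly which debug properties are enabled without clicking through the SSO properties pages, a quick background-script sketch like this can surface them. It searches by name rather than assuming an exact property, since names vary by plugin and version:

```javascript
// Sketch: list authentication-related debug properties and their values.
var gr = new GlideRecord('sys_properties');
gr.addQuery('name', 'CONTAINS', 'authenticate');
gr.addQuery('name', 'CONTAINS', 'debug');
gr.query();
while (gr.next()) {
    gs.print(gr.getValue('name') + ' = ' + gr.getValue('value'));
}
```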
I hope this helps!
If this was helpful, or correct, please be kind and mark the answer appropriately.
Michael D. Jones
Proud member of the GlideFast Consulting Team!