
ChrisF323949498
Tera Explorer

🧟 Monster Logs and Where to Find Them

It started innocently enough with a gs.info(). But when the syslog table refused to load, I knew something was wrong.

2 million, 4 million, 8 million logs per day: I've seen it all. Volume matters, but the real question is:
How do we reduce them?

Here’s my battle-tested approach to taming the monster logs in ServiceNow.

 

🧪 1. Extract

Start by running a report on the syslog table (weekly or daily, depending on size) and export it to Excel.

When logs are massive, even loading the table can be painful (or potentially not even possible!).

 Exporting locally lets you filter and analyse without stressing your instance.
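If you just want a feel for the volume before committing to an export, a quick background script can count the rows for you. This is only a rough sketch using standard GlideAggregate calls; adjust the date window to suit.

```
// Count today's syslog rows so you know whether a daily or weekly export is realistic.
// Run from Scripts - Background; counting server-side is far cheaper than loading the list view.
var ga = new GlideAggregate('syslog');
ga.addAggregate('COUNT');
ga.addQuery('sys_created_on', '>=', gs.beginningOfToday());
ga.query();
if (ga.next()) {
    gs.print('syslog rows created today: ' + ga.getAggregate('COUNT'));
}
```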

 

🔍 2. Analyse

Now build pivot tables to identify what’s generating the bulk of logs.

Group by:

  • Source (e.g. application or integration)
  • Level (info, warning, error)
  • Message (look for recurring patterns)
  • Created timestamps (scanning these as a time series can help locate the culprits too)

Even partial string matches like "<DeveloperName>:" can reveal noisy scripts or forgotten debug lines.
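If Excel starts to creak under the row count, you can get a rough version of the same pivot straight from a background script. Purely a sketch; tweak the time window and grouping as needed:

```
// Group the last 24 hours of syslog by source and level, with a count per bucket:
// effectively the Excel pivot, done server-side.
var ga = new GlideAggregate('syslog');
ga.addAggregate('COUNT');
ga.addQuery('sys_created_on', '>=', gs.hoursAgo(24));
ga.groupBy('source');
ga.groupBy('level');
ga.orderByAggregate('COUNT'); // biggest buckets first
ga.query();
while (ga.next()) {
    gs.print(ga.getValue('source') + ' | level ' + ga.getValue('level') + ' | ' + ga.getAggregate('COUNT'));
}
```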

 

🎯 3. Prioritise

Start with the noisiest informational logs, often accidental gs.info() calls in ACLs, Auth Policies, or Script Includes.

These are low-risk and high-volume. Cleaning them up gives you quick wins, and it's often these that reduce logs by 20% in one swoop (which can be hundreds of thousands of logs!).
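To surface those quick wins, count the informational messages themselves. The sketch below assumes the level value '0' maps to Information on the syslog table, so check the level choices on your instance before trusting the filter:

```
// List the 20 noisiest messages so accidental gs.info() calls stand out.
var ga = new GlideAggregate('syslog');
ga.addAggregate('COUNT');
ga.addQuery('level', '0'); // assumed to be "Information"; verify against your instance's choice list
ga.groupBy('message');
ga.orderByAggregate('COUNT'); // biggest offenders first
ga.query();
var shown = 0;
while (ga.next() && shown < 20) {
    gs.print(ga.getAggregate('COUNT') + ' x ' + ga.getValue('message'));
    shown++;
}
```

Grouping by message over millions of rows can be slow, which is exactly why the Excel export route exists; use whichever hurts your instance less.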

 

🧭 4. Locate

Once you’ve identified a message like:

"Dev Line 83 inside the while loop" and found it appears 100,000 times…

Use Studio Code Search, a hidden gem! Paste part of the string and you'll often land on the exact Business Rule or Script Include causing the noise.

Go further and search for the classics:

  • gs.info()
  • gs.warn
  • ‘log’
  • ‘line’ – often used by developers to debug a script 😄
  • Console (useful for client-side logs, but won't help us with the backend logs)

If that fails, check the Source field in the Excel export and trace it back to the related application's system properties.
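And if Studio isn't available to you (or the offending code lives somewhere Code Search can't see), a crude background-script search of the script fields does the job in a pinch. The tables below are just Business Rules and Script Includes; add whatever else you suspect, and swap in your own message fragment:

```
// Brute-force search for a log fragment across a couple of common script tables.
var fragment = 'Dev Line 83';                       // hypothetical string lifted from the noisy message
var tables = ['sys_script', 'sys_script_include'];  // Business Rules, Script Includes

for (var i = 0; i < tables.length; i++) {
    var gr = new GlideRecord(tables[i]);
    gr.addQuery('script', 'CONTAINS', fragment);
    gr.query();
    while (gr.next()) {
        gs.print(tables[i] + ': ' + gr.getValue('name') + ' (' + gr.getUniqueValue() + ')');
    }
}
```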

 

💣 5. Terminate

Now that you’ve found the culprit, how do you shut it down?

  • System Property? Great: reduce verbosity or disable logging entirely.
  • Error Logs? These may need backlog tickets for proper fixes, but this already highlights the value of reducing logs: now you can see the actual errors that need resolving.
  • Forgotten gs.info()? Easy: comment it out and move on.
  • Conditional Logging? Tie it to a property so it can be toggled when needed (a quick sketch follows below).
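For that last point, here's a minimal sketch of property-gated logging. The property name is made up; create whichever true/false property fits your naming convention and default it to false:

```
// Only log when the (hypothetical) debug property is switched on, so the noise can be
// toggled off again without another code change.
function debugLog(msg) {
    if (gs.getProperty('my_app.logging.verbose', 'false') === 'true') {
        gs.info('[MyApp] ' + msg);
    }
}

debugLog('entering the while loop'); // silent until the property is set to true
```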

 

🧘 Final Thoughts

None of this is cutting-edge, but Studio Code Search is underrated.

 

Taking logs from 2 million to 100k is deeply satisfying. But remember:

At some point, the juice isn’t worth the squeeze.

 

If you're logging 100k+ per day, chances are you're drowning in noise. That noise can limit visibility, slow performance, and even cost money, especially if you're using third-party monitoring tools.

 

Make your logs useful. Keep them lean.

 

Have a great day!