Querying the syslog table for a particular day is very slow. How can I make it fast?

Suggy
Giga Sage

I am querying the syslog table for a particular day, like "created on 15th Feb AND message contains 'SLA Breached'".

It's taking a minimum of 30s to show 200 records (out of roughly 80k logs created on that day).

 

How can I increase the search speed? How do I know whether an index is required? Displaying 200 records out of 80k isn't huge, right?

 

5 REPLIES

TaironeA
Tera Expert

The syslog table in ServiceNow can grow significantly, and querying it with conditions like "Created on a specific day AND message contains 'SLA Breached'" can be slow due to full-table scans. Here’s how you can speed up your query:

Check Indexing

  • Run this script in Scripts - Background to check whether the 'message' field is flagged as indexed in the dictionary:

        var gr = new GlideRecord('sys_dictionary');
        gr.addQuery('name', 'syslog');
        gr.addQuery('element', 'message'); // dictionary entry for the 'message' field
        gr.query();
        while (gr.next()) {
            gs.info(gr.getValue('indexed'));
        }

  • If it returns false or nothing at all, the field is not indexed. (Database indexes can also be reviewed from the "Database Indexes" related list on the table's record under System Definition > Tables.)

Add an Index (If Needed)

  • If "message" is frequently searched, request your admin to add a full-text index to improve text search performance.
  • Ensure created_on has an index (most date fields in ServiceNow are already indexed).
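Once a full-text index exists, it can be used from a script as well. A minimal sketch, assuming a text index on syslog and using the platform's encoded full-text search operator (123TEXTQUERY321); the date values are illustrative, not from the thread:

        var gr = new GlideRecord('syslog');
        // Same-day range on sys_created_on (example date)
        gr.addQuery('sys_created_on', '>=', '2025-02-15 00:00:00');
        gr.addQuery('sys_created_on', '<', '2025-02-16 00:00:00');
        // Full-text (Zing) search instead of a LIKE '%...%' scan
        gr.addQuery('123TEXTQUERY321', 'SLA Breached');
        gr.query();

The text index does the substring matching, so the database no longer has to scan every message value for that day.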

Optimize the Query

  • Instead of "message contains 'SLA Breached'" (which becomes a LIKE '%...%' scan), use a "starts with" condition where your log format allows it, so an index on the field can be used; or use full-text search if the table has a text index.
  • Try filtering by a shorter date range or limiting fields returned to reduce processing time.
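The two suggestions above can be sketched together in a background script. Table and field names are from the thread; the dates are illustrative:

        var gr = new GlideRecord('syslog');
        // Bounded same-day range instead of an open-ended date filter
        gr.addQuery('sys_created_on', '>=', '2025-02-15 00:00:00');
        gr.addQuery('sys_created_on', '<', '2025-02-16 00:00:00');
        // STARTSWITH can use an index on 'message'; CONTAINS cannot
        gr.addEncodedQuery('messageSTARTSWITHSLA Breached');
        gr.setLimit(200); // fetch only what will be displayed
        gr.query();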

Use Performance Analytics or Logs Aggregation

  • If querying logs often, consider storing frequent queries in a separate indexed table for quick access.
  • Use Performance Analytics snapshots instead of querying raw logs.

Check Query Execution Time

  • Enable Debug SQL (System Diagnostics > Session Debug > Debug SQL) to analyze query execution and see whether it is scanning too many rows.

Final Thought

Displaying 200 out of 80k records isn't huge, but if the query filters on a non-indexed text field, it slows down. Optimizing indexes and using a prefix ("starts with") match instead of CONTAINS should improve performance.

Hope this helps! If you found this useful, give it a thumbs-up!

Mark Manders
Mega Patron

It is not just displaying 200 records out of 80k. You are asking the system to check 80k records to see whether the string 'SLA Breached' is present anywhere in the message (which is a huge string field).

Depending on the use case (why not just check the Task SLA table for a breach date of February 15th?), adjust your logging. Have the message start with 'SLA Breached'. That way the system only checks whether the message starts with that string, which is already faster; but still, you are asking it to check 80k records. It doesn't matter whether there is 1 record to return or 79,999: it has to check all of them if you don't add other filters.
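The prefix-first logging described above might look like this; `sla` and `task` are hypothetical variables standing in for whatever objects are in scope when the breach is detected:

        // When logging the breach: put the searchable prefix first, details after
        gs.log('SLA Breached: ' + sla.getDisplayValue() + ' on task ' + task.number, 'SLA Monitor');

        // Later, query with a cheap prefix match instead of a substring scan
        var gr = new GlideRecord('syslog');
        gr.addEncodedQuery('messageSTARTSWITHSLA Breached');
        gr.query();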

Best and fastest is to check the records themselves. Logs are for troubleshooting. What trouble are you in that you need to check for 'SLA Breached' in the log when that information isn't available somewhere else?


Please mark any helpful or correct solutions as such. That helps others find their solutions.
Mark

Hi @Mark Manders, this use case is not about SLA breach; it's more about searching for certain logs in the log table.

I have other instances with up to 200k entries in the log table, and when I search for something there, I get results in 5-10s,

but in one particular instance, the search is slow. 

Hence I was looking for details on how to troubleshoot and fix the slowness.

Try using one or more of the System Diagnostics "Debug SQL..." modules, or one or more of the "Index Suggestions..." and "Active Query Index Hints" modules.