Posted on 10-25-2021 02:51 AM
If you have a UI transaction that is slower than you would expect, follow these steps to identify what part of the platform the slowness is coming from.
The investigation can then focus on the correct area of the platform, whether that is ACLs, business rules, the database, or client-side processing. All the timings for the individual components are recorded under Filter Navigator > System Logs.
Please note:
- You must follow these steps on the same node (i.e. the reproduction of the slow transaction and the subsequent node log file review must happen on the same node)
- Only transactions with a response time >100ms will be seen in the logs
This article will cover a few different ways to investigate this.
1A) Node Log file browser (many steps, but a flexible approach as you can analyse anything in the log).
1B) Download the localhost log file to your local workstation and use a text editor, or tools like grep and less in a terminal window, to interrogate the logs further.
2) Debug module in the UI (very quick and easy when reproducing a specific session/transaction only. This route can also lead you into the Index Suggestion Engine, so try this method out!)
RESOLUTION:
OPTION 1A: Node Log file browser
1. Reproduce the slow transaction
2. Filter Navigator > System Logs > Transactions (All User)
3. Personalise the Columns on the list. These columns are useful to add:
Browser time
Business rule time
Client network time
Client response time
Client script time
Network time
Semaphore wait time
Session
Session wait time
Session ID
Total wait time
Total page load time
Transaction Number
Transaction processing time
UI policy time
4. Amend the list condition filter to be created_by = <affected_user> and sort by created desc (this filters out the noise and allows focus on the affected user only)
5. Identify the slow transaction (URL column shows the transaction type) in the transaction log which should be near the top of the list
6. Copy the SessionID (session column) & Transaction Number of the problem transaction
7. Filter Navigator > Node Log File browser
i) Amend the 'Start time' and 'End time' fields to encompass the time the problem transaction was executed
ii) Session ID = (paste in the SessionID from step 6)
iii) Max rows = 10000
iv) Message = (paste in the Transaction Number from step 6)
8. Click Submit. This should show the targeted transaction.
9. Copy the txid value from the affected transaction's log output, e.g. txid=81c8704bdbc7
By searching using the txid instead of just the SessionID, the output will show more details about the user transaction.
10. Keep the log file parameters the same apart from replacing the Transaction Number in the message field with the txid value. Click Submit.
This should now give you the transaction you are interested in, with extra detail.
In the screen example above you can see that:
- the transaction was a mytable_list.do
- the database query (SELECT) took 13.604 seconds. (Time:0:00:13.604)
- the log entry with EXCESSIVE ***END shows the part of the platform that accounts for most of the time. Here the total time is 13.737 seconds and the SQL time is 13.611 seconds, which means the vast majority of the transaction time is SQL (database layer) time.
Now looking at the query itself, we see the issue straight away.
The query has NO WHERE clause, which means the platform has to pull all records from the table. This is bad practice: it cannot scale, nor can the database make use of an index. We should therefore amend the filter condition to retrieve a smaller result set.
The next screenshot shows the same list view transaction but this time with a filter condition mycolumn='ABCD'
The query is still slow at 15 seconds even with a filter condition. This means that perhaps we need an index on the mycolumn column to make the condition really effective.
* I have now added an index on MyTable(MyColumn) via the UI. If you need help with deciding what index to select, you could try the OPTION 2 approach and use the Index Suggestion Engine.
11. If you look at the transaction again after the index addition, we no longer see the SQL query in the logs. That is because it now executes faster than 100ms, so it is not logged. SQL time is now very fast at 73ms.
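The effect seen above (full scan without a usable index, fast lookup with one) can be reproduced outside the platform. Below is a minimal sketch using Python's built-in sqlite3 as a stand-in for the instance database; mytable and mycolumn are the hypothetical names from the walkthrough, not real instance tables:

```python
import sqlite3

# In-memory database standing in for the instance's backend database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (sys_id TEXT, mycolumn TEXT)")
con.executemany(
    "INSERT INTO mytable VALUES (?, ?)",
    [(str(i), "ABCD" if i % 100 == 0 else "ZZZZ") for i in range(10000)],
)

query = "EXPLAIN QUERY PLAN SELECT * FROM mytable WHERE mycolumn = 'ABCD'"

# Without an index, even a filtered query is a full table scan.
before = con.execute(query).fetchone()[3]
print(before)  # typically reports a SCAN of mytable

# Add the index, as done via the UI in step 10 above.
con.execute("CREATE INDEX idx_mycolumn ON mytable (mycolumn)")

# With the index in place, the planner switches to an index search.
after = con.execute(query).fetchone()[3]
print(after)  # typically reports a SEARCH ... USING INDEX idx_mycolumn
```

The exact plan wording varies by database engine, but the scan-versus-index-search distinction is the same thing the EXCESSIVE log entries surfaced in the node log.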
OPTION 1B: Node Log file download
1. Reproduce the slow transaction
2. Filter Navigator > Node Log File Download
3. Amend the list condition to show only 'Name' starts with localhost
4. Sort by Name in DESC order
5. The file at the top of the list should be today's localhost log file
6. Open record
7. Click on Download log
Now use a text editor of your choice to search/interrogate the logs (search using the SessionID (session column) and Transaction Number to find the relevant transactions).
You can also load the file in a terminal and use tools like grep and less to do the same.
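If you prefer scripting to grep, the same interrogation can be done in a few lines of Python. The sample entries below are illustrative assumptions about the localhost log layout, not the exact format:

```python
def filter_log(lines, needle):
    """Keep only log lines containing the needle (SessionID,
    Transaction Number or txid), like a case-sensitive grep."""
    return [line for line in lines if needle in line]

# Hypothetical sample entries standing in for localhost_log lines.
sample = [
    "2021-10-25 02:51:01 ... txid=81c8704bdbc7 /mytable_list.do Time:0:00:13.604",
    "2021-10-25 02:51:02 ... txid=0000aaaabbbb /home.do Time:0:00:00.120",
]

# Narrow the log down to the transaction being investigated.
for hit in filter_log(sample, "txid=81c8704bdbc7"):
    print(hit)
```

For a real downloaded file, read it with `open(...).readlines()` and pass that list in place of `sample`.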
OPTION 2: Session Debug (Recommended)
1. Filter Navigator > System Diagnostics > Session Debug > Enable All (This should open a debugging tool in a new window)
2. Reproduce the transaction
3. Review the output in the session log window
There are some useful features in the session log window - you can download the log, clear the output, or amend the filters to remove any noise.
The important word to search for in the log output is EXCESSIVE as those are the slow queries that need to be investigated.
4. If you have an expensive SQL query like the one the screenshot below shows (8.459 seconds), then you can try the Index Suggestion Engine feature.
5. In your main UI window where you reproduced the transaction, scroll down to the debug output under the 'Query' header. Now open the record which is the expensive query - this will open a record in the slow query module
6. Now follow the Index Suggestion Engine article, KB0782916
NEXT STEPS:
Depending on where most of the transaction time is being spent, that should be the focus. It is pretty self-explanatory - if most of the time is spent processing ACLs, then that area should be the focus. If UI Action time is high, then UI Actions should be investigated. If SQL time is the highest, the database queries need to be reviewed etc.
If SQL time is the majority of the transaction time, then answer these questions :
- Does the query have a condition filter (a WHERE clause)?
- If so, is the condition filter efficient? See Performance Best Practice for Efficient Queries in the additional information section of this article.
- Is the query looking at a small or limited subset of data (e.g. using active=1 or a date range)? If not, consider adding active=1 or a date range to reduce the result set that needs to be queried.
- Would the query benefit from an index hint or a new index? Depending on which columns and logical operands follow the WHERE clause, an index may help. If you followed the OPTION 2 path, you can easily try the Index Suggestion Engine (KB0782916).
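Whether an index can actually be used depends on which columns appear in the WHERE clause. The principle can be checked quickly, again sketched with Python's sqlite3 as a stand-in for the instance database (the composite index and column choices here are illustrative assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE incident (active INTEGER, opened_at TEXT, short_description TEXT)"
)
# Composite index with 'active' as the leading column.
con.execute("CREATE INDEX idx_active_opened ON incident (active, opened_at)")

def plan(where):
    """Return the query planner's detail line for a given WHERE clause."""
    return con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM incident WHERE " + where
    ).fetchone()[3]

# Conditions on the composite index's columns: the index can be used.
print(plan("active = 1 AND opened_at > '2021-01-01'"))

# Only a non-indexed column: the planner falls back to a full scan.
print(plan("short_description LIKE '%error%'"))
```

This is why the questions above focus on what sits after the WHERE clause: an index only helps queries whose filter columns match it.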
- Remember to test any new indexes in a recently cloned instance before applying in production.
Additional Information
- Performance Best Practice for Efficient Queries - TOP 10 practices
- Recommendations for Optimal Instance Performance
- What is Session wait / Session Synchronisation?
https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0755706
- More information about the Session log
- Adding an index
- How to create indexes using the index suggestion engine
https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0782916
- Client Transaction timings
- Six common performance pitfalls in Service Portal and how to avoid them
https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0634588
- Performance Landing Page
https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0829067