
ServiceNow performance graphs, especially the ServiceNow servlet performance metrics, are very useful in understanding the performance of your instance. Monitoring thread performance surfaces a lot of useful information about these attributes.
There is often confusion about which graphs are most useful for gauging system performance. The ones we are most concerned with belong to the ServiceNow Servlet graphs. The key is understanding how the system has been trending after major events on the instance (for example: upgrades, update sets, data imports, script changes, etc.).
Let's delve further into this.
## System Overview Graphs
If you look at the System Overview graphs here, there is a spike around midnight on Sunday: something caused the concurrency mean of the active threads to increase. The way these graphs are calculated, think of a stack (last in, first out) taking in metrics such as business rule time, concurrency, CPU, database, and network. During the lifecycle of a transaction, whatever stage it is at determines the status it reports, and those statuses are aggregated to construct the performance charts.

Legend:

- database_mean: mean number of threads that are waiting on the database.
- business_rule_mean: thread utilization while the instance is running business rule logic (sync/async), excluding time spent querying the database.
- concurrency_mean: mean number of threads waiting on a semaphore / session synchronization. This corresponds to the session queue wait spike, which can also be seen in the graphs.

A steady rise over a period of time should be investigated. Ask yourself: "Is this a transient issue, or is this becoming a pattern every week that wasn't the case before?" How are the response times looking during the peaks?

Steps to take for validation:
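The aggregation described above can be sketched outside of ServiceNow. This is a minimal illustration (not the actual servlet instrumentation): each sample is a point-in-time snapshot of what every active thread is doing, and the `*_mean` series is the average number of threads found in each state across samples. The state names and the `summarize_thread_samples` helper are assumptions for illustration only.

```python
from collections import Counter

def summarize_thread_samples(samples):
    """samples: a list of snapshots; each snapshot lists the state of
    every active thread at one instant (e.g. "database" for a thread
    waiting on the DB, "concurrency" for one waiting on a semaphore).
    Returns the mean thread count per state across all snapshots,
    mirroring how database_mean / concurrency_mean are averages of
    point-in-time thread states."""
    totals = Counter()
    for states in samples:
        totals.update(states)
    n = len(samples)
    return {state: count / n for state, count in totals.items()}

# Two snapshots: a DB-heavy instant, then a quieter one.
samples = [
    ["database", "database", "concurrency"],
    ["database", "business_rule"],
]
print(summarize_thread_samples(samples))
# database averages 1.5 threads; the other states 0.5 each
```

A sustained climb in `concurrency_mean` computed this way would mean more and more threads are, on average, stuck waiting on semaphores at any given instant.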
## Active Sessions Graph
Another graph that is really helpful is the Active Sessions graph. The X-axis denotes the date/time and the Y-axis denotes the number of sessions. If there has been a steady rise in the number of sessions in the 30-day graph, we need to figure out what has been causing the gradual increase: are these user sessions, or integrations creating multiple sessions? Along with the gradual uptick, look for recurring sudden session spikes as well. Sessions usually dip toward the end of peak/business hours as they get timed out due to inactivity. They remain constant when sessions are retained, or when users log in from different time zones because you are a 24/7 shop with global users. Also check whether sessions persist; the default user session timeout is 30 minutes.
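The timeout behavior above can be sketched as a simple filter. This is an illustrative model only, not a ServiceNow API: the `SESSION_TIMEOUT` value matches the 30-minute default mentioned above, and the session tuples are made-up examples.

```python
from datetime import datetime, timedelta

# ServiceNow's default user session timeout, per the text above.
SESSION_TIMEOUT = timedelta(minutes=30)

def active_sessions(sessions, now):
    """sessions: list of (session_id, last_activity) tuples.
    A session counts toward the Active Sessions graph only if its last
    activity falls within the timeout window; anything idle longer
    would have been timed out."""
    return [sid for sid, last in sessions if now - last <= SESSION_TIMEOUT]

now = datetime(2024, 1, 1, 12, 0)
sessions = [
    ("user_a", datetime(2024, 1, 1, 11, 45)),        # 15 min idle: active
    ("integration_b", datetime(2024, 1, 1, 10, 0)),  # 2 h idle: timed out
]
print(active_sessions(sessions, now))  # ['user_a']
```

A flat overnight curve in the real graph means sessions keep passing this filter: either activity keeps refreshing `last_activity` (integrations, global users) or the session is being retained.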
## Memory Graph
Each ServiceNow node has approximately 2 GB of memory. Checking the memory stats of a node is a good indicator of the health of the system.
A normal memory graph follows a "seesaw" pattern, with garbage collection showing as the dips while objects no longer being referenced are reclaimed. If a node's memory constantly remains high, in the 1.7-1.9 GB range, or shows a gradual uptick over a period of time, this could indicate a memory leak. In the example above, after an event the memory took an upswing, hitting 2 GB at one point.
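One way to tell a healthy seesaw from a leak is to look at the troughs after each garbage collection: in a healthy graph they return to roughly the same baseline, while in a leak the baseline creeps upward. A minimal heuristic sketch (the function name, sample data, and 50 MB threshold are all assumptions for illustration):

```python
def rising_gc_troughs(heap_mb, threshold_mb=50):
    """heap_mb: heap usage samples over time, in MB.
    Local minima approximate post-GC troughs.  If the last trough sits
    well above the first, the sawtooth baseline is creeping up, which
    can indicate a memory leak."""
    troughs = [heap_mb[i] for i in range(1, len(heap_mb) - 1)
               if heap_mb[i] < heap_mb[i - 1] and heap_mb[i] < heap_mb[i + 1]]
    return len(troughs) >= 2 and troughs[-1] - troughs[0] > threshold_mb

# Healthy: GC keeps returning the heap to ~800 MB.
healthy = [900, 1400, 800, 1350, 820, 1380, 810, 1400]
# Leaking: each GC reclaims less; troughs climb 950 -> 1100 -> 1300.
leaking = [900, 1400, 950, 1500, 1100, 1650, 1300, 1800]
print(rising_gc_troughs(healthy), rising_gc_troughs(leaking))
# False True
```

Real diagnosis would of course use the node's actual GC logs or heap dumps; this only captures the visual intuition of reading the graph.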
## Transaction Count Graph
The graph below corresponds to the number of transactions being handled by the node.
The ServiceNow performance graphs can be very helpful in determining overall instance health, and in seeing how changes affect the performance of an instance after an event. ServiceNow Customer Support has additional internal tools to check overall instance health and confirm whether there is indeed an issue with overall system performance.