If you've been a ServiceNow administrator or developer for any length of time, I would be prepared to bet that you've been asked more than once to look at an issue with a process or a form taking too long to load/complete.
You'll have had a demo of that same process in an out of the box instance. It's fine.
You'll have been told that "it didn't use to be this slow".
Tricky to answer when you don't have any historical reference data to hand. What was the performance like on the initial go live? What about after the last but one upgrade? Was that the trigger? Or was it a recently updated custom capability?
Maybe the most important question: how do you make sure it's not deteriorating over time, or specifically after a new capability is put live? Better still, how do you check before it is put live?
Performance profiling capability was added to the Automated Test Framework product a few releases back. It lets you run a test suite either as a one-off or as a regularly scheduled "Performance Suite" run.
The key differences between a normal run and a performance run are:
- A performance run executes each test in the test suite 11 times, not once.
- The first run is a "warm-up" run, designed to ensure cacheable values are cached, more closely simulating what happens in a real instance under real load.
- The remaining 10 runs are used to build a view of the performance of each test (a sketch of how those timings might be summarised follows this list).
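To make the 1 + 10 structure concrete, here's a minimal sketch of how you might summarise one test's measured iterations if you exported the per-iteration durations yourself. The input list, the warm-up handling and the statistics chosen are my own illustration, not the product's internal calculation:

```python
from statistics import mean, median

def summarise_performance_run(iterations_ms):
    """Summarise one test's timings from a Performance Suite run.

    iterations_ms: durations in milliseconds for all 11 executions, in
    order. The first entry is the warm-up run and is discarded, mirroring
    how the performance run treats it.
    """
    if len(iterations_ms) != 11:
        raise ValueError("expected 1 warm-up + 10 measured iterations")

    measured = iterations_ms[1:]  # drop the warm-up run
    return {
        "min_ms": min(measured),
        "median_ms": median(measured),
        "mean_ms": mean(measured),
        "max_ms": max(measured),
    }

# Example: timings for a single test, warm-up first (made-up numbers)
print(summarise_performance_run(
    [2400, 1320, 1350, 1290, 1400, 1310, 1330, 1295, 1380, 1340, 1325]
))
```

Note how the warm-up figure is typically much slower than the rest; discarding it is what keeps caching effects out of the measured view.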
I made a copy of the OOTB test suite and ran a standard (non-performance) run of it against the Cloud Runner first, to make sure my demo instance was producing successful tests before I started profiling.
It was quite a large suite, so after that run completed I removed a load of the tests to save some time during the performance testing.
You can compare the performance of two runs by selecting them in the list and using the list menu.
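If you'd rather compare two runs outside the UI, a rough sketch like the one below works on exported per-test medians. The input dictionaries and the 20% regression threshold are illustrative assumptions, not something the platform produces in this shape:

```python
def compare_runs(baseline_ms, candidate_ms, regression_threshold=0.20):
    """Compare per-test median durations between two performance runs.

    baseline_ms / candidate_ms: dicts of test name -> median duration (ms).
    Flags tests whose median grew by more than the threshold.
    """
    report = []
    for test, base in sorted(baseline_ms.items()):
        cand = candidate_ms.get(test)
        if cand is None:
            continue  # test was removed or renamed between runs
        change = (cand - base) / base
        flag = "REGRESSION" if change > regression_threshold else "ok"
        report.append(f"{test}: {base:.0f}ms -> {cand:.0f}ms ({change:+.0%}) {flag}")
    return report

# Example usage with made-up timings
baseline = {"Create Incident": 1320, "Close Incident": 980}
candidate = {"Create Incident": 1710, "Close Incident": 995}
print("\n".join(compare_runs(baseline, candidate)))
```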
Obviously it's not a perfect answer to questions about how a process performs in a production system, where load will vary and where performance depends on the data your configurations interact with.
If you run it in an instance with at least a reasonable chunk of representative test data, it's a powerful way to understand whether your recent config or process changes have affected performance, and a way of measuring the success of an initiative to improve performance.
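If you schedule regular Performance Suite runs, you can also trend the results to catch gradual deterioration rather than waiting for users to notice. The sketch below is illustrative only: the run history list and the simple rolling baseline are my assumptions about how you might post-process exported timings, not a built-in feature:

```python
from statistics import median

def flag_drift(run_history_ms, window=5, tolerance=0.15):
    """Flag a test whose latest median drifts above its recent baseline.

    run_history_ms: chronological list of median durations (ms) for one
    test, one entry per scheduled performance run. The baseline is the
    median of the previous `window` runs; returns the relative drift if
    it exceeds the tolerance, else None.
    """
    if len(run_history_ms) <= window:
        return None  # not enough history to judge yet

    baseline = median(run_history_ms[-(window + 1):-1])
    latest = run_history_ms[-1]
    drift = (latest - baseline) / baseline
    return drift if drift > tolerance else None

# Example usage with made-up history, oldest run first
history = [1300, 1280, 1310, 1295, 1320, 1305, 1580]
drift = flag_drift(history)
if drift is not None:
    print(f"Latest run is {drift:.0%} slower than the recent baseline")
```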