SaschaWildgrube
ServiceNow Employee

The key to shorter release cycles is automated tests.

 

The key to fewer regressions is automated tests.

 

The key to fewer defects is – again – automated tests.

 

Automated tests are crucial for a robust, reliable, and mature software development and deployment process.

 

Titus Winters, one of the authors of Software Engineering at Google stated: “One of the broad truths we’ve seen to be true is the idea that finding problems earlier in the developer workflow usually reduces costs.”

 

This underlines the idea – often referred to as “shift left” – to establish quality as early in the process as possible.

 

There are different forms of automated tests, and such tests may be executed for different reasons and at different stages of the delivery process.

 

Of the many forms of automated tests (and tools) that may help verify quality in a specific organization, the most relevant are the tests that can be developed and deployed as part of the application and run within the platform. In ServiceNow, these are tests based on the Automated Test Framework (“ATF”), an OOTB platform feature.

 

ATF – or, to be precise, many ServiceNow plugins and products – comes with a set of ready-built tests.

 

These quick-start tests validate the OOTB features in their unaltered, non-customized form. However, most relevant for your development and deployment process are the tests that the development and test teams create themselves to verify the results of their work.

 

My recommendation is to

  • Practice test-driven development – have the development team create tests as early as possible
  • Create tests as part of the applications to be shipped
  • Execute tests as part of every deployment
  • Maintain zero tolerance regarding failed tests

Executing tests is an integral part of automated deployments, and an application should only be deployed to a downstream instance if all of its tests pass. That requires discipline in keeping every test contained in an application faithful to the application’s behavior. A zero-tolerance mindset can only be applied if tests are kept up to date with the application in the long run.
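The zero-tolerance gate described above can be sketched as a small script in a deployment pipeline. ServiceNow exposes a CI/CD REST API for triggering a test suite (`POST /api/sn_cicd/testsuite/run`) and polling its progress; the sketch below assumes such an endpoint but injects the HTTP client, so the gate logic itself is plain JavaScript and the exact response shape (simplified here) is an assumption, not the documented API contract.

```javascript
// Sketch of a zero-tolerance deployment gate (assumption: a ServiceNow
// instance reachable through the injected client, and a test suite named
// exactly after the application). The client is injected so the gate
// logic can be exercised without a live instance.

async function runTestSuiteGate(client, suiteName) {
    // Kick off the test suite named after the application.
    const run = await client.post('/api/sn_cicd/testsuite/run', {
        test_suite_name: suiteName
    });

    // Poll the progress endpoint until the run reaches a final state.
    let progress = await client.get(run.links.progress.url);
    while (progress.status_label !== 'Successful' &&
           progress.status_label !== 'Failed') {
        await new Promise(resolve => setTimeout(resolve, 1000));
        progress = await client.get(run.links.progress.url);
    }

    // Zero tolerance: any failed test blocks the deployment.
    return progress.status_label === 'Successful';
}

module.exports = { runTestSuiteGate };
```

A pipeline would call this once per application and abort the deployment when the function returns `false` – the gate itself contains no exception for “known” failures.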

 

To be executed continuously and successfully as part of the deployment, and to be maintainable in the long run, tests should meet the following requirements:

  • Tests are self-contained – that means they are agnostic of the data that already exists on the instance. Required test data must be created by the test itself – including users, groups, and any other records that are relevant for the outcome. This is very important, yet sometimes difficult to achieve.
  • Tests must be executable in headless mode. This may be achieved by using the Cloud Runner – for tests that make use of client-side test steps – or by using server-side test steps only.
  • Tests should not be redundant – every test should verify a specific aspect of the application’s behavior, and any duplication should be avoided. The scope of a single test should be as narrow as possible; lengthy end-to-end tests are more difficult and costly to maintain.
  • Tests are part of an application and verify the behavior of that application – in doing so, they indirectly verify the behavior of the application’s dependencies. However, the results of the tests must not depend on the behavior of applications that depend on the application being tested. As a result, tests contribute to the dependencies of their application, and, as with any other application component, bi-directional (i.e. circular) dependencies must be avoided.
  • All tests that are intended to be executed as part of an automated deployment should be contained in a test suite that is named exactly after the application – so that the tests can be clearly identified – and, of course, be contained in that application.
    E.g. an application named “My App” should contain a test suite called “My App”. This test suite is to be called during deployment.
    The application may have any number of additional test suites (e.g. for GUI-based tests that cannot be run during deployment – the recommended name for the test suite containing GUI-based tests is “My App GUI”).
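To illustrate the “self-contained” requirement, here is a sketch of the script body of an ATF “Run Server Side Script” step that creates its own test data instead of assuming records already exist on the instance. The table and field names (`x_my_app_request`, `short_description`) are hypothetical, and `GlideRecord` and `assertEqual` are provided by the platform inside a real ATF step – the minimal stand-ins below exist only so the sketch runs outside ServiceNow and are not the platform’s actual implementations.

```javascript
// Stand-in for the platform's GlideRecord class (illustration only).
function GlideRecord(table) {
    this.table = table;
    this.fields = {};
    this.initialize = () => {};
    this.setValue = (name, value) => { this.fields[name] = value; };
    this.getValue = (name) => this.fields[name];
    this.insert = () => 'sys_id_' + Math.random().toString(36).slice(2);
}

// Stand-in approximating ATF's assertion helper (illustration only).
function assertEqual(assertion) {
    if (String(assertion.value) !== String(assertion.expected)) {
        throw new Error(assertion.name + ' failed');
    }
}

// The step script itself - note that it creates every record it needs
// and makes no assumption about data already present on the instance.
function testStep() {
    var request = new GlideRecord('x_my_app_request');
    request.initialize();
    request.setValue('short_description', 'ATF self-contained test');
    var sysId = request.insert();

    // Verify the application's behavior on the freshly created record.
    assertEqual({
        name: 'Record was created',
        value: sysId !== null && sysId !== '',
        expected: true
    });
    return true;
}
```

Because the step fabricates its own request record, the same test passes on a freshly provisioned instance and on a long-lived one – which is exactly what makes it safe to run during every deployment.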

 

“Tests are not outside the system; rather, they are parts of the system that must be well designed if they are to provide the desired benefits of stability and regression.”

Robert C. Martin, Clean Architecture.

 

Writing automated tests has an impact on how software must be designed. Teams that start writing tests late in the process will face all sorts of challenges. Obviously, this insight is of little help when a development team is faced with already written code and applications with little test coverage.

 

However, that must not be accepted as an argument against making the first step and starting the journey.

Increasing test coverage always has a positive impact on code structure, architecture, and the overall maintainability of code – as there is a natural incentive to refactor (existing) code to allow better testability.

 

Sometimes stakeholders may oppose the idea of automated testing, as there is a widespread – and false – belief that maintaining a suite of automated tests implies higher costs and prolonged time-to-market cycles. While this may be true for the first iterations after a development team starts the new practice, even the short- to mid-term effects are the opposite. A well-crafted test suite increases velocity, reduces round trips between development and test, reduces production issues, and results in better-maintained code.

 

Never ever argue about the costs of automated testing. Never adjust estimates by skipping automated tests.

 

Defend that ground at all costs!

 

Read the full story on how to integrate automated testing into your development and deployment process:

https://www.wildgrube.com/download/A%20mature%20Development%20and%20Deployment%20Process.pdf

Version history
Last update: 03-19-2025 01:45 AM