
on 01-10-2025 05:00 AM
Testing is a fundamental part of the software development process. Every quality gate, building block, and technique has some kind of testing built into it. This article provides an overview of the terminology and of the different intentions (why we test), scopes (what we test), methods (how we test), and tools used for testing.
This content is part of a holistic recommendation for a mature development and deployment process. Read the full story here:
https://www.wildgrube.com/download/A%20mature%20Development%20and%20Deployment%20Process.pdf
For a team to communicate effectively about testing, the terminology must be aligned. This article aims to provide a common language about testing.
Doing the right thing and doing things right.
Validation confirms that we are doing the right thing.
Verification confirms that we are doing things right.
These are different aspects of testing: they require different approaches and different techniques, and they take place at different times.
Validation is about creativity, empathy, understanding, anticipation, investigation – it is part of a design process that never ends.
Verification is about precision, details, the happy path and the potential errors, the conceivable edge cases, the nitty-gritty stuff, looking under every stone.
Verification is – for better or worse – agnostic to whether what has been built is easy to use or makes sense from a business perspective. Verification is about making sure things work as described and designed.
It is validation that helps a team to understand whether the intended outcome is likely to be achieved, whether users will understand, adopt, and make the best use of provided technical capabilities.
This distinction has implications for the approach, tools, and techniques – and eventually for the personnel that conducts the corresponding tests.
In an ideal world, validation is done BEFORE a backlog item ("story") is considered ready, AGAIN during the formal test conducted on a test environment, and continuously AFTER applications are deployed to production and exposed to users.
Verification, on the other hand, starts with the implementation, is performed continuously throughout the delivery process, and – in the form of operations monitoring – continues after go-live.
Intentions – why are we testing?
- Validation of designs and requirements
- Validation of the intended outcome when exposed to users
- Verification of functional requirements
  - Narrow-scope technical capabilities (“units”)
  - Use cases (sometimes expressed as user stories)
  - Full end-2-end processes
  - “Business Capabilities” that may contain multiple processes, integrations, etc.
- Verification of non-functional requirements
  - Performance
  - Security
  - Maintainability
  - Portability
  - Scalability
  - Coding Guideline compliance
  - Documentation
- Protection from Regressions
- Verification of defect resolution
Scopes – what are we testing?
- Unit (the smallest possible technical element)
- Capability (a combination of technical units that provide some distinct value to users or stakeholders)
- System / Integration (a combination of capabilities and external systems – real or simulated)
Methods – how are we testing?
- Manual
  - Ad-hoc / Exploratory (testers interact with the system without a clear script or path to follow – in some cases testers may be given a goal but no guidance on how to achieve it)
  - Scripted (testers interact with the system following a clearly defined path and are given a clear set of quality criteria for when the test is to be considered failed or successful)
  - Code review (peer developers look at code and join a discussion about what they see and understand)
- Automated
  - Whitebox (“whitebox” means that the test tool has access both to the defined interface AND to the underlying structures, e.g., the database tables being modified during the test – so the result of the test may depend not only on the output provided by defined interfaces but also on an assessment of the actual changes to the database or external systems; see the sketch after this list)
    - Simulation (test scripts execute specific parts of the system and compare actual and expected results)
    - Static code analysis (test scripts are used to analyze the source code artifacts directly without executing any of the analyzed code)
  - Blackbox (“blackbox” means that the test tool has ONLY access to the system via defined interfaces – and the test result is assessed based on the responses provided by these defined interfaces)
    - Via API
    - Via GUI
    - Via data import/export interfaces (e.g., SFTP, file shares, database connections, etc.)
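To make the whitebox/blackbox distinction concrete, here is a minimal sketch in ServiceNow server-side JavaScript (e.g., run as a background script on a non-production instance). The function createProblemFromIncident is not an OOTB API; it merely stands in for the component under test. The two checks show what each approach assesses.

```javascript
// Hypothetical stand-in for the component under test - NOT an OOTB API.
function createProblemFromIncident(incidentSysId) {
    var prb = new GlideRecord('problem');
    prb.initialize();
    prb.setValue('short_description', 'TEST - created from incident ' + incidentSysId);
    return prb.insert(); // returns the sys_id of the new problem record (or null)
}

var problemSysId = createProblemFromIncident('<an incident sys_id>');

// Blackbox-style check: only the response of the defined interface is assessed.
gs.info('Blackbox check passed: ' + (problemSysId != null));

// Whitebox-style check: the underlying table is inspected as well.
var check = new GlideRecord('problem');
if (check.get(problemSysId)) {
    gs.info('Whitebox check passed: ' +
        (check.getValue('short_description').indexOf('TEST - ') === 0));
}
```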
Tools – what do we use for testing?
- Automated Test Framework
  The ATF is the ServiceNow test automation tool that enables automated scripted whitebox and blackbox testing and allows the tests to be shipped as part of applications (see the sketch after this list).
- Instance Scan
  Instance Scan is a ServiceNow platform capability that can be used to perform whitebox testing on code and configuration artifacts at rest – that is, to check the source code and the configuration instead of executing it.
- Test Management 2.0
  A ServiceNow platform capability to manage and document manual scripted tests.
- External tools (e.g., Selenium, RoboClient, etc.)
  Any other tools for automated or manual blackbox or whitebox testing or the documentation of such tests.
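As an illustration of how such a test can look in the ATF, here is a minimal sketch of a "Run Server Side Script" test step. It assumes the standard template of that step type (a wrapper function receiving outputs, steps, params, stepResult, and assertEqual); the incident record and its values are purely illustrative.

```javascript
(function(outputs, steps, params, stepResult, assertEqual) {

    // Create the data the test needs within the test itself -
    // do not rely on pre-existing data on the instance.
    var inc = new GlideRecord('incident');
    inc.initialize();
    inc.setValue('short_description', 'TEST - ATF whitebox sketch');
    var incSysId = inc.insert();

    // Whitebox-style assertion: read the record back from the table and compare.
    var check = new GlideRecord('incident');
    check.get(incSysId);
    assertEqual({
        name: 'short_description is stored as expected',
        shouldbe: 'TEST - ATF whitebox sketch',
        value: check.getValue('short_description')
    });

    stepResult.setOutputMessage('Created and verified incident ' + incSysId);

})(outputs, steps, params, stepResult, assertEqual);
```

ATF also tracks the data created during a test run and rolls it back afterwards, which aligns with the test data guidance further below.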
Test Suites that yield Value over Time
Testing takes effort. Yet a well-balanced and well-executed test strategy can reduce overall effort and costs and increase the trust of all stakeholders over time.
A Test Suite is the superset of repeatable tests being performed as part of the process.
This does not include any ad-hoc or exploratory tests.
While all tests that are conducted manually translate directly into effort each time they are executed, the long-term effort implications of automated tests are more subtle.
Obviously, the initial creation of automated tests comes with some effort – which, if done very early in the process (during or even before actual development), is not only hard to measure but – and that is the good news – almost insignificant.
The tricky part is the maintenance of a Test Suite over time. How much effort is required to adapt an existing Test Suite to modified components? That is the key question.
The better a Test Suite is designed – the less costs it will require over time – and the more value it will yield – by detecting introduced regressions early in the process – when their remediation is least expensive.
What can be said about the long-term costs of a code base can also be applied to a Test Suite.
The more code there is, the higher the maintenance cost. If less code (in a wider sense) is required to implement the required capabilities, more value is produced.
The exact same is true for tests in a Test Suite.
When talking about code, the most obvious pattern to look out for is duplication. The more often the exact same code pattern, logic, or sequence repeats itself in a code base, the more code is needed. Duplication, however, may come in very subtle forms. Exact copies of functions or even classes may be spotted here and there, but most cases of duplication are more subtle and often not that easy to detect – even for the experienced eye.
It requires intimate knowledge of and oversight over large parts of a code base to spot the more subtle repeating patterns with the human eye – and so far even very advanced static code analysis tools have not been convincingly good at that.
Rather, it requires a mindset shared by the whole team to always look for logic that can be refactored into re-usable components. This shifts the task from spotting duplication in hindsight towards making code as re-usable as possible all the time – which eventually leads to code being moved away from a very specific context into places where the team agrees to put re-usable components. There it is easier to see whether a comparable component already exists and whether – maybe with slight modifications – it can be used for the specific case a developer is currently working on.
This same principle should be applied to tests.
When done so, the resulting Test Suite is likely to consist of many smaller, narrow-scoped tests and fewer larger, broad-scoped tests. It will consist of tests that verify the functionality of single components rather than many components at the same time – tests that do not depend on each other, on pre-existing data, on the environment, or on the time of day.
If most tests are of that nature – and that applies to automated and manual scripted tests – the resulting Test Suite will be easier to change and adapt and hence to maintain.
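As a sketch of what refactoring test logic into re-usable components can look like on the platform: instead of every automated or scripted test repeating the same record setup, the setup logic can live in one shared Script Include that all tests call. The name TestDataFactory and the fields used here are illustrative assumptions, not an OOTB component.

```javascript
// Hypothetical Script Include "TestDataFactory" - shared setup logic for tests.
var TestDataFactory = Class.create();
TestDataFactory.prototype = {
    initialize: function() {},

    // Creates an incident that is clearly recognizable as test data.
    // A change to this setup logic is made once - not in every single test.
    createTestIncident: function(shortDescription) {
        var inc = new GlideRecord('incident');
        inc.initialize();
        inc.setValue('short_description', 'TEST - ' + shortDescription);
        return inc.insert(); // returns the sys_id of the created record
    },

    type: 'TestDataFactory'
};
```

A test would then call new TestDataFactory().createTestIncident('printer does not print') instead of duplicating the GlideRecord setup in every test.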
Test Data
In many cases, test data is required to perform tests. Different scenarios and use cases need different combinations of data during the execution of different tests. And some of these data combinations may not be able to exist at the same time.
So, the creation and maintenance of test data on different instances to support different test scenarios and different testing methods, intentions, and scopes is anything but trivial.
Test data – as opposed to production data – is any data (not code) that is used to conduct tests. In rare cases, test data may even exist on a production instance, and that data may be stored in the same tables as production data.
With that being stated, it should be obvious that test data must be created, managed, maintained, and eventually be removed consciously and according to an agreed and defined process. Ownership should be defined for test data on different instances.
Consider the following guidance on test data:
- Test data must be obvious – both the human eye and software should be able to tell if a given record is test data or not
- Automated Tests must not depend on any existing test data on an instance: all data required by the automated test must be created by the automated test
- Test data should be generated through code – not manually
- Test data ownership must be clearly defined per instance and table
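The following sketch illustrates the guidance above in ServiceNow server-side JavaScript. The "TEST - " prefix is an assumed convention (not an OOTB mechanism) that makes records recognizable as test data for both humans and scripts; the incident table is just an example.

```javascript
// Create test data through code, marked so it is obviously test data.
function createTestIncident() {
    var inc = new GlideRecord('incident');
    inc.initialize();
    inc.setValue('short_description', 'TEST - temporary record for an automated test');
    return inc.insert();
}

// Because the marker is machine-readable, leftover test data can also be
// identified and removed by script according to the agreed process.
function removeTestIncidents() {
    var inc = new GlideRecord('incident');
    inc.addQuery('short_description', 'STARTSWITH', 'TEST - ');
    inc.query();
    while (inc.next()) {
        inc.deleteRecord();
    }
}
```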
Test Users
Test users are test data artifacts. However, they are special. They are not only part of a test data constellation that is required to test specific scenarios or use cases – they are also used to perform the tests, as test users are impersonated by manual testers or automated tests.
Test users should hence be as close to real users (human or technical) as possible to produce high fidelity test results.
As outlined in the chapter “Personas and Roles”, users should always be associated with a persona role – the same can be said about test users for end-to-end tests – but in some cases only specific technical roles should be assigned to test users to verify the exact behavior and access defined for those roles.
The platform does not have a specific representation of test data – including test users – in its OOTB data model. There is no flag by which test data and test users could be identified.
This calls for a naming convention to identify test users.
The following convention worked well for several teams for several reasons:
- A test user’s “First name” is ALWAYS “Test”
- A test user’s “User ID” ALWAYS starts with “test.”
- A test user’s “Last Name” contains a hint on the persona or role which the test user has (e.g., “Service Desk Agent”, “Report Admin”, or “ITIL”). If applicable, the application name that defines the role should also be used in the “Last Name” (e.g., “Agile Agilist”, “Deployer Manager”)
- A test user’s “User ID” ALWAYS ends with the lowercase, dot-separated equivalent of its “Last Name” (e.g., “test.servicedeskagent”, “test.report.admin”, “test.itil”, “test.agile.agilist”, “test.deployer.manager”)
Note how the dot is used to express a hierarchy in the naming scheme.
By following that convention, developers and testers can
- Filter for test users
- Identify a test user as the source of a log entry or any kind of error
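A minimal sketch of the convention in ServiceNow server-side JavaScript – creating a test user and filtering for test users on the OOTB sys_user table (the concrete persona used here is just an example):

```javascript
// Create a test user that follows the naming convention.
var user = new GlideRecord('sys_user');
user.initialize();
user.setValue('first_name', 'Test');
user.setValue('last_name', 'Service Desk Agent');
user.setValue('user_name', 'test.servicedeskagent'); // "User ID" starts with "test."
user.insert();

// Filtering for test users becomes a simple STARTSWITH query on the "User ID".
var testUsers = new GlideRecord('sys_user');
testUsers.addQuery('user_name', 'STARTSWITH', 'test.');
testUsers.query();
gs.info('Number of test users on this instance: ' + testUsers.getRowCount());
```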
The Definition of Ready ideally requires that some test cases are defined as part of the backlog item ("story"). All test cases require a test user; hence the description of a test case should specify which persona (or role) is to be used to perform it.
If you are curious how to transform these ideas into a process, into real habits, into something that works in reality, check out the whitepaper:
https://www.wildgrube.com/download/A%20mature%20Development%20and%20Deployment%20Process.pdf