There are many ways of testing software. This post uses the five Ws to classify the different types of tests and shows how to use this classification.
Programmer vs Customer (Who)
Tests exist to give confidence that the software works as expected.
But whose expectations are we talking about? Developers have different types of expectations about their code than users have about the application. Each audience deserves its own set of tests to remain confident enough to keep going.
Functionality vs Performance vs Load vs Security (What)
When not specified, it’s assumed that what is being tested is whether the application functions the way it’s supposed to. However, we can also test non-functional aspects of an application, like security.
Before Writing Code vs After (When)
Tests can be written after the code is complete to verify that it works (test-last), or they can be written first to specify how the code should work (test-first). Writing the test first may seem counter-intuitive or unnatural, but there are some advantages:
- When you write the tests first, you guarantee that the code you later write will be testable (duh). Anybody who has written tests for legacy code will acknowledge that that's not a given if you write the code first.
- Writing the tests first can prevent defects from entering the code, which is more efficient than introducing, finding, and then fixing bugs.
- Writing the tests first makes it possible for the tests to drive the design. By formulating your test, in code, in a way that reads naturally, you design an API that is convenient to use. You can even let the tests drive the design of the implementation.
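The test-first flow described above can be sketched as follows. The function name and behavior here are hypothetical, chosen purely for illustration: the tests are written first and fail, and the implementation below them is the minimal code that makes them pass.

```python
# Hypothetical test-first example: these tests are written before
# slugify() exists. Running them first fails (red), which then
# drives the implementation (green).

def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_slugify_lowercases_input():
    assert slugify("TESTING") == "testing"

# Minimal implementation, written only after the tests above:
def slugify(text):
    return text.lower().replace(" ", "-")
```

Because the tests came first, the API (a single function taking a string and returning a string) was designed from the caller's point of view before any implementation decisions were made.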
Unit vs Integration vs System (Where)
Tests can be written at different levels of abstraction. Unit tests exercise a single unit (e.g. a class) in isolation.
Integration tests focus on how the units work together. System tests look at the application as a whole.
As you move up the abstraction level from unit to system, you need fewer tests, since each test exercises a larger part of the application.
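A minimal sketch of what "in isolation" means at the unit level (the class names here are hypothetical): the collaborator is replaced by a stub, so a failing test points at the unit itself, not at the services it depends on. An integration test would instead wire up the real tax service.

```python
# Hypothetical unit under test, with one collaborator.
class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net_price):
        return net_price + self.tax_service.tax_for(net_price)

# Test double: returns a fixed tax instead of calling a real service,
# so the test isolates PriceCalculator's own logic.
class StubTaxService:
    def tax_for(self, net_price):
        return 2.0

def test_total_adds_tax_to_net_price():
    calculator = PriceCalculator(StubTaxService())
    assert calculator.total(10.0) == 12.0
```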
Verification vs Specification vs Design (Why)
There can be different reasons for writing tests. All tests verify that the code works as expected, but some tests can start their lives as specifications of how yet-to-be-written code should work. In the latter situation, the tests can be an important tool for communicating how the application should behave.
We can even go a step further and let the tests also drive how the code should be organized. This is called Test-Driven Design (TDD).
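To illustrate the specification role (a hypothetical example, not from the original post): when test names describe required behavior in plain language, the test suite doubles as a readable specification of code that may not exist yet.

```python
# Hypothetical specification-style tests: the names describe the
# required behavior of a shopping cart and can be read by the
# whole team, even before the class is implemented.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def is_empty(self):
        return not self.items

def test_a_new_cart_is_empty():
    assert ShoppingCart().is_empty()

def test_adding_an_item_makes_the_cart_non_empty():
    cart = ShoppingCart()
    cart.add("book")
    assert not cart.is_empty()
```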
Manual vs Automated Tests (How)
Tests can be performed by a human or by a computer program. Manual testing is most useful in the form of exploratory testing.
When you ship the same application multiple times, like with releases of a product or sprints of an Agile project, you should automate your tests to catch regressions. The amount of software you ship will continue to grow as you add features, and so will your testing effort. If you don't automate your tests, you will eventually run out of time to run them all.
Specifying Tests Using the Classification
With the above classifications we can be very specific about our tests. For instance:
- Tests in TDD are automated (how) programmer (who) tests that design (why) functionality (what) at the unit or integration level (where) before the code is written (when)
- BDD scenarios are automated (how) customer (who) tests that specify (why) functionality (what) at the system level (where) before the code is written (when)
- Exploratory tests are manual (how) customer (who) tests that verify (why) functionality (what) at the system level (where) after the code is written (when)
- Security tests are automated (how) customer (who) tests that verify (why) security (what) at the system level (where) after the code is written (when)
By being specific, we can avoid semantic diffusion, like when people claim that “tests in TDD do not necessarily need to be written before the code”.
Reducing Risk Using the Classification
Sometimes you can select a single alternative along a dimension. For instance, you could perform all your testing manually, or you could use tests exclusively to verify.
For other dimensions, you really need to cover all the options. For instance, you need tests at the unit and integration and system level and you need to test for functionality and performance and security. If you don’t, you are at risk of not knowing that your application is flawed.
Proper risk management, therefore, mandates that you shouldn't rely exclusively on one type of test. For instance, TDD is great, but it doesn't give the customer any confidence. You should carefully select a range of test types to cover all aspects that are relevant for your situation.