
Coffee break tests: An exercise in test design

Let’s think about an example test scenario.

We’re given a program called MoverBot that is responsible for distributing a data file created by another program, OutputBot, to several servers. Since the two programs operate asynchronously with variable runtimes, OutputBot and MoverBot communicate about which files are ready to be distributed through a shared database entry. OutputBot adds an entry for the new file when it is ready to be distributed, and MoverBot marks that entry as done once the file has been distributed.
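
As a rough sketch of that handshake (the record shape and field names here are my assumptions, not the real schema):

    # Illustrative shape of the shared database record; the field
    # names are assumptions, not MoverBot's real schema.
    entry = {"filename": "output.dat", "status": "ready"}  # written by OutputBot
    # ... MoverBot copies the file to each server, then:
    entry["status"] = "done"  # written by MoverBot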

The “happy path” might be written like so:

Given a file to be distributed and an entry in the database,
When MoverBot runs,
Then the file is copied to each server.
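
As a concrete sketch, that scenario might look like this in pytest. This is a hypothetical harness, not MoverBot’s real interface: the servers fixture and the add_db_entry, run_moverbot, and files_on helpers are assumed to be provided by a conftest.py that drives the real programs.

    # Hypothetical happy-path test. Only tmp_path is a real pytest
    # fixture; servers, add_db_entry, run_moverbot, and files_on are
    # assumed harness pieces around the real MoverBot.
    def test_file_with_db_entry_is_distributed(servers, tmp_path):
        data_file = tmp_path / "output.dat"
        data_file.write_text("payload")
        add_db_entry(data_file)  # Given: the file has a database entry

        run_moverbot()  # When: MoverBot runs

        for server in servers:  # Then: the file reaches each server
            assert data_file.name in files_on(server)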

Now, we should also consider the case where MoverBot shouldn’t distribute the file:

Given a file to be distributed and no entry in the database,
When MoverBot runs,
Then the file is not copied to any server.
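
In the same hypothetical pytest harness as above, that would be:

    # The naive negative test, using the same assumed helpers.
    def test_file_without_db_entry_is_not_distributed(servers, tmp_path):
        data_file = tmp_path / "output.dat"
        data_file.write_text("payload")
        # Given: deliberately no add_db_entry(data_file)

        run_moverbot()  # When: MoverBot runs

        for server in servers:  # Then: the file is nowhere to be found
            assert data_file.name not in files_on(server)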

Can you spot the problem with that test?

Early in my testing career I included an equivalent test in a test plan for something a lot like MoverBot, but luckily it didn’t survive peer review. The problem is that instead of running the test, you could go for a coffee break and come back to the same result. Like so:

Given a file to be distributed and no entry in the database,
When I go for a coffee break and come back without running anything,
Then the file is not copied to any server.

In other words: given any scenario at all, when I do nothing, nothing happens.

To improve this, we need to ask: what bug (or risk) is this test trying to catch (or mitigate)? Once we answer that, we can do a better job of designing the test. These “negative tests”, if you want to call them that, aren’t just for exercising some behaviour. They need to demonstrate that some erroneous behaviour doesn’t exist.

In this case, what I wanted to show was that the database entry actually does control whether the file is distributed or not. I needed a control to show that, given the chance, MoverBot would have distributed the file without the database entry. In other words, I needed to show that MoverBot was perfectly capable of distributing files; it just opted not to in this case. This is the revised test:

Given two files to be distributed and only one has an entry in the database,
When MoverBot runs,
Then only the file with the database entry is copied to each server.
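
In the hypothetical pytest harness from earlier, the control and the negative check now live in one test:

    # Revised negative test with a built-in control. The marked file
    # proves distribution happened at all; the unmarked file is the
    # behaviour under test. Helper names are assumed, as before.
    def test_only_file_with_db_entry_is_distributed(servers, tmp_path):
        marked = tmp_path / "marked.dat"
        unmarked = tmp_path / "unmarked.dat"
        marked.write_text("payload")
        unmarked.write_text("payload")
        add_db_entry(marked)  # Given: only one of the two files has an entry

        run_moverbot()  # When: MoverBot runs

        for server in servers:
            copied = files_on(server)
            assert marked.name in copied  # control: MoverBot did run and copy
            assert unmarked.name not in copied  # the behaviour under test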

We now know that if MoverBot weren’t reading the database entry correctly, it probably would have happily distributed both files. Since it distributed only one, it must be respecting the database entry.

This particular example is equivalent to running both the first “happy path” test and the “coffee break test” at the same time. One might argue that, since the happy path test is there and presumably passes, you don’t need the control in the negative test. However, that is only true if both tests always run together and you can guarantee there are no bugs in how the second test is set up. Remember, automated tests are code, and all code can have bugs!

Including controls directly in your tests is a self-contained way of proving that the test setup is correct, and that you would have seen the bug manifest if it existed. Controls let you say with confidence that the only reason nothing happened is that nothing was supposed to happen. They aren’t always necessary, but if going for a coffee break would otherwise give you the same result, it’s a smart move to include them.
