This is the fourth part in my series highlighting some of the lessons I’m taking from reading Agile Testing by Lisa Crispin and Janet Gregory. Other entries in the series can be found here.
Chapter 6 is largely dedicated to an overview of the “Agile Testing Quadrants” as a way of exploring the different roles that different kinds of testing can fill. The conclusion states that the quadrants “provide a checklist to make sure you’ve covered all your testing bases.” Given the different possible purposes of testing (critiquing the product vs. supporting the team, business-facing vs. technology-facing), the idea is to do whatever testing is required to fill each quadrant. But I think, unless you’re starting from scratch, the reverse is the more interesting question: given the tests you have, what purpose does each serve?
I frame it that way because I often find teams working with an automated test suite (including unit tests) where new cases and scenarios are added one feature at a time without much thought for the big picture. The reasoning goes something like: I need a test for this feature, and this is the suite I know how to use, so that’s where the test will go.
There are two questions that need to be considered in building an effective suite:
- What does this test tell me that no other test will?
- Why is this test best done here, and not there?
(These are not the only questions, of course, but two good ones.)
When I first started as a test engineer, my manager made us include a reason for every test we wrote. It was a documentation-heavy culture that I’m happy to have left behind, but I still think this is a valuable exercise. The reason was usually just one sentence, but it was enough that you couldn’t simply copy and paste tests with a single variable changed. It forced you to put some thought into whether each new test was worth it. Any time I had trouble articulating a unique reason for two different tests, it was a red flag that I either had redundant test coverage or hadn’t thought through the scenario I was aiming for well enough. I still often ask this first question when reviewing test suites.
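To make this concrete, here’s a minimal sketch of what the practice might look like in a pytest-style suite. The function under test and all names are hypothetical; the point is that each test’s docstring states the one thing it verifies that no other test does.

```python
# Hypothetical example: each test carries a one-sentence "reason"
# stating what it uniquely verifies. The function under test is made up.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never going below zero."""
    return max(price * (1 - percent / 100), 0.0)

def test_discount_reduces_price():
    """Reason: covers the basic happy path of a partial discount."""
    assert apply_discount(100.0, 25.0) == 75.0

def test_discount_cannot_produce_negative_price():
    """Reason: covers the floor at zero, which no other test exercises."""
    assert apply_discount(100.0, 150.0) == 0.0
```

If you found yourself writing the same reason for both tests, that would be the signal that one of them is redundant.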
Now, I’ve graduated into an environment with a highly variegated test strategy, and the second question has become much more important. Any given piece of functionality might be tested in a unit test, a visual test, a functional UI test, or a number of other places. Even a single functional test suite can be split into different sets based on when each should run: when a PR is opened, when a merge happens, when a release candidate is made, or when a new build is deployed. Since it’s not practical to run every test at every stage, understanding what should be tested when becomes a crucial part of our strategy.
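One lightweight way to encode the “when” is with test markers. A hedged sketch using pytest’s custom markers (the stage names here are my own invention, not a standard):

```python
# Hypothetical sketch: tagging tests with the CI stage they should run at,
# using pytest's custom markers. Stage names ("pr", "release") are made up
# and would need to be registered in pytest.ini to avoid warnings.
import pytest

@pytest.mark.pr          # fast check, runs on every pull request
def test_login_form_validates_email():
    ...

@pytest.mark.release     # slower end-to-end check, runs on release candidates
def test_full_checkout_flow():
    ...
```

The CI pipeline for each stage would then select its set with marker expressions, e.g. `pytest -m pr` on pull requests and `pytest -m release` on release candidates.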
With the big picture in mind, each new test and each place where tests run builds up confidence in the product layer by layer. Being able to articulate the purpose of each test, in both broad strokes and excruciating granularity, is a key part of getting the most value from each layer.