This is a lesson highlighted by three interactions the last couple weeks. I was going to title this post “Existence is insufficient”, but I found myself taking an unexpected turn in the last section. Let’s see if it still makes sense.

1. Unit tests existing is insufficient

I was talking to someone about the difficulty of keeping up with writing UI and integration tests. I thought one way to mitigate the burden would be to move tests lower down the test pyramid, which led to this exchange:

“Could any of the things you’re testing here be covered as unit tests instead?”

“No, the developers already have unit tests.”

Great that developers have unit tests, I guess, but less great that the only information feeding back from them into the larger testing strategy was that they exist.

2. Coverage is insufficient

That interaction reminded me of another classic moment every tester is familiar with:

“How did we miss that, we have a test for that.”

Or, even worse:

“How did we miss that, we have 100% coverage.”

Just because a thing called a test exists in code doesn’t mean it actually tests anything.
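As a hypothetical illustration (not taken from any real codebase), here is a test that exercises every line of a buggy function, so a coverage tool would report 100%, yet it asserts nothing and can never fail:

```python
def apply_discount(price, rate):
    # Bug: the discount is added instead of subtracted.
    return price + price * rate

def test_apply_discount():
    # Runs every line of apply_discount, giving full line coverage,
    # but makes no assertion, so the bug above goes undetected.
    apply_discount(100, 0.2)
```

A single `assert apply_discount(100, 0.2) == 80` would have caught the bug; the coverage number alone tells you nothing about whether anyone checked the result.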

3. Monitoring is insufficient

Since I’ve been trying to beef up my DevOps know-how, I’ve been asking a lot of people about how they test or monitor their production systems. How do they know when something breaks? What tools are they using? What kind of metrics and thresholds do they use? That kind of stuff.

This exchange never happened verbatim, but there have been hints of it:

“Do you have alerts set up to tell you when something is wrong?”

“Yes we do! They’re always going off though, so we ignore them.”

It was, coincidentally, about this same time that I learned the phrase “alert fatigue” from Abby Bangser.

Use it or (you might as well) lose it

There are seemingly infinite flavours of this: we have tests but don’t run them, the test suite always fails, we get feedback but never act on it, etc. These just happened to be the three examples that hit me one after the other in the span of a week.

Related is the old joke, paraphrased: “the easiest way to fix a failing test is to not run it.”

It turns out this is, sadly, an easy trap to fall into. I was going to end the post there, but I realized there is actually a bright side to this.

Saying “you might as well lose it” isn’t actually a judgement one way or the other. Yes, you might be missing an easy opportunity to reap benefits from something you already have. But it’s equally possible that it’s something you don’t need in the first place. If you aren’t using something to its actual potential, can you save yourself the time and energy of carrying the burden entirely? I remarked the other day that I found great joy in submitting a pull request that did nothing but delete a bunch of outdated and redundant UI tests.

You can’t get value out of something if you don’t use it. But don’t spend the time to use something just because you can.