
99 character talk: Unautomated automated tests

Some of the testers over at testersio.slack.com were lamenting that it was quiet online today, probably due to some kind of special event that the rest of us weren’t so lucky to join. Some joked that we should have our own online conference without them, and Daniel Shaw had the idea for “99 character talks”, an online version of Ministry of Testing’s 99 second talks. Several people posted contributions on Slack, and some made it into a thread for them on the MoT Club.

The whole concept reminds me of a story I once heard about Richard Feynman. As I remember it, he was asked the following: If all civilization and scientific knowledge were about to be destroyed, but he could preserve one statement for future generations, what would he say to give them the biggest head start on rebuilding all that lost knowledge? His answer: “Everything is made of atoms.”

Luckily the 99 character challenge wasn’t quite so dramatic as having to preserve the foundations of software testing before an apocalypse. Instead, for my contribution I captured a simpler bit of advice that has bitten me in the ass recently:

Automated tests aren’t automated if they don’t run automatically. Don’t make devs ask for feedback.

99 characters exactly.

But because I can’t help myself, here’s a longer version:

The problem with automated tests*, as with any test, is that they can only give you feedback if you actually run them. I have a bunch of tests covering different aspects of the products I work on, but not all of them are run automatically as part of our build pipelines. This effectively means that they’re opt-in. If someone runs them, actively soliciting the feedback that these tests offer, then they’ll get results. But if nobody takes that active step, the tests do no good at all. One of the worst ways to find a bug, as I recently experienced, is knowing that you had everything you needed to catch it in advance but just didn’t use it.

If your automated tests aren’t run automatically whenever something relevant changes, without someone having to manually trigger them, then they aren’t automated at all. An automated test that has to be run manually is still a manual test. It’s one that can be repeated reliably, and probably faster than a human could manage, sure, but the onus is still on a fallible bag of meat to run it. Instead, build them into your pipeline and loop the feedback back to the programmer as soon as something goes wrong. More and more lately I’m realizing that building reliable and efficient pipelines is a key skill for getting the most out of my testing.
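
To make that concrete, here’s a minimal sketch of what “run automatically” can look like, assuming a GitHub Actions pipeline and a Python suite run with pytest; the workflow name, setup step, and test command are placeholders for whatever your project actually uses:

    # Run the test suite on every push and pull request,
    # so nobody has to remember to ask for this feedback.
    name: tests
    on: [push, pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Placeholder setup: install whatever your suite needs.
          - run: pip install -r requirements.txt
          # The whole point: the trigger is a code change, not a person.
          - run: pytest

The details differ between CI systems, but the principle is the same everywhere: the thing that triggers the tests should be a change to the code, not a human remembering to ask.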

Now to put my own advice into practice…


* The “tests vs checks” pedants in the room will have to be satisfied for the moment with the fact that writing “automated checks” would have made the talk 100 characters and therefore inadmissible. Sorry, but there were zero other ways to reduce the character count by 1.
