A crucial little caveat to my statement that automated tests aren’t automated if they don’t run automatically: All your automated tests should fail.
… at least once, anyway.
In the thick of implementing a new feature and coding up a bunch of tests for it, it’s easy to forget to run each one to make sure they’re actually testing something. It’s great to work in an environment where all you have to do is type make test or npm run test and see some green. But if you only ever see green, you’re missing a crucial step.
TDD understands this: red, green, refactor. Red comes first: you write the test, see it fail, and only then write the code necessary to make it pass. You don’t have to be a die-hard TDD acolyte to do this, though.
Write a test. See it pass. Introduce the bug it was designed to catch. See it fail.
If you can’t make it fail, then what information is that test ever going to give you?
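To make that loop concrete, here’s a hypothetical sketch (isValidEmail() is made up for the sake of the example; any small function of yours works the same way):

// isValidEmail.test.js (hypothetical example)
const isValidEmail = require("./isValidEmail");

it("rejects an address with no @", () => {
  expect(isValidEmail("not-an-email")).toBe(false);
});

Run it and see green. Then, for a minute, make isValidEmail() return true unconditionally, run the test again, and make sure you see red before you put the code back.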
Even if you just reverse the expectation (add a .not or change true to false) and see the case fail, you at least know that you ran the test. This is why a catch-all npm run test is great but dangerous: it’s often possible to write a test but never include it in the tests that run automatically, and to fail to notice because it’s 1 out of 1000 tests. Quite likely this shouldn’t be possible, but it seems to arise naturally in at least three frameworks I’ve worked with. Knowing that all tests passed isn’t the same as knowing that one specific test passed.
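Two cheap ways to check that one specific test ran and can fail, sketched against the hypothetical test above: reverse the expectation and confirm it goes red, and run just that test by name instead of the whole suite.

// Temporarily reversed: this should now FAIL. If it still passes, the test
// either isn't running or isn't checking what you think it checks.
it("rejects an address with no @", () => {
  expect(isValidEmail("not-an-email")).not.toBe(false);
});

Jest’s --testNamePattern flag (-t for short) narrows the run to matching tests, so the green you see belongs to the test you actually care about:

npx jest -t "rejects an address with no @"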
Aside from just checking the wrong thing, or failing to run the test at all, this concept can bite in more subtle ways. I recently saw an example while using Jest to test a React app. There were tests written to assert that if a certain condition was met, dispatching the action being tested would result in a rejected promise. Since promises settle asynchronously, you need to wait until they’re done before checking their results. In Jest, you do that by returning a promise from the test. These tests weren’t doing that:
it("rejects apples", () => { expect(makeOrangeJuice("apples")).rejects.toEqual(); });
By the time the makeOrangeJuice() call was rejected and the expect() failed, the it() function had already finished and declared a pass by default. If the call wasn’t getting rejected (and, spoiler alert, it wasn’t), the test would still pass, but you’d see an UnhandledPromiseRejectionWarning in the console if you happened to be watching carefully enough. Worse than that, Jest clears the console logs when it runs more than one suite at a time, so we wouldn’t have seen the warning in the CI logs either.
Confusingly, the code was resolving a promise that should have been rejected, but the expect was being rejected (failing) when it should have resolved (passing). So not the most helpful error message, unless you’re familiar with this sort of mistake. (“What do you mean unhandled rejection? I’m not rejecting anything! There’s a Promise.resolve() right here!”)
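To make that concrete, here’s a hypothetical sketch of the shape the code was in. The implementation resolves when the test expects a rejection, so the .rejects matcher fails, but its failure lives in a promise that nobody returns or awaits:

// Hypothetical implementation: resolves where it should reject
function makeOrangeJuice(fruit) {
  if (fruit !== "oranges") {
    return Promise.resolve("close enough");   // bug: should be a rejection
  }
  return Promise.resolve("juice");
}

it("rejects apples", () => {
  // The .rejects matcher fails (its promise rejects), but nothing returns or
  // awaits that promise, so it() has already reported a pass and the failure
  // only ever shows up as an unhandled rejection warning.
  expect(makeOrangeJuice("apples")).rejects.toEqual();
});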
For the record, Jest is pretty good at testing asynchronously, if you remember to write your tests that way. All it takes in most cases is a return:
it("rejects apples", () => { return expect(makeOrangeJuice("apples")).rejects.toEqual(); });
That’s a test that should have failed, and actually was failing, but we would never see it fail.
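If you’d rather use async/await, Jest will also wait on an async test function, which does the same job as the return:

it("rejects apples", async () => {
  await expect(makeOrangeJuice("apples")).rejects.toEqual();
});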
Test your tests. Feed them with bugs so they grow up strong. Love any test that fails. If it can’t, it’s no good to anybody. All your automated tests should fail.
(Bonus tip: you can automate testing your tests! It’s called mutation testing, and though it generally doesn’t tell you if you have useless tests, it does insert bugs and tells you if your test suite as a whole will catch them or not. I have a great little demo for this that I will, one day, make into a video.)
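(For JavaScript, StrykerJS is one mutation-testing tool that plugs into Jest. Roughly, and assuming a fairly standard project layout, getting a first mutation report looks like this:

npm install --save-dev @stryker-mutator/core @stryker-mutator/jest-runner
npx stryker init    # generates a Stryker config; pick Jest as the test runner
npx stryker run     # mutates your source and reports which mutants survived

Surviving mutants are bugs your suite never noticed, which is exactly the failure this post is about.)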