AssertJS, a conference specifically about testing in JavaScript, took place in Toronto last week. Interestingly, I’d say most of the talks were not specific to JS at all. Here are my top 5 highlights, in no particular order:
1. Tophatting
From Adam Archer: “Tophatting” is a word used at Shopify for actually testing someone’s changes when doing code review, rather than just squinting at the code and saying “looks good to me”. They have developer tools to make it easy to set up whatever local state is required to run the code changes.
I’ve always taken it as a given that if I approve a pull request, it means I’ve run the code, but some people find that a surprising concept. While he didn’t go into the origin of the term, it’s nice to have a word for it.
2. Training yourself on short TDD cycles
From James Shore and Ryan Marsh: Both talked about TDD.
James emphasized that the TDD cycle is about making testable hypotheses about your code (“this should fail because…”) and using it as a way of making sure you’re actually understanding what you’re doing.
Ryan talked about a workshop to train yourself to, essentially, crave the reward of those tests turning green and to be ruthless about throwing out code if you can’t get the desired result in a few minutes. That could be a failing test, the code to make a test pass, or refactoring while keeping the tests green.
The lesson from both: the red/green/refactor cycle in TDD can be/should be really short. The missing piece here for me: the same feedback cycles can be made to apply at longer timescales, too. I haven’t been a hardcore TDD’er, but I don’t buy that it’s just a development thing.
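To make that short cycle concrete, here’s roughly what one loop might look like in Jest. This is my own sketch, not an example from either talk, and the slugify module is just a stand-in:

```js
// Red: state the hypothesis in a test and watch it fail
// ("this should fail because slugify doesn't exist yet").
const { slugify } = require('./slugify');

test('lowercases and replaces spaces with dashes', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// Green: write the smallest ./slugify.js that makes it pass, e.g.
//   exports.slugify = (s) => s.toLowerCase().replace(/\s+/g, '-');
// Refactor: clean up (handle punctuation, extract the regex) while
// re-running the test after each change to keep it green. If a step
// takes more than a few minutes, throw it out and start over.
```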
3. A defense of test coverage
A great analogy from Isaac Z. Schlueter (which I’m sure I will not do justice to) in defense of pursuing 100% code coverage: It’s like seatbelts. Everybody is going to die. If you wear a seatbelt, you’re still going to die. But that doesn’t mean seatbelts don’t save lives.
Isaac also made great points that there’s a bigger psychological impact when coverage drops from 100% to 99.5% than when it drops from 98% to 96%, and that going for 100% forces you to look into the deep dark corners of your code. People love to hate on test coverage—I’m even giving a talk about it next week!—so it was refreshing to hear a defense of it.
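Neither point was about tooling, but if you do want to hold the line at 100%, Jest can fail the run whenever coverage dips below a threshold. A minimal config sketch (my addition, not something from the talk):

```js
// jest.config.js — fail the test run if coverage drops below 100%
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};
```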
4. Addressing flaky tests
Both Nancy Du and Jason Palmer talked about tools that record the API calls a test makes and replay those responses as canned data on subsequent runs, which reduces flakiness without the hassle of manually maintaining mocks (cypress-autorecord and polly-jest-presets, respectively).
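I haven’t tried either tool yet, but as I understand it polly-jest-presets builds on Polly.js, and a record-and-replay setup looks roughly like this sketch (the recording name, URL, and test are illustrative):

```js
const { Polly } = require('@pollyjs/core');
const NodeHttpAdapter = require('@pollyjs/adapter-node-http');
const FSPersister = require('@pollyjs/persister-fs');
const fetch = require('node-fetch');

Polly.register(NodeHttpAdapter);
Polly.register(FSPersister);

describe('user profile', () => {
  let polly;

  beforeEach(() => {
    // The first run hits the real API and persists the responses to disk;
    // later runs replay those recordings instead of going over the network.
    polly = new Polly('user profile', {
      adapters: ['node-http'],
      persister: 'fs',
      recordIfMissing: true,
    });
  });

  afterEach(() => polly.stop());

  test('fetches a profile', async () => {
    const res = await fetch('https://api.example.com/users/1');
    expect(res.status).toBe(200);
  });
});
```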
Jason did a great job of showing how collecting stats on flaky tests and visualizing them can be a motivator to fix them. He used a grid of pass/fail results for each test in each build, which lets you easily see the difference between a flaky test and a problematic build.
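I don’t know what Jason’s tooling looks like internally, but the flaky-test half of that grid is simple enough to sketch: with pass/fail results per test per build, any test that both passes and fails across builds is a flakiness suspect (a whole column of failures, by contrast, points at a bad build). The data here is made up:

```js
// Rows are tests, columns are builds (true = pass). Illustrative data only.
const results = {
  'login form submits':        [true, true, true, true, true],
  'search debounces input':    [true, false, true, false, true], // flaky
  'checkout calculates total': [true, true, true, false, true],
};

// Flag tests that both pass and fail across builds, with a failure rate.
const flaky = Object.entries(results)
  .filter(([, runs]) => runs.includes(true) && runs.includes(false))
  .map(([name, runs]) => ({
    name,
    failureRate: runs.filter((passed) => !passed).length / runs.length,
  }));

console.table(flaky);
```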
5. Component-Driven Development
From Michael Shilman: using Storybook to develop front-end components in isolation from a larger web app is a great way to trick people into writing tests. It can be difficult to test every visual variation of a component in a UI, so Storybook provides a way of cataloging the different visual states a component can have. Once you have that catalog of states, you can couple it with a visual testing framework and you’re off to the races.
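A stories file is essentially that catalog, with one named export per visual state. This sketch uses Storybook’s Component Story Format with a hypothetical Button component of my own invention:

```js
// Button.stories.js — each named export is one state Storybook renders in isolation
import React from 'react';
import { Button } from './Button';

export default {
  title: 'Button',
  component: Button,
};

// Each of these is a state a visual testing tool can snapshot and diff.
export const Primary = () => <Button variant="primary">Save</Button>;
export const Disabled = () => <Button disabled>Save</Button>;
export const Loading = () => <Button loading>Saving…</Button>;
```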
It does feel like visual testing is going to be a theme of my next year.