
Thoughts from a TDD kata mob

Test Driven Development has been on my mind lately.

I’ve heard from time to time that some people don’t consider Test Driven Development to be a testing activity. Lisa Crispin recently said as much in a piece on modeling test automation, and it’s not obvious to me why it shouldn’t be seen that way.


Coincidentally, the day after reading that, I had planned a workshop for a group of developers who were looking for more practice with TDD. They knew the concepts and were on board with TDD being a valuable technique, but in the thick of actively developing a big project it can be very easy to take shortcuts.

I had planned a few activities for our session, but ended up focusing just on the Bowling Game TDD Kata from Uncle Bob (with a hat tip to this collection of TDD katas on GitHub). I took a mob programming approach, passing one laptop around the room after each red/green/refactor cycle, so we could keep the group working together on the same task and facing the same challenges. We also tried to stick as closely as possible to two rules:

  1. Only write enough of a test to make it fail; as soon as you have a failing test (even if it wasn’t the test you meant to write), switch to writing production code.
  2. Only write the minimum production code necessary to make the test pass, then switch to adding another test. (A sketch of one such cycle is below.)
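To make this concrete, here is roughly what the very first cycle looks like in Jest. The Game, roll, and score names are just the conventional interface for this kata; ours looked more or less like this, though I’m reconstructing from memory.

// bowling.test.js: just enough of a test to fail (Game doesn’t even exist yet)
const { Game } = require("./game");

it("scores zero for a game of all gutter balls", () => {
  const game = new Game();
  for (let i = 0; i < 20; i++) game.roll(0);
  expect(game.score()).toBe(0);
});

// game.js: the minimum production code that turns that test green
class Game {
  roll(pins) {}          // no test forces us to remember the pins yet
  score() { return 0; }  // hard-coding is fine until a test says otherwise
}

module.exports = { Game };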

I have to admit I rarely take a strict TDD approach in my own development projects, so watching the exercise led to a few interesting observations.

An appreciation for failing tests

One of the ways to make a developer unhappy is to not let them write any production code. That’s effectively what our second rule did. There was a lot of discussion about what they needed to write next, but as long as the tests were green they couldn’t actually do anything about it. It was neat to see the usual reaction to red tests (“oh no, what broke?”) flip completely to “oh boy, the tests failed! I get to write the code now!”

And of course, when tests turned red after refactoring, the “oh no” element was still there, but it was also satisfying to know that the tests were doing their job of keeping our code changes safe.

Tests can have bugs too

This should go without saying, but it was fun to see how this manifested.

It’s possible to write yourself into a corner with two tests that can’t both pass at the same time, so the suite stays red no matter what (reasonable) production code you write. In our case, we wrote an assertion that would only be true after adding some more production code, which is exactly what TDD asks for, except that we had forgotten to actually give the test the input that code would need, so nothing we wrote could have turned it green.
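Something along these lines (the numbers are invented, but the shape of the mistake is right): the second test asserts the spare bonus without ever rolling a spare, so it contradicts the perfectly good test above it.

const { Game } = require("./game");

it("scores an open frame as the sum of its rolls", () => {
  const game = new Game();
  game.roll(3);
  game.roll(4);
  expect(game.score()).toBe(7);
});

it("adds the next roll as a bonus after a spare", () => {
  const game = new Game();
  game.roll(3);
  game.roll(4); // oops: 3 + 4 = 7, so this isn’t actually a spare
  game.roll(5);
  expect(game.score()).toBe(17); // 3 + 4 + 5, plus a bonus 5 that can never apply
});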

Likewise, tests can be green because of an error in the tests. Very early on we wrote a test to check that bonus points were applied to the score. The code didn’t implement bonuses yet, of course, but we weren’t yet re-initializing the game between tests, and the previous test happened to add exactly the right value to the score to make the new test pass anyway.
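A rough reconstruction of what that looked like (again, the specific rolls are invented): the whole file shared one Game instance, and the leftover roll from the first test was worth exactly the bonus the second test expected.

const { Game } = require("./game");

const game = new Game(); // one shared game for the whole file: the bug

it("scores a single roll", () => {
  game.roll(4);
  expect(game.score()).toBe(4);
});

it("adds the next roll as a bonus after a spare", () => {
  game.roll(6);
  game.roll(4); // spare
  game.roll(4);
  // 10 + 4 bonus + 4 = 18, but a plain sum of every roll so far,
  // including the stray 4 left over from the test above, is also 18
  expect(game.score()).toBe(18);
});

Moving the new Game() into a beforeEach gave every test a fresh game, and the bonus test promptly went red again, as it should have been all along.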

In both of those cases, we were only two or three tests in, so it was immediately evident that the error was in the tests. Further along, I wonder whether these kinds of errors make for some tricky debugging, or whether the incremental buildup of tests helps ensure that only the most recent one is suspect.

Temptation of looking ahead

Once tests turned green, it was always extremely tempting to add new functionality in the refactor step. We know that boolean is going to have to become a counter soon, for example, so why don’t we just set it to its initial value now? The trick is not only recognizing when you’re doing this, but coming up instead with a test for some behaviour that will require it to be a counter. That test will fail, and will then give you the excuse you need to add it.
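As a made-up illustration of the pattern (I won’t swear this was our exact flag): suppose a “spare pending” boolean is handling the bonus. Rather than quietly turning it into a counter while refactoring, write the strike test that a single boolean can’t satisfy, since a strike’s bonus has to cover the next two rolls.

it("adds the next two rolls as a bonus after a strike", () => {
  const game = new Game();
  game.roll(10); // strike
  game.roll(3);
  game.roll(4);
  for (let i = 0; i < 16; i++) game.roll(0); // fill out the remaining frames
  expect(game.score()).toBe(24); // (10 + 3 + 4) + (3 + 4)
});

Once that test is red, replacing the flag with a count of pending bonus rolls is no longer looking ahead; it’s just making the test pass.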

In this way, the tests you choose to write will shape how the code develops. We had one developer who really wanted to add the concept of frames to our bowling game. We knew we were going to have to address it eventually; everything we did without considering frames, the argument went, was just delaying the inevitable. But the group kept coming up with test cases that didn’t actually require tracking frames, so we didn’t add frames. This is what prevents code from being over-engineered, but it requires some skill in knowing the best way to guide the evolution of the code.

My advice here was to focus on one thing at a time. If there are twelve things this code is still going to do incorrectly, which one do we address first?

Exercising the tester mind

In each iteration, as we evaluated what we wanted to refactor, everybody had a different idea about which test needed to be added next. Sometimes these ideas came in the form of “we should add Z”, but often they came in the form of “if we do X, it’ll give us the wrong answer.” That’s a test idea.

Although we want to stick to implementing just one test at a time, I think it is important to keep track of test ideas as you go. When you know there are three edge cases that still need to be considered, put in a comment for each one but just implement one.

it("will do something", () => { ... })
// it will do something else
// given X it should return Y
// test all 0s

As you work through the file, each comment line gets converted into a test. (Or, it ends up not being relevant any more.)
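Jest (which is what we were using) also has it.todo() for exactly this purpose; the placeholders show up in the test report instead of hiding in comments:

it("will do something", () => { /* ... */ });
it.todo("will do something else");
it.todo("given X it should return Y");
it.todo("test all 0s");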

Whereas choosing the right tests to guide the incremental changes you want is itself an exercise in good code design skills, coming up with what your options are in the first place is a testing activity. It requires the same mental muscles.

Why TDD isn’t about testing (to some)

I think the idea that TDD is both a testing activity and a code design activity is borne out by Lisa’s answer to my question.

She’s downplaying the testing aspect of TDD because of a context where developers are leaving the testing, including unit tests, up to testers. If you label TDD as a testing activity instead of a code design activity, perverting that into “testers write the unit tests” might be a real danger.

On the other hand, in a context where developers are comfortable writing unit tests already, emphasizing the connection with testing as a whole is a great way to turn it into a stepping stone to other kinds of testing. Practicing TDD is a stealthy way to get developers to practice generating test ideas that could be applicable elsewhere.

The primary goal might be stated differently — code design rather than overall quality — but of course one feeds the other. Both can be true.

A quick retro on the TDD workshop

For my own reference, if I were to do this TDD mobbing workshop again, there are only a few things I’d do differently:

  1. Set up the initial (empty) module imports and test files in advance, in an IDE the team is used to; we lost too much time on setup and unfamiliar keyboard shortcuts.
  2. Make sure to have something like wallaby.js set up to show the test status in real time rather than running the suite manually (my plan to use jest --watch didn’t work in the moment; more on that below).
  3. Don’t underestimate how long the exercise will take; the rules (and terminology) in the Bowling Game are surprisingly complicated. I’d budget 2 hours to do just this one kata in a mob, with no expectation to finish.
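On point 2, my best guess about jest --watch is that watch mode relies on git (or hg) to figure out which files changed, and our kata directory wasn’t a repository; jest --watchAll watches everything and doesn’t have that requirement. Next time I’d have the runner already looping before anyone touches the keyboard, something like:

# run from the kata directory; re-runs the whole suite on every save, no git required
npx jest --watchAll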

I would recommend trying a TDD kata for yourself.
