A number of times in my career, I’ve come across developers who were determined that application code should never be changed just to facilitate testing. Even when everybody recognizes that flaky tests are a problem, somehow adding testability features to the app itself to fix that flakiness is beyond the pale. It is a belief that I have seen in multiple companies, across multiple types of products, both Waterfall and Agile, with dedicated testers and without.
So what do we do about it?
I asked that recently on a testers’ Slack group and the Ministry of Testing forums. These are some of the main ideas that came out of those discussions. I’ve ordered them roughly from what I felt was least to most helpful. As we get further down, we’ll get closer to addressing what I think the root cause is. I would love to hear if there are other ideas.
Cover your ass
Don’t even bother trying to understand why developers aren’t co-operating with the advice of testers. Make sure you have a paper trail to show you tried to make improvements but were thwarted. Then when you get in trouble for wasting your time on flaky tests, you can prove that you did your best.
Hopefully it’s obvious why I find this an unsatisfying resolution.
Make it someone else’s problem
If developers disagree with testers, don’t consider it your problem to change their mind. Make your case to your managers, and let them decide. Someone up the chain of command should be able to force the developers to comply and make the changes you need. Let your manager deal with it.
Unfortunately, forcing developers to implement what they believe is a bad practice won’t make them believe it’s a good idea after all.
The only nugget of good advice to pull from this is to make the magnitude of the problem as clear as possible. That can always be used to help prioritize changes.
Make it safe
One person suggested that the reason for wanting to avoid code changes in the application could be a lack of confidence in the code. The developers may be concerned that any change is likely to introduce new issues, or that changes may break a fragile mental model they have of how things work.
If this is the root of the problem, then think about how the change you’re proposing can be made as safe as possible; think about what the potential impacts could be and how you can test the change you’re proposing.
Keep it narrow
I have heard developers worry about code bloat, or not wanting to make things more complex. Some versions of this idea certainly do carry this risk, so the worry can be justified. It’s fair to say that introducing multiple code paths based on whether you’re in a testing environment or not is often a bad idea. This is especially true if you’re proposing to change some existing behaviour. There can also be security concerns with things like exposing whole new APIs or information that is meant to be private.
In response, testability changes should be as minimal and non-invasive as possible. Limit yourself to adding, not changing, if you can. An extra log message or publishing a boolean “ready” state is sometimes all it takes to fix a flaky test. These both have in common that they don’t look like “test code” if you didn’t know why they were added, and don’t even need to be wrapped in “if (test)” logic. The less “test-y” the proposed change is, the better.
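To make the “ready” state idea concrete, here is a minimal Python sketch. The `Importer` class and its timing are invented for illustration; the point is that publishing one piece of state lets a test wait on reality instead of sleeping and hoping.

```python
import threading
import time

class Importer:
    """A hypothetical background job whose completion tests used to guess at."""

    def __init__(self):
        self._ready = threading.Event()

    def run(self):
        time.sleep(0.1)    # stand-in for the real import work
        self._ready.set()  # the one-line testability addition: publish "ready"

    def wait_until_ready(self, timeout):
        """Tests block on the published state instead of a fixed sleep."""
        return self._ready.wait(timeout)

# In the test: no arbitrary sleep, no polling the UI.
importer = Importer()
threading.Thread(target=importer.run).start()
assert importer.wait_until_ready(timeout=5)
```

Note that `wait_until_ready` isn’t wrapped in any “if (test)” logic; it reads like ordinary application code, which is exactly the point.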
Show, don’t tell
Sometimes the limitation is that there isn’t time for developers to work on “test stuff”. There may be an asymmetric prisoner’s dilemma at play if the developers have nothing to gain by making tests easier when it detracts from their own work (more on that later).
The answer to this is to do the work for them. Test developers should have equal access to the code to make a change and open a pull request, on top of knowing how to test it well. This assumes, of course, you aren’t dealing with fragile code or making a huge change – see “Make it safe” and “Keep it narrow” above.
It’s also possible that the developers don’t know what testability looks like, or don’t recognize what good internal code quality looks like. Developers may not know, or may not appreciate, that they can write code that is trivial to verify. This is what makes TDD hard at first; it forces developers to think about how a unit can be written in a testable way before even starting. It can feel like you have to contort your regular coding style in uncomfortable ways until you’re used to it and take testable code for granted. Instead of just describing what the code should be, show them what that looks like in real life.
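As one way of showing rather than telling, here is a small Python sketch of the kind of contrast I mean. The `greeting` example is made up; the technique is the familiar one of passing a dependency in as a parameter so the unit becomes trivial to verify.

```python
from datetime import date

# Hard to test: the function reads the clock itself, so every test's
# result depends on the day you happen to run it.
def greeting_untestable():
    today = date.today()
    return "Happy New Year!" if (today.month, today.day) == (1, 1) else "Hello"

# Trivial to test: the dependency is a parameter, defaulting to the real clock,
# so production callers don't change at all.
def greeting(today=None):
    today = today or date.today()
    return "Happy New Year!" if (today.month, today.day) == (1, 1) else "Hello"

# Any date can now be verified directly, with no mocking framework.
assert greeting(date(2024, 1, 1)) == "Happy New Year!"
assert greeting(date(2024, 6, 15)) == "Hello"
```

A five-line diff like this often lands better than an abstract argument about “testability”.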
Make it their solution
Instead of running it up the chain of command and relying on managers to force the issue, make your case directly to the developers. The same arguments that you’d use complaining to managers should work with developers: “we have a hard time testing because…” or “it takes X hours longer to move a release to production because…”
The difference here is that by working with developers on the problem, they can help figure out the best solution. It’s possible that they will come up with a different idea that works just as well. You might arrive at the same solution you wanted all along, but now it was the developer’s idea instead of something you are forcing on them.
Besides, if you work as a test automation developer, talking to developers about your mutual challenges is always a good idea.
Make it their problem
Everything above so far assumes that this is a tester problem. The tester has a problem to solve and needs the developers’ help. That just shouldn’t be the case.
The more the barriers between “test” and “dev” are broken down, the more the pain of dealing with flaky tests is shared by the developers too. If developers can’t merge pull requests because the tests fail, or have equal responsibility for testing code before it is released, they will be motivated to fix this problem just as much as you are. On the flip side, if developers can ignore flaky tests, they will. Then the tester is the one left to deal with it.
Sometimes you have to let them feel pain.
Make it their code
We already touched on the most common objection to adding testability features to application code: some version of “it’s added complexity” or “it’s more code to maintain”. But remember, all the hoops that test code jumps through to create a stable test out of an unstable application are also complex code that needs to be maintained. This kind of thinking just shows that the developer thinks of application code as theirs and test code as someone else’s.
I’ve even picked up a sentiment from time to time that adding “test code” to an application somehow corrupts their otherwise pure codebase. Test code is considered “other”. Sometimes a few counterexamples are enough to poke holes in this idea: Have you ever refactored code to make unit tests easier? Have you ever added a log message to make code easier to debug? Have you ever changed the implementation of a product to make development easier in a way that wasn’t strictly related to business functionality? When posed the right way, you may be able to bring developers around to the realization that they’ve been writing “test code” all along.
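Those counterexamples can be sketched in code, too. This hypothetical Python fragment (the order-parsing names are invented) shows both at once: logic extracted into a pure function so it can be unit tested, and a log line added purely to make debugging easier. Neither looks like “test code”, yet both exist for exactly that reason.

```python
import logging

logger = logging.getLogger("orders")

# Refactored out of the file-reading loop so it can be verified in isolation.
def parse_order_line(line):
    sku, qty = line.strip().split(",")
    return sku, int(qty)

def load_orders(path):
    with open(path) as f:
        orders = [parse_order_line(line) for line in f]
    # Added "to make debugging easier" -- which is to say, for testability.
    logger.info("loaded %d orders from %s", len(orders), path)
    return orders

# The extracted function is now trivially checkable, no file required.
assert parse_order_line("ABC-1,3\n") == ("ABC-1", 3)
```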
If test code is developed in parallel with the application code from the start instead of as an afterthought, the need for testability just becomes part of the development process. Testability features can be made a requirement up front. We all love to talk about how quality is everybody’s responsibility, but put it in concrete terms. What if the last thing you committed went straight to production right now? (A question like that also has the side benefit of motivating strong CI/CD practices.)
This doesn’t necessarily mean that you should get rid of dedicated testers altogether, since you can still have a team of developers that keeps the code in separate mental buckets. What we should get rid of is the idea that test automation code is something separate from the application. Test automation is a feature of the application and should be treated as such.