What is enough testing?

Deciding how much testing is enough is difficult. Here are three things to think about when making that decision.


This is the million-dollar question of testing.

If I had a simple answer, I’d be living on my own luxury island with a billion-dollar yacht.

As it is, I don’t, but here are three factors to think about.


The most obvious constraints on your testing are the time and resources available. If a million-dollar penalty clause kicks in next week, it’s unlikely to be worth testing past that deadline: any issue you find is unlikely to be as costly as the penalty itself. The trade-off is rarely that stark, but the same idea applies. Missing a trade show, for example, can cost a lot of money too.

The counterbalance to that is the unexplored risk remaining. In an ideal world we’d have good hard data about the number of bugs found in particular areas of the application, or related to particular kinds of change. Unfortunately, in the real world, that kind of data rarely exists, so it comes down to the instincts of the people on the team. People develop a non-numeric sense of how fragile certain parts of the code base are, and of how complex a change is: how much code was edited, and how intricate were those edits? This is where it is worth listening to the people on the team with the most experience.

The third dimension that plays into this is the “embarrassment factor”, or reputation risk. This is why people insist on the installer being tested even when nothing in it has changed: breaking it would be deeply embarrassing and would prevent users from getting any value from the new version. The tests targeting this sort of risk should form the core of a regression pack and, ideally, be automated.
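As a rough illustration of what automating that sort of check might look like, here is a minimal sketch using pytest. The installer path, arguments, install directory, and expected files are all hypothetical placeholders, not anything prescribed above; substitute your own artefacts.

```python
# Hypothetical smoke test for an installer, sketched with pytest.
# INSTALLER, INSTALL_DIR and EXPECTED_FILES are assumptions for illustration.
import subprocess
from pathlib import Path

INSTALLER = Path("dist/setup.sh")          # hypothetical installer script
INSTALL_DIR = Path("/tmp/myapp")           # hypothetical install target
EXPECTED_FILES = ["bin/myapp", "LICENSE"]  # files the install should produce


def test_installer_runs_cleanly():
    # A non-zero exit code is exactly the embarrassing failure we never
    # want to ship, so fail loudly if the installer reports one.
    result = subprocess.run(
        ["bash", str(INSTALLER), "--prefix", str(INSTALL_DIR)],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr


def test_installed_files_present():
    # A quick sanity check that the install actually produced something usable.
    missing = [f for f in EXPECTED_FILES if not (INSTALL_DIR / f).exists()]
    assert not missing, f"Missing after install: {missing}"
```

Even a check this small, run on every build, covers the reputation risk that nobody wants to discover in production.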

The way risk scales with the size of a change is one of the most compelling reasons to make small, incremental changes: the less you change, the smaller the risk.