The 1% Testing Solution

I recently wrote about 80% hacks and this post is closely related to that. The overall concept is "don't let the perfect be the enemy of the good".

When it comes to testing, we often dream up a "perfect" solution and then try to implement it. The business objects because they want to ship products, not test code. Developers object to the business because they want to know their code works. This tension is very difficult to resolve, so I fall back on my favorite example:

use Carp;

sub reciprocal {
    croak "Reciprocal of 0 is not allowed" unless $_[0];
    return 1 / $_[0];
}

Right off the bat, I could see people writing two tests for that: one for zero as an argument and one for a non-zero argument. And you know what? That will get you 100% test coverage on this function and that's where the whole "know their code works" argument falls down.
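For illustration, here's what those two tests might look like with Test::More (a minimal sketch; the test descriptions are my own):

use strict;
use warnings;
use Carp;
use Test::More tests => 2;

sub reciprocal {
    croak "Reciprocal of 0 is not allowed" unless $_[0];
    return 1 / $_[0];
}

# The happy path: a non-zero argument.
is( reciprocal(4), 0.25, 'reciprocal(4) returns 0.25' );

# The zero argument croaks as advertised.
eval { reciprocal(0) };
like( $@, qr/Reciprocal of 0 is not allowed/, 'reciprocal(0) croaks' );

Both tests pass, and a coverage tool like Devel::Cover will happily report the function as fully covered.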

What happens if you pass a non-numeric argument? What happens if you pass an argument which causes an overflow? What happens if you pass no argument? What happens if more than one reciprocal() subroutine has been exported into your namespace?
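To see the first few of those bite, try them (a sketch; the exact wording of Perl's error messages can vary by version):

use strict;
use warnings;
use Carp;

sub reciprocal {
    croak "Reciprocal of 0 is not allowed" unless $_[0];
    return 1 / $_[0];
}

# "two" is a true string, so it sails past the check, numifies to 0,
# and we get Perl's "Illegal division by zero" instead of our
# friendly croak.
eval { reciprocal("two") };
print $@;

# "0.0" is also a true string that numifies to zero: same crash.
eval { reciprocal("0.0") };
print $@;

# No argument at all does croak, but with a misleading message,
# since undef is false.
eval { reciprocal() };
print $@;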

Mind you, this is one of the simplest functions you can imagine, and I can't picture many developers who would honestly test every possible failure mode. The counter-arguments tend to be:

  • I know what arguments I'm getting
  • I can't test every possible combination

Frankly, I'm going to trust the developer on this. They may be wrong, but they know their system better than I do. It's often foolish to make concrete recommendations about systems you don't understand well. Telling someone they need better test coverage of a function which is only called once a month in a non-critical setting makes you look silly, particularly if the rest of their system is falling to pieces. You need to prioritize.

Getting back to the 1% solution: a couple of years ago at a London.pm meeting, a gentleman sheepishly admitted that his company had just started testing and only had 1% test coverage. However, their support department called and asked what they had done differently, because their support calls had dropped dramatically.

Apparently, the developers didn't just sit down and write tests for everything. When they fixed bugs, they wrote tests for the bugs they were fixing. As a result, known issues stayed fixed and the call center was very happy, which presumably means the customers were happier too.

If you don't know where to start with testing, write a test or two just for the problem you're working on. The tests don't need to be perfect and you certainly don't need comprehensive coverage, but if you slowly build up a small set of integration tests exercising the major code paths you deal with, you'll find that a small number of tests catches a huge number of bugs. An example follows.
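For instance, suppose the non-numeric argument above actually bit you in production. The fix and its regression test might look something like this (a sketch; I'm reaching for Scalar::Util's looks_like_number, which ships with core Perl, and the error message is my own):

use strict;
use warnings;
use Carp;
use Scalar::Util 'looks_like_number';
use Test::More tests => 2;

sub reciprocal {
    my $n = shift;
    croak "Argument to reciprocal() must be a non-zero number"
        unless defined $n && looks_like_number($n) && $n != 0;
    return 1 / $n;
}

# Regression tests for the bug we just fixed: bad arguments now
# croak with a useful message instead of dying mid-division.
eval { reciprocal("two") };
like( $@, qr/must be a non-zero number/, 'non-numeric argument croaks' );

eval { reciprocal() };
like( $@, qr/must be a non-zero number/, 'missing argument croaks' );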

Note that I said "integration" tests, not "unit" tests. You can write the latter and I've no problem with that. The difference is that you might write unit tests to verify that your authentication routine can return a valid user, but an integration test using something like Selenium, checking that login and redirection work properly, will cover a lot more ground. The unit test will be far easier to debug but will catch far fewer bugs. The integration test will be harder to debug but will catch far more. It's a trade-off and it's perfectly OK to make it. What's important about decisions like this is that you understand why you're making them and accept the consequences, not that you please some TDD fanatic who's knocking on your door at 7 in the morning asking if you've heard the good news about Kent Beck.
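If Selenium is more firepower than you need, even something as simple as Test::WWW::Mechanize covers that login-and-redirect ground (a sketch; the URL, form name, fields, and welcome text are all assumptions about a hypothetical app):

use strict;
use warnings;
use Test::More tests => 3;
use Test::WWW::Mechanize;

my $mech = Test::WWW::Mechanize->new;

# Fetch the login page, submit the form, and check we landed where
# a real user would.
$mech->get_ok( 'http://localhost:5000/login', 'login page loads' );
$mech->submit_form_ok(
    {
        form_name => 'login',
        fields    => { username => 'testuser', password => 'sekrit' },
    },
    'login form submits'
);
$mech->content_contains( 'Welcome, testuser', 'redirected to the welcome page' );

One test file like that exercises routing, authentication, sessions, and templates all at once, which is exactly why it catches more bugs per line than a unit test.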

If your back is against the wall, you're willing to take on a little technical debt, and you want moderate assurance that your code works, just start with a few end-to-end integration tests exercising your code the way a user would. You may not have good code coverage, but you'll be amazed at how many bugs you find.

Purists will object to this, but so what? Are you there to build great things or to please purists? Even with 100% code coverage, we can see multiple failure modes that the reciprocal tests miss, and we know that few developers, if any, write tests for them. Even the purists compromise here. 1% testing is just a different compromise: you still want to find the bugs, but you have to get your release out the door. Open source software aside, programmers shouldn't be compromising with purists; they should be compromising with the business.

2 Comments

I'm happy to see more people writing about integration testing. As far as I'm concerned it's the most important kind, since "getting as much work done with as little effort as possible" applies to testing as well, but that gets obscured by how all the test examples start really small.

Judging by the tests on CPAN you are preaching to the choir here :)


About Ovid

Freelance Perl/Testing/Agile consultant and trainer. See http://www.allaroundtheworld.fr/ for our services. If you have a problem with Perl, we will solve it for you. And don't forget to buy my book! http://www.amazon.com/Beginning-Perl-Curtis-Poe/dp/1118013840/