Meta-testing

A year and a bit ago I wrote about measuring the coverage not of the code I was testing but of my tests, and how doing so had helped me find some problems. Today I dived further down that rabbit-hole.

As I mentioned then, we run all our tests under Jenkins. Because we're testing quite a complex application, which needs configuration data, databases and so on, we've got a wrapper script that sets up all that jibber-jabber, runs the tests, and then tears down the temporary databases and other stuff it created. But that script had a bug. Under some circumstances, if one of the test scripts failed, the wrapper script would eventually exit with status 0, which is the Unix-ism for "everything went as expected", and so Jenkins would say "all the tests passed, hurrah". We need it to exit with a non-zero status if anything goes wrong, so that Jenkins knows it should kick up a fuss.
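
In case the words aren't clear, here's the shape of the thing, massively simplified. The prove invocation and the setup/teardown subs are stand-ins, not our actual code:

    #!/usr/bin/env perl
    use strict;
    use warnings;

    setup();

    # system() returns the child's raw wait status; anything non-zero
    # means the test run failed somehow
    my $tests_failed = system('prove', '-r', 't') != 0;

    # teardown has to run whether the tests passed or not ...
    teardown();

    # ... but the exit status has to reflect the tests, not whatever
    # the last teardown command happened to return, which is where we
    # kept going wrong
    exit($tests_failed ? 1 : 0);

    sub setup    { }  # stand-in for creating config, databases etc
    sub teardown { }  # stand-in for cleaning them all up again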

Oops. It's something that's come up a couple of times: we found it was broken, fixed it after a while of getting erroneous results from Jenkins, then broke it again when we updated the wrapper script to make it do more stuff, got loads of erroneous reports, and fixed it again ...

So now I've written a script that tests that the script that runs the tests has the correct return value. Hopefully this will make any errors more obvious, as we'll see a "not ok" on our dev machines before we ever send a broken testing wrapper to Jenkins.
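
For the curious, the meta-test looks something like this, again simplified and with made-up names for the wrapper and for the directories of deliberately-passing and deliberately-failing tests:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use Test::More tests => 2;

    # run the wrapper against a suite that should pass, and against
    # one that is deliberately broken, and check the exit status of
    # each run
    system('./run-the-tests', 't-all-passing');
    is($? >> 8, 0, 'wrapper exits 0 when the tests pass');

    system('./run-the-tests', 't-one-failing');
    isnt($? >> 8, 0, 'wrapper exits non-zero when a test fails');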

The only question remaining is whether I should write tests for the script that tests that the script that runs the tests is ok.

And ... diagram that previous sentence, I dare you :-)

1 Comment

Here you go. Created with TrEd (http://ufal.mff.cuni.cz/tred), written in Perl.

http://i193.photobucket.com/albums/z176/echoroba/out.jpg?t=1399928650
