Discoverable tests and creating testing standards

I like writing code. I like writing tests. I don't like:

  • Trying to figure out where the tests are
  • Writing boilerplate
  • Finding yet another package without tests

That last one is particularly vexing when you discover that your code is failing because another package doesn't load, and that package has no tests of its own. So I fixed that.

Some of what follows is a repeat of things written in previous posts, but it's important enough that it bears repeating.

A couple of months ago I suggested we create "discoverable" tests at work. In short, many of us face the problem of trying to figure out where the tests are in a new project. Worst case is this:

$ ls t/*t | wc -l

And you're sitting there weeping because the nice, orderly code in your lib/ directory is tested by a pile 'o junk. Other projects have subdirectories, but those subdirectories often group things by behavior, not package; your tests for lib/Foo/Bar/ wind up in t/common/exceptions/, t/common/standards/, t/database/, t/miscellaneous/crap/tests/. Even if you memorize all of those subdirectories, you find yourself adding code that can, arguably, be in multiple subdirectories.

My recommendation was that each of our modules have one corresponding .t file. The agreement reached was that a module like lib/Foo/Bar.pm would have its tests in t/Foo/Bar.t. I also have the following in my .vimrc:

noremap ,gg :call GotoCorresponding(expand("%"))<cr>

function! GotoCorresponding(module)
    let file = system("get_corresponding ".a:module)
    if !empty(file)
        let ignore = system("perl /home/cpoe/bin/make_test_stub ".a:module." ".file)
        execute "edit " . file
    else
        echoerr "Cannot find corresponding file for: ".a:module
    endif
endfunction
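GotoCorresponding shells out to a get_corresponding helper. Since it merely translates paths under the lib/Foo/Bar.pm <-> t/Foo/Bar.t convention, here's a hypothetical sketch of it (the real script may differ):

```perl
#!/usr/bin/env perl
# Hypothetical sketch of get_corresponding: map a module path to its test
# path and vice versa. Prints nothing if the path matches neither pattern.
use strict;
use warnings;

sub get_corresponding {
    my $path = shift;
    return "t/$1.t"    if $path =~ m{^lib/(.+)\.pm$};
    return "lib/$1.pm" if $path =~ m{^t/(.+)\.t$};
    return '';    # no corresponding file; GotoCorresponding reports the error
}

print get_corresponding( $ARGV[0] ), "\n" if @ARGV;
```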

The get_corresponding code merely returns lib/Foo/Bar.pm for t/Foo/Bar.t (and vice versa). It's trivial to write. However, the make_test_stub is what I added today. It's a hack (as many good experiments are!), but it does the following:

  • Exit if we're not about to edit a .t test.
  • Exit if the .t test already exists.
  • Write a subtest stub for every public function.

It looks like this:

use strict;
use warnings;
use autodie ':all';
use Class::Inspector;
use Sub::Information;

my ( $package_file, $test_file ) = @ARGV;

my $package = $package_file;
$package =~ s{\.pm$}{} or exit;
$package =~ s{^lib/}{};
$package =~ s{/}{::}g;

exit if -e $test_file;

eval "use $package";
die $@ if $@;

open my $fh, '>', $test_file;
print $fh <<"END";
use Test::Most;
use $package;

END

my $functions = Class::Inspector->function_refs($package);

my @functions;
foreach my $function (@$functions) {
    my $info = inspect($function);
    my $name = $info->name;
    next if $name =~ /^_/ or $name =~ /^[[:upper:]_]+$/;
    next if $info->package ne $package;
    push @functions => $name;
}

foreach my $function (@functions) {
    print $fh <<"END";
subtest "Verify $function" => sub {
    can_ok '$package', '$function';
};

END
}

print $fh "done_testing;\n";

So now, if I edit a file named lib/Weborama/Collect/Component/Publisher.pm and it has no tests, I hit ,gg in vim and get this:

use Test::Most;
use Weborama::Collect::Component::Publisher;

subtest "Verify code" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'code';
};

subtest "Verify halt" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'halt';
};

subtest "Verify has_landing_url" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'has_landing_url';
};

subtest "Verify headers" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'headers';
};

subtest "Verify landing_url" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'landing_url';
};

subtest "Verify new" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'new';
};

subtest "Verify response" => sub {
    can_ok 'Weborama::Collect::Component::Publisher', 'response';
};

done_testing;

This has a few benefits:

  • You start out with working code rather than an empty test file.
  • Even for modules that have no tests, you'll at least know that you can "use" the module.
  • Even if you don't update the stub test, you'll get a test failure if someone removes part of the public API.
  • It's harder to forget to test a particular public function.
  • Continuing to evolve standards for testing can make our test suites easier to manage.
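The point about removed public API is easy to demonstrate. Here's a tiny self-contained sketch (Foo::Bar and its subs are made up) of what the generated stubs buy you:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical stand-in for a real module under lib/ (names are made up).
package Foo::Bar;
sub new  { bless {}, shift }
sub halt { return }

package main;

# What the generated stub asserts; these pass today...
can_ok 'Foo::Bar', 'new';
can_ok 'Foo::Bar', 'halt';

# ...but if someone later deletes halt(), can_ok fails, even though the
# stub was never fleshed out with real assertions.
done_testing;
```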

So I used this to quickly generate some stub tests for one of our projects and a few minutes later I had these extra tests in the test suite (I thought it prudent to obscure these test names):

$ git st --porcelain|grep ^A|cut -d' ' -f3|xargs prove -l
t/xxxxxxxx/xxxxxxx/xxxxxxxxx/xxxxxxxxx.t.................... ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxx/xxxxxxxxx/xxxx/xxxxxxxx.t...... ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxx/xxxxxxxxx/xxxx/xxxxxxxxxxxx.t.. ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxx/xxxx.t......................... ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxxxxxxx.t......................... ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxxxxxxx/xxxx.t.................... ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxxxxxxx/xxxx/xxxxxxxxxx.t......... ok
t/xxxxxxxx/xxxxxxx/xxxxxxxxxxxxxx/xxxx/xxxxxxxxxxxxxx.t..... ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxx/xxxxxxxxx.t................ ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxx/xxxxxxxxx/xxxxxx.t......... ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxx/xxxxxxxxx/xxxxxxxxxx.t..... ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxx/xxxxxxxxx/xxxxxxxxxxx.t.... ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxx/xxxxxxxxx/xxxxxxxx.t....... ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxx/xxxx/xxxxxxxx.t............ ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxxxxxxx.t........................ ok
t/xxxxxxxx/xxxxxxx/xxxxx/xxxx.t............................. ok
All tests successful.
Files=16, Tests=53,  7 wallclock secs ( 0.06 usr  0.01 sys +  6.48 cusr  0.28 csys =  6.83 CPU)
Result: PASS

Naturally, not all tests can be shoehorned into this one-size-fits-all methodology, but shortly after I recommended this at the BBC, devs noticed it was much easier to work with the test suite. At my current position, I'm finding that other projects have adopted this pattern and it makes switching between repositories and getting up to speed much easier.

This idea is just an experiment, but I'd love to hear feedback and suggestions.

If you liked this post, don't forget to check out my Beginning Perl book.


I like it. Reminds me of ZenTest for Ruby - specifically, the material in this article. It's been a while since I used ZenTest, though. The resemblance may be coincidental.

Pretty straightforward recommendation, though not applicable in all cases. Sometimes I organize tests not by module but by case groups. There are also abstract classes and roles which might not be tested at all (heretic!).

*... which might not need to be tested ...

Very nice.

Here's a little more vimscript, also including the GetCorresponding() function:

See also Devel::CoverX::Covered for another take on the problem of finding the way in a sprawling test suite.

Devel::CoverX::Covered extracts and stores the relationship between covering test files and covered source files from a Devel::Cover cover_db.

So you can jump from source files to test files and vice versa. In Emacs, you can also have it highlight sub coverage.

Curtis, I wrote that after investigating Devel::Cover during a Gold Card day when we both worked in Pips (a project at the Beeb). I'm surprised you didn't give that a try; it sounds like it would have been helpful in your situation.


It seems that the glob MakeMaker uses to find tests when running "make test" is "t/*.t", which doesn't discover tests in subdirectories. Does anyone know of a way to change that?

I like the naming convention; I try to do that on my projects. Usually I go with one test file per method, especially in untested code where methods do too much, so a test for Foo::Bar->thing would be t/Foo/Bar/thing.t.

I've tried the can_ok stubs before, it's why can_ok takes a list of methods, and I found it has major drawbacks. It gave the illusion of more test coverage than really existed (caveat, this was before Devel::Cover).

For a dev team that isn't sold on the value of testing, it trained the developers to not trust the tests when they passed. There were a lot of .t files, and they passed, but that didn't tell them much. They had to manually look at each .t file to see if it actually tested anything. It was better to not have a .t file at all and know they were playing without a net.

As for the benefits, "even for modules that have no tests, you'll at least know that you can 'use' the module" is better accomplished with a single t/00compile.t. It runs first and fails fast.
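A minimal sketch of such a t/00compile.t, which walks lib/ with core File::Find and require_oks every module it finds. (To keep this sketch runnable anywhere, it builds a throwaway lib/ in a temp directory; a real t/00compile.t would simply scan the project's own lib/.)

```perl
use strict;
use warnings;
use Test::More;
use File::Find;
use File::Path 'make_path';
use File::Temp 'tempdir';

# Throwaway lib/ so the sketch is self-contained; a real t/00compile.t
# would scan the repository's lib/ directly.
my $root = tempdir( CLEANUP => 1 );
make_path("$root/lib/Foo");
open my $out, '>', "$root/lib/Foo/Bar.pm" or die $!;
print $out "package Foo::Bar;\nsub new { bless {}, shift }\n1;\n";
close $out;
chdir $root or die $!;

# Turn lib/Foo/Bar.pm into Foo::Bar.
sub path_to_package {
    my $path = shift;
    $path =~ s{^lib/}{};
    $path =~ s{\.pm$}{};
    $path =~ s{/}{::}g;
    return $path;
}

my @modules;
find( sub { push @modules, path_to_package($File::Find::name) if /\.pm$/ },
    'lib' );

unshift @INC, 'lib';
require_ok($_) for @modules;    # fails fast if any module does not compile
done_testing;
```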

In my particular case, because modules were heavily interdependent and slow to load, it made the test suite very, very slow. Another strike against the value of testing to a team already unconvinced.

The only concrete benefit is "even if you don't update the stub test, you'll get a test failure if someone removes part of the public API", but IMO it isn't worth creating all those stubs and all those falsely passing tests.

What is nice about those stubs is that they encourage good scoping. However, I'd see more value in an editor macro to expand "method_name" into a subtest stub.

I add this to my WriteMakefile call:

test => {
    TESTS => 't/*.t t/*/*.t t/*/*/*.t',
},

After a big search through the (unfortunately not well documented) Module::Install code I found the "tests_recursive" function that does the same thing to the required depth automatically.

About Ovid

Freelance Perl/Testing/Agile consultant and trainer. If you have a problem with Perl, we will solve it for you. And don't forget to buy my book!