So, you got mixed results from cpantesters?

Every so often you may have found yourself puzzled by the results that cpantesters provided, wondering what reasons might lie behind the failures or passes.

So you click on one of the FAIL reports and it doesn't reveal anything useful to you?

Chances are that if the reports share a common pattern, the new analysis site will point it out to you. The analysis uses well-known, basic statistical tools that estimate the influence of independent variables (such as Perl configuration flags or prerequisite versions) on the outcome of the test runs.
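To make the idea concrete, here is a minimal sketch (not the site's actual code, and the sample data is made up) of the kind of measure involved: a 2x2 contingency table relating one binary configuration flag to PASS/FAIL, summarized with the phi coefficient.

```python
import math

# Made-up sample reports: (flag defined?, test result)
reports = [
    (True, "FAIL"), (True, "FAIL"), (True, "FAIL"), (True, "PASS"),
    (False, "PASS"), (False, "PASS"), (False, "PASS"), (False, "FAIL"),
]

def phi_coefficient(reports):
    """Phi (mean-square contingency) coefficient for a binary
    variable versus a PASS/FAIL outcome; ranges from -1 to 1,
    with values near +/-1 indicating a strong association."""
    a = sum(1 for flag, r in reports if flag and r == "FAIL")
    b = sum(1 for flag, r in reports if flag and r == "PASS")
    c = sum(1 for flag, r in reports if not flag and r == "FAIL")
    d = sum(1 for flag, r in reports if not flag and r == "PASS")
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

print(phi_coefficient(reports))  # 0.5: the flag is associated with FAILs
```

A real analysis would of course pull many variables out of each report and rank them by strength of association, but the core idea is this simple.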

During the last two months I wrote over two hundred RT tickets based on the findings (see also my posting). Among my first favorites were FAILs that happen only when uselongdouble is defined, like #51196; there are plenty of similar ones. Another case where the tables gave an almost immediate understanding of the problem was when an upgraded prerequisite broke the distro, like #52889. See how the table at analysis marks all Test::Builder versions above 0.84 as red. You can skip reading reports and go directly to the Changes file of Test::Builder to see what they changed around 0.84.
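The per-version coloring in those tables amounts to grouping reports by the version of one prerequisite and computing a failure rate per version. A small sketch of that bookkeeping (hypothetical, with invented data whose version numbers merely echo the Test::Builder example):

```python
from collections import defaultdict

# Made-up sample reports: (prerequisite version, test result)
reports = [
    ("0.80", "PASS"), ("0.82", "PASS"), ("0.84", "PASS"),
    ("0.86", "FAIL"), ("0.86", "FAIL"), ("0.88", "FAIL"),
]

def fail_rate_by_version(reports):
    """Map each prerequisite version to its fraction of FAIL reports."""
    counts = defaultdict(lambda: [0, 0])  # version -> [fails, total]
    for version, result in reports:
        counts[version][1] += 1
        if result == "FAIL":
            counts[version][0] += 1
    return {v: fails / total for v, (fails, total) in counts.items()}

for version, rate in sorted(fail_rate_by_version(reports).items()):
    print(version, f"{rate:.0%}")  # versions above 0.84 show 100% failures
```

A table like this makes the breaking version jump out without reading a single individual report.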

What I'd love to see is cpantesters drawing some inspiration from reading these tables. When I see a distribution with only a dozen test results and plenty of strong correlations, I try to create reports that add new flavors to the existing set. After a few hours or days I visit the cases again, and sometimes the number of strong correlations has dropped and a clearer picture emerges of where the hot spot is.

Another use of the site is to identify broken cpantesters setups.

Yet another is to discover bugs in bleadperl, as in #50939.

I'm sure you'll find your own entertaining bits in it. Enjoy!


About Andreas Koenig

Perlin' for Berlin