Misconceptions & Misunderstandings

While many people really appreciate the work of CPAN Testers and value the feedback it gives, it would seem there are still several who are less than complimentary. One recently posted about what they see as wrong with the project, while repeatedly making incorrect and misguided references. What follows is my attempt to explain and clarify several facts about CPAN Testers that are often mistaken.

1) CPAN Testers != CPANTS

First on the agenda is the all too frequent mistaken assumption that the CPAN Testers and CPANTS projects are one and the same. They are not. The two projects are very different, though both are run for the benefit of the Perl community. CPANTS is the CPAN Testing Service, currently run by Charsbar, and provides a static analysis of the code and package files within each distribution uploaded to CPAN. It provides a very valuable service to Authors, and can help to highlight areas of a distribution that could be improved. CPANTS does not run any test suite in the distribution.

CPAN Testers is very definitely aimed at both Authors and Users, and is very much focused on the test suite associated with a distribution. Users can use the CPAN Testers project to see whether a distribution might be good (or not) to use within their own projects. Well tested distributions are often well supported by the Author or associated project team.

2) CPAN Testers != CPAN Ratings

CPAN Testers does not rate any distribution. It only provides information about the test suite provided with a distribution, to help Authors improve their distributions and to help others see whether they are likely to have problems using them. Any perception of a rating is misguided.

Which is better: a distribution with a test suite that only tests that the enclosed module loads, or one with a comprehensive test suite that occasionally highlights edge cases? Don't treat the number of FAILs or PASSes as any sign of how good or bad a distribution is. The counts may say something about a particular release version, but they are only signs of kwalitee, and should not be used to rate a module or distribution. The reports themselves may help Users make an informed choice, as they can review the individual reports and see whether the failures would affect them or their user base.
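
To illustrate, a load-only test suite often amounts to nothing more than the sketch below, where Some::Module is a placeholder for the distribution's actual module:

    # t/00-load.t - often the entire test suite of a minimally tested distribution
    use strict;
    use warnings;
    use Test::More tests => 1;

    # Passes as long as the module compiles; no actual behaviour is exercised
    use_ok('Some::Module');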

On the Statistics site, I do highlight distributions with no tests or high counts of FAIL reports. These lists are intended for interested parties who want to help fix distributions and/or provide test reports, helping to improve the distributions. However, none of the lists rates these distributions or says they are not worth using; they only suggest that the test suite might not be as robust as the Author thinks.

3) Development Release != Production Release

If you're basing your whole decision on whether to use a distribution on whether there is a FAIL report against a development release of Perl (5.19 is exactly that), then you're going to get the rug pulled from under you. Then again, if you're using a development version of Perl in a production environment, a FAIL report is the least of your worries.
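
For those unsure which is which: by convention, Perl releases with an odd minor version (5.19.x) are development releases, and those with an even minor version (5.18.x, 5.20.x) are production releases. A quick sketch for checking the running interpreter:

    use strict;
    use warnings;

    # An odd minor version (e.g. 5.19.x) marks a development perl by convention.
    my (undef, $minor) = split /\./, sprintf '%vd', $^V;
    print $minor % 2 ? "development perl\n" : "production perl\n";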

Testing CPAN against the latest development version of Perl is extremely useful for p5p and the Author alike. If the Author is made aware of issues that may prevent a distribution working with a future production version of Perl, they can hopefully fix them before that version is released.

The default report listings in CPAN Testers exclude development Perl releases. If you change the preferences in the left-hand panel you can see these reports, but the regular User is never going to see them.

4) Volunteers != Employees

All the people involved in CPAN Testers are volunteers: not just the testers, but the toolchain developers, website developers and sysadmins too. We do it because we want to provide the best service we can to the Perl community. For the most part the infrastructure has been paid for by the developers themselves, although we now have the CPAN Testers Fund, graciously managed by the Enlightened Perl Organisation, which allows individuals and companies to donate to help keep CPAN Testers up and running.

None of us gets paid to work on CPAN Testers, so please don't expect us to drop everything and make your great idea the focus of our attention. There are several sub-projects already on-going, but being volunteers, our time working on the code is subject to our availability. If you wish to contribute to the project, you are free to do so. We have a Development site that lists many of the CPAN and GitHub links to get at the code. If you fork a GitHub repository, contributing back is simply a matter of a pull request.

5) Invalid Reports

Unfortunately, we do see some smokers that have badly configured setups. These are usually picked up quite quickly, and the tester is alerted and advised how to fix the configuration. Normally an Author or Tester will post to the CPAN Testers mailing list, and make the Admins and the Tester aware of the problem. In most cases the Tester responds quickly, and the smoker is fixed, but on occasion we do have Testers that cannot be contacted, and the smoker continues to submit bogus reports.
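 
For context, many smokers submit their reports via CPAN::Reporter, whose settings live in ~/.cpanreporter/config.ini, and a broken email_from or transport line is the usual culprit. A minimal sketch of such a config, with illustrative values only (the exact transport details vary between setups):

    # ~/.cpanreporter/config.ini (illustrative values)
    email_from = tester@example.com
    edit_report = default:no
    send_report = default:yes
    transport = Metabase uri https://metabase.cpantesters.org/api/v1/ id_file metabase_id.json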

We do have a mechanism in place for run-away smokers, but it has only seriously been used on one occasion, when an automated smoker broke while the tester was on holiday. In these cases we ignore all the reports sent until the smoker is fixed. Although we can backtrack and ignore reports previously sent, it isn't always easy to do manually. This is where the Admin site project aims to make marking bogus reports easier for both Authors and Testers.

I have been working on the Admin site recently, which has been too long coming. Although a large portion of the site is complete, there is still work to do. The site will allow Authors to select reports by Date, Distribution and Tester, enabling them to selectively mark the reports they believe to be invalid. The Tester will then be required to verify that these reports are indeed invalid. The reason for this two-stage process is to prevent Authors abusing the system and deleting all the FAIL reports for their distributions. The Tester, on the other hand, can mark and delete reports before being asked, if they are already aware that a broken smoker has submitted invalid reports. Admins will also get involved if necessary, but the hope is that Authors and Testers can better manage these invalid reports themselves.

In the meantime, if you see a badly configured smoker, inform the Tester and/or the CPAN Testers mailing list. If it is bad enough, we can flag the smoker as a run-away. Posting that we never do anything, or are not interested in fixing the process, does everyone involved a disservice, including yourself. If you don't post to the mailing list, it is unlikely that we will see a request to remove invalid reports.

6) Rating Testers == No Testers

Rating a smoker or tester is pretty meaningless, and likely to mislead others who might otherwise find their reports useful. How do you take back a bad rating? How does a tester appeal against a rating from an author with a personal vendetta? How many current or future testers would you dissuade from ever contributing, even if they got only one bad rating? CPAN Testers should be encouraging diverse testing, not providing reasons not to get involved.

A recent post demanded that it was "a matter of basic fairness" that we allow Authors to rate Testers. Singling out your favourite, or least favourite, Tester is not productive. Just because one tester generates a lot of FAIL reports doesn't mean those reports are not instructive. If we were to allow Authors to exclude or include specific Testers, we would be opening the gates to Authors who wish to accept only PASS reports. In that case, there would be no point to CPAN Testers at all.

There are Testers that are not responsive, for various reasons. It was once a goal of Strawberry Perl to enable CPAN Testers reporting by default, such that a User could post anonymously, without being expected to respond with more detailed information if they lacked the knowledge or time to help. ActiveState have also considered contributing their reports to CPAN Testers, which would be a great addition to the wealth of information, but as their build systems run automatically, getting a detailed response about a specific report wouldn't be possible. Rating these scenarios gives the wrong impression of the usefulness of both the Tester and the smoker.

There are already too many negative barriers for CPAN Testers; I'm not willing to support another.

7) What is Duplication?

The argument of duplication crops up every so often. However, what do you base that judgement on? Just the OS, and maybe the version of Perl? If you only consider the metadata we store with each report, you're missing the many differences there can be between testing environments. What about the C libraries installed, the file system, other Perl modules, disk space, internet/firewall/port connections, user permissions, memory, CPU type, etc.? Considering all that and more, are we really duplicating effort?
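
As a rough illustration of how much variation a single machine can carry beyond "OS plus Perl version", here is a sketch that prints a small sample of the perl -V build settings that routinely differ between smokers:

    use strict;
    use warnings;
    use Config;

    # A handful of the build-time settings that distinguish one smoker
    # from another, even on the same OS and Perl version.
    for my $key (qw(archname osvers cc ccflags usethreads uselongdouble libs)) {
        printf "%-14s %s\n", $key, $Config{$key} // '';
    }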

Taking a recent case (sadly I can't find the link right now), one tester was generating FAIL reports while others had no problem. It turned out he had an unusual filesystem setup that didn't play well with the distribution he was testing. If we'd rejected his reports simply because that OS/Perl combination had already been tested, we'd have had a very poor picture of what could be tested, and would likely have missed his unique environment. There are so many potential differences between smokers that it is unlikely we are duplicating effort anywhere near as much as you might think.

8) The Alternatives?

In that recent post it was suggested there were "alternatives to the CPAN testing system." Sadly the poster elected not to mention or link to them, so I have no idea what these alternatives might be, or whether they are indeed alternatives.

In all the time I've been looking out for other systems, there have been Cheesecake for Python (which is more like CPANTS) and Firebrigade for Ruby (which seems to have died now), but I've not seen anything else that works similarly to CPAN Testers. There is Travis CI, and in the past Adam Kennedy worked on PITA, and even had some dedicated Windows machines at one point to help Authors test their code in other environments. However, CPAN Testers still tests on a much wider variety of platforms and environments, and more importantly is much more public about its results. I'd be interested to hear about others, if they do exist, and potentially learn from them if they do anything better or have features that CPAN Testers doesn't have.

Conclusion

You are free to ignore CPAN Testers, but there are thousands inside and outside the Perl community who are very glad it exists, and pleased to know that it is what it is. If you have suggestions for improvements, there are several resources available to enable discussion. There is always the mailing list, but there are also various bug tracking resources (RT, GitHub Issues) available. We do like to hear of project ideas, and even if we don't have the time to implement them, you are welcome to work on your idea and show us the results. If appropriate, we'll integrate it with the respective sub-project.

CPAN Testers isn't opposed to evolving, it just takes time :)

5 Comments

Hi

On the high failure count report, could you please add the CPAN id, to make searching the list easier.

Thanx.

I for one am very very thankful this service exists. I can't count the number of times I've released a dist (obviously thinking it's bug-free) and then received a flood of FAILs over the next few days pointing out how I forgot something very subtle that turned out to be critically important. It's definitely trained me to be a better developer, more conscious of cross-platform (especially win32) issues, and more aware of what features aren't available in earlier versions of Perl.

The only (occasional) frustrating thing is receiving the oddball FAIL report and not being sure if it's the test or my code that's broken, so having a more streamlined process for getting clarification about tests will be awesome.

Thanks for the post and for this great service!

Thanks for the great service and thanks for this clarifying post! To get more tests, support in cpanminus and guidelines on how to set up a test machine would help.

Awesome volunteer service and greatly appreciated. Helps me strive to write better code and more test cases.

Thanks again for all the hard work and dedication to making perl's ecosystem better.


About CPAN Testers

This is the new account for incidental and summary updates to what's happening with the CPAN Testers. For all the latest news and views, please see our blog.