CPAN Testers Summary - December 2012 - Shabooh Shoobah
November turned out to be a very eventful and productive month. Aside from various code updates to some CPAN-Testers distributions, including porting many of the tests to Test::Database and discovering the usefulness of Test::Trap for testing some of the scripts, we also got a handle on the missing reports. For the past few months, questions about missing reports have increased. Back in August I started to look at a more thorough catch-up. After some suggestions and ideas from David and Andreas, I also added some code to collect data in a similar way to the tail log. As a consequence of the tail parsing, the improved catch-up code and the rewritten generate code, not only have we caught up, but we now have a much more robust mechanism in place to ensure we're not missing any reports.
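For anyone unfamiliar with it, Test::Trap makes it straightforward to capture what a piece of code prints and returns. Here's a minimal sketch of the kind of thing it makes easy; the trapped code is purely illustrative, not one of the actual CPAN Testers scripts:

    use Test::More;
    use Test::Trap;

    # trap runs the block while capturing STDOUT, STDERR, return values
    # and any exit or die, leaving the results in $trap.
    my @result = trap { print "processed 3 reports\n"; 3 };

    is( $trap->stdout, "processed 3 reports\n", 'expected output captured' );
    is( $result[0],    3,                       'expected return value' );

    done_testing();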
During November the builder was under heavy load compiling all the pages with new reports. With 3 processes running simultaneously, a high volume of requests was being made, but thankfully it held its own and is managing requests very well. The oldest request is currently less than 2 days old, with the latest report lagging about 15 hours behind. As such, if you don't see one of your reports, or your distribution doesn't seem to be quite up to date, please wait a couple of days to make sure it's not just waiting to be processed.
Last month also saw some significant limits pushed a little higher. First and foremost, 1,033,056 reports were processed in a single month. Although we've had more than a million reports submitted in a single month before, it's never been this high. Chris Williams became the first tester to pass 10 million report submissions, not that we doubted it was coming soon. We also now have 6 testers who have submitted over 1 million reports each, and last month saw a notable increase in the number of testers currently producing reports. Hopefully this is a trend that will continue. November also saw the biggest variety of platforms we've ever had tested, and with 102 different platforms it's the first time we've managed to cover more than 100 in a month. With all these new highs, it perhaps isn't too surprising to hear that, at 67, we've also had the highest number of perl versions being used in testing, with 63 so far this month already. And finally the number of reports ... last month we passed 25 million reports submitted (and last week we passed 26 million). As always, many thanks to everyone who has submitted reports over the past 13 years.
On the mailing list Shmuel Fomberg began a lengthy discussion about report grades that would highlight distributions with dependencies that failed to install. While I understand why many users might find this useful, for authors it would be a blight on their distributions, and may not be truly representative of the state of the distribution. For example, a distribution may fail with one version of a dependency, which then gets fixed, but the report remains tagged against the latest distribution, even though the new latest version of the dependency works. It gives a false impression of the distribution. Without extensive metadata analysis the CPAN Testers systems wouldn't know that the first report is no longer necessarily relevant. Or is it? What if the user can't install the latest version? Either way an author is tarred with a fault beyond their control, and that isn't what CPAN Testers is about. This isn't the first time this discussion has come up, and I doubt it'll be the last.
JT Smith asked about the best practice for testing network access. As yet there isn't a one-size-fits-all answer: LWP has an is_online method, some distributions use ping (although this isn't available on all platforms), and others test whether they get an HTTP 200 response from a known URL. If you know of a more suitable method, please add it to the wiki. Kirk Kimmel asked how long it takes to test the whole of CPAN, which didn't get a definitive answer, but was guessed to be several days, but less than a week. Nathan Goodman questioned why he saw no output in a PASS report for his distribution, and after some investigation Chris Williams discovered a bug in CPANPLUS (specifically CPANPLUS::Dist::Build), proving yet again how useful CPAN Testers can be ... even with PASSing reports.
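For the last of those approaches, here's a minimal sketch of how a test script might skip its network-dependent tests when a known URL can't be reached; the probe URL and timeout are just illustrative choices, not a recommendation:

    use strict;
    use warnings;
    use Test::More;
    use LWP::UserAgent;

    # Probe a known URL and skip everything if we don't get an HTTP 200 back.
    my $ua  = LWP::UserAgent->new( timeout => 10 );
    my $res = $ua->get('http://www.cpan.org/');

    plan skip_all => 'No network access detected'
        unless $res->code == 200;

    # ... network-dependent tests would go here ...
    ok( 1, 'network is reachable' );

    done_testing();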
With all the improvements and great support from the community, I'm very pleased to see the CPAN Testers project in a very healthy state again. I'd also like to thank everyone who has contributed to the CPAN Testers Fund, and particularly those who attended the London Perl Workshop at the end of last month and poured their spare cash into the donation buckets, including the one for the CPAN Testers Fund. Projects like CPAN Testers can only survive with volunteer contributions, both in time and cash.
Although we still can't say too much just yet, we do have some great news: the Metabase will be moving to a new home in the new year, and we'll be moving off SimpleDB, which means we might have even better response times in future. My personal thanks to David Golden for following this up, as well as to Karen Pauley and Ricardo Signes for the initial introductions and negotiations. 2013 is looking very bright for CPAN Testers and long may it continue.
Cross-posted from the CPAN Testers Blog.