Reading CPAN Testers Reports Using AI Agents
CPAN Testers produce a lot of data. Every CPAN distribution gets tested by our volunteers almost immediately after upload. These testers run every version of Perl across every platform you can imagine, and some you never knew existed. Instead of each project maintaining its own testing environments, the community maintains these systems so developers can focus on their own projects. There are more than 150 million test reports so far, and that number currently grows by about one million every month.
Sorting through all of those test reports is a big job. The community helps: Slaven Rezić, Andreas König, and others regularly submit tickets to a project's bug tracker for problems revealed by the testing systems they maintain. And individual maintainers can view the data through one of the UIs built on top of it, like the CPAN Testers Matrix (by Slaven) or CPAN Testers Magpie (by Scott Baker). But this, too, is a lot of manual effort.
Large Language Models (LLMs) or "AI" agents have recently arisen as a way to chew through large data sets to produce summaries, even if the data is not well-formatted or "machine-readable." By making requests in plain language, a human can tell an agent to fetch data, analyze it, reformat it, compare it, and produce reports. This year, at the 2026 Perl Toolchain Summit in Vienna, Austria, I built an interface so agents can easily discover and analyze the CPAN Testers data using the Model Context Protocol (MCP).
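MCP is built on JSON-RPC 2.0: an agent first asks a server which tools it offers, then calls them by name with structured arguments. The `tools/list` method comes from the MCP specification, but the tool name and schema below are hypothetical, a sketch of what such an exchange with a CPAN Testers server might look like, not the actual API:

```python
import json

# An MCP client discovers a server's capabilities with a JSON-RPC
# "tools/list" request (a method defined by the MCP specification).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A hypothetical response; the tool name and input schema here are
# illustrative only, not the real CPAN Testers MCP interface.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_test_reports",
                "description": "Fetch CPAN Testers reports for a distribution",
                "inputSchema": {
                    "type": "object",
                    "properties": {"dist": {"type": "string"}},
                    "required": ["dist"],
                },
            }
        ]
    },
}

# The agent reads the tool list and picks whichever tool answers
# the user's plain-language question.
tool_names = [tool["name"] for tool in response["result"]["tools"]]
print(json.dumps(request))
print(tool_names)
```

Because the discovery step is part of the protocol itself, an agent needs no prior knowledge of CPAN Testers; it learns what it can do at connection time.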
By pointing your agent at https://mcp.cpantesters.org, you can ask for CPAN Testers reports for any distribution, and your agent can give you a composite of which tests are failing on which Perl versions and platforms, and even suggest fixes! CPAN authors can also ask for a summary across all of their projects to find easy things to fix when they've got a few minutes free. If your agent has scheduled tasks, you can get daily digests of all the test failures from the last day, a feature that was once part of the CPAN Testers website, now entirely customizable to your preferences. If you'd like to help expand the possibilities for agent integration, join the CPAN Testers MCP project on GitHub.
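The composite described above is, at its core, a grouping problem: bucket each report by Perl version and platform, then tally grades. Here is a minimal sketch in Python, assuming each report has already been reduced to a grade, a Perl version, and a platform; the field names are my own simplification, not the actual CPAN Testers report schema:

```python
from collections import defaultdict

def summarize_reports(reports):
    """Tally pass/fail counts for each (perl version, platform) pair."""
    summary = defaultdict(lambda: {"pass": 0, "fail": 0})
    for report in reports:
        key = (report["perl"], report["platform"])
        # Treat anything other than a "pass" grade as a failure.
        bucket = "pass" if report["grade"] == "pass" else "fail"
        summary[key][bucket] += 1
    return dict(summary)

# Hypothetical reports, shaped like simplified test-report records.
reports = [
    {"grade": "pass", "perl": "5.38.0", "platform": "x86_64-linux"},
    {"grade": "fail", "perl": "5.38.0", "platform": "x86_64-linux"},
    {"grade": "pass", "perl": "5.36.1", "platform": "darwin"},
]

summary = summarize_reports(reports)
print(summary[("5.38.0", "x86_64-linux")])  # {'pass': 1, 'fail': 1}
```

An agent does essentially this after fetching the raw reports through the MCP tools, then describes the resulting table in plain language.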
I'm not that experienced yet with using these agents, but here are some quick examples I made while testing the MCP integration with the Claude desktop application:
"list test reports for all of PREACTION's dists and check for fails"
That query impressed me. With queries like this, CPAN authors could remove a lot of the tedium from diagnosing failures in user test reports!
This work was made possible by the sponsors of the 2026 Perl Toolchain Summit in Vienna, Austria: The Perl and Raku Foundation, Grant Street Group, Geizhals Preisvergleich, Vienna.pm, SUSE, Trans-Formed Media LLC, Ctrl O, Simplelists, Harald Joerg, Michele Beltrame (Sigmafin), Laurent Boivin.