April 2026 Archives

Reading CPAN Testers Reports Using AI Agents

CPAN Testers produces a lot of data. Every CPAN distribution gets tested by our volunteers almost immediately after upload. These testers run every version of Perl across every platform you can imagine, and some you never knew existed. Instead of each project maintaining its own testing environments, the community maintains these systems so developers can focus on their projects. There are more than 150 million test reports so far, and that number currently grows by about one million every month.

Sorting through all of those test reports is a big job. The community helps: Slaven Rezić, Andreas König, and others regularly submit tickets to a project's bug tracker for problems revealed by the testing systems they maintain. And individual maintainers can view the data in one of the UIs, like the CPAN Testers Matrix (by Slaven) or CPAN Testers Magpie (by Scott Baker). But this, too, is a lot of manual effort.

Large Language Model (LLM) or "AI" agents have recently emerged as a way to chew through large data sets and produce summaries, even when the data is not well-formatted or "machine-readable." By making requests in plain language, a human can tell an agent to fetch data, analyze it, reformat it, compare it, and produce reports. This year, at the 2026 Perl Toolchain Summit in Vienna, Austria, I built an interface so agents can easily discover and analyze the CPAN Testers data using the Model Context Protocol (MCP).
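To make the idea concrete, here is a minimal sketch of the kind of summarization an agent might do once an MCP tool hands it a batch of test reports. The field names (`grade`, `perl`, `osname`) and the list-of-dicts shape are illustrative assumptions, not the actual CPAN Testers schema or the real MCP tool output; the point is the tallying step.

```python
# Sketch: tally CPAN Testers grades for a distribution.
# The report shape below is an assumption for illustration only;
# it is not the real CPAN Testers API or MCP tool schema.
from collections import Counter

def tally_grades(reports):
    """Count reports per grade (e.g. pass/fail/na/unknown)."""
    return Counter(r["grade"] for r in reports)

# Hypothetical sample of what a summary tool might return:
sample = [
    {"grade": "pass", "perl": "5.38.0", "osname": "linux"},
    {"grade": "pass", "perl": "5.36.1", "osname": "freebsd"},
    {"grade": "fail", "perl": "5.8.9", "osname": "mswin32"},
]

print(tally_grades(sample))  # Counter({'pass': 2, 'fail': 1})
```

An agent could run this kind of aggregation itself after fetching reports through the MCP interface, then describe the failing configurations in plain language.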

About preaction

CPAN Testers data janitor