A ghetto test library.

I've been working on some slightly complicated code with a myriad of bad design decisions accumulated over the course of a decade, and a total absence of a test suite. I knew I needed some code reusability in my tests, but I had no idea exactly how much without making a quick start. Meanwhile my brain was filled with ancient code from which I was expurgating zombies, and I was struggling to understand multi-vendor interactions, so I wanted some bare-minimum reusability to make engineering failure conditions easier.

Writing a simple Splunk API client.

I needed to do some work with Splunk, the log and monitoring analytics toolkit. Specifically, I needed to combine disparate logs from different systems to solve a problem that had been interfering with our systems for a long time. I had a look around for stuff on the CPAN, but it was either embedded in bigger things that I didn't want to have to deal with, or didn't work for me for maintenance reasons. So I decided to write my own. The library I wrote is available here in draft form.

Now, because this was written by me for debugging purposes, I don't think it should be a CPAN module - not without a lot of hardening - but I wanted to share it anyway, as a handy way of doing API integration.
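By way of illustration, here's a rough sketch of the kind of raw request a client like this wraps, hitting Splunk's standard REST search endpoint directly with LWP::UserAgent. The host, token and query below are invented, and this is plain Splunk REST usage rather than my library's interface.

    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON::PP qw(decode_json);

    # Hypothetical Splunk host; the management API usually listens on 8089.
    my $base = 'https://splunk.example.com:8089';
    my $ua   = LWP::UserAgent->new;

    # A oneshot search job returns its results in the response body directly,
    # so there's no job id to poll for.
    my $res = $ua->post(
        "$base/services/search/jobs",
        Authorization => "Bearer $ENV{SPLUNK_TOKEN}",
        Content       => [
            search      => 'search index=main error | head 10',
            exec_mode   => 'oneshot',
            output_mode => 'json',
        ],
    );
    die $res->status_line unless $res->is_success;

    my $results = decode_json($res->decoded_content);
    print scalar @{ $results->{results} }, " events\n";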

More refactoring adventures

So, lucky for me, a client decided to pay me to refactor some of their very old code. Refactoring can be fun, but if you have a 20-year-old business-critical codebase where the team has forgotten or never knew how stuff works, and it absolutely has to not break, then you have some challenges and quite a lot of potential for loss of face.

This particular job was to refactor a single large, excessively complex subroutine into something that was testable and that a relatively naive programmer could reason about. And there were no tests.

tl;dr: this blog post is relatively involved, but scroll down to the bottom to see some neat abuse of git as a data analysis assistant.

Not so new shiny things: LWP::Protocol::PSGI

Too long/didn't read: don't use Test::WWW::Mechanize::PSGI or similar; use LWP::Protocol::PSGI. I wish I'd found out about this module when it was first released in 2011 rather than in 2015, which is why this article is here. LWP::Protocol::PSGI is great work and needs to be more widely known.

Since the web framework movement and PSGI moved us away from monolithic web applications deeply embedded into the web server, now around a decade ago, a common way of testing things has been with modules like Catalyst::Test, Test::WWW::Mechanize::PSGI, Dancer2::Test and similar. These work by mocking the HTTP request and response layers, with the mocked layers talking to an object or a subroutine reference rather than a web server. Which is fine, except when you want to run the same test suite both against your application code and against a real web server. You end up having to do fairly nasty things to decide whether you are dealing with a mocked HTTP request/response layer talking to a coderef, or a real one talking to a web server.

Fortunately LWP::Protocol::PSGI eliminates this entire problem.
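Here's a minimal sketch of how it works (the app and URL are invented): you register your PSGI application with LWP::Protocol::PSGI and every LWP request in the process is routed to it in-process, so the test code itself looks exactly like code talking to a real server.

    use strict;
    use warnings;
    use Test::More;
    use LWP::UserAgent;
    use LWP::Protocol::PSGI;

    # Any PSGI application; a real test would load the app from your .psgi file,
    # e.g. my $app = Plack::Util::load_psgi('app.psgi');
    my $app = sub {
        my $env = shift;
        return [ 200, [ 'Content-Type' => 'text/plain' ], ['hello'] ];
    };

    # While $guard is in scope, LWP requests are handed to $app in-process
    # instead of going out over the network.
    my $guard = LWP::Protocol::PSGI->register($app);

    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get('http://www.example.com/');

    ok $res->is_success, 'request succeeded';
    is $res->content, 'hello', 'got the expected body';

    done_testing;

register also accepts a host option to limit interception to a particular hostname, so the same suite can be pointed at the in-process app or at a deployed server without touching the tests themselves.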

PPIx::Refactor

(Moved from reddit.com/r/perl now that blogs.perl.org seems to be behaving itself).

Over the past year or so I've had to deal with PPI for parsing and rewriting Perl code a handful of times. Refactoring scripts are generally one-shot, single-purpose affairs. Like any data-mangling activity they can get messy: because they're disposable, architecture tends to be an afterthought at best. You can kind of tell this if you go and read Perl::Critic policies. They're plagued by boilerplate and repetitive code, and that's for code that is extensively reused. The problem multiplies when you're writing throwaway code against PPI. In this post I'm going to show off PPIx::Refactor, a minimal interface to contain a small but annoying part of the mess.
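To show the kind of boilerplate I mean, here's a bare PPI sketch of what every one of these throwaway scripts ends up repeating: parse, find, mutate, save. The file name and the sub being renamed are invented, and this is plain PPI rather than PPIx::Refactor's interface.

    use strict;
    use warnings;
    use PPI;

    # Invented example: rename a sub called old_name to new_name in one file.
    my $file = 'lib/Foo.pm';
    my $doc  = PPI::Document->new($file) or die "Can't parse $file";

    # find() takes a wanted sub that gets the top-level document and the
    # current element, and returns true for matches.
    my $subs = $doc->find(sub {
        my ( $top, $elem ) = @_;
        return $elem->isa('PPI::Statement::Sub')
            && defined $elem->name
            && $elem->name eq 'old_name';
    }) || [];

    for my $sub (@$subs) {
        # Grab the bareword holding the sub's name and rewrite it in place;
        # a robust script would check its assumptions rather than grep blindly.
        my ($word) = grep { $_->isa('PPI::Token::Word') && $_->content eq 'old_name' }
                     $sub->children;
        $word->set_content('new_name') if $word;
    }

    $doc->save($file) or die "Can't save $file";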