A happy mod_perl story

10:26 < kd> me:  "how do I integrate the REST API with our psgi/mod_perl abomination?"
10:26 < kd> colleague:  "aah I've been wanting to do that for a while, do you really need it?"
10:26 < kd> me:  "yeah it would be good"
10:26 < kd> [waits two days]
10:26 < kd> colleague:  "here, have a code review"
10:29 < kd> meanwhile it gave me the opportunity to prototype an important thing via a semi-abomination that would never be acceptable in production.  So 1. We got a feature we really needed, and ensures the longevity of the platform, and 2.  I got to do the important thing of  getting my thing wrong on the first try.

A ghetto test library

I've been working on some slightly complicated code that accumulated a myriad of bad design decisions over the course of a decade, with a total absence of a test suite. I knew I needed some code reusability in my tests, but I had no idea exactly how much without making a quick start. Meanwhile my brain was filled with ancient code from which I was expurgating zombies, and I was struggling to understand multi-vendor interactions, so I wanted some bare minimum reusability to make engineering failure conditions easier.
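To give a flavour of what "bare minimum reusability" means here, this is a minimal sketch in the same spirit, not the library itself; the package name, the fixture and the vendor class are all invented for illustration:

    package Ghetto::TestLib;    # hypothetical name
    use strict;
    use warnings;
    use Exporter 'import';
    our @EXPORT_OK = qw(fixture with_vendor_failure);

    # Canned fixtures, so each test doesn't rebuild the same ancient state.
    my %fixtures = (
        stale_order => { id => 42, status => 'pending', vendor => 'acme' },
    );

    sub fixture {
        my ($name) = @_;
        die "unknown fixture '$name'" unless exists $fixtures{$name};
        return { %{ $fixtures{$name} } };    # fresh shallow copy per test
    }

    # Run a block of test code with the (hypothetical) vendor client
    # rigged to fail: the "engineering failure conditions" part.
    sub with_vendor_failure {
        my ($code) = @_;
        no warnings 'redefine';
        local *Vendor::Client::call = sub { die "vendor timeout\n" };
        return $code->();
    }

    1;

The point is less the code than the shape: one place for canned data, one place for rigging failures, and nothing more until an actual test demands it.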

Writing a simple Splunk API client

I needed to do some work with Splunk, the log analytics and monitoring toolkit. Specifically, I needed to combine disparate logs from different systems to solve a problem that had been interfering with our systems for a long time. I had a look around for existing code on the CPAN, but it was either embedded in bigger things that I didn't want to deal with, or it didn't work for me for maintenance reasons. So I decided to write my own. The library I wrote is available here in draft form.

Now, because this was written by me for debugging purposes, I don't think it should be a CPAN module - not without a lot of hardening - but I wanted to share it anyway, as a handy way of doing API integration; a condensed sketch of the approach follows.
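This sketch talks to Splunk's standard REST endpoints on the management port (8089); the host is a placeholder and this is not the draft library itself:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON::PP qw(decode_json);

    my $base = 'https://splunk.example.com:8089';    # placeholder host
    my $ua   = LWP::UserAgent->new;

    # Log in and pull the session key out of the XML response (a real
    # client would use an XML parser; a regex keeps the sketch short).
    my $login = $ua->post( "$base/services/auth/login",
        [ username => $ENV{SPLUNK_USER}, password => $ENV{SPLUNK_PASS} ] );
    die 'login failed: ' . $login->status_line unless $login->is_success;
    my ($key) = $login->decoded_content =~ m{<sessionKey>([^<]+)</sessionKey>};
    $ua->default_header( Authorization => "Splunk $key" );

    # A oneshot search blocks until done and returns results directly,
    # which suits debugging-style use.
    my $job = $ua->post( "$base/services/search/jobs",
        [   search      => 'search index=main error | head 10',
            exec_mode   => 'oneshot',
            output_mode => 'json',
        ] );
    die 'search failed: ' . $job->status_line unless $job->is_success;
    my $results = decode_json( $job->decoded_content );
    printf "%d events\n", scalar @{ $results->{results} };

That is essentially the whole protocol: one login, one search, one decode. Everything else a client library adds is convenience and error handling.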

More refactoring adventures

Luckily for me, a client decided to pay me to refactor some of their very old code. Refactoring can be fun, but if you have a 20-year-old business-critical codebase, where the team has forgotten or never knew how stuff works, and it absolutely must not break, then you have some challenges and quite a lot of potential for loss of face.

This particular job was to refactor a single large, excessively complex subroutine into something that was testable and that a relatively naive programmer could reason about. And there were no tests.
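One way in under those constraints (not necessarily the way this particular job went) is a characterization test: pin the current behaviour down before touching anything. A sketch, with the subroutine and its cases invented for illustration:

    use strict;
    use warnings;
    use Test::More;

    # Stand-in for the legacy subroutine being refactored (hypothetical).
    sub big_scary_sub {
        my ( $customer, $event ) = @_;
        return "invoice:$event:$customer";
    }

    # Inputs sampled from real traffic; expected outputs captured by
    # running the unrefactored code, not written from the spec.
    my %golden = (
        '42,renewal' => 'invoice:renewal:42',
        '7,signup'   => 'invoice:signup:7',
    );

    for my $input ( sort keys %golden ) {
        is( big_scary_sub( split /,/, $input ),
            $golden{$input},
            "behaviour preserved for [$input]" );
    }

    done_testing;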

tl;dr: this blog post is relatively involved, but scroll down to the bottom to see some neat abuse of git as a data analysis assistant.

Not so new shiny things: LWP::Protocol::PSGI

Too long/didn't read: don't use Test::WWW::Mechanize::PSGI or similar; use LWP::Protocol::PSGI. I wish I'd found out about this module when it was first released in 2011 rather than in 2015, which is why this article is here. LWP::Protocol::PSGI is great work and deserves to be more widely known.

Since the web framework movement and PSGI moved us away from monolithic web applications deeply embedded into the web server, now around a decade ago, a common way of testing things has been with modules like Catalyst::Test, Test::WWW::Mechanize::PSGI, Dancer2::Test and similar. These work by mocking the HTTP request and response layers, with the mocked layers talking to an object or a subroutine reference rather than a web server. Which is fine, except suppose you want to run the same test suite against both your application code and your live web server. You end up doing fairly nasty things to decide whether you are dealing with a mocked HTTP request/response layer talking to a coderef or a real one talking to a web server, as sketched below.
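The result is test code shaped like this, where a mode switch (here an invented LIVE_SERVER flag) leaks into the whole suite:

    use strict;
    use warnings;
    use WWW::Mechanize;
    use Test::WWW::Mechanize::PSGI;

    my $app = sub { [ 200, [ 'Content-Type' => 'text/plain' ], ['ok'] ] };

    # Mocked layer talking to a coderef, or a real client talking over
    # the network, chosen by a hypothetical environment flag.
    my $mech = $ENV{LIVE_SERVER}
        ? WWW::Mechanize->new
        : Test::WWW::Mechanize::PSGI->new( app => $app );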

Fortunately LWP::Protocol::PSGI eliminates this entire problem.
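A minimal sketch of it in action, using the module's documented register interface; the app and the host are placeholders:

    use strict;
    use warnings;
    use Test::More;
    use LWP::UserAgent;
    use LWP::Protocol::PSGI;

    # A trivial PSGI app standing in for the real application.
    my $app = sub { [ 200, [ 'Content-Type' => 'text/plain' ], ['hello'] ] };

    # Register the app; while the guard lives, any LWP request to this
    # host is routed to the coderef instead of the network.
    my $guard = LWP::Protocol::PSGI->register( $app, host => 'www.example.com' );

    my $res = LWP::UserAgent->new->get('http://www.example.com/');
    is $res->content, 'hello', 'plain LWP talks to the PSGI app';

    done_testing;

Drop the registration and exactly the same test suite talks to a real server, which is the whole point: the tests no longer care which side of the HTTP boundary they are on.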