May 2011 Archives

A/B Testing and Local Maxima (again)

Given the large amount of commentary on my first A/B testing post, I was surprised that there was none for my "A/B testing and local maxima" post. That's when I discovered that if you start a post and publish it several days later without updating the date, it keeps the start date and thus gets buried in the queue. So if you haven't subscribed to my Atom feed, you may not have seen it on the front page. This is just a heads up that I took the time to answer some questions people had.

A/B Testing and Local Maxima

I recently posted about Failing your way to success with A/B testing and I was pleasantly surprised about the very active feedback.

One issue, raised by Steffan Mueller (a colleague of mine and a brilliant guy), is a common concern:

What I am most afraid of in the kind of A/B testing I've seen is that it seems very prone to getting stuck in local minima. In that image, imagine you're in the place marked with the red arrow. Taking small steps and adjusting for the test result each time, you're never going to reach the optimal place (blue arrow) since once you're in the local minimum, all small steps you can take actually produce a worse result.
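The concern above can be illustrated with a toy hill-climb. This is a hypothetical sketch, not anything from the original post: the "conversion rates" are invented numbers with two peaks, and `hill_climb` is a made-up function that only ever takes the single best neighboring step, just as a naive A/B testing process only ever accepts small winning changes.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented "conversion rate" for each of nine design variants.
# There is a small peak at index 2 and a taller peak at index 7.
my @conversion = ( 1, 2, 3, 2, 1, 4, 7, 9, 6 );

# Greedy hill-climbing: from position $i, move to the better
# neighbor; stop when no neighbor improves on where we are.
sub hill_climb {
    my ($i) = @_;
    while (1) {
        my ($best) = sort { $conversion[$b] <=> $conversion[$a] }
            grep { $_ >= 0 && $_ <= $#conversion } ( $i - 1, $i + 1 );
        last if $conversion[$best] <= $conversion[$i];
        $i = $best;
    }
    return $i;
}

print hill_climb(1), "\n";    # stops at index 2: the local maximum
print hill_climb(5), "\n";    # reaches index 7: the global maximum
```

Starting near the small peak, every single step from the top makes things worse, so the climb halts there and never discovers the taller peak; only a starting point already on the right slope finds the global maximum.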

Aristotle raised a similar concern and I think it's important enough to address up front.

Failing your way to success with A/B Testing

So I've read and very carefully considered the words of Douglas Bowman on his decision to leave Google, much of which came down to Google's extensive use of A/B testing. Google engineers clearly valued function over form. He complains about a situation in which 41 shades of blue were tested for optimum response (see page 3) and makes it clear that he felt constrained. No matter what he did, or how simply he did it, he had to "prove" that it worked. He simply wanted to build something beautiful and not worry about extreme micromanagement of every aspect.

I can understand that. Many times I want to just "make stuff happen", secure in the knowledge that I know what I'm doing. I look at how foolishly some code is implemented and I think "what our customers really want is X!" And A/B testing shows me, over and over again, that I'm wrong. It's humbling. It's humiliating. And if you really care about being as effective as you can be, you'll love it, but only if your pride can handle being proved wrong on a regular basis.

Perl's Built-In OO

There appear to be a few experienced Perl devs who still insist that there's nothing wrong with Perl's built-in object-oriented programming. You bless a reference and do everything manually. Stealing from Twitter user @kaesees, I can now explain their opinion in two words: Stockholm Syndrome.
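For readers who haven't suffered through it, here is a minimal sketch of what "bless a reference and do everything manually" looks like. The `Point` class and its fields are my own invented example; the point is that every accessor is hand-written boilerplate, with no declarative attribute support at all.

```perl
package Point;
use strict;
use warnings;

# Perl's built-in OO: a constructor is just a sub that blesses
# a plain hash reference into the class.
sub new {
    my ( $class, %args ) = @_;
    my $self = { x => $args{x}, y => $args{y} };
    return bless $self, $class;
}

# Every accessor is written by hand: read the hash slot, or set
# it if an argument was passed. No type checks unless you add them.
sub x {
    my $self = shift;
    $self->{x} = shift if @_;
    return $self->{x};
}

sub y {
    my $self = shift;
    $self->{y} = shift if @_;
    return $self->{y};
}

package main;

my $point = Point->new( x => 3, y => 4 );
print $point->x, "\n";    # 3
$point->x(7);
print $point->x, "\n";    # 7
```

Multiply those accessors by every attribute of every class, add in manual argument validation and inheritance bookkeeping via `@ISA`, and the "nothing wrong with it" position gets harder to defend.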

About Ovid

Freelance Perl/Testing/Agile consultant and trainer. If you have a problem with Perl, we will solve it for you. And don't forget to buy my book!