Enter the Matrix ... with PDL

We interrupt this k-Means broadcast to bring you an important message about threading (the PDL kind, not the Perl kind - darn those overloaded terms!)

The Assignment

Take two vectors, x and y, and create a matrix C from a function of the values of each element pair, such that

  C(i,j) = f( x(i), y(j) )
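
A minimal sketch in the pdl shell of one way to build C, broadcasting each vector over a dummy dimension (the multiplication here is just a stand-in for your f):

pdl> $x = sequence(3) + 1                 # [1 2 3]
pdl> $y = sequence(4) + 1                 # [1 2 3 4]
pdl> $C = $x->dummy(1) * $y->dummy(0)     # C(i,j) = x(i) * y(j)
pdl> p $C->dims
3 4

dummy inserts a size-1 dimension, which PDL's threading silently expands to match the other operand, giving the full 3x4 grid of element pairs.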

k-Means k-Means-er

As we take another lap around the k-Means race track, the Porsche 914-2 and Volvo 142E are still neck and neck. This time we'll try a straightforward normalisation that linearly scales all values to the range [0,1] and see if they still end up in the same cluster.
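
That scaling is easy to sketch in the pdl shell. A minimal sketch, assuming the $cars piddle loaded as in the summary below (32 cars along the first dimension, 11 variables along the second); minimum and maximum collapse the car dimension, and dummy lines the results back up for broadcasting:

pdl> $min = $cars->minimum->dummy(0)            # per-variable minima, dims [1,11]
pdl> $max = $cars->maximum->dummy(0)            # per-variable maxima
pdl> $scaled = ($cars - $min) / ($max - $min)   # every variable now spans [0,1]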

Possibly the best k-means clustering ... in the world!

Short post this time because I got nerd-sniped looking at the data. The fun part is that you quickly move from thinking about how to get your results to trying to work out what they mean.

Forget why I started down this road. Right now, we are seeking the answer to Lewis Carroll's famous question: How is a Porsche 914-2 like a Volvo 142E? (well, that's what it was in the first draft) A quick summary for those who have just joined us.

pdl> use PDL::IO::CSV ':all'
pdl> $cars = rcsv2D('mtcars_001.csv', [1 .. 11], {text2bad => 1, header => 1, debug => 1});  # columns 1-11 skip the model names; unparseable text becomes BAD
pdl> p $cars->dims
32 11

You got 32 11, right?

PDL: Episode VI - A New Book

The title is clickbait. I ran short of time this week and am ~~recycling~~^Wconsolidating comments, replies and thoughts. Let's talk about Books!

I would love a new PDL Book. One that's completely different from the original to maximize the surface of engagement with a new audience. As a "sequel", it would have the advantage of being able to refer the reader to the first book for longer explanations and to jump right into how to solve significant problems. brian d foy has just finished his Mojolicious book, so I bet he's got loads of free time on his hands. (although I remember him in the middle of writing it in 2018, so you may have to wait a bit)

k-means: a brief interlude into Data Wrangling

When last we saw our heroes, what they thought was the brink of success turned out to be the precipice of hasty interpretation, and now they are dangling for dear life from the branch of normalization! How's that for a tortured metaphor?

If you use raw values for your k-means clustering, dimensions with large values or large ranges can swamp smaller dimensions and skew your clusters. Normalization tries to bring everything into the same range, usually [0,1], although how you choose to transform the ranges is also significant. There is not always one best way to do it, so, as usual, get familiar with your dataset and use your judgement.
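
To make that "choices are significant" point concrete, here is a z-score standardisation sketch, an alternative to min-max scaling that centres each variable at 0 with unit spread instead of squeezing it into [0,1] (same $cars layout as above; average collapses the car dimension):

pdl> $mu = $cars->average->dummy(0)                       # per-variable mean
pdl> $sd = sqrt( (($cars - $mu)**2)->average )->dummy(0)  # per-variable standard deviation
pdl> $z  = ($cars - $mu) / $sd                            # each variable: mean 0, sd 1

A single outlier stretches a min-max range and squashes everything else into a corner, while z-scores are less sensitive to it, which is exactly the kind of judgement call your dataset should drive.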