In the last week, as a result of YAPC::EU::2015, the main website of the "Portuguese Perl Mongers" (a free translation of the association's name) has been rewritten by Nuno "smash" Carvalho in Perl 6, to generate (static) web content. It still has its quirks, but it is mostly working.

In order not to just sit waiting on Nuno's work, I've been writing some artic…

CPAN PRC: July is Data::Dump

For July, the CPAN Pull Request Challenge assigned me Data::Dump. More valuable than the pull request itself, this assignment was a great way to get to know Data::Dump, as I had never seen it before.

For the PR, I tried to read user complaints, and one suggestion was to keep UTF-8 intact when dumping to a stream that is UTF-8 aware. I created a basic PR to illustrate a possible solution. Unfortunately, as in most of the previous months, I have not received any feedback yet. But the pull request is there, ready for comments or to be merged.
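Since Data::Dump may not be installed everywhere, a minimal sketch with core Data::Dumper (which escapes wide characters in much the same way) shows what the complaint is about:

```perl
use strict;
use warnings;
use Data::Dumper;

# Illustration of the complaint, using core Data::Dumper as a stand-in:
# wide characters are dumped as \x{...} escapes even when the output
# stream could print them directly.
$Data::Dumper::Useqq  = 1;   # double-quoted strings, non-ASCII escaped
$Data::Dumper::Terse  = 1;   # no '$VAR1 =' prefix
$Data::Dumper::Indent = 0;

my $smile   = "\x{263A}";       # WHITE SMILING FACE, a single character
my $escaped = Dumper($smile);   # escaped form, e.g. "\x{263a}"
print "$escaped\n";
```

Data::Dump renders such strings with similar escapes; the idea behind the PR was to leave those characters intact when the target handle already has a UTF-8 layer.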

CPAN PR-Challenge: June Report

I know this is becoming a lame excuse, but for lack of time, the patch I managed to prepare this month is, again, small. It is mostly some extra tests:

But better a few than nothing...

CPAN PR-Challenge: May Report

Somehow I missed posting my April report. I don't remember well what the PR was. It was something basic, as I lacked the time for real work.

This month I prepared a pull request that removes HTML from result entries obtained with WWW::Wikipedia. Now I am waiting to see if it gets merged. It seems I have no luck getting my PRs merged...
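As a rough illustration of the kind of cleanup involved (the actual PR against WWW::Wikipedia may be implemented quite differently), a naive tag-stripping pass over an entry's text could look like this:

```perl
use strict;
use warnings;

# Hypothetical helper, for illustration only: strip leftover HTML tags
# from a Wikipedia entry's text with a naive regex. Fine for simple
# markup; a real implementation would use a proper HTML parser.
sub strip_html {
    my ($text) = @_;
    $text =~ s/<[^>]+>//g;   # remove anything that looks like a tag
    return $text;
}

my $raw   = 'Perl is a <b>programming language</b>.<ref>...</ref>';
my $clean = strip_html($raw);
print "$clean\n";
```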

Term::ReadLine::Gnu Unicode Hell

I must be doing something wrong. Surely. I have a REST service. I get data using HTTP::Tiny and use JSON::Tiny to decode it.

If I try to print to the Term::ReadLine::Gnu OUT filehandle, I get double-encoded strings, like this:


If I try to binmode it to UTF-8, things get worse, with triple-encoded strings:


I resolved it by decoding (Encode::decode) from UTF-8 and using Perl's internal character representation. It worked.
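A minimal sketch of that decoding step, with a hard-coded byte string standing in for the HTTP::Tiny response body:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# The JSON body arrives as raw UTF-8 bytes; decoding it once into
# Perl's internal character representation is what stops the double
# encoding on output.
my $bytes = encode('UTF-8', "ol\x{e9}");   # raw UTF-8 bytes ("olé" is 4 bytes)
my $chars = decode('UTF-8', $bytes);       # 3 Perl characters: "olé"

printf "%d bytes, %d characters\n", length($bytes), length($chars);
```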

The problem was when I tried to feed a pre-defined input line to Term::Rea…