I'm an avid horror fan, big surprise! I like horror movies of all types: zombie, slasher, B-grade, C-grade, gore, and even old-school thrillers like Hitchcock's. To this day, whenever a Friday the 13th comes around, I host an event at my house for my friends and me. We run a marathon of as many movies as we can. Sometimes we make it through two, sometimes five. It's not always easy to stay up! :)
These past few months have been pretty difficult and busy. At 10pm I got a message from a friend in Canada: "happy Friday the 13th!" - Shit! I missed one! Well, no matter. The question is: how do I make sure not to miss one again? This is something I'd considered more than once but was always too lazy to actually try out.
So, I jotted down this code to accomplish it. I'd be happy to hear of better ways you can come up with:
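My own take on the task, using only the core Time::Local module (this is a sketch of one way to do it, not necessarily the author's original code):

```perl
use strict;
use warnings;
use Time::Local qw(timegm);

# Return all Friday-the-13th dates (as "YYYY-MM-DD" strings) falling
# within the next $months months, starting from the given epoch time.
sub upcoming_friday_13ths {
    my ($from_epoch, $months) = @_;
    my (undef, undef, undef, undef, $mon, $year) = gmtime($from_epoch);
    my @found;
    for my $i (0 .. $months - 1) {
        my $m = ($mon + $i) % 12;
        my $y = $year + int(($mon + $i) / 12);
        my $epoch = timegm(0, 0, 0, 13, $m, $y);
        next if $epoch < $from_epoch;       # skip a 13th that's already past
        my $wday = (gmtime($epoch))[6];     # 0 == Sunday, 5 == Friday
        push @found, sprintf('%04d-%02d-%02d', $y + 1900, $m + 1, 13)
            if $wday == 5;
    }
    return @found;
}

print "$_\n" for upcoming_friday_13ths(time, 12);
```

Feed the output to cron, a calendar, or a notification script of your choice and you'll never miss a marathon again.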
Noirin Plunkett will give a talk at YAPC::Europe 2012 described as
You might not feel like it yet, but if you're considering this talk, you have what it takes to make it in open source!
If you just want to help and don't know where to start, we'll talk about how to find a project, and all the prerequisites. It's easy to think you don't have anything to offer--but you're wrong!
If you've found a project and don't know how to start contributing, this talk will give you tips on getting in the door, finding things you can help with, and learning how to work with the existing contributors.
And, if you already have a project, you'll still learn a lot from this talk--how to attract new contributors, and how to keep them once they've shown an interest!
With years of experience in community development, and as editor of the popular Open Advice book, Noirin knows what it takes to get involved from both sides. Come and learn from her experience, and avoid her mistakes!
1. JSON::Color, YAML::Color. I'm loving colors on the terminal. Only recently (yes, recently!) did I find out that terminals can support 256 colors. After Log::Any::App and Data::Dump::Color, I'm looking forward to seeing JSON::Color and YAML::Color, and might even write them myself someday if needed.
Since there are already several syntax highlighters targeting HTML (like those Javascript-based ones; search.cpan.org is even using one), a feasible approach is to convert the HTML output of syntax-highlighted JSON/YAML/others to ANSI. Sort of the reverse of HTML::FromANSI.
Yet another approach might be to dump the raw output of a color-supporting text-based browser like links2 (complete with its ANSI escape codes), though I can't seem to activate its color support in my terminal ATM.
2. Compress::smaz. An interface to the smaz compression library. Might be useful someday, since I deal with language text a lot.
3. RSS::CPAN::ReverseDepends. In a similar spirit to a previous idea, RSS::Mention::CPAN::Module, this module could generate feeds to let CPAN authors know when there is a new module on CPAN using one (or any) of their modules. This should be easily done using the MetaCPAN API.
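The MetaCPAN part of idea 3 can be sketched with core modules alone. The endpoint path and the shape of the response (a "data" array of release records) are my reading of the MetaCPAN API, so treat them as assumptions to verify:

```perl
use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(decode_json);

# Build the MetaCPAN reverse-dependencies endpoint for a distribution.
sub revdeps_url {
    my ($dist) = @_;
    return "https://fastapi.metacpan.org/v1/reverse_dependencies/dist/$dist";
}

# Fetch and print the names of distributions that depend on $dist.
# (Network call; the "data"/"distribution" keys are assumed from the API docs.)
sub print_revdeps {
    my ($dist) = @_;
    my $res = HTTP::Tiny->new->get( revdeps_url($dist) );
    die "fetch failed: $res->{status}\n" unless $res->{success};
    my $data = decode_json( $res->{content} );
    print "$_->{distribution}\n" for @{ $data->{data} };
}

# Example: print_revdeps('Log-Any');
```

From there, turning the list into an RSS feed and diffing it against the previous run is the easy part.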
I've just proposed the following talk at YAPC::Europe:
Networks are great in theory, but have some well-known limitations that you should bear in mind when using them. Come find out what they are so that you don't make the same mistakes.
I'm updating Geo::GeoNames, and my fork is on Github. I have a bunch of geo-coordinates for which I want to know the country or city names, so I can easily remember why I have those coordinates and tell them apart from the other coordinates.
Since its last release, GeoNames has changed the web service address and started requiring a username (which you can get for free). Once you respond to the confirmation email and enable free web service access on your account, you are ready to go. I've updated the module accordingly.
I've also responded to all of the RT tickets and Google Code issues. I think I've fixed everything that I can fix.
The tests pass, at least for me. Many of the current failures come from the outdated interface from before GeoNames required usernames. There are some tests that I've marked TODO until I care about those features. If you'd like something that isn't supported, send a pull request.
I don't know if the current module handles all of the search types. I'm not particularly interested in making it definitive as long as it has the stuff I need. When other people need unsupported types, they can add those too. If there is someone who is motivated to maintain Geo::GeoNames long term, I can make co-maintainers.
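For reference, the underlying web service behind the module can be exercised directly with core modules. The endpoint name and parameters below follow GeoNames' documented findNearbyPlaceName JSON service; the username is a placeholder you'd replace with your own free account:

```perl
use strict;
use warnings;
use HTTP::Tiny;

# Build a findNearbyPlaceName request URL for the GeoNames JSON API.
# A (free) username is required since GeoNames moved to api.geonames.org.
sub geonames_url {
    my (%arg) = @_;
    return sprintf(
        'http://api.geonames.org/findNearbyPlaceNameJSON?lat=%s&lng=%s&username=%s',
        $arg{lat}, $arg{lng}, $arg{username},
    );
}

# Example: reverse-geocode one coordinate pair.
my $url = geonames_url( lat => 50.11, lng => 8.68, username => 'your_username' );
print "$url\n";
# my $res = HTTP::Tiny->new->get($url);  # response JSON has a "geonames" array
```

Geo::GeoNames wraps exactly this kind of request, so a sanity check against the raw service is a handy way to separate module bugs from service-side changes.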
If you arrive on or before Sunday, there is an informal meetup on Sunday evening at 18:30 in the Café Extrablatt near the Bockenheim Campus of the University.
It would be good for us to have a rough estimate of how many people plan to attend. So if you plan on coming, please head over to the wiki and add your name.
I've taken a slightly different approach to this review: previously I'd spend a lot of time learning about each module, trying to fix any bugs I came across, etc., so the first version published was fairly polished. As an experiment, this time I've done a much quicker first pass and essentially published a first draft. I started off with 6 modules, but two more were pointed out by readers.
I wanted one of these modules for another review I'm working on, where I was curious which modules were being pulled in by some of the modules under review.
Any comments on the review, and identifying missing modules, appreciated as ever.
The issue identified affects the latest version of the software.
The report includes a test script illustrating the problem
... which is self-contained
... and is minimal
... and conforms to the Test Anything Protocol.
The report includes an explanation.
The report includes a patch
... which is well-written
... and obeys coding conventions.
Obviously all reports of genuine bugs are welcome, but that doesn't mean all bug reports are equal; some are better than others. Getting 10 out of 10 is a lot of work, but even 6 out of 10 is better than average in my experience.
Why is writing a good bug report important? Because the better the bug report, the faster the issue can be solved; and the faster the issue is solved, the sooner you can use the fixed software.
So let's look at those criteria a little more closely.
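As an illustration of criteria 2 through 4, here is what a self-contained, minimal test script conforming to the Test Anything Protocol might look like. The module and behaviour under test are hypothetical; a real report would `use` the actual module rather than the inlined stand-in:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical report: Some::Module's trim() should strip leading and
# trailing whitespace, including tabs. To keep this example runnable on
# its own, we inline a stand-in for the function under test.
sub trim { my ($s) = @_; $s =~ s/^\s+//; $s =~ s/\s+$//; return $s }

is trim('  foo'), 'foo', 'leading whitespace is stripped';
is trim("foo\t"), 'foo', 'trailing whitespace (including tabs) is stripped';
```

Because the script declares a plan and emits TAP, the maintainer can drop it straight into the distribution's t/ directory and run it with prove.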
After some email exchange with David and Joel, I decided to create a Google Group, The Quantified Onion, for discussing the use of Perl in science. This will hopefully work alongside Joel's Perl 4 Science site.
We're happy to announce that Curtis Poe will be attending YAPC::Europe 2012. If you don't know him, he is a programmer, prolific blogger about living and working abroad, and the author of "Beginning Perl".
The unofficial subtitle of his book is 'Get a job, hippy!' It's focused very heavily on real-world skills that you'll need in the marketplace. It's based not only on Curtis' 13 years of experience with Perl, but also on surveys that show what companies are actually using.
For a short time, you can read it for free at http://ofps.oreilly.com/titles/9781118013847/
We are very happy to have him visit Frankfurt and give a keynote.
Recently leprevost posted a comment on requiring better software in science. It's a good plea, read it! In response I started a comment, which got a bit too long, so here it is:
A few of us have been talking about how to increase the use of Perl in the scientific community. While our efforts are in their infancy, we hope to fight this very problem. Some new sites are
This week I received from a researcher friend a paper from the scientific journal Bioinformatics. The journal is very famous among bioinformaticians and describes itself as 'The leading journal in its field'. I'm not going to specify who the author is or what the name of the paper is, because I don't think it's necessary. Simply put, the paper is about software written in Perl designed to increase the performance of database searching using protein mass spectra.
I became interested, so I downloaded the .rar. What I found was 7 .pl scripts, 2 .xml files, and a readme telling me the right order of execution and the correct inputs and outputs. My first impression was not good: there was no organization at all, the documentation was limited to comments above functions, and, most important, the authors did not provide test files. The scripts were a little messy too: the 'warnings' and 'strict' pragmas were not used, and some scripts weren't even indented.
My point here is that I think scientific journals should have better peer-review systems for papers based on software, especially those between biology and informatics. I know it can be hard for someone who doesn't code in Perl to see some of these points, but good practices are a common thing across all languages. In scientific projects, if bad laboratory work can be refused, why can't bad coding be refused as well? This kind of project became a publication in a scientific journal, and with this software I can't be sure that it works exactly as the authors described, and I can't know whether I will get reproducible results.
Good practices in software development are as important as good laboratory practices; everyone should respect them.
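The baseline being asked for costs almost nothing. A minimal, well-behaved script skeleton (the function and its test are purely illustrative, not from the paper in question) might look like:

```perl
#!/usr/bin/env perl
use strict;      # catch undeclared variables and other compile-time errors
use warnings;    # warn about dubious constructs at run time

# A small, documented function that a test file can exercise.
# Converts a mass in daltons to kilodaltons (illustrative only).
sub da_to_kda { my ($da) = @_; return $da / 1000 }

print da_to_kda(56_300), " kDa\n" unless caller;

# An accompanying t/basic.t would then contain something like:
#   use Test::More tests => 1;
#   is da_to_kda(1000), 1, 'one kilodalton';
```

Two pragmas, indentation, and one test file per script would already put a submission well ahead of what this paper shipped.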
The Call for Papers is now closed! We'd like to thank everybody who submitted a talk proposal. Without you and your talks an event like YAPC::Europe would not be possible.
We received more than 80 talk proposals. And we've already accepted more than 50 talks. You can find the list at http://act.yapc.eu/ye2012/talks.
Today we start our final voting round and it ends on Friday.
It will take a few days to create the schedule. It will be announced here...
There are two main differences from the regular set of events. First, this time we are ready to offer a Partner programme. Second, there will be no attendees dinner; instead, we propose a river cruise.
Edit: MojoCMS has been renamed to Galileo and released to CPAN. Enjoy!
Over the holiday break, I decided to have a little fun learning some things about the web. I usually get my Perl fix through science, but several upcoming projects might have some web involvement; so I thought I should brush up. The following are some reflections on that experience.
The task I set myself was to make a micro CMS (currently named MojoCMS, though I'm not sure I like that), leaving most of the heavy lifting to freely available Javascript libraries. I didn't think I would be especially good at writing the actual interface; rather, the routing and functionality would be my task. In a strange way, the result was a kind of nostalgic Perl experience: Perl was the glue in my project again, not the main/only language involved.
Noirin Plunkett will give a talk at YAPC::Europe 2012 described as
The Apache Software Foundation provides legal oversight and technical infrastructure for about 150 projects with more than three thousand committers. There's much more than just a web server: Apache projects run the gamut from distributed computing and big data processing to end-user office software.
But there is one common Open Source paradigm that's absent at Apache: the benevolent dictator. Meritocracy, community oversight and consensus have been the hallmarks of our projects for more than a decade, and they seem to be working. It's not the only way, but it is another way.
If you've got some code you'd like to see grow and flourish, be useful to an ever-wider audience, or attract new contributors, why not come to this talk, and learn a little more about this alternative model? Perl's motto says "there's more than one way to do it"; at Apache, we say "try it and see"!
Yet another effort to add Log::Any logging to a popular module, this time LWP. Presenting: Net::HTTP::Methods::patch::log_request. Inside, it's currently just a wrapper for the format_request() method in Net::HTTP::Methods, which is where the raw HTTP request is formed.
To use it:
use Net::HTTP::Methods::patch::log_request;
# now all your LWP HTTP requests are logged
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $response = $ua->get('...');
Sample usage and output (using LWP via WWW::Mechanize; you can see how Google redirects to the country-specific domain here):
If you're ever in the position of needing to convert a large (in our case around 32000 revisions) Subversion repository to a Git repository, you should know
git svn is agonizingly slow, and falls over at regular intervals (apparently memory problems; the symptom is "git svn died with signal 13")
The KDE version of "svn2git" is written in C++, requires that Qt 4 be installed (so you have "qmake"), and requires a local copy of the SVN repository. If you have that, it is apparently blindingly fast; in my case I didn't have direct access to download the whole SVN repository.
So for my conversion I used the "svn2git" Ruby gem. It works very well indeed, and is screamingly fast. My current import has been running for 12 hours and is about halfway done. This may seem slow, but the "git svn" version, even wrapped with a Perl program to restart it when it fell over, ran for over three weeks before hitting a situation where it couldn't continue, with the import incomplete.
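The restart wrapper mentioned above can be sketched with core Perl alone. The retry limit, pause, and command are my assumptions, not the author's actual program:

```perl
use strict;
use warnings;

# Run @cmd repeatedly until it exits cleanly or $max_tries is reached.
# Returns 1 on success, 0 otherwise.
sub run_until_clean {
    my ($max_tries, @cmd) = @_;
    for my $try (1 .. $max_tries) {
        return 1 if system(@cmd) == 0;      # clean exit: we're done
        my $sig = $? & 127;                  # signal that killed the child, if any
        warn "command died (signal $sig), restart attempt $try\n";
        sleep 1;                             # brief pause before retrying
    }
    return 0;
}

# e.g. run_until_clean(1000, qw(git svn fetch)) or die "gave up\n";
```

With "git svn fetch" being resumable, simply re-invoking it after each crash is enough to inch the import forward.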
I'll update the post once the import completes - or if it doesn't!