Sometimes a good time means relaxing with CPAN, reading a POD of something and trying to learn it. At least for me.
Last weekend I treated myself to playing with KiokuDB and Dancer. I'll write on KiokuDB later, this post is on Dancer.
Dancer is a Plack-aware web application framework written by Alexis Sukrieh. It has a simple built-in templating system but also supports Template::Toolkit (a must for me), offers route handlers (named matching, regex matching, wildcard matching), makes rapid prototyping easy, and supports multiple configurations, allowing separate settings for different environments and stages of the software.
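To give a flavour of the route handlers, here is a minimal sketch; the routes and handlers are invented for illustration, not taken from my app:

use Dancer;

# named matching: /hello/world puts "world" into params->{name}
get '/hello/:name' => sub {
    return "Hi there, " . params->{name};
};

# wildcard matching: the pieces matched by * come back via splat
get '/files/*.*' => sub {
    my ($name, $extension) = splat;
    return "You asked for $name.$extension";
};

dance;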
After playing with it for a rather short time, I already implemented CRUD. The code comes out clean (which I love), understandable and simple. Pure joy. You should totally check it out!
The only problem is that I want to write an interface which will be hosted on someone else's computer, which only supports CGI. I have no idea how to run it as CGI. Any ideas?
Update: Alexis Sukrieh wrote on how to do this here. Next post will reflect this!
So I hear that the next Perl QA Hackathon will be in Vienna. What should we accomplish? The following is not complete as I was so focused on the areas I was working on that I really didn't follow the other areas.
In the first QA Hackathon, in Oslo, we nailed down a bunch of issues we'd like to see in TAP. We clarified part of the spec and started work on tests for TAP itself. (And convinced Nadim Khemir to release App::Asciio).
The second QA Hackathon, in Birmingham, UK, saw the creation of nested TAP (i.e., subtests).
The third one, I think, should result in either better parsing of nested TAP or shoe-horning structured diagnostics into TAP.
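For anyone who hasn't played with it yet, nested TAP is what Test::More's subtest() emits; a minimal sketch (the test names are made up):

use Test::More tests => 1;

# Each subtest runs its own plan and emits an indented TAP stream,
# nested inside the enclosing one.
subtest 'user creation' => sub {
    plan tests => 2;
    ok 1, 'object built';
    ok 1, 'record saved';
};

That indented inner stream is exactly what better parsing (and any structured diagnostics bolted onto it) would have to cope with.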
I've been upgrading some systems at work recently, using local::lib so I can test everything first without worrying about rolling back being a pain (which it could be with our current build system).
We've had most of our CPAN modules installed for several years and haven't had the need or time to upgrade them. But with the advent of Catalyst 5.8 using Moose, and other great projects we'd like to get involved in such as Gitalist, I bit the bullet and started installing into my local::lib so I could run our tests against it, but then I hit a few problems....
Now remember when I said we had old modules? I'm talking about some that haven't been upgraded in 4+ years (they only get upgraded when a newer version is set as a prerequisite).
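For the curious, a minimal sketch of how local::lib points a test run at an alternate install tree; the path is illustrative and not from our actual setup:

# Load the alternate install tree ahead of the system modules.
# Rolling back is just a matter of deleting that directory.
use local::lib '/home/me/perl5-upgrade';

use Catalyst;
print "Testing against Catalyst $Catalyst::VERSION\n";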
Our current project at my job is based solely on a rather sophisticated object model, and we've chosen MooseX::Declare as one of our main helpers. We are very happy with the compile-time argument checks; I guess they save us a lot of debugging time.
Unfortunately, our app's performance was far from satisfying, and profiling showed it was MXD's fault. We also found an enlightening benchmark.
The problem was solved by implementing a tool that allows us to keep two variants of our codebase: development and release. The release variant is prepared by translating method declarations with signatures into plain Perl subs that parse their own parameters. This job is done by undeclare.pl, which uses PPI to parse and transform the .pm files.
Some of our domain-specific tests now run more than 10 times faster.
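To make the idea concrete, here is a rough before/after sketch; the class and the exact shape of the generated code are invented for illustration, and the real undeclare.pl output will differ:

# lib-dev/BankAccount.pm -- the development variant (MooseX::Declare)
use MooseX::Declare;

class BankAccount {
    has balance => (is => 'rw', isa => 'Num', default => 0);

    method deposit (Num $amount) {
        $self->balance( $self->balance + $amount );
    }
}

# lib-release/BankAccount.pm -- roughly what the translated variant
# looks like: a plain Perl sub that unpacks and checks its own arguments
package BankAccount;
use Moose;
use Scalar::Util ();

has balance => (is => 'rw', isa => 'Num', default => 0);

sub deposit {
    my ($self, $amount) = @_;
    Scalar::Util::looks_like_number($amount)
        or die "deposit: amount must be a number";
    $self->balance( $self->balance + $amount );
}

1;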
Last month CPAN Testers was finally given a deadline to complete the move away from SMTP to HTTP submission of reports, or, perhaps more accurately, to move away from the perl.org servers, as the volume of report submissions has been affecting support of other services in the Perl ecosystem. The deadline is 1st March 2010, which leaves just under 2 months for us to move to the CPAN Testers 2.0 infrastructure. Not very long.
A couple of years ago, on my use.perl blog, I posted a simplistic way to run individual Test::Class methods. Unfortunately, it littered the test class with $ENV{TEST_METHOD} assignments. I should have fixed that. Here's a better version:
With that, if your cursor is inside a test method, typing ",tm" (without the quotes, and assuming that a comma is your leader) will run just that test method.
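For context, the mapping works by setting TEST_METHOD for a single prove run, and Test::Class then only runs the methods whose names match it. A minimal sketch of a test class it could drive (the package and method names are made up):

package My::Test::User;
use base 'Test::Class';
use Test::More;

# With TEST_METHOD set in the environment, Test::Class runs only the
# matching method(s), e.g.
#   TEST_METHOD=test_load prove -lv t/lib/My/Test/User.pm
sub test_load : Test(1) {
    ok 1, 'user loads';
}

sub test_save : Test(1) {
    ok 1, 'user saves';
}

__PACKAGE__->runtests unless caller;

1;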
Two weeks ago I began writing what was to be the first Cantella::Data::Tabular renderer class. The idea was to render a Cantella::Data::Tabular::Table object into a plain-text table. I failed miserably. Within 5 seconds I got wrapped up in issues of formatting and how to render data and data types. Eventually I resorted to #moose for ideas, and rafl pointed me towards some code of his. We agreed that it would be mutually beneficial if I got to use his code, as long as I separated it from its original package and packaged it separately so it could stand alone.
I didn't play drums to make money. I played drums because I loved them. [...] It was a conscious moment in my life when I said the rest of things were getting in the way. I didn't do it to become rich and famous, I did it because it was the love of my life.
I never said it. [...] Why did I rob banks? Because I enjoyed it. I loved it. I was more alive when I was inside a bank, robbing it, than at any other time in my life.
Money moves job indexes.
But where the following for a language is only cash-driven, that following may become as broad as the ocean, but it will never be more than a millimeter deep. And it will dry up the moment the cash disappears.
Working on a book about something is one of the best ways to discover issues with interfaces. When I have to explain some process and think about all the ways that things can go wrong so I can make the explanation as bulletproof as possible, all sorts of issues pop up.
I'm working on the distributions part of the next edition of Intermediate Perl. All of the h2xs stuff is being shoved into a couple of paragraphs and everything else now uses Module::Starter. I'm not a particular fan of the module (I have my own: Distribution::Cooker), but I do think it's the best thing to use if you don't know what you want to use.
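For reference, here is a minimal sketch of driving Module::Starter programmatically, which is roughly what the module-starter command does for you; the module name, author and email are placeholders, and the parameter list is abridged, so check the POD for the full set:

use Module::Starter;

# Create a skeleton distribution for a hypothetical module.
Module::Starter->create_distro(
    modules => ['My::Module'],
    author  => 'A. N. Author',
    email   => 'author@example.com',
    builder => 'ExtUtils::MakeMaker',
    license => 'perl',
);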
Back in September 2008, I had a list of the most popular testing modules on the CPAN. I created this list because I was writing Test::Most and needed to know which modules they were. Tonight, after hearing that the next Perl-QA hackathon is in Vienna, I thought about what I might want to accomplish and decided to see whether the most popular modules had changed. In 2008, out of 373 test modules, we had the following top 20:
It was only when registering at use.perl.org that I started using RSS feeds. Currently I have roughly 25 or so subscriptions, which is lightweight, I know. Most are specific Perl programmers (mst, nothingmuch, elliotjs, dagolden, drolsky), some are aggregators, a few are friends' personal/political blogs, etc.
Work-wise, the best feed I've had was the "use.perl.org generated feeds", which shows me on a daily basis which modules/distros were released. It helps me by:
Keeping me up to date with modules I use.
Keeping me up to date with modules I want to use.
Keeping me up to date with modules I intend to use (once they go stable).
Keeping me up to date with modules I don't even know that could benefit my work.
I'm heading off to Chennai, India next week and I would love to see some Perl people while I am there, but Madras.pm looks a little dead. Would any mongers in or around Chennai like to meet up for a technical meeting / dinner / tea?
I'm on a new team at the BBC and I was rather curious to note that I couldn't run the Test::Class tests simply by doing prove -lv t/lib/Path/To/Test.pm. A bit of research revealed the culprit to be FindBin, a module I've never been terribly happy with. It seems we have configuration information located relative to the $FindBin::Bin variable that module sets.
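To illustrate the problem (the test path is just an example):

use FindBin;

# $FindBin::Bin is the directory of the running script ($0), so under
#   prove -lv t/lib/Path/To/Test.pm
# it points at t/lib/Path/To rather than the project root, and anything
# located relative to it gets looked up in the wrong place.
print "$FindBin::Bin\n";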
Since Cantella::JobQueue has been put on hold after the discovery of Beanstalkd, I've moved on to other projects. The first on my list was to add some new views to a Reaction-based application. Specifically, I needed a couple of ways to export tabular data. I needed to display it as inline XHTML tables, export it as PDF sheets and allow for spreadsheet downloads. Because none of the current offerings gave me the flexibility I wanted, I decided to write my own.
Well, I'm not a native Perl speaker. ;-)
I'm not even a native English speaker. Obviously. This can be hard sometimes.
Although I would like to qualify my English as "quite o.k.", I encounter problems reading documentation and code.
My problem (as I analyze it) is that I'm often unsure whether I have understood something correctly. And if I clearly haven't, I find it hard to identify whether that is due to complicated content, peculiarly used words, or technical terms I don't know.
For the non-native speakers: Do you have similar problems?
For the native speakers: imagine reading when you are really tired. I think that feels more or less the same as reading technical writing in a foreign language.
And then all these little jokes. I really enjoy them, but they sometimes make it really hard to get the main content.
Another question for the native speakers: How do you feel when you are reading non-natively spoken (baby) English? Are you annoyed at the many mistakes?
And then: how does this compare to computing languages? Most of us are non-native speakers of those. At least I hope so. :-)
Yeah, Perl 6 is going to allow us to do very interesting things. Given this code:
use v6;
subset Filename of Str where { $_ ~~ :f };
sub foo (Filename $name) {
    say "Houston, we have a filename: $name";
}
my Filename $foo = $*EXECUTABLE_NAME;
foo($foo);
foo($*EXECUTABLE_NAME);
foo('no_such_file');
We get this output:
Houston, we have a filename: /Users/ovid/bin/perl6
Houston, we have a filename: /Users/ovid/bin/perl6
Constraint type check failed for parameter '$name'
in Main (file src/gen_setting.pm, line 324)
Obviously the error message needs some work (and I would love to be able to generate custom error messages for subsets), but proper use of subsets is going to save a lot of code and prevent a lot of errors.
I do back up... but, as I'm sure is true for most other people, it's probably not enough.
As a Mac user I've been relying on Time Machine for a while now, and it is fantastic.
For my Unix-based stuff I then use git or svn and rely on my laptop having a checkout as well as a copy being on the server (and the server does some backups as well).
But the main issue is that it's all lots of bits of "it will probably be OK" and "I'm fine if it's a hardware failure". None of that covers the "what if my house burned down" type of scenario.
So anyway, with the scene set... I started thinking I needed offsite backup for everything. A service (or services) where I can upload everything to and then not worry about it.
Enter Dackup: I can upload to Amazon's S3, to Rackspace's CloudFiles, to another server over SSH, or to another disk on the filesystem.
I'm already using Amazon's S3 for some stuff, so this seemed the easiest.
As part of the Perl track committee, I gave some guidance on what you might propose in "How to tell your Perl story at OSCON". There are many interesting things you could talk about, even if you don't think your own work is interesting.
I mentioned several categories your talk might fall into:
The Perl language itself, and how it works
Using Perl features to provide programmer capabilities
Using other technologies from Perl
The process of using Perl to get work done
A 5 minute lightning talk
Every proposal is judged by a committee of subject-matter experts as well as by the entire OSCON program committee and the organizers. Take the time to let OSCON know why your talk is the best, and remember that some of the people judging it might not know who you are or why your really cool thing is important. You're also competing with other Perl talks, so we need a reason to pick yours over the many others for the limited space each track gets.