Last day for YAPC::EU::2010 talk proposals

Today is the last day to submit proposals for YAPC::EU::2010. I already have my travel tickets and a hotel room booked. Now I need to find a way to lower my costs in Pisa, for example by having a talk accepted and getting free entry.

To maximize my chances I proposed two talks, on two very different perspectives of Perl:

SQL scripts? Just do it.

The problem is that I have various .sql files containing database creation statements like:

DROP TABLE IF EXISTS foo;
CREATE TABLE foo ...
...
...;

It would be wonderful to execute these SQL files using DBI's do method, but do can only execute one statement at a time. OK, simple enough, just split the file on semicolons, right? Well, often you have stored routines that change the delimiter and contain semicolons mid-statement.
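For illustration, here is a minimal sketch of that naive split-on-semicolons approach (the DSN, credentials, and file name are made up); as noted above, it falls apart as soon as a stored routine changes the delimiter:

use DBI;

my ($user, $pass) = ('someuser', 'somepass');    # hypothetical credentials
my $dbh = DBI->connect('dbi:mysql:test', $user, $pass, { RaiseError => 1 });

# Slurp the whole file and run each chunk through do() -- deliberately naive.
my $sql = do { local $/; open my $fh, '<', 'schema.sql' or die $!; <$fh> };
for my $statement (grep { /\S/ } split /;/, $sql) {
    $dbh->do($statement);
}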

San Francisco Perl Mongers Twitter Feed

@sfperlmongers

Coding styles that make me do a double take

One of the worst programming habits I know of is assuming that everything will work out: you just perform the command or function call, and it has to work. In other words, don't bother checking for error conditions or propagating them, because they are not going to happen anyway. Right?

Wrong.

But the code works. Mostly. Except when an error condition occurs. And it will, eventually.

In some instances, such code is a glorified shell script. I have ranted about that before. But catching error conditions from external programs properly can be trickier than with in-Perl functions and modules, especially if you have no control over what the external program does, but the original programmer did.
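As a trivial illustration (the command here is hypothetical), even a plain system() call deserves a check of its exit status:

my @cmd = ('rsync', '-a', '/data/', '/backup/');
system(@cmd) == 0
    or die "system(@cmd) failed with exit status ", $? >> 8, "\n";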

Even with in-Perl functions and modules, you might run into, ehrm, interesting usage.

Usually, the root of the problem is that the original programmer did not foresee that some of his code could grow into something else.

While it may have been a good idea to create an SQL query wrapper to hide parts of what DBI does for…

Magic: too powerful?

I'm currently at the stage of reading the Python docs and considering what I'll have to keep track of as regards C type objects, and how to do so. The Python ctypes code is delightfully well documented, particularly this nugget. The Ruby and JS projects unfortunately seem to be entirely lacking in any documentation of their internal functioning, but I think I've got enough to work on.

Perl Hispano Articles: Basic Tutorials: How to send email in Perl - Perl Hispano

To do this, we need to know the absolute path to sendmail (in the example, /usr/sbin/sendmail); by redirecting the mail we want to send to sendmail itself through standard input, we can deliver the message we want.

The procedure is very simple; let's look at an example, and afterwards I will explain exactly what is being done:
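The example itself is not part of this excerpt; a minimal sketch of the technique described (the addresses are placeholders) looks something like this:

# Open a pipe to sendmail and feed the message to it on standard input.
open my $sendmail, '|-', '/usr/sbin/sendmail -t'
    or die "Cannot run sendmail: $!";
print $sendmail "To: destination\@example.com\n";
print $sendmail "From: sender\@example.com\n";
print $sendmail "Subject: Hello from Perl\n";
print $sendmail "\n";
print $sendmail "Message body goes here.\n";
close $sendmail or die "sendmail failed: $!";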

To Depend Or Not To Depend

When starting an application, I always strive to reduce dependencies as much as possible, especially if I know there is an alternative that does not cost me too much time.

It turns out that I chose one of the worst possible examples. Check out the comments below and, more importantly, check out the lib documentation!

This happens when I use lib, for example to add ../lib to @INC. For my own stuff I usually head towards Path::Class:

use Path::Class qw( file );
use lib file(__FILE__)->parent()->parent()->subdir('lib')->stringify();

but when I want to reduce dependencies I revert to File::Spec::Functions (which has been in core since perl 5.5.4 and is now part of PathTools):
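The code that followed is cut off in this excerpt; a core-only equivalent of the Path::Class line above (a sketch, also leaning on the core File::Basename, not necessarily the exact incantation from the post) would be:

use File::Basename        qw( dirname );
use File::Spec::Functions qw( catdir updir );
use lib catdir( dirname(__FILE__), updir(), 'lib' );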

The best Perl developer in the industry for May 2010?

Not sure I agree with the list but figured it was worth posting anyway.

[Image: Programming Perl, 2nd Edition cover]

Best Web Design Agencies, the independent authority on the best web designing companies, has announced the best Perl developer in the industry for May 2010. Numerous applicants have contacted the independent authority seeking to be ranked. An independent research team was assigned to review each applicant. After extensive review each of the firms on the list were determined to be the best at PHP development.

An extensive evaluation process performed by an experienced research team with bestwebdesignagencies.com is what separates the best development firms on this list from the rest of the industry. The team also contacts at least three clients of every vendor in order to obtain an evaluation from their perspective. Questions are asked of clients such as, "How easy is the web application to use for various users?" or "Is the code simplified to allow for developers to add onto and document code easily for future development?".

The best PHP developer for May 2010 is:

"block sequence entries are not allowed in this context"

I have a couple hundred thousand YAML files that I created with plain, ol' YAML.pm. I created these just before I realized how slow YAML.pm is, but I have the files already. Processing them with YAML.pm is really, really slow, so I wanted to see how much faster the other YAML modules might be.

My problem, which Google doesn't know much about (yet), is that the faster parsers complain "block sequence entries are not allowed in this context" when I try to parse these files, while YAML.pm (really old, but pure Perl) and YAML::Syck (deprecated, uses YAML 1.0) don't. YAML::XS is based on libyaml, an implementation that actually conforms to the YAML 1.1 specification. I didn't create the files with YAML::XS, though, so I have lines like:

cpplast: -
cppminus: -

When in YAML 1.1 those lines should be something like:
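The excerpt ends there, but presumably the fix is to make the bare dash an explicit scalar so it cannot be read as a block sequence entry, for example by quoting it:

cpplast: '-'
cppminus: '-'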

How I setup my Debian server to run perl 5.13.1 with perlbrew

In May I decided to stop using Debian's perl 5.10.1 in favor of a 5.13.1 built with perlbrew, with CPAN modules built with cpanminus. It's been great; here's how I did it.

Before switching over I ignored Debian's perl library packages, and installed everything with cpanm into /usr/local. But since I wanted to use the new post-5.10 features of Perl I thought I might as well replace all of it and use a newer perl.

What I did:
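The detailed steps are not in this excerpt, but the rough shape of such a setup is something like the following (a sketch, not necessarily the author's exact commands or paths):

$ cpan App::perlbrew                    # install perlbrew (one of several ways)
$ perlbrew init                         # set up ~/perl5/perlbrew
$ perlbrew install perl-5.13.1          # build the new perl
$ perlbrew switch perl-5.13.1           # make it the default perl for your shell
$ curl -L http://cpanmin.us | perl - App::cpanminus    # bootstrap cpanm against the new perl
$ cpanm Plack                           # modules now install into the perlbrew perl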

Late to the party, but I brought bottles

I'd been putting off looking at Dist::Zilla, but I thought I'd have a go with it this week.

And as usual I found some yaks to shave on my travels.

First off I had to port over Module::Install::GithubMeta to Dist::Zilla::Plugin::GithubMeta.

Flush with the success of this I decided to try converting some other of my Module::Install extensions, starting with Module::Install::AssertOS.

After much gnashing of teeth and hair-pulling I've eventually released Dist::Zilla::Plugin::AssertOS to CPAN.

Closely followed by Dist::Zilla::Plugin::NoAutomatedTesting.

Both of these demonstrate a slightly dubious mechanism for manipulating the dzil-generated Makefile.PL. You might want Dist::Zilla::Plugin::MakeMaker::Awesome instead.

Better dump output in debugger

I was reading Steven Haryanto's blog post about filtering Data::Dump output and immediately installed Data::Dump. If you use the debugger as often as I do, I recommend you install that module along with DB::Pluggable. That's when the delightful magic starts.

Hudson for Everybody Else


Joe McMahon will be talking about Hudson on June 22nd at 7pm, at the office of Mother Jones.

"Continuous integration" sounds like a great idea: you automatically run your build on every checkin, so you know very soon after you've committed if you make a mistake or checked in a bug. However, like any
properly lazy Perl programmer, the last thing you want to do is write more code; you want to take advantage of work that's already done: that's Hudson.

Hudson is a continuous integration server that's easy to set up, customize, and use. Unlike other similar Java-based tools, Hudson is language-agnostic, even well-integrated with other tools.For Perl
projects, with a little assistance from CPAN, it's easy to set up and use for Perl projects. We'll look at a sample setup that covers most of the bases, including a few pointers on making it easy to build and track things
under Hudson, and finish up with a look at using Hudson to get your team involved - even enjoying - continuous integration.

Announcement posted via App::PM::Announce

RSVP at Meetup - http://www.meetup.com/San-Francisco-Perl-Mongers/calendar/13762958/

OTRS on Plack

Lately I've been playing around with the OTRS ticketing system on one of my servers. OTRS is written in Perl, and is typically run as a CGI, mod_perl, or FastCGI app. Usually I'd map it as a FastCGI app in Nginx and start the two FastCGI servers via an init.d script (one for the customer helpdesk and another for the management console).

But this time I wanted to give Plack a try.

I'm new to Plack and PSGI, but I can't wait to start moving my apps to this badass middleware framework.

Plack comes with two CGI wrapper modules, Plack::App::WrapCGI and Plack::App::CGIBin. WrapCGI seems like the most appropriate one for my needs. Apparently it even precompiles CGIs using CGI::Compile, for added performance.

So I wrote a little app.psgi in the /opt/otrs directory:
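The app.psgi itself is not in this excerpt; a minimal sketch of what it might look like with Plack::App::WrapCGI (the OTRS script paths and mount points are assumptions on my part) is:

use Plack::Builder;
use Plack::App::WrapCGI;

builder {
    # Agent (management) interface
    mount '/otrs/index.pl' => Plack::App::WrapCGI->new(
        script => '/opt/otrs/bin/cgi-bin/index.pl',
    )->to_app;

    # Customer helpdesk interface
    mount '/otrs/customer.pl' => Plack::App::WrapCGI->new(
        script => '/opt/otrs/bin/cgi-bin/customer.pl',
    )->to_app;
};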

Grepping exact values

In my view, Perl's grep syntax doesn't encourage the faster approach when you are searching for exact values. Let me explain. While it is not that common, what do you do when you want to grep an array for an exact string?

Perl makes using pattern matching in grep easy:

@selected = grep { /^foo$/ } @list;

You can argue about its usefulness, but trust me, every once in a while such strange constructs become really useful. Unfortunately, the expression above is not efficient. If you replace it with

@selected = grep { $_ eq "foo" } @list;

you get code that is roughly twice as fast (check the bottom of the post for Benchmark results).

Following the idea of split, which accepts a string and uses it for the splitting, I think grep could accept a string as well (at least in the grep EXPR, LIST form):

@selected = grep "foo", @list;

What kind of inconveniences would this raise?
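As for the Benchmark results mentioned above, the original script is not included in this excerpt; a comparison along these lines (a sketch with made-up data, using the core Benchmark module) would look something like this:

use Benchmark qw( cmpthese );

my @list = ('aaa' .. 'zzz', 'foo');

cmpthese( -2, {
    regex => sub { my @hits = grep { /^foo$/ }     @list },
    eq    => sub { my @hits = grep { $_ eq 'foo' } @list },
});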

Check out Devel::NYTProf 4.00

Tim Bunce's Devel::NYTProf has a bunch of improvements in version 4.00, which was released yesterday.

The compatibility problem with Devel::Declare-based code like Method::Signatures::Simple that I previously blogged about has been fixed. It can now profile inside string evals, and more.

Update: Tim Bunce now has a posting about NYTProf 4.00 on his blog.

Custom dumping *in* Data::Dump

After blogging about my small patch to Data::Dump, I contacted Gisle Aas. He was quite responsive and has now come up with a new release (1.16) of Data::Dump containing the cool new filter feature. My previous example, converted to use the new feature, becomes:

$ perl -MData::Dump=dumpf -MDateTime -e'dumpf(DateTime->now, sub { my ($ctx, $oref) = @_; return unless $ctx->class eq "DateTime"; {dump=>qq([$oref])} })'
[2010-06-09T12:22:58]


This filter mechanism is quite generic and allows you to do other tricks like switching classes, adding comments, and ignoring/hiding hash keys. The interface is also pleasant to work with, although starting with this release the "no OO interface" motto should perhaps be changed to "just a little bit of OO interface" :-)

Aren't we glad that stable and established modules like this are still actively maintained and getting new features?

Thanks, Gisle!

Effective Perl Programming

I now have my copy of the new edition of Effective Perl Programming. I'm halfway through another book, have a wedding on the 20th, and have some other major personal (good) news, so I won't be able to review it right away. However, a few quick notes:

Pros

  • Covers 5.12
  • A full chapter on Unicode
  • Good discussion of testing

Cons

  • Recommends SQLite for a test database (I used to like this idea myself).
  • Needs better coverage of Test::Class.

(Can you tell the one chapter I've already read?)

I don't think the testing points are that serious; testing needs a far more in-depth treatment than a book like this can possibly give it. I also noticed it had nice things to say about Moose, but didn't use it in examples. I think this was the right decision, but I wish it weren't.

And among the Web sites it recommends, blogs.perl.org is listed but use.perl.org is not. Rather interesting.

In any event, those were just a few things I noticed while flipping through the book. I'll have a better description later. For now, suffice it to say that it looks very, very good. The few places I've taken the time to read are well thought out and show lots of experience.

On Moose default variables

If you read my last post, I was wondering why Moose doesn't accept array or hash references as default values to be cloned, and instead requires a code reference that creates a new array/hash.

I decided to benchmark the two approaches. The results were... surprising:

Benchmark: timing 500000 iterations of clone, retfunc...
clone: 34 wallclock secs (32.35 usr + 0.12 sys = 32.47 CPU) @ 15398.83/s (n=500000)
retfunc: 4 wallclock secs ( 3.75 usr + 0.02 sys = 3.77 CPU) @ 132625.99/s (n=500000)
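The benchmark script itself is not in this excerpt; something along the following lines (using Storable's dclone for the cloning side, which is an assumption on my part) produces numbers of this shape:

use Benchmark qw( timethese );
use Storable  qw( dclone );

my $default = [ 1 .. 10 ];

timethese( 500_000, {
    clone   => sub { my $copy = dclone($default) },        # deep-copy a shared reference
    retfunc => sub { my $copy = sub { [ 1 .. 10 ] }->() },  # call a coderef that builds a fresh one
});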

With these results, I think the correct behavior is the one Moose already has: complaining about the default value being an array reference and suggesting an alternative:

References are not allowed as default values, you must wrap the default of 'a' in a CODE reference (ex: sub { [] } and not [])
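In attribute terms that means writing the default like this (a generic example, not taken from the post):

package Stack;
use Moose;

has 'items' => (
    is      => 'rw',
    isa     => 'ArrayRef',
    default => sub { [] },    # a fresh array reference for every instance
);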

Decoding HMMs in Perl 6

I've wanted to write a reasonably useful Perl 6 module for a while, and I finally realised that the Viterbi algorithm would be a pretty simple place to start (hopefully it'll be useful as well).

There's a module with the same name on CPAN, but I've based the code on a Common Lisp version I wrote for school a while back. At the moment the module is pretty basic, and doesn't support assigning probabilities to unseen data, for example. The module is available on GitHub.

A more advanced version will support computing the sum of the log-probabilities rather than the product, smoothing and unobserved data, and the option of mixing a role into domain objects so that they can be passed directly to the decode method from the client code.
