At $work we're nearing the end of a long upgrade process that has included moving from Debian etch to lenny, from svk to git, from Perl 5.8 to 5.10, from Apache 1.3 to Apache 2 (and then to Plack), and moving to recent versions of DBIC and Catalyst (which had previously been pegged at a pre-Moose state).
For code deployment we had been packaging all Perl modules as Debian packages and maintaining our own CPAN repository, but we wanted a more 'Perl-ish' solution that avoided the various pitfalls of mixing OS packaging with Perl.
THE STRUCTURE
The system that we implemented consists of the following parts:
The community at #perl6 on freenode derived much pleasure from studying and discussing the entries to Masak's Perl 6 Coding Contest 2010. Five problems, five contestants, 26 entries (go figure), all using Rakudo. Since Niecza is the new Perl 6 kid on the block, it seemed an interesting idea to try it out on the submissions.
Alas, it was very disappointing to see error after error from Niecza. The results are included below for closer analysis. During testing it began to look as if Niecza would not run any entries, but fortunately it managed with p3-fox and p3-matthias. (They produced wrong answers, but hey, running code!)
The scripts that died with "Parse failed" never ran - the parser did not consider the source code compliant with the Perl 6 STD grammar. There are lessons here - maybe Rakudo is too lenient in places, or maybe STD needs to be tweaked. It is also possible that the scripts could be patched to comply with STD; masak++ has already suggested a project to investigate.
These days RESTful web services are all the buzz, even though many who think they know what that means don't have a clue. But if you want to build a truly RESTful web app to go with your RESTful web services, you'll quickly learn that web browsers only support GET and POST in HTML forms. Plack::Middleware::HttpMethodTunnel to the rescue after the jump.
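The tunneling idea itself is simple enough to sketch in raw PSGI, with no Plack dependency: a POST that carries an override parameter is rewritten so the inner app sees PUT or DELETE. This is only an illustration of the technique; the parameter name `x-tunneled-method` is an assumption here, not necessarily what Plack::Middleware::HttpMethodTunnel actually uses.

```perl
use strict;
use warnings;

# A PSGI app that simply reports the method it saw.
my $app = sub {
    my $env = shift;
    return [ 200, [ 'Content-Type' => 'text/plain' ],
             [ $env->{REQUEST_METHOD} ] ];
};

# Minimal middleware sketch: if a POST carries an x-tunneled-method
# parameter (assumed name), rewrite REQUEST_METHOD before the inner
# app runs. Only PUT and DELETE are allowed through the tunnel.
my $tunnel = sub {
    my $inner = shift;
    return sub {
        my $env = shift;
        if ( $env->{REQUEST_METHOD} eq 'POST'
            && ( $env->{QUERY_STRING} // '' )
               =~ /(?:^|&)x-tunneled-method=(PUT|DELETE)\b/ ) {
            $env->{REQUEST_METHOD} = $1;   # tunnel the real verb through
        }
        return $inner->($env);
    };
};

my $wrapped = $tunnel->($app);
my $res = $wrapped->( { REQUEST_METHOD => 'POST',
                        QUERY_STRING   => 'x-tunneled-method=PUT' } );
print $res->[2][0], "\n";   # the inner app sees PUT
```

An HTML form would then POST to `/resource?x-tunneled-method=PUT` while the application code stays cleanly verb-oriented.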
I’ve been setting up my Windows 7 laptop to have as complete a Perl development environment as possible. I don’t have a choice about the operating system in this case, but I am used to the Unix-style environment, which I’d like to maintain.
One option is to use Cygwin and just dive in there for everything. However, I do want some of my code to work natively under Windows, so there is still a need to run Perl tests at a Windows console. I might as well develop there too, if I can. I've ended up with the following steps, which are in places bodgy hacks but at least show that the principle works:
I configured them, played around with them, injected example values, etc.
Besides that, I made some progress with SpamAssassin. After accessing the repo (using git-svn) I found an OK-looking version 3.3.2 waiting there that passes all tests on Perl 5.12.3. I volunteered on spamassassin-dev and IRC to make the corresponding release.
Different languages are suited to different things. We know this well and we remind people about it from time to time. For example, Erlang is a great language for running a massively concurrent system, but the language itself is rather slow, so it would be awful for performance-intensive procedural work.
By a similar token, there are some things for which Perl is simply not the first choice. If you want to write a rich, cross-platform GUI application, there are a number of choices, but Perl's not your best one by a long shot. I once timidly suggested that a GUI toolkit be pushed into the core to resolve this, and I was shot down immediately for excellent technical reasons which nonetheless relegate us to a non-contender in this arena.
I have released the first non-developer's version of Marpa::XS: 0.002000. Marpa::XS is the XS-accelerated version of Marpa. Marpa is a parser generator -- it parses from any grammar that you can write in BNF. If that grammar is one of the kinds in practical use (yacc, LALR, recursive descent, LR(k), LL(k), regular expressions, etc.), Marpa and Marpa::XS parse from it in linear (O(n)) time.

The parts of Marpa::XS that were rewritten in C run 100 times faster than the original pure Perl. Typical applications run approximately 10 times faster. There is a new, simplified interface for reading input. The documentation has been improved. Error reporting and tracing of grammars with right recursion have been simplified and are much improved.
Day 2 was full of yak shaving. It was not very convincing at first sight, but in the end I think it was OK and needed to be done. The annoyance of that work is the primary reason it wasn't already done, and also what made it worth doing at the hackathon.
What if a CPAN mirror wasn't stationary? How would we track it? Would people make Google Earth maps to show its path? Could we dynamically adjust capacity without additional servers? Would we have to get FAA approval?
Ricardo and I started talking about CPAN mirror data, mostly because Ricardo was relaying messages to me from Adam Kennedy on IRC. That turned into a discussion of what work we could give Adam as part of this hackathon, since he's not here. Steffen Müller quickly wanted in on the fun of proposing work we could heap onto Adam.
Some time ago I searched CPAN for a consistent hashing algorithm and found nothing. So I wrote one. Here it is - the distribution can be found on CPAN, and the development tree is on github.

The algorithm is well known and shared by most consistent hashing implementations. At Last.fm they have written a C library called ketama. The algorithm I have implemented is the same, but written in pure Perl with only one dependency - String::CRC32.
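The core of the ketama-style approach can be sketched briefly: each server is hashed to many points on a 32-bit ring, and a key is assigned to the first server point clockwise from the key's own hash, so removing a server only remaps that server's keys. This sketch is an illustration of the algorithm, not the module's actual API, and it uses the core Digest::MD5 in place of String::CRC32 to stay dependency-free.

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# First 4 bytes of an MD5 digest as an unsigned 32-bit ring position.
sub ring_point { unpack 'N', md5( $_[0] ) }

# Place each server at many "virtual" points to smooth the distribution.
sub build_ring {
    my @servers = @_;
    my %ring;
    for my $s (@servers) {
        $ring{ ring_point("$s-$_") } = $s for 1 .. 100;
    }
    return \%ring;
}

# A key belongs to the first server point at or past the key's hash,
# wrapping around to the lowest point if none is found.
sub server_for {
    my ( $ring, $key ) = @_;
    my $h      = ring_point($key);
    my @points = sort { $a <=> $b } keys %$ring;
    for my $point (@points) {
        return $ring->{$point} if $point >= $h;
    }
    return $ring->{ $points[0] };    # wrap around the ring
}

my $ring = build_ring(qw(cache1 cache2 cache3));
print server_for( $ring, 'some-user-id' ), "\n";
```

The same key always lands on the same server, and adding or dropping a server node disturbs only the keys that fell in that node's arcs of the ring.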
Here are some scenarios in which I have used it in a mass-hosting environment project:
Despite being on the other side of the world, I hope to remain at least somewhat as productive as if I were at the actual event itself (even if that is not entirely achievable).
I've made it something of a tradition for myself that I use time spent in airports to work on modules and algorithms that make it easier to write programs that deal elegantly with being offline, or being in nasty network environments.
This year I've been revisiting one of my biggest successes in this area, LWP::Online.
I originally wrote LWP::Online in response to the rise of "captured" (my term) wireless networks in airports. Captured wireless networks allow you to connect without a password, and appear to have working internet, but all requests result in a redirect to some login, advertising or paywall page.
If your feature request is complicated or I'm just too dumb to understand it (more common than I like to admit), and you don't have a patch or tests to send me, there's a good chance it will be ignored. My bandwidth is limited enough even with patches (which I have to read, evaluate, apply, test, and release).
I'm in Amsterdam now, the morning of the first day of the 2011 Perl QA Hackathon, and about to be pulled in by the black hole of Perl developers that is Booking.com. Curiously, this is the first trip in a long time that I haven't used Booking.com for my reservations.
Every year that I come to one of these workshops, my list of things to work on gets longer and longer.
Last year I sketched out, with David Golden's help, a possible new CPAN client. We held off last year to see which direction Miyagawa would go in, but this year I don't think our tools really overlap. His tool, cpanm, appeals to people who want the tool not only to make most of the decisions but also to hide the details. It started off as zero-conf, and you can still use it like that, but it is starting to get some knobs and dials.
I've been lurking on the various ARM Linux boards for a while, so when I saw a hackable ARM appliance for $40, I grabbed it.
Honestly, setting up (Arch) Linux on it was the easiest Linux setup ever. [By cracky, back in my day we didn't even have ZEROS in our binary!] After installing a few Perl packages from the PlugApps repo, I decided, "Why not try to kill this wee Linux box and its THUMBDRIVE filesystem with PostgreSQL and some heavy DBI action?!" Surprisingly, it would actually probably be usable as a tiny PostgreSQL server. The L'ane test suite, which takes about three-ish minutes on my Core i5 Thinkpad, took just under nine minutes on this tiny, silent, USB-DRIVEN box. While running at a third of the speed might sound like a deal-breaker, the test suite really hits the database with bulk data loads that aren't realistic under normal use. I'll have to whip out my minimum-wage-worker-simulator to see if this (did I mention it's pink?) lil' box can actually hold up to a "real" workload.
AMD has released Tapper, a test infrastructure for all aspects of testing, including operating systems and virtualization, to github and CPAN. It provides independent layers that adapt to different levels of QA requirements, from simple tracking and presentation of test results to complete automation of machine pools multiplexing complex virtualization use cases, with detailed data evaluation.
Tapper includes:
Automation
Machine Scheduling
Command line utilities
Web Frontend application
Support for writing tests
Powerful result evaluation API
Testplan support with TaskJuggler
Technologies used:
Test Anything Protocol (TAP)
Core system written in Perl and CPAN
DB independent, developed on MySQL and SQLite
Language agnostic testing (e.g. Perl/Python/Shell test suites)
The Catalyst cookbook provides two recommendations for adding RSS feeds to your application:
Create an XML template, populate the stash with data, let your template view render it, then override the Content-Type: header of your view with Catalyst::Response.
Use an XML::* feed module to render the XML, manually set the Catalyst::Response body, then set the Content-Type: header.
The former almost makes me angry, and the latter, although saner, puts view-specific responsibilities on the shoulders of the Controller.
Catalyst::View::XML::Feed
Catalyst::View::XML::Feed is an attempt at something cleaner, and more in the spirit of MVC, which makes the nitty-gritty details of presentation entirely the responsibility of the view. Using it is pretty much what you would expect from any view:
Put your data in $c->stash->{feed}
Forward to your View::Feed (or whatever you called your Catalyst::View::XML::Feed subclass)
It attempts to be as intelligent as possible, allowing you to supply data in a wide range of formats to the feed value in the stash. These can be XML::* Atom or RSS objects, custom objects, plain hashrefs of values, etc.
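For the "plain hashref of values" case, the stash data might look roughly like the sketch below. The specific field names (title, link, entries, body) are assumptions based on common feed vocabulary, not confirmed against the module's documentation, and the Catalyst calls are shown only in comments since they need a running application.

```perl
use strict;
use warnings;

# Hypothetical plain-hashref feed data for the stash; field names are
# illustrative, not taken from Catalyst::View::XML::Feed's docs.
my $feed = {
    title   => 'My Blog',
    link    => 'http://example.com/',
    entries => [
        {
            title => 'First post',
            link  => 'http://example.com/first-post',
            body  => 'Hello, world.',
        },
    ],
};

# Inside a Catalyst action this would then be, roughly:
#   $c->stash->{feed} = $feed;
#   $c->forward( $c->view('Feed') );

print scalar @{ $feed->{entries} }, " entry\n";
```

The appeal is that the controller only assembles data; everything about serializing it to Atom or RSS stays in the view.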
As always, any recommendations or other feedback are warmly welcomed.
The response to my original Ouch post has been overwhelming. Based upon the number of responses it got and the number of emails I personally received about it, Ouch must have struck a chord.
I've taken the feedback I've received and released Ouch 0.03. New feature details after the jump.