I have read several articles and had several conversations about why people don’t like Mojolicious. I have been meaning to write an article about this dislike for a while now, so I’m going to take the opportunity to do so while responding to that article.
Mojolicious is Pro-CPAN
Some people don’t like it, claiming that it is “anti-CPAN”, and this complaint comes in two flavors. First, they believe that if a tool is available from CPAN, it should be used rather than reimplemented. Second, if one really can reimplement it better, the new tool should be forked out of the project and uploaded to CPAN for everyone’s benefit.
While at the Italian Perl Workshop I was talking with a gentleman who does a lot of contract work (and gave me permission to anonymously share this story). Most of his contract work deals with the Web, and he's fortunate enough to have worked with quite a few companies that are a bit more sophisticated than in the old CGI.pm days. In fact, some of them use Mojolicious, an excellent Web framework that many developers are enjoying. Mojolicious is fast, flexible, robust, and has no CPAN dependencies.
This developer hates working for clients who use Mojolicious. I confess that I was surprised when I found out why. It's an exercise in "unintended consequences".
{
    # ... something that needs to be done twice ...
    if ( not state $first_iteration_done++ ) {
        # ... something that must only happen after the first time ...
        redo;
    }
}
In general, some form of “if state $flag” can be used as a “have I been here before?” idiom that avoids the need to mention the flag’s identifier anywhere else. Without state, one must repeat oneself at some distance, at the very least to declare the flag in whichever outer scope has the appropriate lifetime.
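For contrast, here is a minimal sketch of the same block without state; the flag now has to be declared in an enclosing scope, away from the one place it is actually used:
my $first_iteration_done;    # the flag must now live in some outer scope
{
    # ... something that needs to be done twice ...
    if ( not $first_iteration_done++ ) {
        # ... something that must only happen after the first time ...
        redo;
    }
}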
Thank you! Thank you to all who took part in the survey. Your contribution has been greatly appreciated and will help us better understand the experience of newcomers in the Perl community.
You can still participate
If you have not filled in the survey yet, there is still time to participate. The survey will be live until October 22 and can be completed by any Perl contributor who joined the Perl community within the last two years.
An older blog post provides some further details about the research project.
Some preliminary results
Here are some of the results drawn from the dataset collected during the first week of the survey.
Number of survey participants:
43 people have taken part in the survey so far (thank you again). 7 of the 43 reported having attended at least one Perl community event, and 2 went through some sort of mentoring while becoming contributors.
We had a bit of downtime on the CPAN Testers last month. Did you notice? I doubt it, as the guys at Bytemark did a wonderful job helping to get us back online. We had a disk failure and with minimal fuss they replaced the disk and had us back up and running within a few days of spotting the problem. Such a far cry from the fiasco of our previous hosting company. Many thanks to all the guys at Bytemark.
Sitting here at the Italian Perl Workshop, I have been enjoying the talks and the excellent food. I was thinking about Matt Trout's Data::Query talk from yesterday. In a nutshell, Data::Query lets you write things like this:
SELECT { $_->cd->name }
FROM { $_->cds, AS 'cd' }
JOIN { $_->artists, AS 'artist' }
ON { $_->cd->artistid eq $_->artist->id }
WHERE { $_->artist->age > 25 }
This is very exciting, though it might not be immediately evident why.
One of my many rules of software engineering, born of more than a decade of seeing things done the Wrong Way, is that serialization must occur only at the extreme edges of your program. At all other points you should, if possible, deal only with structured data. The lack of this separation in one crucial area of the Perl MongoDB driver is what made supporting Perl 5.8 no longer possible.
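As a minimal, generic sketch of that rule (JSON::PP is used here purely as a stand-in wire format, and the function names are invented for illustration):
use strict;
use warnings;
use JSON::PP qw(decode_json encode_json);

# Decode exactly once, at the input edge ...
sub read_request {
    my ($raw_bytes) = @_;
    return decode_json($raw_bytes);    # structured data from here on
}

# ... so every internal layer sees plain Perl data structures ...
sub apply_discount {
    my ($order) = @_;
    $order->{total} *= 0.9 if $order->{customer}{loyal};
    return $order;
}

# ... and encode exactly once, at the output edge.
sub write_response {
    my ($order) = @_;
    return encode_json($order);
}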
This post introduces an HTML parser which is both liberal and configurable. Currently available as a part of a Marpa::R2 developer's release on CPAN, the new Marpa::R2::HTML allows detailed configuration of new tags and respecification of the behavior of existing tags.
To show how a configurable HTML parser works, I will start with a simple task. Let's suppose we have a new tag, call it <acme>. The older, non-configurable version of Marpa, and most browsers, would recognize this tag. But they'd simply give it a default configuration, one which is usually very liberal -- liberal meaning the tag is allowed to contain just about anything, and to go just about anywhere. A configurable parser allows us to specify the new tag's behavior more explicitly and strictly.
I am happy to announce that Alien::Base (GitHub) has seen a beta release, version 0.001. It seems that the design change I previously blogged about has indeed fixed (well, avoided) the problems I was having supporting the Mac.
This is not to say that Alien::Base is complete. While I have released two testing modules, an Alien:: module (Acme::Alien::DontPanic) and a dependent module (Acme::Ford::Prefect), these are very simple modules. To be sure that the API is flexible enough and that the loader mechanisms are robust enough, Alien::Base needs to be used in the wild.
This week’s Chicago/WindyCity.pm meeting, our monthly Project Night, will feature (though not exclusively) the creation of such modules. I personally will work on porting Alien::GSL to the Alien::Base system. If you are in the area, I hope you will consider attending; if not, please try to wrap your favorite C library using Alien::Base and let me know how it goes.
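To give a feel for the consumer side, here is a rough sketch of what a dependent distribution's Build.PL might look like once an Alien:: module is in place (the binding name is hypothetical, and Alien::GSL appears only because it is mentioned above):
use strict;
use warnings;
use Module::Build;
use Alien::GSL;    # an Alien::Base-based module

my $builder = Module::Build->new(
    module_name          => 'My::GSL::Binding',    # hypothetical XS binding
    configure_requires   => { 'Alien::GSL' => 0 },
    # Alien::Base subclasses expose the flags needed to build
    # against the wrapped C library.
    extra_compiler_flags => Alien::GSL->cflags,
    extra_linker_flags   => Alien::GSL->libs,
);
$builder->create_build_script;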
Computational Social Science lets you compute what a group of people will do given their various levels of motivation (self-interest, wanting to fit in, etc.). Correspondingly, given the outcome, Computational Social Science can calculate the varied and sundry motivations that went into creating it.
This is real, solid hard science, with variables and equations and everything -- science with mathematically verifiable results, not pages of dense, hard-to-read prose because we don't actually have the equations to express what we know about the subject in question.
Those of us who have braved the wilds of science fiction will recognize this by another name -- Isaac Asimov's fictional science of psychohistory. Computational Social Science is social science taking the first steps toward becoming a hard science -- and guess what? Hard sciences are where we make the fastest transition from pure science to everyday engineering.
In the first part I showed some problems and possibilities of the B::C compiler and the B::CC optimizing compiler with a regexp example that was very hard to optimize.
In the second part I got run times twice as fast with the B::CC compiler on the nbody benchmark, which does a lot of arithmetic.
In the third part I got 4.5 times faster run times with perl-level AELEMFAST optimizations, and discussed optimising array accesses via no autovivification or types.
Optimising array accesses showed the need for autovivification detection in B::CC and better stack handling for more ops and datatypes, especially aelem and helem.
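For anyone who has not met it, the pragma mentioned above comes from the CPAN autovivification distribution; a minimal sketch of its effect:
use strict;
use warnings;
no autovivification;    # lexical pragma from CPAN

my %h;
my $x = $h{a}{b};       # merely reading no longer creates $h{a} = {}
print exists $h{a} ? "autovivified\n" : "left alone\n";    # prints "left alone"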
But first let's tackle some easier goals. If we look at the generated C source for a simple arithmetic function, like pp_sub_offset_momentum, we immediately detect more possibilities.
Just a brief night post about people whom I successfully hired recently. I'd like to publish a screenshot from our corporate Yammer pages; Yammer, if you don't know, is a kind of Facebook for use within a company. My method is much more trustworthy than the endorsements feature recently launched on LinkedIn :-)
One of the first modules I took over as a maintainer on CPAN was Proc::Fork.
It is a beautiful interface.
It did get a bit uglier in relatively recent times when I added the run_fork wrapper, an unfortunate necessity in certain cases.
But for small, single-file-redistributable programs that can be offered to people who are merely users of a Unix system and have no CPAN setup or installation experience, it always felt like a burden to pull in a dependency for something as… insubstantial as this little bit of syntactic sugar:
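Roughly, the sugar in question looks like this (a sketch following Proc::Fork's documented block syntax, not the exact code from the post):
use strict;
use warnings;
use Proc::Fork;

run_fork {
    parent {
        my $child_pid = shift;
        print "spawned child $child_pid\n";
    }
    child {
        # ... do the work in the child, then exit it ...
        exit;
    }
    error {
        die "fork failed: $!";
    }
};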
Welcome to Perl 5 Porters Weekly, a summary of the email traffic on the
perl5-porters email list. In case you missed hearing about it, don't forget
to sign up for Gabor Szabo's Perl Maven programming contest.
This week's dusty thread is from the week of July 30, 2012. Pumpking Ricardo
Signes was looking for volunteer(s) to do some hacking on a gitalist
installation hosted on perl5.git.perl.org. Read about the details here.
Are you interested? Contact Rik.
Topics this week include:
Perl 5.14.3 RC1
JROBINSON grant report #10, #11
[PATCH] Suggest cause of error requiring .pm file
Auto-chomp
Refactoring t/op/lex_assign.t to use test.pl
Why is Filter::Simple in core distribution?
Features and keywords versus namespaces and functions
Moose is great! At its most basic, it immensely simplifies the boilerplate required to create Perl objects, providing attributes with type constraints, method modifiers for semantic enhancement, and role-based class composition for better code re-use.
Moose is built on top of Class::MOP. MOP stands for Meta-Object Protocol. A meta-object is an object that describes an object. So, each attribute and method in your class has a corresponding entry in the meta-object describing it. The meta-object is where you can find out what type constraints are on an attribute, or what methods a class has available.
Since the meta-object is a Plain Old Perl Object, we can call methods on it at runtime. Using those meta-object methods to add an attribute modifies the class the meta-object describes, making the new attribute available on its instances. Using Class::MOP, we can compose classes at runtime!
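Here is a minimal sketch of that idea using Moose's meta layer; the class, attribute, and method names are made up for illustration:
use strict;
use warnings;
use Moose::Meta::Class;

# Create a brand-new class at runtime through its meta-object.
my $meta = Moose::Meta::Class->create(
    'My::Point',                         # hypothetical class name
    superclasses => ['Moose::Object'],
);

# Add an attribute via the meta-object protocol ...
$meta->add_attribute(
    x => ( is => 'rw', isa => 'Num', default => 0 ),
);

# ... and a method the same way.
$meta->add_method( describe => sub {
    my $self = shift;
    return 'x = ' . $self->x;
} );

# The composed class behaves like any hand-written Moose class.
my $point = My::Point->new( x => 3 );
print $point->describe, "\n";            # prints "x = 3"

# Introspection goes through the same meta-object.
print $meta->get_attribute('x')->type_constraint->name, "\n";    # Num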
This week there were a couple of contradictory messages on our Twitter which triggered a small discussion.
Meant as a joke about which matters more at the conference, talks on Perl 5 or talks on Perl 6, those two tweets nevertheless carry a mixture of feelings.
The major problem with Perl 6 talks is that many attendees at the conference are not very interested in that version of the language. There's nothing wrong with this opinion, and we respect the attendees who come to the conference to learn new things about Perl 5; we will always support this part of the audience.
In the first part I showed some problems and possibilities of the B::C compiler and the B::CC optimizing compiler with a regexp example that was very hard to optimize.
In the second part I got run times twice as fast with the B::CC compiler on the nbody benchmark, which does a lot of arithmetic.
Two open problems were detected: slow function calls and slow array accesses.
First I inlined the most frequently called function, sub advance, which was called N times, N being 5,000, 50,000 or 50,000,000.