Ricardo wants to fix smart matching, which is horribly broken and always has been, although we're just starting to realize how bad it really is. He reduces the table to just a few operations:
    $a       $b              Meaning
    =======  ==============  ======================================
    Any      undef           ! defined $a
    Any      ~~-overloaded   invokes the ~~ overload on the object, with $a as the argument
    Any      Regexp, qr-OL   $a =~ $b
    Any      CodeRef         $b->($a)
    Any      Any             fatal
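The reduced table can be sketched as a small dispatch function. The name reduced_match and its exact shape here are my own illustration, not Ricardo's implementation, and the ~~-overload case is omitted for brevity:

```perl
use strict;
use warnings;

# Approximation of the reduced table: undef, Regexp, and CodeRef on
# the right-hand side keep a meaning; anything else is fatal.
sub reduced_match {
    my ( $left, $right ) = @_;
    return !defined $left ? 1 : 0          if !defined $right;
    return ( $left =~ $right ) ? 1 : 0     if ref $right eq 'Regexp';
    return $right->($left) ? 1 : 0         if ref $right eq 'CODE';
    die "no smart match behaviour for this pair\n";    # Any ~~ Any is fatal
}

print reduced_match( 'foo', qr/o+/ )          ? "match\n" : "no match\n";
print reduced_match( 42, sub { $_[0] > 10 } ) ? "match\n" : "no match\n";
```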
Module::Metadata::CoreList compares the module prerequisite version numbers in your Build.PL or Makefile.PL with the versions of those modules in Module::CoreList, for a given version of Perl.
This can help you specify the minimum version number for each prerequisite.
It used to be that my preference for Perl was considered niche; now it seems to be considered outdated. Google Trends shows how Perl has declined over the past seven or so years:
The weird do-block bug I stumbled upon in my previous post has been fixed as of 7c2d9d0,
the day after I posted. I don't know if anybody saw the post or if it was a coincidence, but thanks!
iCPAN hit the app store about a year ago. I didn't really expect there would be a lot of downloads, but the reality is that there was far more interest than I had expected, which is good. After the original app was released, I was able to release a couple of subsequent versions with updated Pod, but it soon became clear that some problems were not being solved.
The first issue was that it was taking a very long time to put a build together, because I had to parse all of minicpan with each run. The second issue was that my coverage was too low, around 60,000 docs. A lot of docs were being missed and there were a lot of edge cases to work out.
About a month or two ago, Smash and I started writing a book on CPAN modules and frameworks. The final title and table of contents are not yet fully defined, but some chapters are already taking shape.
I put up a simple page where the draft documents will appear. I know the URL sucks, but I might fix that soon. At the moment one chapter is available.
The final document will be made freely available to the community in electronic format (without the DRAFT watermark, which is there just so you know it hasn't been revised yet). A printed version will also be available, probably through Lulu, though that is not yet settled.
The book is being written in PseudoPod, and it is available on GitHub. Refer to the book page for details.
Comments, corrections and grammar fixes are welcome. We know our English sucks. Just try to be positive, or we might end up quitting this job :)
As a PAUSE admin, I ran into a new problem today. The URI::Dispatch module was not indexed because it has no package statements and no provides section in its META.yml.
This leads me to a couple of questions, for which I invite you to answer:
Just how hard should the PAUSE indexer work to discover namespaces when you work hard to hide them? Remember, at the moment it's only MooseX::Declare, but it might be many other modules later.
What are other MooseX::Declare people doing to have their stuff indexed?
I'm also thinking about this for my BackPAN indexing bits, which use my module Module::Extract::Namespaces. That's a PPI-based module, which I guess I now have to teach to understand Moose. I haven't thought much about that yet, so I'm not sure how I'm going to do it.
The Astro-satpass distribution contains classes to compute satellite position and visibility. The recent ‘Heads up’ posts were the latest chapter in its life, and that chapter comes to an end with the release of version 0.040. The only code modification since the most-recent ‘Heads up’ was to have the satpass script take advantage of the new lazy_pass_position attribute.
So who uses this distribution anyway? Since it is open source, the only users I find out about are the ones who write me — usually with a problem of some sort. Most of these represent an opportunity to improve the distribution, even if only to try to make the documentation a little clearer.
My impression from the correspondence is that most uses of this package are casual — hobbyists, interested amateurs, and the like. This is not to deprecate those users: I am one myself.
But it appears to be in serious, day-to-day use in at least four places. These are:
Recently, we had to rework a legacy project using Catalyst. We chose HTML::FormFu as our form engine because we had good experience with it in other projects.
However, we had to face one problem: forms had to be different for every user. Well, to be precise, the difference is based on the user's roles.
As a rescue and a generic approach, we chose to create a simple HTML::FormFu::Plugin that can be applied to some or all of the forms, as needed. To get things working, we need to do two things:
mark certain fields inside the form with the rights we need for editing
have our plugin make the privileged fields readonly
A simple form (in perl syntax) might look like this:
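Here is a minimal sketch; the field names and the edit_roles marker are hypothetical illustrations, not from the original project:

```perl
use strict;
use warnings;

# Hypothetical HTML::FormFu-style form config. The plugin would scan
# elements for a marker like 'edit_roles' and set the field to
# readonly when the current user lacks one of the listed roles.
my $form_config = {
    elements => [
        { type => 'Text', name => 'name' },
        {
            type  => 'Text',
            name  => 'salary',
            attrs => { edit_roles => 'admin,hr' },    # marker for the plugin
        },
        { type => 'Submit', name => 'submit' },
    ],
};
```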
We have changed the domain for YAPC::NA 2012. It was yapc2012.us. It is now yapcna.org. The reason for this change is that we want to start a tradition of handing down resources from one YAPC::NA to another so that each new conference organizer need not start from scratch. So after YAPC::NA 2012, we will hand over this domain and a bunch of other resources to the YAPC::NA 2013 organizer, and they will hand it down to 2014 and so on.
June proved to be an eventful month for CPAN Testers. We had several updates and passed a couple of significant milestones.
Firstly, the updates. As well as the blog and wiki updates earlier in the month, David Golden also released a new version of CPAN-Reporter. This release is notable for two reasons: it now defaults to using the Metabase rather than the old email transport method, and it attempts to automate as much of the Metabase ID creation as possible. In order to encourage new testers, simplifying the setup process is a must. David made significant changes to CPAN.pm regarding auto-configuration, and he's incorporated the same ideas into CPAN-Reporter. Ultimately we'd like to see a common client, which uses the APIs of CPAN-Reporter and CPANPLUS-YACSmoke, and can abstract away much of the configuration and processing of smoke testing. The aim is to encourage more casual testers to send us reports from the real world, as distributions are installed or upgraded.
I've just recently released my first Perl-related screencast, Fun with clouds.
It details how to install and build your first Mojolicious web application. I'd like to create more, just let me know what Mojolicious-related topics you would like to hear about on http://mojocasts.com. The more feedback I get, the more I'll release.
In Perl, we like to put important things first, so the ElasticSearch query language has always felt a bit wrong to me. For instance, to find docs where the content field contains the text keywords:
# op field value
{ text => { content => 'keywords' } }
To me, the important part of this is the field that we’re operating on, so this feels more natural:
# field op value
{ content => { text => 'keywords' } }
Any method which takes a query or filter param (e.g. search()) now also accepts a queryb or filterb parameter instead, whose value will be parsed via SearchBuilder:
Do a full text search of the _all field for 'my keywords':
$es->search( queryb=> 'my keywords' );
Find docs whose title field contains the text apple but not orange, whose status field contains the value active:
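One plausible way to express that in SearchBuilder's SQL::Abstract-style syntax. This is a sketch; the exact operator spellings (text, -not) should be checked against the ElasticSearch::SearchBuilder documentation:

```perl
use strict;
use warnings;

# Sketch of a structured query in the field-first style:
my $query = {
    title  => { text => 'apple' },      # title contains 'apple'...
    -not   => { title => 'orange' },    # ...but not 'orange'
    status => 'active',                 # status has the exact value 'active'
};

# assuming $es is a connected ElasticSearch client instance:
# $es->search( queryb => $query );
```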
I'm tired of hearing from people that perl is dead or slow.
But basically they are right. If you check out a popular language comparison,
perl is almost in the last ranks, if listed at all: behind python, php and ruby. And we all
know that at least ruby is slower than perl.
This is partially because those perl scripts are pretty lame, not as optimized as the comparable scripts. And partially because perl is slower than C or LISP.
I started improving some of the slowest scripts, which give us a bad reputation.
E.g. I optimized fasta from 7 sec to 2.3 sec
without any algorithmic improvement. Maybe others want to help out too.
Worthwhile targets are fasta and binary-trees.
As I found out, adjusting the algorithm helps much more.
The official tracker contains a version which is 20x faster than fasta.perl, achieved by using Python's algorithm, and is therefore 2x faster than python3. This is the range we are used to.
And I'm also working on the compiled perlcc versions with several optimizations.
And I'll try out the new ideas I talked about at YAPC::NA. So far my typed perlcc versions are slower, because I copy too many lexicals back and forth. Needs more optimization work.
Task::Dancer is just a collection of everything Dancer. It was started by Sawyer X, and now I'm helping to maintain it.
Being a collection of all modules (plugins, mostly) for Dancer, it is a good way to show external progress in the framework (the internal progress was blogged about a few days ago).
Dancer::Logger::ColorConsole is a logger back-end, mostly for debugging purposes, that colorizes the log according to user-defined rules.
Dancer::Plugin::Mongo is not a new module, but it was disabled in the last version because the MongoDB module was failing to install. It is a simple wrapper around MongoDB.
In our work project, there are lots of Test::Class-based modules, and they live in lib/ so other distributions can use them. Anyway, when writing a new module of tests, I don't start from an empty file but rather copy one of the existing test modules.
So I have to change the package name, but I've long felt that this can be automated - after all, if the module file is lib/Foo/Bar.pm, vim should be able to deduce that this is package Foo::Bar and change the package name accordingly.
The script below does that. It makes a few assumptions: that your modules live in a lib/ directory; that the package line already exists (but presumably with the wrong name); and that the package name to be replaced is on the first line that starts with 'package', followed by the name.
It's simple enough so you can adapt it to your needs. I hope you find this useful!
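A minimal sketch of that idea in plain Perl, under the assumptions above (this is my own reconstruction, not the original script):

```perl
#!/usr/bin/env perl
# Sketch: derive the package name from a path under lib/ and rewrite
# the first 'package' line of the file in place.
use strict;
use warnings;

sub path_to_package {
    my ($path) = @_;
    ( my $pkg = $path ) =~ s{^.*\blib/}{};    # lib/Foo/Bar.pm -> Foo/Bar.pm
    $pkg =~ s{\.pm\z}{};                      # Foo/Bar.pm     -> Foo/Bar
    $pkg =~ s{/}{::}g;                        # Foo/Bar        -> Foo::Bar
    return $pkg;
}

if (@ARGV) {
    my $file = shift;
    my $pkg  = path_to_package($file);

    open my $in, '<', $file or die "open $file: $!\n";
    my @lines = <$in>;
    close $in;

    # replace only the first line that starts with 'package'
    for my $line (@lines) {
        last if $line =~ s/^package\s+[\w:']+/package $pkg/;
    }

    open my $out, '>', $file or die "write $file: $!\n";
    print {$out} @lines;
    close $out;
}
```

From vim you could wire this up with something like `:!rename-package %`, adapting the script name to taste.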
This article is about the tension between writing code and owning code.
We all know about working for some organization (hereafter 'org'), which
ends up owning our code. I think the tension arises because we pour some
of our heart into each program, especially those we feel are well-written.
But we're uneasy with the fact that the resultant code isn't ours.
So, how do we reconcile this tension?
I thought about this for years, until one day I had an insight which I
feel resolved this issue for me.
The first step is to take your focus off the program you're currently writing,
and look at your history. Almost certainly there will have been a series
of such programs, written for a variety of orgs.
Now, stop thinking of them as programs, and imagine they are a series of
poems, paintings, sculptures, magazine articles. See the similarity?
The next step is to see that a series of creations (poems, programs, etc.)
is simultaneously all similar and yet all unique.