Well, not actually wrong, just slow. But the exaggeration makes a punchier headline, you’ll admit.
This comes up when an interface takes a pattern to match things against. Sometimes you have some reason to want this match to always fail, so you want to pass a pattern which will never match. The customary way of doing this is to pass qr/(?!)/. There is a problem with that, though.
I’m not talking here about the fact that if possible, you really don’t want to pass an actual qr object. We’ve already covered that. It was a surprising enough discovery that I’ll take this opportunity to signal-boost it while we’re here, but this article is not about that.
Tom Wyant:
Interestingly (to me, at least) they reported that the removal of the /o modifier made their case 2-3 times slower. This surprised me somewhat, as I had understood that modern Perls (for some value of "modern") had done things to minimize the performance difference between the presence and absence of /o.
They indeed have.
Ironically, it’s qr objects which don’t get that benefit. On the machine I’m typing on, the following benchmark…
Have a look at the CPAN Testers reports for two TRIAL releases of the same module, one from 2 days ago, the other a little over 3 years ago:
Last time, reports started coming in within hours of the release; over 60% of the picture was there within a day; some 85% after 2 days; and the first wave of reports lasted a week.
This time, it took almost a day to even start getting reports, and the diversity has been much lower. 3 days in, reports are still absent for many platforms:
- None for NetBSD, almost none for OpenBSD.
- Only a handful for Solaris (but I’ll admit to small surprise about that one).
- Windows is sparse – although coverage is not notably down relative to last time.
- Cygwin is entirely absent this time – but not a big difference from the one lone report it had last time.
- FreeBSD and Linux are at least healthy – but no longer anywhere near comprehensively covered. (How’s that for something I’d never have expected to see?)
- Who’da thunk the best-covered platform would someday be… Darwin?
(Also of note is that the only 5.8 tested at all so far is 5.8.9, a maintenance release from long after 5.10.0 with little real-world relevance. Last time 5.8 had good coverage, including the most important releases (5.8.1; 5.8.5; 5.8.8). (Of course, any 5.8 is better than no 5.8 at all.))
Within just a few years, we are severely down on CPAN Testers resources.
And that’s compared to 2018, which already was a long way from any kind of golden age for CPAN Testers.
CPAN Testers is on its way out.
I never bothered with CI for CPAN modules before but now it seems an unavoidable necessity.
(And common CI products have never made broad platform coverage easy…)
Update: Time::Local 1.30 includes a new pair of functions timegm_posix/timelocal_posix which address all issues outlined in this article, including the issues with the traditional functions, at least as pertains to Time::Local’s purpose as an inverse of the gmtime/localtime Perl builtins.
The new _modern function variants in Time::Local have come up a few times lately. I have some thoughts on them, but presenting my position dispassionately enough to be persuasive demands an essay of unfortunate length… so let’s get on with it.
Let me lead with the positive: it is a problem with the traditional functions that they would sometimes add 1900 to the year and sometimes a different value and sometimes nothing. This heuristic in the interface is bad. Doing something about it is a good idea.
Let me also lay out some simple statements of fact: Perl ships with gmtime and localtime functions which return a datetime represented as a list of numbers. Time::Local supplies inverse functions which take such a list and return the Unix epoch time that corresponds to that datetime. The Perl functions, among other things, return the year as the number of years since 1900. A correct inverse of the core functions would therefore simply add 1900 to the year passed.
The traditional Time::Local functions do not fully do this: they only do it if the year number is just big enough but not too big.
Eevee:
Perl has the strange property that its data structures try very hard to spill their contents all over the place. Despite having dedicated syntax for arrays – @foo is an array variable, distinct from the single scalar variable $foo – it’s actually impossible to nest arrays.
my @foo = (1, 2, 3, 4);
my @bar = (@foo, @foo);
# @bar is now a flat list of eight items: 1, 2, 3, 4, 1, 2, 3, 4
The idea, I guess, is that an array is not one thing. It’s not a container, which happens to hold multiple things; it is multiple things. Anywhere that expects a single value, such as an array element, cannot contain an array, because an array fundamentally is not a single value.
And so we have “references”, which are a form of indirection, but also have the nice property that they’re single values.
This is a common thing for people to find weird about Perl. Really though it’s just a different default.
Perl’s reference-taking operator is simply dual with the splat operator in Ruby and recent Javascript.
Since my recent participation at the QA Hackathon I have become aware that rather more people than I expected do not know the specifics of this situation. Fewer than I expected have heard of it at all, even, although there appears to be some general awareness at the “something happened with that” level at least.
However, the situation is being used to characterise Marc Lehmann whenever his name comes up (and it comes up rather more often than I would expect or consider necessary).
To give a clear picture of the facts and to avoid repeating that exercise every time I have a related conversation, here is an outline of where we are and how we got here.
(Thanks to Andreas König, Graham Knop, and Peter Rabbitson for proofreading drafts of this article and verifying the stated facts.)
This is a moderately edited (primarily rearranged) version of a comment on the Perl 5 issue tracker written by Yves Orton (demerphq). I thought it would be useful to a wider audience so I am reposting here with permission.
Note: this was written off the cuff and is not comprehensive. In correspondence, Yves noted that such an article could/should cover more formats, e.g. MsgPack.
I [feel strongly] that Data::Dumper is generally unsuitable as a serialization tool. The reasons are as follows:
I wrote this article almost a year ago as part of an omnibus reply to a bunch of different posts from a perl5-porters thread. I never finished all parts of the reply and thus never sent this part either, but in contrast to the other parts of this stillborn mail, I think this one is worth reading. So asked Johan Vromans:
It still escapes me why @* was chosen instead of the much more logical []:
$h{a}[0]->[]
The reason is that there are a number of problems to solve with any new deref syntax:
“Let’s free ourselves from the shackles and do something bold!”
I always cringe when I hear this battle cry. Isn’t that sentiment exactly what set the trajectory for the Perl 6 effort? Maybe it’s just been so long that people have forgotten.
But that is precisely how Perl 6 became such an amazingly long trek: once you remove the constraint of staying compatible, everything is suddenly, potentially, up for reconsideration. Then when you start changing things, you discover that changes in one part of the language also affect several other, remote parts of the language. So it starts with the simple desire to fix a handful of obvious problems in obvious ways… and spirals out as you make changes, and further still as you make changes in response to your changes, ever further and further.
At that point, it is exceedingly likely that the project will fizzle out before it ever comes to any fruition. But even if you have the perseverance, you face an uphill battle: unless your project has the community’s implicit blessing as the successor (as Perl 6 does, due to Larry’s presence), it is likely to simply slip into oblivion… the way Kurila did.
So yes: backcompat is holding us back… the same way that gravity is. It keeps us from floating away untethered.
Note that I’m not saying it doesn’t really hold us back. I’d love to travel to space easily, too! I still await Perl 6, as well.
But what I think, every time someone proposes to throw off the shackles of backcompat and go for it, is that we already have one Perl 6 – we don’t need another.
Dave Cross:
I’m not going to object to Module::Build leaving the core. I’m sure there are good reasons, I just wish I knew what they are. I am, however, slightly disappointed to find that Schwern was wrong ten years ago and that ExtUtils::MakeMaker wasn’t doomed.
Schwern wasn’t wrong and MakeMaker remains doomed all these years later. It’s still around only because there hasn’t been anything to take its place. Module::Build looked like it was going to be that usurper – but didn’t work out.
Note that the reason that, between EUMM and M::B, M::B is the one leaving the core, is that EUMM is necessary to build the core and M::B is not. The reason for that is that no one bothered to port the existing MakeMaker-dependent infrastructure to Module::Build. And that never happened because M::B never gained the necessary features (XS support, mainly) fast enough for anyone to want to do the porting – because it wasn’t sufficiently better than EUMM for anyone to want it enough to add the features.
However, EUMM is about as marginally maintained nowadays as M::B. Both are doomed, though their type of doomedness is one that’s accompanied by remarkable staying power. (Break-the-CPAN status tends to have that effect.) RJBS is on record that, should EUMM ever become unnecessary to building the core, it will make its exit stage left much the same as M::B is making now.
So… what happened?