How fast can you try?
I just saw the release of Aristotle's Try::Tiny::Tiny to CPAN, which aims to speed up Try::Tiny. That led me to wonder how fast the various Try* modules were. I cannibalized the benchmark code from Try::Catch, and off I went.
Updates
- Include eval and Try::Tiny master (39b2ba3b0) at Aristotle's request
- Fix bug; correct versions of Try::Tiny now always loaded.
The candidates are:
- Try::Tiny 0.28 (PP)
- Try::Tiny master (PP)
- Try::Tiny::Tiny (PP)
- Try::Catch (PP)
- TryCatch (XS)
- Syntax::Feature::Try (PSP)
- Syntax::Keyword::Try (PSP)
- Plain old Perl eval
Where
PP => Pure Perl
XS => XS routine
PSP => Perl Syntax Plugin
Try::Tiny::Tiny doesn't replace Try::Tiny; it alters it, so it's not possible to test the two at the same time. The test code uses an environment variable to switch between the two. It also switches between testing try with no catch and try with catch:
use strict;
use warnings;
use Storable 'store';

# Add the Try::Tiny master checkout to @INC first, then (optionally) load
# Try::Tiny::Tiny, which pulls in and alters Try::Tiny.
use if $ENV{TRY_TINY_MASTER}, lib => 'Try-Tiny-339b2ba3b0/lib';
use if $ENV{TRY_TINY_TINY}, 'Try::Tiny::Tiny';

use Dumbbench;
use Benchmark::Dumb qw(:all);

# DIE_ALREADY=1 makes every test body die, so the catch path is exercised.
our $die_already = $ENV{DIE_ALREADY};

my $TT_label = $ENV{TRY_TINY_TINY} ? 'Try::Tiny::Tiny' : 'Try::Tiny';
$TT_label .= '::Master' if $ENV{TRY_TINY_MASTER};

# 'none' suppresses Benchmark::Dumb's own report; the raw results are stored
# so that separate runs can be merged later.
my $res = timethese(
    0,
    {
        'TryCatch'             => \&TEST::TryCatch::test,
        'Try::Catch'           => \&TEST::Try::Catch::test,
        $TT_label              => \&TEST::Try::Tiny::test,
        'Syntax::Keyword::Try' => \&TEST::Syntax::Keyword::Try::test,
        'Syntax::Feature::Try' => \&TEST::Syntax::Feature::Try::test,
        'Eval'                 => \&TEST::Eval::test,
    },
    'none'
);

store $res, $ARGV[0] // die( "must specify output file" );
{
    package TEST::TryCatch;
    use TryCatch;

    sub test {
        try {
            die if $die_already;
        }
        catch( $e ) {
        };
    }
}

{
    package TEST::Try::Catch;
    use Try::Catch;

    sub test {
        try {
            die if $die_already;
        }
        catch {
            if ( $_ eq "n" ) {
            }
        };
    }
}

{
    package TEST::Try::Tiny;
    use Try::Tiny;

    sub test {
        try {
            die if $die_already;
        }
        catch {
            if ( $_ eq "n" ) {
            }
        };
    }
}

{
    package TEST::Syntax::Keyword::Try;
    use Syntax::Keyword::Try 'try';

    sub test {
        try {
            die if $die_already;
        }
        catch {
            if ( $@ eq "n" ) {
            }
        };
    }
}

{
    package TEST::Syntax::Feature::Try;
    use syntax 'try';

    sub test {
        try {
            die if $die_already;
        }
        catch {
            if ( $@ eq "n" ) {
            }
        };
    }
}

{
    package TEST::Eval;

    sub test {
        eval {
            die if $die_already;
        };
        if ( $@ ) {
            if ( $@ eq "n" ) {
            }
        }
    }
}
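A single configuration is selected entirely through the environment, so one run looks something like this (the store file name is arbitrary; it just follows the naming convention of the driver script shown further down):
# Benchmark the Try::Tiny::Tiny + Try::Tiny master combination, with every
# try block dying so that the catch path is exercised:
TRY_TINY_TINY=1 TRY_TINY_MASTER=1 DIE_ALREADY=1 \
    perl all2.pl ttt_1-ttm_1-da_1.store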
The results of the separate runs are merged thanks to the magic of Benchmark::Dumb:
use strict;
use warnings;
use Storable 'retrieve';
use Regexp::Common;
use Benchmark::Dumb qw(:all);
use Term::Table;

die( "must specify input files" )
    unless @ARGV;

# Group the stored results by benchmark name across all of the input files.
my %merge;
push @{ $merge{ $_->name } }, $_ for map { values %{ retrieve $_ } } @ARGV;

# Collapse each group into a single result, keyed by the initials of the
# name (e.g. Syntax::Keyword::Try => SKT).
my %results;
print "Key:\n";
for my $results ( values %merge ) {
    my @results  = @$results;
    my $result   = shift @results;
    my @sections = map { /^([[:upper:]])/g; $1 } split( '::', $result->name );
    my $name     = join '', @sections;
    printf " %4s => %s\n", $name, $result->name;
    $result = $result->timesum( $_ ) foreach @results;
    $results{$name} = $result;
}

my $rows   = cmpthese( \%results, undef, 'none' );
my $header = shift @$rows;

# Tidy the table cells: drop the +- error terms, round the rates and
# percentages, and remove the '--' placeholders.
for my $row ( @$rows ) {
    for ( @$row ) {
        s/\+\-[\d.]*//g;
        s<($RE{num}{real})/s><sprintf( "%8d/s", $1)>ge;
        s<($RE{num}{real})%><sprintf( "%4d%%", $1)>ge;
        s/--//;
    }
}

my $table = Term::Table->new(
    header => $header,
    rows   => $rows,
);
print "$_\n" for $table->render;
And one script to bind them all:
#!/bin/bash

# Run every combination of DIE_ALREADY / TRY_TINY_MASTER / TRY_TINY_TINY,
# then merge the four runs into one comparison table per DIE_ALREADY value.
for da in 0 1 ; do
    export DIE_ALREADY=$da
    for ttm in 0 1 ; do
        export TRY_TINY_MASTER=$ttm
        for ttt in 0 1 ; do
            export TRY_TINY_TINY=$ttt
            perl all2.pl ttt_$ttt-ttm_$ttm-da_$da.store > /dev/null
        done
    done
    perl -Ilocal/lib/perl5 merge.pl \
        ttt_0-ttm_0-da_$da.store \
        ttt_1-ttm_0-da_$da.store \
        ttt_0-ttm_1-da_$da.store \
        ttt_1-ttm_1-da_$da.store
done
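Since the raw results live in the .store files, a comparison table can also be re-rendered later without re-running any benchmarks; for example, just the DIE_ALREADY=1 (with catch) runs:
perl -Ilocal/lib/perl5 merge.pl ttt_*-ttm_*-da_1.store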
Dumbbench provides an error estimate for each result. In this instance the errors are smaller than the differences between the results, so I've removed them to simplify the comparison tables. All tests were run using Perl 5.22.
Key:
E => Eval
T => TryCatch
TC => Try::Catch
TT => Try::Tiny
SFT => Syntax::Feature::Try
SKT => Syntax::Keyword::Try
TTM => Try::Tiny (master)
TTT => Try::Tiny::Tiny
TTTM => Try::Tiny::Tiny with Try::Tiny (master)
First, try without a catch:
+-----+---------+-----+-----+-----+-----+----+----+----+----+----+
| |Rate | SFT | TC | TT | TTM |TTT |TTTM| T |SKT | E |
+-----+---------+-----+-----+-----+-----+----+----+----+----+----+
| SFT | 44666/s| | -56%| -60%| -62%|-78%|-79%|-86%|-91%|-97%|
| TC | 101475/s| 127%| | -10%| -14%|-51%|-53%|-70%|-81%|-95%|
| TT | 113761/s| 154%| 12%| | -4%|-46%|-48%|-66%|-78%|-94%|
| TTM | 118890/s| 166%| 17%| 4%| |-43%|-46%|-64%|-77%|-94%|
| TTT | 210960/s| 372%| 107%| 85%| 77%| | -4%|-37%|-60%|-90%|
| TTTM| 220330/s| 393%| 117%| 93%| 85%| 4%| |-34%|-59%|-89%|
| T | 337780/s| 656%| 232%| 196%| 184%| 60%| 53%| |-37%|-84%|
| SKT | 538450/s|1105%| 430%| 373%| 352%|155%|144%| 59%| |-75%|
| E |2176700/s|4773%|2045%|1813%|1730%|931%|887%|544%|304%| |
+-----+---------+-----+-----+-----+-----+----+----+----+----+----+
Now, try with catch:
+-----+---------+----+-----+-----+----+----+----+----+----+----+
| | Rate|SFT | TC | TTM | TT | T |TTT |TTTM|SKT | E |
+-----+---------+----+-----+-----+----+----+----+----+----+----+
| SFT | 19747/s| | -55%| -75%|-76%|-83%|-83%|-84%|-87%|-90%|
| TC | 44001/s|122%| | -44%|-46%|-62%|-63%|-65%|-71%|-78%|
| TTM | 78860/s|299%| 79%| | -4%|-32%|-33%|-38%|-49%|-61%|
| TT | 82734/s|318%| 88%| 4%| |-28%|-30%|-35%|-46%|-60%|
| T | 115970/s|487%| 163%| 47%| 40%| | -2%|-10%|-25%|-43%|
| TTT | 118930/s|502%| 170%| 50%| 43%| 2%| | -7%|-23%|-42%|
| TTTM| 129150/s|554%| 193%| 63%| 56%| 11%| 8%| |-16%|-37%|
| SKT | 154550/s|682%| 251%| 95%| 86%| 33%| 29%| 19%| |-25%|
| E | 206810/s|947%| 370%| 162%|149%| 78%| 73%| 60%| 33%| |
+-----+---------+----+-----+-----+----+----+----+----+----+----+
- TT vs. TTM: These measurements swap order with repeated runs, indicating they're the same within measurement error.
- TTT vs. TTTM: TTT is always slower than TTTM.
So,
- eval wins
- Syntax::Keyword::Try is next (but only for Perl >= 5.14)
- catching slows things down significantly
- What's up with Syntax::Feature::Try? It's a syntax plugin, so shouldn't it be similar to Syntax::Keyword::Try?
Can you retry with current Try::Tiny master?
Even so, though… I’m still interested in that result, but this benchmark lineup misunderstands the purpose of TTT. It pitches it against competition it wasn’t meant for.
Also, the most important competitor is missing. Perl 5.14 made raw eval sane, and that's the same minimum perl version as required for keyword plugins. If you accept that minimum, then a benchmark without raw eval isn't really complete.

Even so, TTT is not even meant to compete with any of the other modules. It's meant to be a solution for all the code on CPAN where you can't pick which try/catch implementation that code uses: you're stuck with the fact that it uses Try::Tiny. (Though I suppose you could try to submit dozens of patches and convince dozens of maintainers…) But while you can't switch them to S::K::T or (as I'd advocate) raw eval, you can use TTT to clean them up a little. All of them – at once.

(So the type of benchmark I'm most interested in is “I ran our test suite from my day job with PERL5OPT=-MTry::Tiny::Tiny and it saved 3% CPU”.)
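A sketch of that sort of measurement, where the prove invocation is only a stand-in for however a real test suite is run:
# Time the suite with and without Try::Tiny::Tiny applied to everything via PERL5OPT:
time prove -lr t
time PERL5OPT=-MTry::Tiny::Tiny prove -lr t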
I've added eval and Try::Tiny master. eval of course is fastest.

The point of the benchmark wasn't to single out T::T::T. It didn't make sense to leave it out of a comparison of all of the Try::* modules. Had I left it out, I'm sure I would have caught flak for that instead.

I did inadvertently leave out Try. I'll try to add that in at some point.
Not from me, at least. :-) Any flak on my part was limited to the omission of eval, because that's what TTT is for – it exists because I don't use Try::Tiny. Including TTT in the lineup is a different matter… it's interesting to see the figures (as I said), even if only as a curiosity, since it's kinda beside TTT's point.

Anyway, I wasn't going to write about TTT just yet, but you gave me a clearer idea of how to explain it when I do – so thank you for that.
Also interesting that you found Try::Catch to be slower than Try::Tiny, given its purpose is to be faster.
Actually, on a closer look, it seems this benchmark is probably entirely bogus… ☹️

The current results say that without the catch clause, CPAN Try::Tiny is slower than master (with or without renaming (i.e. TTT)), but with the catch clause, CPAN Try::Tiny is faster than master (with or without renaming).

But if CPAN Try::Tiny is 0.28, then the only difference from master is that master doesn't call caller unnecessarily (i.e. under TTT), and it never stores the value to a variable. It's impossible for master to be slower than 0.28, especially when renaming is disabled.

So as usual, benchmarking is hard. That also makes me suspicious of the result that Try::Catch is slower than Try::Tiny.
I wouldn't call it entirely bogus, just bogus for comparing TTM vs. TT. If I rerun the benchmarks, they change relative position, indicating that the differences are within measurement noise.
The larger differences with and without TTT are real.
The TryCatch result is repeatable. I haven't looked at the code, but a wild guess would be that since this is the only XS module, perhaps there's an efficiency loss going through that interface?
There's a misspelling:
TTTM => Try::Tiny::Tiny with Triy::Tiny (master)
How embarrassing. There's a bug in the script.
These lines:
use if $ENV{TRY_TINY_TINY}, 'Try::Tiny::Tiny';
use if $ENV{TRY_TINY_MASTER}, lib => 'Try-Tiny-339b2ba3b0/lib';
will not always result in Try::Tiny being loaded from my requested directory: the use if statements take effect in order, so Try::Tiny::Tiny ends up loading the installed Try::Tiny before the master checkout's lib directory is added to @INC.
They need to be swapped:
use if $ENV{TRY_TINY_MASTER}, lib => 'Try-Tiny-339b2ba3b0/lib';
use if $ENV{TRY_TINY_TINY}, 'Try::Tiny::Tiny';
I'll update the script and the results.
Aargh TryCatch != Try::Catch. Disregard my remarks.
Results and code updated.
Ah, makes more sense now. Thanks
Well, now. That does indeed look a lot more plausible.
TT vs TTM differ only in whether the SV returned from caller gets assigned to an SV on the pad, which is clearly going to be a noise-level difference.

But TTT vs TTTM differ in that TTTM skips the caller call entirely, which should rise above the noise, if only just – as indeed it seems to.

The fact that TC is so much slower than TTTM is bizarre, though. A quick skim does not reveal any obvious culprit either. I've filed an issue; let's see if the maintainer figures it out.