It might be useful for others who also have no idea how XS works and who would benefit from the approach and point of view of another XS newbie.
It is both a step-by-step description and a CPAN-ready Perl distribution.
You can find it on github:
Acme::The::Secret::XS::Diaries
Happy hacking.
I need it basically once a year: read and fiddle around, then forget everything, then try to recall it the next year, and so on. I will definitely come back here to (re-)find examples and explanations.
Thanks!
As often in recent years, I worked on the Benchmark::Perl::Formance toolchain.
The major achievement in this Perl Toolchain Summit was to wrap the separate major components together under a common umbrella (App::Benchmark::Perl::Formance::Supertool), so it is easier to approach when setting up a new environment or actually running and evaluating benchmarks.
This makes it possible to create an overall performance history of Perl5 releases based on a current snapshot of CPAN. Tracking current blead performance is possible as well, though not actively exercised by me right now.
The overall vision can be found in my earlier YAPC::Europe slide decks:
In a couple of days I should have new reliable benchmark results uploaded at
For now, here is a current snapshot with preliminary, non-representative results, taken with the benchmarks running in "fast mode", i.e., with much smaller datasets or iteration counts, and slightly disturbed by OS noise:
I will probably give a lightning talk about it at the upcoming PerlCon in Riga 2019.
Besides the actual numbers, which still have to be computed, this wraps up the story I worked on throughout the last hackathons. Thanks to our sponsors for enabling this work.
For the future, I might move on to regular blead performance tracking, and/or pick up a Perl6 benchmark suite and integrate it so its results are reported and evaluated here, too.
A big Thanks to the organizers, Neil Bowers, BooK, and to Wendy for providing us with food and shopping services. Everything felt so easy and smooth; it helped me focus completely on hacking.
I'm looking forward to whatever Sawyer device they will invent after his 16th Perl release in a row. :-)
Another big Thank You to Wendy for the support with fresh and healthy food - and also for the less healthy and chocolatey stuff :-). And another big Thank You to all the other great attendees - it is always a pleasure to spend time with you all.
Part 3 - Net::SSH::Perl
I am co-maintaining Net::SSH::Perl, though usually I just apply patches that come up on RT or github.
Some months ago Lance Kinley implemented modern ciphers like AES, more key exchange algorithms, etc., on github - however, he started from a CPAN .tgz snapshot. With the help of E. Choroba I got Lance's history rebased onto my repository. However, I was short on time then and wanted to do the release polishing during the hackathon - and so I did.
The only trouble I had was that another patch from Brad Lhotsky, which I had merged earlier, conflicted with Lance's changes in a way that I could resolve at the git level, but some tests kept failing and I did not understand why.
To be sure not to screw things up, I gave up on that merge and released only Lance's extensions to CPAN as a new major release, v2.01.
As a side effect I also uploaded another new module from Lance, Crypt::OpenBSD::Blowfish, to CPAN.
If you are a user of Net::SSH::Perl, please test whether it works for you.
Part 2 - CPAN testing on L4 Linux
To extend the diversity of platforms on CPAN Testers, I brought along a laptop running the L4Re micro-kernel, in order to set up the CPAN::Reporter tools on it. The laptop runs Ubuntu 16.04 with the kernel replaced by L4Linux v4.4.
The only hiccup I had was that the information about the operating system kernel is not picked up at runtime during CPAN installation or reporting but taken from Perl's %Config, which is recorded at build time. Once I realized that, I recompiled the Perl (currently 5.22.1), re-iterated the setup, and let it run during the hackathon, with just occasional reviews to install missing external dependencies.
So if you spot a kernel version looking like 4.4.0-l4-g2be3f0e, like in here - that's my L4Linux CPAN test box.
Part 1 - Benchmark::Perl::Formance
To keep a benchmark stable but still allow further development, last year I started to create separate bundles of existing benchmarks, starting with the "PerlStone2015" suite. Once settled, I would touch it only for maintenance; for newer developments I can fork it into an independent "PerlStone2016" suite, where I could adapt the timings to newer hardware, other benchmarks, or particular language features.
This hackathon I reviewed and polished it so that it takes a reasonable runtime in "normal" mode, i.e., it does not take weeks to execute, and also works in "fastmode", where a benchmark produces results within 1-2 seconds.
Once I had these carefully polished, I synced my CPAN mirror, updated all of my multi-hundred Perl installations with the current CPAN dependencies, and started benchmarking them in "fastmode" to get a tendency by the end of the event. They ran repeatedly, so I have 10+ data points per metric and Perl version for some statistical significance.
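A driver for such a sweep over many installed perls can be as small as a shell loop. The sketch below is only my guess at the shape of it, not the actual toolchain - the /opt/perls directory layout and the benchmark-perlformance --fastmode invocation are assumptions:

```shell
# Hypothetical sketch: one fast-mode benchmark run per installed perl.
# Directory layout and flags are assumptions, not the author's setup.
run_all() {
    for perl in "$1"/*/bin/perl; do
        [ -x "$perl" ] || continue
        # Print the command instead of executing it (dry run):
        echo "$perl -S benchmark-perlformance --fastmode"
    done
}

run_all /opt/perls
```

In the real setup each run would be repeated several times and its output collected per Perl version for later aggregation.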
The dashboard is here.
First conclusions for the 5.23-5.24 era:
- improvements for small-block entry/exit are clearly visible
- algorithmic benchmarks like binarytrees, fannkuch, fasta, nbody, and mandelbrot became remarkably faster
- others, like regexdna and spectralnorm, already became faster with 5.20 and kept that speed
- some regex micro benchmarks run faster
- however, the regex engine micro benchmarks generally became slower since 5.18, but at least stayed stable at that level
Please note that I could only run the scaled-down "fastmode" benchmarks; I will run the heavier-weight benchmarks over the next weeks.
Thanks to our sponsors
Thank you very much, FastMail, ZipRecruiter, ActiveState, OpusVL, Strato, SureVoiP, CV Library, Infinity Interactive, Perl Careers, MongoDB, think project!, DreamHost, Campus Explorer, and the Perl 6 community.
Sponsoring is always a difficult topic inside a company - I know it from both sides of the "money transaction" - therefore I am very grateful for the sponsors' leap of faith in the event, and I can at the same time confirm that it is absolutely worth it. 30+ people working for a week in the same room, 16+ hours a day, with zero communication barriers gives an incredible boost to bringing things forward.
Hurry now while stocks last!
Schedule: http://act.yapc.eu/gpw2015/schedule
It's been a while since I blogged. Yet, it's a tradition now to write my report about the Perl QA hackathon, probably as the last one of the attendees. The 2015 edition of the Perl QA hackathon was a lot of fun. I'm one of the less visible guys there, so I want to give some visibility into my work.
My topic is benchmarking.
Benchmarking the Perl 5 interpreter.
Over the years, I narrowed down that topic, beginning with the problem statement and proceeding over several steps: the search for workloads, the creation of a framework for executing benchmarks and producing meaningful numbers, the bootstrapping of Perl with CPAN, ensuring a stable CPAN, a system to store benchmarks together with general testing results, and the actual execution on dedicated hardware.
You can retrace some of the intermediate steps here:
Perl Workloads - YAPC::Europe 2010
Perl::Formance - YAPC::Europe 2011
Perl::Formance / numbers - YAPC::Europe 2012
The 3 projects that hold my overall vision together are:
bootstrap-perl
Perl::Formance
Tapper
This year I tried hard to spend my time on actual result generation.
From the 1st hackathon day onwards I had my bootstrapping and quick sample benchmarking set up to ensure I can generate results. This basically ran continuously during the 4 days on all released major Perl versions from 5.8.9/5.10 to 5.20.2.
I gradually increased the amount of meta information, polished CPAN bootstrapping regarding distroprefs and dependencies, resurrected SpamAssassin as an interesting macro workload, and finished some more benchmark plugins (matrix multiplication, 5.20 function signatures).
Unfortunately I didn't have a release of our scalable test infrastructure Tapper, with its dedicated benchmarking subsystem, ready by the start of the hackathon, so I just worked with local result files. However, I reworked the data schema to prepare for the later n-dimensional evaluation of results, concentrating on 4 dimensions for now: Perl release version, 64bit on/off, threading on/off, and the different workloads.
I integrated PDL::Stats to improve data confidence through simple repetition and to have aggregated values at hand. Although I concentrated on the simple mean when charting the results with Google's chart API, we can do a more thorough evaluation later.
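To illustrate the aggregation idea (in plain awk here rather than PDL::Stats, purely as a sketch with made-up numbers): repeated timings of one metric collapse into a single mean per Perl version.

```shell
# Five hypothetical wallclock timings of one benchmark, reduced to a mean:
printf '%s\n' 1.92 2.01 1.98 2.05 1.95 |
  awk '{ s += $1; n++ } END { printf "mean=%.3f over %d runs\n", s/n, n }'
# prints: mean=1.982 over 5 runs
```

With enough repetitions per metric, outliers from OS noise average out, which is exactly why the runs were repeated for 10+ data points.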
When I left the train back home I had all parts finished, yet with only the quick runs to prove the overall approach:
The actual benchmarks take much longer and have been running for a couple of days now.
Stay tuned for the actual charts any time soon...
The first rule of the benchmark club: you do not change the environment of the benchmark.
In other words: my changes during the hackathon could have influenced the measurements. I don't think they did, but in fact this happens more often than one might hope. For instance, I also upgraded my CPAN mirror and dependencies during the hackathon. Only rerunning the old versions again will make sure; I planned to do that anyway.
"The issue" that happened after 5.8 is not yet clear; that's what I am trying to narrow down by running benchmarks against 5.9.x. Most probably it will not be a single issue but something in the generalisations for better flexibility that made 5.10 the base for the other nice developments that happened after it.
Another theory: I have mostly been running threaded Perls so far. It could just be that the overhead relative to non-threaded Perl got worse and is now becoming better again.
Re caring for 5.8: it is natural, because it is that famous Perl that lots of people used for many years and which CPAN authors still respect when writing libraries. It is therefore the speedy baseline worth comparing against.
From the performance perspective, according to Reini, it's not even the fastest Perl, but I can't get my toolchain to run with 5.6.
I am late in writing this summary of my hackathon, but that fits the prolongation style I exercised this time. I had quite a slow start, as I found it difficult to flush my overfull @work mindset and resume my open source projects. I used my flight delay to carefully prepare a TODO list, which finally helped with that flush'n'resume exercise.
So what did I do?
My pet project is benchmarking Perl. There I have one major problem:
My benchmarks are rather straight lines without interesting changes: they are straight in the 5.8 timeline; they are straight in the 5.10+ timelines.
However, both straight lines differ from each other, so obviously something must have happened during the 5.9.x times. Unfortunately, exactly that interesting timeline did not work well in my benchmarking toolchain.
So the mission was:
So the high-level plan for my hackathon attendance was kind of a big polishing Chuck Norris roundhouse kick. This is a bit in contrast to what I have read so far from others about inventing amazing new stuff - anyway, I think someone has to solve the boring parts of benchmarking.
Results:
I did not yet come to the point of actually running and inventing new benchmarks with 5.9.x, but it's now possible.
On several other "second fronts" during Perl compilation and benchmarking I did:
As slowly as the hackathon started, in return it kept its momentum for me during the next couple of weeks, letting me happily continue these projects in my rare spare time.
Thanks to the organizers! Everything felt easy and natural, which is only ever achieved through hard preparation work.
Thanks to the sponsors.