Perl QA Hackathon report - part 2: CPAN testing on L4 Linux

This year at the Perl QA Hackathon I had three topics: update the Perl benchmarks up to 5.24, enable CPAN test reporting on L4Linux, and release Net::SSH::Perl v2 to CPAN.

Part 2 - CPAN testing on L4 Linux

To extend the diversity of platforms on CPAN Testers, I brought a laptop with me which runs on the L4Re micro-kernel, in order to set up the CPAN::Reporter tools on it. The laptop runs Ubuntu 16.04 with the kernel replaced by L4Linux v4.4.

The only hiccup I had was that the information about the operating system kernel is not picked up at runtime during CPAN installation or reporting, but taken from Perl's %Config. Once I realized that, I recompiled the perl (currently 5.22.1), repeated the setup, and let it run during the hackathon, with only occasional check-ins to install missing external dependencies.
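The kernel string that ends up in a test report comes from the build-time %Config, not from the running kernel; a quick way to check what a given perl will report:

```perl
use strict;
use warnings;
use Config;

# These values are baked in when perl itself is configured and built,
# so a kernel swapped in after the build is not reflected here.
print "osname : $Config{osname}\n";
print "osvers : $Config{osvers}\n";
```

That is why the perl had to be rebuilt under L4Linux before reports showed the right kernel version.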


So if you spot a kernel version looking like 4.4.0-l4-g2be3f0e in a report - that's my L4Linux CPAN test box.

Debugging adventure - why it's hard to estimate

This is the second instance in recent times where I thought something would be a quick fix, but it ended up taking far more time than it should have. The first was an iOS app submission affected by something Apple changed recently. This one is a Perl example, so I'm posting it here.

Mandrill is shutting down. I was using their service for my domains, table tennis match and Brainturk brain games online. I looked at the alternatives and settled on SparkPost. This is the SparkPost example for using Perl. It seemed simple enough, so I asked my employee, who is learning Perl, to try it out. When we ran the example we started getting an error. The example uses Net::SMTP, and running it in debug mode gave us this:

RCPT TO: <x@mydomain.com>
550 5.7.1 relaying denied
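For reference, that conversation can be surfaced by turning on Net::SMTP's debug flag. This is a minimal sketch, not SparkPost's actual settings; the host and addresses are placeholders, and it is wrapped in a sub so nothing connects until you call it:

```perl
use strict;
use warnings;
use Net::SMTP;

# Hypothetical host/addresses; call smtp_probe() to see the dialogue.
sub smtp_probe {
    my $smtp = Net::SMTP->new(
        'smtp.example.com',          # placeholder relay host
        Port  => 587,
        Debug => 1,                  # echoes the full dialogue to STDERR
    ) or die "connection failed: $@";

    $smtp->mail('sender@mydomain.com');
    $smtp->to('x@mydomain.com')      # a relay-refusing server answers
        or warn $smtp->message;      # here with "550 5.7.1 relaying denied"
    $smtp->quit;
}
```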

Many Sides of Moose!!

In my last post I managed to make a little code progress, but I still had one major goal to accomplish: to make my Data Accessor smart enough that it would know which Driver to use from the connection passed in.

Well, I am glad to say I am now able to do that, as this test, 02_base.t, shows:

ok 1 - use DA;
ok 2 - use DA::View;
ok 3 - use DA::Element;
ok 4 - Person is a View
ok 5 - Street is an Element
ok 6 - County is an Element
ok 7 - City is an Element
ok 8 - Address is a DA
ok 9 - SQL correct
ok 10 - Mongo Query correct

Moose is anything but inflexible and with it I was able to come up with a working prototype very quickly just by moving a few things about and reversing my roles.

So, where before I had DA.pm as a role, I changed that to a class.
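A hedged sketch of that reversal (the attribute and method names here are assumptions for illustration, not the actual DA code): what used to be composed via `with` becomes a base class that drivers extend:

```perl
package DA;
use Moose;                       # formerly: use Moose::Role;

# A shared attribute every accessor needs (name assumed)
has view => ( is => 'ro', isa => 'Str' );

# Common entry point; the driver supplies _execute
sub retrieve { my ( $self, @args ) = @_; $self->_execute(@args) }

package DA::SQL;
use Moose;
extends 'DA';                    # drivers now subclass instead of consuming a role

sub _execute { return 'SELECT ...' }
```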

O pumpking! My pumpking!

As many people know by now, Ricardo Signes recently announced that he will be standing down as pumpking once Perl 5.24.0 is released, after four and a half years in the role — not to mention an unprecedented five stable 5.x.0 releases of Perl!

Since the Perl QA Hackathon is the first Perl event Rik has attended since his announcement, we thought it would be fitting to offer Rik a token of our appreciation for the remarkable and tireless work he’s put in during his service. So we closed up the second day of the hackathon with a short presentation and a small expression of our gratitude (and hopefully one that Rik didn’t find too embarrassing!).

In particular, Rik has now joined the very select group of people who’ve received a Silver Camel.


Perl QA Hackathon report - part 1: Perl::Formance

This year at the Perl QA Hackathon I had three topics: update the Perl benchmarks up to 5.24, enable CPAN test reporting on L4Linux, and release Net::SSH::Perl v2 to CPAN.

Part 1 - Benchmark::Perl::Formance

To keep a benchmark stable but still allow further development, last year I started to create separate bundles of existing benchmarks, starting with the "PerlStone2015" suite. Once settled, I would only touch it for maintenance; for newer developments I could fork it into an independent "PerlStone2016" suite, where I could adapt the timings for newer hardware, other benchmarks, or particular language features.

This hackathon I reviewed and polished it so that it has a reasonable runtime in "normal" mode (so it does not take weeks to execute) and also in "fastmode", where each benchmark produces results within 1-2 seconds.

The removal of the lexical topic feature in 5.24

Now that p5p has removed the broken lexical topic feature in 5.24, it's time to look back at how that happened, why it should not have happened, and how cperl fixed the problem.

In 2013 Rafael Garcia-Suarez posted to p5p a plea, Salvaging lexical $_ from deprecation, arguing that the then-recent deprecation of my $_ was wrong and could easily be fixed.

He explained the problems pretty nicely on the language level, but forgot to explain the internal advantages of a lexical $_ over a global or localized $_. Adding that is pretty simple: a lexical $_ is an indexed slot in the pad array, calculated at compile time, whilst a local or global $_ is a hash entry in the namespace. The lexical is much faster to look up, write, and restore. The slot index is compiled in and carried around in the op. That's why it's called lexical: it is resolved at compile time, not at run time.
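The speed claim is easy to probe with the core Benchmark module. This sketch compares a lexical (pad slot) against a localized package global (stash entry with save/restore); relative rates vary by machine, so no concrete numbers are claimed here:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

our $global;

# Two equivalent loops: one uses a lexical (a compile-time pad slot),
# the other a localized package global (a runtime symbol-table entry
# that local() must save and restore, as with a localized $_).
my %cases = (
    lexical => sub { my $x = 0;         $x++      for 1 .. 100; $x },
    global  => sub { local $global = 0; $global++ for 1 .. 100; $global },
);

# Run each variant for about one CPU second and print relative rates.
cmpthese( -1, \%cases );
```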

Just a Quick one!

So I spent a little time working on the rough code from my last post and made a working copy that you can have a look at here.

It was easy enough to get the SQL part working; after all, I have been using such code in my Data Accessor for many years now. It also took only a few minutes to create a new DA::Mongo class with an '_execute' function to come up with the query for Mongo.

With my limited test I get the correct query in SQL
 
SELECT street, city, country FROM person  AS me 
and Mongo as well
 
db.person.find({},{ street: 1, city: 1, country: 1 })

So I have accomplished one of my goals: to have the same set of params to my API come up with the correct query in either SQL or Mongo.
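In spirit, the dispatch looks like this. It is a simplified sketch with assumed function names, not the real DA code: the same field list flows into whichever driver's query builder is in play:

```perl
use strict;
use warnings;

my @fields = qw(street city country);

# SQL driver sketch: the fields become a SELECT list.
sub sql_execute {
    my ( $table, @cols ) = @_;
    return sprintf 'SELECT %s FROM %s AS me', join( ', ', @cols ), $table;
}

# Mongo driver sketch: the same fields become a projection document.
sub mongo_execute {
    my ( $collection, @cols ) = @_;
    my $proj = join ', ', map { "$_: 1" } @cols;
    return "db.$collection.find({},{ $proj })";
}

print sql_execute(   'person', @fields ), "\n";
print mongo_execute( 'person', @fields ), "\n";
```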

Perl QA Hackathon 2016: Configure

I write this from sunny Rugby, England, where I’m attending the QA Hackathon 2016. It’s always great to spend time with people who are active in the Perl community, not just socialising, but working on the software we all depend on.

Half time at the Perl QA Hackathon 2016

Two days down, two to go.

The Perl QA Hackathon for 2016 is being held in Rugby. We are now halfway through. This is the first year in quite a few when I haven't come to the event with Devel::Cover failing tests against the imminent new Perl release, and this has given me a little more time to concentrate on other matters.

A large part of the value of the QA Hackathon is in the discussions that take place when people can sit together and thrash something out, or bounce ideas off each other. I have been able to have a number of interesting discussions, some relating to things I brought with me, and others arising from ideas others have had.

Test2/Test::Builder Update from the QAH

Yesterday was the first day of the QA Hackathon in Rugby, UK. The first item on the agenda was a discussion about Test2 going stable. This blog post will cover the important points of that discussion.

For the impatient, here is a summary:

  • Test2 is going to be part of the Test-Simple dist. It will not be a standalone dist.
  • The next stable Test-Simple release will include Test2 and a Test::Builder that runs on Test2.
  • The release date for the next stable Test-Simple, which includes Test2, will be no sooner than Friday, May 6th, which is our planned release date.

The QAH discussion focused on a single question: "What is the best path forward for Test::Builder when we consider both end-users and test tool authors?"

Code comes and Code goes but the App goes on!

Well how about some coding today?

Let's look at the first little bits from my last post, the 'View' and 'Element', and I am going to focus on using them with SQL in this post. So we have this:
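Going by the class names from the test output in the earlier post (DA::View, DA::Element), a minimal Moose sketch of those two pieces might look like the following; the attribute names are assumptions for illustration, not the author's actual code:

```perl
package DA::Element;
use Moose;

# A single column/field, e.g. Street, City, Country
has name => ( is => 'ro', isa => 'Str', required => 1 );

package DA::View;
use Moose;

# A named collection of elements, e.g. a Person view
has name     => ( is => 'ro', isa => 'Str', required => 1 );
has elements => (
    is      => 'ro',
    isa     => 'ArrayRef[DA::Element]',
    default => sub { [] },
);
```

Used roughly like: `DA::View->new( name => 'person', elements => [ map { DA::Element->new( name => $_ ) } qw(street city country) ] )`.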

It's Earth Day - time to clean up CPAN!

Happy Earth Day 2016!

Fortuitously, Earth Day this year falls during the QA Hackathon in Rugby, UK. Earth Day is a great time to clean up old distributions in one's CPAN directory, to save storage space on the countless CPAN mirrors and generally reduce clutter. To do this, I run my cleanup script, available here.

Virtual Spring Cleaning (part 6 of X) wherein I expose my bad taste for all to see

Part of my motivation for the virtual spring cleaning is that I have many slow-cooking pet projects in which I try out new modules or features that have not yet passed the test of time, at least at the moment of creation. I only work on these projects when I have both inspiration and downtime, usually right after Perl workshops. One example of such a project is a prototype of a rogue-like game named App::StarTraders (unpublished), which is intended to become a clone of Elite.

Announce: Rakudo Perl 6 compiler, Release #98 (2016.04)

On behalf of the Rakudo development team, I'm very happy to announce the
April 2016 release of Rakudo Perl 6 #98. Rakudo is an implementation of
Perl 6 on the Moar Virtual Machine[^1].

This release implements the 6.c version of the Perl 6 specifications.
It includes bugfixes and optimizations on top of
the 2015.12 release of Rakudo, but no new features.

Upcoming releases in 2016 will include new functionality that is not
part of the 6.c specification, available with a lexically scoped
pragma. Our goal is to ensure that anything that is tested as part of the
6.c specification will continue to work unchanged. There may be incremental
spec releases this year as well.

The tarball for this release is available from http://rakudo.org/downloads/rakudo/.

Please note: This announcement is not for the Rakudo Star
distribution[^2] --- it's announcing a new release of the compiler
only. For the latest Rakudo Star release, see
http://rakudo.org/downloads/star/.

And the Plot Thickens

Yesterday I looked at maybe abstracting my API to make it common across a number of Data Accessor drivers, such as SQL and MongoDB, and that got me thinking about how I am going to put all the pieces together.

Now, what I would like is for the end user to be able to do something like this:
 
my $address_da   = SomeDa::Address->new();
my $address_hash = {};
my $dbh = DBI->connect("DBD::anything", $uid, $ps);
$address_da->retrieve($dbh, $address_hash, $options);
print $address_hash->{street} ...

CV-Library is sponsoring the QA Hackathon

We're delighted to announce that CV-Library is supporting the QA Hackathon for the first time, as a gold sponsor.

CV-Library is the UK's leading independent job site, attracting over 3.8 million unique job hunters every month and holding the nation's largest database of over 10 million CVs.

Another release of Net::SSH2 is coming... test it, please!

I have been working on a new release of Net::SSH2 that has undergone major, mostly internal, changes.

There are not too many new features. The aim has been to simplify the code, make it more reliable, and improve the overall consistency and behavior of the module.

The most important changes are as follows:

  • Typemaps are used systematically to convert between Perl and C types:

    In previous versions of the library, input and especially output values were frequently handled by custom code, with every method doing similar things in slightly different ways.

    Now, most of that code has been replaced by typemaps, resulting in consistent and much saner handling of input and output values.

  • Only data in the 'latin1' range is now accepted by Net::SSH2 methods. Previous versions would just take the Perl internal representation of the passed data without considering its encoding and push it through libssh2.

    That caused erratic behavior, as the internal encoding depends not only on the kind of data but also on its history, and even latin1 data is commonly encoded internally as UTF-8.
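In practice this means handing the module explicit bytes rather than character strings. A hedged sketch of preparing data with the core Encode module before a (hypothetical, not constructed here) channel call:

```perl
use strict;
use warnings;
use Encode qw(encode);

# A character string; perl may store it internally as UTF-8.
my $text = "Stra\x{df}e";

# Encoding to latin1 explicitly makes the byte representation
# unambiguous, which is what the new Net::SSH2 expects.
my $bytes = encode( 'latin1', $text );

# $channel->write($bytes);   # hypothetical call site
```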

The current subroutine signatures implementation contains two features with different purposes

I want to write more about subroutine signatures.

This was my previous topic:

I think subroutine signatures don't need arguments count checking

The current subroutine signatures implementation contains two different features:

  1. Syntax sugar for my ($x, $y) = @_
  2. Argument count checking
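Concretely, under `use feature 'signatures'` (still experimental at the time of writing) the two features arrive together; this sketch shows both the sugar and the implied count check:

```perl
use strict;
use warnings;
use feature 'signatures';
no warnings 'experimental::signatures';

# Feature 1, the sugar: equivalent to  my ($x, $y) = @_;
# Feature 2, the check: calling with the wrong number of arguments
# dies at runtime, whether you wanted that behavior or not.
sub add ( $x, $y ) { return $x + $y }

print add( 2, 3 ), "\n";    # 5
# add(2);                   # would die: "Too few arguments for subroutine"
```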

My opinion is that these two features have different purposes: the first is syntax sugar, the second is argument count checking.
I think it is good to separate the two features, because each serves a different purpose.

I don't want "perl subroutine + (syntax sugar + argument count checking)".

I want "perl subroutine + syntax sugar + argument count checking".

It is not good that features with different purposes are mixed into one feature.

I want syntax sugar, and I don't need argument count checking in my programs; that is natural for me.
But there are also people who want argument count checking.

We should not assume all people want argument count checking.

Syntax sugar is the feature most people are waiting for, but argument count checking is not.

The safe implementation for Perl's future is one that does not force any performance cost on users who don't need the checking.

Back in API again

Now onto another problem!

With the present Perl version of my Data Accessor I had only a very small set of common methods (the CRUD ones and add_dynamic_criterion), while DA::SQL brought into the API many methods and attributes that are SQL specific:

  • Table
  • Fields
  • Group_by
  • Where
  • Having
  • ...

So, given that one of my evil plans is to make DA work across a number of differing DB platforms, it would not be a good thing if each of the accessor drivers had its own API. I could also see the case where drivers for the same sort of DB, say Mongo and Clusterpoint, could have completely different APIs.

This would be no problem for the end user (remember her out on the pointy end writing up RESTful web services and desktop apps), as they would just have that nice little CRUD API to play with.
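One way to keep the driver APIs from diverging is a shared Moose role that pins down the common CRUD surface while each driver keeps its specifics private. This is a sketch with assumed names, not the actual DA code:

```perl
package DA::Role::CRUD;
use Moose::Role;

# The only surface the end user ever sees; every driver must provide it.
requires qw(create retrieve update delete);

package DA::SQL;
use Moose;

# SQL-only details (table, fields, where, having, ...) stay internal.
sub create   { ... }
sub retrieve { 'SELECT ...' }
sub update   { ... }
sub delete   { ... }

# Applied after the methods exist, so the role's 'requires' is satisfied.
with 'DA::Role::CRUD';
```

A Mongo or Clusterpoint driver would consume the same role, so code written against the CRUD surface works with any of them.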

Perl 5 Porters Mailing List Summary: April 5th-13th

Hey everyone,

Following is the p5p (Perl 5 Porters) mailing list summary for April 5th-13th. Enjoy!

About blogs.perl.org

blogs.perl.org is a common blogging platform for the Perl community, written in Perl with a graphic design donated by Six Apart, Ltd.