I write this from sunny Rugby, England, where I’m attending the QA Hackathon 2016. It’s always great to spend time with people who are active in the Perl community, not just socialising, but working on the software we all depend on.
So I spent a little time having a look at MooseX::AbstractMethod, and at first glance it looks promising, as it is supposed to allow one to create an abstract class. So, having a little time, I created a little test code and of course a test file.
The code is a simple abstract parent class that has one ordinary sub and one required (abstract) sub, plus a child class that extends that parent but does not implement the required sub, so it should fail. This gives me a simple four-test plan (a sketch of the classes follows the list):
use the extended child class
fail to instantiate the child class
use the abstract class
fail to instantiate the abstract class
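A rough sketch of what I set up (the class name and the _execute sub come from the test output below; the rest is my guess at the shape):

package DA_AM;                           # the abstract parent class
use Moose;
use MooseX::AbstractMethod;

abstract '_execute';                     # the required (abstract) sub

sub run { return shift->_execute(@_) }   # the one ordinary sub

package DA_AM::Memory;                   # child extending the parent
use Moose;
extends 'DA_AM';
# deliberately does NOT implement _execute, so it should fail

1;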
So on my first run with MooseX::AbstractMethod I get:
ok 1 - use DA_AM::Memory;
ok 2 - DA_AM::Memory dies requires _execute sub
ok 3 - use DA_AM;
ok 4 - Directly opened DA
On behalf of the Rakudo development team, I'm very happy to announce the April 2016 release of Rakudo Perl 6 #98. Rakudo is an implementation of Perl 6 on the Moar Virtual Machine[^1].
This release implements the 6.c version of the Perl 6 specifications. It includes bugfixes and optimizations on top of the 2015.12 release of Rakudo, but no new features.
Upcoming releases in 2016 will include new functionality that is not part of the 6.c specification, available with a lexically scoped pragma. Our goal is to ensure that anything that is tested as part of the 6.c specification will continue to work unchanged. There may be incremental spec releases this year as well.
Please note: This announcement is not for the Rakudo Star distribution[^2] --- it's announcing a new release of the compiler only. For the latest Rakudo Star release, see http://rakudo.org/downloads/star/.
We're delighted to announce that CV-Library is supporting the QA Hackathon for the first time, as a gold sponsor.
CV-Library is the UK's leading independent job site, attracting over 3.8 million unique job hunters every month and holding the nation's largest database of over 10 million CVs.
I have been working on a new release of Net::SSH2 that has undergone major, mostly internal, changes.
There are not too many new features. The aim has been to simplify the code, make it more reliable, and improve the overall consistency and behavior of the module.
The most important changes are as follows:
Typemaps are used systematically to convert between Perl and C types:
In previous versions of the library, input and especially output values were frequently handled by custom code, with every method doing similar things in slightly different ways.
Now most of that code has been replaced by typemaps, resulting in consistent and much saner handling of input and output values.
Only data in the 'latin1' range is now accepted by Net::SSH2 methods. Previous versions would just take Perl's internal representation of the passed data, without considering its encoding, and push it through libssh2.
That caused erratic behavior, as the internal encoding depends not only on the kind of data but also on its history; even latin1 data is commonly encoded internally as utf8.
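In practice that means encoding wide-character strings to bytes yourself before calling into the module. A small sketch (the channel write is just one example call):

use Encode qw(encode);

# Net::SSH2 now only accepts latin1-range (byte) data, so encode
# any wide-character string explicitly before passing it in:
my $bytes = encode('UTF-8', $wide_string);
$channel->write($bytes);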
Now that I have things mostly sorted out, let's actually do some code, or at least some pseudo-code, on how I am going to make it work.
I could go the pure Perl route and do the same sort of thing DBI does: an interface with drivers underneath it, following this design pattern:
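Roughly like this (a hand-wavy sketch; the package and method names are placeholders of my own):

package DA;                                  # the common interface
sub connect {
    my ( $class, $driver, %args ) = @_;
    my $pkg = "DA::Driver::$driver";
    eval "require $pkg; 1" or die "No such driver: $driver";
    return $pkg->new(%args);
}

package DA::Driver::Memory;                  # one driver underneath
sub new { my ( $class, %args ) = @_; return bless {%args}, $class }
sub create   { ... }    # each driver supplies the CRUD operations
sub retrieve { ... }
sub update   { ... }
sub delete   { ... }

1;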
I have cloned out and customized the DBI code and pattern in other applications, and it really works well when you want the common glue-on-top approach.
One thing I want to avoid is API deviation, or sub bloat as I like to call it. It is an all too common occurrence with this design pattern, and I am guilty of it myself: when I added Scrollable Cursors into DBD::Oracle, there was nothing in the DBI spec about them, and I recall a little note from Tim asking me to explain why I didn't create a generic DBI version before adding them to DBD::Oracle.
The current subroutine signatures feature contains two different features:
Syntactic sugar for my ($x, $y) = @_
Argument count checking
My opinion is that these two features have different purposes: the first is syntactic sugar, the second is argument count checking.
I think it is good to separate them, because each has a different purpose, and it is not good to mix features with different purposes into one.
I want the syntactic sugar, but I don't need argument count checking in my programs; that is natural for me.
But there are people who also want argument count checking.
We should not assume that everyone wants argument count checking.
Syntactic sugar is the feature most people are waiting for; argument count checking is not.
The safe implementation for Perl's future is one that does not force any performance cost on the user.
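For example, with the current implementation of signatures (Perl 5.20+), the count check comes bundled with the sugar whether you want it or not:

use feature 'signatures';
no warnings 'experimental::signatures';

sub add ($x, $y) { return $x + $y }

print add( 1, 2 ), "\n";    # fine, prints 3
add( 1, 2, 3 );             # dies: "Too many arguments for subroutine"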
The QA Hackathon (QAH) will be kicking off on Thursday morning this week, starting 4 days of intensive work on the CPAN toolchain, test frameworks, and other parts of the CPAN ecosystem. The participants will be gathering from all over the world on the Wednesday evening.
The QAH wouldn't be possible without the support of all of our generous sponsors. In this post we acknowledge the silver, bronze, and individual sponsors. Many of the Perl hackers taking part wouldn't be able to attend without your support. On behalf of the organisers and all attendees, thank you!
In my little API exercise in my last post, I noticed that I may be missing an important part of the puzzle: some sort of generic connector function.
Yes, it does indeed! The not-so-well-known Git configuration parameter core.whitespace is quite versatile in this respect. It supports several values, including tabwidth, which tells Git how many space characters an indentation (tab) must be equal to.
core.whitespace
: A comma separated list of common whitespace problems to notice. git diff will use color.diff.whitespace to highlight them, and git apply --whitespace=error will consider them as errors. You can use prefix "-" to disable any of them (e.g. -trailing-space):
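For example, to treat a tab as four columns and flag lines indented with spaces instead of tabs, one possible combination (my own illustration) is:

git config core.whitespace "indent-with-non-tab,tabwidth=4"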
We're still hacking away on the Veure MMORPG and things are moving forward nicely, but I thought some folks would like to hear more about our development process. This post is about our test suite. I'd love to hear how it compares to yours.
Here's the full output:
$ prove -l t
t/001sanity.t ... ok
t/perlcritic.t .. ok
t/sqitch.t ...... ok
t/tcm.t ......... ok
All tests successful.
Files=4, Tests=740, 654 wallclock secs ( 1.57 usr 0.20 sys + 742.40 cusr 15.79 csys = 759.96 CPU)
Result: PASS
Let's break that down so you can see what we've set up. You'll note that what we've built strongly follows my Zen of Application Test Suites recommendations.
[This is a post in my latest long-ass series. You may want to begin at the beginning. I do not promise that the next post in the series will be next week. Just that I will eventually finish it, someday. Unless I get hit by a bus.
IMPORTANT NOTE! When I provide you links to code on GitHub, I’m giving you links to particular commits. This allows me to show you the code as it was at the time the blog post was written and ensures that the code references will make sense in the context of this post. Just be aware that the latest version of the code may be very different.]
Last time
I talked briefly about the raft of failures that CPAN Testers threw up for me to look at. I mentioned that there were roughly 3 groups of failures, and that one of them was bigger than the other two. I even gave a hint as to what it was in one of my examples:
So, having sketched out my test suite, let's have a look at the API I am trying to express.
For me the API is the most important part of the design. Build a good, workable, general-purpose one and it may become a de-facto standard, like DBI; build a very narrow one and it will languish in the niche it was designed for. It has been said many times before, but it bears repeating here, that any API should be:
Easy to Learn
Easy to Use
Easy to Extend
Consistent, and
Hard to misuse
Well, at the higher level my Data Accessor will expose just the four basic CRUD operations, so it should be easy to use and learn.
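So the surface I am aiming for looks something like this (the method names are just my guess at this stage):

my $da  = DA->connect('Memory');                       # pick a driver
my $id  = $da->create( { name => 'test' } );           # Create
my $row = $da->retrieve( { id => $id } );              # Retrieve
$da->update( { id => $id }, { name => 'changed' } );   # Update
$da->delete( { id => $id } );                          # Delete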
I'm a sucker for early access to free APIs. So I quickly went forward when
Backblaze opened up access to their B2 storage API, and implemented a client
for it, Backblaze::B2. I feel a bit guilty for releasing a module without having a use case
for it myself, but instead of letting it rot on my filesystem, I'm putting
it out for others to use.
where the <oct> capture represents things that
need to be run through the oct built-in, and the
<float> capture comes from perlfaq4.
The problem here was that the <float> expression
matched things like '09', which was not what I wanted. What
I wanted was to have the entire expression fail if it got past the
<oct> expression and found something beginning with
'0', other than '0' itself.
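Here is a hedged reconstruction of the failure mode and the guard I was after (the real patterns are more involved; these are simplified stand-ins):

my $number = qr/
    (?<oct>   0 [0-7]+ )            # things to run through oct()
  |
    (?<float> (?! 0 \d )            # fail on a leading '0' unless
                                    # it is '0' by itself
              \d+ (?: [.] \d* )? )  # simplified perlfaq4-style float
/x;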
We're very happy to announce that SureVoIP are supporting the QA Hackathon as a gold sponsor.
SureVoIP® (Suretec Systems Ltd.) is an Ofcom-registered Internet Telephony Service Provider supplying Hosted VoIP solutions, SIP trunks, UK inbound numbers, International SIP numbers, a partner program, public API (powered by Catalyst) and other related VoIP products and services.
Well, let's see. As I am in testing mode right now, let's have a quick look at DA.pm's expressed API to see what I should test, or at least how I should organize my tests.
Always start with the basics, so I will have
00-load.t
that will do your basic load checks to see if it will work on the perl that is trying to run it. No need to go into details there.
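Something like the stock boilerplate (assuming the module is called DA):

use strict;
use warnings;
use Test::More tests => 1;

BEGIN { use_ok('DA') or BAIL_OUT('DA will not even load') }

diag("Testing DA $DA::VERSION, Perl $], $^X");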
As well I will add in a
02-base.t
And this will test just a little more than load: perhaps the use of a driver and some basic set and get checks of the higher-level stuff.
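Maybe something along these lines (a rough sketch; the driver name and the accessor are hypothetical):

use strict;
use warnings;
use Test::More;

use DA;

my $da = DA->connect('Memory');            # hypothetical driver
isa_ok( $da, 'DA::Driver::Memory', 'got the expected driver class' );

$da->name('test');                         # hypothetical accessor
is( $da->name, 'test', 'basic set and get round-trips' );

done_testing();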
Having failed to find a working profiler on npm I ended up webpacking nqp-js-on-js and profiling it directly in Chrome.
It turns out the first big slowdown was the lack of multi caching.
I implemented that.
The second big slowdown was actually the slurp() function.
MoarVM doesn't handle concatenating large numbers of huge strings very well, so in the cross-compiler, instead of concatenating bits of JavaScript code, it's often much faster to write them to disk and then slurp them back in.
On nqp-js-on-js, due to a misset buffer size, slurp() turned out to be sluggish.
Because I was profiling a webpacked version (which doesn't do IO, as it runs in Chrome), this baffled me for a bit.
Changing the nqp::readallfh buffer size from 10 to 32768 sped things up a lot, and I'm back to compiling Rakudo.
Based on the output of the profiling, there seem to be a few low-hanging-fruit optimizations good for a bunch of easy ~5% speedups, but I'll work on them later on, as having actual Perl 6 running instead of NQP will give me a better view of how we want to optimize things.
I think subroutine signatures don't need argument count checking,
because performance is more important than utility.
Subroutine signatures should be optimized from a performance perspective, not a utility one.
Argument count checking is extra logic, so it hurts runtime performance.
And I like simple and compact syntax.
sub foo ($x, $y) {
    ...
}
# same as
sub foo {
    my ($x, $y) = @_;
}