Test-driven development is a must, which is why I think it is necessary to check that there is at least one test for every module, especially when you can't know a priori how heavily a namespace will be populated, for example if you decide to support plugins.
So here comes a very nice core module called Module::Pluggable! It can certainly help a lot!
Read a short explanation (more code than words :)) in this article.
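A minimal sketch of that check, using only core modules: walk lib/ and report every module that lacks a matching test file. The lib/Foo/Bar.pm to t/foo-bar.t naming convention here is a hypothetical one, so adjust it to your project's layout:

```perl
use strict;
use warnings;
use File::Find;
use File::Spec;

# Return the relative paths of modules under $lib_dir that have no
# matching test file under $test_dir (lib/Foo/Bar.pm -> t/foo-bar.t).
sub modules_missing_tests {
    my ($lib_dir, $test_dir) = @_;
    my @missing;
    find(sub {
        return unless /\.pm\z/;
        my $rel = File::Spec->abs2rel($File::Find::name, $lib_dir);
        (my $t_name = lc $rel) =~ s{\.pm\z}{.t};
        $t_name =~ s{[/\\]}{-}g;    # lib/Foo/Bar.pm -> foo-bar.t
        push @missing, $rel
            unless -e File::Spec->catfile($test_dir, $t_name);
    }, $lib_dir);
    return sort @missing;
}
```

Run it from a test of its own (say, t/00-every-module-has-a-test.t) and fail if the list is non-empty; newly dropped-in plugins then get flagged automatically.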
It's been almost a week since the Perl QA Hackathon 2011 in Amsterdam ended. As has been the case every year since 2008, the main topics were the Test and CPAN toolchains. Other people will write far better than I can about the work they achieved during those three days.
In this post, I'd like to give some information about the organization.
After too long a wait, I have updated File::Slurp with a long list of requested features and bug fixes. See the Changes file for what went into the recent burst of three releases. Major changes include the new prepend_file() (which inserts data at the beginning of a file), support in the binmode option for all the values of the binmode builtin, a rewritten and improved benchmark script, and more synopsis examples.
In the near future, expect to see the subs edit_file() and edit_file_lines(). These support modifying a file in place from inside your Perl program (similar to -pi on the command line).
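Until those subs land, the -pi behaviour they will package up can be sketched with core Perl alone, via the in-place-edit variable $^I; this is a rough sketch of the technique, not File::Slurp's actual implementation:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($fh, $file) = tempfile(UNLINK => 1);
print {$fh} "foo\nbar\n";
close $fh or die "close: $!";

{
    # Edit the file in place, as -pi does: every line is read,
    # modified, and written back to the replacement file.
    local ($^I, @ARGV) = ('', $file);
    while (<>) {
        s/foo/baz/;
        print;
    }
}

open my $in, '<', $file or die "open: $!";
my $content = do { local $/; <$in> };
print STDOUT $content;   # baz\nbar\n
```

Note that $^I = '' (no backup suffix) is fine on Unix but not on Windows, which is one reason a packaged-up edit_file_lines() will be welcome.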
Besides adding support for the new stuff in v 0.16, I've also added a few features:
scrolled_search()
Scrolling through a long list of results in ElasticSearch has always been possible, but it required a bit of repetitive code, which is now nicely packaged up in scrolled_search(). So you can do:
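The snippet the post refers to is missing here; going by the module's documented interface, a scrolled search looks roughly like this (it assumes a live ElasticSearch cluster and an $es client object, and the index name is made up for illustration, so it won't run standalone):

```perl
my $scroller = $es->scrolled_search(
    query => { match_all => {} },
    index => 'tweets',          # hypothetical index name
    size  => 100,
);

while ( my $doc = $scroller->next ) {
    # process each hit; the scroller re-issues the scroll
    # requests behind the scenes
}
```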
Suppose you want to check whether a number is either 70 or 73, and your duct-tape-and-chewing-gum production environment is stuck in the stone age without the smart-match operator.
You might be tempted to write foo() if $num =~ /^70|73$/, as I did. Oops. That will match 70 and 73, but also anything that starts with 70 or ends in 73, like 173 or 273 or foo73, because alternation binds more loosely than the anchors. /^(?:70|73)$/ fixes it.
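A quick demonstration of the misfire: the unparenthesized pattern really means /(^70)|(73$)/, so each alternative gets only one anchor.

```perl
use strict;
use warnings;

my @nums = qw(70 73 173 703 7);

# /^70|73$/ parses as /(^70)|(73$)/ -- each branch has one anchor.
my @naive = grep { /^70|73$/ }     @nums;
my @fixed = grep { /^(?:70|73)$/ } @nums;

print "naive: @naive\n";   # naive: 70 73 173 703
print "fixed: @fixed\n";   # fixed: 70 73
```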
The moral of the story: life sucks without smartmatch.
I recently wrote about 80% hacks and this post is closely related to that. The overall concept is "don't let the perfect be the enemy of the good".
When it comes to building tools to identify issues, we often think of a "perfect" solution and then try to implement it. The business objects to this because it wants to develop products, not test code. Developers object to the business because they want to know their code works. This tension is very difficult to resolve, so I fall back on my favorite example:
use Carp;

sub reciprocal {
    croak "Reciprocal of 0 is not allowed" unless $_[0];
    return 1 / $_[0];
}
Right off the bat, I could see people writing two tests for that: one for zero as an argument and one for a non-zero argument. And you know what? That will get you 100% test coverage on this function, and that's exactly where the whole "know their code works" argument falls down.
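Those two tests might look like this (a sketch using the core Test::More; the trailing '0 but true' observation is my own addition, showing an input that full statement coverage never exercised):

```perl
use strict;
use warnings;
use Test::More tests => 2;
use Carp;

sub reciprocal {
    croak "Reciprocal of 0 is not allowed" unless $_[0];
    return 1 / $_[0];
}

is( reciprocal(4), 0.25, 'non-zero argument' );

eval { reciprocal(0) };
like( $@, qr/not allowed/, 'zero croaks' );

# That's 100% statement coverage -- and yet the string '0 but true'
# slips past the truth test and dies with "Illegal division by zero"
# instead of the friendly croak. Coverage is not correctness.
```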
This post is just in case I forget this again in the future. Hopefully, I'll find it when I search the net for this issue.
This post is a testament to my tenacity (more like my stupidity for letting myself get mentally exhausted).
I have been working on integrating some Mojolicious apps into our existing Apache CGI/mod_perl set of applications. When I went to test the latest version of a particular app, which changed how things were added, I decided to put in some warn statements to see whether I had the flow right. Looking at the error log, I started seeing everything happen twice!
The methods that get used to create the initial view. The methods that get used to add a new item and subsequently view it. Everything.
If you're running a CPAN Testers smoker with metabase-relayd, thanks to Chris Williams' forward planning, you can now run it in offline mode as:
metabase-relayd --offline
In Chris' own words:
> It will still collect test reports, but it won't submit them to the metabase.
>
> I knew there was another reason for writing a relay.
>
> /me skips away going 'la la la'
>
> :)
If you're not using the relay, then you may need to stop testing until the EC2 servers come back online. We'll try to keep an eye on things and let you know when all has returned to normal.
/me prepares for Chris' 100k+ report submissions when he switches his relay back to online mode!
This means that if someone has posted a completely RTFM question, you can point them to the answer, but that F had better be fracking silent.
See what I did there? I used an "F" word. Why? This is my fracking blog. If you don't like my "F" word, too bad. It's my fracking blog. Not yours. Mine.
Similarly, the perl.beginners mailing list is probably not yours. While some of the people responding to Peter Scott's email had perfectly valid reasons why being overly gentle isn't always a good thing, the overriding point is simple: the list was created to be a flame-free environment. If you object to that, that's OK. However, you're probably not the list creator, and thus you don't get to make the rules.
If you don't like the rules, that's OK. If you can't play by the rules, it's not. I thought people were taught this when they were five years old, but obviously I was wrong.
Hello, Perl bloggers! I decided to start blogging about most of my exclusively Perl-related stuff here on blogs.perl.org, in the hope of getting more comments from active Perlers. (Until now, I've blogged about it on my technical LiveJournal blog and, previously, in my use.perl.org journal.) You can learn more about me on my home site, www.shlomifish.org.
OK, having put that aside, let's move on to the main topic of this post.
Many months ago I wanted to use the Text::Table CPAN module to present a table related to the meta-scan heuristics construction scheme of Freecell Solver. I wanted to present nicely formatted borders, using the Unicode box-drawing characters (which some people will recall from DOS). However, I found it difficult to specify the separators in the rulers properly based on their indices, as they were assumed to be the same globally. As a result, I've written a patch and placed the modifications in a github repository.
At $work we're nearing the end of a long upgrade process that has included moving from Debian etch to lenny, from svk to git, from Perl 5.8 to 5.10, from Apache 1.3 to Apache 2 (and then to Plack), and moving to recent versions of DBIC and Catalyst (which had previously been pegged at a pre-Moose state).
For code deployment we had been packaging all Perl modules as Debian packages and keeping our own CPAN repository, but wanted a more 'Perl-ish' solution avoiding the various pitfalls with OS packaging and Perl.
THE STRUCTURE
The system that we implemented consists of the following parts:
The community at #perl6 on freenode derived much pleasure from studying and discussing the entries to Masak's Perl 6 Coding Contest 2010. Five problems, five contestants, 26 entries (go figure), all using Rakudo. Since Niecza is the new Perl 6 kid on the block, it seemed an interesting idea to try it out on the submissions.
Alas, it was very disappointing to see error after error from Niecza. The results are included below for closer analysis. During testing it began to look as if Niecza would not run any entries, but fortunately it managed with p3-fox and p3-matthias. (They produced wrong answers, but hey, running code!)
The scripts that died with "Parse failed" never ran: the parser considered the source code not to be compliant with the Perl 6 STD grammar. There are lessons here: maybe Rakudo is too lenient in places, or maybe STD needs to be tweaked. It is also possible that the scripts could be patched to comply with STD; masak++ has already suggested a project to investigate.
These days RESTful web services are all the buzz, even though many who think they know what that means don't have a clue. But if you want to build a truly RESTful web app to go with your RESTful web services, you'll quickly learn that web browsers only support GET and POST in HTML forms. Plack::Middleware::HttpMethodTunnel to the rescue, after the jump.
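The technique such middleware packages up can be sketched in plain PSGI terms: a POST carries the "real" verb out of band, and a wrapper rewrites REQUEST_METHOD before the app sees it. This hand-rolled sketch is not the middleware itself, and the X-HTTP-Method-Override header is just one common convention for the tunnel:

```perl
use strict;
use warnings;

# Wrap a PSGI app: if a POST carries an X-HTTP-Method-Override header
# naming PUT or DELETE, rewrite REQUEST_METHOD before calling the app.
sub tunnel_wrap {
    my ($app) = @_;
    return sub {
        my $env  = shift;
        my $real = $env->{HTTP_X_HTTP_METHOD_OVERRIDE} || '';
        if ( $env->{REQUEST_METHOD} eq 'POST'
             && $real =~ /\A(?:PUT|DELETE)\z/i ) {
            $env->{REQUEST_METHOD} = uc $real;
        }
        return $app->($env);
    };
}

# A toy app that just reports the method it saw:
my $app = sub {
    my ($env) = @_;
    return [ 200, [ 'Content-Type' => 'text/plain' ],
             [ $env->{REQUEST_METHOD} ] ];
};

my $wrapped = tunnel_wrap($app);
my $res = $wrapped->({
    REQUEST_METHOD              => 'POST',
    HTTP_X_HTTP_METHOD_OVERRIDE => 'DELETE',
});
print $res->[2][0], "\n";   # DELETE
```

An HTML form then only ever submits method="post" plus the override header (or a _method field, in some frameworks' variant of the same trick), and the app gets to dispatch on the full RESTful verb set.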
I’ve been setting up my Windows 7 laptop to have as complete a Perl development environment as possible. I don’t have a choice about the operating system in this case, but I am used to the Unix-style environment, which I’d like to maintain.
One option is to use Cygwin and just dive into there for everything. However, I do want some of my code to work natively under Windows, so there is still a need to run Perl tests at a Windows console. I might as well develop there too, if I can. I've ended up with the following steps, which are in places bodgy hacks, but they show that the principle works, at least:
I configured them, played around with them, injected example values, etc.
Besides that, I made some progress with SpamAssassin. After getting access to the repo (using git-svn), I found an OK-looking version 3.3.2 waiting there that passes all tests on Perl 5.12.3. I volunteered on spamassassin-dev and IRC to make the corresponding release.
Different languages are suited to different things. We know this well and we remind people about it from time to time. For example, Erlang is a great language for running a massively concurrent system, but the language itself is rather slow, so it would be awful for performance-intensive procedural work.
By the same token, there are some things for which Perl is simply not the first choice. If you want to write a rich, cross-platform GUI application, there are a number of choices, but Perl's not your best one by a long shot. I once timidly suggested that a GUI toolkit be pushed into the core to resolve this, and I was shot down immediately for excellent technical reasons which nonetheless relegate us to a non-contender in this arena.
I have released the first non-developer version of Marpa::XS: 0.002000. Marpa::XS is the XS-accelerated version of Marpa. Marpa is a parser generator: it parses any grammar that you can write in BNF. If that grammar is one of the kinds in practical use (yacc, LALR, recursive descent, LR(k), LL(k), regular expressions, etc.), Marpa and Marpa::XS parse it in linear (O(n)) time.

The parts of Marpa::XS that were rewritten in C run 100 times faster than the original pure Perl. Typical applications run approximately 10 times faster. There is a new, simplified interface for reading input. The documentation has been improved. Error reporting and tracing of grammars with right recursion have been simplified and much improved.
Day 2 was full of yak shaving. It did not look very promising at first sight, but in the end I think it was OK and needed doing. The annoyance of that work is the primary reason it hadn't already been done, and exactly why it was worth doing at the hackathon.