And what is most amazing about it? That 186 of them, or 46%, are either
resolved or patched. That's a wonderful success rate.
If you have a login on rt.cpan.org you can issue this query to see the
actual 401 tickets.
Very nice graphics and ad-hoc statistics are then provided by RT when
you look at the bottom of the page. I've never noticed that RT footer
under the result of a query before.
For example I can create a bar chart by CreatedMonthly that shows me
when the rally started: 65% of the tickets were created from August
to December. That's because http://analysis.cpantesters.org/ was relaunched
in the first days of August, after a break of a few months caused by the
cpantesters 2.0 launch. Analysis is the real workhorse that drives me
in this game. It's kind of like playing solitaire with bugs. Analysis
provides the deck: cards of many different shapes and colors, my job is
to read them and find the ones that can be resolved quickly.
5 bazillion functions imported by default into the top-level namespace.
The language community is essentially the VB6/JavaScript community --
well, at least JavaScript has clean semantics.
In other words, PHP is not the market we want to go after.
I'm just saying that PHP is not a good model to base a language on, except for ease of configuration.
It's been some time since I last blogged. My next post will be looking forward to what I'm planning to write next year, but first I should blog about what stuff I've come up with in the past couple of months.
Shared memory
I had already written POSIX::RT::SharedMem before, but even though POSIX IPC is a great idea (it's remarkably sane to use), it isn't widely implemented (Linux, Solaris and recent versions of OS X support it out of the box; FreeBSD does too, but you have to enable it explicitly). Therefore I ended up writing SysV::SharedMem, which does pretty much the same thing -- accessing shared memory as a string -- but is implemented using a different backend. It's largely derived from File::Map, unlike POSIX::RT::SharedMem, which delegates most of the work to File::Map. Writing it has made my conviction that SysV IPC is an incredibly crappy API even stronger, but fortunately that's largely hidden from the end-user.
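To show roughly what a module like SysV::SharedMem has to hide, here is a minimal sketch of the raw SysV interface through Perl's built-in shmget/shmwrite/shmread (constants from the core IPC::SysV module): numeric key-based ids, fixed sizes, offset/length access and manual cleanup.

```perl
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID);

# Create a private 4 KiB segment; shmget returns a bare numeric id,
# not a handle or a string you can work with directly.
my $id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600)
    // die "shmget failed: $!";

# No filehandles, no plain strings: every read and write goes through
# explicit offset/length calls.
shmwrite($id, "hello, shm", 0, 10) or die "shmwrite failed: $!";

shmread($id, my $buf, 0, 10) or die "shmread failed: $!";
print "$buf\n";    # hello, shm

# Nothing is cleaned up automatically; forget this and the segment
# lingers until reboot (or a manual ipcrm).
shmctl($id, IPC_RMID, 0) or die "shmctl failed: $!";
```

Compare that with simply reading and writing a Perl string that maps onto the segment, which is the interface both shared-memory modules above expose.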
James Clark in XML vs. The Web has finally said what needed to be said -- that XML is a singularly bad format for data transmission. Here is the crux of what Mr. Clark had to say:
It's "Yay", because for important use cases JSON is dramatically better than XML. In particular, JSON shines as a programming language-independent representation of typical programming language data structures. This is an incredibly important use case and it would be hard to overstate how appallingly bad XML is for this. The fundamental problem is the mismatch between programming language data structures and the XML element/attribute data model of elements.
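Clark's point is easy to see with an ordinary nested data structure. A minimal sketch using the core JSON::PP module (the data itself is an invented example):

```perl
use strict;
use warnings;
use JSON::PP;

# A typical in-memory structure: a hash with a string, a boolean and an array.
my $user = {
    name   => 'Alice',
    active => JSON::PP::true,
    tags   => [ 'perl', 'cpan' ],
};

# JSON maps it one-to-one: objects, arrays, strings, booleans.
# (canonical() just sorts the keys for stable output.)
my $json = JSON::PP->new->canonical->encode($user);
print "$json\n";   # {"active":true,"name":"Alice","tags":["perl","cpan"]}

# XML offers no such direct mapping: is a child element a hash key, a
# list item, or mixed content? Are repeated <tag> elements a list, or is
# the list one delimited attribute? Every application must invent its
# own convention, e.g.:
#   <user active="true"><name>Alice</name><tag>perl</tag><tag>cpan</tag></user>
```

The JSON round-trips back into the same hash with a single decode call; the XML version needs an application-specific mapping in both directions.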
Tradition tells us to look back and forward when a year turns. And this year I choose to be a traditionalist.
The Past
It's been a remarkable year for me, work- and Perl-wise. I had the opportunity to work with two great companies with skilled and devoted coworkers.
And I dived head first into great distributions like
Dist::Zilla
Moose
PSGI
Mojo
Catalyst
DBIx::Class
XML::Compile
... and many more
Not all are from this year, or even new, but they are all great examples of the code quality of CPAN.
I had a short look at Perl 6. Alas, there wasn't enough time in the year to share between the exciting stuff in Perl 5 and 6. But next year...
Connectivity
RESTful applications are a nice and easy way to integrate functionality, even internally in a company.
I wrote Catalyst::Model::REST to make it even easier to access other applications through the model layer of Catalyst.
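Such a model can be little more than configuration. The following sketch is illustrative only -- the package name, server URL and config keys are my assumptions, so check the Catalyst::Model::REST documentation for the real interface:

```perl
package MyApp::Model::API;
use Moose;
extends 'Catalyst::Model::REST';

# Assumed config keys: where the remote REST application lives and
# which content type to serialize request/response bodies as.
__PACKAGE__->config(
    server => 'http://api.example.com',
    type   => 'application/json',
);

1;
```

A controller could then talk to the other application through the model layer with something like $c->model('API')->get('/users/42') (method name assumed).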
Yesterday we had our first revived Tel Aviv Perl Mongers meeting. You probably came across the announcement once or twice in your RSS feeds, mailing lists or even through a personal message from me.
Apparently this worked quite well. The last Perl Mongers meeting included 5 to 6 times as many people as previous meetings. It was pretty awesome. We also scored pretty high on the variety of people.
I got to meet a lot of new people, hear interesting talks and mainly have a lot of laughs.
After the talks, the top-posters went out to dinner and had a great time. The discussion of how to represent a salad in pure OO form ensued and all hell broke loose from then on! :)
Next meeting we're hoping to get even more people to attend. My plans include shorter talks, a lot more fun (such as lightning talks) and getting more people to step up (help organize and give presentations on various issues).
Thanks to everyone who came, everyone who gave talks, helped with the website (DNS, hosting), graphics, flyers, advertising and so on.
Over the summer I had the privilege of attending a week-long workshop on CUDA hosted by the Virtual School of Computational Science and Engineering. It was free for students from the University of Illinois (and other partner institutions, I presume) and it was excellent. If you want to learn CUDA quickly and you want to learn it well, I highly recommend attending such a workshop.
Over the fall I started writing and using CUDA kernels in my research. This meant writing code in C. C is a great language, but it is not known for its whipuptitude. Almost immediately, I noticed that my main() function did little more than manage memory and coordinate kernel launches. This, I thought to myself, is exactly what scripting languages are for, and wished there was something out there to let me manage CUDA memory and invoke CUDA kernels from Perl.
This is how I started down the path of writing perl_nvcc.
As predicted in the October Summary, the latest milestone for CPAN Testers came just before Christmas. On 22nd December to be exact, as can be seen on the Interesting Stats page of the CPAN Testers Statistics site. Once again, many thanks to all the testers who have helped contribute to the milestone.
Congratulations to Chris Williams for posting the 10 millionth report. It was a PASS for App-cpanminus-1.1005.
YES! I am extolling the greatness of Dist::Zilla! It really has a lot of greatness. I think it's finally reduced the friction enough that I'm going to start making dists for all my internal projects. A big thanks to all involved in building up such lovely infrastructure.
A long time ago I posted about Roles without Moose and while I still feel that for most cases Moose is the way to go, there can still be a bit of resistance to the idea. Matt Trout responded to my post with how one could have just roles (read his entire post to understand the context):
    package Foo::Manual;
    use Moose;
    extends 'UNIVERSAL'; # get rid of Moose::Object
    with 'Foo::Manual::Bar';
    sub new { bless {} => shift }
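For completeness, the role this consumes could look like the sketch below -- the body of Foo::Manual::Bar is my illustration, not from Matt's post:

```perl
package Foo::Manual::Bar;
use Moose::Role;

# A role is just a bundle of methods (and possibly required methods)
# that gets composed into the consuming class at `with` time.
sub report { return "reporting from " . ref shift }

1;
```

With that in place, Foo::Manual->new->report works, and thanks to extends 'UNIVERSAL' the instances never see Moose::Object in their inheritance chain.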
This still involves putting Moose on your servers and when you're faced with a large dev team that is very conservative in their approach, this might be an uphill battle. So what are my alternatives?
Giving this a try. I was brought back into the Perl world a couple of years ago by my old friend Luis Campos (LMC), and I am now writing some of my own modules in Modern (or quasi-Modern) Perl.
Yacc was a major breakthrough. For the first time, automatic generation of efficient, production-quality parsers was possible for languages of practical interest. Yacc-generated parsers had reasonable memory footprints. They ran in linear time. But error reporting was overlooked.

Then as now, the focus in analyzing algorithms was on power -- what kinds of grammar an algorithm can parse -- and on resource consumption. This leaves out something big. Our frameworks for analyzing things affect what we believe. We find it hard to recognize a problem if our framework makes us unable to articulate it. Complaints about yacc tended to be kept to oneself. But while yacc's overt reputation flourished, programmers were undergoing an almost Pavlovian conditioning against it -- a conditioning through pain.
Well, I'm living with my mother, who has Alzheimer's, which is a bit like being out of work, in that I sit around a lot. But I can go out - I just have to lock the front door and garden gate so she doesn't accidentally let my 2 miniature dogs out.
Nevertheless, I hope to stay productive in the Perl arena.
So, post frequently, and that'll give me things to read :-).
Absolutely ages ago, I took over maintainership of RTF::Parser. Grand plans abounded, but mostly what I ended up doing was fixing a few of the more outrageous bugs and making it use the much more sensible RTF::Tokenizer as its back end.
People still use RTF::Parser, and a couple of other modules on CPAN use it, but I really can't give it the love and care it deserves. The code is mildly crazy, there are age-old outstanding bugs on rt ... this Xmas, will you take in a deserving module?
If you've had a look at search.metacpan.org, you may have noticed that some of the author pages have more info than you might find at search.cpan.org. Take, for instance, FREW's author page. You'll see that it has links to his blog, Twitter, StackOverflow, website etc. Lots of information there which allows you to find his various online presences without having to do all too much digging around.
If you'd like to expand your author info, it's pretty easy. We don't have a login for you yet, but this is a trivially easy stop-gap solution to get yourself up and running:
You're writing Perl code in vim and have just typed a package name - maybe you
want to create an object of this class:
    some_statement;
    my $o = Some::Class->new;
    do_something_with($o);
You obviously need to write use Some::Class at the top. So you either move
the cursor near the top and add the line, then jump to the previous line
number, or maybe you split the window, move to the new viewport, make the
change, then close that viewport.