Both of my proposals for OSCON were turned down, so I won't be able to make it this year. The BBC is also feeling the financial crunch, so international conferences are harder to manage. I'm not terribly disappointed, but it would have been nice to see my beloved Portland again. Fortunately, with my wedding in June, a number of my close friends from Portland will be in London. If you can't bring Ovid to Portland, bring Portland to Ovid.
In addition to The Perl Foundation being accepted into GSoC 2010, BioPerl is now also part of the Google Summer of Code! The Open Bioinformatics Foundation, which also includes BioPython, BioRuby, and others, has been accepted into the Google Summer of Code for 2010. We are actively looking for students interested in OBF-related bioinformatics projects; new ideas are welcome. Project ideas and other details can be found here:
This isn't the first year BioPerl has been part of GSoC; work from 2008 GSoC student Mira Han's successful project, a phyloXML parser for BioPerl, was recently published.
Update: Rough project idea for use of Modern Perl tools or Perl 6 with BioPerl now added.
Each year, Google offers students the opportunity to spend their summer coding on open source projects. You propose a project, and if selected, you're assigned a mentor and provided a $4500 stipend. It is a competitive program to get into, but it offers an amazing amount of real-world experience and the ability to get seriously involved in an open source project of your choosing. The Perl Foundation spans a wide variety of projects, including Perl 5, Perl 6, and Parrot, with many great mentors knowledgeable in areas ranging from language design, virtual machines, and compilers to web and desktop applications. This program is a great chance to get more involved in the Perl community and put a substantial project's worth of source code in your portfolio.
Adam Kennedy's recent post on threads in Padre reminded me to post about an experiment of mine. Last year I learned some Erlang. I really liked its model of multi-threading: many threads that share no data at all and communicate through message queues. A lot of other things were really annoying though, especially its crappy support for strings and its general lack of libraries. I kept thinking that I wanted Perl with these threads, so I started implementing it. And thus threads::lite was born.
The main difference between threads::lite and threads.pm is that t::l starts an entirely new interpreter instead of cloning the existing one. If you've loaded a lot of modules, that can be significantly quicker and leaner than cloning. As an optimization, it supports cloning itself right after module loading, so you can quickly start a large number of identical threads. Threads can be monitored, so that on thread death they send an exit code to their listeners.
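To make the model concrete, here is a small sketch of what Erlang-style message passing looks like with this kind of API. The function names (spawn, receive, self), the monitor option, and the send method are written from memory of threads::lite's documentation, so treat them as assumptions and consult the module's POD for the real interface.

    use strict;
    use warnings;
    # NOTE: the imports and options below are assumptions based on a reading
    # of threads::lite's documentation; check the POD before relying on them.
    use threads::lite qw(spawn receive self);

    # The worker runs in a freshly started interpreter and shares no data
    # with its parent; everything travels through the message queue.
    sub worker {
        my ($parent, $payload) = receive;        # block until a message arrives
        $parent->send("done: $payload");
    }

    my $thread = spawn({ monitor => 1 }, \&worker);   # monitored: we hear about its death
    $thread->send(self, 'some work');                 # pass our own handle so it can reply

    my ($reply) = receive;                            # wait for the worker's answer
    print "$reply\n";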
What are your options when you would like to deploy an application written in Perl using the Catalyst MVC framework? Is the most advanced option to rent a dedicated server, install everything yourself and go?
Are there companies that offer ready-made dedicated or virtual servers with Catalyst and mod_perl already configured? I found a list of hosting companies on the Catalyst web site (though it was not linked from the main page), but as far as I can see none of them offers a ready-made solution.
I am not in the web application building business, but I wonder if it would not make sense to have a web hosting offering with a turn-key Catalyst environment, perhaps even with a small demo application already running.
It could be as simple as a standard dedicated server (or VPS) with all the necessary packages already installed and Apache configured.
It's nice to get a big project to the point where it produces something which is actually useful. I'm pleased to announce html_fmt, an HTML pretty printer that's part of the Marpa::HTML distribution. The command html_fmt http://perl.org will pretty-print the HTML for http://perl.org. Tags are printed out one per line, indented according to structure. html_fmt supplies any missing start or end tags, adding comments to that effect. html_fmt respects <pre> tags.
If the argument is not a URI, it's interpreted as a file name. Suppose, for example, that very_bad_html is a file containing this HTML: "<tr>cell data". Then html_fmt very_bad_html will convert it into this:
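The output listing from the original post is not reproduced here. Purely as an illustration of the general shape (the exact indentation and the wording of the comments are html_fmt's to decide, so this is an assumption, not its actual output), the pretty-printed result supplies the missing structure around the fragment and flags each supplied tag:

    <html>        <!-- supplied missing start tag -->
      <head>      <!-- supplied missing start tag -->
      </head>     <!-- supplied missing end tag -->
      <body>      <!-- supplied missing start tag -->
        <table>   <!-- supplied missing start tag -->
          <tbody> <!-- supplied missing start tag -->
            <tr>
              <td>   <!-- supplied missing start tag -->
                cell data
              </td>  <!-- supplied missing end tag -->
            </tr>    <!-- supplied missing end tag -->
          </tbody>   <!-- supplied missing end tag -->
        </table>     <!-- supplied missing end tag -->
      </body>        <!-- supplied missing end tag -->
    </html>          <!-- supplied missing end tag -->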
Yesterday, Gabor and I went to a PyWeb-IL meeting. It's a monthly gathering of mostly-web Python programmers. We went there to see what can be learned from our peers and to understand the image of Perl in other communities. It was interesting.
There were two lectures: Optimizing Python, and RDF and Python. The RDF lecture was actually one I had already heard at the last W3C gathering a while ago, by the exact same person. Only this time he added a few lines of Python to show how to get things rolling. At least this time someone (me) explained the difference between URI and URL (at the W3C meeting, it took roughly half an hour for three different people to explain it).
One of the things we missed at CeBIT was an easy way to show off various Perl technologies. We improvised, which worked OK, but for the upcoming events - and we are planning to participate in a number of events in the next 6 months - we would like to be a lot better prepared.
We would like to have a 5-minute presentation on each interesting topic. After going over the slides, every one of us could talk about almost every one of the projects. For example, when people asked us about Perl 6, the other Perl::Staff members sent the visitors to me, and I showed them a few slides I picked based on my knowledge of Perl 6. If we had a well-prepared set of slides showing a few features of Perl 6, then any one of us could have shown those.
I thought I'd write my own practical version of Moose::Manual::Unsweetened by converting Hailo away from Moose. The resulting hack passes all of Hailo's tests, but unsurprisingly it wasn't worth it.
I wanted to see if I could get the startup time of Hailo down, since Moose doesn't incur a runtime penalty once all your classes have been constructed. Here's how much time it takes the three versions of Hailo to start up and reply to input, and how much (RSS) memory they use:
Hacky Perl OO: 100ms / 5.6MB
Mouse: 150ms / 7.4MB
Moose: 350ms / 12MB
Mouse seems to get a lot of flak within Moose circles for not being a 100% complete Moose implementation. I wrote about how you can maintain dual support for Moose and Mouse in a previous posting.
In future programs I'll be using Mouse as the default with a fallback to Moose. It does everything I need from Moose and doesn't suffer from the high startup time / memory use that's frequently cited as an objection to Moose-based applications.
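As a sketch of one way to get that fallback (a minimal pattern with a hypothetical class name, not Hailo's actual code), you can pick the OO framework at compile time and fall back to Moose only when Mouse isn't installed:

    package My::Class;    # hypothetical example class
    use strict;
    use warnings;

    # Prefer Mouse if it's available, otherwise fall back to Moose.
    # Both export has/extends/etc. into the calling package.
    BEGIN {
        if (eval { require Mouse; 1 }) {
            Mouse->import;
        }
        else {
            require Moose;
            Moose->import;
        }
    }

    has greeting => (
        is      => 'ro',
        isa     => 'Str',
        default => 'Hello',
    );

    sub greet {
        my ($self, $name) = @_;
        return join ' ', $self->greeting, $name;
    }

    1;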
If that doesn't convince you, here's a mouse riding a frog:
After struggling with ExtJS for a while, I've decided to let go of it. Not in favor of jQueryUI or any other UI. Not even in favor of writing a UI myself (other than, perhaps, a few loose forms that make you want to vomit). I've decided to work on what I do - the backend.
This is actually something that took me a while to understand. I don't write UIs, I don't write websites, I don't write frontend. I'm not good at that, it doesn't even interest me. I love working on backend, I like writing the engine, I like fiddling with the Perl code. I don't enjoy Javascript, I don't enjoy XHTML, CSS, or any other such thing. I don't like debugging obscure IE bugs. It's not my thing.
Is it just me who has to re-authenticate on every single page load, or is this a known issue? It's really annoying. I seem to keep my session fine on the front page, but the MT interface asks me to authenticate every. single. time. I change pages, whether through a GET or POST request.
While my Enterprise Perl cartoon may seem like a joke, it's not. It's a sad fact that for larger codebases, tests can take a long, long time to run. The one I used on the BBC PIPs project took an hour and twenty minutes to run when I left that team. The one I use on the BBC Dynamite project takes just over an hour to run. Adam Kennedy, on the Enterprise Perl post, reported his tests can take a couple of hours to run.
I just bought an Archos 5 Internet tablet. The whole mobile world is quite new to me. My mobile phones were quite old-fashioned and I never tried to install any applications on them. The small screen size wasn't too attractive.
So I am learning. So far I have managed to connect it to wifi and even installed AppsLib, which seems to be the main interface for installing additional applications.
I tried the GPS, but so far I don't think I've seen it working. Then again, I was just sitting at home with the device and it might need some movement. No idea. I've never had a GPS.
Anyway, one of the main reasons I bought this and not some other device is that I wanted to research - you see, I had to have a "business reason" to buy it :-) - how I can run applications written in Perl on it.
I searched a bit for Perl on Android, but so far I have found relatively little information. I think it is quite important, and will probably be a lot of fun, to run Perl on the device, so I started to collect the little information I found about Perl on Android. If you have any more details or pointers, I'd appreciate your additions.
There are plenty of interesting and valuable projects on CPAN that do not have their own websites. I know, I know, CPAN gives us a default website ("with a reasonable design") and all we need to do is write the code and POD. Yes, that's great.
But wouldn't you rather advertise yourself, your product and Perl along the way? Then your project should have its own website. Yes, it should. No, I'm right!
A website represents you and your project. If it's beautiful, you're beautiful (even if you're ugly). If it's approachable, your project is approachable. If it gives the user a warm feeling, so does your project - and so it will for the user's boss, at least at first. Which means that if you're certain your code is great and approachable ("once given a shot"), you should make sure it gets a fair shot by making it appealing.
In Modules vs Applications, Sawyer X noted that one of the "issues" (emphasis mine) of Perl people is the tendency to write modules instead of applications: CPAN is great, but due to the lack of end-user programs there is no WOW factor. He suggested that we write programs/applications that everyone can use to attract more people to Perl.
While I agree with the last suggestion, I don't agree that the preference to modularize everything is an issue. As someone who years ago wrote a program made up of many separate scripts and duplicated code (even in different languages, just for the fun of it) and who still has to maintain it today, I'd say that not putting as much code as possible into reusable modules is a mistake.
I'd instead suggest that we still write modules (which is what made CPAN great anyway), but try to *also* accompany each distribution with a demo app (preferably in the App:: namespace).
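As a rough sketch of what I mean (the names here are hypothetical, not an existing distribution), the reusable logic lives in the modules and the App:: layer is just a thin wrapper that anyone can run from a bundled bin/ script:

    package App::WordCount;    # hypothetical demo app shipped with a distribution
    use strict;
    use warnings;

    # The real logic would normally live in the distribution's own modules;
    # it is inlined here only to keep the sketch self-contained.
    sub run {
        my ($class, @files) = @_;
        local @ARGV = @files;
        my $count = 0;
        while (my $line = <>) {
            my @words = split ' ', $line;
            $count += @words;
        }
        print "$count words\n";
        return 0;
    }

    1;

    # The bundled bin/wordcount script is then just:
    #
    #   #!/usr/bin/perl
    #   use App::WordCount;
    #   exit App::WordCount->run(@ARGV);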
Most of my work with DPAN revolves around the creation of private, CPAN-like repositories that a project team can use without affecting anyone else. Setting up a DPAN process for a recent customer involved making several MiniCPANs, one for each project group. I had to add a couple of features to CPAN::Mini to make it work out. These new features show up in CPAN::Mini 1.100590.
Like most CPAN tools, CPAN::Mini assumed that there would only ever be one repository, so a person tasked with maintaining several had a problem. Consider this workflow to support several groups:
There's a master MiniCPAN that holds all of the modules anyone in the company is allowed to use. The remote is a real CPAN.
Slave MiniCPANs pull from and filter the modules in the master to contain just the modules their application needs. The remote is the master MiniCPAN.
DPAN (or CPAN::Mini::Inject) adds project specific modules to the slaves.
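Programmatically, that two-tier setup might look something like the sketch below. It uses CPAN::Mini's documented update_mirror interface; the hosts, paths, and filter pattern are made-up examples, and serving the master over HTTP (rather than relying on a file:// remote) is the safe assumption here.

    use strict;
    use warnings;
    use CPAN::Mini;

    # Master MiniCPAN: everything the company allows, pulled from a real CPAN mirror.
    CPAN::Mini->update_mirror(
        remote => 'http://www.cpan.org/',
        local  => '/srv/minicpan/master',       # example path
        trace  => 1,
    );

    # Slave MiniCPAN for one project group: its remote is the master
    # (served over HTTP here), filtered down to what that group needs.
    CPAN::Mini->update_mirror(
        remote         => 'http://minicpan.example.com/master/',   # example URL
        local          => '/srv/minicpan/project-a',               # example path
        trace          => 1,
        module_filters => [ qr/^Acme::/ ],   # example: exclude anything under Acme::
    );

    # DPAN or CPAN::Mini::Inject would then layer the project's private
    # distributions on top of /srv/minicpan/project-a.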
Once in a while we talk about things such as GreyPAN and DarkPAN, referring respectively to open source Perl code that is not on CPAN and to code that is hidden behind corporate firewalls. It might be interesting to see if we can encourage more of that code to be released to CPAN.
AFAIK Bugzilla is one of the most successful Perl projects in terms of user base, but it does not have a huge mind share in the Perl community. I am not sure what the reasons are, but I think the fact that it is not on CPAN is part of it, so I looked at the project a bit.
Getting the source code
Bugzilla has a wiki page for developers where you can find information on how to contribute to the project. They recently switched from CVS to Bazaar, which is a nice distributed VCS. Instructions on how to get the source code can be found on the Bugzilla Bazaar page.
Yesterday, I was thinking about this same exact thing. I have a project where the compile-time hit of the collective libraries that make up the application is a little on the heavy side. I thought to myself that it would be good to use something like Test::Aggregate, but I wondered whether that could mean that some of my tests could be affected by a downstream dependency whose state could be changing at the class level. I don't think it's very probable, but it could happen, and it would be a real pain to debug.
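For readers who haven't seen it, Test::Aggregate's basic usage (written from memory of its synopsis, so check the docs for the exact options) is to point it at a directory of .t files and run them all in one process, which is exactly what saves the repeated compile-time hit and also why class-level state can leak between tests:

    # t/aggregate.t -- run all tests under t/aggregate/ in a single process.
    use strict;
    use warnings;
    use Test::Aggregate;

    my $tests = Test::Aggregate->new({
        dirs => 't/aggregate',   # directory of .t files to aggregate (option name as I recall the synopsis)
    });
    $tests->run;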