Database schemas are a little like packages in Perl: they provide namespaces. If you have a database with dozens, or even hundreds, of tables, you really want to divide them into logical groups.
In PostgreSQL you do it like this:
CREATE SCHEMA <db_schema>;
SET search_path TO <db_schema>;
If you don't create a schema, all your stuff goes into the default schema, public.
DBIx::Class knows about db schemas, but not enough to make them work out of the box. Or at least it seems that way. Here's how I did it.
First (well, after creating the database with the db schemas itself, but that's left as an exercise for the reader), I created the DBIC classes for the tables with the excellent tool dbicdump (it's installed together with DBIx::Class::Schema::Loader). dbicdump creates the class structure right below your current directory, so I started with cd lib/ and then ran dbicdump.
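The command line will depend on your database; the shape of it is roughly this, with the schema class, connection details, and schema name all illustrative (db_schema is the DBIx::Class::Schema::Loader option that selects which PostgreSQL schema to dump):
# all names hypothetical; adjust to your database
dbicdump -o db_schema=my_schema My::App::Schema 'dbi:Pg:dbname=mydb' dbuser dbpass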
Last week I posted about my current experiments in deploying perl applications to our CentOS 5 servers - or rather, the first steps of building a perl package along with the required modules.
I was just starting work on testing this all through when suddenly one of the blocks to using the current stable perl (i.e. 5.12.2) disappeared - TryCatch is now supported on 5.12.x.
So, although I have some tests running currently, I am in the process of modifying a few parts of the build scripting (mainly down to me missing a couple of local modules from the build), and then a new version based on the current stable perl will hit the build systems.
I will be attending the PostgreSQL Conference West 2010 at the Sir Francis Drake hotel in San Francisco from November 2nd to 4th. I'm waiting to hear back from the travel agency to see when I'm flying out of and arriving back at D/FW.
SVN is slow, and git-svn is slower. The amount of network traffic needed by SVN makes everything slow, especially since git-svn needs to walk the history multiple times. Even if I made no mistakes and only had to run the import once, having a local copy of the repository makes the process much faster. svnsync will do this for us:
# create repository
svnadmin create svn-mirror
# svn won't let us change revision properties without a hook in place
echo '#!/bin/sh' > svn-mirror/hooks/pre-revprop-change && chmod +x svn-mirror/hooks/pre-revprop-change
# do the actual sync
svnsync init file://$PWD/svn-mirror http://dev.catalyst.perl.org/repos/bast/
svnsync sync file://$PWD/svn-mirror
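With the mirror in place, git-svn can make its many passes over the local filesystem instead of the network. The next step might look like this, assuming the project sits under a conventional trunk/branches/tags layout inside the mirror (the real bast layout may differ):
# clone via git-svn from the local mirror rather than the remote server
git svn clone --stdlayout file://$PWD/svn-mirror/DBIx-Class dbix-class-git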
I thought I'd note here too as well as on my blog that I'll be moving to Amsterdam tomorrow to work for Booking.com. I'm looking forward to the new challenges and getting settled in a new city, as well as meeting and working with some new people.
I have released a new module to CPAN for writing Excel files in the 2007 XLSX format: Excel::Writer::XLSX
It uses the Spreadsheet::WriteExcel interface but is in a different namespace for reasons of maintainability.
Not all of the features of Spreadsheet::WriteExcel are supported but they will be in time.
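For anyone who hasn't used the Spreadsheet::WriteExcel interface, a minimal example (the filename and cell contents are illustrative):
use Excel::Writer::XLSX;

# create a workbook, add a worksheet and write a cell
# -- the same calls as Spreadsheet::WriteExcel
my $workbook  = Excel::Writer::XLSX->new( 'hello.xlsx' );
my $worksheet = $workbook->add_worksheet();
$worksheet->write( 0, 0, 'Hello, world!' );
$workbook->close();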
The main advantage of the XLSX format over the XLS format for the end user is that it allows 1,048,576 rows x 16,384 columns, if you can see that as an advantage.
From a development point of view the main advantage is that the XLSX format is XML based and as such is much easier to debug and test than the XLS binary format.
It has become increasingly difficult to carve out the time required to add new features to Spreadsheet::WriteExcel. Even something as seemingly innocuous as adding trendlines to charts could take up to a month of reverse engineering, debugging, testing and implementation.
Hopefully the XLSX format will allow for faster, easier test driven development and may entice some other contributors.
Marpa is now at 0.200000. Following the standard rhetoric of version numbers, this indicates that it's an official release and a major step forward, but still alpha.
Marpa is a general BNF parser generator -- it parses from any grammar that you can write in BNF. It's based on Earley's algorithm, but incorporates recent advances, so that it runs in linear time for all those grammars parseable by yacc or recursive descent.
The big news with Marpa 0.200000 is Marpa's 3rd generation evaluator. The previous version of Marpa had two evaluators -- one fast, but only good for producing a single parse result, the other capable of dealing with ambiguous grammars, but slower. The 3rd generation has a single evaluator which combines the best of both. Not the least advantage of this change is that it simplifies the documentation and the interface.
In May and June, I worked on converting the DBIx::Class repository from SVN to Git. I’ve had a number of people ask me to describe the process and show the code I used to do so. I had been somewhat busy with various projects, including working on the web client for The Lacuna Expanse, but I’ve finally had some time to write up a bit about it. The code I used to make the conversion is on my github account, although not in a form meant for reuse.
At Jobindex we use Red Hat Enterprise Linux. The OS is very stable and we feel that Red Hat is doing a lot of good stuff for Linux and OSS in general.
When it comes to perl, however, RHEL currently ships version 5.8.8, which causes a bit of frustration. Some CPAN modules won't install, and it seems like the people who write modules for CPAN don't really care about our (good) old perl version.
At YAPC::EU several of the speakers recommended installing perl ourselves instead of relying on the OS version.
We have now decided to follow this recommendation. At the same time we will also start using git to manage perl and the installed modules, to keep the versions in testing and production in sync. This way we will also avoid messing with RPM packages.
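The mechanics might look roughly like this; a sketch assuming a from-source build into its own prefix, with paths and versions illustrative:
# build perl 5.12.2 from source into its own prefix
wget http://www.cpan.org/src/5.0/perl-5.12.2.tar.gz
tar xzf perl-5.12.2.tar.gz
cd perl-5.12.2
./Configure -des -Dprefix=/opt/perl-5.12.2
make && make test && make install
# put the installation under git so testing and production stay in sync
cd /opt/perl-5.12.2
git init && git add . && git commit -m 'perl 5.12.2 baseline'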
So now I am looking forward to getting my hands on perl 5.12.2 :)
We're using RT quite a lot. Today I needed to disable some 'Scrips' in one queue, while keeping them in all other queues. Unfortunately, RT does not support this out of the box. While there is a plugin that seems to implement that feature (RT-Extension-QueueDeactivatedScrips), I decided to fix this without touching RT's innards.
The trick is to add a 'Custom condition' to the global Scrip, which returns false for the relevant queue:
return $self->TicketObj->QueueObj->Name ne 'babilu::support'
Unfortunately, this is not enough (and it took me some testing and manic clicking through RT to figure this out). You also need to change the 'Condition' from whatever the Scrip is using to 'User Defined', and then test for the condition yourself:
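For example, if the Scrip previously used 'On Resolve', the user-defined condition has to re-implement that check itself. A sketch of the combined condition (the transaction checks are my reading of what 'On Resolve' amounts to):
# skip the Scrip entirely for the excluded queue
return 0 if $self->TicketObj->QueueObj->Name eq 'babilu::support';
# re-implement 'On Resolve' by hand: fire only on a status change to 'resolved'
return 0 unless $self->TransactionObj->Type eq 'Status'
    && $self->TransactionObj->NewValue eq 'resolved';
return 1;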
For those of you who may notice such things, you might see that Padre's version number has jumped two numbers since the last stable release.
This development cycle we introduced a new versioning system, whereby the odd number 0.71 was the development version and 0.72 is the stable release version.
The reason we have done this is to accommodate development of, and changes to, the plugin subsystem during the development cycle. We'll track how this goes and make changes where they need to be made.
For much of this release, and I'll have to admit quite a few before it, I've really been off doing other things, but I follow the irc logs for #padre to keep an eye on how things are developing within Padre.
Having said that, when it came time to roll out this release, the number of fixes and improvements in the Changes file blew me away. Admittedly it was a longer than normal period between releases, but still, there are some serious fixes in 0.72.
Thanks to Shlomi Fish, who noticed the slides of the talk weren't imported very well on Slideshare. This is now fixed, and you can find the slides at the same location, working without a hitch.
I've also learned that Slideshare only allows you to download slides if you're registered, so we've added a section to Dancer's website to host slides of talks given about Dancer.
That's where you'll be able to view them (embedded) or download them in PDF form at your leisure.
SF.pm Ex-President Quinn Weaver will be speaking at Mother Jones on October 26th for our October SF.pm meeting.
Catalyst is the leading web MVC framework for Perl. Normally, Catalyst apps use an ORM to communicate with the database. While ORMs can be convenient, they can also hurt performance, tie your app to one database schema, and make complex queries difficult.
But this is Perl, and TMTOWTDI applies: There’s More Than One Way To Do It.
In this talk, Quinn will take you through the code of a working Catalyst app that uses stored procedures rather than ORM queries as its interface to PostgreSQL. Along the way, he will touch on a number of useful modules and pragmas, such as DBIx::Connector, aliased, Template::Declare, and Test::XPath.
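To give a flavour of the approach (this is not code from the talk; the stored procedure and connection details are made up), calling a PostgreSQL stored procedure through DBIx::Connector might look like this:
use DBIx::Connector;

my ( $user, $pass, $user_id ) = ( 'myuser', 'mypass', 42 );    # illustrative

my $conn = DBIx::Connector->new(
    'dbi:Pg:dbname=myapp', $user, $pass,
    { RaiseError => 1, AutoCommit => 1 },
);

# call a stored procedure instead of building the query through an ORM;
# inside run(), $_ is the database handle
my $orders = $conn->run( fixup => sub {
    $_->selectall_arrayref(
        'SELECT * FROM get_user_orders(?)',    # hypothetical procedure
        { Slice => {} },
        $user_id,
    );
});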
Credits: this talk is based on an app Quinn wrote with David Wheeler as a side project at PostgreSQL Experts. Thanks to David for coming up with the methodology and writing the (IMO) greater share of the code, including DBIx::Connector and Test::XPath.
An interesting topic came up on #distzilla. Most modules depend on other modules, but some don't require an explicit version of those dependencies. So if you use such a module with an ancient version of one of its dependencies it'll break, because the author never tested against that version.
We've probably all run into problems because of this and grudgingly upgraded the dependencies, but could CPAN clients handle this better?
Technically they're doing the right thing already. The CPAN clients are unable to distinguish between "explicit 0", "any version will do" and "author didn't say".
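For example, in a Makefile.PL all three cases end up looking the same; a sketch with illustrative module names:
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME      => 'My::Module',
    PREREQ_PM => {
        'Moose' => '1.15',   # a tested minimum version
        'JSON'  => 0,        # "explicit 0"? "any version"? "didn't say"? -- all identical here
    },
);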
Some don't specify versions at all. I mostly fall into this camp.
Even if you specify a version for every dependency it's really hard to get it right. You might accidentally use some API feature in a dependency that doesn't match the feature set of your declared version.
Even if you get it 100% right it's all for naught if one of your dependencies isn't as careful about its dependencies.
So what would be a better heuristic? Some suggestions:
This series of blog posts on "Perl and Parsing", which is evolving into a mini-history of parsing theory, started as an offshoot of my own attempt at a contribution to parsing. I wanted to do a couple of blog posts aimed at those trying to decide whether it was better to take a chance with my new parser (Marpa), or to stick with the terrors of the known. There's a large literature on parsing, but much of it is difficult or dryasdust, and I thought I could contribute a helpful overview. "An informed consumer is our best customer" and all that.
Like other offshoots of the Marpa project before it, the "Perl and Parsing" series was originally intended to draw attention to Marpa, but it's been better at drawing attention to itself. I've received some positive comments, and some helpful criticism. I'm grateful for both, and I plan to continue the series.
Come and enjoy Damian Conway, Perl luminary and all-around great guy, as he presents his seminar, The Missing Link. The topic will be focused on Perl, but should be entertaining to all programmers.
I've done a lot of web scraping with Perl over the years, but I hadn't experienced anything quite like the "Next page" link that ASP.NET threw at me this week. The opposite of REST, ASP.NET's ctlPagePlaceHolder makes the simplest navigation beyond the reach of WWW::Mechanize as far as I can tell. Luckily Selenium came to my rescue.
If you haven't experienced Selenium automation, it's quite impressive. From a normal shell window (OS X Terminal.app in my case) you launch a Java server
and then use WWW::Selenium in your Perl program. As your Perl program runs it launches Firefox on your local workstation and performs whatever commands you issue. In my case this was simply "click 'Next Page'". :)
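In outline, the Perl side can be as small as this; a sketch with the server on its default port, and the page URL and link text made up:
use WWW::Selenium;

# connect to the running selenium server (default host/port) and drive Firefox
my $sel = WWW::Selenium->new(
    host        => 'localhost',
    port        => 4444,
    browser     => '*firefox',
    browser_url => 'http://www.example.com/',
);

$sel->start;
$sel->open('/report.aspx');        # hypothetical page
$sel->click('link=Next Page');     # the one step Mechanize couldn't manage
$sel->wait_for_page_to_load(30000);
$sel->stop;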
The Selenium IDE Firefox plugin is great for quickly mocking up what commands you need. Once it's working, drop those commands into your program to get the job done.