His first script fires up a browser pointed at a local POD web server, starting one up if it’s not already running – not that useful to me, since I haven’t found myself actually using these servers very much because of the console↔browser flipping they entail. Plain old perldoc on a console just feels faster to juggle.
However, he also includes another script: a completion helper for bash. This allows you to type something like perldoc Cata<tab> and have bash turn it into perldoc Catalyst for you. I used this script for mere hours before I realised it’s exactly the one thing I have always missed in Perl: a way to quickly and efficiently browse my local module library – the thing that all the POD web servers promised to give me, but couldn’t deliver in a convenient enough fashion for me to use routinely.
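For the curious, the general shape of such a helper (a minimal sketch of the technique, not the actual script from his post) is a bash completion function that asks Perl to walk @INC for installed module names matching the word typed so far:

_perldoc_complete() {
    # Sketch only: scan @INC for .pm files whose module name starts
    # with the current word, and offer those as completions.
    local word=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(perl -MFile::Find -le '
        my $prefix = shift // "";
        my %seen;
        for my $dir (@INC) {
            next unless -d $dir;
            find(sub {
                return unless /\.pm\z/;
                (my $mod = $File::Find::name) =~ s{^\Q$dir\E/}{};
                $mod =~ s{\.pm\z}{};
                $mod =~ s{/}{::}g;
                print $mod if !$seen{$mod}++ && index($mod, $prefix) == 0;
            }, $dir);
        }' "$word") )
}
complete -F _perldoc_complete perldoc

A real helper would cache the module list rather than rescanning @INC on every tab press, but the principle is the same.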
Haven't messed with RPMs for a long while (at Yahoo! we had folks whose job it was to provide the prebuilt system tools in the Y! format - among them, Mike Schilli of Log4perl fame, who was our Perl packager), but today I'm building Perl 5.10.1 RPMs for our CentOS 5 systems.
I was able to take a 5.10.0 spec and adapt it pretty easily, with a few tweaks. First, because of the way CentOS lays out its include files, you need to patch IO.xs:
The application phase for sponsorship ends on Friday, 2010-02-12. We shall then deliberate over the weekend and announce the result afterwards, so that travel arrangements can be made ASAP.
If you're considering coming to Vienna and want to apply for sponsored travel/hotel, add yourself to the wiki now.
The more you tell us about your plans, the better!
The vast majority of the time when I run the debugger on a test, I get frustrated because I have to wait for the test to load, then type '{l' (always list next window of lines before the debugger prompt) and then 'c' (continue to breakpoint -- because I've almost always set one at this point).
So I dropped this into my $HOME/.perldb file:
sub afterinit {
    # queue up the commands I always type first: '{l' lists the next
    # window of lines before every prompt, 'c' continues to the first
    # breakpoint -- but only when I'm debugging tests
    push @DB::typeahead, "{l", "c"
        if $ENV{DEBUGGING_TESTS};
}
And in my vim config (where you put it will vary depending on your setup), I have this:
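Purely for illustration (a hypothetical mapping, not my actual config), the idea is something along these lines:

" run the current test file under the Perl debugger with the
" DEBUGGING_TESTS flag set, so the afterinit hook above kicks in
nmap ,d :!DEBUGGING_TESTS=1 perl -Ilib -d %<CR>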
I've been working on a suite of modules for the past couple of months to make building applications on top of Amazon's SimpleDB as easy as building applications on DBIx::Class. It's called SimpleDB::Class.
It gives you an object relational mapper style interface to SimpleDB, which is heavily borrowed from DBIx::Class. It uses Memcached to speed up queries, increase capacity, and get around most of the stickiest problems with eventual consistency.
SimpleDB::Class also hides all of the things that most new developers find difficult about SimpleDB: things like cascading retries, next tokens, and the pseudo-REST-like protocol for interfacing with SimpleDB - not to mention handling searchable/sortable dates and numbers. As a developer, you simply give it some data or ask it for some data, and all those things just happen automatically behind the scenes.
There are even many nice utility functions like max and min, which come up in questions on the SimpleDB forums all too often.
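To give a flavour of the DBIx::Class-style declarations, here's a rough sketch of what a class might look like (written from memory; treat the method names as illustrative and check the module's POD for the real API):

package Library::Book;
use Moose;
extends 'SimpleDB::Class::Item';

# map this class to a SimpleDB domain and declare typed attributes
__PACKAGE__->set_domain_name('books');
__PACKAGE__->add_attributes(
    title     => { isa => 'Str' },
    pages     => { isa => 'Int' },
    published => { isa => 'DateTime' },
);

1;

Attributes declared as dates or numbers get stored in a searchable, sortable encoding behind the scenes, which is exactly the sort of fiddly detail the module is there to hide.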
Lee Johnson found a bug in Sub::WrapPackages. It's to do with how perl 5.10 optimises stuff exported by 'use constant'. He provided a patch and test, so a bugfixed version is on its way to the CPAN.
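For anyone wondering what optimisation that is: perl treats constants as inlinable subs, so by the time you try to wrap one, the call sites may already have been replaced by the value. A tiny illustration (nothing to do with the actual patch):

use constant ANSWER => 42;  # compiled as an inlinable sub with an empty prototype

# perl may substitute the literal 42 for this call at compile time,
# so a wrapper installed around ANSWER() afterwards never runs
print ANSWER, "\n";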
There are two ideas that have been buzzing around in my brain lately.
The first idea is a CMS that is all Perl and that scales from CGI (because that is what a lot of cheaper hosting sites offer) to whatever you want to scale it up to. I don't mean scale in the traditional sense here, though I would want it to scale that way as well.
The second one is a server monitor along the lines of Nagios, but all in Perl, with plugins written in Perl.
Neither of those is necessarily a "good" idea. They are just ideas. Throwing them up here to maybe tickle myself later with them.
I had written about Testing With PostgreSQL and today discovered some failing tests. It turns out this is because I had migrated a database down two levels, altered a table, and migrated it back up (I can do this because no one is relying on a production copy). Unfortunately, when I migrated it back up, the new tables didn't have their correct triggers assigned, because the _test_changed_table check was in place; as a result, my test module assumed the testing system was already set up.
07:46 XXXX damn... is it normal for DProf/Profiler to not work correctly with moose stuff?
07:46 aaaa people still use dprof?
07:47 XXXX people who might not be aware of alternatives, sure
07:48 XXXX what would you suggest instead then?
07:49 aaaa Devel::NYTProf is THE profiler these days :)
07:52 XXXX unfortunately googling perl profiling doesn't take you anywhere near it :(
If like me you think it's fantastic, please link to http://search.cpan.org/dist/Devel-NYTProf/ from your website with some appropriate link words, so we can let Google know about it, and therefore the rest of the world.
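And for anyone who hasn't tried it yet, typical usage is just two commands (substitute your own script name, of course):

perl -d:NYTProf yourscript.pl   # run under the profiler; writes nytprof.out
nytprofhtml                     # turn nytprof.out into an HTML report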
I gave a day-long master class as the first version of what we are turning into a full training course. I covered regexes, Unicode, pack, advanced subroutines, and tricks with filehandles. Each of those sections covered most of the material from the corresponding chapter in the book, and during the course I picked up even more tricks that we'll add to the Effective Perler blog as well as to later classes. I'm thinking about releasing some of the slides next week, once I have time to tidy up the notes I made.
Josh gave a short talk on Unicode. Several of my students remarked how it was nice to see some of the same material twice since there's a lot to pay attention to in Unicode. His slides are in Google Docs, but he's adjusting some of the images so he doesn't get sued for the comments he made about the Minnesota Vikings. It's not really his fault if they aren't as good as the Bears. Everyone nag him to make his slides available with the rest of the Frozen Perl talks.
If you want one of our talks at your conference, let us know.
We've just moved our website to Amazon EC2, and within about 2 hours of going live, our proxy server went down. Just disappeared. We couldn't even terminate the instance.
OK, temporary glitch. It happens.
2 weeks go by, then last night, our alarms go crazy. All 3 database servers have gone down. They're there, just not responding to ssh or even ping.
We try to reboot the instances. From the console log, I can see that they reboot. Still not accessible. I launch another instance of our DB AMI. It boots, but is also unresponsive.
Eventually we boot a vanilla AMI, reinstall the DB and attach the EBS volumes.
Next surprise - 2 of our EBS volumes have disappeared - or at least the data on them has. Fortunately, they were redundant copies. But what happens if next time they're not?
All in all we had 2 hours of downtime. More than the previous 3 years with dedicated servers put together.
How on earth do other companies maintain their uptime (and data!) on a service that seems to fail way too frequently?
Added a coupla features over the weekend. First, you can collapse and expand parts of the tree if you have Javascript and CSS turned on. Second, now that CPANTS's tool for finding what distributions depend on a given distribution no longer works, I've implemented it myself. It's a bit ergonomically crude, but is more accurate than CPANTS was.
In the two Star Trek series, the characters of Data and Spock pose a question: what would it be like to be able to make decisions on a largely, or even purely, logical basis? Not widely known is that this is a question with an answer based on evidence.
About a week ago, the call for proposals for the Open Source Conference closed. It's a bit fuzzy because many people only realize what month (or year) it is after the submission link disappears, so we let a few extra proposals slip in. Don't ask now: it's too late. For reals this time.
As part of the Perl Track committee, I just reviewed all of the proposals where the submitter marked it as a "Perl" talk. Several other people from the Perl track also reviewed them. There are going to be some very nice presentations this year, and at least one demonstration of highly advanced Perl technology that you'll want to see twice in a row, and maybe a third time at the end of the conference.
So, my first blog post, and instead of Perl, I'm writing about Javascript.
I'm using a common idiom:
AJAX call returns HTML with embedded script tags
create a temporary <div>
div.innerHTML = request.responseText
move the children of the div to the appropriate spot
Firefox conveniently executes the script texts. Opera and Safari require extra steps to execute the script contents (e.g., globalEval in jQuery), and IE does whatever the hell it pleases.
IE usually works with globalEval, except when it doesn't. I found that if the AJAX response was just a single script tag, IE would filter it out - and my responses did consist of a single script tag in certain circumstances.
Long story short: if you need to return a single <script> tag, wrap it in a <form> tag. For whatever reason, IE will then accept it as innerHTML and create the script node, which you can then execute with globalEval or similar.
Ever since I upgraded to Snow Leopard, my Perl has been unstable. I've fixed most of it, but I still sometimes see strange behavior. Today I discovered why (and why didn't I notice this sooner?).
Second, a small bugfix for distributions with weirdo version.pm-stylee v1.2.3 versions. These dists are buggy, of course, as version *numbers* should be, well, they should be *numbers*. But as they're on the CPAN, I have to support them.
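For the avoidance of doubt, this is the difference (a made-up dist, obviously):

# what I'd like every dist to declare:
our $VERSION = '1.002003';

# the weirdo version.pm-stylee kind I now have to tolerate:
our $VERSION = 'v1.2.3';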
Now, can someone explain why this 'ere form what I'm filling in has separate fields for "tags" and "keywords"? I thought that "tag" was just a stupid web 2.0 neologism for "keyword", so why are they separate?
Hello...
I have recently started learning Perl... and I'm having issues when I try to read input from another file. The details of the script I tried to run are as follows:
I made a text file by the name "text.txt", and its contents were:
Sidra|38|BE
Hira|48|BE
Shagufta|50|BE
Then I wrote the following script:
open(DAT, "text.txt");
$data_file=" text.txt ";
open(DAT, $data_file);
@raw_data=<DAT>;
close(DAT);
foreach $student (@raw_data)
{
    chomp($student);
    ($name,$roll_no,$class)=split(/\|/,$student);
    print "The student $name bearing roll number $roll_no is in class $class";
    print " \n";
}
The script produces no output and displays a message saying:
"readline() on closed filehandle at line ..."
I tried the same with another file by the name "text.dat" holding the same data, but it did not work either. Please help me resolve this issue.
Thank you...
I thought I would share my experience of submitting a module to CPAN for the first time. In summary: it's stupidly easy and if I can do it, so can you.
I'm not the best Perl coder and therefore I rely heavily on CPAN modules to do the real work. Every now and again I stumble upon bugs, many of which are simple to fix after a little debugging. However, some modules aren't heavily maintained and some haven't been touched in years. Bugs that are filed on RT can go unanswered, gathering digital dust while your development machine is riddled with heavily modified modules.
The solution? In my case it was as easy as discussing a few changes with the module author, who added me as a co-maintainer. That allows me to upload new versions, fix tickets in the bug queue and probably many other things I've not discovered.