Life was easy on the web so many years ago: URLs were simple. You only ever had one.
www.mysillylife.com/main.cgi
You simply tacked all sorts of extra little bits on the end to say what to do, what to show, who you were, and such things as how long you had been there. Something akin to:
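A hypothetical example of such a URL (the parameter names here are made up for illustration):

```
www.mysillylife.com/main.cgi?mode=show&user=sally&visits=3
```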
Anyway, you get the picture. We only had GET and POST, and a good number of sites dropped GET when they discovered you could hide things in POST. Of course, you would see cases that just POSTed to the same page and your "mode" took care of things for you, or some pages knew what to do with either a POST or a GET.
Whenever I present a talk on Test::Class or one of its variants, invariably someone asks me about parallelization. The reason is simple: I advocate running your xUnit tests in a single process instead of multiple processes, but it's hard to run tests in parallel when they're already forced into a single process.
For Test::Class, this means using separate *.t tests for every test class versus using a single *.t test and Test::Class::Load.
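As a sketch, the single-process approach is one driver .t that pulls in every test class via Test::Class::Load (the directory name here is a hypothetical layout):

```perl
# t/all_classes.t -- run every test class under t/lib in one process
use strict;
use warnings;
use Test::Class::Load 't/lib';   # finds and loads all *.pm test classes
```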
I am working on making parallel tests possible with Test::Class::Moose, and while I have test classes running in parallel, the confused is output (yes, that was deliberate). I know how to solve this using only publicly exposed APIs, but there are some tricky bits. I thought about asking for a TPF grant, but since most Perl developers don't use xUnit-style testing, the value seems marginal. Plus, I am on the Board of Directors of the Perl Foundation, and that could look like a conflict of interest. Hence my slow work in this area.
That being said, it's worth doing the math and asking ourselves where we get the greatest gain.
I created WordPress::Grep (also on GitHub) as a way to do power searches through my WordPress databases. I've often wanted a tool that could search with Perl patterns or arbitrary code to find odd CSS uses, check links, and all sorts of other things. I didn't see an easy way to do the things I wanted with WordPress::API, which seems more of an authoring tool than an administration or editing tool.
Having some time after patch -p1 in Paris, I started to work on this. After dealing with the horror of the PHP front end, I was surprised at how easily I got something working--the database setup isn't that bad. Now I have a basic tool that I use like this:
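The invocation itself wasn't captured here; something along these lines is the general shape, where the host, user, and database values are placeholders and the exact option names should be checked against the wpgrep docs:

```shell
wpgrep --host localhost --user wp_user --database wp_blog \
    --like '%font-size%'
```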
Some time ago I was on a team that was rewriting a very large legacy system. We wanted to use a more maintainable OO style (rather than just a huge global hashref), but unfortunately the team was restricted to using only perl 5.6.0. By the time I joined the project, work was well underway and I was still stuck writing accessors by hand. So I came up with this beastly little Orignal to solve that problem.
It fixed things up nicely, as we could have nice 5.14-style accessors, along with some other silly little bits that came in very handy, at least to us.
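The code itself isn't reproduced here, but a minimal sketch of such an accessor generator, in the spirit of what would run on perl 5.6, might look like this (the package and method names are hypothetical, not the original code):

```perl
package Orignal;   # hypothetical reconstruction for illustration
use strict;

# Install a read/write accessor for each field into the given package.
sub make_accessors {
    my ($class, @fields) = @_;
    no strict 'refs';
    for my $field (@fields) {
        *{"${class}::${field}"} = sub {
            my $self = shift;
            $self->{$field} = shift if @_;   # setter when given a value
            return $self->{$field};          # getter otherwise
        };
    }
}

package My::Legacy::Thing;
Orignal::make_accessors(__PACKAGE__, qw(name size));

package main;
my $thing = bless {}, 'My::Legacy::Thing';
$thing->name('widget');
print $thing->name, "\n";   # widget
```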
I maintain Term::ReadPassword::Win32, which has a POD file in Japanese. It recently got two bug reports for the Japanese POD, but even though both Japanese and Hungarians write their family name first, I don't know Japanese that well... to say the least.
I am looking for a nice person who would check whether the content of that file is still relevant, convert it to UTF-8, and fix the two bug reports related to that file. (See the bug reports and the GitHub repo linked from the above page.)
I get to use all the available gettext tools (for scanning translatable strings, for merging and updating them into each translation file, specialized text editors for editing translations, etc.). This is definitely the nicest thing about the migration. With David Wheeler's Dist::Zilla plugin, the workflow is basically 'dzil msg-scan', 'dzil msg-merge', and updating the translations (the plugin will do 'dzil msg-compile' for you during build).
I no longer need to create project (translation) classes. I've always disliked having to do that, especially if my module or application is not OO.
I get named parameters for values. Instead of having to write [_1], [_2], etc., I can now use {foo}, {bar} instead. The translation text becomes clearer.
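In Locale::TextDomain terms, that means calls like __x("Hello, {name}!", name => $name). As an illustration of the idea only, here is a tiny stand-alone sketch of that placeholder style (not the module's actual implementation):

```perl
use strict;
use warnings;

# Minimal {name}-style interpolation, illustrating the named-placeholder
# idea; unknown placeholders are left untouched.
sub interpolate {
    my ($msg, %args) = @_;
    $msg =~ s/\{(\w+)\}/exists $args{$1} ? $args{$1} : "{$1}"/ge;
    return $msg;
}

print interpolate("Hello, {name}!", name => "World"), "\n";   # Hello, World!
```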
Hao Wu provided a great comment showing how I could solve exercise one using sum from List::Util and grep. I'd considered sum but, out of false laziness, didn't use it. I'd also considered grep, but did not immediately hit upon the elegant solution Hao suggested, so I went with a more verbose one.
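Without restating Hao's exact code, the general shape of a sum-plus-grep solution looks like this (the data and filter are made up for illustration):

```perl
use strict;
use warnings;
use List::Util qw(sum);

my @nums = (1 .. 20);                            # hypothetical input
my $even_sum = sum grep { $_ % 2 == 0 } @nums;   # keep evens, then add them
print "$even_sum\n";                             # 110
```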
Well, a good day today. Had an interesting thought come by my desk.
I was debugging a problem with a colleague, and in our back and forth he came up with this little jape:
'I prefer to never return anything from a perl subroutine!'
I could of course dive into the oft-fought and confusing battle over the difference between a function and a subroutine, but that dead horse has been beaten over like yesterday's poutine gravy.
We could of course take a vote on it, but I think I know the sort of reaction it would get in the community.
Anyway, on with my story. I did a little digging and was surprised to discover that in Perl a subroutine always returns a value.
This goes back to day 1, when Larry
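You can see this for yourself: with no explicit return, a Perl sub returns the value of the last expression it evaluated.

```perl
use strict;
use warnings;

sub double {
    my ($x) = @_;
    $x * 2;            # no explicit return; this value comes back anyway
}

my $answer = double(21);
print "$answer\n";     # 42
```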
Not Perl related, but I suspect some folks may appreciate this.
Today, after a nasty mistake on the command line involving find and rm, I discovered that I had deleted a number of files I didn't mean to delete, including some hidden files. Oops! I opened my Time Machine backup, only to discover that it doesn't show hidden files. However, it turns out that Time Machine will show hidden files so long as your main system shows hidden files. I'm using OS X Mavericks, so I dropped the following bash script into my bin folder and named it togglehidden. Running it from the command line toggles showing hidden files in the Finder on or off.
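The script itself wasn't captured in this copy of the post, but the usual way to do this on Mavericks is via the Finder's AppleShowAllFiles default; a reconstruction might look like this (the TRUE/FALSE values are what that OS X version expects):

```shell
#!/bin/bash
# togglehidden -- flip the Finder's display of hidden files on OS X
if [ "$(defaults read com.apple.finder AppleShowAllFiles 2>/dev/null)" = "TRUE" ]; then
    defaults write com.apple.finder AppleShowAllFiles FALSE
else
    defaults write com.apple.finder AppleShowAllFiles TRUE
fi
killall Finder   # restart Finder so the change takes effect
```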
Today I'd like to show you my testing setup, which involves database testing. Hopefully it can help someone out, or maybe someone could suggest better ways of doing things to me.
First of all, having all the tests in the root of the 't/' directory got too messy, so for ease of navigation I mostly mirror my 'lib/' directory structure in my 't/' directory. Let's say this is the 'lib/' tree:
- lib/
  - MyApp.pm
  - MyApp/
    - MyModule.pm
then my 't/' directory would have the following layout:
- t/
  - MyApp/
    - 0001-MyApp_first_test.t
    - 0002-MyApp_second_test.t
    - MyModule/
      - 0001-MyModule_first_test.t
      - 0002-MyModule_second_test.t
Because of the nested structure, it would be messy to add a 'use lib' statement to the test files themselves to use my 'lib/' directory, so I pass it as a parameter to prove. I run all my tests from the 't/' directory, so for ease of use I created a 't/prove'
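The snippet cuts off here, but a 't/prove' wrapper along these lines would do the job (a sketch, assuming tests are always run from inside 't/'):

```shell
#!/bin/sh
# t/prove -- run prove with the project's lib/ on @INC,
# so individual test files don't need their own 'use lib' lines.
exec prove -I../lib "$@"
```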
Well, looks like my old fiend smartypants was up to her old tricks again.
You know the type: spends the day at work in some IRC channel, never forgets the minutiae of page 67 of the help file of a sysadmin command, has memorized (and has a better version of) every regex ever written, knows she is always right, and in the boss's eyes can never make a poor choice when designing code.
Anyway, let's get on with the code example. Sometimes the ugly way to do things is the best. In this case, processing a large array where, let's say, the following has to be extracted:
- The maximum value
- The minimum value
- Clean out any duplicates
- Group the data into 3 sets
Simple enough, really. But then I saw the code (changed to protect the guilty and save space):
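For contrast, here is how the requirements above can be handled tersely with List::Util (uniq needs List::Util 1.45 or later; the sample data and the round-robin three-way grouping rule are made up for illustration):

```perl
use strict;
use warnings;
use List::Util qw(min max uniq);

my @data = (3, 7, 7, 1, 9, 3, 5, 8);   # sample data

my $max    = max @data;                # maximum value
my $min    = min @data;                # minimum value
my @unique = uniq @data;               # duplicates removed, order kept

# Deal the unique values round-robin into 3 groups.
my @groups;
push @{ $groups[ $_ % 3 ] }, $unique[$_] for 0 .. $#unique;

print "max=$max min=$min unique=@unique\n";
```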
This challenge is a stencil quest on questhub: in each calendar month of 2014 you have to release a distribution that you haven't released before, and write a blog post about it. This might be an entirely new distribution, one that you've adopted, or one that you're helping with.
The rules so far are:
You have to release at least one such dist within each calendar month. You can't catch up by releasing 12 in December.
You must blog about the dist and link to the blog post in a comment on your quest. It doesn't matter if the blog post is in a following month.
Renaming one of your existing dists doesn't count :-)
So I thought it would be just a regular day, with me just doing a little coding, drinking my coffee and generally enjoying life. But I was wrong.
A Little Vulnerable
So I have been babysitting, and ever so slowly migrating, a 15+ year old application over to something more manageable, and of course we have to keep making improvements to the code. So in a very old part I found something like this (SQL changed to protect the innocent):
my $usr_ids = join(",",@user_sel);
my $sth = $h->prepare("Select * from a_table where id in ($usr_ids)");
$sth->execute();
Params? We ain't got no params. We don't need no stinking params!!
Well, the old bugbear of little Bobby Tables, a.k.a. SQL injection, shows its ugly little head again.
But what to do? There is no bind_array, so let's just give 'execute_array' a try:
my $sth = $h->prepare("Select * from a_table where id in (?)");
my $tuples = $sth->execute_array(
{ ArrayTupleStatus => \my @tuple_status },
@user_sel,
);
ora_st_execute_array(): SELECT statement not supported for array operation.
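That fails because execute_array runs the statement once per tuple, not once with a whole list. The standard fix for an IN clause is to generate one placeholder per value and bind them all in a single execute; a sketch, reusing the stand-in table and variable names from above:

```perl
use strict;
use warnings;

my @user_sel = (10, 20, 30);   # example ids

# One '?' per value, joined into the IN clause.
my $placeholders = join ",", ("?") x @user_sel;
my $sql = "SELECT * FROM a_table WHERE id IN ($placeholders)";
print "$sql\n";   # SELECT * FROM a_table WHERE id IN (?,?,?)

# With a real database handle:
# my $sth = $h->prepare($sql);
# $sth->execute(@user_sel);
```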
At this point, I don't have a Perl blog. I do Perl things, but nothing that ever seems worth blogging about. However, I've decided to improve my knowledge of perl5i and Perl 6, and generally practice my programming, so I'm going to work through a bunch of problems, from easy through to difficult, in each language. Hopefully I'll pick up idiomatic solution ideas as I go along.
Feedback and alternate solutions are also welcome.
At some point I may repeat the problems with Python, but not yet.
Problem sets I'm planning on starting with include:
The latter two are language-targeted, but I believe I should be able to gain some benefit from them anyway. Suggestions for other problem sets are welcome, but this is certainly enough to get me started.