Well, I am finally there: after almost a month of plugging away at it, I am going to do the release.
Well, I did go with Dist::Zilla, and after a quick 'dzil smoke' I did a 'dzil build', so I am ready to send it off.
Well, after the usual hunt for my PAUSE password and having it reset yet again (I must hold a record for that), I was finally able to send my MooseX up to PAUSE:
Add a file for BYTEROCK
File successfully copied to '/home/ftp/incoming/MooseX-AuthorizedMethodRoles-0.001.tar.gz'
So now I just have to hang around and see how many bugs are reported :)
Recently, I read Pagination Done the PostgreSQL Way. The premise is that OFFSET/LIMIT paging combined with a date index gets slower the further you page, but if you page in a way where you are always selecting by date, then you will be using the index properly.
A simple example would be a table with one entry per day and a date index. Asking for Tuesday uses the date index more effectively than asking for the third entry of the table sorted ascending, so you can page by asking for the entry with a date greater than the last one you pulled, with LIMIT 1.
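Here is a minimal sketch of that keyset approach in Perl with DBI; the table and column names are made up for illustration, and the SQL assumes PostgreSQL:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=example', '', '', { RaiseError => 1 });

# Instead of OFFSET, remember the last date seen and ask for rows after it,
# so every page is answered straight from the date index.
my $sth = $dbh->prepare(<<'SQL');
SELECT id, entry_date, body
  FROM entries
 WHERE entry_date > ?
 ORDER BY entry_date
 LIMIT 10
SQL

my $last_seen_date = '2014-01-01';
$sth->execute($last_seen_date);
while ( my $row = $sth->fetchrow_hashref ) {
    $last_seen_date = $row->{entry_date};   # carry this into the next page
    print "$row->{entry_date}: $row->{body}\n";
}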
A few years ago, a script showed up on the git mailing list that would effectively run "git blame" across the entire tree and aggregate the line counts by author. Here are the first 50 authors as of commit 86714aaae213175ea8c716ad22c1e10300d5bf61:
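A rough, purely hypothetical sketch of that kind of aggregation (not the original script from the list) might look like this:

use strict;
use warnings;

# Blame every tracked file and count the lines attributed to each author.
my %lines_by_author;
for my $file ( split /\n/, `git ls-files` ) {
    open my $blame, '-|', 'git', 'blame', '--line-porcelain', '--', $file
        or next;
    while (<$blame>) {
        $lines_by_author{$1}++ if /^author (.+)/;
    }
    close $blame;
}

for my $author ( sort { $lines_by_author{$b} <=> $lines_by_author{$a} } keys %lines_by_author ) {
    printf "%8d  %s\n", $lines_by_author{$author}, $author;
}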
I was recently reading a brilliant post, A first-person engine in 265 lines by Hunter Loftis, and instantly wanted to port it to Perl. After doing half of the work, though, I realized that the original code uses an alpha blending technique to achieve wall shading, rain, and drawing images with an 8-bit transparency channel.
I used Prima to write the port, even though it doesn't support an alpha channel, because the native X11 API doesn't do that. Googling led me to XRender, which can do the job, but that sounded too heavy. Cairo, however (which is also used by Firefox to implement Canvas rendering), is exactly the tool for this. So I needed to spend some days creating a bridge between Prima and Cairo, and here it is on GitHub.
The programming itself went unexpectedly smoothly, thanks to the quality of Cairo's API design. Unfortunately the Perl demo turned out to be very slow because of Perl's own speed: profiling shows that pure-Perl calculations eat the biggest share of the time. The next biggest CPU eater is the set of calls to Cairo that draw the rain drops: each drop requires two calls, $cairo->rectangle(..) and $cairo->fill, and that is also expensive.
So I wonder whether something can be done about it. Any ideas?
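One possible direction (an untested sketch against the Perl Cairo bindings, with a hypothetical @drops array): accumulate every drop rectangle into a single path and fill once, so the per-drop fill calls disappear.

# Add every rain drop to the current path, then fill the whole path once.
for my $drop (@drops) {
    $cairo->rectangle( $drop->{x}, $drop->{y}, 1, $drop->{length} );
}
$cairo->set_source_rgba( 0.8, 0.8, 0.9, 0.4 );   # translucent rain colour
$cairo->fill;                                    # a single fill call for all drops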
Well, actually my blog tonight is not about MooseX, Moose, or even Perl, but it is part of this little project of mine and I figure it is good for a single post.
Well, today I started to play with the latest and greatest version of GitHub for Windows, so those of you who don't care about such things can go back to your Linux command line and not bother reading any further.
Well, I have used GitHub (painfully) for a few years now. I started with a bit of command-line work, but it never worked quite right for me; things improved when I installed TortoiseGit, as I was familiar with the original TortoiseSVN, so the transition over was not that painful.
So I did become somewhat more productive, but when GitHub started to do inline editing I found I was doing most of my work inline.
Less than three weeks remain until the Granada Perl Workshop, which will take place on Friday the 27th of June in Granada (Spain).
The good news is that, thanks to our sponsors, the event will now be completely free.
Even though the workshop is a one-day event, most of the attendees are going to stay for the full weekend in order to visit the beautiful city, socialize, and have long conversations about Perl and, well, probably anything.
So, you are still in time to get a seat on one of those cheap flights leaving from almost any European airport to the beach destinations in the south of Spain and be our guest. Also, in case you would like to give a talk, the call for speakers will still be open until Thursday.
Dezi 0.3.0 (the Moose release) was just pushed to CPAN, along with its dependencies:
Search::OpenSearch 0.400
Search::OpenSearch::Engine::Lucy 0.300
Search::OpenSearch::Server 0.300
Many thanks to all the testers and to those who have encouraged me on this rewrite. I've enjoyed using and learning all the Moose and Moo and Type::Tiny libraries involved.
First, Skyline is an anomaly detection tool published by Etsy. It uses NumPy/SciPy to implement nine detection algorithms. You can find it at http://github.com/etsy/skyline.git
I had heard of PDL for a long time, and then I thought: maybe I can re-implement those algorithms using PDL? That must be the best way to learn PDL!
Now, I got it done.
I should say that PDL is not as easy to write as NumPy. For example, you must write '$p->index($p->nelem - 1)' just to get the last element of an array! In NumPy, the array object has methods almost exactly like the original Python object, but in PDL we don't get these things so easily: you can't write '$p->[-1]' or even '$p->index(-1)'! (Thanks, I have changed this to '$p->at(-1)' now.)
BTW, I didn't find any K-S test implementation on CPAN (well, except via Statistics::R), so I use the S-W test instead. The S-W test is better when the length of the array is less than 5000, and I think the number of values checked in the latest hour should be less than that, yes?
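As a taste of what the PDL versions look like, here is a minimal, hypothetical illustration (not one of the ported Skyline algorithms verbatim) that flags the latest value when it lies more than three standard deviations from the mean:

use strict;
use warnings;
use PDL;

# Flag the latest value if it is more than three standard deviations
# away from the mean of the series.
sub three_sigma_anomaly {
    my ($series) = @_;                                # a 1-D piddle
    my $mean   = $series->avg;
    my $stddev = sqrt( ( ($series - $mean) ** 2 )->avg );
    return abs( $series->at(-1) - $mean ) > 3 * $stddev;
}

my $data = pdl(10, 11, 9, 10, 12, 11, 10, 42);
print three_sigma_anomaly($data) ? "anomaly\n" : "looks normal\n";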
Where have we heard that before? Well, today I did manage to get my POD mostly written up, and now I just have to look into one or two other important matters and I think I can give it a go.
First, there is the question of version numbers.
Well, there are many version patterns to pick from. Most of the time it does not matter, but sometimes it does, like the time I stuck an 'a' in a DBD::Oracle version and took the wrath of a good number of other module builders; I am most likely still paying the price for doing a favour for someone.
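For illustration, here are a few of the common styles one might pick from (just a sampler, not a recommendation):

# Common CPAN version styles, as they would appear in a module:
our $VERSION = '0.001';     # three-decimal style
our $VERSION = '1.23';      # classic decimal version
our $VERSION = 'v1.2.3';    # dotted-decimal form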
In a previous entry I blogged about creating your own DarkPAN. This is convenient when you're offline as you can still use 'cpanm' to install your modules and dependencies.
Now of course, if you're a diligent CPAN author (or a lazy worker finding ways to goof off and do busy work by writing and releasing CPAN modules), your DarkPAN will add up and need cleaning from time to time.
To delete older releases, I use this script. Just give it the path to your DarkPAN as an argument. After that you'll need to reindex your DarkPAN (I use orepan_index.pl from OrePAN; OrePAN2 is much slower).
To find distributions that have been deleted from CPAN but still linger in my DarkPAN, I cobbled up something like this:
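The rough idea, sketched here hypothetically against the MetaCPAN API rather than as the exact code, is to take each distribution name found in the DarkPAN and ask whether a release by that name still exists on CPAN:

use strict;
use warnings;
use HTTP::Tiny;

my $http = HTTP::Tiny->new;
for my $dist (@ARGV) {    # e.g. "MooseX-AuthorizedMethodRoles"
    my $res = $http->get("https://fastapi.metacpan.org/v1/release/$dist");
    print "$dist appears to have been deleted from CPAN\n"
        unless $res->{success};
}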
I released GitPrep 1.8. You can easily install a portable GitHub-like system on Unix/Linux. This is the second major release.
Because you can install GitPrep on your own server, you can create users and repositories without limit. You can use GitPrep freely because GitPrep is free software. You can also install GitPrep on a shared rental server.
I recently started a new project and wanted to take advantage of some cool new Perl 5.20 features. In particular, I wanted to use the experimental subroutine signatures feature and postfix dereferencing. That requires the following incantation:
use feature 'signatures';
use feature 'postderef';
no warnings 'experimental::signatures';
no warnings 'experimental::postderef';
That can be abbreviated a bit by passing a list to feature and just turning off the entire experimental class of warnings:
use feature 'signatures', 'postderef';
no warnings 'experimental';
But it's still a couple lines of boilerplate that need to be pasted atop every file in my project. I hate pasting things.
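One common way out (a hypothetical sketch with a made-up module name, not necessarily what this project ended up doing) is to wrap the boilerplate in a tiny pragma-like module whose import sets everything up for the file that uses it:

package MyProject::Syntax;

use strict;
use warnings;
use feature ();

sub import {
    # These pragma calls affect the scope currently being compiled,
    # i.e. the file that says "use MyProject::Syntax;".
    strict->import;
    warnings->import;
    feature->import('signatures', 'postderef');
    warnings->unimport('experimental');
}

1;

With that, each file in the project only needs a single 'use MyProject::Syntax;' line.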
So, no MooseX or AD&D post tonight, as I got sidetracked on another project, but at least I did manage to get one post out of it, and I finally found out how to do something in Perl that I have been wanting to do for years.
You see, I came back to Perl after a number of years playing with Java and JSPs. I was happy with JSP and Java; there were some annoying things (text is hard to manipulate) and some very good things, like strong OO. In my move back to Perl, some ten years after last playing with it (in the early 90s), I didn't regret leaving Java all that much, but there was always one thing I missed from that world: the 'import' command.
For the work I was doing at the time it was great, as I could simply use a single import and get all the classes I needed with a wildcard, like 'import java.util.*;'.
I graduated in 1979; our computing platform was a 360/75, later upgraded to a 370/145 (I think), still running OS/360 in a VM under VM/360. This meant that for our own projects we actually ended up doing a number of the things that Andy talks about, as a matter of survival.
We did not have a version control system at all; we ended up using generation data sets and meticulous tape backups to manage our source code. A tool that just made that work would have been a miracle. (I remember well having to write and use programs to recover "deleted" partitioned data set members when one realized that it would have been a good idea to back that member up but didn't.)
For this year's Swiss Perl Workshop we rented a house that is perfect for all sorts of workshop-like activities.
Apart from the large room where the talks are going to be held, we have a medium-sized room for about 10-15 people and two smaller rooms for 4-8 people.
So, if you are interested in running something like a small hackathon, a half-day course on your favourite Perl topic, or something completely different (but still Perl-related, of course), don't hesitate to contact us.
Perl has played an immense role in developing the computing and information technology world we see today, even if most of the good work done by the language and its legion of loyal users and developers has been done in a relatively stealthy mode. Thanks to its freedom, versatility, and ease of use, among other notable attributes, it has been embraced by a large community of developers, resulting in a mature and stable language fit for just about any programming task.
In today's world, however, there is a great "digital divide" between the information infrastructure of the world's developed economies and that of the developing ones. And with the developing world doing its best to catch up, the marketing strategies of bigger, well-funded software companies make for uneven competition when it comes to adopting free/libre/open source software (FLOSS) technologies.
Well, it is true: this is the 25th post in this series and I still haven't released the code I started it with. Please forgive me, as it has been rather hectic around here, but I think I should be able to wrap things up soon.
I really just have the POD to write up, which I am going to look at tonight. One thing that you do hear about Perl when talking to non-Perl programmers is that it lacks documentation the way Java has it. Usually when I hear this I send them off to the Ensembl API and ask them: if that is not good documentation, what is?
Anyway, the way Perl has cornered the genome market is another post.
Last week I had the joy of attending the first (and certainly not the last) MojoConf in Oslo, Norway. It was an incredible experience! First and foremost I want to thank the Oslo Perl Mongers; the organisation and execution of the conference were first rate! I also want to thank Jan Henning Thorsen (batman), who graciously offered to host me. We had such productive conversations over evening cognacs, both Perl and otherwise.
I will admit now that I had wondered whether the community was large enough to support an international conference. I am quite happy to say that my fears were unfounded. We had attendees from all over the world, including the USA, France, Greece, Israel, the UK, Germany, and others I'm forgetting, I'm sure.
Glen Hinkle (tempire) gave a professional training course, which was sold out! When companies (and even a few individuals) are willing to pay real money for training, it goes a long way toward proving that Mojolicious is the world-class framework we know it is.