YAPC::Asia 2010 is over. Well, it actually ended more than two weeks ago. It was fabulous as a whole. Larry's keynote was quite interesting. Jesse's was fabulous in many ways (including that nico-nico-ish Twitter stream). Miyagawa-san's was moving. This year we also had a special session where the leaders of Japanese perl mongers groups were invited to discuss issues and encourage people to join existing groups or start new ones. As a consequence, a few new perl mongers groups were born and more may come. I'm really glad about that.
For some time now, in various situations, I have seen programmers answer and close bug reports with a single sentence: "Works for me!"
I think that kind of answer comes from somebody who doesn't want to bother asking for details about what is going on and making his or her software better. When a programmer answers this way, why is he or she making the code available at all? Or, if the code is offered `as is', why create a project page and the ability to submit bug reports? Also, if it works for you and you are not interested in why that piece of code is not working elsewhere, I can't understand why we, in the Perl community, have CPAN Testers.
Note that not all situations are alike, and I agree that in some of them this can be a suitable answer.
But what I would like to say here is: please try to reduce the number of "Works for me!" sentences per square meter.
Not that I was too lazy, but my time was very compartmentalized. So I just started slowly, and since I don't get paid until it's done, you don't have to worry anyway. I just completed Tablet 2. It's a nice and dense overview of what Perl 6 is about and of the mindset with which Larry is designing Perl 6. (I already got very positive feedback from #perl6 people.)
On the other hand, it's small and not part of the official grant, but it belongs to the overall concept of the Perl 6 Tablets, which is much more work anyway. It was not the only thing I've done, but for now I mention it because it's completed and you are invited to leave your comments or suggested extensions here or in the wiki.
I was reading an interesting discussion on python-dev, and it made me think about the analogous situation in Perl. I've long been in the habit of putting each package into its own file, no matter what. Now I'm starting to consider combining related packages into one file, and only breaking things up along lines of reuse.
I initially thought there was consensus in Perl circles to have a single file per package, but on further reflection, I started to doubt myself. I don't actually have much confidence that this is true. I don't usually look at the file structure of distributions I use from CPAN. Maybe there is more combining than I realized. Any thoughts?
I have been using git and Github off and on for a while now but I’ve never really learned much about it.
Today, I mistakenly added a tag with the same name as a branch and pushed it to Github. I was able to get rid of it locally, but I was pulling my hair out trying to figure out how to remove it from Github. I finally stumbled on the solution, so here is what I learned.
Removing a git tag:
git tag -d <tag>
Removing a git branch:
git branch -d <branch>
Deleting a tag or branch from Github:
git push origin :<tag or branch>
Deleting a tag (with the same name as a branch) from Github:
git push origin :refs/tags/<tag>
Deleting a branch (with the same name as a tag) from Github:
git push origin :refs/heads/<branch>
As the Statistics site hadn't been regularly updated for the past couple of weeks, the latest milestone on the Interesting Stats page of the CPAN Testers Statistics site was way down the watch list. So I was surprised to see that we now have over 9 million test reports in the CPAN Testers eco-system. Many thanks to all the testers who have helped contribute to the milestone.
Congratulations once again go to Andreas for posting the 9 millionth report. It was a PASS for Log-Report-0.28.
Over the last couple of months, a startling array of events has transpired, and hence conspired to delay progress, but finally I have App::Office::CMS working at home.
I’ll shortly release a dev version (V < 1.00), so you can check the installation instructions, if you wish, and the docs.
I just have to finish off the POD, which is mostly written.
The dev versions will start by using textareas for input, not TinyMCE, the editor I’ll activate for V 1.00. After all, these versions are just for playing with.
Also, I have not yet integrated revision history, so there is no roll-back capability. I’m looking at Git::Repository for that. Only 1 copy of the data is stored, in a database via DBI.
Recently a friend of mine back in the US mentioned an art project she was working on. She was looking for words which are composites of two words. She didn't necessarily want obvious composites like "bluebird", but less obvious ones that can be worked into her art project. For example, "homeless" could become "home less", or "garbage" "garb age". A few people struggled to come up with examples. I came up with /usr/share/dict/words and a few lines of Perl. I use a few nifty idioms that every Perler should be familiar with.
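For the curious, here is a minimal sketch of that approach. The helper name, the sample word list, and the minimum fragment length of three letters are my choices for illustration, not necessarily what the original script did:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Given a list of words, return "word = left + right" strings for each
# word that can be split into two other words from the same list.
sub find_composites {
    my @words = @_;
    my %word = map { $_ => 1 } @words;   # hash for O(1) lookups
    my @found;
    for my $w (sort @words) {
        # Try every split point, requiring 3+ letters on each side.
        for my $i (3 .. length($w) - 3) {
            my ($left, $right) = (substr($w, 0, $i), substr($w, $i));
            if ($word{$left} && $word{$right}) {
                push @found, "$w = $left + $right";
                last;
            }
        }
    }
    return @found;
}

# In practice you'd slurp /usr/share/dict/words; a tiny sample will do:
my @sample = qw(blue bird bluebird garb age garbage home less homeless);
print "$_\n" for find_composites(@sample);
# bluebird = blue + bird
# garbage = garb + age
# homeless = home + less
```

The `map`-into-a-hash lookup table and the `substr` split are exactly the kind of everyday idioms the author is alluding to.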
Several people have asked about the chat system in The Lacuna Expanse and whether it is something they could use in their own sites. Indeed it is, though until today it was anonymous only. Read on for the details.
I originally wrote a very long post about this, but somehow it apparently wasn't saved while restarting, so you'll have the benefit of a shorter version.
I've decided to stop focusing on Perl for Android for now. This is not to say I'll never get back to it, but that I definitely won't be involved with it in the near future.
I assume many are wondering about the current status of Perl on Android. Starting with "there is Perl for Android already" and ending with "it probably isn't the Perl you wish you had" seems like a good summary. There is a patched-but-not-documented Perl 5.10.1 cross-compiled for Android. It might be relatively old and missing some very useful core modules, but it works for most uses. Also, you can hack around what's missing.
Until a few days ago I had a lovely tool, which I wrote in Perl (of course), which would visit several of Amazon's websites and download my wishlists. But it stopped working. Apparently, Amazon decided to turn that particular part of their API off. Does anyone know of an alternative to Net::Amazon::Request::Wishlist that still works?
Last week The Linux Journal published this excellent article written by Carl Lundstedt of the University of Nebraska, Lincoln which details many ways Linux / open source is used around the world by scientists working on cutting edge physics.
The computer cluster he details is the same cluster I ran large stacks of Perl on for my "mutation grid" project whose name I stole when I formed Mutation Grid, Inc. A little trip down memory lane for me, and a good read for curious minds. :)
I've been working on a restructured perlvar and I think I've mostly got it right, but at the moment I almost wish I never had to see it again. Have a look for yourself. It's in the perl git repo in the briandfoy/perlvar branch (if you're looking at the github mirror, be aware it's several hours behind).
The new version notes when each variable appeared in the Perl 5 series of releases if it wasn't there at the start.
I still have to ensure that nothing breaks the perldoc -v stuff. I've tried it on several variables without problems but I don't know if some of the restructuring affected the odd variable.
I expect to merge this for the next development release, so I have a couple of weeks to sort out whatever is left.
My brother finally created his first GitHub account to try working on public code, and he even forked a module I'm working on and submitted a pull request.
He's now converting yet another CGI website to Dancer.
Here's hoping this will lead to a fun and joyful career.
At first I was excited that Microsoft had created PowerShell -- a usable command-line shell for Windows. (I always have 4 Cygwin Bash windows up on my XP PC at work, and before Cygwin got stable I ran the MKS Toolkit version of the Korn Shell.)
Once I started using PowerShell, I quickly became disappointed. There wasn't anything I wanted to do in PowerShell that didn't already exist in an easily-consumable form in Perl. That would have been acceptable -- if it hadn't been for how slow PowerShell was compared to Perl or Cygwin Bash. As someone whose bread'n'butter for several years has been .NET programming, I am still not sure why PowerShell is so much slower than Perl or Bash (if anyone knows, please tell me). (I don't have problems getting a sane level of performance out of .NET.)
I may be out of touch for a bit as I'm moving to Amsterdam tomorrow night, but in the meantime, tell me what you would like to see for "Perl 101" blog posts. Every time I post something with the newbies tag (which I'm going to change to the friendlier "perl101"), I get a fair number of comments and the post shows up a few times on Twitter. Since I'm getting a good response, I'm assuming that people actually like these posts and want to see more of them.
So tell me what you want and I'll see what I can do.
The most important part of the repository conversion I did was resolving all of the branches and calculating the merge points. Most of the rest of the process is easily automated with other tools.
The main part of this section was determining what had happened to all of the branches. One of the important differences between Git and SVN is that if a branch is deleted in Git, any commits that only existed in that branch are permanently lost, whereas in SVN deleted branches still exist in the repository history. git-svn can't delete branches when importing them, because that would lose information. So all of the branches that ever existed throughout the history of the repository will exist in a git-svn import and must be dealt with.
However, the thing I'm most excited about is that ElasticSearch.pm v 0.26 is also out and has support for bulk indexing and pluggable backends, both of which add a significant performance boost.
Pluggable backends
I've factored out the parts which actually talk to the ElasticSearch server into the ElasticSearch::Transport module, which acts as a base class for ElasticSearch::Transport::HTTP (which uses LWP), ::HTTPLite (which, not surprisingly, uses HTTP::Lite), and ::Thrift, which uses the Thrift protocol.
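The dispatch idea behind such pluggable backends can be sketched like this. The class names below are invented for illustration; the real ElasticSearch::Transport surely differs in its details:

```perl
use strict;
use warnings;

# A base class whose constructor picks the concrete backend by name.
package My::Transport;
sub new {
    my ($class, %args) = @_;
    my %backend = (
        http     => 'My::Transport::HTTP',      # LWP-based
        httplite => 'My::Transport::HTTPLite',  # HTTP::Lite-based
    );
    my $subclass = $backend{ $args{transport} || 'http' }
        or die "Unknown transport '$args{transport}'";
    return bless {%args}, $subclass;
}
sub request { die "subclass must implement request()" }

# Each backend only has to override request().
package My::Transport::HTTP;
our @ISA = ('My::Transport');
sub request { return 'via LWP' }

package My::Transport::HTTPLite;
our @ISA = ('My::Transport');
sub request { return 'via HTTP::Lite' }

package main;
my $t = My::Transport->new( transport => 'httplite' );
print $t->request, "\n";   # via HTTP::Lite
```

The nice property of this shape is that callers construct through the base class only, so adding a new backend (say, a Thrift one) means adding one subclass and one hash entry.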
I expected Thrift to be the big winner, but it turns out that the generated code is dog-slow. However, HTTP::Lite is about 20% faster than LWP: