Just my approach to string formatting with support for named parameters. Instead of creating a new formatter (with a new/different syntax), I just extend sprintf a bit:
Named parameters can also be used in the width and precision fields:
sprintfn "%(num)(width).(prec)f", {num=>1, width=>1, prec=>4}; # prints 1.0000
Compared to other modules, this one supports all of Perl's sprintf() formats (since it's just sprintf under the hood) and allows mixing named and positional parameters.
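A minimal sketch of that mixing, assuming the function is the sprintfn() exported by the Text::sprintfn module (module name assumed): named parameters are looked up in the hashref, while plain conversions consume the positional arguments that follow it.

```perl
use Text::sprintfn;   # assumed module providing sprintfn()

# Named parameters come from the hashref; an ordinary conversion
# like %d consumes the positional argument after it.
print sprintfn("%(greeting)s, you have %d new message(s)\n",
               {greeting => "Hello"}, 3);

# Named parameters in the width/precision fields, as above:
print sprintfn("%(num)(width).(prec)f\n",
               {num => 1, width => 1, prec => 4});   # prints 1.0000
```

Since it is plain sprintf underneath, every conversion flag sprintf knows should work unchanged.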
The current implementation is a bit naive, but it'll do for now.
Submit a talk for YAPC::NA 2012. We’re especially interested in talks on real-world Perl apps and quintessential Perl 101 talks, but we’re open to any ideas you have.
I often forget how young the field of computing is. Computers are everywhere and it is difficult to imagine a world without them.
Computers, embedded in robotic machinery or general-purpose, are so darn useful.
* They replace the calendar with a more lively one.
* They help you communicate and make new friends.
* They take down all your notes.
* They help the shopkeeper with his inventory.
* They help doctors take readings of their patients.
* They help a startup visualize the latest trends.
* They assemble metal parts with perfect accuracy to create a Ferrari.
* They help musicians with their beats.
* They help the painter undo.
* They preserve history.
* They help the architect.
* They book your flights.
* They track your packages.
* They bring limitless information and possibilities to the bedside of every dreamer.
* They helped the thieves in Ocean's Eleven.
This is a very short entry, which will lead to a longer one later. Right now, in light of the recent hubbub about the study claiming that Perl is no better for newbies than a language with randomized syntax, I am trying to find out just how a newbie searches for Perl learning materials online and what they find first.
Right now I'm concentrating on what kind of search terms a newbie would enter into Google. Luckily Google Trends helps a bit there, allowing me to compare various search terms.
It's been a fairly quiet year for DBD::SQLite, but largely in a positive sense.
The release rate and delta of SQLite itself has been fairly tame, and in Perl land we've seen a significant reduction in the severity of bugs compared to 2010.
Because of the significance of DBD::SQLite and the need for extended testing periods, it has been my policy to allow smaller fixes to accumulate without doing a release and to release DBD::SQLite in line with SQLite releases that recommend upgrading.
The recent 3.7.8 release of SQLite does not come with an upgrade recommendation for our current SQLite version, but does suggest upgrading as optional. This release contains some significant index performance improvements (described as "orders of magnitude faster") for large CREATE INDEX commands, and DISTINCT, ORDER BY and GROUP BY statements are now faster in all cases.
The 3.7.8 release also contains some changes to make SQLite play nicer with Windows anti-virus programs.
Last week I went to my first Perl workshop: 13. Deutscher Perl-Workshop. It has been a great experience – great talks and even greater people.
On the train to and from Frankfurt, and at the workshop itself, I hacked a bit on my pet project: the Perl Analyst. The goal is to build a PPI-based tool that parses your Perl documents and can then answer questions about your sources. The tool and the modules it consists of may also lead to refactoring tools.
There is a running prototype on GitHub which may reach CPAN soon. Currently you can use it to search for declarations of subroutines and lexical variables, as well as uses of strings. You can search for them by exact match or by regular expression.
I'm on a bit of a roll about unpacking archives. Last week I wrote about peeking into archives and recently I was wondering about extracting archives.
The classic tool for extracting archives is Archive::Extract. This module was originally written for CPANPLUS and has been in the Perl core since 5.10.0. It tries a variety of methods for extracting archives, from pure-Perl modules such as Archive::Zip and Archive::Tar (portable but slow) to external tar and unzip commands (unportable but slightly faster).
Having played around with libarchive, a "C library and command-line tools for reading and writing tar, cpio, zip, ISO, and other archive formats", I wondered if it would be interesting to use it instead.
Behold my newly-written module Archive::Extract::Libarchive. This uses libarchive to extract archives, extracts most archive formats and is quite fast. It requires libarchive to be installed.
How fast? Well, it depends on what you are extracting. If you happen to be extracting all the current CPAN distributions (for a total of 7.1G), then Archive::Extract (with PREFER_BIN) takes about 20 minutes while Archive::Extract::Libarchive takes about two minutes.
Yet another tool for your archive extracting toolbox...
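A short usage sketch, assuming Archive::Extract::Libarchive mirrors the familiar Archive::Extract interface (the archive filename and target directory here are illustrative):

```perl
use Archive::Extract::Libarchive;

# Construct with the path to the archive, same as Archive::Extract.
my $ae = Archive::Extract::Libarchive->new(
    archive => 'My-Dist-1.00.tar.gz',
);

# Extract into a target directory; libarchive autodetects the format.
$ae->extract( to => '/tmp/extracted' )
    or die $ae->error;

# List what was extracted.
print "$_\n" for @{ $ae->files };
```

Because format detection happens inside libarchive, the same code should handle tar, zip, cpio, and friends without per-format branching.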
* Archive::Manager (or Archive::Writer) using libarchive. There is now already Archive::Extract::Libarchive but a generic interface for writing would be nice too.
* Something to replace Log::Log4perl for my Log::Any::App (or Log::Any in general, so maybe I'll end up with a Log::Any::Adapter::Something). Log4perl is nice and all, but its startup is a bit slow. I don't need all its features. All I need to do is combine the various Log::Dispatch modules and give them categories. I know, there are already too many log modules on CPAN, so I'll look at them first before reinventing the wheel.
* A front-end (or tutorial, or something) for Marpa. Being a parser newbie, I always end up with Regexp::Grammars as that's the easier one to use.
* (Continuing an idea from a previous post) A generic framework for App::perlmv, App::perlmp3tag, and other similar applications. Or even a generic framework for applications that work on a tree (not just a directory tree).
* A DateTime::Format module to parse natural dates/times that is easy to translate. Currently DateTime::Format::Natural is English-only and hard to translate.
Ricardo Signes will give a talk at YAPC::NA 2012 that he describes as:
Year after year, Perl continues its steady march towards greater and greater Perlishness. Ancient mistakes are slowly sloughed off and long dreamed-of features are added. If Perl is evolving, toward what? Is it just a random collection of mutations, as desperate Perl hackers struggle to remain fit enough to survive, or is there an intelligent design behind the way Perl is changing?
In this session, Ricardo Signes (rjbs), the Perl 5 project lead, will discuss the future of the Perl language, the guiding principles of its ongoing design, and the specific changes toward which the Perl 5 Porters are working. He will also describe the way Perl 5 development really happens, how that is changing, and what we might want it to become.
I'll introduce PrePAN to you here. PrePAN is a website for all Perl Mongers, especially those who intend to upload their Perl modules to CPAN. PrePAN aims to be a good place for them to discuss Perl modules before they are uploaded to CPAN (`PrePAN' is named after that).
I introduced the website at YAPC::Asia 2011, and some Perl Mongers have already submitted their modules. Please take a look at them on PrePAN.
What can you do at PrePAN?
You can submit your Perl modules and call for reviews, even if it's just a proposal or an idea before implementation.
You can discuss them in the comments.
And if you want to invite co-developers, PrePAN may be of help to you.
Motivation
Solution to Your Problem
You may write a useful module for your job or for your own purposes. You may think it might be worthwhile for others. However, you may worry about things like the following:
BTW, this year punytan created a simple form for people to submit the URLs, and a lot of people including myself just kept collecting the blog entries. punytan++.
If you're an organizer, make sure to tell your attendees that their YAPC ain't over until you blog about it!
I am very happy to announce that Marpa::XS is now beta. Marpa::XS will be kept stable, and changes to its interface will be especially avoided.
What is Marpa? Marpa is an advance over recursive descent and yacc. I hope the Marpa algorithm will become the standard parser for problems too big for regular expressions.
Mark Allen will give a talk at YAPC::NA 2012 he describes as:
This talk presents the Dancer web framework beginning with “Hello World” and progressing through a couple of easy to digest introductory applications. All of the primary Dancer features are presented including URL routing, writing handlers, and output templating. A selection of useful and common Dancer plugins will also be covered. This talk is best suited for beginning and intermediate Perl programmers.
For a very long time, I was rather stuck: as a scholar in the humanities who uses LaTeX, I could not get colleagues to adopt LaTeX because they could not annotate other people's work without a professional copy of Acrobat. Well, I have figured out how to do this using Evince. How, you may ask? It is a simple process, but it is rather hidden and should be more prominent when looking at PDFs.
These instructions should work on Evince 3.2 in Xubuntu 11.10.
Grant Street Group is a Software as a Service pioneer. In 1997, our company hosted the world’s first online bond auction for the City of Pittsburgh, Pennsylvania. Since this first auction, over 3,100 clients have used our software to process financial transactions exceeding $11 trillion.
Today, Grant Street Group is a growing company providing Software as a Service in fields such as electronic payments, auctions, and tax collection. Our customers include banks, financial companies, and state and local governments.
Are you a Perl expert? Do you prefer programming at home instead of getting interrupted at the office?
Do you want to tackle challenging software engineering problems in a well established and fast growing company?
Do you want to work with talented and smart developers? Build elegant and scalable solutions for large applications? Meet the challenges of deadlines while still delighting customers? Then you’d enjoy working at Grant Street Group and we’d like to hear from you.
Yesterday was the last day of the German Perl Workshop.
Denis Banovic started with a live demo of developing and deploying a Dancer app in the "cloud". It looked very easy to do.
Karl Gaissmaier was the next speaker, explaining a lot of details about Captive::Portal, hotspot software his university developed.
Stefan Hornburg talked about various advanced features of and his experiences with Dancer.
After lunch, Rolf Langsdorf held a very interesting talk comparing Perl and JavaScript. After that, Herbert Breunung had two talks: Thoughts about better documentation and a comparison of Hg and git.
And the last two speakers were Richard Lippmann discussing the usage of the right language for programming projects and Renée Bäcker with tips and tricks for doing talks.
Altogether, this was a great workshop. I will definitely attend next year's workshop, which will be in Erlangen from March 5 to March 7, 2012.
I'm considering ditching my RDBM for my next application and using ElasticSearch as my only data store.
My home-grown framework uses unique IDs for all objects, which currently come from a MySQL auto-increment column, and my framework expects the unique ID to be an integer.
ElasticSearch has its own unique auto-generated IDs, but:
* they look like this: 'KpSb_Jd_R56dH5Qx6TtxVA', and I'd say are less human-readable than an integer
* I would need to change a fair bit of legacy code to migrate to non-integer IDs
Initially I thought I could keep MySQL around as a ticket server, as described by Flickr, but then I wondered if I could achieve the same thing by abusing ElasticSearch's built-in versioning, allowing me to ditch MySQL completely and giving me a distributed, highly available ticket server into the bargain.
The logic is simple: when you index a document in ElasticSearch, it returns a new version number for the document, which is always incrementing and is guaranteed to be unique across the cluster.
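The trick above can be sketched as follows, assuming the old ElasticSearch.pm client and a cluster on localhost (the index, type, and document ID here are illustrative): repeatedly re-index the same trivial document, and use the returned _version as the next ticket.

```perl
use ElasticSearch;

my $es = ElasticSearch->new( servers => 'localhost:9200' );

sub next_id {
    # Re-index the same tiny document every time we need an ID.
    # Each index operation bumps the document's version, and the
    # response's _version is our new, ever-increasing integer.
    my $result = $es->index(
        index => 'tickets',   # illustrative index name
        type  => 'ticket',
        id    => 1,           # always the same document
        data  => {},
    );
    return $result->{_version};
}

print next_id(), "\n" for 1 .. 3;   # monotonically increasing integers
```

The per-document version counter is maintained by the cluster, so concurrent callers each get a distinct value without any MySQL in the picture.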