When working with the built-in sample files, URL parameters can be used to: select a file; specify an XPath expression; and override the default namespace prefix mappings. Here's an example link that does all three!
* I used the term 'upload' in scare quotes because it's a client-side app: nothing actually gets sent to the server.
This new section introduces the XML::LibXML::Reader API, a pull-parser style of interface with much lower memory overheads than a traditional DOM parser. It also covers hybrid operation, where the Reader API is used to scan through the document and extract sections as DOM fragments for further interrogation via XPath.
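To give a flavour of the hybrid approach, here's a minimal sketch. The sample document and element names are invented for illustration; for a large file you'd point the Reader at a filename rather than a string:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML::Reader;

# A throwaway in-line sample; for a big file you'd use
# XML::LibXML::Reader->new(location => $path) instead
my $xml = <<'EOF';
<orders>
  <order id="1"><total>10.50</total></order>
  <order id="2"><total>99.00</total></order>
</orders>
EOF

my $reader = XML::LibXML::Reader->new(string => $xml);

my @totals;

# Pull-parse: scan forward to each <order> element without
# ever building a DOM for the whole document
while ($reader->nextElement('order')) {

    # Hybrid mode: copy just this element out as a DOM fragment ...
    my $order = $reader->copyCurrentNode(1);    # 1 = deep copy

    # ... so the familiar DOM/XPath methods can be used on it
    push @totals, [
        $order->getAttribute('id'),
        $order->findvalue('./total'),
    ];
}

print "$_->[0]: $_->[1]\n" for @totals;
```

The key point is that only one `<order>` element at a time ever exists as a DOM fragment, so memory use stays flat no matter how big the source document is.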
Script Spotlight
Briefly tell us about a script you've written and use regularly. Things you might talk about:
If you can't immediately think of a candidate script then here are some places to look to jog your memory:
See you there - with your script :-)
In helping people with their Perl and XML problems, I've come to realise that in order to become proficient with XML::LibXML, what people really need (and what I've aimed to provide) is:
The documentation is structured in such a way that there's one obvious place for an absolute beginner to start but multiple other entry points for people to dip into specific subjects such as: XPath expressions; the DOM; namespaces; and using libxml with HTML. Along the way it provides multiple links to the official documentation so people know where to go when they need more information.
On the subject of XPath expressions, I've also included an interactive "XPath Sandbox" which allows you to type in expressions and see which parts of the document they match. (Full disclosure: it doesn't actually use XML::LibXML at all; it runs entirely within the browser using JavaScript.)
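The kind of expression you might experiment with in the sandbox looks like this when used from XML::LibXML itself (the sample document is invented for illustration):

```perl
use strict;
use warnings;
use XML::LibXML;

my $dom = XML::LibXML->load_xml(string => <<'EOF');
<library>
  <book genre="fiction"><title>The Dispossessed</title></book>
  <book genre="reference"><title>Programming Perl</title></book>
</library>
EOF

# Find the titles of all the fiction books
my @titles = map { $_->textContent }
             $dom->findnodes('//book[@genre="fiction"]/title');

print "$_\n" for @titles;
```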
So please take a look at the site and if you are answering questions about Perl and XML, please consider linking to it in order to set people on the right path.
The full source for the project is up on GitHub so you can raise issues or fork the repository and raise pull requests to make it even better.
The thing that I find intriguing is that the referrer logs for the Map of CPAN show that every day, at least one person follows the link from that question to the mapofcpan site. Every. Single. Day.
It's not even a particularly prominent link. How many people must be asking that question and finding that StackExchange page every day?
So my question to you is: what CPAN module would you recommend as rewarding reading for an intrepid code explorer, and why? Of course I'm not suggesting Perl programmers should only read Perl code, but I do think CPAN is a good place to start.
My colleagues and I are required to track our daily activities for billing purposes. Ultimately the information needs to end up in the company work-request management system ('WRMS') but that system's user interface for timesheeting is somewhat frustrating. In response to this, two colleagues (Martyn Smith and Nigel McNie) wrote a Perl script called TKS. This allowed people to enter their activities in a plain text file in a format similar to Steven's. Then they could run the TKS script periodically to sync the contents of the file into WRMS (a modular architecture allows different backend systems to be supported).
An example of a day's entries might look like this:
2014-01-08 # Wednesday

63778  09:00-10:30  Assist Cheryl with month-end reporting
64609  10:30-11:15  Analyse logs of picking report job failure
64613  11:15-12:30  Set up puppet config for memcache servers
64618  13:00-14:30  Work around IE rendering bug in date widget
64515  14:30-17:00  Initial implementation of order reversal function
Or if you wanted to take advantage of the work-request-aliasing function and simpler time entry format it might look like this:
2014-01-08 # Wednesday

billing  1.50  Assist Cheryl with month-end reporting
64609    0.75  Analyse logs of picking report job failure
puppet   1.25  Set up puppet config for memcache servers
64618    1.50  Work around IE rendering bug in date widget
64515    2.50  Initial implementation of order reversal function
The simplicity of tracking time in a plain text file opened up all sorts of possibilities. For example some people use a commit hook in git to extract work request numbers from commit messages and append entries to their TKS file for later manual editing.
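A hook along those lines might look something like this. To be clear, the "WR#12345" commit-message convention and the TKS file location here are my own inventions for the sake of the example, not part of TKS itself:

```perl
#!/usr/bin/perl
# Sketch of a .git/hooks/post-commit script.  The "WR#12345" message
# convention and the timesheet file path are assumptions for this
# example - adapt them to your own workflow.
use strict;
use warnings;
use POSIX qw(strftime);

my $tks_file = "$ENV{HOME}/timesheet.tks";   # hypothetical location

# Pull a work request number like "WR#64609" out of a commit message
sub extract_wr {
    my($message) = @_;
    return $message =~ /WR#(\d+)/ ? $1 : undef;
}

# Subject line of the commit that just landed
my $msg = `git log -1 --pretty=%s` // '';
chomp $msg;

if (defined(my $wr = extract_wr($msg))) {
    my $date = strftime('%Y-%m-%d', localtime);
    open my $fh, '>>', $tks_file or die "Can't append to $tks_file: $!";
    # Duration left as 0.00 for later manual editing
    print {$fh} "$date # from post-commit hook\n$wr  0.00  $msg\n";
    close $fh;
}
```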
While I definitely agreed that the TKS command-line front-end to our official time-keeping web app was an improvement, what I immediately desired was a web front-end to TKS. And thus TKS-Web was born.
TKS-Web provides a calendar-style week view that you can use to enter and edit your timesheeting data. Activities can be created, copied, pasted, dragged, resized and deleted using hotkeys, mouse or touch gestures. The data is stored in a simple SQLite DB using a JSON/REST API provided via Dancer. A menu option (or API) allows you to export activity data in TKS format or you can ignore the TKS connection and extract the data directly from the DB. In my case, the mobile-friendly interface allows me to review and revise the day's activities during my commute home on the train.
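Extracting the data directly from the DB might look something like this with DBI. The table and column names below are guesses rather than the actual TKS-Web schema, and I'm using an in-memory SQLite database with one sample row so the snippet stands alone; you'd point `dbname` at the real database file instead:

```perl
use strict;
use warnings;
use DBI;   # with DBD::SQLite installed

# In-memory DB so the example is self-contained; the schema here
# is invented, not the real TKS-Web schema
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });

$dbh->do(q{
    CREATE TABLE activity (
        date        TEXT,
        wr          TEXT,
        hours       REAL,
        description TEXT
    )
});

$dbh->do(q{INSERT INTO activity VALUES (?, ?, ?, ?)}, undef,
         '2014-01-08', '63778', 1.5,
         'Assist Cheryl with month-end reporting');

# Total hours per work request for the day
my $rows = $dbh->selectall_arrayref(q{
    SELECT wr, SUM(hours) AS total
      FROM activity
     WHERE date = ?
     GROUP BY wr
}, { Slice => {} }, '2014-01-08');

printf "%-6s %5.2f\n", $_->{wr}, $_->{total} for @$rows;
```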
Tracking time can be a chore but finding or building a tool that matches the way you work can reduce the drudgery.
I thought it might be interesting to assemble a stop-motion animation of these changes. The result is: Map of CPAN – The Movie.
As with the static map, the colours don't really have any meaning beyond grouping together modules that share a namespace. The colours have even less meaning in the animation since I've used a static namespace-to-colour mapping which doesn't change as the areas move.
If you look closely you'll see little dark blue squares flashing by. Each flash represents the upload of a new distribution (i.e. not a new release of an existing distribution).
Here's a simple example to illustrate what the notifications plugin does. Imagine you've SSH'd into a server to kick off some long running command and you'd like to be notified when the command finishes. In this example, I'm running a database restore:
$ pg_restore -d acmecrm crm.pgdump; bnotify 'DB is restored!'
When the pg_restore completes, the bnotify command will be run. bnotify is an alias for bcvi, which will send a message back to your workstation to pop up a desktop notification.
The new feature is that bnotify now accepts a --idle option to send you a notification when the TTY is idle. The reason I wanted this is that I'm currently doing a bunch of server upgrades and although the upgrade process mostly runs unattended, I do get prompted for questions at intervals. Here's how I use the new feature:
$ bnotify --idle
Starting background process to monitor /dev/pts/0 for 20 second idle period
Kill monitor with: bnotify --kill
$ sudo apt-get dist-upgrade
[sudo] password for grant:
Reading package lists... Done
Building dependency tree
The following NEW packages will be installed:
...
I will then leave the upgrade running and switch to do something in another window (or desktop). Some time later, the upgrade process will either complete or will pause to ask me how I'd like to deal with a possible conflict with a new version of a config file that wants to replace one I've modified. Either way, bnotify will notice that there's been no output on the terminal for some amount of time (default 20 seconds) and will signal my workstation to alert me with a message: "Notification from [servername]. Terminal is idle".
In my case, the workstation end of the notification process is handled by Desktop::Notify, which works on Linux desktops (e.g. GNOME 2/3, Xfce, and probably KDE). But you could replace that with something that talks to Growl on a Mac.
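For reference, sending a notification with Desktop::Notify only takes a few lines. This sketch mimics the message above (it needs to run inside a desktop session, since the module talks to the notification daemon over the D-Bus session bus):

```perl
use strict;
use warnings;
use Desktop::Notify;

# Connect to the desktop's notification service over D-Bus
my $notify = Desktop::Notify->new;

my $notification = $notify->create(
    summary => 'Notification from servername',
    body    => 'Terminal is idle',
    timeout => 5000,                 # milliseconds
);

$notification->show;
```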
Links: