+----+----+----+
| | 3 | 17 |
+----+----+----+
| 5 | | |
+----+----+----+
| 13 | | 7 |
+----+----+----+
There are five prime numbers in a 3x3 grid, and the goal is to fill in the empty cells with four other prime numbers, so that the sum of every row, every column, and both diagonals is also a prime number, less than 100. Each number can only be used once, and this applies to the numbers in the grid as well as to all the sums. Lastly, the sum of all these numbers must be a prime number as well (greater than 100, obviously).
It took me quite a while to find a solution — but when I finally did, I had not one, but (at least) two solutions. The puzzle description didn’t mention anything about multiple solutions, so I thought I had made a mistake along the way. However, having triple-checked all the math, I couldn’t find any errors. I decided it was time to use the force (I couldn’t miss the opportunity to use this phrase, considering the date when I’m posting this) — namely, the brute force.
My plan was to write a quick little program to verify all the possible solutions, and thus find out if a) there is indeed more than one correct solution, and b) the ones I found are among the correct ones. And, since recently I have started playing with Perl 6, I thought it might be fun to use it for this purpose.
Checking all possible solutions meant filling in the blanks with all four-element permutations of prime numbers less than 100 (excluding the ones already there in the grid) — so I first generated a list of numbers to pick from. This was trivial with the built-in is-prime method:
my @primes = grep *.is-prime, 1..100;
I eliminated the already used numbers using the set difference operator (-) (and the keys subroutine, since I wanted an array instead of a set):
@primes = keys @primes (-) (3, 5, 7, 13, 17);
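For anyone who wants to sanity-check the candidate list outside Perl 6, here is the same idea sketched in Python (my own illustration, not part of the original solution):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Primes below 100, minus the five already placed in the grid
used = {3, 5, 7, 13, 17}
candidates = [p for p in range(1, 101) if is_prime(p) and p not in used]
```

There are 25 primes below 100, so removing the five given ones leaves 20 candidates to fill the four blanks with.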
I then had to check every permutation of four numbers from that list — in other words, select all possible four-element combinations and work out all their permutations. Perl 6 apparently comes with batteries and a nuclear reactor included, so there are built-in methods for combinations and permutations. Getting what I wanted was then easy-peasy:
.permutations for @primes.combinations(4)
With the help of the prefix | operator, I passed all those four-element lists to the not-yet-written check subroutine:
check(|$_) for (|.permutations for @primes.combinations(4));
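The combinations-then-permutations pattern is equivalent to enumerating ordered four-tuples directly; a quick Python check of that equivalence (illustrative, using a small list):

```python
from itertools import combinations, permutations

items = [2, 3, 5, 7, 11]

# The approach above: all 4-element combinations, then every permutation of each
via_combinations = [p for c in combinations(items, 4) for p in permutations(c)]

# Direct approach: ordered 4-tuples in one call
direct = list(permutations(items, 4))
```

Both enumerate exactly the same set of candidate argument lists, just grouped differently.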
The purpose of the check subroutine was to verify if the semi-random set of numbers happened to form a correct solution, by applying all the puzzle’s rules. I named the arguments based on their corresponding column/row indices:
sub check($a11, $a22, $a32, $a23) {
The first thing to do was to put the numbers into the grid, represented by a two-dimensional array:
my @grid = (
$a11, 3, 17 ;
5, $a22, $a32 ;
13, $a23, 7
);
I then calculated the column/row/diagonal sums (the reduction meta operator was very helpful here), and put all the numbers in a single array:
my @numbers = (
|@grid[0], ([+] @grid[0]), # First row, sum of first row
|@grid[1], ([+] @grid[1]), # Second row, sum of second row
|@grid[2], ([+] @grid[2]), # Third row, sum of third row
@grid[0;2] + @grid[1;1] + @grid[2;0], # Sum of anti-diagonal
([+] @grid[^3;0]), # Sum of first column
([+] @grid[^3;1]), # Sum of second column
([+] @grid[^3;2]), # Sum of third column
@grid[0;0] + @grid[1;1] + @grid[2;2] # Sum of main diagonal
);
Having collected all the numbers, I needed to check if they satisfied the puzzle’s conditions. First, I checked if there were no duplicate numbers, by comparing the array against the list of its unique elements:
return if @numbers != @numbers.unique;
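The same duplicate test translates directly to most languages; for instance, in Python (my own sketch):

```python
def all_unique(numbers):
    # A list contains duplicates exactly when the set of its
    # distinct elements is smaller than the list itself.
    return len(numbers) == len(set(numbers))
```

Applied to the 17 collected numbers, this rejects any candidate solution that reuses a value anywhere in the grid or among the sums.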
Then, I checked for any numbers that were not prime or were greater than 100:
return if grep { !$_.is-prime || $_ > 100 }, @numbers;
The last thing to do was to calculate the sum of all the numbers and check if it was prime as well:
return if !is-prime [+] @numbers;
If the candidate solution made it this far, it had to be correct — so I wanted it printed out. With the numbers nicely arranged in a single flat array, it was easy to make a simple ASCII-art diagram-like format string (using the fancy new q:to heredoc syntax):
my $fmt = q:to "END";
| %3s | %3s | %3s | %3s
+-----+-----+-----+
| %3s | %3s | %3s | %3s
+-----+-----+-----+
| %3s | %3s | %3s | %3s
/ \
%3s %3s %3s %3s %3s
%3s
END
…and pass it to good old printf along with all the prime numbers and the final sum appended at the end:
printf $fmt, @numbers, [+] @numbers;
And with that, my brutal solution was complete — it was time for the moment of truth. I ran the program, and was soon happy to see that it found both my solutions, as well as two other ones. Hooray, I might not be a moron after all!
Armed with this hard proof, I contacted the author of the puzzle to ask about the multiple solutions. It turned out the description was in fact missing one crucial sentence — it should have stated that there was more than one solution, and the correct one was that for which the final sum was the smallest. Solved, both the puzzle and the mystery.
Thanks, Perl 6! A++, will use again.
Here’s the complete code:
#!/usr/bin/env perl6

my @primes = grep *.is-prime, 1..100;
@primes = keys @primes (-) (3, 5, 7, 13, 17);

check(|$_) for (|.permutations for @primes.combinations(4));

sub check($a11, $a22, $a32, $a23) {
    my @grid = (
        $a11,    3,   17 ;
           5, $a22, $a32 ;
          13, $a23,    7
    );

    my @numbers = (
        |@grid[0], ([+] @grid[0]),   # First row, sum of first row
        |@grid[1], ([+] @grid[1]),   # Second row, sum of second row
        |@grid[2], ([+] @grid[2]),   # Third row, sum of third row
        @grid[0;2] + @grid[1;1] + @grid[2;0],   # Sum of anti-diagonal
        ([+] @grid[^3;0]),           # Sum of first column
        ([+] @grid[^3;1]),           # Sum of second column
        ([+] @grid[^3;2]),           # Sum of third column
        @grid[0;0] + @grid[1;1] + @grid[2;2]    # Sum of main diagonal
    );

    return if @numbers != @numbers.unique;
    return if grep { !$_.is-prime || $_ > 100 }, @numbers;
    return if !is-prime [+] @numbers;

    my $fmt = q:to "END";
| %3s | %3s | %3s | %3s
+-----+-----+-----+
| %3s | %3s | %3s | %3s
+-----+-----+-----+
| %3s | %3s | %3s | %3s
/ \
%3s %3s %3s %3s %3s
%3s
END

    printf $fmt, @numbers, [+] @numbers;
}
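As an independent cross-check (my own sketch in Python, not part of the original program), the whole search fits in a few lines; the names a, b, c, d below are just my labels for the four blanks, reading left to right, top to bottom:

```python
from itertools import permutations

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

given = {3, 5, 7, 13, 17}
candidates = [p for p in range(2, 100) if is_prime(p) and p not in given]

def check(a, b, c, d):
    """Return True if filling the blanks with a, b, c, d solves the puzzle."""
    grid = [[a, 3, 17], [5, b, c], [13, d, 7]]
    sums = ([sum(row) for row in grid] +
            [sum(col) for col in zip(*grid)] +
            [grid[0][0] + grid[1][1] + grid[2][2],   # main diagonal
             grid[0][2] + grid[1][1] + grid[2][0]])  # anti-diagonal
    numbers = [n for row in grid for n in row] + sums
    return (len(numbers) == len(set(numbers))               # nothing reused
            and all(is_prime(n) and n < 100 for n in numbers)
            and is_prime(sum(numbers)))                     # grand total prime

solutions = [q for q in permutations(candidates, 4) if check(*q)]
```

Under these rules, (11, 41, 37, 23) is one valid filling (row sums 31, 83, 43; column sums 29, 67, 61; diagonals 59 and 71; grand total 601, which is prime), and it is not the only one — consistent with the multiple solutions described above.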
Anyway, I might change it so that the URL is passed to the server, which then does the download.
Looking around, however, the only thing I could find was the pod2html page at the CPAN Search site, which allows you to upload a POD file, have it processed by pod2html, and displayed with CPAN style. I thought it might be a good idea to try building something more user-friendly, with features like editing POD in the browser, drag-and-drop file uploads, etc.
And what better time for a little project like this than a weekend when you're ill and not supposed to leave your apartment? Well, that's what my last weekend was like -- two days of coughing and coding, and here's the result: POD Web View.
The application allows you to upload a POD file, get it from a URL, or paste its contents and edit it on the fly. The generated HTML can be displayed in the style of your choice, mimicking how it would look on CPAN, MetaCPAN, or GitHub.
To give credit where it's due, the backend is built on Dancer and uses Pod::Simple::HTML to generate the HTML preview. The user interface is made with Twitter Bootstrap, a lot of JavaScript/jQuery code, and the amazing Ace editor.
I hope this will be useful for at least a few fellow Perl developers, like it already is for me. Please note that at this point this is still a work in progress -- the backend code needs some more work (e.g. basic sanity checks), and there are a couple of UI issues that I'm aware of (and likely a dozen more that I'm not). Anyway, be my guest and give it a try, and if you'd like to report an issue, or maybe help me with the development (more than welcome), I've put the project up on GitHub.
I sort of released it (on GitHub only, not on CPAN) back in January; at that time, the code was passing the tests in Plack::Test::Suite when running as a regular HTTP/HTTPS server. My next goal, before considering the module ready to be released on CPAN, was to make it pass those tests in SPDY mode. This meant I needed to add support for SPDY to good old LWP::UserAgent, which was used as the HTTP client in the Plack tests.
Over the weeks and months that followed, I made a few attempts at tackling this problem, but had a hard time wrapping my head around the architecture of LWP::UserAgent and figuring out a reasonable way to add SPDY into the mix. Having very little time to devote to this project, I didn't get anywhere with it.
A few days ago, I was delighted to find out that the problem went away by itself, since Plack switched from using LWP::UserAgent to its own Plack::LWPish, which is built around HTTP::Tiny. Now I needed to implement SPDY in HTTP::Tiny, which is, well, tiny when compared to LWP::UserAgent, so the task seemed much easier. I gave it a shot this weekend and got it working in a matter of hours, spawning HTTP::Tiny::SPDY, a subclass of HTTP::Tiny that works the same as the original, but can also do SPDY.
I immediately used the module for the intended purpose of testing Arriba in SPDY mode, and, as expected, this revealed many problems, but most of them turned out to be easy to fix (except for one, which took me more than three hours just because I didn't RTFM in the first place -- will I ever learn?). Soon, Arriba running SPDY was passing all the tests in the suite, which I happily celebrated with a tasty porter beer. I am now cleaning up the code to prepare it to be finally released on CPAN.
And speaking of CPAN, HTTP::Tiny::SPDY is already there, as well as on GitHub. Like Arriba, this is an early release, the code is hackish and immature, and I take zero responsibility for the pain and suffering that you may bring upon yourself when you try to use it. But if you do, I crave your feedback.
In case you haven’t heard of it, SPDY is a networking protocol developed at Google with the goal of reducing web page load latency. It is currently used by some of Google’s services (including search and Gmail) and by Twitter, and is supported natively in Firefox, Chrome, and Opera — so if you have visited any of those sites with any of those browsers, it’s highly likely that your web content was transmitted by means of SPDY. An official standard for the protocol is in the works.
There was a SPDY module on CPAN that looked promising — Net::SPDY by Lubomir Rintel. While not a complete implementation of the protocol, it seemed to be working, as I found out by playing with the sample client and server scripts included in the distribution.
After a few days of reading the SPDY specs, minor reverse engineering of other implementations, and blatantly copying (a lot of) code from Starman, I was able to put together a preforking web server operational enough to run a few simple Dancer applications. It’s a mess and nowhere near being ready for production use, but I’m happy to share it to maybe get some feedback from you fine folks — I’ve put it up on GitHub. I intend to continue working on it and hopefully one day turn it into something half-decent.
If you want to run it, be aware that you currently need to use the Net::SPDY module from my forked repository instead of the original one, since in the original there’s some test code that breaks normal server communication.
About the project name — I followed the idea of using friendly names like Starman and Twiggy, and since SPDY reminds me of Speedy Gonzales, I used a part of Speedy’s catch phrase (“¡Ándele! ¡Ándele! ¡Arriba! ¡Arriba!”). However, I know I’m terrible at naming things, so I’m open to suggestions for a better name.
]]>The "MK__^..."
string is the uuencoded bitmap. The second call to unpack
uudecodes it, returning a string of bytes, each representing 8 "pixels" of the image. The =~/./g part
turns it into a list of bytes (you could also do that using split//
), which is then passed as the second argument to map
.
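If you have never met uuencoding, here is a minimal round-trip in Python's binascii module, which implements the same encoding (an illustration of the mechanism, not the one-liner's actual data):

```python
import binascii

payload = bytes([0b10110100, 0b00101101, 0b11110000])

# b2a_uu produces one uuencoded line: a length character, the data
# mapped onto printable ASCII, and a trailing newline
line = binascii.b2a_uu(payload)

# a2b_uu reverses it, recovering the original bytes --
# this is what the one-liner's second unpack does
decoded = binascii.a2b_uu(line)
```

The length character is chr(32 + number_of_bytes), so a three-byte payload yields a line starting with "#".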
The first argument to map has another call to unpack, which takes each byte and transforms it into a string with its binary representation (e.g. "00101101"), with 0s and 1s corresponding to pixels. This string is then XORed (^) with another string made of eight copies (x8) of the character with code point 9635, i.e. U+25A3 (written in the shorter version string notation, vXXXX), which happens to produce lighter and darker blocks for 1s and 0s -- so the result of a single iteration of map is 8 pixels of the image.
The (v10)[++$n%9] part adds a newline character every 9 bytes (in other words, every 72 pixels). (v10) is the newline character dressed up in a one-element list. The $n variable is a counter that is incremented with each iteration, and ++$n%9 returns 0 when its current value is divisible by 9 -- which means that every 9 iterations, the array index will be 0, and the newline character will be returned and concatenated to the produced pixels. All the strings of 8 pixels (plus the occasional newlines) are returned by map as a list, and are then, finally, printed.
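The byte-to-pixels step can be mimicked in Python: format each byte as 8 binary digits, then map 0s and 1s to two block characters (a sketch of the idea; the one-liner gets the same effect more cryptically from the XOR trick):

```python
def byte_to_pixels(byte, dark="\u2593", light="\u2591"):
    """Render one byte as 8 'pixels', one per bit, MSB first."""
    bits = format(byte, "08b")  # e.g. 45 -> "00101101"
    return "".join(dark if b == "1" else light for b in bits)
```

Mapping each byte of the decoded bitmap this way, 8 pixels at a time, reproduces the image row fragment by row fragment.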
perl -C -e'print map+(v9635 x8^unpack B8).(v10)[++$n%9],(unpack u,"MK__^=^WW_S_WK)FNO\$3F5U\$7BJJ.=>U51S-WK)GNM>UF=W%[K[N>?___SW__")=~/./g'
say+(Fizz)[++$_%3].Buzz x/.?[50]$/||$_ until$`
In keeping with this year's Dancer Advent Calendar trend, the example app will be built on Dancer 2, but it should work just as well with Dancer 1.
Alright, let's get to work.
The Data
Our web application will be used to search through documents stored in a MySQL database. We'll use a simple table with the following structure:
CREATE TABLE documents (
    id int NOT NULL AUTO_INCREMENT,
    title varchar(200) NOT NULL,
    contents_text text NOT NULL,
    contents_html text NOT NULL,
    PRIMARY KEY (id)
);
Each document has a unique ID, a title, and contents, stored as both plain text and as HTML. We need the two formats for different purposes -- HTML will be used to display the document in the browser, while plain text will be fed to the indexing mechanism of the search engine (because we do not want to index the HTML tags, obviously).
We can populate the database with any kind of document data -- for my test version, I used a simple script to fill the database with POD documentation extracted from the Dancer distribution. The script is included at the end of this article, in case you'd like to use it yourself.
Sphinx can be installed pretty easily, using one of the pre-compiled .rpm or .deb packages, or the source tarball. These are available at the download page at SphinxSearch.com -- grab the one that suits you and follow the installation instructions.
When Sphinx is installed, it needs to be configured before we can play with it. Its main configuration file is usually located at /etc/sphinx/sphinx.conf. For our purposes, a very basic setup will do -- we'll put the following in the sphinx.conf file:
source documents
{
    type     = mysql

    sql_host = localhost
    sql_user = user
    sql_pass = hunter1
    sql_db   = docs

    sql_query = \
        SELECT id, title, contents_text FROM documents
}

index test
{
    source       = documents
    charset_type = utf-8
    path         = /usr/local/sphinx/data/test
}
This defines one source, which is what Sphinx uses to gather data, and one index, which will be created by processing the collected data and will then be queried when we perform the searches. In our case, the source is the documents database that we just created. The sql_query directive defines the SELECT query that Sphinx will use to pull the data, and it includes all the fields from the documents table, except contents_html -- like we said, HTML is not supposed to be indexed.
That's all that we need to start using Sphinx. After we make sure the searchd daemon is running, we can proceed with indexing the data. We call indexer with the name of the index:
$ indexer test
It should spit out some information about the indexing operation, and when it's done, we can do our first search:
$ search "plugin"
index 'test': query 'plugin ': returned 8 matches of 8 total in 0.002 sec

displaying matches:
1. document=19, weight=2713
2. document=44, weight=2694
3. document=20, weight=1713
4. document=2, weight=1672
5. document=1, weight=1640
6. document=13, weight=1640
7. document=27, weight=1601
8. document=28, weight=1601
Apparently, there are 8 documents in the Dancer documentation with the word plugin, and the one with the ID of 19 is the highest ranking result. Let's see which document that is:
mysql> SELECT title FROM documents WHERE id = 19;
+----------------------------------------------------+
| title                                              |
+----------------------------------------------------+
| Dancer::Plugin - helper for writing Dancer plugins |
+----------------------------------------------------+
It's the documentation for Dancer::Plugin, and it makes total sense that this is the first result for the word plugin. Sphinx setup is thus ready and we can get to the web application part of our little project.
We'll start with a simple web application (let's call it DancerSearch) that just shows a search form, and then we'll extend it with more features. It will be using Dancer 2.0, and the Dancer::Plugin::Database plugin (we'll use it to access the documents database). The code below is the initial lib/DancerSearch.pm file:
package DancerSearch;

use Dancer 2.0;
use Dancer::Plugin::Database;

get '/' => sub {
    template 'index';
};

1;
We're also going to need a little startup script, bin/app.pl:
#!/usr/bin/env perl

use Dancer 2.0;
use DancerSearch;

start;
And a simple layout -- views/layouts/main.tt:
<!doctype html>
<html>
<head>
    <title>Dancer Search Engine</title>
    <link rel="stylesheet" href="css/style.css">
</head>
<body>
    <h1>Dancer Search Engine</h1>
    <div id="content">
        [% content %]
    </div>
</body>
</html>
And, of course, a template for our index page. For now it'll just contain a search form -- views/index.tt:
<form action="/" method="get">
    Search query:
    <input type="text" name="phrase">
    <input type="submit" value="Search">
</form>
Last but not least, we need a configuration file to tell our app which layout we want to use, and how to connect to our documents database using the Dancer::Plugin::Database plugin. This goes into config.yml:
layout: main

plugins:
    Database:
        driver: mysql
        host: localhost
        database: docs
        username: user
        password: hunter1
We can now launch the application, and it will greet us with a search form. Which, unsurprisingly, doesn't work yet. Let's wire it up to Sphinx.
There is a CPAN module called Sphinx::Search that provides a Perl interface to Sphinx, and we're going to use it in our app. We put use Sphinx::Search in DancerSearch.pm, and add the following piece of code before the get '/' route handler:
# Create a new Sphinx::Search instance
my $sph = Sphinx::Search->new;

# Match all words, sort by relevance, return the first 10 results
$sph->SetMatchMode(SPH_MATCH_ALL);
$sph->SetSortMode(SPH_SORT_RELEVANCE);
$sph->SetLimits(0, 10);
This creates a new instance of Sphinx::Search (which will be used to talk to the Sphinx daemon and do the searches), and sets up a few basic options, such as how many results should be returned and in what order. Now comes the most interesting part -- actually performing a search in our application. We insert this chunk of code at the beginning of the get '/' route handler:
if (my $phrase = params('query')->{'phrase'}) {
    # Send the search query to Sphinx
    my $results = $sph->Query($phrase);

    my $retrieved_count = 0;
    my $total_count;
    my $documents = [];

    if ($total_count = $results->{'total_found'}) {
        $retrieved_count = @{$results->{'matches'}};

        # Get the array of document IDs
        my @document_ids = map { $_->{'doc'} } @{$results->{'matches'}};

        # Join the IDs to use in SQL query (the IDs come from Sphinx, so we
        # can trust them to be safe)
        my $ids_joined = join ',', @document_ids;

        # Select documents, in the same order as returned by Sphinx
        # (the contents of $ids_joined comes from Sphinx)
        my $sth = database->prepare('SELECT id, title FROM documents ' .
            "WHERE id IN ($ids_joined) ORDER BY FIELD(id, $ids_joined)");
        $sth->execute;

        # Fetch all results as an arrayref of hashrefs
        $documents = $sth->fetchall_arrayref({});
    }

    # Show search results page
    return template 'index', {
        phrase => encode_entities($phrase),
        retrieved_count => $retrieved_count,
        total_count => $total_count,
        documents => $documents
    };
}
Let's go through what is happening here. First, we check if there was actually a search phrase in the query string (params('query')->{'phrase'}). If there was one, we pass it to the $sph->Query() method, which queries Sphinx and returns the search results (the returned data structure is briefly explained in the description of the Query method in the Sphinx::Search documentation).
We then check the number of results ($results->{'total_found'}), and if it's greater than zero, it means we found something and we need to retrieve the documents data from the database. Sphinx only returns the IDs of the matching documents (as shown earlier in the test search that we did using the command line), so we need to send a query to the database to get the actual data, such as document titles that we want to display in the results (note that we're using the ORDER BY FIELD construct in the SELECT query to maintain the same order as the list returned by Sphinx).
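ORDER BY FIELD(id, ...) makes MySQL return the rows in the order of the given ID list. If you ever need the same reordering in application code instead, it looks like this (a Python sketch with made-up rows, just to show the semantics):

```python
def order_like(rows, ids):
    """Reorder DB rows (dicts with an 'id' key) to match a ranked ID list,
    mirroring MySQL's ORDER BY FIELD(id, ...)."""
    rank = {doc_id: pos for pos, doc_id in enumerate(ids)}
    return sorted(rows, key=lambda row: rank[row["id"]])

# Rows as the database might return them, in arbitrary order
rows = [{"id": 2, "title": "B"}, {"id": 19, "title": "A"}, {"id": 44, "title": "C"}]

# Relevance order as returned by the search engine
ranked_ids = [19, 44, 2]
```

Doing it in SQL saves a round trip through application code, which is why the query above pushes the ordering into the database.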
When we have the documents data ready, we pass it along with other information (such as the total number of results) to be displayed in our index template. But, hold on a second -- the template is not yet ready to display the results, it only shows the search form. Let's fix that now -- below the search form, we add the following code:
[% IF phrase %]
    <p>Search results for <strong>"[% phrase %]"</strong></p>
    [% IF total_count %]
        <p>
            Found [% total_count %] hits.
            Showing results 1 - [% retrieved_count %].
        </p>
        <ol>
        [% FOREACH document IN documents %]
            <li>
                <a href="/document/[% document.id %]">[% document.title %]</a>
            </li>
        [% END %]
        </ol>
    [% ELSE %]
        <p>
            No hits -- try again!
        </p>
    [% END %]
[% END %]
This displays the phrase that was submitted, the number of hits, and a list of results (or a "no hits" message if there weren't any).
And you know what? We're now ready to actually do a search in the browser:
Neat, we have a working search application! We're just missing one important thing, and that is being able to access a document that was found. The results link to /document/:document_id, but that route isn't recognized by our app. No worries, we can fix that easily:
# Get the document with the specified ID
get '/document/:id' => sub {
    my $sth = database->prepare('SELECT contents_html FROM documents ' .
        'WHERE id = ?');
    $sth->execute(params->{'id'});

    if (my $document = $sth->fetchrow_hashref) {
        return $document->{'contents_html'};
    }
    else {
        status 404;
        return "Document not found";
    }
};
This route handler is pretty straightforward: we grab the ID from the URL, use it in a SELECT query to the documents table, and return the HTML contents of the matching document (or a 404 page, if there's no document with that ID).
This is the complete DancerSearch.pm file:
package DancerSearch;

use Dancer 2.0;
use Dancer::Plugin::Database;

use HTML::Entities qw( encode_entities );
use Sphinx::Search;

# Create a new Sphinx::Search instance
my $sph = Sphinx::Search->new;

# Match all words, sort by relevance, return the first 10 results
$sph->SetMatchMode(SPH_MATCH_ALL);
$sph->SetSortMode(SPH_SORT_RELEVANCE);
$sph->SetLimits(0, 10);

get '/' => sub {
    if (my $phrase = params('query')->{'phrase'}) {
        # Send the search query to Sphinx
        my $results = $sph->Query($phrase);

        my $retrieved_count = 0;
        my $total_count;
        my $documents = [];

        if ($total_count = $results->{'total_found'}) {
            $retrieved_count = @{$results->{'matches'}};

            # Get the array of document IDs
            my @document_ids = map { $_->{'doc'} } @{$results->{'matches'}};

            # Join the IDs to use in SQL query (the IDs come from Sphinx,
            # so we can trust them to be safe)
            my $ids_joined = join ',', @document_ids;

            # Select documents, in the same order as returned by Sphinx
            my $sth = database->prepare('SELECT id, title FROM documents ' .
                "WHERE id IN ($ids_joined) ORDER BY FIELD(id, $ids_joined)");
            $sth->execute;

            # Fetch all results as an arrayref of hashrefs
            $documents = $sth->fetchall_arrayref({});
        }

        # Show search results page
        return template 'index', {
            phrase => encode_entities($phrase),
            retrieved_count => $retrieved_count,
            total_count => $total_count,
            documents => $documents
        };
    }
    else {
        # No search phrase -- show just the search form
        template 'index';
    }
};

# Get the document with the specified ID
get '/document/:id' => sub {
    my $sth = database->prepare('SELECT contents_html FROM documents ' .
        'WHERE id = ?');
    $sth->execute(params->{'id'});

    if (my $document = $sth->fetchrow_hashref) {
        return $document->{'contents_html'};
    }
    else {
        status 404;
        return "Document not found";
    }
};

1;
What we've built is still a very basic application, lacking many features -- the most obvious one that's missing is pagination, and being able to access results further down the list, not just the first ten. However, the code can be easily extended, thanks to the flexibility and ease of use of both Dancer and Sphinx. With a bit of effort, it can be made into a useful search app for a knowledge base site, or a wiki.
I think this application is a good example of how Dancer benefits from being part of the Perl ecosystem, giving web developers the ability to make use of the thousands of modules on CPAN (like we just did with Sphinx::Search). This makes it possible to build working prototypes of web applications and implement complex features in a very short time.
As promised, this is the script that I used to extract the POD from the Dancer distribution and store it in the MySQL database:
#!/usr/bin/env perl

package MyParser;

use strict;
use vars qw(@ISA);
use Pod::Simple::PullParser ();
BEGIN { @ISA = ('Pod::Simple::PullParser') }

use DBI;
use File::Find;
use Pod::Simple::Text;
use Pod::Simple::HTML;

# Variables to hold the text and HTML produced by POD parsers
my ($text, $html);

# Create parser objects and tell them where their output will go
(my $parser_text = Pod::Simple::Text->new)->output_string(\$text);
(my $parser_html = Pod::Simple::HTML->new)->output_string(\$html);

# Initialize database connection
my $dbh = DBI->connect("dbi:mysql:dbname=docs;host=localhost", "user",
    "hunter1") or die $!;

sub run {
    my $self = shift;
    my (@tokens, $title);

    while (my $token = $self->get_token) {
        push @tokens, $token;

        # We're looking for a "=head1 NAME" section
        if (@tokens > 5) {
            if ($tokens[0]->is_start && $tokens[0]->tagname eq 'head1'
                && $tokens[1]->is_text && $tokens[1]->text =~ /^name$/i
                && $tokens[4]->is_text)
            {
                $title = $tokens[4]->text;
                # We have the title, so we can ignore the remaining tokens
                last;
            }

            shift @tokens;
        }
    }

    # No title means no POD -- we're done with this file
    return if !$title;

    print "Adding: $title\n";

    $parser_text->parse_file($self->source_filename);
    $parser_html->parse_file($self->source_filename);

    # Add the new document to the database
    $dbh->do("INSERT INTO documents (title, contents_text, " .
        "contents_html) VALUES(?, ?, ?)", undef, $title, $text, $html);

    # Clear the content variables and reinitialize parsers
    $text = $html = "";
    $parser_text->reinit;
    $parser_html->reinit;
}

my $parser = MyParser->new;

find({
    wanted => sub {
        if (-f and /\.pm$|\.pod$/) {
            $parser->parse_file($File::Find::name);
            $parser->reinit;
        }
    },
    no_chdir => 1
}, shift || '.');
You can run it with one argument, which is the location of the directory that will be scanned (recursively) for .pm/.pod files, or with no arguments, in which case the script will work with the current directory.
(Note: The script makes use of Pod::Simple, which I'm not very familiar with, so it's possible that I'm doing something stupid with it -- if that's the case, please let me know.)
This post was originally published as part of the 2012 Dancer Advent Calendar.
As a side note, I think if you're writing code that is supposed to be portable, you don't often need to work with absolute paths and deal with things like drive letters. In most cases, you need a single location that is your point of reference, such as the home directory of the user running your program, or the directory where your application is installed, and then you use relative paths to work your way from there. This is where this module might make things a tiny bit simpler, that's all.
Moreover, there are cases when you do need to build a system-specific path -- one example that I can think of is when you want to output a path to the user, e.g.:
print "Your log file is located at $path";
If you want your program to be user-friendly, you should display the path in a format that the user is familiar with.
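For comparison, Python's pathlib draws the same distinction: you can compute with one portable, relative path and only render it in the platform's native notation when showing it to the user (an illustrative sketch, not related to the Perl module discussed here):

```python
from pathlib import PurePosixPath, PureWindowsPath

# One logical, relative location...
relative = "logs/app.log"

# ...rendered in whichever notation the user expects
posix_style = str(PurePosixPath(relative))      # forward slashes
windows_style = str(PureWindowsPath(relative))  # backslashes
```

The internal representation stays the same; only the final, user-facing rendering differs per platform.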