I now have a basic INI file parser written in Perl 6. It's clumsy and I'm quite unsure about packaging. Any and all suggestions welcome, including patches!
The basic usage is as follows. Assume we have an INI file like this (note that we trim leading and trailing whitespace from keys, values, and section names):
host = http://localhost/
port = 3333
[admin]
; these only apply to admin users
name = Administrator
access = all
[ anonymous ]
name = Guest
access = none
We can read it like this:
my Config::INI $config .= new;
$config.read($filename);
# gets the host and port properties
for $config.properties.kv -> $k, $v {
say "$k => $v";
}
# gets the name and access properties under admin
my %admin = $config.properties('admin');
# dies
$config.properties('no_such_section');
Yeah, it needs a lot of work, but Perl 6 still twists my mind from time to time.
If you want to run the tests, make sure your Rakudo is up to date and:
At some point, in a meeting with a team of "Developers", a colleague and I were discussing the process of collecting data during a communal assembly. This colleague argued that it was too complicated to go around asking everyone at the assembly for their first and last names, that this would slow down the registration process, and that he was therefore only going to collect the survey data, which would be anonymous.
I suggested that he ask only for the Cédula de Identidad (national ID number) and, starting from that, use the CNE's database to obtain the first and last names.
To this suggestion, the colleague replied asking whether I was crazy: getting hold of the CNE's database was a bureaucratic matter that would take him several weeks and several official requests.
Below, in a couple of lines written very quickly and with no concern for style, I show how we can use Perl to automatically query the portal of the Consejo Nacional Electoral.
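Something along these lines; the lookup URL and the regex that digs the name out of the HTML are guesses about the CNE portal's layout, so adjust both to whatever the page actually returns:
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $cedula = shift @ARGV or die "usage: $0 <cedula>\n";

my $ua  = LWP::UserAgent->new( timeout => 15 );
my $url = 'http://www.cne.gov.ve/web/registro_electoral/ce.php'
        . "?nacionalidad=V&cedula=$cedula";

my $res = $ua->get($url);
die 'request failed: ', $res->status_line, "\n" unless $res->is_success;

# Quick and dirty: scrape the name with a regex. A real script
# should use an HTML parser instead.
if ( $res->decoded_content =~ /Nombre[^:]*:\s*<[^>]+>\s*([^<]+)/i ) {
    print "Nombre y Apellido: $1\n";
}
else {
    print "No match -- inspect the HTML and adjust the regex\n";
}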
CPAN is one of the greatest things about Perl. However, sometimes I can't connect to the internet or I have spotty connectivity. That's why CPAN::Mini exists - it creates a local copy of CPAN. It takes up about 1.4 GB of space. However, this means that your local copy of CPAN is now out of date. You can run minicpan often, but I thought there could be a better solution, so I wrote CPAN::Mini::Live. This module allows you to have a local CPAN mirror which is instantly updated whenever new distributions are uploaded to CPAN.
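For reference, refreshing a plain CPAN::Mini mirror is a single command (the local path and remote mirror URL are whatever you choose):
minicpan -l ~/minicpan -r http://www.cpan.org/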
How could this possibly work? First off, it relies upon using a CPAN mirror that is kept up to date. Andreas Koenig has been working on a way to do that using File::Rsync::Mirror::Recent, and one mirror kept current with this method is the CPAN Testers CPAN mirror.
What better way to learn Perl 6 than to write software in it? I thought a nice, small project would be to write an INI file parser. Unfortunately, it turns out there is no standard INI format. Or if there is, it's pretty much ignored. As a result, I tried to write something that behaves in the way least surprising to people. Thus, I trim whitespace, I allow comment lines starting with ';' and '#', and I ignore blank lines. It does not yet handle quoted strings, but I want to add that later.
I finally have a (clumsy) grammar which matches, but turning it into an AST and transforming it into a useful object has failed miserably. Tips? The code below runs on the latest version of Rakudo and the first parse shows that we match, but the second match generates errors such as:
Use of uninitialized value
Use of uninitialized value
Null PMC access in find_method('new')
in Main (file <unknown>, line <unknown>)
Feeling like I need to do something besides $work with my Perl, I have recently decided to start working through the ProjectEuler.net problems. One of the ideas I'm looking to explore is building a framework of sorts that will host all the problems and provide access to many common utilities, both for solving the problems and for easing the development of the solutions.
I sort of jumped right in the other day and knocked out Problem 001 and Problem 003 while also starting to build the framework around them. I didn't start the git repository until after that, but I suspect some of that code will change anyway, as it was real quick and dirty.
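For flavour, here's the bare computation for Problem 001 (sum of the multiples of 3 or 5 below 1000), minus any framework trappings:
use strict;
use warnings;
use List::Util qw(sum);

# Sum every number below 1000 divisible by 3 or 5.
print sum( grep { $_ % 3 == 0 || $_ % 5 == 0 } 1 .. 999 ), "\n";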
Well, google this title and it'll return almost 2 million results - so it's not an original idea, but it's worth a try. So how far have we come this past year?
My favourite new feature is "method name suffixes" in Catalyst::Controller::HTML::FormFu, which allow you to split your code into blocks that are only called depending on the state of the current form. For example:
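A sketch of the idea (action and form names are made up; the full list of suffixes is in the Catalyst::Controller::HTML::FormFu docs):
package MyApp::Controller::User;
use strict;
use warnings;
use base 'Catalyst::Controller::HTML::FormFu';

# FormConfig loads the form; the suffixed methods below are only
# called when the form is in the matching state.
sub edit : Local : FormConfig {
    my ( $self, $c ) = @_;
    # runs on every request, whatever the form state
}

sub edit_FORM_NOT_SUBMITTED {
    my ( $self, $c ) = @_;
    # first display of the form: set defaults, etc.
}

sub edit_FORM_VALID {
    my ( $self, $c ) = @_;
    # submitted and passed all constraints: save, then redirect
    $c->res->redirect( $c->uri_for('/user') );
}

1;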
Schwern recently wrote about not using numbers in test names. He's right and it's something I've been guilty of in the past, but I want to recommend that people go further and start naming test programs after packages. For example, with SQL::Statement, you have the following tests:
I'm contemplating breaking a very well-established standard: Version arguments to Perl's "use" statement. The module writer is free to change the semantics of these, but despite the endless eccentricities you find on CPAN in other things, they rarely (never?) do.
So I'm having guilt feelings. Insecurities are coming out. I'm getting second thoughts. In this post I will handle these things the way many people do. Preach.
Consider a default module use like
use Marpa;
The standard (and almost universal) semantics is to load whatever version is out there. This works if you can assume that the modules you're using are well-behaved and upwardly compatible.
Or if you have strict controls on the libraries in the environment in which you are running. But what if you are dealing with software which is avowedly alpha?
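For context, the hook that makes changing the semantics possible: a version in a use statement turns into a method call, Marpa->VERSION($wanted), and a module may override VERSION. A sketch (not Marpa's actual source) that insists on an exact version match:
package Marpa;

use strict;
use warnings;

our $VERSION = '0.123';

# "use Marpa 0.123;" compiles to Marpa->VERSION(0.123), so
# overriding VERSION changes what the version argument means.
# Here: alpha software, so demand an exact match, not "at least".
sub VERSION {
    my ( $class, $wanted ) = @_;
    return $VERSION unless defined $wanted;
    die "Marpa $wanted requested, but this is $VERSION\n"
        if $wanted != $VERSION;
    return $VERSION;
}

1;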
The Common Gateway Interface was revolutionary. It gave us, for the first time, an extremely simple way to provide dynamic content via HTTP. It was one of a combination of technologies that led to the explosive growth of the Web. For anyone writing an application that runs once per HTTP request, there is no other practical option. And for such applications, CGI is almost always adequate.
But modern web applications typically run in persistent environments. For anything with more than a small trickle of traffic, we don't want the overhead of launching a new process for every hit. Even in low-traffic environments, the startup costs involved with using modern Perl frameworks like Moose and DBIx::Class can make non-persistent applications prohibitive.
We have things like mod_perl and FastCGI for easily creating persistent applications. But these applications are generally built upon emulating aspects of the stateless, non-persistent CGI protocol within a persistent environment. Even pure mod_perl applications typically receive much of their input via environment variables specified in the CGI standard, often by instantiating CGI.pm or one of its clones.
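To make that concrete, here is the familiar shape of that input model (a minimal standalone example, not from any particular codebase):
#!/usr/bin/perl
use strict;
use warnings;
use CGI;

# Whether this runs once per request or inside mod_perl/FastCGI,
# the input still arrives via the CGI environment variables and
# STDIN, mediated here by CGI.pm.
my $q = CGI->new;
print $q->header('text/plain');
printf "method: %s\n", $ENV{REQUEST_METHOD} || 'n/a';
printf "name:   %s\n", $q->param('name')    || 'n/a';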
This model is fundamentally broken. Read on for my list of reasons why CGI should not be used in persistent applications.
I was playing around with the Slope One collaborative filtering algorithm. Collaborative filtering is a way of trying to guess what users may like based on past choices. There are roughly two approaches. One is the "neighbor-to-neighbor" approach. In this approach, we try to find people who have expressed similar preferences to your own, and we use preferences that they've expressed but you haven't to guess what you might like in the future. This has a few problems. One, it tends to be computationally expensive. Two, you might have preferences identical to many other people's, but if you've expressed no preferences, or the ones you've expressed have no overlap with theirs, you lose.
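To make the contrast concrete, here is a minimal sketch of the weighted Slope One predictor itself (toy data, illustrative names):
use strict;
use warnings;

# Ratings: user => { item => rating }. Purely illustrative data.
my %ratings = (
    alice => { squid => 1.0, octopus => 0.2, cuttlefish => 0.5 },
    bob   => { squid => 1.0, octopus => 0.5 },
    carol => { squid => 0.2, cuttlefish => 0.4 },
);

# For every ordered item pair (i, j), compute the average rating
# difference dev(i,j) and the number of users who rated both.
my ( %dev, %count );
for my $user ( values %ratings ) {
    for my $i ( keys %$user ) {
        for my $j ( keys %$user ) {
            next if $i eq $j;
            $dev{$i}{$j}   += $user->{$i} - $user->{$j};
            $count{$i}{$j} += 1;
        }
    }
}
for my $i ( keys %dev ) {
    $dev{$i}{$_} /= $count{$i}{$_} for keys %{ $dev{$i} };
}

# Weighted Slope One prediction of $user's rating for $item.
sub predict {
    my ( $user, $item ) = @_;
    my ( $num, $den ) = ( 0, 0 );
    for my $j ( keys %{ $ratings{$user} } ) {
        next unless exists $count{$item}{$j};
        my $c = $count{$item}{$j};
        $num += ( $dev{$item}{$j} + $ratings{$user}{$j} ) * $c;
        $den += $c;
    }
    return $den ? $num / $den : undef;
}

printf "bob/cuttlefish: %.3f\n", predict( 'bob', 'cuttlefish' );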
I tried to update a live website with some changes. Generally, I run a production and a testing environment. Recently, however, I moved the code from SQLite to MySQL and did not create a testing DB, so changes that require changing text on the site are done in production. Not good? I know!
So the website is built in Catalyst. It originally used SQLite and was then migrated to MySQL (which had to be done manually). It uses HTML::FormHandler to display the forms, with a generic CRUD layer I added.
When trying to load the form, I get weird characters on parts of the page. From what I gathered, the data in MySQL isn't kept in UTF-8 but in latin1, yet we declare the page as UTF-8 encoded. The form wasn't displayed in UTF-8 either (which I changed using "use utf8;" in the form's .pm file, or using Encode::Guess, which yielded a better result). David Wheeler has a really interesting article on UTF-8 in Perl here.
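For what it's worth, the heart of the fix is re-decoding the bytes with the right encoding before they reach the page. A tiny illustration (not the actual site code):
use strict;
use warnings;
use Encode qw(decode);

# "café" as it comes out of a latin1 MySQL column:
my $bytes = "caf\xe9";

# Decode latin1 into Perl's internal string form; the output layer
# then encodes it as UTF-8.
my $text = decode( 'iso-8859-1', $bytes );

binmode STDOUT, ':encoding(UTF-8)';
print "$text\n";    # prints "café" on a UTF-8 terminal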
I got some failing test reports for the latest version of one of my modules. The problem turned out to be outside my own module.
A dependency of a dependency of mine (Sub::Uplevel) requires Module::Build 0.35. My module first tries to build using the installed Module::Build (in this case 0.28). When installing the dependencies, Module::Build 0.35 is installed, but it chokes on the configuration data from the older version of M::B.
Ouch.
I suspect Sub::Uplevel doesn't really need version 0.35 of M::B, but that this was a case of the auto_configure_requires option gone astray, though it certainly would be nice if M::B could handle old configuration files in a useful way.
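For anyone bitten by the same thing, that option can be turned off explicitly in Build.PL (module name illustrative):
use strict;
use warnings;
use Module::Build;

Module::Build->new(
    module_name => 'My::Module',
    license     => 'perl',
    # don't auto-add "configure_requires: Module::Build >= <current>"
    # to the generated metadata:
    auto_configure_requires => 0,
)->create_build_script;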
After typing ack 'sub foo' lib for approximately the thousandth time during some refactoring sessions, I couldn't be bothered anymore and added the following snippet to my realias (after some googling on how to get parameters into an alias, which doesn't work in bash, so I had to solve it via a bash function):
sack () {
ack "sub $1" lib
}
To find a given method in some of our labyrinthine code, I now say
~/projects/Foo-Bar$ sack annoying_method
and get a list of all occurrences.
yay!
P.S.: The name sack has nothing to do with subroutine ack, but of course comes from the Austrian saying "Gemma ned am sack, oida!" (roughly: "don't get on my nerves, dude!").
P.P.S.: Cross-posted from use.perl, because I haven't made up my mind yet if/when/how I'll migrate my blog from there to here...
As you may know, Perl is the second most popular language on GitHub. Well, that's what the page says, and that page is wrong for a variety of reasons, but first I'm going to talk about an unexpected problem at work.
In recent times we have seen a dramatic increase in the number of testers, smokers and reports. So much so that we are seeing over 400,000 reports each month. This in turn is putting a strain on the Perl NOC, especially the email and NNTP parts of the system.
I'd like to add my praise to the heap of it already piled onto NYTProf. This is a Perl profiler available on CPAN, with a very attractive HTML interface.
If you wait for your next efficiency issue before using NYTProf, you're making a mistake. For me, optimizing is no longer NYTProf's primary purpose. NYTProf is a powerful debugging tool. The count of how many times each line was executed yields marvelous insights quickly. Consider an example: a script to process a file. It is acting strangely. You don't know where to begin. Your test file is 1000 lines long. You notice certain lines in the per-line logic are not being executed 1000 times. Hmmm.
Simply checking for lines which are not executed at all is a surprisingly powerful technique. The HTML format allows you to skim the code, looking for these. This is particularly useful when the question is not localized or some matter of detail, but whether your overall logic makes sense, and whether your code actually implements the logic/algorithm you intended.
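If you haven't tried it yet, the basic workflow is two commands (script name is illustrative):
perl -d:NYTProf yourscript.pl   # profile the run; writes ./nytprof.out
nytprofhtml                     # render it as HTML under ./nytprof/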
Adam Kaplan, Tim Bunce and Steve Peters, thank you.