We’ve got a start on a track about the Perl Data Language (PDL) at YAPC::NA 2012. This area of Perl is a great fit with our “Perl in the Wild” theme. So if you have some expertise using PDL, by all means submit a talk about it. If we can get enough talks, we can put together a one-day track about it.
Likewise if you want to run a workshop to get people bootstrapped on PDL, you can submit a talk for that as well.
I have referred to "the Marpa algorithm" many times. What is that? The implementation involves many details, but the Marpa algorithm itself is basically four ideas. Of these, only the most recent is mine. The other three come from papers spanning over 40 years.
Idea 1: Parse by determining which rules can be applied where
The first idea is to track the progress of a parse by determining, for each token, which rules can be applied, and where. Sounds pretty obvious. Not so obvious is how to do this efficiently. In fact, most parsing these days uses some sort of shortcut. Regexes and LALR (yacc, etc.) require the grammar to take a restricted form, so that they can convert the rules into a state machine. Recursive descent, rather than list the possibilities, dives into them one by one. It, too, only works well with grammars of a certain kind.
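Tracking "which rules can be applied where" is the essence of Earley-style parsing, the family Marpa belongs to. As a rough illustration (this is not Marpa itself, and it has none of Marpa's refinements), here is a minimal Earley recognizer sketch for a toy grammar:

```perl
use strict;
use warnings;

# Toy grammar: S -> S '+' N | N ;  N -> 'n'
my %grammar = (
    S => [ [ 'S', '+', 'N' ], ['N'] ],
    N => [ [ 'n' ] ],
);

# Returns true if @tokens can be derived from $start.
# $chart[$i] holds "items": [ lhs, rhs, dot position, origin set ]
# i.e. rules in progress, and where each application began.
sub earley {
    my ( $start, @tokens ) = @_;
    my @chart = map { [] } 0 .. @tokens;
    my %seen;
    my $add = sub {
        my ( $i, $item ) = @_;
        my $key = join '|', $i, $item->[0], "@{ $item->[1] }", @$item[ 2, 3 ];
        push @{ $chart[$i] }, $item unless $seen{$key}++;
    };
    $add->( 0, [ $start, $_, 0, 0 ] ) for @{ $grammar{$start} };
    for my $i ( 0 .. @tokens ) {
        # the item list can grow while we walk it, so use an index
        for ( my $j = 0 ; $j < @{ $chart[$i] } ; $j++ ) {
            my ( $lhs, $rhs, $dot, $origin ) = @{ $chart[$i][$j] };
            my $next = $rhs->[$dot];
            if ( !defined $next ) {    # completion: $lhs fully matched
                for my $parent ( @{ $chart[$origin] } ) {
                    my ( $pl, $pr, $pd, $po ) = @$parent;
                    $add->( $i, [ $pl, $pr, $pd + 1, $po ] )
                        if defined $pr->[$pd] && $pr->[$pd] eq $lhs;
                }
            }
            elsif ( exists $grammar{$next} ) {    # prediction
                $add->( $i, [ $next, $_, 0, $i ] ) for @{ $grammar{$next} };
            }
            elsif ( $i < @tokens && $tokens[$i] eq $next ) {    # scanning
                $add->( $i + 1, [ $lhs, $rhs, $dot + 1, $origin ] );
            }
        }
    }
    # accepted if a completed start rule spans the whole input
    return grep {
        $_->[0] eq $start && !defined $_->[1][ $_->[2] ] && $_->[3] == 0
    } @{ $chart[-1] };
}

print earley( 'S', qw(n + n) ) ? "parsed\n" : "no parse\n";
```

Each chart entry answers exactly the question above: at token position $i, rule such-and-such is partially applied, starting from position such-and-such. Doing this naively can be slow; making it fast is where the other ideas come in.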
brian d foy was here in Houston for two days and I got a lot of good tiny ideas:
1. implement last out of grep/map (disabled because broken with 5.6)
2. step out 2 scopes in dopoptoloop (2 days): grep, grep_item, block
$ p -MO=Concise,-exec -e'grep{last if $_ == 2} 1..3'
1 <0> enter
2 <;> nextstate(main 2 -e:1) v:{
3 <0> pushmark s
4 <$> const(AV ) s
5 <1> rv2av lKPM/1
6 <@> grepstart K*
7 <|> grepwhile(other->8)[t3] vK
8 <0> enter s
9 <;> nextstate(main 1 -e:1) v:{
a <$> gvsv(*_) s
b <$> const(IV 7) s
c <2> eq sK/2
d <|> and(other->e) sK/1
e <0> last s*
f <@> leave sKP
goto 7
g <@> leave[1 ref] vKP/REFC
As you might know, TPF awards grants to some people, for some tasks. There are some huge grants, like Nicholas Clark's grant or Dave Mitchell's grant on Perl 5. Unfortunately, not all of us have the time and/or the knowledge to help with such low-level tasks. Nevertheless, TPF has a grants committee that awards small grants (ranging from $500 to $2000) for smaller tasks. Some examples include writing documentation or tests for a relevant module, implementing some web service, developing a specific module, etc.
Note that if you have a project in mind and you think it is worth more than $2000, you can propose parts of it. Give a big picture of what you would like to do, and define a sub-task. It is important that the sub-task be useful by itself, of course. But once you complete it, if you show quality in your work and meet deadlines, it is very probable that you will get a second grant to complete your work.
At the moment TPF has extended its deadline for grant proposals: you can submit them until the end of November. You can read the complete call for grants on The Perl Foundation blog.
I’m quite pleased to announce that Shadowcat Systems has decided to sponsor YAPC::NA 2012!
Shadowcat Systems is a developer of, sponsor of, and contributor to open source software projects including Catalyst, Moose, Moo, Tak, Devel::Declare and DBIx::Class. Shadowcat provides consultancy, training and support for these projects and for most of CPAN; systems management and automation; the design and implementation of network architecture; the development of proprietary and open source custom web applications; and offers Perl refactoring and project crisis management.
Shadowcat Systems is based in the United Kingdom but delivers solutions to a global community of clients via onsite supervision along with traditional and internet-based communications.
Your app is in a tarball, and your clients only have FTP access for installing it on their hosts. Furthermore, you need to customize the config file (or do other processing) for each install. What you need is Net::xFTP and Archive::Tar. Net::xFTP's put allows you to pass an open filehandle typeglob as the local file. So you can open a filehandle on a string reference and use that as the local file. This is a simplified version.
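A minimal sketch of the idea follows. The {{NAME}} template convention, the deploy() interface, and the remote file names are all made up for illustration (check the Net::xFTP docs for the exact constructor arguments); Net::xFTP is loaded lazily so the template helper works on its own:

```perl
use strict;
use warnings;

# Fill a config template for one client. The {{NAME}} placeholder
# convention is just an example, not part of Net::xFTP.
sub customize_config {
    my ( $template, %vars ) = @_;
    ( my $config = $template ) =~ s/\{\{(\w+)\}\}/$vars{$1}/g;
    return $config;
}

sub deploy {
    my (%arg) = @_;
    require Net::xFTP;    # not a core module, so load it lazily
    my $ftp = Net::xFTP->new( 'FTP', $arg{host},
        user => $arg{user}, password => $arg{password} )
        or die "cannot connect to $arg{host}";
    # upload the application tarball unchanged
    $ftp->put( $arg{tarball}, 'app.tar.gz' );
    # build this client's config in memory, then upload it from a
    # filehandle opened on a string reference -- no temp file needed
    my $config = customize_config( $arg{template}, %{ $arg{vars} } );
    open my $fh, '<', \$config or die $!;
    $ftp->put( $fh, 'app.conf' );
    close $fh;
    $ftp->quit;
}
```

A call like deploy(host => 'client1.example.com', user => ..., password => ..., tarball => 'myapp.tar.gz', template => $tmpl, vars => { HOST => 'client1.example.com' }) would then push both files in one pass.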
As we all know, $#boo returns the last index of array @boo.
It is clear why we have the prefixes '$' and '@' ('$' is like the first
letter of the word 'scalar' and '@' is like the first letter of the word
'array').
But it is unclear why there is a '#' after the dollar sign. I've checked out
Perl v1.0, and in its man page there is this text:
> you may find the length of array @days by evaluating "$#days", as in csh.
> [Actually, it's not the length of the array, it's the subscript of the last
> element, since there is (ordinarily) a 0th element.]
So the answer to why the last index is written $#boo lies somewhere in csh.
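To make the quoted distinction concrete (the @days data here is mine):

```perl
use strict;
use warnings;

my @days = ( 'Mon', 'Tue', 'Wed', 'Thu', 'Fri' );

print $#days, "\n";           # 4 -- subscript of the last element
print scalar(@days), "\n";    # 5 -- number of elements
print $days[$#days], "\n";    # Fri -- the last element
```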
Thiago Rondon will give a talk at YAPC::NA 2012 described as:
Open data is the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control.
In Brazil, we worked last year on several open data solutions, so that government can be transparent and communicate well with society.
The result of this work is three websites that use Catalyst, DBIx::Class and a lot of CPAN modules.
I think I want to use this account from now on to write more essay-like pieces. Many of my posts here were just short reports of the existence of slides, talks, articles and other things. Maybe I will do comparing and reflecting summaries of such things, but for recent information please subscribe to my Twitter feed. Some messages there might be in German. Please don't mind.
I've started using 'cpanm -n Module' to install Perl modules. The '-n' tells cpanminus to skip testing and just install the module.
"What, are you insane?"
Nope. I have just found that for most Perl modules, it is more time-efficient to skip testing on the initial install and sort out any problems later, especially with a setup you know works.
However, if I were installing a new application for the first time, I would probably not skip the tests.
I have been excited about OO programming in Perl thanks to MooseX::Declare but I have never especially liked its performance hit and its cryptic warnings. It turns out that much of this problem is due to MooseX::Method::Signatures, which is used under the hood.
Many moons ago, I was curious about Moose and MooseX::Declare and I posted a question on StackOverflow. Venerable Perl guy Schwern then commented that Method::Signatures was better than MooseX::Method::Signatures, and that there was a mod in the works to use it with MX::D.
Steven Lembark will give a talk at YAPC::NA 2012 described as:
Graham Barr’s Scalar::Util and List::Util are proof that long-lived does not have to mean obsolete. The modules provide simple, clean, fast interfaces for managing and querying references, objects, and lists, and have saved us from countless re-invented wheels.
This updated talk includes using the Utils with 5.10+ features such as smart matching.
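For readers who haven't met these modules, a quick taste of a few List::Util exports (the sample data is mine):

```perl
use strict;
use warnings;
use List::Util qw(first sum max);

my @nums = ( 3, 7, 2, 9 );

print first { $_ > 5 } @nums;    # 7 -- first element the block accepts
print "\n";
print sum(@nums), "\n";          # 21
print max(@nums), "\n";          # 9
```

Each of these replaces a hand-rolled loop, which is exactly the "re-invented wheels" point of the talk.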
... or Why A Good Perl Developer Is Not Automatically A Good C Developer, the
Story of C Programming via Google.
My tests failed, but only sometimes. I was building an XS module to interface
with a C wrapper around a C++ library (wrapper unnecessary? probably). make
test was failing with exit code 11. Some quick searching revealed that I had
an intermittent segfault. Calling a function as_xml would fail with a SEGV
in strlen(). This only happened in perl after as_xml returned, when perl was
making an SV out of the return value. It also mainly happened during make test:
running prove myself would succeed 19 times out of 20, whereas make test would
fail 19 times out of 20. Worse, my C test program would never fail at all.
A dialecting language is a super-set of a multi-paradigm language.
Lisp is another dialecting language.
Unlike Lisp, where everything looks like an s-exp, a Perl dialect will look ridiculously different from Perl.
Officially, I think we can recognize the following dialects of Perl:
Perl[1-4] : the Perl that everyone hates
CPAN Perl : a dialect of Perl, which emerged with maintainability, readability and reusability as the core concerns. (The changes introduced in Perl 5 started the dialecting trend in Perl)
Perl5.[10-14+] aka Modern Perl : another dialect of Perl, now recommended for both script writers and CPAN users. Modern Perl is that which adheres to the Grammar Nazi.
Any code that passes through Perl::Critic and Perl::Tidy is a safe bet for robustness and clarity, which is an improvement over the earlier dialects.
Modern Perl might be an established term already. As a radical modernist (as opposed to a radical post-modernist) I still like to use my own vision of a modern Perl. Not directly opposing chromatic's Modern Perl. Just a real modern Perl, as an outsider would consider it. Which just happens to be the Qore feature set. Everything I would have done to make Perl modern is what David Nichols already did in Qore. Plus channels for IPC.
Qore is basically a modern Perl: a rewrite with a modern vision. One can take a Perl script and optionally enhance it with types and background, and get a Qore speed-up and compile-time safety, but obviously a compatibility problem.
We’d like to do a BioPerl track at YAPC::NA 2012 this year. We’re looking for contributors willing to give a 20- or 50-minute talk about some aspect of the BioPerl ecosystem. The things people do in BioPerl line up very nicely with our “Perl in the Wild” theme.
So if you’re a BioPerl person, please submit a talk. Hopefully we can get enough talks on the subject to set up at least a day-long BioPerl track.
This is one of those blog posts where I (ab)use blogs.perl.org to document some notes, primarily for myself ...
Server A : is the old "Master"
Server B : is the new "Master" (a rebuild of A) and for a transition period is replicating from A until it is promoted to the main Master and A can be switched off.
Server C : currently replicates from A, however we want to instead move it to replicate from B in advance of replacing A with B.
This is in part because of cost, and also because the clients involved are, in the long term, including the functionality in their SAP "solution".
I could rant for several megabytes about how this is a silly decision, but it was made so far above my head that I can't even see it from here. Seems like that's always the case when SAP is involved.
So, after spending roughly €20.000 on something, it's being scrapped, and some SAP consultant will implement it (sort of, anyway) their way, and bill €1.000.000. Yep, makes perfect business sense for all involved.
Aaanyway, never mind this blog. It's not relevant.