I failed to pause before blogging
I got this email from PAUSE just now:
Failed: PAUSE indexer report BKB/Go-Tokenize-0.01.tar.gz module : switch
It looks like it doesn't like this line of code containing Go keywords:
chan else goto package switch
The PAUSE code that discovers which packages your distribution declares is very simple, because it doesn't want to run your code. It does a static search for "package IDENTIFIER".
It won't help in your case, but that's why you sometimes see this trick:
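The trick, roughly, is to split the statement across lines so that a line-based scan for "package NAME" never sees the name on a single line (the module name here is made up for illustration):

    package    # hide from PAUSE
        My::Private::Helper;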
I think it will help for the next version.
It seems to me that PAUSE could check for a semicolon, but there are probably exceptions to that.
This line:
https://metacpan.org/release/Go-Tokenize/source/lib/Go/Tokenize.pm#L43
also produced an unusual error message; see the comment above it if you're interested.
Indeed there are.
Which of the package statement's forms wouldn't be followed by a semicolon?
I think package NAMESPACE and package NAMESPACE VERSION are both always followed by one.
BLOCK just means {something}, doesn't it? So the match would be package\s+.*?\s*(;|\{).
There is no reason the BLOCK has to be on the same line either.
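As a rough sketch (the sample lines are invented and this is not PAUSE's actual code), that pattern matches the usual one-line package forms but not the Go keyword line:

    use strict;
    use warnings;

    # Try the suggested pattern against the usual one-line package
    # statement forms, plus the line PAUSE tripped over.
    my $re = qr/package\s+.*?\s*(;|\{)/;
    my @lines = (
        'package Foo::Bar;',              # NAMESPACE
        'package Foo::Bar 1.23;',         # NAMESPACE VERSION
        'package Foo::Bar { }',           # NAMESPACE BLOCK
        'package Foo::Bar 1.23 { }',      # NAMESPACE VERSION BLOCK
        'chan else goto package switch',  # Go keywords, no ; or {
    );
    for my $line (@lines) {
        printf "%-32s %s\n", $line, $line =~ $re ? 'matches' : 'no match';
    }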
Anyway, making sure nothing follows 'package' on that line is an appropriate solution in this case, probably along with a comment so the problem doesn't accidentally recur.
https://gist.github.com/benkasminbullock/a55283a3b74aac94b8eedbf45e892260
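For what it's worth, a minimal sketch of that kind of fix might look like this (the variable name and layout are guesses, not the module's actual code):

    # Keep "package" on a line of its own, with nothing after it, so a
    # line-based scan never sees "package IDENTIFIER" here.
    my @go_keywords = qw(
        break case chan const continue default defer else fallthrough
        for func go goto if import interface map
        package
        range return select struct switch type var
    );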
I think the indexer probably goes through the file line by line to avoid slurping the whole thing into memory.
There are some very long modules on CPAN, like this one I saw yesterday:
https://metacpan.org/release/Lingua-EN-Opinion/source/lib/Lingua/EN/Opinion
It has a dictionary of English words, and each word has its own hash reference, with ten different keys.
It might be better not to slurp them all in at once. Perhaps the PAUSE indexer could do something like this:

    # Pick a strategy based on the file's size.
    if (-s $pm > $max) {
        # Big file: read and scan it one line at a time.
        line_by_line_processing($pm);
    }
    else {
        # Small file: slurp it and process it in one go.
        all_at_once_processing($pm);
    }