As many of you know, the Moose module has a metaobject protocol, but it's not something that many casual hackers use. Truth be told, I don't use it a lot either, but when I do, it saves me a lot of hassle. I've been writing an extremely complicated data importer, and at the end I needed not so much a summary of the data as a summary of what the importer did. That's when Moose metadata made my life so much easier and my code more maintainable.
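For readers who have not poked at the MOP before, here is a minimal, hypothetical sketch (Importer::Stats and its attributes are made up for illustration, not the actual importer) of the kind of introspection this relies on: every Moose class carries a metaclass you can query at runtime, so a report can simply walk the attributes instead of hard-coding them.

    package Importer::Stats;
    use Moose;

    has rows_read    => ( is => 'rw', isa => 'Int', default => 0 );
    has rows_skipped => ( is => 'rw', isa => 'Int', default => 0 );

    package main;

    my $stats = Importer::Stats->new( rows_read => 42, rows_skipped => 3 );

    # Walk the metaclass instead of hard-coding every field in the report:
    for my $attr ( $stats->meta->get_all_attributes ) {
        printf "%-12s %s\n", $attr->name, $attr->get_value($stats);
    }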
GitPrep is a GitHub clone: a portable GitHub-like system that you can install on your own Unix/Linux machine.
We all know gitweb.cgi, but it is a little difficult to use, and the GitHub interface has become the de facto standard.
Users are happy to get a GitHub-style interface when they create private repositories on a server inside their company.
Perl and Mojolicious are great. GitPrep can run as a CGI script, under its prefork built-in server, or as a PSGI application. That means you can even upload GitPrep to a shared hosting server and run it as a CGI script.
GitLab is difficult to install, but GitPrep is easy to install; the only requirement is Perl 5.8.7.
Mojolicious itself needs Perl 5.10.1, but the mojo-legacy project needs only Perl 5.8.7.
The author of mojo-legacy is jamadam, a Japanese programmer.
GitPrep is a useful way to browse Git repositories on your own server. Enjoy!
One of the pain points of MetaCPAN is that URLs don't always point where you would expect them to. For example, should a script be found under metacpan.org/module/*? Does that make sense? What happens when someone releases a module with the name of the script? Please note that we're talking about the URLs on the search front end, not the API.
We've struggled for a long while with questions like this. How do we structure URLs on metacpan.org which have a sensible hierarchy? URLs (like version numbers) should be boring -- we don't want them to offer any surprises.
The bulk of the discussion on this topic can be found in this issue. However, if you want to weigh in on this without wading through the entire discussion, hop down to this comment and start from there.
Or should I say lack thereof. This period witnessed some rather major change in my personal life. I got married and moved to an apartment. So between the preparation for the wedding, the honeymoon, and the moving, there was little time and attention for long sessions of hacking.
What I did manage to do was release minor updates to some of my CPAN modules:
One of the most common sources of confusion for new Perl programmers is the difference between arrays and lists. Although they sometimes look similar, they are very different things, and many bugs and misunderstandings stem from not fully appreciating those differences. Even experienced Perl programmers sometimes conflate the two, but arrays and lists differ in several important ways.
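As a quick, hedged illustration (my toy example, not the author's), one of those differences shows up as soon as you look at scalar context:

    use strict;
    use warnings;

    my @array = (4, 5, 6);

    my $count = @array;        # an array in scalar context returns its size: 3
    my $last  = (4, 5, 6);     # a "list" in scalar context is really the comma
                               # operator, which evaluates to the last item: 6
                               # (perl even warns about the discarded 4 and 5)
    my $third = (4, 5, 6)[-1]; # a list can be sliced, but it has no size and
                               # no push/pop -- it exists only for this statement

    print "$count $last $third\n";   # prints "3 6 6"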
So the other day in #p5-mop we were discussing how to handle meta layer extensions. For example, doing things like adding accessor generation support to attributes.
After a while I managed to convince our admin at $work to prepare a virtual machine for running a Pinto server inside our company network. The primary goals were to have a well-known set of CPAN distributions available for installing developer machines, running CI tests, and provisioning all kinds of servers.
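For readers unfamiliar with Pinto, the setup amounts to roughly the following sketch; the paths, host name, and port here are illustrative placeholders, not the actual configuration at $work:

    # Create the repository and pull the distributions we depend on
    pinto --root /srv/pinto init
    pinto --root /srv/pinto pull Plack Moose DBIx::Class

    # Serve it inside the company network (pintod is a PSGI application)
    pintod --root /srv/pinto --port 3111

    # Developer machines, CI boxes and provisioned servers then install
    # exclusively from that well-known set:
    cpanm --mirror http://pinto.example.com:3111 --mirror-only Plack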
As authors on this site, you no longer need to be diligent about breaking your posts out among the misleadingly-named “body” and “extended” tabs of the new-entry screen. From now on, the front page will automatically truncate posts at a certain length, whether or not you thought to designate a section to place above the jump.
As readers, some of you have complained of unwittingly uncooperative authors in the past. You can now rest easy – this irritation is forever banished to history, and the front page will henceforth always be easily scannable.
Either way, you can now relax and enjoy your stay a little better.
PS.: the logic places the threshold at 225 words, but the exact cut-off point depends on your markup.
Everyone knows all that command-line stuff is for weirdo geeks, right? ;-) So let's bring Data::Dumper kicking and screaming into the 21st century and give it a pretty GUI!
Introducing Data::Dumper::GUI: a GUI for Data::Dumper. It allows you to view your data structures as a tree with collapsible nodes. Data::Dumper::GUI is built using Prima (a rather nice GUI toolkit designed specifically for Perl, which supports Win32 and X11, has no dependencies, and compiles pretty quickly... for a GUI toolkit) and Moo.
I spent much of the last few days working on some syntactic improvements to the p5-mop. I had originally gone with a fairly straightforward method for specifying things like inheritance, role composition, etc. It basically looks like this:
class Foo (extends => 'Bar', with => ['Baz', 'Gorch']) {}
This works well because it is basically just using a simple Perl list to pass information to the underlying meta-objects. My only issue with this is that everything is a string, which is no different from what we have now using base or @ISA, but I wanted to see if I could improve this a little.
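For comparison, here is a rough, hedged sketch of how those same relationships get expressed today, with everything still just strings (Role::Tiny stands in for the role side purely for illustration, and Bar, Baz and Gorch are stand-ins defined inline just so this runs):

    package Bar;   sub new { bless {}, shift }
    package Baz;   use Role::Tiny; sub baz_method   { 'baz' }
    package Gorch; use Role::Tiny; sub gorch_method { 'gorch' }

    package Foo;
    use parent -norequire, 'Bar';    # inheritance via a plain string (i.e. @ISA)
    use Role::Tiny::With;
    with 'Baz', 'Gorch';             # role composition, also via plain strings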
After wrestling with Devel::Declare for a while (yes, I know it is evil, but it is just for the prototype, the real version will not use it), I was able to get the following syntax to work.
Lately, while there have been discussions of various new (for varying definitions of 'new') companies that have chosen Perl, there has not been much discussion of how they use it. I contacted JT Smith of Plain Black Corporation and he graciously agreed to talk about The Lacuna Expanse, a MMORPG with a Perl backend.
There was no Plack server for Windows that could handle more than one request at a time, because the existing implementations use preforking, which is completely broken on that platform. But wait: Perl already has threads, which work much better on Windows.
So I wrote Thrall: a multithreaded Plack server for Windows. To be precise, it is a hacked Starlet server with the preforking removed, so it now works correctly with Perl on Windows.
I've noticed that threads are really stable and useful, and the threads API is very nice. Only one thing breaks everything: threads::shared. It is horribly unstable and broken on Windows, so I avoided it, since a simple server should not need it. Spawning new threads is very slow, though, so it is best to use a large value for the --max-reqs-per-child option.
Interestingly, Thrall can be converted automagically into a preforking server when it is started with the -Mforks option.
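As a hedged usage sketch (app.psgi and the request count are placeholders, not recommendations from this post):

    # Run a PSGI app under Thrall; reuse each thread for many requests,
    # since spawning new threads is slow:
    plackup -s Thrall --max-reqs-per-child 1000 app.psgi

    # Load the forks module first and Thrall turns into a preforking server:
    plackup -s Thrall -Mforks app.psgi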
Kegler apparently has a script to auto-spam this site with content from his blog, explicitly ignores replies here, and flouts the policy on front-page posts. Is there a chance we can automatically reject his abuse?
I announced that I want p2 to parse 50% of the perl5 syntax by the end of this summer.
I think this goal is doable.
In the weeks since YAPC I have spent most of my p2 time integrating libuv as an external "aio" module. It implements asynchronous non-blocking IO and rudimentary cross-platform process support. libuv is essentially the node.js backend library, and MoarVM has also stated its goal of switching from APR to libuv.
I needed libuv before the p5 syntax in order to check the ability to do fast and easy library bindings with lots of callbacks, without an FFI yet. This now gives me a feel for how the FFI should be implemented, and for whether the current post-XS API is good enough or lacks some support from core.
As it turned out, only some class (=type)/OO features needed to be added to the public potion API, the p2 backend. Everything else went smoothly.
I also fixed a few more core bugs along the way.
Fennec is a testing framework built on top of Test::Builder that reduces boilerplate and solves many of the limitations of vanilla Test::More. It addresses issues such as forking during tests, breaking tests into smaller parts, test-group isolation (state leaks), and mocking. With Fennec in your unit tests, testing becomes a much more enjoyable experience.
A while back I needed to create a presentation promoting Fennec. That presentation can be found here and covers most of what makes Fennec so great. It even includes a JavaScript debugger-emulator that shows the order of execution and the parallelization provided by Fennec. Most examples in this presentation are shown in both vanilla Fennec and Fennec::Declare, which uses Devel::Declare to provide nice sugary syntax.
These are the key features of Fennec:

- Concurrency
- Predictability
- Better mocking
- State management
- RSPEC and workflows
- Customizability
- Better OOP support
- No need to manage test counts
- fork() just works in Fennec tests
Fennec also does the work of tying together several popular test modules, reducing your boilerplate code.
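To give a flavour, here is a minimal sketch based on Fennec's documented 'tests' blocks; the exact file-level boilerplate varies between Fennec versions, so check the documentation rather than copying this verbatim:

    use strict;
    use warnings;
    use Fennec;    # loads the usual Test::More tools for you

    # Each 'tests' block is an isolated, potentially parallelized test group
    tests arithmetic => sub {
        is( 2 + 2, 4, 'addition still works' );
    };

    tests listy_things => sub {
        my @items = (1 .. 5);
        is( scalar @items, 5, 'five items' );
    };

    done_testing;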
Over the years I have written a number of implementations of roles: it started with Class::Trait back in 2004, followed by a couple of attempts in the Pugs project, then Moose::Role, and most recently the old p5-mop project - and I never really liked how any of them turned out. The implementations always felt overly complex and never seemed to gel in my head completely. Since I am really striving to keep things simple with this version of the MOP, I decided to avoid the crazy bootstrapping gymnastics and start with the simplest hack possible (then clean it up in the bootstrap).
Abstract Syntax Forests (ASFs) are my most recent project. I am adding ASFs to my Marpa parser.
Marpa has long supported ambiguous parsing, and allowed users to iterate through, and examine, all the parses of an ambiguous parse. This was enough for most applications. Even applications which avoid ambiguity benefit from better ways to detect and locate it. And there are applications that require the ability to select among and manipulate very large sets of ambiguous parses. Prominent among these is Natural Language Processing (NLP).
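As a hedged sketch of what that iteration looks like with Marpa::R2's SLIF (a toy grammar of my own, not from this post): an arithmetic grammar with no precedence makes an input like 1+2*3 ambiguous, and repeated calls to value() walk through its parses one at a time.

    use strict;
    use warnings;
    use Marpa::R2;

    # An arithmetic grammar with deliberately no precedence, so it is ambiguous.
    my $dsl = q{
        :default ::= action => ::array
        E   ::= E '+' E
            |   E '*' E
            |   num
        num ~ [0-9]+
    };

    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );
    my $recce   = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
    $recce->read( \'1+2*3' );

    my $parses = 0;
    while ( defined( my $value_ref = $recce->value() ) ) {
        $parses++;    # each call returns one parse of the ambiguous input
    }
    print "$parses parses\n";    # expect 2: (1+2)*3 and 1+(2*3)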
This post will introduce an experiment. Marpa in fact seems to have some potential for NLP.
Writing an efficient ASF is not a simple matter. The naive implementation is to generate the complete set of fully expanded abstract syntax trees (ASTs). This approach consumes resources that can become exponential in the size of the input. Translation: the naive implementation quickly becomes unusably slow.
Marpa optimizes by aggressively identifying identical subtrees of the ASTs. Especially in highly ambiguous parses, many subtrees are identical, and this optimization is often a big win.