I think it is about time I got down to brass tacks and wrote some code for this project of mine, or at least looked at it.
Well, they say a picture is worth a thousand words, and this little diagram (from a production instance; I took off all the names to protect the innocent) will give you an idea of where I have used DA and how I see others using it.
So it sits out in left field as a separate class set that connects to Oracle. My app's abstract classes then instantiate these classes to do the actual work on the DB, which is called by the Controller and API classes of the Mojo App.
Now I know what some will say:
Why not just use ORM or Fey or some other ORM and skip both of these layers?
Dist::Zilla is an extremely powerful and versatile CPAN authoring tool, which has enabled me to reliably publish many distributions with minimal fuss. It has the ability to automate your entire distribution building, testing, and releasing process, customized to almost any workflow, but this ability does not come without cost. One of the biggest difficulties newcomers face is sorting out the overwhelming number of available plugins and how to use them together effectively. A good way to start is with the tutorials at dzil.org and the standard [@Basic] plugin bundle. Unfortunately, the [@Basic] bundle is out of date; notably, it does not include the [MetaJSON] plugin, to generate META.json, which is now the preferred metadata format for CPAN distributions. For backwards compatibility reasons the bundle itself can't easily be updated with new plugins.
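A minimal dist.ini along these lines (the name, version, and author values below are placeholders) keeps the convenience of [@Basic] while adding the missing [MetaJSON] plugin yourself:

```ini
; dist.ini — a sketch: [@Basic] plus the [MetaJSON] plugin it lacks
name             = My-Module                       ; placeholder
version          = 0.001
author           = A. U. Thor <author@example.com>
license          = Perl_5
copyright_holder = A. U. Thor

[@Basic]
[MetaJSON]   ; generates META.json alongside the META.yml from [@Basic]
```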
nqp-js-on-js (NQP compiled to JavaScript and running on node.js) passes its test suite (almost; there is a bug with how regexes compiled at runtime capture stuff, which I haven't yet figured out).
While nqp-js-on-js compiles parts of rakudo (with a minor bug fix), it turns out that for some reason it's unacceptably slow on some of the larger files (like Perl6::World).
As such, I have turned my attention to figuring out what the problem is and speeding nqp-js-on-js up.
Hopefully the next blog posts will be more detailed and contain the description of some nifty optimizations.
Working on a large internationalization (I18N) project for one of our clients, I found myself in a curious position where I needed to build an I18N object from users, companies, and web sites. It's tricky because there are multiple ways the object can be instantiated:
my $i18n = Our::I18N->new( domain => $domain );
my $i18n = Our::I18N->new( user => $user );
my $i18n = Our::I18N->new( company => $company );
my $i18n = Our::I18N->new( request => $c->request ); # at the Catalyst layer
Anything consuming the I18N object should be able to do things such as determine country and language, but should not be able to see the user, company, or request, because they should not be tightly coupled. There are tricks I could do with BUILDARGS to make the above work, but frankly, that's a pain and often a nasty hack. That's when bare Moose attributes and meta hacking come in handy.
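As a rough sketch of that direction (the package internals and accessor names here are hypothetical, not the client's actual code), bare attributes hold whichever source object was supplied without generating accessors for it, and lazy attributes derive country and language via the meta layer:

```perl
package Our::I18N;   # hypothetical sketch, not the real implementation
use Moose;

# 'bare' attributes accept init args but expose no accessors,
# so consumers cannot reach back to the user/company/request.
has [qw(user company domain request)] => ( is => 'bare' );

has country  => ( is => 'ro', lazy => 1, builder => '_build_country' );
has language => ( is => 'ro', lazy => 1, builder => '_build_language' );

# Meta hacking: peek into the bare slots to find whichever
# source object was actually passed to the constructor.
sub _source {
    my $self = shift;
    for my $name (qw(user company domain request)) {
        my $val = $self->meta->get_attribute($name)->get_value($self);
        return $val if defined $val;
    }
    die "no source object supplied to " . __PACKAGE__;
}

# Assumes each source object can answer country() and language().
sub _build_country  { shift->_source->country }
sub _build_language { shift->_source->language }

__PACKAGE__->meta->make_immutable;
```

Consumers then only ever see `$i18n->country` and `$i18n->language`, regardless of which constructor form was used.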
So I guess the best thing for me to do now is answer good old question 4 from my last post.
Is this just going to pollute the name-space?
So, as I said in my last post, DBIx::DA was the name-space I picked some 10 (or is it 11) years ago for my take on the DataAccessor package, and I would agree it does pollute the name-space because, as I said, it adds nothing to DBI except some wrapping and SQL generation.
So where to put it?
Well, I was thinking under Data::, as something like Data::Accessor, but when you look at that name-space a good chunk of the modules in there are for working with data directly, i.e. dumping it, changing it, sanitizing it, etc. So I think something that just gets data may not work there.
I notice several groups of people:
folks who wish Perl 6's performance weren't mentioned;
folks who are confused about Perl 6's performance;
folks who gleefully chuckle at Perl 6's performance,
reassured that the threat to their favourite language XYZ
hasn't arrived yet.
So I'm here to talk about the elephant in the room
and get the first group out of hiding and more at ease;
I'll explain things to the second group, and to the
third group... well, this post isn't about them.
Why is it slow?
The simplest answer: Perl 6 is brand new. It's not
the next Perl, but a brand new language in the Perl family. The
language spec was finished less than 4
months ago (Dec 25, 2015). While some optimization
has been done, the core team focused on getting
things right first. It's simply unrealistic to
evaluate Perl 6's performance as that of an extremely
polished product at this time.
Sometimes, for some reason, processes on your server run unexpectedly long or don't die on time; this can cause many issues, basically because your server's resources start to become exhausted.
Stale-proc-check is a sparrow plugin that shows you whether any "stale" processes exist on your server. It depends on the ps utility,
so it will probably work on many linux/unix boxes ...
Below is a short manual.
INSTALL
$ sparrow plg install stale-proc-check
USAGE
Once the plugin is installed, you need to define a configuration for it by using a sparrow checkpoint container, which is just an abstraction for a configurable sparrow plugin.
You need to provide 2 parameters:
filter - a Perl regular expression to match a desired process
history - a time period used to determine whether the processes found are fresh enough
In other words, if any matching processes older than the $history parameter are found, it will be treated as a bad situation and the check will fail.
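Under the hood the idea is simple. Here is a rough Perl sketch of the same kind of check (the filter pattern is a made-up process name, and `ps -eo etimes` is assumed to be available, as it is on procps-based Linux):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $filter  = qr/some-stale-daemon/;   # hypothetical process-name pattern
my $history = 3600;                    # seconds: anything older is "stale"

# etimes = elapsed seconds since the process started; args = full command line
my @stale;
for my $line ( split /\n/, `ps -eo etimes=,args=` ) {
    my ($etimes, $args) = $line =~ /^\s*(\d+)\s+(.*)/ or next;
    push @stale, $args if $etimes > $history && $args =~ $filter;
}

print @stale ? "FAIL: stale processes found\n" : "OK\n";
```

The real plugin does more (reporting, sparrow integration), but the filter/history pair maps directly onto this logic.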
Now that I am getting further from my Java story and closer to a Perl story, which is good methinks, I am going to pause and ask myself a few questions before I jump into rebuilding my DataAccessor, aka DBIx::DA.
Isn't this just re-factoring/re-design for re-factoring/re-design sake?
What is the end use of this new module?
Hasn't this already been done to death?
Isn't this just going to pollute the name-space?
Is there even a business case for this code?
Tough questions for a one-page blog, but here we go with the 25c answers.
Isn't this just re-factoring/re-design for re-factoring/re-design sake?
No, it is not. Here is one good, code-based reason why I am taking this on.
use base qw(DBIx::DA Exporter);
When the first version of the code was written, 'use base' did only two little things;
Once again, courtesy of Oslo.pm, I’ll be returning to Oslo for a week of all things Perl.
On Monday April 18 and Tuesday April 19, I’m running two public classes on Perl 5 topics:
my advanced Perl programming class, and my Perl API design class. Bookings have just opened, so there are plenty of seats still available for either.
Then, on Wednesday April 20, I’ll be giving a free public talk at the Scotsman, at 6pm. I’ll be talking about how easy it is to extend Perl 5 with some amazingly useful extra language constructs—mostly stolen from Perl 6—using an evil new module (or possibly “a new evil module”) I recently wrote. The Scotsman is a great venue for these talks: you can lounge in comfort with a large band of fellow geeks, eat pub food, and quaff beer, while a crazy Australian slowly explodes your brain.
This is the second in a series of blog posts about the Perl toolchain and the collection of tools and modules around it that are central to the CPAN we have today. In the first post we introduced PAUSE and CPAN, and outlined how you release a module onto CPAN, and how someone else might find it and install it. In this article we're going to cover what comes before the release: creating, developing, and testing a module for release to CPAN.
This post is brought to you by ActiveState, who we're happy to announce are a gold sponsor for the QA Hackathon this year. ActiveState are long-term supporters of Perl, and shipped the first Perl distro for Windows, ActivePerl.
In my first college programming course, I was taught that the Pascal language
has Integer, Boolean, and String types among others. I learned the types
were there because computers were stupid. While dabbling in C, I learned more
about what int, char, and other vermin looked like inside the warm,
buzzing metal box under my desk.
Perl 5 didn’t have types, and I felt free as a kid on a bike, rushing through
the wind, going down a slope. No longer did I have to cram my mind into the narrow
slits computer hardware dictated to me. I had data and I could do whatever I
wanted with it, as long as I didn’t get the wrong kind of data. And when I did
get it, I fell off my bike and skinned my knees.
In my last few posts I have been nattering on about DataAccessor, an old Java app that set me on my path to Perl. Looking back some 20 years now, I see that despite it working well and being successful, I think we had on our hands one very large anti-pattern, at least when it came to any other SQL database.
I do remember having a look at the new thingy called MySQL and investigating it to see if we should create a SQLDataAccessor for it. I had an email trail with
David Axmark
(how do you forget a name like that) asking if MySQL did 'hierarchical queries', as we needed them on SIP. His answer was something like this:
'Well no, not yet. But you are welcome to write one. Attached is some of the code I think you will need to start...'
About 15 .c and .h files were attached, and had I some C skills I might have had a different career path.
I'm trying to keep a list of all the features I would like in Dancer::SearchApp.
These features and changes range from large (index and search video subtitle
files and link to the scenes in the movie) to small (implement suggestions).
Usually, I keep the list of these features in a Pod file imaginatively
named todo.pod.
Until now, Mojolicious::Plugin::AutoRoute depended on Mojolicious' internal structure, but Mojolicious::Plugin::AutoRoute has become more stable because the plugin now uses only public features of Mojolicious. The following is the documentation of Mojolicious::Plugin::AutoRoute.
Mojolicious::Plugin::AutoRoute is a Mojolicious plugin to create routes automatically.
If you put templates into the auto directory, the corresponding routes are created automatically.
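As a quick illustration (a minimal Mojolicious::Lite app; the template names are just examples), registering the plugin is all it takes for templates under the auto directory to map onto routes:

```perl
#!/usr/bin/env perl
use Mojolicious::Lite;

plugin 'AutoRoute';

# templates/auto/foo.html.ep      -> GET /foo
# templates/auto/foo/bar.html.ep  -> GET /foo/bar

app->start;
```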
Well, carrying on from my last post, let's get a little idea of what the SQLDataAccessor was supposed to be doing. It was part of a larger system called DataAccessor that was attempting to provide a common CRUD front end to a number of data sources. There was SQLDataAccessor, SIPDataAccessor of course, and NTFSDataAccessor, and I think an OLEDataAccessor as well. At the time, a rather novel idea.
On top of this access layer there was a Java bean layer (abstract classes really), then a Java servlet *.do layer (remember good old Apache Jakarta), and then finally the front end; for that we had a web version in JSP, a Java App for desktop and phone (very early days), and a WAP stack as well.
My part in the team, I guess, would be called bit player, as my job at the time was to create most of the beans and thus an endless stream of init statements à la,
This is the first in a series of blog posts about the Perl toolchain and the collection of tools and modules around it that are central to the CPAN we have today. These posts will illustrate the scope of things worked on at the QA Hackathon. We'll start with the core lifecycle of CPAN modules, focusing on PAUSE and CPAN.
This post is brought to you by FastMail, a gold sponsor for this year's QA Hackathon (QAH). It is only with the support of companies like FastMail that we're able to bring together the lead developers of these tools at the QAH.
One could easily verify whether any minion jobs have failed over a certain period of time, which can be important for understanding whether you have any failures in your long-running tasks executed by minion.