Today is a relatively minor holiday in the US, but I had work off all the same. I found myself experimenting (when I probably should have been working on my YAPC::Brazil talk :-/). Thanks to today’s PerlWeekly (you are a subscriber, right??), I found out about an interesting post by Johnny Moreno which creates a tiny JSON service backend using Perl. Of course it uses CGI.pm to do so, which made me curious what a Mojolicious port would look like.
Edit: Module::Build::CleanInstall has been released!
Following the recent work (chronicled here) by Yanick Champoux and this StackOverflow question, I got it into my mind to try to write a Module::Build subclass which first removes all the files previously installed by a module before installing the new version. In those posts, this is motivated by
File::ShareDir concerns, but this problem is more general than that.
If your new version of a module does not come with some file that a previous version did, installing the new version will not remove that file. Most of the time this is ok, but every now and again you need to know that those files don’t exist. That’s usually when you see warnings in the POD saying, “be sure to remove all previous installations …” or “only install on a fresh copy of Perl”. The author knows that a problem is possible, but the user has to fix it. Sounds bad.
What if you could just switch your build tool from Module::Build (or EU::MM) to
Module::Build::CleanInstall and let that take care of it for you?
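The core mechanic could be sketched like this (a hypothetical helper using only core modules; the real module’s internals may well differ): before installing, read the .packlist left behind by the previous install and unlink every file it records.

```perl
#!/usr/bin/env perl
# Sketch of the idea behind Module::Build::CleanInstall (hypothetical
# implementation, not the module's actual code): remove all files recorded
# in a previous install's .packlist before installing the new version.
use strict;
use warnings;
use ExtUtils::Packlist;

# Given a path to a .packlist file, unlink every file it records and
# return how many were removed. A real build tool would locate the
# .packlist via @INC / the install paths before calling this.
sub remove_previous_install {
    my $packlist_file = shift;
    return 0 unless -f $packlist_file;
    my $packlist = ExtUtils::Packlist->new($packlist_file);
    my $removed  = 0;
    for my $file (keys %$packlist) {
        $removed++ if -f $file && unlink $file;
    }
    return $removed;
}
```

This is exactly the bookkeeping ExtUtils::Install already does to write .packlist files; the subclass would simply consult it one install earlier.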
Normally once I mock up an example, I release it to CPAN and see how it goes, but this time, I thought I might solicit comments first, since perhaps there are real world considerations I haven’t thought of. Please let me know any thoughts on this idea and the proposed implementation in the comments below.
Anyone who has been following my progress on Alien::Base knows that in the past few months I have been struggling to nail down one final problem, namely Mac’s library name problem. The short story is: on Mac, the full install path gets baked into the library during compilation. My problem is that I don’t tell it the correct path during compilation. Why?
I wanted installing an Alien::Base module to feel as much as possible like installing a regular Perl module. This means following the canonical ./Build, ./Build test, ./Build install incantations that we are all used to. Further, I wanted each step to act as normal. To do this, the ./Build step is the one that fetches the library source, configures it, compiles it, and then “installs” it to a directory inside the temporary build directory of the Alien::Base-based installer module (this location was set during configure using the --prefix flag). This is so that the user can invoke tests that use the library before installing it to the proper, permanent (user-space) location during the ./Build install phase. This is supposed to be the path of least astonishment.
That brings us back to Mac: the library needs to know its full install path before it is built, and since the best I can tell it is the temporary “install” directory, I cannot make the finally installed library function correctly. I have tried to compensate by using tools which can change the location inside the compiled binary library, but there are huge drawbacks to this too, which I will not go into here. Suffice it to say, it involves parsing and changing directives inside arbitrary Makefiles. It doesn’t work, at least not well enough.
This has led to stagnation in the Alien::Base project as I have been letting this problem percolate through the old gray matter. Finally, after a few months, I think I might have an idea. It’s not my favorite, and it will involve some rewriting, but I think it stays closest to the original goals.
The new plan is now this. ./Build will now fetch, configure, and build the library; the --prefix will now point to the final installed location. I will still make attempts to have ./Build test work as expected, at least on most platforms, but this is no longer a priority. I can hear you all crying, “but we need testing!” Yes, and I would like to give it to you; however, tests for Alien::Base-based modules essentially come down to testing that the library was properly built (I can still make it run make test or make check) and that it is properly provisioned to Perl. It is this provisioning that is the problem: if I use a temporary/intermediate directory, I can make those tests pass, but that’s not testing the real world. If I don’t use a temporary directory, you can’t test the provisioning before install, but the old way never really tested it either, so at least it’s not lying to you.
./Build install would first invoke the usual Perl installation, then run whatever make install-type command is established for the module, copying the files from the build directory into the designated File::ShareDir-findable location under the Alien::Base-based module’s installed directory tree.
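The copy step of that plan could be sketched like this (a hypothetical helper using only core modules; this is not Alien::Base’s actual code): after the library has been built and “make install”ed into the build directory, mirror that tree into the share directory.

```perl
#!/usr/bin/env perl
# Hypothetical sketch of the ./Build install copy step described above:
# mirror the built library tree into the File::ShareDir-findable location.
# Not Alien::Base's actual implementation.
use strict;
use warnings;
use File::Find ();
use File::Copy ();
use File::Path qw(make_path);
use File::Spec;

# Recursively copy everything under $build_dir into $share_dir,
# preserving relative paths. Returns the number of files copied.
sub copy_build_tree {
    my ($build_dir, $share_dir) = @_;
    my $copied = 0;
    File::Find::find(sub {
        my $rel  = File::Spec->abs2rel($File::Find::name, $build_dir);
        my $dest = File::Spec->catfile($share_dir, $rel);
        if (-d $File::Find::name) {
            make_path($dest);    # recreate the directory structure first
        }
        elsif (-f $File::Find::name) {
            File::Copy::copy($File::Find::name, $dest)
                or die "copy $File::Find::name failed: $!";
            $copied++;
        }
    }, $build_dir);
    return $copied;
}
```

Since --prefix already pointed at the final location during configure, the files land exactly where the library expects to find itself.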
The upside to this procedure is that the Makefile can now properly handle the install locations itself, especially on Mac where this is a larger concern than I originally expected.
That said, thank you all for your patience. I really needed this time to consider this second path. I look forward to any comments you may have!
N.B. The source for
Alien::Base is available through my GitHub page.
While I have been on vacation, I have found a little time to add some polish to Galileo, my recently released CMS. Recent additions include a utility for writing a mostly generic configuration file and administrative popups which explain how to create new pages or add new users. The most important thing I have added though is some more setup-time documentation! Hopefully people will now find it even easier to get a Galileo-based CMS up and running.
In writing the documentation, however, I was faced with a question that I do not know the answer to: do any other Plack-based or otherwise Mojolicious-compatible Perl webservers (plackup, Starman, Twiggy, etc.) support WebSockets? While in principle a Mojolicious app can run under all of these, Galileo depends heavily on WebSockets, especially for content editing.
If any of you know, please comment.
I have more ideas for improvements to Galileo, but they will take a little more work than I want to put in here on vacation. In the meantime it is still very functional, so go take a peek. :-)
I am happy to announce that Galileo CMS is now available from CPAN! This project has been my on-train side-project, but it’s come a long way in a short time. The most exciting thing for me is that it’s entirely installable from CPAN. To try it out, simply do
$ cpanm Galileo
$ galileo setup
$ galileo daemon
Of course, you can also run it using the servers provided by Mojolicious, or using your favorite PSGI-compliant server (as long as it supports WebSockets).
Authorized users edit pages using markdown with a live-preview. All updates to pages, menus and users are sent via websockets. Styling is courtesy of Twitter’s Bootstrap library.
Some things that are still on the list:
- Other than the database, it is not tremendously configurable yet, but I don’t think that should be hard to do. With a few changes it might be decently themable by overloading the styles defined by Bootstrap.
- Images cannot be uploaded yet, but they can be linked to from external sources, or at least that should work :-)
Unfortunately I missed the deadline for Perl Weekly, but I did get it out before going on vacation tomorrow, and I’m excited that I made that goal. So please, try it out; though probably not for anything mission-critical yet.
Let me know what you think!
I rarely rant on this board, and I try not to rant whenever I can stop myself. With that in mind I am going to try to phrase this rant as a question and see if people agree with me or not. Note that for the remainder of this post I am going to be speaking in broad generalities and I know that there will be notable exceptions. Ok here goes.
This started while responding to a post from awncorp, but it really isn’t the same topic. I want to know: is the focus on “user-friendly” or “ease of use” or “one-click” hurting users in the end? Not even from a teaching standpoint, but actually in their day-to-day computing? My assertion is that some things in computing are inherently difficult, that they should be, and that people might do well to embrace that.
For example, my mother was telling me how a neighbor took a desktop computer to a store, and they fixed it by reinstalling the OS and charged her $70. When I told her that that task is rather easy if you take the time to try, my mom became quite mad at the store for ripping off her friend. I told her to rethink this: changing your brakes is easy too, but most people pay an expert to do it, whether for fear of messing with a safety device or just for the hassle.
I don’t want my mom’s friend to be charged $70 to get her OS reinstalled, but I do want her to learn enough about computers to know that she needs to do a little routine maintenance or else. Also she should be glad that they didn’t tell her it was too old and convince her to buy a new one!
Sometimes, in order to make computing easier or more interesting, additional security holes are opened. ActiveX controls still worry me. Who thought that putting a JS engine in a PDF reader was a good idea? Now the rise of cloud computing and social networks has people believing that they are safe throwing their personal data around the web, when keeping that data safe is genuinely difficult, as security breaches often show.
Computers make hard things easier and that’s good. However when they get so easy that you take them for granted, you forget that the task they do IS really hard. Why do so many people use Windows when few people seem to actively like it and most people know that it has security problems? Because it comes pre-installed and they already know how to use it. Is this a good reason to risk viruses, trojans, bot-nets, and all manner of other problems? I don’t think so.
I cannot say that I want computing to be harder, but I do wish people would learn that when you take a complicated system for granted, you are likely to get what’s coming to you. Perhaps what I wish is that people who use their computers, or worse, make computing decisions for others (“this is a Windows shop!”), would take the time to learn the factors involved.
I don’t know what I want from this post. I guess I just had to say that. I am interested in people’s feelings on the matter.
Oh, and to all those people who were irate over the last Twitter outage (nobody who would read this blog, I’m sure): stop reading the news on the latest celebrity divorce and learn something about webapps!
This week has been an exciting week for the small but dedicated group of scientists in the Perl community. This is because this week we saw the roll-out of two science related Perl sites:
- perl4science.github.com - a site for (mostly) one-way communication about Perl and Science
- The Quantified Onion a Google Group for two-way communication about Perl and Science
As gizmo_mathboy has already announced his group, I thought I should make my site official too!
I wish we could say we had a big roll-out plan, but not so. We had discussed these things, decided we liked both ideas, and should keep them both, and somehow, this week, they both went live.
A little bit about the Perl4Science site: it’s hosted on GitHub Pages, mostly because it’s free, but it also fosters that GitHub feel of “let’s share our code” which is a major part of using open source for science. Further, it uses Jekyll as a rendering engine and a cool project called Octopress to manage the Jekyll stuff. The details are on the site, including details on how you can contribute.
For now it contains some links to a few Perl science-related modules, and some links to the science related talks at YAPC::NA 2012. I want to see both lists grow. If you know of good modules or good talks please fork the site repo or mention in the comments here. I also hope to share some useful code snippets, but I don’t have a place for that just yet.
Finally, and I wish I didn’t have to mention this, but we are working on a set of standards for inclusion in both sites. For now, let’s just say: if you are going to contribute, let’s keep it professional and Perl/science related, and the site owners will make the final ruling on what is added. Hopefully we don’t have to use that power often.
So, that bit of legalese aside, please come and enjoy both sites. I hope people learn and people teach others. Let’s make Perl (good Perl) relevant in the science community again!
Edit: MojoCMS has been renamed to
Galileo and released to CPAN. Enjoy!
Over the holiday break, I decided to have a little fun learning some things about the web. I usually get my Perl fix through science, but several upcoming projects might have some web involvement; so I thought I should brush up. The following are some reflections on that experience.
JavaScript still feels strange to me, though: the lack of a use command, and the fact that “page”-globals can be used, still feels odd. I can definitely see the need for jQuery, but that adds even further cognitive dissonance. Anyway, I think most of this is my shortcoming, not the language’s.
HTML5/CSS3, on the other hand, is brilliant. It’s easy to make the markup do what you mean without too many machinations. Of course I pull in some libraries for that too, namely Bootstrap.
Back to the Perl of it though: I must say I have high marks for Mojolicious, for many reasons, but the highest are for WebSockets! Now, I know Mojolicious didn’t invent them, but it makes them easy. Using WebSockets I was able to make the “save page” and “update main nav” actions work without reloading. That is rather cute and feels modern.
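A save handler along those lines can be sketched in a few lines of Mojolicious::Lite (the route name and message fields here are invented for illustration; this is not the CMS’s actual code):

```perl
#!/usr/bin/env perl
# Minimal Mojolicious::Lite WebSocket sketch of a "save page" handler.
# The /save route and the title/body fields are hypothetical examples.
use Mojolicious::Lite;
use Mojo::JSON qw(decode_json encode_json);

websocket '/save' => sub {
    my $c = shift;
    $c->on(message => sub {
        my ($ws, $message) = @_;
        my $data = decode_json($message);
        # ... a real app would persist $data->{title} and $data->{body} here ...
        $ws->send(encode_json({ success => \1, title => $data->{title} }));
    });
};

# start the server only when given a command, e.g. `perl app.pl daemon`
app->start if @ARGV;
```

The browser side just opens a WebSocket to the route and sends the page content as JSON, so the user never sees a page reload.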
The biggest point I want to make (long ramblings aside) is my most recent addition: DBIx::Class. I’m a scientist, not a database admin. I have set up some PHP CMSes and used MySQL just enough to get them started, terrified the entire time. So much so that I started my CMS project with the idea of using DBM::Deep for as long as possible. Soon enough, though, I was nesting hash keys three deep and wishing I had objects; if I hadn’t needed persistence I would have reached for Moose long before.
I investigated KiokuDB, and while I had some hope for it, I think I would need someone sitting in on the setup process with me. Then I remembered: throughout YAPC::NA, along with the list of my favorite Modern Perl modules, everyone else adds DBIx::Class. Okay, they can’t all be wrong. They weren’t. Sure, the syntax is a little different from Moose, but it’s not that hard. The payoff for me started even before running the site, in the deploy command. With a simple script I can create all the necessary tables and inject sample content without ever needing to write SQL! After this, the ORM quickly and easily replaced the DBM::Deep vestiges throughout the code, and just like that I have readable, OO structures for users, pages, and the menu configuration.
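A deploy script of that kind might look something like this (the schema, table, and column names here are invented for illustration, not the CMS’s actual schema; deploy also requires DBD::SQLite and SQL::Translator to be installed):

```perl
#!/usr/bin/env perl
# Hedged sketch of a DBIx::Class deploy script: define a schema, create the
# tables with deploy(), and inject sample content -- no hand-written SQL.
# MyCMS and its "pages" table are hypothetical names.
use strict;
use warnings;

package MyCMS::Schema::Result::Page;
use base 'DBIx::Class::Core';
__PACKAGE__->table('pages');
__PACKAGE__->add_columns(
    id   => { data_type => 'integer', is_auto_increment => 1 },
    name => { data_type => 'text' },
    body => { data_type => 'text' },
);
__PACKAGE__->set_primary_key('id');

package MyCMS::Schema;
use base 'DBIx::Class::Schema';
__PACKAGE__->register_class( Page => 'MyCMS::Schema::Result::Page' );

package main;

# deploy() generates and runs the CREATE TABLE statements for us
my $schema = MyCMS::Schema->connect('dbi:SQLite::memory:');
$schema->deploy;

# inject some sample content through the ORM
$schema->resultset('Page')->create({ name => 'home', body => '# Welcome' });
```

From there the same resultset objects serve the running site, so the data layer reads like ordinary OO Perl.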
Anyway, if anyone wants to play with MojoCMS (or suggest a better name!), feel free. It is still very rough, but I may try to see it forward a little further. Passwords are stored in the clear for now, so be careful! But fixing this is my next task. After that, and some other work on users (like being able to add them through the website!), the thing might even be able to host a small site. Not bad for a one-week side project!
I cannot decide if I was too harsh. I try not to let the usual drone of noobs on SO get to me. My problem was that the OP was being both ignorant AND demanding. Read the post and let me know; I’m back and forth between being enraged and contrite.
I’m having a bit of a conundrum over where to put my next GSL-based module. First some background.
I’m already the author of a GSL-based module (see my first rant), the horribly named Math::GSLx::ODEIV2. This name reflects the same odd namespacing conundrum that I find myself in again, as well as the GSL sub-library name. Duke Leto has already essentially taken the whole Math::GSL namespace by brute-force SWIG-ing the entire library. Much of this work is not fully implemented, but still parked. Further, since the namespace is already fairly crowded, it’s next to impossible to tell which parts are his and which would be anyone else’s. So let’s call that out of the running. Note that I’m not complaining about his efforts, but they do make choosing a name harder.
I released my first module which uses GSL under the namespace Math::GSLx, but this is also less than desirable, since it seems to be related to Math::GSL, which it isn’t (at least not in the way that MooseX is related to Moose). It’s also hard to type and hard to search for.
This leaves me thinking about starting a new toplevel, which I do not undertake lightly. My two concepts are the simple GSL and the more namey PerlGSL. I say namey since toplevels with names like Mojolicious are not contentious: they represent more of a concept than an implementation detail.
To be distinct from Math::GSL, I would encourage authors in this namespace to:
- Make their interfaces Perlish, rather than the utilitarian output that SWIG may produce
- Give their modules names that are descriptive without squatting on the sub-library name or other implementations
In this way, if two different authors want to provide an interface to GSL’s Monte Carlo multidimensional integrator, one might be PerlGSL::Integration::MultiDim (since there are a number of 1D integrators to be considered), while another could pick a different descriptive name under the same toplevel.
Once I settle on a toplevel, I expect that I will “release” a namespace-descriptor module (not unlike Alien) describing this for future users. It might also eventually pull in the GSL library via Alien::GSL, once my Alien::Base work permits. From there I would release PerlGSL::Integration::MultiDim and rechristen my ODE solver as PerlGSL::DiffEq (assuming the PerlGSL toplevel wins out).
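Such a descriptor module might be nothing more than a version number and documentation (a hypothetical sketch; the eventual contents and POD would of course differ):

```perl
# Hypothetical sketch of a namespace-descriptor module: it provides no
# functionality, just documentation of the conventions and a version
# number to claim the name on CPAN.
package PerlGSL;

use strict;
use warnings;

our $VERSION = '0.001';

=head1 NAME

PerlGSL - A namespace for Perlish interfaces to the GNU Scientific Library

=head1 DESCRIPTION

This distribution documents the naming and interface conventions for
modules in the PerlGSL:: namespace; it contains no code of its own.

=cut

1;
```

Alien itself is distributed in much this spirit: a name-claiming module whose real content is its documentation.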
Anyway, I’m interested in your comments, so please let me know!