Docker is a popular solution for rapidly spinning up development environments. I have been playing with it and it seems fun. Another fun thing I found is that Sparrow can be a good way to build new Docker images.
Here is a short example of how it can work. A few lines in a Dockerfile and you have a GitPrep server up and running as a Docker container. Whew!
Here is my Dockerfile:
FROM perl
RUN apt-get update
RUN apt-get install -y sudo
RUN apt-get install -y git-core
RUN cpanm Sparrow
RUN sparrow index update
RUN sparrow index summary
RUN sparrow plg install gitprep
RUN sparrow plg run gitprep
CMD sparrow plg run gitprep --param action=start --param start_mode=foreground
It's based on the official perl Docker image, and then all the work is done by the sparrow gitprep plugin!
This is how one can use it:
$ git clone https://github.com/melezhik/docker-projects.git
$ cd docker-projects/gitprep
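From that directory you would build and run the image. The commands below are an assumed continuation of the steps above (the image tag `gitprep` and the port mapping are my assumptions, not from the original post; 10020 is GitPrep's default port):

```shell
# Build the image from the Dockerfile in the current directory.
# The tag "gitprep" is an arbitrary choice.
docker build -t gitprep .

# Run the container in the background, mapping GitPrep's
# default port (10020) to the host.
docker run -d -p 10020:10020 gitprep
```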
If you're interested in using Travis-CI for your Perl projects, here are a few pointers that you should not miss:
Too bad that it took me so long to find them out.
Currently rakudo.js is at the point where:
node rakudo.js --setting=NULL -e 'use nqp; nqp::say "Hello World"' works, but
node rakudo.js -e 'say "Hello World"' doesn't.
The general work-flow for that is:
- Try to compile the setting with rakudo.js.
- While rakudo.js is compiling, some error appears.
- I then figure out whether it's the result of a missing feature or a bug in the js backend.
- I implement the feature and write tests for it or fix the bug.
- I then repeat the process.
Lather, rinse, repeat.
Until the setting compiles, rakudo.js won't be usable by end users.
Even getting something very simple like say "Hello World" requires a fair chunk of the setting to work.
The Rakudo-specific work is done in the js branch of the rakudo repo: https://github.com/rakudo/rakudo/tree/js.
Most of the work on the backend itself is done in the master branch in the nqp repo.
I recently wrote about Veure's test suite and today I'll write a bit about how we manage our database. Sadly, this will be a long post because it's a complicated problem and there's a lot to discuss.
When I first started Veure, I used SQLite to prototype, but it's so incredibly limited that I quickly switched to Postgres. It's been a critically important decision, but I want to take a moment to explain why.
All software effectively has four "phases":

- Initialization
- Input
- Calculation
- Output

Note that we could rewrite the above as:
- Initialization of data
- Input of data
- Calculation of data
- Output of data
Notice a pattern?
Yeah, I thought so. There are all sorts of places where we can get things wrong in software, but the further down the stack(s) you go, the more care you need to take, because bugs there are more damaging. Data storage sits near the bottom of your stack, and you don't want to get it wrong. So what happens?
Following is the p5p (Perl 5 Porters) mailing list summary for the past week. Enjoy!
Read this article on Perl6.Party and play with code examples right in your browser!
Back in the day, I wrote a Perl 5 module, Number::Denominal, that breaks up a number into "units": say, 3661 becomes '1 hour, 1 minute, and 1 second'. I felt it was the pinnacle of achievement and awesome to boot. Later, I ported that module to Perl 6, and recently I found out that Perl 6 has a .polymod method built in, which makes half of my cool module entirely useless.
Today, we'll examine what .polymod does and how to use it. And then I'll talk a bit about my reinvented wheel as well.
The .polymod method takes a number of divisors and breaks up its invocant into pieces:
my $seconds = 1 * 60*60*24 # days
            + 3 * 60*60    # hours
            + 4 * 60       # minutes
            + 5;           # seconds

say $seconds.polymod: 60, 60;     # (5 4 27)
say $seconds.polymod: 60, 60, 24; # (5 4 3 1)
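To make the mechanics concrete, here is a sketch of the same decomposition in plain shell arithmetic (a hypothetical re-implementation for illustration, not how .polymod itself is implemented): each divisor peels off a remainder, and the quotient carries on to the next divisor.

```shell
# 1 day + 3 hours + 4 minutes + 5 seconds, as in the Perl 6 example above
seconds=$(( 1*60*60*24 + 3*60*60 + 4*60 + 5 ))

# Peel off one remainder per divisor; the final quotient is the last unit.
for d in 60 60 24; do
  printf '%d ' $(( seconds % d ))
  seconds=$(( seconds / d ))
done
echo "$seconds"   # prints: 5 4 3 1  (seconds, minutes, hours, days)
```

The final quotient (here, days) is appended as the last element, which is why three divisors produce four values.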
While playing with Docker I created a simple Sparrow plugin to install Docker Engine on Ubuntu Trusty 14.04 (LTS): https://sparrowhub.org/info/docker-engine . Please let me know if you need support for other platforms! ;)
Much of what I do involves retrieving stuff over the HTTP family of protocols.
My go-to solutions are either the APIs of LWP::UserAgent/WWW::Mechanize
or the API of AnyEvent::HTTP, depending on whether I want some kind
of concurrency or not. Since I found Future to be a somewhat nicer way of
structuring callback hell, I've looked around for a nice
implementation of an HTTP client that works with various backends and maybe
even without a backend.