I was extremely pleased with how easy it was to do the above, and wanted to share it!
After creating the account on DotCloud, a quick and painless endeavour, I followed their first-steps tutorial, adapting the various steps in order to deploy a Perl Dancer webapp.
Here is the transcript of the first steps of the tutorial, which did not change for me:
$ sudo aptitude install python-setuptools
[...]
$ sudo easy_install dotcloud
[...]
$ dotcloud
Warning: /home/okram/.dotcloud/dotcloud.conf does not exist.
Enter your api key (You can find it at http://www.dotcloud.com/account/settings): XXXXXXXXXXXXXXXXXXXXXXXXX
error: usage: dotcloud [-h]
{status,info,run,logs,deploy,setup,list,alias,ssh,destroy,push,rollback,create,restart}
...
At this point I just went to the linked URL, and copy/pasted my API key. Nothing strange so far.
I then created a new namespace, “weasel”, and an endpoint “www”. Lastly, I checked it was created:
$ dotcloud deploy -t perl weasel.www
Created "weasel.www".
$ dotcloud info weasel.www
cluster: wolverine
config: {}
created_at: 1304117136.2035949
name: weasel.www
namespace: weasel
state: booting
type: perl
Time for code! I created a new Git repo, and edited some files. The full listing (heh!) will be at the end of the article.
$ mkdir -p GIT/weasel/www
$ cd !$
$ git init
Initialized empty Git repository in /home/okram/GIT/weasel/www/.git/
$ vi -p myapp.pl Makefile.PL app.psgi
3 files to edit
$ git add * ; git commit -am "Initial commit"
[...]
The Makefile.PL contained Dancer as its only prerequisite, as myapp.pl did not make use of any other Perl module.
Then came the time to try deploying the webapp to DotCloud; easy! DotCloud took care of installing the Dancer prerequisite, along with all its dependencies, and then tried to start the app.
$ dotcloud push weasel.www .
# upload . ssh://dotcloud@uploader.dotcloud.com:1060/weasel.www
# git
Warning: Permanently added '[uploader.dotcloud.com]:1060,[174.129.15.77]:1060' (RSA) to the list of known hosts.
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (5/5), 533 bytes, done.
Total 5 (delta 0), reused 0 (delta 0)
To ssh://dotcloud@uploader.dotcloud.com:1060/weasel.www
* [new branch] master -> master
Scheduling build
Fetching logs...
Warning: Permanently added '[www.weasel.dotcloud.com]:3254,[174.129.17.131]:3254' (RSA) to the list of known hosts.
-- Build started...
Makefile.PL
app.psgi
myapp.pl
Fetched code revision e4d8d81
--> Working on .
Configuring /home/dotcloud/e4d8d81 ... OK
==> Found dependencies: Dancer
--> Working on Dancer
Fetching http://search.cpan.org/CPAN/authors/id/X/XS/XSAWYERX/Dancer-1.3030.tar.gz ... OK
Configuring Dancer-1.3030 ... OK
==> Found dependencies: HTTP::Server::Simple::PSGI, HTTP::Body, MIME::Types
--> Working on HTTP::Server::Simple::PSGI
[...]
Successfully installed MIME-Types-1.31
Building Dancer-1.3030 ... OK
Successfully installed Dancer-1.3030
<== Installed dependencies for .. Finishing.
8 distributions installed
uwsgi: stopped
uwsgi: ERROR (abnormal termination)
Connection to www.weasel.dotcloud.com closed.
Oh-oh! Something went wrong! I checked the log files to see what it was:
$ dotcloud logs weasel.www
# tail -F /var/log/{supervisor,nginx}/*.log
[...]
==> /var/log/supervisor/uwsgi.log <==
[...]
Plack::Request is needed by the PSGI handler at /home/dotcloud/perl5/lib/perl5/Dancer.pm line 334
Compilation failed in require at (eval 3) line 1.
[...]
That’s it: it’s clearly missing Plack::Request to be able to handle PSGI requests.
No problem; just add it to the Makefile.PL and deploy again! This will install the new dependency, and hopefully the service will start!
$ vi Makefile.PL
$ git commit -am "Add Plack::Request as prerequisite also"
[master faa342f] Add Plack::Request as prerequisite also
1 files changed, 1 insertions(+), 0 deletions(-)
$ dotcloud push weasel.www .
# upload . ssh://dotcloud@uploader.dotcloud.com:1060/weasel.www
# git
Warning: Permanently added '[uploader.dotcloud.com]:1060,[174.129.15.77]:1060' (RSA) to the list of known hosts.
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 381 bytes, done.
Total 3 (delta 1), reused 0 (delta 0)
To ssh://dotcloud@uploader.dotcloud.com:1060/weasel.www
e4d8d81..faa342f master -> master
Scheduling build
Fetching logs...
Warning: Permanently added '[www.weasel.dotcloud.com]:3254,[174.129.17.131]:3254' (RSA) to the list of known hosts.
-- Build started...
[...]
Fetched code revision faa342f
--> Working on .
Configuring /home/dotcloud/faa342f ... OK
==> Found dependencies: Plack::Request
--> Working on Plack::Request
[...]
Successfully installed Devel-StackTrace-AsHTML-0.11
Building Plack-0.9976 ... OK
Successfully installed Plack-0.9976
<== Installed dependencies for .. Finishing.
12 distributions installed
uwsgi: started
Connection to www.weasel.dotcloud.com closed.
And so it did! Let’s see if my webapp really worked:
$ curl http://www.weasel.dotcloud.com/
Why, hello there!
$ curl http://www.weasel.dotcloud.com/test-123
Hello test123!
Indeed it did!
There you go, an easy way to deploy a Dancer webapp to the cloud!
Here are the sample files I used:
$ cat Makefile.PL
#!/usr/bin/env perl
use ExtUtils::MakeMaker;
WriteMakefile(
PREREQ_PM => {
'Dancer' => '1.3030',
'Plack::Request' => '0.9976',
},
);
$ cat myapp.pl
#!/usr/bin/env perl
use Dancer;
get '/' => sub {
return "Why, hello there!\n";
};
get '/:name' => sub {
my $name = params->{name};
$name =~ s/[^a-z0-9 ]//i;
return "Hello $name!\n";
};
dance;
$ cat app.psgi
require 'myapp.pl';
Really, really easy! Thanks, DotCloud! And thanks to CPAN and the CPAN authors, without whom the above certainly would not have been possible!
Nothing's wrong with the hash lookup at all: I can't believe I had not thought of it at the time.
Thanks for the suggestion of using the defined and undefined sections of a lookup array: that is certainly faster than even the hash version.
Looking at the hash version, I can't help wondering if Ilmari's "exists-or" operator would even make it faster ;)
I've updated the post with the new benchmarks, and updated the code at the link.
Is there any other trickery that could be performed in Perl land to make the above faster?
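For anyone following along, the two suggested variants might look roughly like this; this is my reconstruction, not the exact benchmarked code at the link:

```perl
use strict;
use warnings;
use feature 'say';

my @letters = qw/x r g y b p c w/;

# hash lookup: map each colour letter's ordinal (and its uppercase
# ordinal) straight to its index
my %color_for;
for my $i ( 0 .. $#letters ) {
    $color_for{ ord $letters[$i] }    = $i;
    $color_for{ ord uc $letters[$i] } = $i;
}
sub hash_getcolor { $color_for{ $_[0] } // 255 }

# array lookup: the ordinal indexes directly into a sparse array,
# so undefined slots mean "not a colour letter"
my @color_lookup;
$color_lookup[$_] = $color_for{$_} for keys %color_for;
sub lookup_getcolor { $color_lookup[ $_[0] ] // 255 }

say hash_getcolor( ord 'x' );      # 0
say lookup_getcolor( ord 'W' );    # 7
```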
After some ooohs, aaahs, and mockery... I said I'd follow up and post exactly the code I was talking about. I'd very much like to see if the Perl version could be modified to be faster in any way: that would be a great learning experience!
I have been involved in MUD development for a while, and my Anacronia MUD had quite a few Italian players at the turn of the century. The MUD went through various iterations before closing down, shortly before I moved to Scotland. Since then, I have participated in the development of a couple of other MUDs, one of which is still open to this day, adding things like MCCP1/2 support (zlib compression over telnet) and other low-level things like that.
Some time ago I decided to start writing a MUD engine in Perl, using more modern tools like Moose for the objects, and POE for the event loop. I'm still considering using Async but lack good examples :(
Since content for testing and trying out things is hard to come by, I "reused/borrowed" some of the older Smaug area files, specifically the "help.are" area file. The file is in a format I know quite well, and basically contains a list of human-readable "man pages" for the MUD, littered with some sigils which the MUD then translates into coloured strings.
Handling these tricky little sigils is the object of the Inline::C optimizations.
Leaving aside the formatting sigils which mean "reset the (foreground|background) color", we're left with the sigils which govern the colors.
For example, the following string can get parsed and produce an ANSI coloured string, with the correct fore/background colour changes:
--^p&WA&yn&Wa&yc&Wr&yo&Wn&yi&Wa &CV4&^--
Here's the image of the (unsightly, I agree!) result:
The format of the sigils is simply an ampersand followed by a colour letter for a foreground color change, and a caret followed by the same colour letter for a background colour change. Capital letters on the foreground set the bold status, etc.
The letters are x, r, g, y, b, p, c, w, for black/reset, red, green, yellow, blue, purple, cyan and white.
The ansifier does an unpack("C*", $input_string); and then uses a state machine (with state given as the ansifier's parameters) to analyse each character and either output it verbatim, change its internal state, or output a coloured ANSI sequence. This is written in Perl, and works quite well.
A specific bit of the ansifier is the place where the parser is right after a & or a ^: since the next character is supposed to be a color letter, it needs to identify if that's a valid color, and if it is use the corresponding "index number" to build the correct ANSI string. For example, "&x" with no state should output "\e[30m", "&r" with no state should output "\e[31m", etc.
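Putting those pieces together, here is a minimal sketch of such a state machine. The function and variable names are mine, not the MUD's actual code, and it only handles the colour sigils described above (foreground 30-37 with bold for capitals, background 40-47), not the "reset" ones:

```perl
use strict;
use warnings;

my @letters = qw/x r g y b p c w/;
my %index_of;
$index_of{ $letters[$_] } = $_ for 0 .. $#letters;

# '&' + letter => foreground (30-37, bold if uppercase),
# '^' + letter => background (40-47); anything else passes through
sub ansify_sketch {
    my ($in) = @_;
    my $out     = '';
    my $pending = '';    # '&' or '^' seen, waiting for a colour letter
    for my $ch ( split //, $in ) {
        if ($pending) {
            my $idx = $index_of{ lc $ch };
            if ( defined $idx ) {
                my $base = $pending eq '&' ? 30 : 40;
                my $bold = ( $pending eq '&' && $ch =~ /[A-Z]/ ) ? '1;' : '';
                $out .= "\e[$bold" . ( $base + $idx ) . 'm';
            }
            else {
                $out .= $pending . $ch;    # not a colour sigil: verbatim
            }
            $pending = '';
        }
        elsif ( $ch eq '&' || $ch eq '^' ) {
            $pending = $ch;
        }
        else {
            $out .= $ch;
        }
    }
    $out .= $pending;    # trailing lone '&'/'^': keep as-is
    return $out;
}
```

With this, "&x" with no state yields "\e[30m" and "&r" yields "\e[31m", as described above.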
As part of my small MUD test suite, I created a "bot" program which would make a number of connections to the MUD and start shouting coloured strings, reading coloured help pages, or generally stressing out what the server could do. It felt dog slow. A few times I launched the MUD under Devel::NYTProf to see what the cause was, and was not too surprised to see that one of the topmost subroutines was the one which handled the help page requests.
Well, these help pages don't change that often so they can be parsed once and then cached! That problem out of the way, the ansify() function became the most problematic one. You see, it's called whenever output is sent from the MUD to the player. That is, at every prompt or at every output. Again, I did some caching... but players' input is basically random so caching didn't help too much with this.
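The help-page caching itself amounts to only a few lines; here is a hypothetical sketch (with a counting stand-in for the real ansifier, to show the expensive work only runs once per page):

```perl
use strict;
use warnings;

# stand-in for the real ansify(); counts how often the expensive
# parsing actually runs
my $render_count = 0;
sub render_help { $render_count++; return "rendered: $_[0]" }

# cache keyed on the raw help text: parse once, serve from the
# cache ever after
my %help_cache;
sub cached_render_help {
    my ($text) = @_;
    return $help_cache{$text} //= render_help($text);
}

cached_render_help('help combat') for 1 .. 3;
print "parsed $render_count time(s)\n";    # parsed 1 time(s)
```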
Delving into the internals of the report, I realised that a very, very long time was spent in that tiny fraction of code which looked up whether a character was one of "xrgybpcw" and what its index should be.
The Perl function I had at the time was along these lines:
{
my @colors = map { ord } qw/x r g y b p c w/;
sub perl_getcolor {
my $clrchar = shift;
for ( 0 .. $#colors ) {
return $_ if ( $clrchar == $colors[$_] );
return $_ if ( ( $clrchar + 32 ) == $colors[$_] );
}
return 255;
}
}
This works well, but it's quite slow.
If Perl had -funroll-loops, that's where it could be used ;) The resulting "unrolled" Perl subroutine isn't that great either. I have not tested the below, so it may actually be completely wrong!
{
#my @colors = map { ord } qw/x r g y b p c w/;
# 120 114 103 121 98 112 99 119
sub opt_perl_getcolor {
#$_[0]+32 == 120 && return 0; # 120 - 32 : 88; etc
$_[0] == 120 && return 0;
$_[0] == 88 && return 0; # $_[0]+32 == 120
$_[0] == 114 && return 1;
$_[0] == 82 && return 1;
$_[0] == 103 && return 2;
$_[0] == 71 && return 2;
$_[0] == 121 && return 3;
$_[0] == 89 && return 3;
$_[0] == 98 && return 4;
$_[0] == 66 && return 4;
$_[0] == 112 && return 5;
$_[0] == 80 && return 5;
$_[0] == 99 && return 6;
$_[0] == 67 && return 6;
$_[0] == 119 && return 7;
$_[0] == 87 && return 7;
return 255;
}
}
In the MUD, I went with the below Inline::C version:
use Inline C => <<'END_INLINE_C';
unsigned char __colors[8] = "xrgybpcw";
unsigned char getcolor(unsigned char clrchar) {
char i = 0;
for ( i = 0; i < 8; i++ ) {
if ( clrchar == __colors[i] ) { return i; }
if ( ( clrchar+32) == __colors[i] ) { return i; }
}
return (unsigned char) -1;
}
I'm aware that the above can be further "optimized" in the same way as the Perl one, of course, but there's not really a huge need for that: premature optimization, yadda yadda yadda.
unsigned char opt_getcolor(unsigned char clrchar) {
switch (clrchar) {
case 120: case 88: return 0;
case 114: case 82: return 1;
case 103: case 71: return 2;
case 121: case 89: return 3;
case 98: case 66: return 4;
case 112: case 80: return 5;
case 99: case 67: return 6;
case 119: case 87: return 7;
default: return (unsigned char) -1;
}
}
How well do the above four functions perform? The code is at ansi--inline-c--vs--perl.pl.txt and the results on my machine below:
                          Rate  Pure Perl  Opt Perl  Inline::C  Opt Inline::C
Pure Perl version      34262/s         --      -65%       -90%           -91%
Opt Perl version       98814/s       188%        --       -71%           -75%
Inline::C version     337079/s       884%      241%         --           -14%
Opt Inline::C version 391645/s      1043%      296%        16%             --
The Inline::C version gave me roughly 10x the performance of the original Perl one, and it made the function drop off the top of the list of subroutines in which the most time was spent. Now I'm left with POE internals occupying the topmost spots, hence my starting to research the Async territory.
Unrolling the loop in the "optimized Perl version" gave it a 3x performance boost. What other trickery would you have performed to make that Perl subroutine faster?
If you're from Glasgow.pm or come to our meetings, you may want to bring a lightning talk answering the above question ;)
Or, you could leave a comment below.
Thanks for reading,
-marco-
Updated benchmarks at same link (thanks Illusori!): using a hash lookup is certainly faster, and the array lookup is blazing fast:
                   Rate  Pure Perl  Opt Perl  Hash Perl  Lookup Perl  Inline::C  Opt Inline::C
Pure Perl       35294/s         --      -67%       -76%         -80%       -90%           -91%
Opt Perl       106157/s       201%        --       -27%         -39%       -71%           -74%
Hash Perl      146056/s       314%       38%         --         -16%       -60%           -65%
Lookup Perl    174216/s       394%       64%        19%           --       -52%           -58%
Inline::C      363196/s       929%      242%       149%         108%         --           -12%
Opt Inline::C  412088/s      1068%      288%       182%         137%        13%             --
The new array lookup version's performance means the basic Inline::C version is now only about 2x as fast. Are there any other optimizations that could be done in Perlish land?
I talked about it a bit with a friend who'll be back in the country soon enough, and even today at work, and I think we may have enough people to kickstart the new Perl Mongers group.
I know there are several businesses and people in the West of Scotland who either use or like Perl, so I am hoping that enough of them would like to join the group.
I've sent the request to pm.org's support to reinstate the old Glasgow.pm. I will endeavour to bring the mailing list and website up as soon as possible, and will start reaching out to people who use Perl in the area. Dear lazyweb, could you help me?
In contrast with Edinburgh.pm's meetings, which are mostly "social" gatherings in the pub, I would like Glasgow.pm's meetings to be a bit more on the technical side. I am hoping we will be able to secure a good venue for that (crossing fingers!), and that we'll have enough people and interest to have a bunch of talks and lightning talks at each meeting.
Depending on what the consensus is, I think we may end up having a technical meeting one month, and one social meeting the next.
Please spread the word if you think somebody in the area may be interested in coming along, and feel free to contact me both on this blog and via e-mail at mfontani at cpan dot org.
Oh, and I'd be proposing the second Tuesday of the month as the meeting day (Edinburgh.pm's is the fourth Thursday -- tomorrow). This would allow people in either city to attend both meetings.
Wish me good luck! :)
-marco-
I followed the POD for Tatsumaki to get an idea of how the various methods (put, get, post) would have to be implemented. Some things weren't quite clear, and I had to read the source to find out, for example, exactly how to set the HTTP response code.
All in all quite a fun experience. Please do read it and feel free to modify it and give feedback via the comments!
The tutorial is at wiki.sproutcore.com and is part of the bigger tutorial for Sproutcore, which starts at the tutorials page.
Till soon,
-marco-
Just finished installing the brand spanking new Perl 5.12 on the netbook under my "perl" account, with which I had already tested earlier -RC releases with my personal code.
It's fair to say that the user experience has come a long way from the "old days" of manual configuration, installation, swearing and $ENV madness. Hell, one doesn't even need local::lib anymore!
All I had to do to test 5.12 after creating my "perl" account was:
curl -LO http://xrl.us/perlbrew
chmod +x ./perlbrew
./perlbrew install
rm perlbrew
~/perl5/perlbrew/bin/perlbrew init
echo 'source /home/perl/perl5/perlbrew/etc/bashrc' >> .bashrc
source .bashrc
echo $PATH
perlbrew install perl-5.12.0
# wait some time..
perlbrew switch perl-5.12.0
perl -V
There you go, shiny new Perl to be used!
git clone git://github.com/miyagawa/cpanminus.git
cd cpanminus
make test install
cpanm YAML
# and YAML's installed
perl -E'use YAML;say Dump({"sound"=>["YAML!"]})'
On my desktop machine, instead, I started versioning ~/perl5/ under Git starting with -RC4, adding and committing all the cpanm-installed modules as I went. This helped a lot with remembering which modules got installed (although perldoc perllocal also keeps track of that!) in order to install them again when testing the new -RC or the new 5.12. I wasn't entirely consistent with the naming of the commits, though. Meh.
I only had a couple of issues, with modules like Gearman::XS which required the latest and greatest library, not available on Ubuntu Lucid. I simply reset the Git tree when the builds didn't work. For Gearman::XS I downloaded, committed, tested and installed the latest version of the library (having it Git-versioned helps), and then rebuilt Gearman::XS with a couple of environment variables set (also noted in the commit logs).
One learns from one's mistakes, so I'll use another format for the commit messages when I install and test 5.12 on my desktop account: I'll make sure to "tag" the cpanm commits as such (for example, noting which cpanm command I issued, with the list of additional modules installed after the blank line in the Git commit message), and to mark the modules requiring the installation of other libraries in a different, sensible way too.
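Such a tagged commit might look something like this (the exact format, and the module names, are entirely hypothetical):

```
cpanm Some::Module

Installed: Some-Module-1.23, Some-Dependency-0.45
Needs: libsomething built from source (see previous commit)
```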
The idea would be to be able to do something like the following, and get a good list of the commands I need to type to have all my favourite modules on the new Perl version with minimum effort:
git log --pretty=oneline | cut -d ' ' -f 2- | tac
From there, I'll just go through my ~/GIT/ and run the test suites there. Maybe I'll have some code ready for the next post.
For now, I just want to say THANKS to the wonderful Perl 5 Porters who did manage to get the next stable release out as promised. I look forward to 5.14!
-marco-
Boy, was I wrong.
I found some code on the 'net which used Net::XMPP::Client to create the connection to talk.google.com, authenticate and send the message. It just didn't work.
Turning some debugging on revealed I was getting an "incorrect-authz" error from Google. The username and password weren't at fault (that'd be another error).
According to this page, the reason for the authorisation failure is to be found in the feature called "JID Domain discovery".
This is an example of the string used to authenticate to gtalk:
<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl'
mechanism='PLAIN'
xmlns:ga='http://www.google.com/talk/protocol/auth'
ga:client-uses-full-bind-result='true'>
HASHED_USER+PASS_INFO
</auth>
The difference between what Google needs and what the Jabber modules send is enough for the Google servers to return an error.
The following program connects to Google Talk and sends a message to a user specified (available at http://darkpan.com/files/notify-via-jabber.pl.txt as well):
#!/usr/bin/perl -w
use strict;
use warnings;
use Net::XMPP;
{
# monkey-patch XML::Stream to support the google-added JID
package XML::Stream;
no warnings 'redefine';
sub SASLAuth {
my $self = shift;
my $sid = shift;
my $first_step =
$self->{SIDS}->{$sid}->{sasl}->{client}->client_start();
my $first_step64 = MIME::Base64::encode_base64($first_step,"");
$self->Send( $sid,
"<auth xmlns='" . &ConstXMLNS('xmpp-sasl') .
"' mechanism='" .
$self->{SIDS}->{$sid}->{sasl}->{client}->mechanism() .
"' " .
q{xmlns:ga='http://www.google.com/talk/protocol/auth'
ga:client-uses-full-bind-result='true'} . # JID
">".$first_step64."</auth>");
}
}
my $username = shift or die "$0: username needed";
my $password = shift or die "$0: password needed";
my $resource = shift or die "$0: client handle needed";
my $recipient = shift or die "$0: need recipient address";
my $message = shift or die "$0: need message to send";
my $conn = Net::XMPP::Client->new;
my $status = $conn->Connect(
hostname => 'talk.google.com',
port => 5222,
componentname => 'gmail.com',
connectiontype => 'tcpip',
tls => 1,
);
die "Connection failed: $!" unless defined $status;
my ($res,$msg) = $conn->AuthSend(
username => $username,
password => $password,
resource => $resource, # client name
);
die "Auth failed ",
defined $msg ? $msg : '',
" $!"
unless defined $res and $res eq 'ok';
$conn->MessageSend(
to => $recipient,
resource => $resource,
subject => 'message via ' . $resource,
type => 'chat',
body => $message,
);
Use it to send a chat from youruser@gmail.com to another@googlemail.com like so:
perl notify.pl youruser PASSWORD 'notify v1.0' another@googlemail.com 'this is a test message'
I hope this will be of help to somebody :)
-marco-
okram@bluedesk: (with_app_cmd) ~/GIT/Net-RackSpace-CloudServers$ perl cloudservers.pl create \
  --name=mfapitest --imagename karmic --flavorname 256 --verbose
Server name: mfapitest
Metadata:
Paths:
Image id 14362 named Ubuntu 9.10 (karmic)
Flavor id 1 named 256 server
Creating new server...
Created server ID 124999 root password: mfapitestXXXXXXX
Public IP: 174.143.242.999
Private IP: 10.176.140.999
Server status: ACTIVE progress: 100..
Server now available!
The same is indeed doable with the sample scripts/newserver.pl in the dist -- and I also need to use scripts/deleteserver.pl to destroy the test instance -- but I assume that a command-line interface may be useful in the longer term to scale up/down specific instances, create N new servers in a shared IP group, or destroy no longer needed instances.
The command was pretty painless to write, and most of it was the validate_args routine...
As usual, comments would be much appreciated!
Till soon,
-marco-
#!/usr/bin/perl
# Runs modules' "compiles" tests before committing
# Dies (halting commit) if they don't compile
print "pre-commit => testing..\n";
system('prove -Ilib t/00*.t') == 0 or die <<"DIEMSG";
pre-commit => ERRORS: prove exited with status $?
DIEMSG
print "pre-commit => test OK\n";
This is an example of a 00-load.t file, that can literally be dropped-in the t/ directory:
use strict;
use warnings;
use Test::More;
use File::Find::Rule;
my @files = File::Find::Rule->name('*.pm')->in('lib');
plan tests => scalar @files;
for (@files) {
s{^lib[\\/]}{};
s{\.pm$}{};
s{[\\/]}{::}g;
ok(
eval "require $_; 1",
"loaded $_ with no problems",
);
}
File::Find::Rule is one of the golden gems found on CPAN, rclamp++!
Till soon,
-marco-
perl -E'say join " ", reverse world, hello' # :)
I've finally found some time to play again with the Rackspace API manual, and added a couple features to the Net-RackSpace-CloudServers module I hadn't touched since moving home some months ago.
The project is semi-alive on http://github.com/mfontani/Net-RackSpace-CloudServers/, and the latest 0.09_10 development version should be on CPAN soon.
It's basically a one-to-one adaptation of the Rackspace API document available at http://www.rackspacecloud.com/cloud_hosting_products/servers/api.
On the scripts/ directory there are some examples on how to use the module: it's possible to list all images, flavors, and servers you own, as well as delete servers by ID or create new servers in what I think is quite a simple syntax.
As an example, let's delete that pesky server whose ID's 666:
use strict; use warnings; use Net::RackSpace::CloudServers;
my $cs = Net::RackSpace::CloudServers->new(
user => $ENV{'CLOUDSERVERS_USER'},
key => $ENV{'CLOUDSERVERS_KEY'},
);
my @servers = $cs->get_server_detail;
my $srv_666 = ( grep { $_->id == 666 } @servers )[0];
die "No such server id 666\n" if ( !defined $srv_666 );
$srv_666->delete_server(); # dies in case of error
print "Server #666 deleted\n";
On a branch not on Github yet, I'm working on an App::Cmd interface to it, as well.
Any comments at all would be truly appreciated, as this is the first module I'm trying to get onto CPAN ;)
Till soon,