Check the workshop website for more details.
Consider a script, blocks.pl, that defines several of Perl's special code blocks, deliberately out of order:

END   { print "Twelve\n"; }
BEGIN { print "One\n"; }
CHECK { print "Six\n"; }
INIT  { print "Seven\n"; }
BEGIN { print "Two\n"; }
END   { print "Eleven\n"; }
CHECK { print "Five\n"; }
INIT  { print "Eight\n"; }
BEGIN { print "Three\n"; }
INIT  { print "Nine\n"; }
CHECK { print "Four\n"; }
END   { print "Ten\n"; }
When we run the script it prints something like:

$ perl blocks.pl
One
Two
Three
Four
Five
Six
Seven
Eight
Nine
Ten
Eleven
Twelve

BEGIN and INIT blocks run in the order they were declared, while CHECK and END blocks run in reverse declaration order, which is why the words come out in counting order.
Consider a simple Perl 6 class, saved as Greeter.pm inside a Greeter/lib directory:

class Greeter;

method greet($name = 'world') {
    say "hello $name";
}
Now to use the module we can write something like:
BEGIN { @*INC.push('Greeter/lib') }

use Greeter;
my $x = Greeter.new;
$x.greet('rakudo');
The only tricky part is the push in the BEGIN block, which adds your lib directory to the module search path. Of course, running this is as simple as:
$ ./perl6 greeter.p6
hello rakudo
Fork creates a new process running the same program. The process that calls fork is usually called the parent process, and the newly created process the child process. The fork function returns 0 to the child process, and the child's pid to the parent. Using fork can be as simple as:
print "($$) hello\n"; my $pid = fork; if ($pid == 0) { print "($$) I r child!\n"; } else { print "($$) I r parent!\n"; }
A possible output of running this script is:
$ perl fork.pl
(27121) hello
(27121) I r parent!
(27123) I r child!
We can see that the child process got its own pid, different from the pid of the running script (the parent). We now have two distinct processes running simultaneously. This is useful for tasks that take a long time to complete, or for complex operations that can be divided into smaller pieces and run in parallel. Let's look at a more academic example, the echo server:
use IO::Socket::INET;

$SIG{CHLD} = 'IGNORE';    # auto-reap children so they don't become zombies

my $listener = IO::Socket::INET->new(
    LocalPort => 9999,
    Listen    => 5,
    Reuse     => 1);

while (my $client = $listener->accept) {
    my $pid = fork;
    if ($pid == 0) {
        handle_client($client);
        exit 0;    # the child is done once its client goes away
    }
    else {
        print STDERR "New client, fork'ing (child $pid)\n";
        close $client;    # the parent's copy; the child keeps its own
    }
}

sub handle_client {
    my $client = shift;

    $client->send("Hello client, this is an echo server!\n> ");
    while (1) {
        my $msg;
        $client->recv($msg, 100);
        last unless defined $msg and length $msg;    # client disconnected
        $client->send("ECHO from $$! $msg> ");
    }
}
First we create a simple socket listening on port 9999, and then start an infinite loop waiting for connections on it. Nothing special so far; the beautiful part comes into play when we accept a connection. Instead of hanging the script there while we handle the client's requests, we fork a new child process to do that, and let the parent process keep waiting for new connections, spawning new children as needed. Let's see it working. First, the script running:
$ perl echo_server.pl
New client, fork'ing (child 27379)
New client, fork'ing (child 27381)
And a couple of telnet clients:
$ telnet localhost 9999
(...)
Escape character is '^]'.
Hello client, this is an echo server!
> hello echo server
ECHO from 27379! hello echo server

$ telnet localhost 9999
(...)
Escape character is '^]'.
Hello client, this is an echo server!
> and another request
ECHO from 27381! and another request
This is great because not only can we handle more than one client at the same time, since several child processes can be working simultaneously, but we also don't make new clients wait for the previous client to finish. A client can be very slow and have a huge number of requests to process, yet we keep accepting new connections and processing other requests at the same time.
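The same trick works for splitting one long job into pieces that run in parallel: fork one child per piece and have the parent wait for all of them. A minimal sketch, where @tasks and process_task() are hypothetical stand-ins for the real work:

my @tasks = (1 .. 4);
my @pids;

foreach my $task (@tasks) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        process_task($task);    # hypothetical: handle one piece of the work
        exit 0;
    }
    push @pids, $pid;
}

# the parent blocks here until every child has finished
waitpid($_, 0) for @pids;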
Hope I haven't bored anyone to death with this post. Happy fork'ing.
Linux::Inotify2 lets a script watch a directory for filesystem events. For example:

use Linux::Inotify2;

my $dir = shift @ARGV;    # the directory to watch

my $inotify = Linux::Inotify2->new;
$inotify->watch($dir, IN_CREATE, \&handle_new);

# block and dispatch events to the callbacks
$inotify->poll while 1;

sub handle_new {
    my $e = shift;
    print "New file or dir: " . $e->fullname . "\n";
}
This will execute the callback function handle_new every time a file is created in $dir. The function simply prints the name of the new directory or file.
Two interesting things. First, there is no simple way to declare this watcher recursive, but you can add a watch for each directory in a list, for example:
foreach (@dirs) {
$inotify->watch($_, IN_CREATE, \&handle_new);
}
Or even do something more magical: start by watching every subdirectory, for example:
opendir(DIR, $dir);
while (my $entry = readdir DIR) {
    next if $entry eq '.' or $entry eq '..';
    -d "$dir/$entry" and $inotify->watch("$dir/$entry", IN_CREATE, \&handle_new);
}
closedir DIR;
And do this recursively for every directory inside the top directory. Also add a little magic to the callback function:
sub handle_new {
    my $e = shift;
    print "New file or dir: " . $e->fullname . "\n";

    # a new directory gets a watcher of its own as soon as it shows up
    if (-d $e->fullname) {
        $inotify->watch($e->fullname, IN_CREATE, \&handle_new);
    }
}
This way you get a watcher for every directory in the top directory, and whenever a new directory is created a new watcher is created for the new directory. This is a simple way to make the watcher "recursive" in real time.
Another interesting tip: imagine that you are watching an upload directory, and people upload files that need to be processed after they arrive. Maybe the IN_CREATE event isn't the best one, because it fires as soon as the file is created, and in this case you want to wait for the file to finish copying before starting to process it. For these situations look at the IN_CLOSE_WRITE event, which fires as soon as a file descriptor opened for writing is closed -- when the file finishes copying.
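A minimal sketch of that, assuming a hypothetical /srv/uploads directory and a hypothetical process_upload() function:

use Linux::Inotify2;

my $inotify = Linux::Inotify2->new;

# IN_CLOSE_WRITE fires when a file opened for writing is closed,
# i.e. only after the upload has actually finished
$inotify->watch('/srv/uploads', IN_CLOSE_WRITE, sub {
    my $e = shift;
    print "Upload finished: " . $e->fullname . "\n";
    # process_upload($e->fullname);
});

$inotify->poll while 1;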
Here is a one-liner that prints the five commands you use the most, according to your shell history:

$ history | perl -ne 'END { map {print ++$i.": $_\n";} splice(@{[sort {$h{$b}<=>$h{$a}} (keys %h)]},0,5); } m/\s+\d+\s+(.*)/; $h{$1}++;'
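Unpacked into a readable script, the one-liner does roughly this:

#!/usr/bin/perl
# expects `history` output on stdin: lines like "  123  command args"
my %h;
while (<>) {
    $h{$1}++ if m/\s+\d+\s+(.*)/;    # count each distinct command line
}

my @top = sort { $h{$b} <=> $h{$a} } keys %h;    # most used first
splice(@top, 5) if @top > 5;                     # keep only the top five

my $i = 0;
print ++$i . ": $_\n" for @top;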
On one of the servers I use I got:
1: ls
2: fg
3: cd ..
4: sudo tail -f /var/log/httpd/error_log
5: cd
Well, I actually added:
alias j=jobs
alias vl='sudo tail -f /var/log/httpd/error_log'
to my .bashrc after this.
Memcached is one of the most famous cache engines on the web. It can be used to cache any arbitrary key/value pair; later on, you use the key to retrieve the stored value. It is one caching solution that is easy to use from Perl. To start using Memcached, load the client module:
use Cache::Memcached;
Next, a new connection needs to be established:
my $cache = Cache::Memcached->new({ 'servers' => ["127.0.0.1:11211"] });
Now we can use $cache to perform operations. For example:
# store in cache
$cache->set($key, $value);

# retrieve from cache
my $value = $cache->get($key);
There are many ways to take advantage of caching, and imagination is the limit. You can cache the results of your database queries, or cache entire webpages ready to be served immediately, or anything in between: you can cache several components of your website and put them together as needed to produce the final output.
A typical workflow that caches entire web pages, implemented in a dispatcher, could look something like:
# handle request and arguments
my $key = calculate_request_key();

my $content = $cache->get($key);
unless ($content) {
    $content = process_request();
    $cache->set($key, $content);
}

# return content to client
These techniques can greatly improve the number of requests per second a complex application can answer. Keep in mind that caching information doesn't mean your site can't serve completely dynamic content, since you can set an expire time on cached information. Information in the cache can be valid for, say, a minute, and you can also have other processes, cron jobs for example, that talk to the cache engine and update content. Instead of having the application compute the output for 10 requests per second, compute it once and immediately return the cached output for the same request over the next 30 seconds. Of course you can argue that your content is 30 or 60 seconds (whatever your cache lifetime is) behind, and that is true, but the time a slow application would spend processing thousands of requests in those same 30 or 60 seconds could introduce a much bigger content delay, or even content not being served at all.
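Setting that lifetime is just a third argument to set; a sketch with an illustrative 30-second expire time:

# cache the generated content for 30 seconds; after that,
# get() returns undef and the content is generated again
$cache->set($key, $content, 30);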
There are some hairy problems with this type of approach, and some issues to be aware of, but there are also simple solutions that can be introduced in your application to handle them. More details on those in a post to come.
To check whether an element exists in a list, instead of writing something like:

my $found = 0;
foreach (@list) {
    $search eq $_ and $found++;
}
I prefer to use something like:
my $found = grep {$search eq $_} @list;
The code is simpler and more elegant, and there's no significant performance difference between the two, although grep actually seems to run a little faster if you want to squeeze out all the crumbs:
                  Rate  find_with_for find_with_grep
find_with_for  28818/s             --            -5%
find_with_grep 30211/s             5%             --
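For reference, numbers like these come out of a comparison along the following lines, a sketch using the core Benchmark module with an illustrative list:

use Benchmark qw(cmpthese);

my @list   = (1 .. 100);
my $search = 50;

cmpthese(-2, {
    find_with_for => sub {
        my $found = 0;
        foreach (@list) { $search eq $_ and $found++; }
    },
    find_with_grep => sub {
        my $found = grep { $search eq $_ } @list;
    },
});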
Grep can also be used to filter lists very nicely. For example, imagine you have a list of users and want only the users older than 18; just:
my @users_over_18 = grep { $_->{'age'} > 18 } @users;
All in all, grep is great.
Most Perl scripts start with a shebang line like:

#!/usr/bin/perl
this works great if you want to use a single, system-wide Perl for your scripts. But what if you have several different Perl installations and want to run the same script with different Perl versions, without having to change the shebang line? One possible and straightforward solution is to change it to:
#!/usr/bin/env perl
and then prepend the directory of the version you want to use to your $PATH environment variable. This can even be done easily on a per-user basis. For example, you can have a user named 'perl_5.10' and, for that user, set:
PATH=/usr/local/perl_5.10/bin:$PATH
now, this user runs scripts (with the env version of the shebang line) using Perl 5.10.
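To confirm which interpreter such a script picks up, something like this works (the paths shown are illustrative):

$ su - perl_5.10
$ which perl
/usr/local/perl_5.10/bin/perl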