GPW2014 - day 2

This morning started with a debugging session. Denis Banovic demonstrated the elegance and ease with which Devel::hdb can be used to debug a running app inside a browser.
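For the curious, getting started is as simple as loading the debugger module on the command line (the script name is a placeholder; the port option follows the module's documented syntax):

```shell
# Run the program under Devel::hdb, then point a browser
# at the URL it prints to step through the code.
perl -d:hdb your_app.pl

# A specific port can be requested instead of the default:
perl -d:hdb=port:9876 your_app.pl
```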

The next two talks were held by Ralf Peine, who presented his Perl Open Report Framework. In his first talk he demonstrated the typical usage: generating reports from possibly huge amounts of data. The framework consists of several decoupled parts coordinated by a mediator. In his second talk he covered many of the patterns and idioms he used inside his software.

Jürgen Peters demonstrated some Haskell features by live-hacking a couple of functions doing recursive or list-processing work. He provoked errors by deliberately writing wrong type signatures to demonstrate the type-checking capabilities of the Haskell compiler. His conclusion was that compile-time checking is of high value, which stands in contrast to the dynamic nature of Perl.

The final talk before the lunch break was held by Steffen Ullrich. He explained the technical background of using TLS with various protocols as well as the problems that some implementing modules had and sometimes still have.

After lunch, Johann Rolschewski demonstrated Catmandu, the Perl data toolkit. Catmandu contains abstractions for data import, transformation, storage and manipulation. This enables a developer to focus on the data instead of technical details.

Next was Herbert Breunung talking about GCL, his DSL built on top of WxPerl that makes creating GUI applications much easier than using WxPerl directly.

Stefan Hornburg talked about Dancer and DBIx::Class and demonstrated his work on a table editor based on a REST interface implemented with Dancer, forwarding REST operations to resources accessible via a given DBIx::Class schema. Inside the browser, an interface based on angular.js issues all REST requests via Ajax calls. The aim is a tool comparable to phpMyAdmin.
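A minimal sketch of the forwarding idea, not the talk's actual code: `MyApp::Schema` and the routes are assumptions, but the Dancer keywords are standard:

```perl
use Dancer;
use MyApp::Schema;   # hypothetical DBIx::Class schema class

set serializer => 'JSON';

my $schema = MyApp::Schema->connect('dbi:SQLite:dbname=app.db');

# GET /table/:name/:id -> fetch one row from the named result source
get '/table/:name/:id' => sub {
    my $row = $schema->resultset( params->{name} )->find( params->{id} )
        or return send_error( 'Not found', 404 );
    return { $row->get_columns };
};

# PUT /table/:name/:id -> update a row from the JSON request body
put '/table/:name/:id' => sub {
    my $row = $schema->resultset( params->{name} )->find( params->{id} )
        or return send_error( 'Not found', 404 );
    $row->update( from_json( request->body ) );
    return { status => 'ok' };
};

dance;
```

With routes like these, the JavaScript front end only ever speaks JSON over HTTP and never needs to know about the database.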

Matt S. Trout talked about Devops Logique. The amusing part at the beginning of his talk compared several programming languages and their strengths and weaknesses. Coming back to system configuration, he praised Prolog and its backtracking capabilities for edge cases that sometimes fail with conventional approaches. Finally he presented his idea of a configuration tool based on a ruleset and explained the steps taken inside the machinery. A strong complaint against Puppet is that it "stamps over an existing configuration" instead of triggering a ticket. It looks like we can look forward to a great configuration tool.

Max Maischein demonstrated his tool WWW::Mechanize::PhantomJS, which uses PhantomJS, a headless WebKit browser, to handle modern web pages that require JavaScript to operate.
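Since the module follows the familiar WWW::Mechanize interface, basic usage looks roughly like this (a sketch; it requires the `phantomjs` binary on the PATH, and the URL is a placeholder):

```perl
use WWW::Mechanize::PhantomJS;

my $mech = WWW::Mechanize::PhantomJS->new();

$mech->get('https://example.com/');   # JavaScript on the page is executed
print $mech->title, "\n";             # title as rendered by WebKit

my $html = $mech->content;            # the DOM serialized after JS ran
```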

Lars Dieckow gave his amusing talk about "removing the uppercase (ß) == SS hack in unicode".

Herbert Breunung gave a talk entitled "Complete Programming". His suggested order is to start with writing documentation, followed by a prototype; finally the code is written, and tests come last. The reason for doing tests last, in his opinion, is to avoid coding things twice due to changes made during development.

GPW2014 - Afternoon talks

During the afternoon we had a couple of interesting talks.

Steffen Winker talked about testing binary data. He mentioned several distributions capable of generating useful messages in case of test failures. It turns out that only meaningful messages enable the developer to react quickly to failing tests. All four candidates, Test::BinaryData, Test::Bits, Test::HexString and Test::HexDifferences, do a good job; which one to use is merely a matter of taste or of the amount of binary data involved.
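As an illustration with one of the candidates, Test::BinaryData (the payloads are made up):

```perl
use Test::More tests => 1;
use Test::BinaryData;

my $got      = "\x01\x02\x03\x0a";
my $expected = "\x01\x02\x03\x0d";

# On failure, both strings are rendered side by side in hex,
# so the differing byte is easy to spot.
is_binary($got, $expected, 'payload matches');
```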

Sebastian Willing talked about his logging experience in his daily business. His company has tailored a solution receiving lots of messages with mixed severities from different sources. All messages are grouped into events, which are basically the body of a message without any variable data. With this approach, lots of consecutive messages differing only by e.g. a line number collapse into one repeated message. All these messages are then assigned to tickets, and depending on frequency, the amount of messages and their content, actions are taken. The backend system also knows several filter rules to avoid spamming. The system he demonstrated looks promising, and many people wished for something similar.
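The grouping idea can be sketched like this; a purely hypothetical illustration, not his company's code:

```perl
# Strip variable data (numbers, quoted strings) so that
# similar messages map to the same event key.
sub event_key {
    my ($message) = @_;
    $message =~ s/\d+/#/g;         # mask line numbers, PIDs, counts
    $message =~ s/'[^']*'/'?'/g;   # mask quoted variable content
    return $message;
}

my %count;
$count{ event_key($_) }++ for (
    "died at foo.pl line 42",
    "died at foo.pl line 57",
);
# both messages now share the key "died at foo.pl line #"
```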

After the lightning talks, Getty talked about various experiences from his work. Several people inside the same company might have different views of the same thing, which can result in different priorities being assigned to various tasks. He presented his analysis of success as sometimes depending on political or economic factors outside of his company's influence. However, quality products remain important; otherwise success may be missed. His closing words were a call for applications written in Perl: applications instead of libraries and modules :-)

GPW2014 - REST workshop

On the first day of the German Perl Workshop I attended a workshop held by Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 (daxim) from Vienna.PM. Based on code taken from a real-world application, he showed us how to implement a modern REST architecture that goes far beyond the mostly unusable demo programs seen elsewhere.

The Infrastructure Layer of his architecture is a DBIx::Class driven Schema model containing all entities which are later made accessible by the REST API.

The Application Layer is based on Catalyst and uses some very clever tricks for dispatching incoming requests to the right action methods that care about the various use cases (think: HTTP Verbs).
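One common way to get per-verb dispatch in Catalyst is Catalyst::Action::REST; whether the workshop code used exactly this mechanism is an assumption, and the controller below is hypothetical:

```perl
package MyApp::Controller::Item;
use Moose;
BEGIN { extends 'Catalyst::Controller::REST' }

# A single action entry point; the REST action class forwards
# the request to item_GET, item_PUT, item_DELETE, ... depending
# on the HTTP verb of the incoming request.
sub item :Path('/item') :Args(1) :ActionClass('REST') { }

sub item_GET {
    my ($self, $c, $id) = @_;
    $self->status_ok($c, entity => { id => $id });
}

sub item_DELETE {
    my ($self, $c, $id) = @_;
    # delete the resource here ...
    $self->status_ok($c, entity => { deleted => $id });
}

1;
```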

He used a couple of supporting modules like Data::HAL for easily generating standards-compliant HAL, a set of conventions for providing hyperlinks in JSON. Furthermore, syntactic and semantic validation is done using a subclass of JSON::Tiny::Subclassable together with JE, which evaluates a prepared JavaScript validator, as a Perl implementation is currently lacking.
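To give an idea of the HAL conventions, a resource representation might look like this (the resource and its fields are invented for illustration):

```json
{
  "_links": {
    "self": { "href": "/orders/123" },
    "next": { "href": "/orders/124" }
  },
  "total": 30.0,
  "_embedded": {
    "items": [
      { "_links": { "self": { "href": "/items/7" } }, "name": "Widget" }
    ]
  }
}
```

The reserved `_links` and `_embedded` keys are what make the document navigable by a generic HAL client; everything else is ordinary resource state.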

Caching is the final ingredient, ensuring reasonable performance when used in a real-life environment.

The astonishing part for me was that the operations triggered by modifying HTTP verbs consist of 90 percent error checking and only 10 percent actual work. I had guessed a 50:50 ratio before.

I wish we had more time. Daxim, can we start at 7:30 next time?

the first productive days with Pinto

After a while I managed to convince our admin at $work to prepare a virtual machine for running a Pinto server inside our company network. The primary goals were to have a well-known set of CPAN distributions available for setting up developer machines, for running CI tests and for provisioning all kinds of servers.

Pinto::Remote and Pinto::Server

Today I was glad to read that the successful merge of Pinto::Remote and Pinto::Server into the main Pinto repository made Pinto::Remote work again.

I wanted to know how difficult setting up a Pinto server could be. The underlying requirement was to access a single CPAN-like repository for deploying server machines. The repository should contain company-provided distributions, optionally combined with a collection of distributions available on CPAN.
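The basic workflow can be sketched with the stock Pinto commands; the paths, host name and distribution names below are placeholders:

```shell
# Create a repository and pull a CPAN distribution (plus its prerequisites)
pinto --root ~/pinto init
pinto --root ~/pinto pull Plack

# Add a company-internal distribution under a local author ID
pinto --root ~/pinto add --author MYCO MyCompany-Dist-1.0.tar.gz

# Serve the repository over HTTP (pintod ships with Pinto)
pintod --root ~/pinto

# On a client machine, install straight from the Pinto repository
cpanm --mirror http://pinto.example.com:3111 --mirror-only Plack
```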