Because building KDE takes hours, and you won't need it for anything other than KCachegrind.
But there's a Qt variant shipped with kcachegrind, called qcachegrind.
Maybe MacPorts should use that variant. Or not, because kdelibs3 is listed as a dependency.
Description: KCachegrind visualizes traces generated by profiling, including a tree map and a call
graph visualization of the calls happening. It's designed to be fast for very large
programs like KDE applications.
Homepage: http://kcachegrind.sourceforge.net/
Library Dependencies: kdelibs3
Platforms: darwin
License: unknown
Maintainers: nomaintainer@macports.org
I have been trying to conduct interviews for at least a year, I think, but I was never able to figure out the technology. The setup is still not what I'd like it to be, but I think it is already good enough to get started.
Hannover.pm is organising the 16th German Perl Workshop 2014 (GPW 2014) in Hanover.
An official act.yapc.eu website is currently in the making and will be published in early June. Give us some time to understand and fully configure its back end.
The GPW 2014 will take place from March 26th to 28th, 2014 (Wednesday to Friday). CeBIT will take place from March 11th to 15th, and the Hannover Messe from April 7th to 11th. We're smack-dab in the middle of those two big fairs, but hotel rooms will be affordable during that week.
I'll blog about the GPW 2014 at least once a month to keep you informed, but please also have a look at the official Act website for major news.
If you'd like to chat, you can join the IRC channel #gpw (#gpw2014 is for the organisers) on irc.perl.org.
My first job was as a bus conductor, and my second one was as a student trainee in an engineering company - proper engineering, with production lines, big machines, hot things, and "danger of death" notices on equipment. In both of these, safety was an important concern, and especially in the second one it was drilled into me that safety and quality are closely related and arise from systems, not merely from individual endeavour. While I never completed my degree in manufacturing/systems engineering (I dropped out because I was fed up after too many years in the classroom), I still retain an interest in the subject.
I recently came across the excellent Disastercast podcast by Drew Rae. Of particular interest to programmers is the sixth episode, which looks at the report into a fatal rail crash caused by a poor safety and testing culture.
1: Right man. I was there when it started. At the German Perl Workshop this March in Berlin, Richard ignited a lot of controversy with his unofficial keynote. Not everything he said was new, and some of it was, IMO, just opinion or chatter and not really relevant. Later I spoke with him at the social meeting in the computer game museum. (Seriously, is there a better place for such an event?)
During our conversation I found out: he listens to people, he really loves Perl, and he's the right kind of person to do this, with the right set of experience - even if I don't share some of his views on what is important.
Stratopan is a new service for hosting custom repositories of Perl modules in the cloud. Private beta trials will begin early this summer. If you'd like to participate in the trials, please stop by https://stratopan.com and leave us your email address. We'll contact you with all the details when the trials begin.
Stratopan will host both public and private repositories with any combination of proprietary and open source Perl modules. And Stratopan is built on Pinto, the open source tool for creating custom CPAN-like repositories, so it has the same helpful tools for managing your application dependencies.
Welcome to Perl 5 Porters Monthly, a summary of the email traffic of the
perl5-porters email list.
Well, that was a nice little break. I didn't intend for it to be so long,
but there you go. Life gets real sometimes. To catch up, I'm basically
going month by month through most of April. I plan to resume monthly
summaries starting the week of April 29, 2013.
I didn't actually intend for this to be a series of posts, but hey, that's the consequence of going with the flow rather than rigidly planning everything out beforehand, and it nicely mirrors the theme of:
If you have not read those, I strongly recommend that you do so before continuing with this post. The comments have mostly been positive, but Adrian Howard has offered some interesting counter-points and some good resources for further reading. I will not say that he's wrong, but there is a different way of looking at this situation.
From April 7 to 19, I conducted a web questionnaire targeting Japanese Perl users, which I titled "Perl5 Census Japan 2013" ;)
I purposely asked the people spreading the news to state specifically that this was not just for hardcore Perl users, and that even if you didn't use Perl much these days, I still wanted your input.
So here's a (terse) version in English so y'all can see. This will probably give you an insight into what the Japanese Perl community looks like, and what type of technology they prefer.
Today I was glad to read that the successful merge of Pinto::Remote and Pinto::Server into the main Pinto repository made Pinto::Remote work again.
I wanted to know how difficult setting up a Pinto server could be. The requirement behind it was to have a single CPAN-like repository for deploying server machines. The repository should contain company-provided distributions, optionally combined with a collection of CPAN distributions.
I've been playing with Heroku (a platform as a service) and Mojolicious, which works very well if all of the modules install. Greg Hinkle shows you how to do it. Create your Mojolicious app and deploy it easily to Heroku. As you do, its dependencies are installed for you.
Pure Perl modules are usually fine, but Heroku is a limited Ubuntu environment that doesn't have all the libraries you probably expect to already be there. One of the modules I needed had DB_File as a deep dependency, so deploying my app (with Heroku toolbelt and Mojolicious::Command::deploy::heroku) fails. I get the rainbow barf from the uni-raptor.
Marpa's SLIF (scanless interface)
allows an application to parse directly from any BNF grammar.
Marpa parses vast classes of grammars in linear time,
including all those classes currently in practical use.
With
its latest release,
Marpa::R2's SLIF
also allows an application to intermix
its own custom lexing and parsing logic
with Marpa's,
and to switch back and forth between them.
This means,
among other things,
that Marpa's SLIF can now
do procedural parsing.
What is procedural parsing?
Procedural parsing is parsing using
ad hoc code in a procedural language.
The opposite of procedural parsing is declarative parsing
-- parsing driven by some kind of formal description
of the grammar.
Procedural parsing
may be described as what you do when you've given up
on your parsing algorithm.
Dissatisfaction with parsing theory
has left modern programmers accustomed to procedural parsing.
And in fact some problems are best tackled with procedural parsing.
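To make the distinction concrete, here is a toy sketch of my own (not Marpa code): a procedural parser for a comma-separated list of integers is just ad hoc Perl that consumes the string directly, whereas a declarative parser would hand a BNF description of the same language to a tool like Marpa.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Procedural parsing: ad hoc code that walks the input itself.
# The grammar, stated informally: list ::= int ( ',' int )*
sub parse_int_list {
    my ($input) = @_;
    my @ints;
    for my $token ( split /,/, $input, -1 ) {
        $token =~ /\A\s*(\d+)\s*\z/
            or die "Parse error near '$token'\n";
        push @ints, $1;
    }
    return \@ints;
}

my $result = parse_int_list('1, 2, 42');
print "@$result\n";    # 1 2 42
```

In the declarative version, the BNF above would be the program; here it exists only as a comment, and the code embodies it implicitly - which is exactly why mixing the two styles, as the SLIF now allows, is attractive.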
Recently I've been working on factoring and primality proofs (with certificates) for Math::Prime::Util. I thought I'd give a brief summary and comparison of the modules I know of for factoring integers from Perl.
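As a baseline for such a comparison, here is a plain-Perl trial-division sketch of my own (not code from any of the modules discussed) - fine for small integers, hopeless at the sizes the specialized modules target:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Naive trial division: returns the prime factors of $n in ascending order.
sub factor {
    my ($n) = @_;
    my @factors;
    for ( my $d = 2; $d * $d <= $n; $d++ ) {
        while ( $n % $d == 0 ) {
            push @factors, $d;
            $n /= $d;
        }
    }
    push @factors, $n if $n > 1;
    return @factors;
}

print join( ' * ', factor(600851475143) ), "\n";    # 71 * 839 * 1471 * 6857
```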
Recently I wrote about how to be agile without testing (if you haven't read that, you should do so before reading this). I was planning on a follow-up after some comments came in, and so far the reaction has been decidedly mixed. I think that's a shame, because not many people seemed to focus on the punchline:
And that's really the most interesting idea of this entire post: your customer's behavior is more important than your application's behavior.
Just a little more than a month ago, I only had the English version of the Perl Tutorial I have been writing for a while. Then Felipe da Veiga Leprevost offered to translate it into Portuguese, and now there are 9 more translations at various stages.
It is extremely nice to see people volunteer their time to help others....
Travis CI is a hosted continuous integration service for the open source community.
Essentially you set up a git post-commit hook that causes your tests to get run on every commit, against a number of different Perl versions, with databases and other services available if needed. And it's all free!
If you visit https://travis-ci.org/ you can get a feel for the interface and the tests that are being run. For a particular commit you get a build, for example WebService::Nestoria::Search build 1, which has a sub-build per Perl version, for example WebService::Nestoria::Search build 1.1 (perl 5.16). As you can see you get the full output from the Ubuntu VM that's running your tests, so if anything does go wrong it's pretty simple to debug.
For the rest of this post I'm going to describe the integration process, in particular hitting on how to make it work with Dist::Zilla-based projects.
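As a preview, a minimal .travis.yml for a plain (non-Dist::Zilla) Perl distribution might look like this - the Perl versions listed are just my own example choices:

```yaml
language: perl
perl:
  - "5.16"
  - "5.14"
  - "5.12"
# By default Travis installs dependencies (roughly `cpanm --installdeps .`)
# and then runs the distribution's tests; Dist::Zilla-based projects need
# extra setup on top of this, since they have no Makefile.PL in the repo.
```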