Type::Tiny is probably best known as a way of having Moose-like type constraints in Moo, but it can be used for so much more. This is the first in a series of posts showing other things you can use Type::Tiny for.
Let's imagine you have a function which takes three parameters, a colour, a string of text, and a filehandle. Something like this:
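The post's original snippet isn't reproduced here, but a minimal stand-in, with a function name and colour handling that are purely illustrative assumptions, might look like:

    use Term::ANSIColor qw( colored );

    # Illustrative stand-in only: the name and behaviour are assumptions,
    # not the post's actual code.
    sub print_coloured {
        my ( $colour, $text, $fh ) = @_;
        print {$fh} colored( $text, $colour ), "\n";
    }

    print_coloured( 'red', 'Something went wrong', \*STDERR );

A function like this is exactly where constraints such as Str and FileHandle from Types::Standard, plus a custom colour type, could check each parameter.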
Last week I gave a "Intro Into Perl 6 Regexes and Grammars" talk at the Toronto
Perl Mongers, whom I thank for letting me speak.
For the Google Hangout that is usually set up, we got to use the fancy equipment provided by the company that was letting us use their space. Unfortunately, it's currently unclear whether the hangout was recorded and whether there will be a video of the talk.
So, I figured I'd make a screencast of the talk. You won't get some of the discussions that occurred during the meeting, but the content of the talk itself is pretty much identical.
There are now three search backends available: PostgreSQL, Elasticsearch, and SQLite using the FTS5 extension. SQLite is of course the simplest to deploy, as it requires no setup, but I decided against using it for the main perldoc.pl instance because it does not support skipping stopwords. The backend is provided nonetheless, so basic search features can be added to a deployment without setting up a database server. Additionally, the application can be deployed without any search backend configured, which simply removes the search box and allows all pages to be viewed normally.
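As a hedged illustration of what the FTS5 route involves, here is a small DBD::SQLite sketch; the table and column names are assumptions rather than perldoc.pl's actual schema, and it presumes a DBD::SQLite built with FTS5 enabled:

    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=search.db', '', '',
        { RaiseError => 1 } );

    # Hypothetical schema: an FTS5 virtual table indexes its columns
    # for full-text search with no server setup at all.
    $dbh->do(q{
        CREATE VIRTUAL TABLE IF NOT EXISTS pods
        USING fts5(name, abstract, contents)
    });

    # MATCH runs the full-text query; FTS5 does not skip stopwords,
    # so words like "the" are indexed and matched literally.
    my $rows = $dbh->selectall_arrayref(
        'SELECT name FROM pods WHERE pods MATCH ? ORDER BY rank',
        {}, 'open file',
    );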
Last week I gave a "Faster Perl 6 Programs" talk at the Toronto Perl Mongers, whom I thank for letting me speak.
For the Google Hangout that is usually set up, we got to use the fancy equipment provided by the company that was letting us use their space. Unfortunately, it's currently unclear whether the hangout was recorded and whether there will be a video of the talk.
So, I figured I'd make a screencast of the talk. You won't get some of the discussions that occurred during the meeting, but the content of the talk itself is pretty much identical.
Still playing with the API here in the Moos-Pen today.
Now that I have that new API for 'Gather/Group By', I will have to update how Driver::DBI works, starting with a new test in the 50_group_by.t test case.
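To give a flavour of it, here is a hypothetical sketch of how such a test might start; only view_elements and view_count appear in these posts, so the constructor keys below are guesses at the Gather API rather than its documented interface:

    use Test::More;
    use Database::Accessor;

    # Hypothetical: the constructor keys are guesses at the Gather API.
    my $da = Database::Accessor->new({
        view   => { name => 'people' },
        gather => { elements => [ { name => 'region' } ] },
    });

    ok( defined $da->gather, 'gather object is created' );
    is( scalar @{ $da->gather->view_elements() }, 1, 'one group-by element' );
    done_testing();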
For some time now, most of the updates to Astro-satpass have been to maintain the canned Iridium status table. This seemed wrong to me, so I have pulled Astro::Coord::ECI::TLE::Iridium out into its own distribution. This is currently available on CPAN. The first distribution of Astro-satpass without this module will be version 0.100, which should be out in a few days.
For people who use one of the CPAN clients, this should be a non-event. I know of no downstream packagers except for ActiveState and MacPorts. Last I looked, ActiveState was just automatically picking up all of CPAN (or not, as the case may be), though if there was no PPM package a CPAN client would work. I have filed a bug with MacPorts describing what is going on. We will see what happens.
"I always insisted on commenting my Perl. I never got to the very end of the Camel Book. Not in one reading, anyway. I never experimented with the darker side-effects; three or four separate operations per line was always enough for me. Over time, as my responsibilities moved more to programming, I cut back on the sysadmin tasks. Of course, that didn't stop the Perl use completely--it's amazing how often you can find an excuse to automate a task and how often Perl is the answer. But it reduced my Perl to manageable levels, levels that didn't affect my day-to-day functioning."
SPVM - Fast array and numeric operations, and an easy way to bind to C/C++
0.0359 2018-07-16
- SPVM::CORE is now natively compiled
- add join function
- fix const assignment bug
- support list syntax
    my $nums = [(1, 2), (3, 4), (5, 6)];
- objects now have a body field at offset 0, which fixes alignment bugs
Well, I was just going to do a simple test post-ette today, but on my first run I ran into this in the '30_view.t' test case for Database::Accessor:
Can't call method "view_count" on an undefined value at D:\GitHub\database-accessor\lib/Database/Accessor.pm line 762.
# Looks like your test exited with 255 before it could output anything.
Hmm, so I have a few loose ends from yesterday's post. The code in question is this line:
@elements = @{ $self->gather->view_elements() }
    if ( $self->gather->view_count() );
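Given the error above, gather itself was apparently undefined when the test ran. One plausible guard, assuming an undefined gather is a legitimate state, would be:

    @elements = @{ $self->gather->view_elements() }
        if ( defined( $self->gather ) and $self->gather->view_count() );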
It has been six years since I last mentioned anything about githook-perltidy, a tool for the automatic tidying of Perl and POD files during a Git commit. I rely on it every day, and I still make minor improvements to it, so I thought it worth a quick shout out to others who haven't heard about it or upgraded for a while.
Some CPAN distribution authoring tools come with automatic README generation support. At least in the case of Module::Install::ReadmeFromPod and the various Dist::Zilla plugins I've seen, that generation occurs when Makefile.PL is run. For reasons related to my workflow, Git and GitHub, that timing is too late for me. So I've added a README generation feature to githook-perltidy. See the documentation for details.
One final thing to note: this latest round of work on githook-perltidy was triggered by an unrelated minor issue that a user raised. So don't hesitate to let the author of a tool know when you use it, and when you find it sub-optimal. It might just motivate them to do a bunch of work.
I am giving a short talk, "How to become a CPAN contributor?", at The Perl Conference in Glasgow 2018. This is going to be my first ever talk at the European Perl Conference. I have already prepared the first draft of the slides for my talk and will be doing a final cleanup in the next few days. I have also got the hotel and train tickets booked.
For a change, this time I am taking my family with me. The plan is to do a Glasgow city tour one day and an Edinburgh city tour another day with the family.
Before this I was mostly associated with the London Perl Workshop, for the obvious reason that I live in London. A few months ago, I got the opportunity to attend the German Perl Workshop 2018 in Gummersbach. I must say, I was very impressed with the way the event was handled by a capable team of organisers, especially Jens and Roland.
Hello all,
this is the fourth blog post in the Machine learning in Perl series, focusing on AI::MXNet, a Perl interface to Apache MXNet, a modern and powerful machine learning library. If you're interested in refreshing your memory, or are just new to the series, please check the previous entries: 1, 2, 3.
If you're following ML research, then you're probably well aware of the two most popular libraries out there: Google's TensorFlow and, a relative newcomer to the field but one rapidly gaining widespread acceptance, Facebook's PyTorch. The reason PyTorch has gained so much ground on TensorFlow lies in the dynamic nature of that library. TensorFlow started as a static graph library (which is easier to optimize), while PyTorch went with dynamically allocated graphs and a NumPy (read: PDL) style of programming (with robust GPU support and auto-differentiation of gradients) that is as easy to debug as ordinary Python code.
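As a small taste of that eager, PDL-style programming in AI::MXNet (the values and shapes here are arbitrary):

    use AI::MXNet qw(mx);

    # Build a 2x2 NDArray; operations execute eagerly, as in NumPy/PDL.
    my $x = mx->nd->array( [ [ 1, 2 ], [ 3, 4 ] ] );
    my $y = $x * 2 + 1;    # overloaded operators, evaluated immediately
    print $y->aspdl;       # convert to PDL for easy inspection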
For those of you who might have been a little disappointed that there has not been much Moose in my posts of late, today you are in for a treat. Yesterday I had the problem where I may have a nice Database::Accessor all set up that returns all the people in a DB. Good and nice, but now I wanted to group them by 'region', which I could do along these lines:
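The post's actual snippet isn't included here, so the following is a hypothetical reconstruction; the constructor keys are my guesses at the Database::Accessor API, not its documented interface:

    # Hypothetical: keys are guesses at the Database::Accessor API.
    my $da = Database::Accessor->new({
        view     => { name => 'people' },
        elements => [ { name => 'region' } ],
        gather   => { elements => [ { name => 'region' } ] },
    });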
Our new Perl internship started on the 9th of July, when 7 daring new interns took on the challenge of becoming Perl developers. The internship will last for 4 weeks, during which they will get familiar with Perl and all that goes with it.
Perl is one of the pillars of Evozon, the company was founded in 2005 as a Perl and Java shop, expanding to other technologies over the years. After 13 years of Perl development we have plenty of experience and excitement for this language, something we want to share with this new group of interns.
We have one of the largest Perl teams in Europe and we’re proud to say that quite a few of our developers started their career with us, learning about Perl through internships just like this one. Now, they’re passing on the knowledge they’ve accumulated over the years to another round of Perl developers to be.
The internship will have two parts: a theoretical part, where the interns will be introduced to the world of Perl by our trainers, and a practical part, where they will be working on an application built in Mojolicious with DBIx and MySQL, running on a Raspberry Pi. The board will have several types of sensors attached and will be able to monitor the temperature and humidity in our office.
A long time coming, this looks along /usr/lib64 and /lib/x86_64-linux-gnu for existing libreadline.so.* libraries, otherwise assumes v7. I'll add more directories as I find them, and make sure that it looks recursively in /lib if all else fails as part of the next release. I'll be adding comments to the GitHub issues shortly, as it should address most of the existing problems.
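The module's real code may well differ, but the lookup described above might be sketched like this (the directories and the v7 fallback come from the paragraph; the regex is my own):

    # Scan the known directories for libreadline.so.<major> and take
    # the highest major version found, falling back to 7.
    my @found = map { glob("$_/libreadline.so.*") }
                    qw( /usr/lib64 /lib/x86_64-linux-gnu );
    my ($major) = sort { $b <=> $a }
                  map  { /libreadline\.so\.(\d+)/ ? $1 : () } @found;
    $major //= 7;
    print "Assuming libreadline major version $major\n";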
I've requested that The Perl Foundation cancel my currently running grant, "Perl 6 Bugfixing and Performance of Rationals / Fixing Constraints on Constants", on the grounds that a more detailed investigation into the proposed features, conducted during the course of the grant, showed many of them to be unwanted or unimplementable.
Since any further grant's work now differs significantly from what the TPF and community voted on, I prefer to cancel the grant and perform any of the remaining work on a volunteer basis, whenever I get a chance.
No payments will be made for any of the completed work to date and it is to be deemed to have been performed on a volunteer basis.
Summary of Changes from Original Proposal and Reasons for Cancellation
The newest blog post on the Ocean of Awareness blog is "Undershoot: parsing theory in 1965". It revisits the question "Why, despite all evidence, is parsing considered solved?", this time supplying some more background.
If the state of the art of computer parsing is taken as anything close to its ultimate solution, then it is a case of "human exceptionalism" -- the human brain has some power that makes it much better at parsing than computers can be. It is very unlikely that resorting to human exceptionalism as an explanation would be accepted for any other problem in computer science. Why is it accepted for parsing theory?