I thought it might be fun to write a "polymorphic virus", but that would involve learning executable file formats and machine code, which sounds like a lot of effort. So instead I went for the next best thing: Perl instead of machine code, and self-replicating code in the form of a quine rather than "infecting" other executables.
When you run this code, it will output a new version of itself, which you can run again to get another version, etc. Unlike a regular quine, which will produce an exact copy of its source code, this program is polymorphic: Its output is functionally identical to the original, but the code might look completely different. This is just a proof-of-concept, but it does try to introduce variations in all parts that a simple static scanner might use for detection (such as signatures in an antivirus). There are still fixed parts, but they're small and generic (such as ; or =) and can be found in almost any Perl script.
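For context, a classic non-polymorphic Perl quine fits on one line: the format string is printed into itself. (This is the textbook trick, shown here for illustration rather than the polymorphic program itself; it reproduces its own source modulo the trailing newline.)

$s=q($s=q(%s);printf$s,$s;);printf$s,$s;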
In my last post I asked Gemini to create a product requirement document outlining a plan to improve the overall state of the ev-calc-pricing application. In this blog entry I will discuss my interaction with Gemini and how it increased the application's code coverage. Below is an example of the prompts:
----
> review PRD.md for @ev-calc-pricing
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✓ ReadFile PRD.md │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ The PRD.md outlines a plan to refactor the ev-calc-pricing application to improve maintainability, data accuracy, and
robustness.
Key issues identified:
* Manual, hardcoded electricity rate data in StateRates.pm.
* Minimal test coverage.
* Unused scripts and lack of input validation.
The plan focuses on four epics:
1. Automated Data Pipeline: To fetch, process, and load electricity rate data automatically.
2. Comprehensive Test Suite: To ensure accuracy and stability of calculations and data loading.
3. Code Refactoring and Cleanup: To remove hardcoded data, unused scripts, and consolidate data files.
4. Enhanced User Experience: To add input validation and better user feedback.
Get it at the usual place...
Elsewhere, I've moved house, into a retirement village.
Yes, folks. I'm 75, much to my horror and astonishment.
It was an exhausting process, but I'm settled in now. And still programming!
And not just the wikis. I have various other projects I can get back to now I've moved.
Why move?
Because I downsized. The price difference gives me a little bit of money in the bank.
In my previous blogpost I briefly discussed using the Gemini CLI on a Perl Dancer application to analyze its codebase. The next step is to generate a product requirement document. Why is this important? Well, I had a bad experience with Gemini on another application where I allowed it to roam free in "YOLO" mode: it started fixing all the problem areas one after another for about an hour, until my free trial of Google Gemini expired for the day. This resulted in a completely rewritten application that could not run due to many different errors. The cool thing is that I had version control, so a good old `git reset --hard HEAD` cleaned up my project and I could start over again.
We had a small amount of helpful feedback on the named signature parameters PR. Paul wants to merge by the end of the week for the purposes of inviting more feedback, assuming no issues are raised in the meantime.
Every year we “welcome” a new bunch of trainees into our department. Young,
intelligent and enthusiastic, their psyche and physique have yet to be shaped to
accommodate cynical scepticism, efficient laziness, and an integument
thickened by years of abuse into something that offers natural protection
from radiation emanating from the monitors they will stare at all day playing
Solitaire.
One such fellow, let’s call him Nik the Greek, came up to me
with that sickening joie de vivre characteristic of youth, and proceeded
to reveal how eager he was to demonstrate his enormous intellectual
assets. I would have raised an eyebrow, had I the energy to do so. But
been there, done that. I was once his age I suspect, though either I
can’t remember or have developed a block to my memories as an act of
self-preservation.
Binary Golf Grand Prix is an annual small file format competition, currently in its sixth year. The goal is to make the smallest possible file that fits the criteria of the challenge.
This year's BGGP challenge was to output or display 6. I always wanted to work with actual machine code, so I decided to submit a DOS COM executable. Why? Because the COM format has no headers or other metadata; you can just put some x86 instructions in a file and run it directly.
Having no experience with DOS, I started by looking up a "hello world" example and found https://github.com/susam/hello:
MOV AH, 9                      ; DOS function 09h: print a '$'-terminated string (all numbers here are hex, DEBUG-style)
MOV DX, 108                    ; DS:DX -> the string; the code above is 8 bytes and COM files load at offset 100h
INT 21                         ; call DOS
RET                            ; exit: the top of the stack points at an INT 20 in the PSP
DB 'hello, world', D, A, '$'   ; D and A are hex for CR and LF; '$' terminates the string
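Adapting that to the challenge is a small step: DOS function 02h writes the single character in DL, and 36 hex is ASCII '6'. Before any golfing, a sketch in the same syntax could look like this:

MOV AH, 2   ; DOS function 02h: write the character in DL to standard output
MOV DL, 36  ; 36h is ASCII '6'
INT 21      ; call DOS
RET         ; exit via the INT 20 at the top of the stack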
We touched on the recent discussion about the classification of modules included with the perl distribution. We agreed that the PSC and p5p need to do a better job of tracking which modules have active maintainers and who they are. Something like a dashboard would be useful for this; as a starting point, it would be good to have an up-to-date overview of dists, their maintainers, and whether they are active. Maintainers.pl probably already covers the list of dists, but it would be useful to split the data out into a separate file so it could be used in other scripts. Those scripts could also populate records with additional data, outside of what Maintainers.pl needs, for their own purposes. For example, a tool that queries PAUSE for the maintainers of dual-life dists and flags deviations could check a custom field where known deviations (along with a comment) are recorded, so as to not flag them.
We had a somewhat surface-level discussion about the future of stdio in Perl. Leon has been thinking about this and intends to write a separate mail to p5p about it.
Available from the Wiki Haven.
I have still not had time to update CPAN::MetaCustodian, so it does not yet work correctly with the latest version of Perl.Wiki.html.
At the beginning of the year, we ran a small experiment at work. We hired four annotators and had them rate 900 sentences (the details are not important). To decide whether the inter-annotator agreement was significant, we calculated (among other measures) Krippendorff's alpha coefficient.
I’d used Perl for everything else in the project, so I reached for Perl to calculate the alpha, as well. I hadn’t found any module for it on CPAN, so I wrote one: I read the Wikipedia page and implemented the formulas.
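At its core, the coefficient is a single ratio:

\[ \alpha = 1 - \frac{D_o}{D_e} \]

where \(D_o\) is the observed disagreement among the ratings and \(D_e\) is the disagreement expected if the annotators had assigned values by chance: \(\alpha = 1\) means perfect agreement, and \(\alpha = 0\) means agreement no better than chance.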
The Real Data
The experiment was promising, so we got additional funding. We hired three more annotators, and a few months later, another nine. This increased the number of raters to 16. So far, they've rated about 200K sentences. Each sentence has been annotated by at least two annotators (usually three).
One day, I decided to calculate the inter-annotator agreement for the new data. To my surprise, the calculation took more than 6 hours.
Without weighing in on the pros and cons of using a monorepo approach for your organization's codebase, I am interested in hearing about tools and approaches that have been used with Perl.
For example, I have found that Bazel has Perl support which seems fairly actively maintained. I wonder if there is anything that can integrate with Dist::Zilla? Or any way of managing the pulling in of third-party code?
Experiences with CI/CD in the normal Git hosting platforms are also of interest - although it does seem like GitHub and GitLab are designed around death-by-repo - and I have seen some features to vary the "actions" behavior based on what's changed. I am, however, just as interested in other platforms - if you have had experiences with them, please chime in!
Fwiw I realize that perhaps Git isn't the best for monorepos (although you could argue that the Linux kernel is in a monorepo), and I have no issue with current alternatives or upcoming ones.
Any plugins that can help? For anything mentioned or not.
I apologize it took longer than expected - open source doesn't always move as fast as we'd like it to - but there are a lot of great things in this release that make it worth the wait.
Head on over to Perl.com to check out the details. Here's a quick
summary of what's new:
* New documentation, courtesy of a grant from the Perl and Raku Foundation
* Extendable config system, thanks to Mikko Koivunalho
* Removed Dancer2::Template::Simple from core Dancer2
* Retired our fork of Template::Tiny, and integrated our changes into the core distribution (thanks, @ether!)
* Configurable data/secrets censoring with Data::Censor
* Logging of hooks as they are executed
* New hook: on_hook_exception (see the sketch below)
* CLI improvements:
  * Changes that allow for scaffolding of other things (plugins, etc.) in future Dancer2 releases
  * Scaffolding of tutorial app
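A minimal sketch of the new hook, assuming it registers like any other Dancer2 hook; the callback payload shown here is an assumption, so check the release notes for the real arguments:

use Dancer2;

hook on_hook_exception => sub {
    my @payload = @_;    # assumed: details of the hook and the exception it threw
    warning 'a hook raised an exception';
};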
We're really excited for this release, and we hope you are too!
We often rely on our tools and just deploy new DB versions and move on.
Let's look at some simple examples.
Example 1
You have Schema v1 where a table has a column named `X`. In the next Schema v2, you drop it and create a column named `Y` instead.
v1 -> v2
X -> -
- -> Y
So the tool correctly drops `X` and creates `Y`.
Example 2:
For downgrades it looks similar:
v2 -> v1
Y -> -
- -> X
Simple! Right??
Example 3
Let's do it in a more advanced way. Now, instead of create/drop, we will rename the field:
v1 -> v2
X -> Y{renamed X}
In this scenario SQL:T will detect the `renamed` option and correctly generate `ALTER ...` statements instead of CREATE/DROP ones.
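In plain SQL::Translator terms, producing the diff could look like the sketch below. The DDL is hypothetical, and recording the rename hint via the field's `renamed_from` extra is an assumption here - adjust to however your toolchain spells it:

use SQL::Translator;
use SQL::Translator::Diff;

# Two toy schema versions (hypothetical DDL, for illustration only)
my $ddl_v1 = 'CREATE TABLE t (X int);';
my $ddl_v2 = 'CREATE TABLE t (Y int);';

my $v1 = SQL::Translator->new( parser => 'MySQL' );
$v1->translate( \$ddl_v1 );
my $v2 = SQL::Translator->new( parser => 'MySQL' );
$v2->translate( \$ddl_v2 );

# Without a hint the diff is a DROP of X plus an ADD of Y (Example 1).
# Recording the rename should turn it into a single ALTER (Example 3);
# here the hint is stored in the field's "extra" data (an assumption).
$v2->schema->get_table('t')->get_field('Y')->extra( renamed_from => 'X' );

print SQL::Translator::Diff::schema_diff(
    $v1->schema, 'MySQL',
    $v2->schema, 'MySQL',
);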
Example 4
Let's move to Schema v3, where we create `X` and drop `Y` (like we did in Example 1):
The annual Perl-Conf.Ru/25 will take place in Moscow on September 27, 2025 — a special date marking the birthday of Perl's creator, Larry Wall!
The conference will bring together the Perl community to share experiences and discuss current trends in development. The program includes reports on modern tools and practical approaches to solving complex problems.
The talks will cover:
* Test2: life after Test::More
* Turbo Perl: batteries debugger included
* Writing typemap for passing structures to XS
* Perl & Postgres: the hard way, because it's easier
After a very long hiatus due to the triplet of work-vacation-work, we return to Part 3 of my AI-assisted coding of a Perl interface to a foreign library.
I will not repeat the post from my GitHub pages, or the documentation of my MetaCPAN package Bit::Set, which features a "vibecoding" section.
However, I would like to share the take-home points from my exercise:
* I found the agentic bots not very helpful, as they entered long, repetitive, and useless reflections without being able to fix the problems they identified when the build of `Bit::Set` failed.
* The "Ask" mode chatbots could generate lots of code, but with subtle mistakes.
* Success with porting test suites from one language to the other was highly variable, ranging from near perfect to outright refusal to execute a difficult task.
* On the other hand, the chatbot was excellent as an auto-complete, often helping me finish the structure of the POD and putting together the scaffold to fill things in.
We are moving full steam ahead. We are finding that the Journals are not so easy to put out twice a year, but the editing process for Issue #2 is moving ahead nonetheless, and we are now collecting papers for inclusion in Issue #3. Meanwhile, our hybrid conferences are proving to be very successful endeavors. We hope you will consider submitting a Science Track paper or a regular Perl talk to this two-day hybrid conference in sunny ole Austin, TX, USA.