Crypt::Passphrase is a module for managing passwords. It allows you to separate policy from mechanism: the code that polices authorization doesn't have to know anything about which algorithms are used behind the scenes, and vice versa, making for a cryptographically agile system.
It not only handles the technical details of password hashing for you, it also deals with a variety of schemes, and it's especially useful for transitioning between them.
A configuration might look like this (Koha):
my $auth = Crypt::Passphrase->new(
    encoder => {
        module => 'BCrypt',
        cost   => 8,
    },
    validators => [ 'MD5::Base64' ],
);

Using it might look like this:
if (!$auth->verify_password($password, $hash)) {
    die "Invalid password";
}
elsif ($auth->needs_rehash($hash)) {
    my $new_hash = $auth->hash_password($password);
    ...
}
It supports a variety of algorithms, but argon2 and bcrypt are by far the most popular ones. That said, it can do much more than that: it can do peppers for you.
The function of peppers is to protect passwords when their hashes leak, especially the bad ones. Password hashes try to make brute-force attacks so expensive that attackers won't even bother, but in the end they can't really protect passwords from a dictionary attack: the key-space of bad passwords is so small that no amount of hashing can prevent that.
When you add a pepper, an attacker needs to brute-force both the password and the pepper, and because the pepper doesn't need to be memorized by a human it can actually be a piece of high-entropy secret (e.g. a 16 or 32 byte chunk of good randomness). That puts it well outside the reach of any brute-force attack for sheer physical reasons.
The most important thing to understand about peppers is that, like passwords, the security they provide hinges entirely on their secrecy. If that secrecy is compromised they don't add anything to your security. If you remember nothing else from this blog post, please remember that.
The first thing you'd probably notice about my modules is that you don't pass them a pepper, but a map of peppers. This is an essential quality of the system that a lot of naive pepper implementations lack. Peppers are keys, and all keys must be rotatable: like passwords, you need to be able to change them if they may have been compromised. By using a map and adding the identifier to the metadata section of the hash, you can rotate in a new key while still being able to check old ones. This gives the system the agility it needs.
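To make the rotation idea concrete, here is a minimal pure-Perl sketch. It does not use Crypt::Passphrase itself; the pepper values and the HMAC-based pre-hashing are illustrative assumptions, the point is how storing the pepper's identifier keeps old hashes verifiable after a rotation:

```perl
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

# Hypothetical pepper map: identifiers pointing to high-entropy secrets.
# Keeping retired peppers around means old hashes stay verifiable.
my %peppers = (
    1 => 'retired-but-still-checkable-secret',
    2 => 'current-high-entropy-secret',
);
my $active = 2;

# Pre-hash the password with the active pepper, and record which pepper
# was used so verification can find it later.
sub pepper_password {
    my ($password) = @_;
    return ($active, hmac_sha256_hex($password, $peppers{$active}));
}

# Look the pepper up by the identifier stored with the hash; a hash made
# before a rotation still verifies after $active moves to a new key.
sub check_pepper {
    my ($password, $id, $expected) = @_;
    return hmac_sha256_hex($password, $peppers{$id}) eq $expected;
}

my ($id, $peppered) = pepper_password('hunter2');
print check_pepper('hunter2', $id, $peppered) ? "ok\n" : "not ok\n";
```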
The second thing you might notice is that I provide two very different styles of peppering: applying some sort of MAC before the password hash, and applying symmetric encryption after the password hash (e.g. Crypt::Passphrase::Argon2::AES and Crypt::Passphrase::Bcrypt::AES). The former approach appears to be more common out in the wild, but the latter is by far the better one. Firstly because its security is easily provable (it hinges only on symmetric encryption, not on an unusual combination of constructs), but secondly because it allows for easy re-peppering without needing the user's password to recompute the password hash inside of it (essentially just decrypting with the old key and encrypting with the new one). For that reason I would strongly recommend the latter approach.
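The re-peppering advantage is easy to demonstrate. The sketch below is pure illustration: toy_crypt is a toy stand-in for real symmetric encryption (Crypt::Passphrase::Argon2::AES uses actual AES), and the "password hash" is a stand-in for an argon2 output. What matters is the flow: re-peppering is decrypt-then-encrypt, and the user's password never appears:

```perl
use strict;
use warnings;
use Digest::SHA qw(sha256);

# Toy reversible cipher (XOR with a key-derived stream). For illustration
# only; a real implementation would use real symmetric encryption like AES.
sub toy_crypt {
    my ($data, $key) = @_;
    my $stream = '';
    $stream .= sha256($key . length $stream) while length $stream < length $data;
    return $data ^ substr $stream, 0, length $data;
}

my $password_hash = sha256('hunter2 plus some salt');  # stand-in for argon2 output
my ($old_pepper, $new_pepper) = ('old secret', 'new secret');

# Pepper by encrypting the stored hash.
my $stored = toy_crypt($password_hash, $old_pepper);

# Re-pepper without the user's password: decrypt with the old key,
# encrypt with the new one.
my $repeppered = toy_crypt(toy_crypt($stored, $old_pepper), $new_pepper);

print toy_crypt($repeppered, $new_pepper) eq $password_hash ? "ok\n" : "not ok\n";
```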
It can be as simple as this:
my $auth = Crypt::Passphrase->new(
    encoder => {
        module  => 'Argon2::AES',
        peppers => \%peppers,
    },
);
All you really need to do is change the module name and pass in the peppers. The hardest part is probably storing the peppers securely. There are many tools to help you with this (e.g. Vault, sealed secrets, and/or my own Mojolicious::Plugin::Credentials); how best to do this really depends on your setup.
Arguably the best option is using a hardware security module (e.g. CP::Argon2::HSM), but few people have a hardware security module lying around (good ones are rather expensive, though you might convince your TPM2 to function as one).
Using peppers doesn't have to be hard. If you have an appropriate credentials store, you can easily add one to your application and enhance the security of your passwords. Maybe you too should give Crypt::Passphrase::Argon2::AES a try.
Half of my new modules were related to my password framework Crypt::Passphrase. To be honest, most of them are small (± 100 LOC) and merely glue two or three other pieces of code together. And then there was Crypt::HSM, a PKCS11 interface (to use cryptographic hardware without exposing cryptographic keys) that was probably more work (2600 LOC of XS) than the others combined.
Most of this was with the aim to add peppering support to Crypt::Passphrase
, a subject extensive enough that I should probably dedicate a separate blogpost to it.
ExtUtils::Typemaps::Magic contains a set of typemaps that help me write XS-based objects. In particular the MagicExt typemap allows me to write thread-safe objects (in my particular case: refcounted), which no built-in typemap does. App::typemap helps you integrate typemap bundles into your local typemap file, and Dist::Zilla::Plugin::Typemap does the same for dzil.
I finally got around to publishing two pieces of toolchain that had been in the pipeline for years. CPAN::Static contains a specification and reference implementation for static installation of modules in CPAN clients. For 90% of all dists, ExtUtils::MakeMaker and Module::Build are overkill; all they really need is to copy some files and run tests. CPAN::API::BuildPL, a specification for Build.PL implementations, was mostly written by David Golden but never got published; now CPAN::Static depends on it, so it was published alongside it.
These two modules add a little typing to Perl. Magic::Check implements runtime (type) checking on a variable, and Magic::Coerce implements coercion. They're both really low-level backend modules that beg for a wrapper with a better syntax, which I haven't come up with yet.
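As a rough illustration of what "runtime checking on a variable" means, here is a toy tie-based sketch; Magic::Check's actual implementation uses Perl magic rather than tie, and the digit check is just a hypothetical example:

```perl
use strict;
use warnings;

package CheckedScalar {
    # Toy checked scalar: every STORE runs the supplied check first.
    sub TIESCALAR {
        my ($class, $check) = @_;
        return bless { check => $check, value => undef }, $class;
    }
    sub FETCH { $_[0]{value} }
    sub STORE {
        my ($self, $value) = @_;
        die "check failed\n" unless $self->{check}->($value);
        $self->{value} = $value;
    }
}

# Attach a check: the variable may only ever hold non-negative integers.
tie my $count, 'CheckedScalar', sub { $_[0] =~ /\A[0-9]+\z/ };

$count = 42;                                        # passes the check
my $err = eval { $count = 'nope'; 1 } ? '' : $@;    # this one dies
if ($err) { print "rejected\n" } else { print "accepted\n" }
```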
This module brings Thread::CSP style channels to threads.pm as an alternative to Thread::Queue. As its name indicates, its semantics are close to those of Go channels, instead of the more asynchronous behavior of Thread::Queue.
This is an implementation of a simpler and more predictable kind of smartmatching than the one that comes with core. It's intended to be usable even if smartmatching gets removed from core itself.
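To give a flavor of what "simpler and more predictable" smartmatching can look like, here is my own toy sketch; this is not the module's actual API or semantics, just the general idea of restricting matching to a few obvious cases:

```perl
use strict;
use warnings;

# Toy smartmatch supporting only predictable cases: regex match,
# array membership (recursive), and plain string equality.
sub simple_match {
    my ($left, $right) = @_;
    return scalar($left =~ $right) if ref $right eq 'Regexp';
    return (grep { simple_match($left, $_) } @$right) ? 1 : 0
        if ref $right eq 'ARRAY';
    return $left eq $right ? 1 : 0;
}

print simple_match('foo', [qw(foo bar)]) ? "yes\n" : "no\n";  # membership
print simple_match('foo', qr/^f/)        ? "yes\n" : "no\n";  # regex
print simple_match('foo', 'baz')         ? "yes\n" : "no\n";  # equality
```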
I had a productive year, and some pretty good leads to move forward this year. I'm looking forward to it.
prototype that's useful IME, the other never really are.
That sort of transpilation was exactly what dzil was designed for. It could hide all of Mite in a way that doesn't even require the author to change any part of their workflow.
Thread::Csp::Promise class:

MODULE = Thread::Csp PACKAGE = Thread::Csp::Promise PREFIX = promise_

How did I write XS with so little code/boilerplate? By using XS the way it was originally intended: to glue Perl and C together, not to implement any behavior.

SV* promise_get(Promise* promise)

bool promise_is_finished(Promise* promise)

SV* promise_get_notifier(Promise* promise)
Sometimes you do need a CODE block in your XS functions, but often you don't. For example
SV* promise_get(Promise* promise)

is actually equivalent to

SV* promise_get(Promise* promise)
    CODE:
        RETVAL = promise_get(promise);
    OUTPUT:
        RETVAL

By giving the `promise_get` function the right shape and name, I don't need to write any of that.
This doesn't only mean less code (which is always good), it also means that it's much easier to split a large amount of code into multiple files, as doing this in C is much easier than doing it in XS (e.g. DBI.xs is 5700 lines). This aids in making your project more maintainable.
SV* promise_get(promise)
        Promise* promise;

The author of perlxs and perlxstut was clearly fond of K&R style, and everyone seems to have copied it from the documentation, but ANSI style is far more familiar to most people, and less repetitive. While the K&R style can do a few things ANSI style can't (e.g. with regards to custom conversions), it's very uncommon to need any of that.
TYPEMAP
Promise*    T_PROMISE

INPUT
T_PROMISE
    $var = sv_to_promise($arg)

OUTPUT
T_PROMISE
    $arg = promise_to_sv($var);
Using these templates, you don't need the XS or the individual functions to worry about type conversions for the most common argument types.
MODULE = Thread::Csp PACKAGE = Thread::Csp::Promise PREFIX = promise_

That way I can namespace my C functions to all start with `promise_`, but on the Perl side promise_get will be a sub called get (in the package Thread::Csp::Promise).
XS also offers C_ARGS to override only how the arguments are passed from Perl to C, without overriding any of the rest of the code generation. And then I defined a helper that turns the arguments on the stack into an array that is passed to the function.
Promise* thread_spawn(SV* class, SV* module, SV* function, ...)
    C_ARGS: slurp_arguments(1)
I had a reasonably productive year, releasing several modules that I think/hope are useful for the wider ecosystem.
This module manages passwords in a cryptographically agile manner. That means it can not only verify passwords using different ciphers, but it also aids in gradually upgrading passwords hashed with an outdated cipher (or outdated settings) to the current one, for example when you want to upgrade from bcrypt to argon2. Password hashing is both a rather common form of cryptography and one that is more subject to change than others; you should probably reevaluate your password handling every couple of years. With this module, you can initiate such a transition with a simple configuration change.
This also includes a number of extension distributions (e.g. Crypt::Passphrase::Argon2, Crypt::Passphrase::Bcrypt, etc.), and one new backend module (Crypt::Bcrypt).
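The "agile" part of verification can be illustrated with a toy sketch (my own simplification, not Crypt::Passphrase's internals): recognize the scheme from the crypt-style identifier in the stored hash, and flag anything that isn't the current scheme for rehashing on the next successful login:

```perl
use strict;
use warnings;

# The scheme we currently want all hashes to use.
my $current = 'argon2id';

# Crypt-style hashes identify their scheme between the first two '$'s,
# e.g. '$2b$...' for bcrypt or '$argon2id$...' for argon2id.
sub scheme_of {
    my ($hash) = @_;
    return $hash =~ /\A\$([^\$]+)\$/ ? $1 : 'unknown';
}

# After a successful password check, anything not hashed with the
# current scheme should be rehashed using the just-verified password.
sub needs_rehash { scheme_of($_[0]) ne $current }

print needs_rehash('$2b$12$abcdefghijklmnopqrstuv') ? "rehash\n" : "ok\n";
print needs_rehash('$argon2id$v=19$m=65536,t=2,p=1$c2FsdA$aGFzaA') ? "rehash\n" : "ok\n";
```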
My most ambitious project of the year by far. It's actually been a decade in the making, full of lessons learned from my previous attempt. Thread::Csp is a new threading library (built on ithreads primitives, but not on threads.pm, and it doesn't clone whole interpreters); it is based on Communicating Sequential Processes (hence the name), the same model that Go uses (in particular for its channels).
I firmly believe share-nothing message-passing models of multi-threading are the overlap between what is useful and what is realistically possible given the current interpreter.
This is essentially an autodie replacement with one important difference: it's based on opcode overrides instead of function overrides. This means not only that it interacts better with other pragmas, but also that it can support keywords that cannot easily be overridden (such as print and system). It should also have fewer weird edge cases than autodie.
I didn't produce as much Raku code this year; most of my Raku energy went into writing a series of blog posts that eventually became a conference presentation instead.
This was a port of the previously mentioned Perl module. It doesn't quite have the backend ecosystem that its big brother has, but given that there's a lot less legacy software in Raku that's not all that much of a problem.
A friend complained about the lack of MQTT support in Raku, and binary protocols just happen to be something I have a lot of experience with, so I implemented an MQTT client. While this is arguably the least useful module of the bunch, it was the most fun to write. Raku's type system and integrated event loop made the experience a lot smoother than it would have been in other languages.
Right now there are two groups of people with opinions on this matter.
One group is appalled by the original report, because they have a number of serious concerns with it.
Combined this means that people fear the CAT because this is exactly the sort of behavior that can easily result in innocent people being banned.
The other group was relieved that someone they have known to be toxic is finally being removed from the community. Most have had so many negative experiences with him that they'll readily believe any further accusations in his general direction without need for further evidence. Others genuinely don't care anymore how the sausage is made as long as he's eliminated from the community.
These different worldviews make it almost impossible for people to talk about the issue at hand, because they're talking past each other. Almost any discussion on the subject quickly devolves into bickering between people saying "how can you defend this toxic person" and people saying "how can you defend this miscarriage of justice". For a lot of people it becomes a "you're either with us or against us" type of issue. Without separating these conversations, we can't actually meet each other eye to eye. One can admit that what happened here was a cockup without denying that it tries to deal with an actual issue.
Simply put, it rather appears that the CAT is firmly in the second camp. Everything that happened makes sense if they already believed him to be toxic and saw this incident as an opportunity to kick him out once and for all. I'm not saying this was a conspiracy or some such; I'm suggesting that they were sufficiently biased that they got sloppy in dealing with this incident, and were entirely caught off guard by the opposition to what they had done. The thing is, the CAT should not be conducting a witch hunt, even if we know the target to actually be a witch.
The CAT's (draft) charter says "the CAT must be trusted and viewed as consistent and impartial" and "to maintain the trust of the community, the CAT must make its processes and actions transparent while not sacrificing privacy" but right now a large segment of the community doesn't trust them anymore because they have failed to do exactly those things. Despite all their good intentions, the CAT's actions actively worked against those intentions by focusing on the "easy win" and made the situation more difficult the next time action needs to be taken.
The CAT is supposed to enforce accountability in our community, but it cannot credibly and effectively do that if it is not accountable itself. What TPF should have done IMHO is pull the report and let someone else redo the entire thing, but it's probably too late for that now. There are still steps they could take to put our community on a path out of this conflict, but from my conversations with them over the past five weeks it rather looks like they intend to just move on without taking any of them.
We've been infighting for a full year now; for a brief moment between the PSC's use v7 announcement and the CAT's report it seemed we might finally get some peace. The CAT clearly underestimated just how divisive this action would be, and more division is the very last thing our community needs right now. This is the thing that upsets me most of all; I remember telling myself a few weeks ago "finally we can put all this drama behind us", I even wrote a blog post from that perspective, and quite the opposite has happened.
I am tired of conflict and very disappointed.
In it he defines a list of project values:
All these values are important - but they are in tension. In the end one has to choose between them.
Perl has traditionally prioritized certain values over others, and in my experience these are:
Extensibility is probably the least obvious, perhaps because it was less of a conscious choice, but it feels like the right pick for a language that has several OO frameworks and custom keywords.
Stability, in particular backwards compatibility, is thoroughly embedded in our policy document:
Lately, ignoring or actively opposing compatibility with earlier versions of Perl has come into vogue. Sometimes, a change is proposed which wants to usurp syntax which previously had another meaning. Sometimes, a change wants to improve previously-crazy semantics.
Down this road lies madness.
...
When in doubt, caution dictates that we will favor backward compatibility.
...
Using a lexical pragma to enable or disable legacy behavior should be considered when appropriate, and in the absence of any pragma legacy behavior should be enabled.
...
No matter how frustrating these unintentional features may be to us as we continue to improve Perl, these unintentional features often deserve our protection. It is very important that existing software written in Perl continue to work correctly.
More than any other major scripting language, we value keeping code working. Where other similar languages (especially Python) break relatively common constructs regularly, we have generally tried to limit that to the margins (though there's certainly some breakage in any major release).
That doesn't mean all subcommunities share exactly the same values though. I'm involved in the toolchain, and in the toolchain we have very specific values:
These are the values of sysadmins. Environments where working things have to keep working.
Whereas for example the Mojo community generally seems to prioritize
These are the values of modern web development, where change is the only constant.
And mostly, that difference is fine. It helps a lot if a community's values overlap with the language values, but different communities can have different values without biting each other.
That said, Perl has been having an internal conflict over its values and where to take the language. This tension has existed for several years now, and its primary axis is approachability versus stability.
Simply put, should new features and defaults be guarded by a version or feature guard (e.g. use v5.34 or use v7), which is stability, or should they be enabled by default in the next perl version, which is approachability? 7.0 doesn't aim to bring new features; it doesn't enable us to do anything that isn't possible without it (other than not writing that guard). Instead, it aims to change perl culture as we know it. The whole point of perl7 is radically choosing approachability over stability.
The crucial thing to realize here is that that means that perl7 is not just a fork of the interpreter, it is also a fork of our community and our ecosystem. To some extent that fork can be postponed until perl8 drops perl5 compatibility, but given this new course it is inevitable. Some will join this brave new world, and some will not.
To make this fork of values complete, even the values of governance are completely different. Where perl5 had perl5-porters, a mailing list that was open to the entire community (and historically perhaps a bit too open), perl7 has a steering committee whose membership is invite-only and that only posts summaries of its activities to p5p.
And while everyone is wondering where perl7 is going, the other crucial question is where perl5 is going; will it stop where it is now (the current official plan), will there be a 5.34 (something I have repeatedly argued for, because it makes no sense for the sunsetting release to have experimental features and to lack a perl5 executable out of the box), or will perl5 development continue as it did before? This is something that isn't talked about much and I'm not sure yet what will happen, but I am pretty sure that decision shouldn't be taken by the people who don't want to use it.
I don't know where we're going. I'm not even sure if this forking is good or bad in the long run (it could be good if managed well, but so far it isn't). And that terrifies me.
Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.
The Eighteenth Brumaire of Louis Napoleon - Karl Marx
Sawyer just announced his plans for perl 7. And while Perl 7 sounds like a lovely language, I do see a number of issues:
The proposal is presented as a linear progression, but I don't believe this is realistic. This would be a fork much like the python 3 transition (which also wanted to be a simple linear progression). As we all know, they're currently in year 12 of a 5 year transition.
There are several problems here. CPAN as an ecosystem is the one given the most attention (not without reason; it is without doubt the most important collection of Perl code), but it's not even the biggest problem.
The biggest problem is that /usr/bin/perl is infrastructure. We can't make breaking changes to its basic functionality for the same reason that shell and awk can't. Too many things in too many places depend on it, from system administration scripts to bioinformatics workflows to build systems (e.g. autotools, postgresql) and many more.
And this change is vastly breaking. Enabling strict and disabling prototypes (to make way for signatures) will break vast amounts of code, especially in the scripting domain of perl.
It's quite telling that 12 years after python3 was released, /usr/bin/python isn't python3 by default on any of the big distributions (Ubuntu, Debian, Fedora, Red Hat, OpenSuse); and arguably python is less entrenched than perl is. I don't believe that /usr/bin/perl will ever be perl7. That means that perl7 can only meaningfully exist if it's set up to coexist alongside perl5 for a very long time. And that actually comes with a number of challenges that may not seem obvious at first (e.g. colliding script names and man pages).
Releasing a Perl7 will not erase perl5. Perl5 will in all likelihood remain the Perl that's available on any platform regardless of how successful perl7 will be.
Major version transitions are costly, and often traumatic (Perl 6 and Python 3 being obvious examples). Communities also take a lot of time catching up with them (again, see above examples); at least a decade if not more.
A big, breaking release is something a mature programming language can only do once per decade or so; anything else would result in two transitions going on at the same time. We shouldn't even be thinking about a perl8 this decade, let alone a perl9. If we are to do a perl7, we must get it right the first time. And I don't think this plan is quite getting it right. And quite frankly, I can't imagine any reason for wanting to do a big breaking release if we'd do 7 right.
The current plan is essentially "enable all non-controversial features by default", and I don't think that is the best we can do. There are a lot of features that haven't been implemented before because they don't make much sense in a minor release (in particular the kind that removes syntax, like no feature 'indirect'). Releasing it now will force a perl8 relatively soon, and that would be undesirable for all the reasons stated above.
We have been failing at shipping non-experimental signatures for more than half a decade now; why would we be able to ship a perl 7? The most significant new feature that made it out of experimental in the past half decade was postfix dereferencing, and while welcome it's not quite a game changer.
Sadly, the most convincing reason not to go through with this may very well be "we may not be able to". I think we need to figure out what problems we can resolve before deciding to actually go forward with this.
There's just no way we can do all of the above before the end of the year for a variety of reasons. Not only because it will require adaptations on the perl5 side to enable cohabitation, but also because we will need to sort out a lot of details. Trying to rush is likely to result in a failure, and is not something we can afford. I can't imagine any way of successfully doing this that doesn't involve releasing a v5.34 first (and possibly more).
The "if you don't want your code to break then don't upgrade" argument rather assumes users have firm control over which perl they are running. This is generally true for million-line perl web applications, but it is not true for system perl.
If our objective is to limit ourselves to perlbrew/perlbuild/etc, many of my objections become moot. But I don't think that should be the target; I think that would exclude a wide range of applications. So no, I don't think saying "then don't upgrade" really solves the problem. We may be able to postpone it, but it won't go away by itself.
I do not recognize this distinction at all. Just because I actively maintain my stuff doesn't mean I want to be dealing with other people breaking my code. If I wanted to deal with the whims of a platform breaking trivial things I'd be programming python.
Associating not wanting the language to break with diminishing use of the language is perplexing to me. Perl is a language of which a lot has been written already, and relative to that past popularity not all that much new code is being written. Quite a lot of Perl strongholds are attributable to Perl not breaking, it's uncertain if the pain of this process will be worth the gain.
There's also this suggestion that people who care about backwards compatibility contribute less to the language. This isn't actually explained further but it seems like a rather bold statement to me.
Perl has many different types of users, with many different needs. This is inherent to a language that tries to be useful at 1 line and at 1 million lines.
The argument that has been made in the keynote suggests that the only reason why one would use "old-style Perl" is because you've abandoned your code, and I don't think that is true. Many best practices that are essential when writing large applications are not nearly as valuable in a small script; it would be outright silly to suggest one-liners need strict.
The changes that are proposed are largely serving the manipulexity end of the spectrum. And this is an important user base, but it's not our only user base. For the whipuptitude end of the spectrum, the scripters, this represents their code breaking without them getting anything in return. That is the priority that is being chosen here.
I believe this "bad pattern" rhetoric is flawed. Ultimately the only good code is working code, and the only bad code is code that doesn't get the job done. What I hear being described as bad code is actually merely ugly code. And this transition can break stuff for people, and breaking code is bad, whereas ugly code is only a problem to me if it ends up on my plate.
How did we get into this brave new world where one passes judgment on users and deplatforms the ones deemed bad?
This reminds me of a bioinformatician I met at a recent TPC. Was their code strict? No. Did it get the job done? Yes. Why would they care whether we in the echo chamber approve of their code? They have more important things to do, like curing ovarian cancer. In my book, they have their priorities straight.
This seems like a lot of pain, just to avoid having to type use v5.32. The real problem of course is that that not only fails to enable warnings (which we can easily fix for 5.34), it also doesn't enable signatures (probably the recent feature people care about most). If we can make use v5.34 do those two things, I don't think I need a perl7, even if I understand why some other people feel they do want it. Boilerplate may be annoying, but one line of boilerplate in every file is way more tolerable to me than the pain of a fork.
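For concreteness, this is the boilerplate being argued about (the sub is just a hypothetical example, and this assumes a perl of at least 5.32): use v5.32 gives you strict and the 5.32 feature bundle, but warnings and signatures still need their own lines:

```perl
use v5.32;                              # strict + the 5.32 feature bundle
use warnings;                           # not implied by 'use v5.32'
use feature 'signatures';               # not in the 5.32 bundle either
no warnings 'experimental::signatures'; # signatures were still experimental

sub greet ($name) { return "Hello, $name" }
print greet('world'), "\n";
```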
This was Jesse Vincent's vision 9 years ago, and I still think this is the right trade-off for a platform like Perl.
Oops, fixed that. These keywords really aren't intuitive to me.
Unfortunately when/whereis/whereso are not among the keywords exposed as functions in CORE::, so it is hard for me to see how to write code that works across the divide other than (yukk!) stringy eval or (double yukk!) source filters.
Yeah, that's probably the worst part of it. Not having a single way that works before and after this change.