Many of you will know my good friend Peter Scott as a Perl luminary. More recently he has turned his attention and his considerable talents to focus on the future of AI, both as an unprecedented opportunity for our society...and as an unprecedented threat to our species.

A few years back, he released an excellent book on the subject, and just recently he was invited to speak on the subject at TEDx. His talk brilliantly sums up both the extraordinary possibilities and the terrible risks inherent in turning over our decision-making to systems whose capacities are increasingly growing beyond our own abilities, and perhaps soon beyond even our own understanding.

Whether our accelerating use of AI brings us utopia or extinction, the very real possibility of either outcome surely makes these twelve minutes well worth paying attention to.

For over a decade now, I've been running public training classes in both presentation skills
and software development in conjunction with the Swiss Institute of Bioinformatics,
the University of Lausanne, and the École Polytechnique Fédérale de Lausanne.

This year, in the week of March 9-13, we're offering my full set of
Presentation Skills classes as a three-day sequence (though, of course, you can also sign up
for just one or two of the classes, if you prefer):

These two classes are based on my popular Perl courses on these topics, but I've now
redesigned and adapted them to be entirely language-neutral, so they're equally useful
to developers working in any other mainstream language(s).

During the same week I’ll also be giving a
half-day seminar on Raku,
which has been generously sponsored by EPFL and so will cost nothing to attend. It’s
suitable for anyone who would like a quick but comprehensive overview of this remarkable
new programming language.

Besides making the Raku seminar entirely free, SIB/UNIL/EPFL have done an amazing job
keeping the prices of the other classes extremely competitive...especially if you can claim
a plausible association to any academic institution, either as a student or staff member.

If you’re looking for some training that’s economical, practical, and just plain fun,
in a location that’s central, civilised, and simply
breathtaking,
then this week
in Switzerland might fit the bill nicely.

At present these are the only public classes I have scheduled anywhere in the world in 2020,
so if you’d like to work with me to substantially improve your skills in either public speaking or software
development, this is your opportunity. I hope to see you there.

In writing my past few blog entries I’ve repeatedly come across a situation that Raku doesn’t handle as well as I could wish. It’s a little thing, but like so many other little things
its presence is a source of minor but persistent irritation.

In my previous entry
I used some Raku code that illustrates the point perfectly.
I needed to build an object for every value in the @values array, or a single
special object if @values was empty:

for @values Z $label,"",* -> ($value, $label) {
    Result.new:
        desc  => "$label ($param.name() = $value)",
        value => timed { $block($value) },
        check => { last if .timing > TIMEOUT }
}

if !@values {
    Result.new:
        desc  => $label,
        value => timed { $block(Empty) }
}

At almost the same time, in other (non-blog) code I was writing, I needed
exactly the same construction...to do something with every element of an
array, or something different if the array had no elements:

for @errors -> $error {
    note $error if DEBUG;
    LAST die X::CompilationFailed.new( :@errors );
}

if !@errors {
    note 'Compilation complete' if DEBUG;
    return $compilation;
}

These are just two examples of a surprisingly common situation: the need to iterate through a
list...or else do something special if the list is empty. In other words: if the loop
doesn’t iterate, do this instead.

There are several other ways I could have written those loops. For example, I could have
prefixed the for with a do, thereby converting the loop into an expression. Then I could
append an or and the special case code, so that code would be executed if the first do was
false, which would happen if the loop didn’t iterate at all:

do for @values Z $label,"",* -> ($value, $label) {
    Result.new:
        desc  => "$label ($param.name() = $value)",
        value => timed { $block($value) },
        check => { last if .timing > TIMEOUT }
} or
    Result.new:
        desc  => $label,
        value => timed { $block(Empty) };

do for @errors -> $error {
    note $error if DEBUG;
    LAST die X::CompilationFailed.new( :@errors );
} or do {
    note 'Compilation complete' if DEBUG;
    return $compilation;
}

That certainly works, and it eliminates the repeated testing of @values or @errors,
but it’s aesthetically unsatisfying, besides making the code less readable.

I could improve the readability (though not the aesthetics) by hoisting the
“if-there-were-no-iterable-values” test to the top, like so:

if @values {
    for @values Z $label,"",* -> ($value, $label) {
        Result.new:
            desc  => "$label ($param.name() = $value)",
            value => timed { $block($value) },
            check => { last if .timing > TIMEOUT }
    }
}
else {
    Result.new:
        desc  => $label,
        value => timed { $block(Empty) }
}

if @errors {
    for @errors -> $error {
        note $error if DEBUG;
        LAST die X::CompilationFailed.new( :@errors );
    }
}
else {
    note 'Compilation complete' if DEBUG;
    return $compilation;
}

...but that just underscores the absurdity of needing to test the state of the iterated
array twice within the first two lines.

However, it does suggest a cleaner solution: one that eliminates repetition, and maximizes
readability. A solution whose only drawback is that it’s impossible in standard Raku.

That solution is: for loops should be able to have an else block!

An else block executes when the preceding if or when block doesn’t. In just the
same way, it ought to be possible to append an else block to a for,
so that the else block executes when the preceding loop block doesn’t.

If that were possible in Raku, then my two pieces of code would simplify to:

for @values Z $label,"",* -> ($value, $label) {
    Result.new:
        desc  => "$label ($param.name() = $value)",
        value => timed { $block($value) },
        check => { last if .timing > TIMEOUT }
}
else {
    Result.new:
        desc  => $label,
        value => timed { $block(Empty) }
}

for @errors -> $error {
    note $error if DEBUG;
    LAST die X::CompilationFailed.new( :@errors );
}
else {
    note 'Compilation complete' if DEBUG;
    return $compilation;
}

There’s just that very minor problem of its not being valid Raku syntax (or semantics).
But, as usual, that’s not really much of a problem at all in Raku.
To solve it, we just redefine the for keyword...

To replace the standard definition of for we need to tell the compiler two things:
what the new definition looks like, and how it works. In other words, we need to define how to
recognize the new for syntax, and how to convert that new syntax into an “abstract syntax
tree” of opcodes that the compiler can optimize and execute. And, of course, we also
need to tell the compiler to use these new components instead of the standard ones.

In Raku, the grammar and semantics we are using to interpret any part of the code
is known as a “sublanguage”, or “slang” for short. A typical Raku program consists
of a number of slangs braided together: the main Raku sublanguage, the Pod documentation
sublanguage, the string sublanguage, the regex sublanguage, etc.
The objects implementing these various active sublanguages are available through a compile-time
variable: $*LANG.
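To see which slangs are braided into your own program, you can peek at that variable at compile time. A quick sketch (the exact key names vary with the Rakudo version; the .slangs method is the same one used later in this post):

```
# At compile time, list the sublanguages currently braided together...
BEGIN say $*LANG.slangs.keys.sort;
```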

In this instance, we need to augment the main Raku slang, so we create a role with the new
grammar rule for the extended for syntax, then mix that new syntax into the existing grammar.
Likewise, we need to extend the actions the compiler takes on encountering the new syntax, so
we create a second role specifying those actions, and later mix it into the existing actions.

In its simplest form, the new grammar rule looks like this:

# Encapsulate new grammar component in a composable role...
role ForElse::Grammar {

    # Replacement 'for' syntax...
    rule statement_control:sym<for> {
        <sym> <xblock(2)>
        [ 'else' <pblock(0)> ]?
    }
}

We declare the role (ForElse::Grammar) that will contain the new grammar rule,
then declare the rule itself. The rule’s name is: statement_control:sym<for>,
which tells the compiler that it’s a statement-level control structure, introduced by the
symbol for. In the body of the rule, we first match that symbol (<sym>), followed by an
“expression-block” (<xblock(2)>). An expression block is simply a shorthand for
matching a non-optional expression, followed by a non-optional block. The 2 passed into the
call to xblock tells the subrule that the block it matches must contain a topic variable of
some kind (because for loops always set a topic variable, and we might as well enforce that
when the source code is being parsed).

After parsing the for component, we now want to allow an optional else, so we
specify that as a literal ('else'), after which we expect a parameterized
block (<pblock(0)>). The zero argument tells the subrule that the else block
is not expected to have a topic variable. Then we wrap the entire else syntax
in non-capturing brackets ([...]) and make it optional (?).

Note that we don’t need to specify rules for <xblock> and <pblock>. Their rules are
already defined in the standard Raku grammar, to which we will eventually be adding
this new statement_control:sym<for> rule.

This two-line rule is sufficient to parse every valid for...else, but it will also
successfully parse several other invalid constructs. So we next add in a small number of
extra components to prevent that. The extended version of the rule looks like this:
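Assembling the extra components described in the following paragraphs, the extended rule would look something like this (a reconstruction based on that description, not necessarily the author’s exact code):

```
role ForElse::Grammar {
    # Replacement 'for' syntax, with extra checks...
    rule statement_control:sym<for> {
        <sym><.kok> {} <.vetPerl5Syntax>
        <xblock(2)>
        [ 'else' <elseblock=.pblock(0)> ]?
    }
}
```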

The call to the <.kok> subrule
is used to check that, having matched the initial 'for', those three characters really do
constitute a keyword that’s okay. For example, if the 'for' is followed by a
=>, then it’s not the start of a for loop, but the key of a pair. Similarly, if the
'for' is immediately followed by an opening parenthesis, then it’s not the start of a for
loop; it’s a call to some function named &for. The <.kok> subrule (once again inherited from
the standard Raku grammar) does various lookaheads to check for these and other edge-cases, and
fails if any is found.

The empty braces ({}) after the call to <.kok> are there to indicate the end of longest-token
matching within that part of the rule. The issue here is that other statement-control symbols
might be defined by other people wanting to modify the code, and the grammar needs to know
which one to select if two or more of them match. In general, when considering a set of
alternatives within a regex or rule, Raku takes the alternative that matches the longest
substring (not the first alternative that matches, like in Perl). This is known as “longest token matching” or LTM for short.

When the grammar is trying to decide between our new for...else syntax, and (say) someone else’s
for...otherwise syntax, we don’t want it selecting ours just because ours matched more
total characters. We want it to select whichever syntax is more appropriate. So we need
to stop the LTM evaluator from considering the entire match, and only consider the keyword.
There are several ways to signal “end of LTM”, but the shortest and easiest is just to
insert an empty code block (i.e. {}) into the rule. Which is what we’ve done here.

The next addition to the rule is a call to the <.vetPerl5Syntax> subrule:

This call was added because the standard Raku grammar always looks particularly
closely at for loops to make sure that someone hasn’t accidentally used one of
the two older Perl 5 syntaxes by mistake. If the subrule looks ahead and finds a my
and/or a variable immediately after the for, and then an opening parenthesis
(<?before 'my'? '$'\w+ \s+ '(' >), it concludes that it’s seeing a Perl 5 for
and throws an X::Syntax::P5 exception. If it looks ahead and finds a pair of
parentheses containing three expressions separated by semi-colons
(<?before '(' <.EXPR>? ';' <.EXPR>? ';' <.EXPR>? ')' >),
it concludes that it’s seeing a Perl 5 C-style for loop and warns the user
to replace it with a loop instead.

Finally, we modify the call to <pblock(0)> like so: <elseblock=.pblock(0)>.
This causes any match made by the <pblock(0)> call to be stored under the key 'elseblock'
instead of the key 'pblock'. That will subsequently improve the readability of our
else-processing code.

Once these extra checks and balances are in place, our statement_control:sym<for>
rule is ready to be added to the current slang. If we did so, the compiler would now be
able to recognize for...else constructs, but we’d see no useful effect from its doing so.
That’s because we haven’t yet told it how to convert the new for...else syntax into
executable opcodes.

To tell it that, we declare a second role (so we can later mix it into the existing
compiler actions). In that role we specify a method of the appropriate name,
which the compiler will then call automatically every time it successfully parses
with our new statement_control:sym<for> rule:

# Encapsulate new actions for new 'for' syntax...
role ForElse::Actions {
    use nqp;
    use QAST:from<NQP>;

    # Utility function...
    sub lookup(Mu \match, \key) {
        nqp::atkey(
            nqp::findmethod(match, 'hash')(match),
            key
        ).?ast
    }

    # New actions when a 'for' is parsed...
    method statement_control:sym<for> (Mu $match) {
        my $forloop := callsame;
        if lookup($match, 'elseblock') -> $elseblock {
            $match.make:
                QAST::Op.new: :op<unless>, $forloop,
                    QAST::Op.new: :op<call>, $elseblock
        }
    }
}

The first thing we do in our new action role is to load the facilities of the nqp and QAST
modules. NQP is the “Not Quite Perl 6” subset of Raku in which the majority
of the Raku compiler is written. QAST is the
“Quisquous Abstract Syntax Tree” representation of
opcodes and arguments to which all Raku code is reduced within the compiler. As we’re
effectively upgrading the compiler to handle our new syntax, we’re going to need to access
those syntactic components via NQP commands. And, to implement our new behaviour, we’re going
to need to build a suitable QAST structure.

First we build a simple utility function (lookup) that takes a pattern match from the grammar
and attempts to retrieve the abstract syntax tree of a particular named submatch from within
that match. Note that, because this code will be inserted into the compiler,
it can’t rely on the usual Raku data structures and access methods being available.
Instead, we need to use the underlying NQP access functions. In this case, the utility first
locates the function that extracts the hash-like component of the match object
(nqp::findmethod(match, 'hash')), then calls that hash-extractor function on the match data
structure ((match)), then does a key-lookup into the resulting hash (nqp::atkey(..., key))
then attempts to retrieve the abstract syntax tree associated with that match (.?ast).

Once we have this ability to extract particular components from a grammar match, we can
write a method that pulls out the various pieces of a for...else match and rearranges them into
a suitable QAST representation. That method has to have the same name as the rule whose match
it is processing, so we declare it as: method statement_control:sym<for> (Mu $match).

The method takes as its only argument the match object produced by the
corresponding grammar rule. We declare that parameter to be of type Mu (the root type
of the entire Raku hierarchy) because
it’s an NQP object and the Raku type system won’t pass it otherwise.
Note that we can’t just omit the type declaration from the $match parameter,
because then it would default to type Any, which would be too specific in this case.

Once we have the match object, the first thing we need to do in order to turn the parsed
match into a suitable QAST object is to convert the for component. But the standard Raku
parser already knows how to do that, so we can just tell it to fall back on the previous
behaviour...by invoking callsame.

That redispatched call will return a QAST object representing the for loop, and will also
install that same QAST object as the new abstract syntax tree for the $match object.
As we may need to override that behaviour (if there is an else involved), we keep the
for loop’s QAST object, by aliasing it to $forloop:

my $forloop := callsame;

Then we need to discover whether the parser actually did find an else block,
which we do by looking for a capture named 'elseblock' within the $match object
(lookup($match, 'elseblock')). If there was an else after the for, we need to
build a QAST structure that executes the else block only if the for loop didn’t execute.
In pseudocode, that’s:
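A sketch of that pseudocode (reconstructed from the description that follows):

```
unless <the for loop ran at least one iteration> {
    <call the else block>
}
```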

That is, we build a new QAST unless operation (QAST::Op.new: :op<unless>),
passing it its two required operands:

a QAST object representing the condition to be tested,

a QAST object representing what to do if that condition is false.

In this case, the first argument (the condition) is the QAST object we got back from
callsame: the QAST object that implements the entire for loop (i.e. $forloop).
The second argument (what to do) is a new QAST object implementing a call to the else block
(i.e. QAST::Op.new: :op<call>, $elseblock).

Once we’ve built that QAST structure, we simply install it as the abstract syntax
tree for the original match ($match.make: ...).

And that’s it. When the new statement_control:sym<for> rule in the extended grammar
successfully matches, the compiler will invoke the equivalent statement_control:sym<for>
method in the extended actions, which will convert the parsed syntax into a QAST implementing
the extended behaviour.

Provided, of course, we actually extend the grammar and its actions.
Which we haven’t done yet.

But, like most things in Raku, actually extending the grammar and its actions is not hard to
do. Our goal is to have a module (let’s call it: Slang::ForElse) that installs our new slang
within any lexical scope where it’s use’d:

{
    use Slang::ForElse;

    for @values {
        .say;
    }
    else {
        say 'No values';
    }
}

That module will need to modify the $*LANG object to install an augmented main grammar and
the corresponding extended actions. Specifically, the module will need to call the $*LANG
object’s .define_slang method, passing it the name of the slang to be modified (in this case: the
"MAIN" slang), and the new grammar and actions to be installed for that slang.

We’re going to need to call that .define_slang method every time the module is use’d,
so we should put the call in the module’s EXPORT subroutine. Like so:

sub EXPORT () {
    $*LANG.define_slang:
        "MAIN",
        $*LANG.slangs<MAIN>         but ForElse::Grammar,
        $*LANG.slangs<MAIN-actions> but ForElse::Actions;

    return hash();
}

The subroutine calls the .define_slang method of the $*LANG object, requesting it to update
the definitions of the "MAIN" slang. The second argument is the grammar to be installed,
which is just the current "MAIN" grammar ($*LANG.slangs<MAIN>), but with the
new for...else grammar mixed into it
(but ForElse::Grammar). The third argument is
the actions object to be installed, which is just the current "MAIN-actions" object
($*LANG.slangs<MAIN-actions>), but with our new for...else actions
mixed in (but ForElse::Actions).

Finally, we have to make sure that the EXPORT subroutine returns an empty hash
(hash()), to tell the compiler that we’re not actually exporting anything here.

And that’s it. In less than 25 lines, we’ve modified the syntax and semantics of Raku to
add a “missing” construct. It would be no harder to extend or modify the language in other
ways to scratch other itches. And because each extension or modification is
performed lexically, by mixing new behaviours into the existing slangs, these extra
features are likely to play nicely with one another.

For example, we could load both Slang::ForElse and
Slang::SQL and write:

use Slang::ForElse;
use Slang::SQL;

sql drop table if exists stuff;

sql create table if not exists stuff (
    id  integer,
    sid varchar(32)
);

for @ids {
    sql insert into stuff (id, sid)
        values (?, ?); with ($_, ('g'..'Z').pick(16).join);
}
else {
    sql insert into stuff (id, sid)
        values (?, ?); with (99, 'default');
}

sql select * from stuff order by id asc; do -> $row {
    "{$row<id>}\t{$row<sid>}".say;
};

The ability to define and deploy lexically scoped slangs makes Raku highly
future-proof. Anything we forgot to add to Raku in the original design
(such as for...else!) can easily be added later if needed.

And slangs also have the potential to make Raku highly interoperable with other tools.
For example, Raku could potentially become the ultimate “glue language”,
by allowing us to switch into slangs that look suspiciously like other programming languages,
whenever those languages might be more convenient to code in:

for @values -> \value {
    use Slang::Python;

    from math import floor, sqrt

    def fac(n):
        step = lambda x: 1 + (x<<2) - ((x>>1)<<1)
        maxq = long(floor(sqrt(n)))
        d = 1
        q = n % 2 == 0 and 2 or 3
        while q <= maxq and n % q != 0:
            q = step(d)
            d += 1
        return q <= maxq and [q] + fac(n//q) or [n]

    print(fac(value))
}
else {
    use Slang::Ruby;

    require 'io/console'

    print "No values. Continue?"
    loop do
        case $stdin.getch
            when "Y" then return
            when "N" then break
            else print "\rNo values. Continue? [YN]"
        end
    end
}

Unfortunately, those particular slang modules don’t exist yet, but fully functional Raku
interfaces to both Python and Ruby (and to Perl and Lua and Scheme and Go and C) are
already available, so writing fully integrated
slangs for each of them would be just a simple matter of metaprogramming. ;-)

The first task of the
21st Weekly Challenge
was a very old one:
to find (what would eventually become known as) Euler’s number.

The story starts back in 1683 with Jacob Bernoulli, and his investigation of the mathematics of
loan sharking. For example, suppose you offered a one-year loan of $1000 at 100% interest,
payable annually. Obviously, at the end of the year the mark (er, client) has to pay
you back the $1000, plus ($1000 × 100%) in interest...so you now have $2000. What can I say?
It’s a sweet racket!

But then you get to thinking: what if the interest got charged every six months? In that case,
after six months they already owe you $500 in interest ($1000 × 100% × 6∕12), on which amount
you can immediately start charging interest as well! So after the final six months they now owe
you the original $1000, plus the first six months’ interest, plus the second six months’ interest,
plus the second six months’ interest on the first six months’ interest: $1000 + ($1000 × 50%)
+ ($1000 × 50%) + ($1000 × 50% × 50%). Which is $2250.
Which is an even sweeter racket.

Of course, it’s easier to calculate what they owe you mathematically: $1000 × (1 + ½)^2.
The added ½ comes from charging half the yearly interest every six months.
The power of 2 comes from charging that interest twice a year.

But why stop there? Why not charge the interest monthly instead? Then you get back the
original $1000, plus $83.33 interest (i.e. 1∕12 of $1000) for the first month,
plus $90.28 interest for the second month (i.e. 1∕12 of: $1000 + $83.33),
plus $97.80 interest for the third month (i.e. 1∕12 of: $1000 + $83.33 + $90.28),
et cetera. In other words: $1000 × (1 + 1∕12)^12,
or $2613.04. Nice.

If we charged interest daily, we’d get back $2714.57 (i.e. $1000 × (1 + 1∕365)^365).
If we charged it hourly, we’d get back $2718 (i.e. $1000 × (1 + 1∕8760)^8760).
If we charged by the minute, we’d get back $2718.27 (i.e. $1000 × (1 + 1∕525600)^525600).
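Those escalating repayments are easy to verify in Raku. A quick sketch (the dollar amounts and compounding counts are the ones from the text; the explicit .Num conversion avoids building enormous exact rationals):

```
# Final repayment on $1000 at 100% p.a., compounded n times in the year...
for 1, 2, 12, 365, 8760, 525600 -> \n {
    printf "%6d payments: \$%.2f\n", n, 1000 × (1 + 1/n.Num) ** n;
}
```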

Bernoulli then asked the obvious question: what’s the absolute most you could squeeze outta
deese pigeons? Just how much would you get back if you charged interest continuously?
In other words, what is the multiplier on your initial $1000 if you charge and accumulate interest
at every instant? Or, mathematically speaking, what is the limit as x→∞ of
(1 + 1∕x)^x?

The answer to that question is the transcendental numeric constant 𝑒 = 2.718281828459045...

Which is named after Leonhard Euler instead of Jacob Bernoulli because (a) Euler was the first person to use it explicitly in
a published scientific paper, and (b) it really does make everything so much simpler if we just name
everything
after Euler.

𝑒 is for elapsed

So, given that 𝑒 is the limit of (1 + 1∕x)^x
for increasingly large values of x, we could compute
progressively more accurate approximations of the constant with just:
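The snippet itself would presumably be a simple doubling loop like this (a sketch reconstructed from the “20 iterations, at N=524288” figures quoted below, in the style of the timed version shown later in this post):

```
for 1, 2, 4 ... ∞ -> \N {
    say (1 + N⁻¹) ** N;
}
```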

That’s an easy solution, but not a great one. After 20 iterations, at N=524288,
it’s still only accurate to five decimal places (2.718279236108013).

More importantly, it’s a slow solution. How slow?
Let’s whip up an integrated timing mechanism to tell us:

# Evaluate the block and add timing info to the result...
sub prefix:<timed> (Code $block --> Any) {

    # Represent added timing info...
    role Timed {
        # Store the actual timing...
        has Duration $.timing;

        # When stringified, append the timing...
        my constant FORMAT = "%-20s [%.3f𝑠]";
        method Str  { sprintf FORMAT, callsame, $.timing }
        method gist { self.Str }
    }

    # Evaluate the block, mixing timing info into result...
    return $block() but Timed(now - ENTER now);
}

Here we create a new high-precedence prefix operator: timed.
It’s a genuine operator, even though its symbol is also a proper identifier.
We could just as easily have named it with an ASCII or Unicode
symbol if we’d wanted to:

# Vaguely like a clock face...
sub prefix:« (<) » (Code $block --> Any) {…}

# Exactly like a stopwatch...
sub prefix:« ⏱ » (Code $block --> Any) {…}

Even without the fancy symbols, we still want an operator rather than a subroutine, because
the precedence of a regular subroutine call is too low. If timed had been declared as
sub timed, we wouldn’t be able to place a timed block in a surrounding argument list,
as the call to timed would attempt to gobble up all the following arguments, and then
discover it is only allowed one:

say timed { expensive-func() }, " finished at ", now;
# Too many positionals passed; expected 1 argument but got 3
#   in sub timed at demo.p6 line 5

Making timed a prefix operator allows the compiler to know that it can
only ever take a single argument, so inline calls like the one above work fine.

Note that, as we’re feeling in need of a little extra discipline today, we’re going to make
use of Raku’s type system and strictly type
every variable and parameter we use. Of course, because Raku’s static typing is
gradual, the code would still work exactly the
same if we later removed every one of these type declarations...except then the compiler
would not be able to protect us quite so well from our own stupidity.

The timed operator takes a Code object as its argument, executes it ($block()),
augments the result with extra timing information, and returns the augmented result,
which can be of any type (--> Any).

That timing information is added to the block’s result by mixing into the returned object
some extra capabilities defined by a generic class component (a
role) named Timed.
The role confers the ability to store and access a duration (has Duration $.timing),
as well as methods for enhancing how a timed object is stringified and printed (method Str
and method gist). The overridden .Str method calls the object's original version of
the same method (callsame) to do the actual work, then appends the timing information to
it, neatly formatted in a sprintf. The overridden .gist just reuses .Str.

The most interesting feature is that, when we add the timing information to the result
of the block (but Timed(...)), we calculate the duration of the block’s execution
by subtracting the instant when the surrounding subroutine was entered (ENTER now)
from the instant after the block executes (now). Prefixing any
expression with ENTER sets up a “phaser” (yeah, we know: we’re incurable geeks).
A phaser is a block or thunk
that executes when the surrounding block is entered. The ENTER then remembers the value
generated by the thunk, and evaluates to it when the surrounding code executes: in this
case, when it tries to subtract the value of the ENTER from the value of now.
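Here’s a tiny self-contained illustration of that ENTER now idiom (the sub name is hypothetical):

```
sub slow-task {
    sleep 0.5;
    # ENTER now captured the instant this sub was entered...
    say now - ENTER now;    # ...so this prints roughly 0.5
}
slow-task;
```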

You can use this same technique anywhere in Raku, as a handy way to time any particular block
of code. You can also use other phasers (such as LEAVE, which executes when control leaves
the surrounding block) to inline the entire operation into an easily pasteable snippet. For
example, to time each iteration of a for loop, you could add a single line to the start of
the loop block:

for @values -> $value {
    LEAVE say "$value took ", (now - ENTER now), " seconds";

In 100 Great Problems of Elementary Mathematics, Heinrich Dörrie shows that, just as
(1 + 1∕x)^x is the lower bound on 𝑒 as x → ∞, so (1 + 1∕x)^(x+1) is the upper bound.
So we can get a much better approximation of 𝑒 for the same value of N by taking the
average of those two bounding values:

for 1, 2, 4 ... ∞ -> \x {
    say timed {
        ½ × sum (1 + x⁻¹)**(x),
                (1 + x⁻¹)**(x+1)
    }
}

That’s ten correct decimal digits (2.7182818284920085) before it hits
the exponential wall. Better, but still not nearly good enough.

𝑒 is for evaluation

It looks like we’re going to need to try a lot of different techniques, over a wide
range of values. So it would be handy to have a simpler way of specifying a series of tests like
these,
and a better way of seeing how well or poorly they perform.

So we’re going to create a framework that will let us
test various techniques more simply, like so:
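For example, a trial of Dörrie’s averaged bounds over the default range might be written like this (a sketch assuming the assess framework developed below, which takes a unary block):

```
#| Dörrie's bounds
assess -> \x {
    ½ × sum (1 + x⁻¹)**(x),
            (1 + x⁻¹)**(x+1)
}
```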

Or, if we prefer, to specify a more appropriate range of trial values
than just 1..∞, like so:

#| Dörrie's bounds
assess -> \x=(1,10,100...10⁶) {
    ½ × sum (1 + x⁻¹)**(x),
            (1 + x⁻¹)**(x+1)
}

Either way, the assess function will calculate each result, determine when to give up on
slow computations, tabulate the outcomes, colour-code their accuracy, and print them out neatly
labelled, like so:

To implement all that, we start with a generic class component (i.e. another role)
suitable for storing, vetting, and formatting whatever data we’re collecting:

# Define a new type: Code that takes exactly one parameter...
subset Unary of Code where *.signature.params == 1;

# Reportable values, assessed against a target...
role Reportable [Cool :$target = ""]
{
    # Track reportable objects and eventually display them...
    state Reportable @reports;
    END @reports».display: @reports».width.max;

    # Add reportable objects to tracking list, and validate...
    submethod TWEAK (Unary :$check = {True}) {
        @reports.push: self;
        $check(self);
    }

    # Each report has a description and value...
    has Str  $.desc   handles width => 'chars';
    has Cool $.value  handles *;

    # Display the report in two columns...
    method display (Int $width) {
        printf "%*s: %s\n", $width, $!desc, self.assess-value;
    }

    # Colour-code the value for display...
    method assess-value () {
        # Find leading characters that match the target...
        my Int $correct
            = ($!value ~^ $target).match( /^ \0+/ ).chars;

        # Split the value accordingly...
        $!value.gist ~~ /^ $<good> = (. ** {$correct})
                           $<bad>  = (\S*)
                           $<etc>  = (.*)
                        /;

        # Correct chars: blue; incorrect chars: red...
        use Terminal::ANSIColor;
        return colored($<good>.Str, 'blue')
             ~ colored($<bad>.Str,  'red')
             ~ $<etc>
    }
}

We start by creating a new subtype (subset Unary)
of the built-in Code type (i.e. the general type of blocks, lambdas, subroutines, etc.)
This new subtype requires that the Code object must take exactly one argument (where *.signature.params == 1).
Because (almost) everything in Raku is an object, it’s easy for the language to provide
these types of detailed introspection
methods on values, variables,
types, and code.
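For instance, the Unary subset from above can be smart-matched against any block, and the match applies both the Code type check and the parameter-count constraint:

```
subset Unary of Code where *.signature.params == 1;

say (-> $n { 2 × $n })      ~~ Unary;    # True:  one parameter
say (-> $a, $b { $a + $b }) ~~ Unary;    # False: two parameters
```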

Next we create a pluggable component (role Reportable) for building classes that
generate self-organizing reports. This role is
parameterized
to take a named value (:$target) that will subsequently be used to assess the accuracy of
individual values being reported. That target value is specified to be of type
Cool, which allows it to be a string, a number, a
boolean, a duration, a hash, a list, or any other type automatically convertible to a
string or number. This type constraint will ensure that any value reported will later be
able to be assessed correctly. The :target parameter is optional; the default target
being the empty string.

The Reportable role is going to automatically track all reportable objects for us...and then
generate the aggregated report at the end of the program. So we declare a shared variable
to hold each report (state Reportable @reports), with a type specifier that requires
each element of the array to be able to perform the Reportable role. To ensure that the
reports are eventually printed, we add an END phaser:

END @reports».display: @reports».width.max;

At the end of execution, this statement calls the .display method of each report
(@reports».display), passing it the maximum width of any report description
(@reports».width.max) so that it can format the report into two clean columns.

To accumulate these reports, we arrange for each Reportable object to automatically
add itself to the shared @reports array, when it is constructed...by declaring a
TWEAK submethod. A submethod
is a class-specific, non-inherited method, suitable for specifying per-class initializers. The
TWEAK submethod is called automatically after an object is created: usually to adjust
its initial value, or to test its integrity in some way.

Here, we’re doing both: by having the submethod add each object to the report list
(@reports.push: self) and by applying any :check code passed to the constructor
($check(self)). This allows the user to pass in an arbitrary test during the constructor call
and have it applied to the object once that object is initialized. We’ll see shortly how
useful that can be.

Each report consists of a string description and a value that can be any Cool type, so we
need per-object attributes to store them. We declare them with the has keyword and a
“dot” secondary sigil
to make them public:

has Str $.desc;
has Cool $.value;

We also need to be able to access the string width of the description.
We could declare a method to provide that information:

method width { $.desc.chars }

But, as this is just forwarding a request from the Reportable object to the $.desc object
inside it, there’s a much easier way to achieve the same effect: we simply tell the
$.desc attribute that it should handle all object-level calls to .width by calling
its own .chars method. Like so:

has Str $.desc handles width => 'chars';

More significantly, we also need to be able to forward methods to the $.value attribute
(for example, to interrogate its .timing information). But as we don’t know what kind of
object the value may be in each case (apart from generically being Cool), we can’t know in
advance which methods we may need to forward. So we simply tell $.value to handle
whatever the surrounding object itself can’t, like so:

has Cool $.value handles *;

Now we just need to implement the .display method that the END phaser will use to output
each Reportable object. That method takes the width into which the description should be
formatted, and prints it out justified to that width using a printf. It also prints the value,
which is pre-processed using the .assess-value method.

.assess-value works out how well the value matches the target, by taking a
character-wise XOR between the two ($!value ~^ $target). For every character
(or digit in a stringified number) that matches, the XOR will produce a null character
('\0'). For every character that differs, the resulting XOR character will be something else.
So we can determine how many of the initial characters of the value match the target by
counting how many leading '\0' characters the XOR produces (.match( /^ \0+/ ).chars).
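Here's that XOR trick in isolation (on two hypothetical strings):

```raku
my $value  = '2.7182717';
my $target = '2.7182818';

# The first six characters agree, so the XOR string starts with six NULs...
say ($value ~^ $target).match( /^ \0+/ ).chars;    # 6
```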

Then we just split the value string into three substrings using a regex,
colouring the leading matches blue and the trailing mismatches red,
using the Terminal::ANSIColor module.

Once we have the role available, we can build a couple of Reportable classes
suited to our actual data. Each result we produce will need to be assessed
against an accurate representation of 𝑒:

class Result
does Reportable[:target<2.71828182845904523536028747135266>]
{}

And we’ll also want to format our report with empty lines between the various
techniques we’re reporting, so we need a Reportable class whose .display
method is overridden to print only empty lines:

class Spacer
    does Reportable
{
    method display (Int) { say "" }
}

Finally we build the assess subroutine itself:

constant TIMEOUT = 3;  # seconds

# Evaluate a block over a range of inputs, and report...
sub assess (Unary $block --> Any) {
    # Introspect the block's single parameter...
    my Parameter $param    = $block.signature.params[0];
    my Bool      $is-topic = $param.name eq '$_';

    # Extract and normalize the range of test values...
    my Any @values = do given try $param.default.() {
        when !.defined && $is-topic { Empty           }
        when !.defined              { 1, *×2 ... ∞    }
        when .?infinite             { .min, *×2 ... ∞ }
        default                     { .Seq            }
    }

    # Introspect the test description (from doc comments)...
    my Str $label = "[$block.line()] {$block.WHY // ''}".trim;

    # New paragraph in the report...
    Spacer.new;

    # Run all tests...
    for @values Z $label,"",* -> ($value, $label) {
        Result.new:
            desc  => "$label ($param.name() = $value)",
            value => timed { $block($value) },
            check => { last if .timing > TIMEOUT }
    }
    if !@values {
        Result.new:
            desc  => $label,
            value => timed { $block(Empty) }
    }
}

The subroutine takes a single argument: a Code object that itself takes
a single parameter (which we enforce by giving the parameter the type Unary).

We immediately introspect that one parameter ($block.signature.params[0]),
which is (naturally) a Parameter object, and store it in a suitably typed variable
(my Parameter $param). We also need to know whether the parameter is the
implicit topic variable (a.k.a. $_), so we test for that too.

Once we have the parameter, we need to determine whether the caller gave it a default
value...which will represent the set of values for which we are to assess the code
in the block passed to assess. In other words, if the user writes:

assess -> \N=1..100 { some-function-of(N) }

...then we need to extract the default value (1..100) so we can iterate through
those values and pass each in turn into the block.

We can extract the parameter’s default value (if any!) by calling the appropriate
introspection method: $param.default, which will return another Code object
that produces the default value when called. Hence, to get the actual default
value we need to call .default, then call the code .default returns.
That is: $param.default.()

Of course, the parameter may not have a default value, in which case .default will
return an undefined value. Attempting the second call on that undefined value would
be fatal, so we make the attempt inside a try, which converts the exception
into yet another undefined value.
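In isolation, the whole extraction pattern looks like this (with two throw-away blocks for illustration):

```raku
my &with-default    = -> \N=1..100 { N² };
my &without-default = -> \N        { N² };

# A default is returned as a Code object that produces the value...
say try &with-default.signature.params[0].default.();    # 1..100

# With no default, the call fails, and try gives us an undefined value...
say (try &without-default.signature.params[0].default.()) // 'undefined';
```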

We then test the extracted default value to determine what it means:

when !.defined && $is-topic { Empty }
when !.defined { 1, *×2 ... ∞ }
when .?infinite { .min, *×2 ... ∞ }
default { .Seq }

If it’s undefined, then no default was specified, so we either use an empty list as our test
values (if the parameter is just the implicit topic, in which case it’s a parameterless
one-off trial), or else we use the endlessly doubling sequence 1, 2, 4 ... ∞. Using this
sequence instead of 1..∞ gives us reasonable coverage at every numeric order of
magnitude, without the tedium of trying every single possible value.
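That endlessly doubling sequence is easy to inspect on its own:

```raku
say (1, *×2 ... ∞)[^10];    # (1 2 4 8 16 32 64 128 256 512)
```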

If the default was a range to infinity (when .?infinite), then we adjust it to a similar
sequence of doubling values to infinity, starting at the same lower bound (.min). And if the
value is anything else, we just use it as-is, only converting it to a sequence (.Seq).

Once we have the test values, we need a description for the overall test. We could have simply
added a second parameter to assess, but the powerful introspective capabilities of Raku
offer a more interesting alternative...

We need to convey three pieces of information: the line number at which the call to assess
was made, a description of the trial being assessed, and the trial value for each trial.
However, to avoid uncomely redundancy, only the first trial needs to be labelled with the line
number and description; subsequent trials need only show the next trial value.

We can get the line number by introspecting the code block: $block.line()
But where can we get the description from?

Well...why not just read it directly from the comments?!

In Raku, any comment that starts with a vertical bar (i.e. #| Your comment here)
is a special form of documentation known as a
declarator block.
When you specify such a comment, its contents are automatically attached to the first
declaration following the comment. That might be a variable declared with a my, a
subroutine declared with a sub or multi, or (in this case) an anonymous block of
code declared between two braces.
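
For example, if we write (mirroring the #= example shown just below, but with the comment in front):

```raku
#| Bernoulli's limit
assess -> \x { (1+x⁻¹)**x }
```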

...the string "Bernoulli's limit" is automatically attached to the block that
is passed into assess. We could also put the documentation comment after the
block, by using #= as the comment introducer, instead of #|:

assess -> \x { (1+x⁻¹)**x } #= Bernoulli's limit

Either way, Raku’s advanced introspective facilities mean that we can retrieve that
documentation during program execution, simply by calling the block’s .WHY method (because comments are supposed to tell you “WHY”, not “HOW”).

So we can generate our description by asking the block for its line number
and its documentation, making do with an empty string if no documentation comment was
supplied ("[$block.line()] {$block.WHY // ''}").

Now we’re ready to run our tests and generate the report. We start by inserting
an empty line into the report...by declaring a Spacer object. Then we iterate
through the list of values, and their associated labels, interleaving the two
with a “zip” operator (Z). The labels are the initial label we built earlier,
then an empty string, then a “whatever” (*), which tells the zip operator
to reuse the preceding empty string as many times as necessary to match the
remaining elements of @values. That way, we get the full label on the first
line of the trial, but no repeats of it thereafter.

The zip produces a series of two-element arrays: one value, one label. We then iterate through
these, building an appropriate Result object for each test:

Result.new:
    desc  => "$label ($param.name() = $value)",
    value => timed { $block($value) },
    check => { last if .timing > TIMEOUT }

The description for the report is the label, followed by the block’s parameter name
($param.name()) and the current trial value being passed to it on this iteration
($value). The value for the report is just the timed result of calling the block with the
current trial value passed to it (timed { $block($value) }).

But we also want to stop testing when the tests get too slow, so we pass the Result constructor
a check argument as well, which causes its TWEAK submethod to execute the check block once the
result has been added to the report list. Then we arrange for the check block to terminate
the surrounding for loop (i.e. last) if the timing of the new object exceeds our chosen
timeout. The call to .timing will be invoked on the Result object, which (not having a
timing method itself) will delegate the call to its $.value attribute, which we specified
to handle “whatever” methods the object itself can’t deal with.

The only thing left to do is to cover the edge-case where the user provides no test values
at all (i.e. when there are no @values to iterate through). That can happen either when an
explicit empty list of default values is passed to assess, or when the block passed in
doesn’t explicitly declare a parameter at all (and therefore defaults to the implicit topic
parameter: $_). In either case, we need to call the block exactly once, without any argument.
So if the array of test values has no elements (if @values.elems == 0),
we instantiate a single Result object, passing it the full label, and the timed result of
calling the block with a (literally) empty argument list:

if !@values {
    Result.new:
        desc  => $label,
        value => timed { $block(Empty) }
}

And we’re done. We now have an introspective and compact way of performing multiple trials on
a block, across a range of suitable values, either explicit or inferred, with automatic
timing of—and timeouts on—each trial.

So let’s get back to computing the value of 𝑒...

𝑒 is for eversion

The constant 𝑒 is associated with Jacob Bernoulli in another, entirely unrelated way.
In his posthumous 1712 publication,
Ars Conjectandi, Bernoulli explored the
mathematics of binomial trials: random experiments in which the results are strictly binary:
success or failure, true or false, yes or no, heads or tails.

One of the results of binomial theory has to do with the probability of extremely bad luck.
If we conduct a random binomial trial where the probability of success is
^{1}∕_{k}, then the probability of failure must be 1 -
^{1}∕_{k}. Which means that, if we repeat our same random trial k times, then the probability of failing every time is (1 -
^{1}∕_{k})^{k}. As k grows larger, this probability
increases from zero (for k=1), to 0.25 (for k=2), to 0.296296296296... (for k=3), to
0.31640625 (for k=4), gradually converging on an asymptotic value of 0.36787944117144233...
(for k=∞). And the value 0.36787944117144233... is exactly
^{1}∕_{𝑒}.

Which means that (1 - ^{1}∕_{k})^{-k} must tend towards 𝑒 as k tends to ∞. So we can try:
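
In the same style as the earlier assessments (the starting point of k = 2 is an assumption, to dodge the division-by-zero at k = 1):

```raku
#| Binomial bad luck
assess -> \k = (2, *×2 ... ∞) { (1 - k⁻¹) ** -k }
```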

Which, sadly, converges no faster than Bernoulli’s original loan sharking scheme.

𝑒 is for exclamation

Despite its disappointing performance, the Bernoulli trials approach highlights a
useful idea. The constant 𝑒 appears in a great many other mathematical equations,
all of which we could rearrange to produce a formula starting: 𝑒 = ...

For example, in 1733 Abraham de Moivre first
published
an asymptotic approximation for the factorial function, which (just a few days later!)
was refined
by James Stirling, after whom the approximation is now named.
That approximation is: n!≅√2πn × (^{n}∕_{𝑒})^{n}
with the approximation becoming more accurate as n becomes larger.

Rearranging that equation, we get: 𝑒≅^{n}/_{n√n!}

For which we’re going to need both an n-th root operator
and a factorial operator. Both of which are missing from standard
Raku. Both of which are trivially easy to add to it:

# n-th root of x...
sub infix:<√> (Int $n, Int $x --> Numeric) is tighter( &[**] ) {
    $x ** $n⁻¹
}

# Factorial of x...
sub postfix:<!> (Int $x --> Int) {
    [×] 1..$x
}

The new infix √ operator is specified to have a precedence just higher than the existing
** exponentiation operator (is tighter( &[**] )). It simply raises the value after the
√ to the reciprocal of the value before the √.

The new postfix ! operator multiplies together all the positive integers up to its
single argument (1..$x), using a reduction
by multiplication ([×]).

With those two new operators available, we can now write:

#| de Moivre/Stirling approximation
assess -> \n { n / n√n! }

Unfortunately, the results are less than satisfactory:

This approach converges on 𝑒 even more slowly than the previous ones, and for the
first time
Raku has actually failed us when it comes to a numerical calculation. Those zeroes
indicate that the compiler has been unable to take the 256th root of the factorial of 256.
Or deeper roots of higher numbers.

It computed the factorial itself (i.e. 8578177753428426541190822716812326251577815202
79485619859655650377269452553147589377440291360451408450375885342336584306
15719683469369647532228928849742602567963733256336878644267520762679456018
79688679715211433077020775266464514647091873261008328763257028189807736717
81454170250523018608495319068138257481070252817559459476987034665712738139
28620523475680821886070120361108315209350194743710910172696826286160626366
24350228409441914084246159360000000000000000000000000000000000000000000000
00000000000000000) without difficulty, but then taking the 256th root of that huge number (by
raising it to the power of ^{1}∕_{256}) failed. It incorrectly produced the
value ∞, whereupon n divided by that wrong result produced the zero. The point of failure seems to
be around 170! or 10^{308}, which looks suspiciously like an internal 64-bit
floating-point representation limit.

Of course, we could work around that limitation by changing the way we compute n-th
roots. For example, the n-th root of a number X is also given by:
10^{log_{10}X∕n}

So we could try:

# n-th root of x...
sub infix:<√> (Int $n, Int $x --> Numeric) is tighter( &[**] ) {
    10 ** (log10($x) / $n)
}

...but we get exactly the same problem: the built-in log10 function breaks
at around 10^{308}, meaning we can’t apply it to factorials of
numbers greater than 170.

But when X is an integer, there’s a surprisingly good approximation to log_{10}X
that doesn’t have this 10^{308} limitation at all: we simply count the number of
digits in X and subtract ½.
In the worst cases, this result is out by no more than ±0.5, but that means its average
error over a sufficiently large range of values...is zero.
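A quick sanity check of that digit-counting approximation, using a throw-away helper on two arbitrary values:

```raku
sub approx-log10 (Int $x) { $x.chars - ½ }

say log10(12345);           # 4.09...
say approx-log10(12345);    # 4.5
say log10(98765);           # 4.99...
say approx-log10(98765);    # 4.5
```

In both cases the estimate is off by less than ½, in opposite directions.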

Using that approximation for n-th roots:

# n-th root of x...
sub infix:<√> (Int $n, Int $x --> Numeric) is tighter( &[**] ) {
    10 ** (($x.chars - ½) / $n)
}

...we can now extend our assessment of the de Moivre/Stirling approximation
far beyond n=170. In which case we find:

To our disgust, the convergence of this approach clearly does not accelerate at higher values
of n.

In fact, even when n is over a million, the result is still only accurate to three decimal
places; worse even than the classic
palindromic fractional approximation:

#| Palindromic fraction
assess { 878 / 323 }

...which gives us:

[40] Palindromic fraction: 2.718266 [0.001𝑠]

Alas, the search continues.

𝑒 is for estimation

Let’s switch back to probability theory. Imagine a series of uniformly distributed random
numbers in the range 0..1. For example: 0.40511461, 0.22602902, 0.43144412, 0.65805421, ...

If we start adding those numbers up, how many terms of the series do we need to add
together before the total is greater than 1? In the above example, we’d need to add
the first three values to exceed 1. But in other random sequences we’d need to add only
two terms (e.g. 0.76217621, 0.55326178, ...) and occasionally we’d need to add quite
a few more (e.g. 0.1282827, 0.00262671, 0.39838722, 0.386272617, 0.77282873, ...).
Over a large number of trials, however, the average number of random values required
for the sum to exceed 1 is (you guessed it): 𝑒.

So, if we had a source of uniform random values in the range 0..1, we could get an
approximate value for 𝑒 by repeatedly adding sufficient random values to exceed 1,
and then averaging the number of values we required each time, over multiple trials.
Which looks like this in Raku:
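A minimal sketch, relying on the assess framework's default doubling sequence of trial counts:

```raku
#| Sum random values until they exceed 1
assess -> \trials {
    (1..trials).map({ ([\+] rand xx ∞).first(* > 1, :k) + 1 }).sum
        / trials
}
```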

That is: we conduct a specified number of trials (1..trials), and for each trial (.map:)
we generate an infinite number of uniform random values (rand xx ∞), which we then convert
to a list of progressive partial sums ([\+]). We then look for the first of these partial
sums that exceeds 1 (.first(* > 1)), find out its index in the list (:k) and add 1
(because indices start from zero, but counts start from 1).

The result is a list of counts of how many random values were required to exceed one in
each of our trials. As 𝑒 is the average of those counts, we sum them
and divide by the number of trials. And find:

That’s pretty good...for random guessing. If we’d kept it running for a
greater number of trials, we’d eventually have gotten reasonable accuracy. But it’s
unreliable: sometimes losing accuracy as the number of trials increases. And it’s slow:
only three digits of accuracy after a million trials...and 40 seconds of computation. To get a
useful number of correct digits we’d need billions, possibly trillions, of trials...which
would require tens, or thousands, of hours of computation.

So on we go....

𝑒 is for efficiency

Or, rather, back we go. Back to 1669, to that Isaac Newton of mathematical geniuses:
Isaac Newton. In a manuscript entitled
De analysi per aequationes numero terminorum infinitas
Newton set out a general approach to solving equations that are infinite series,
an approach that implies a highly efficient way of determining 𝑒.

Specifically, that: 𝑒 = Σ ^{1}∕_{k!} for k from 0 to ∞

So let’s try that:

#| Newton's series
assess -> \k=0..∞ { sum (0..k)»!»⁻¹ }

Here we compute 𝑒 by taking increasingly longer subsets of length k from the
infinite series of terms (\k=0..∞). We take the indices of the chosen terms
((0..k)), and for each of them (») take its factorial (!). Then for each factorial
(») we take its reciprocal (⁻¹). The result is a list of the successive terms
^{1}∕_{k!}, which we finally add together (sum) to get 𝑒:

Finally...some real progress. After summing only 20 terms of the infinite
^{1}∕_{k!} series, we have 19 correct decimal places, and (at last!)
a reasonably accurate value of 𝑒.

And we can do even better: 𝑒 can be decomposed into many other infinite summations.
For example: in 2003 Harlan Brothers
discovered
that 𝑒=Σ ^{(2k+1)}∕_{(2k)!}
which we could assess with:

#| Brothers series
assess -> \k=0..∞ {
    sum (2 «×« (0..k) »+» 1) »/« (2 «×« (0..k))»!
}

By making every arithmetic operator a
hyperoperator,
we can compute the entire series of 0..k terms in a single vector expression and then sum
them to get 𝑒.

The code for the Brothers formula might be a little uglier than for Newton’s original,
but it makes up for that by converging twice as fast, giving us 19 correct decimal places from
just the first ten terms in the series:

We’re making solid progress on an accurate computation of 𝑒 with these two
Newtonian series, but that ugly (and possibly scary) hyperoperated assessment code is still
kinda irritating. And too easy to get wrong.

Given that these efficient methods all work the same way—by summing (an initial subset
of) an infinite series of terms—maybe it would be better if we had a function to do that
for us. And it would certainly be better if the function could work out by itself exactly
how much of that initial subset of the series it actually needs to include in order to produce
an accurate answer...rather than requiring us to manually comb through the results of multiple
trials to discover that.

And, as so often in Raku, it’s surprisingly easy to build just what we need:

sub Σ (Unary $block --> Numeric) {
    (0..∞).map($block).produce(&[+]).&converge
}

We call the subroutine Σ, because that’s the usual mathematical notation for this function
(okay, yes, and just because we can). We specify that it takes a one-argument block,
and returns a numeric value (Unary $block --> Numeric), with the return value representing a
“sufficiently accurate” summation of the series specified by the block.

To accomplish that, it first builds the infinite list of term indexes (0..∞) and passes
each in turn to the block (.map($block)). It then builds successive partial sums of the
resulting terms (.produce(&[+])). That is, if the block was { 1 / k! }, then the list
created by the .map would be: (1, 1, ^{1}∕_{2}, ^{1}∕_{6}, ^{1}∕_{24}, ^{1}∕_{120}, ...), and the .produce(&[+]) method call would progressively
add these, producing the list: (1, 1+1, 1+1+^{1}∕_{2}, 1+1+^{1}∕_{2}+^{1}∕_{6}, 1+1+^{1}∕_{2}+^{1}∕_{6}+^{1}∕_{24}, 1+1+^{1}∕_{2}+^{1}∕_{6}+^{1}∕_{24}+^{1}∕_{120}, ...). That is: (1, 2, 2.5, 2.666, 2.708, 2.717, ...).
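We can see .produce in action on a short eager list:

```raku
say (1, 2, 3, 4).produce(&[+]);    # (1 3 6 10)
```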

We set the calculation up this way because we need to be able to work out when to stop adding
terms, which will be when the successive elements of the .produce list converge
to a consistent value. In other words, when two successive values differ by only an
inconsequential amount.

And, conveniently, Raku has an operator that can tell us precisely that:
the ≅ is-approximately-equal-to operator (or
its ASCII equivalent: =~=).

So we could write a subroutine that takes a list and returns the first “converged” value
from it, like so:
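A minimal sketch (the parameter name here is incidental):

```raku
# Return the first value at which successive elements converge...
sub converge (Seq \partial-sums --> Numeric) {
    partial-sums.rotor(2 => -1).first({ .head ≅ .tail }).tail
}
```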

The .rotor method extracts sublists of
N elements from its list. Here we tell it to extract two elements at a time, but to also make
them overlap, by stepping back one place (-1) between each extraction. The result is that,
from a list such as (1, 2, 2.5, 2.666, 2.708, 2.717 …), we get a list of lists of every
pair of adjacent values: ( (1, 2), (2, 2.5), (2.5, 2.667), (2.667, 2.708), (2.708, 2.717) …)

Then we simply step through the list of lists, looking for the first sublist in which the head and
tail elements are approximately equal (.first({ .head ≅ .tail })). That gives us back a single
sublist, from which we extract only the more-accurate tail element (.tail).

We can then just apply this function to the list of partial sums being produced in Σ:

(0..∞).map($block).produce(&[+]).&converge

...which is just a conveniently left-to-right way of passing the entire list into converge.
Of course, we could also have written it as a normal name-arglist style subroutine call:

converge (0..∞).map($block).produce(&[+])

...if that makes us more comfortable.

The insanely cool thing about all this—and the only reason it works at all—is that
every component of the two method-call chains within Σ and converge is lazily evaluated.
Which means that .map doesn’t actually execute the block on any of the (0..∞) values,
and .produce doesn’t progressively add any of them together, and .rotor doesn’t extract
any overlapping pairs of them, and .first doesn’t search any of them...unless the final
result requires them to. The .map only invokes the block as many times as necessary to
get enough values for .produce to add, to get enough values for .rotor to extract,
to get enough values for .first to find the first approximately equal pair.

I truly love that about Raku: not only does it allow you to write code that’s concise,
expressive, and elegant; your concise, expressive, elegant code is naturally efficient
as well.

Meanwhile, we now have a much better (i.e. more concise, more expressive, more elegant)
way of generating a highly accurate value of 𝑒:
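For example, reassessing Newton's series via Σ (a sketch, reusing the factorial operator defined earlier):

```raku
#| Newton's series, via Σ
assess { Σ -> \k { k!⁻¹ } }
```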

And that would be the end of the story...except that we’ve still ignored half of the
possibilities. Every technique we’ve tried so far has been mathematical in nature.
But Raku is not just for algebraists or statisticians or probability theorists.
It’s also for linguists and authors and poets and all other lovers of natural language.

So how could we use natural language to compute an accurate value of 𝑒?

Well, it turns out that algebraists and statisticians and probability theorists have been doing that
for centuries. Because once you’ve spent several hours (or days, or weeks) manually calculating the
first eight digits of 𝑒, you never want to have to do that again! So you
come up with a mnemonic: a sentence that helps you remember those hard-won digits.

Most mnemonics of this kind work by encoding each digit in the length of successive words.
For example, to remember the constant 1.23456789, you might encode it as: “I am the
only local Aussie abacist publicly available”. Then you just count the letters of each
word to extract the digits.

As usual, that’s trivial to implement in Raku:

sub mnemonic (Str $text --> Str) {
    with $text.words».trans(/<punct>+/ => '')».chars {
        return "{.head}.{.tail(*-1).join}"
    }
}

We first extract each word from the text ($text.words), and for each of them (»)
we remove any punctuation characters by translating them to empty strings
(.trans(/<punct>+/ => '')), and finally count the number of remaining
characters in each word (».chars). We then take the first element from that list
of word lengths (.head), add a dot, and then append the concatenation of the rest
of the N-1 character counts (.tail(*-1).join), and return that as the
string representation of the resulting number.

Then we test it:

#| Mnemonic test
assess {
    mnemonic "I am the only local Aussie abacist
              publicly available"
}

...which prints:

[90] Mnemonic test: 1.23456789 [0.002𝑠]

...which is clearly a very poor approximation to 𝑒.

But for three centuries people have been making up better ones.
One of the most widely used is the slightly self-deprecating:

#| Modern mnemonic
assess {
    mnemonic "It enables a numskull to remember
              a sequence of numerals"
}

...which gives us nine correct decimal places:

[100] Modern mnemonic: 2.718281828 [0.003𝑠]

If we need more accuracy, we can just compose a longer sentence. For example:

#| Titular mnemonic
assess {
    mnemonic "To compute a constant of calculus:
              (A treatise on multiple ways)"
}

...which produces one additional digit:

[110] Titular mnemonic: 2.7182818284 [0.002𝑠]

Or, for better accuracy than any of the mathematical approaches so far,
we could use Zeev Barel’s self-referential description:

#| Extended mnemonic
assess {
    mnemonic "We present a mnemonic to memorize a constant
              so exciting that Euler exclaimed: '!'
              when first it was found. Yes, loudly: '!'.
              My students perhaps will compute e, via power
              or Taylor series, an easy summation formula.
              Obvious, clear, elegant!"
}

...which gives us far more accuracy than we’d ever actually need:

And even that isn’t the end of the story. Just like π (its main rival for World’s Most Awesome
Mathematical Constant), 𝑒 exerts a strange fascination over the mathematically
whimsical.

For example, in 1988, Dario Castellanos
published the following uncanny approximation:
𝑒≅(π^{4} + π^{5})^{1∕6}
Which, translated to Raku, is:

...which (fittingly) gives us nine correct digits in total:

[140] Sabey's digits: 2.718281826838823 [0.000𝑠]

But those curiosities are as nothing compared to the true highlands of 𝑒 exploration.

For example, Maksymilian Piskorowski found that if you happen to have a spare
eight 9s, you can compute 𝑒=(^{9}∕_{9} + 9^{-9^{9}})^{9^{9^{9}}},
which is accurate to a little over 369 million
decimal places.

But, alas, our assessment fails (eventually), producing a disappointing:

[150] Piskorowski's eight 9s: 1 [332242.506𝑠]

...because the value of 9^{-9^{9}} is so vanishingly small that adding it to
^{9}∕_{9} in Raku
merely produces 1, and then 1^{9^{9^{9}}} is still 1.

Piskorowski’s incalculable ⅓ billion decimal places of accuracy seemed like the (il)logical
end-point of this quest. But only until the aforesaid Richard Sabey spoiled the game for
everyone, by reformulating his aforementioned pan-digital formulation to: 𝑒=(1 + 9^{-4^{7×6}})^{3^{2^{85}}}

Which is accurate to a staggering 18457734525360901453873570 decimal places
(that’s 18.4 septillion digits)...but which, tragically, immediately underflows Raku’s
numeric representation when attempting to compute the initial 9^{-4^{42}};
Raku being unable to accurately represent ^{1}∕_{9^{19342813113834066795298816}}.

Meanwhile, if you’d like to further explore these and other 𝑒-related exotica,
check out Erich Friedman’s mesmerizing
Math Magic website.

𝑒 is for exquisite

There is still one higher mathematical summit for us to surmount in our quest
for 𝑒.
The pinnacle of mathematical elegance, the peak of arithmetical pulchritude,
the single most beautiful equation in all of mathematics:
Euler’s Identity

In 1748 Leonhard Euler published his masterwork:
Introductio in analysin infinitorum,
in which he included the first mention of the general
Euler Formula:𝑒^{ix}=cos x + i sin x.
Although not explicitly mentioned in the treatise, this formula implies a remarkable special
case (at x=π) of:𝑒^{iπ}=-1 + 0i

This special-case equality is nowadays more usually written:𝑒^{iπ} + 1 = 0
Thereby uniting the five most important constants in mathematics.

Quite apart from this extraordinarily lovely unification of mathematical fundamentals,
for our purposes the significant point here is that one of the five components is 𝑒.
And, if we rearrange the formula to isolate that constant, we get: 𝑒=(-1)^{1∕πi}
which we can easily assess in Raku:

#| From Euler's Identity
assess { (-1) ** (π×i)⁻¹ }

Unfortunately, Raku is not (yet) smart enough to infer from the imaginary exponent
that it needs to use the Complex version
of the ** operator (well, yes, of course complex mathematics is already built into Raku).
So we get back:

[160] From Euler's Identity: NaN [0.000𝑠]

...because the non-complex exponentiation fails when given a complex exponent,
producing the value NaN+NaN\i.

But if we start with a suitably 2D version of -1 (i.e. -1+0i), like so:

#| From Euler's Identity
assess { (-1+0i) ** (π×i)⁻¹ }

...we get the correct complex arithmetic, and a highly accurate answer from it:

[160] From Euler's Identity: 2.718281828459045 [0.000𝑠]

𝑒 is for effortless

It’s fitting that we have now come full circle: finding Euler’s Number
from the special case of Euler’s Formula that is Euler’s Identity.
Euler’s Ouroboros, if you will.

All that effort...to end up more or less back where we started. Which is highly appropriate,
because where we started already had the answer built in. First of all, in the form of
the standard exp function, which returns the value of 𝑒^{x}.

So we could have tried:

#| Built-in exp() function
assess -> \x=1 { exp(x) }

...which would have given us:

[170] Built-in exp() function (x = 1): 2.718281828459045 [0.000𝑠]

But even that is considerably more effort than we actually need in Raku.
Because (of course) the constant itself is also built right in to the language:

#| Built-in e constant
assess { e }

...and produces an equally accurate result:

[180] Built-in e constant: 2.718281828459045 [0.000𝑠]

So the optimal solution to the original task was just five characters long:

say e

...which shall henceforth be known as: “Euler’s One-liner”.