Given a list of words, return the sorted list of the characters common to every word.
Map each word into a Set of characters, then take the intersection of those Sets.
sub task1 ( @words --> Seq ) {
return sort keys [∩] map *.lc.comb.Set, @words;
}
I needed keys because Sets are represented like Hashes of KEY => True, and we only want the key.
.Set was not actually needed within the map, because the intersection operator would interpret its input as Sets. I made it explicit for clarity.
Build a Bag of (character => count), then extract those characters with the same count as the number of words.
use v5.36;
use List::Util qw<uniq>;
sub task1 ( @words ) {
my %h;
$h{$_}++ for map { uniq split '', lc $_ } @words;
return [ grep { $h{$_} == @words } sort keys %h ];
}
Note that uniq is key to making this work; otherwise multiples of a character within a word would fool the logic.
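The count-per-word approach also ports easily; a Python sketch using Counter (names are mine), where a per-word set() plays the role of uniq:

```python
from collections import Counter

def common_chars(words):
    # Count, per character, how many words contain it (unique within
    # each word), then keep the characters present in every word.
    counts = Counter(c for w in words for c in set(w.lower()))
    return sorted(c for c, n in counts.items() if n == len(words))
```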
Given a list of integers, return all the permutations that are "squareful" (the sum of every adjacent pair is a perfect square).
We will need:
A way to identify a perfect square.
A way to either: A. identify that all adjacent pairs have a property, and a way to filter all permutations against it, or B. permute with early identification that some adjacent pair lack the property.
I use $n.sqrt.round.² == $n (in Raku) or (int(sqrt $n) ** 2) == $n (in Perl) to spot perfect squares.
Other methods that do not re-square the integer portion of the square root, such as .sqrt == .sqrt.Int and .sqrt % 1, will fail at (2 ** 26)² == 67108864² and above. .sqrt.narrow ~~ Int fails at 22351742².
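For languages with an exact integer square root, the re-squaring test can avoid floating point entirely; a Python sketch using math.isqrt (function name mine):

```python
from math import isqrt

def is_perfect_square(n):
    # isqrt works on exact integers, so it sidesteps the floating-point
    # precision traps described above for very large squares.
    r = isqrt(n)
    return r * r == n
```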
In Raku, sub task2_slow is my "permute and filter" solution. I did not use that approach in Perl.
Handling our own permutation via recursion allows us to terminate branches early when we detect a failure, offering a huge performance increase. For example, [1..8] would have 8! == 40320 permutations, but we only have to check 24 of them.
Subroutine task2 can handle my whole test suite (which includes [1..25]) in 3 seconds.
sub isperfectsquare ( UInt $n --> Bool ) {
return $n.sqrt.round.² == $n;
}
sub task2_slow ( @ns --> Seq ) {
sub allpairssumtoperfectsquares ( @p --> Bool ) {
return @p.rotor(2 => 1)
.map( *.sum.&isperfectsquare ).all.so;
}
return @ns.permutations
.grep(&allpairssumtoperfectsquares)
.sort.squish(:with(&[eqv]));
}
sub task2 ( @ns --> Seq ) {
my %square_pairs = @ns.map: -> $a {
$a => @ns.grep(-> $b { isperfectsquare($a + $b) }).Set;
}
my BagHash $n_count = @ns.BagHash;
sub recursing ( UInt $head ) {
return $head if not $n_count;
return gather for %square_pairs{$head}{$n_count.keys}:k -> $c {
$n_count{$c}--;
.take for recursing( $c ).map({ $head, $_ });
$n_count{$c}++;
}
}
return sort gather for $n_count.keys -> $head {
$n_count{$head}--;
.take for recursing( $head );
$n_count{$head}++;
}
}
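The same prune-early recursion can be sketched in Python (names are mine); a Counter stands in for the BagHash, and branches die as soon as an adjacent pair fails the perfect-square test:

```python
from math import isqrt
from collections import Counter

def squareful_perms(ns):
    # Build each permutation one element at a time, abandoning a branch
    # as soon as an adjacent pair does not sum to a perfect square.
    counts = Counter(ns)

    def ok(a, b):
        s = a + b
        r = isqrt(s)
        return r * r == s

    def recurse(prefix):
        if sum(counts.values()) == 0:
            yield list(prefix)
            return
        for n in list(counts):
            if counts[n] == 0:
                continue
            if prefix and not ok(prefix[-1], n):
                continue  # early termination of this branch
            counts[n] -= 1
            prefix.append(n)
            yield from recurse(prefix)
            prefix.pop()
            counts[n] += 1

    return sorted(recurse([]))
```

Iterating over the distinct values in the Counter (rather than over positions) also de-duplicates permutations of repeated elements for free.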
Translation of the Raku code, with these notable changes:
My implementation of Bag is "insulated" behind closures.
$square_pairs{$x}{$y} = $y is set up for the hash-slice lookup that replaces Raku's :k adverb.
Note also the use of __SUB__ to recurse from within an anonymous sub.
use v5.36;
sub is_perfect_square ( $n ) {
return $n == ( int(sqrt $n) ** 2 );
}
sub task2 ( @ns ) {
my %square_pairs;
for my $x (@ns) {
for my $y (@ns) {
if ( is_perfect_square( $x + $y ) ) {
$square_pairs{$x}{$y} = $y;
$square_pairs{$y}{$x} = $x;
}
}
}
my ( $bag_add, $bag_del, $bag_keys, $bag_empty );
{
my %n_count;
$bag_add = sub($n){$n_count{$n}++};
$bag_del = sub($n){$n_count{$n}--; delete $n_count{$n} if not $n_count{$n}};
$bag_keys = sub() {sort { $a <=> $b } keys %n_count};
$bag_empty = sub() { not %n_count};
}
$bag_add->($_) for @ns;
my $recursing = sub ( $head ) {
return [$head] if $bag_empty->();
my @ret;
for my $c (grep {defined} @{$square_pairs{$head}}{$bag_keys->()}){
$bag_del->($c);
push @ret, map { [$head, @{$_} ] } __SUB__->( $c );
$bag_add->($c);
}
return @ret;
};
my @ret;
for my $head ( $bag_keys->() ) {
$bag_del->($head);
push @ret, $recursing->( $head );
$bag_add->($head);
}
return \@ret;
}
Given an n x n matrix where n >= 2, find the third smallest element.
2 x 2 == 4, so we don't have to handle the condition of there being fewer than three elements.
We will either need to flatten the matrix to a list, or keep track of the intermediate three smallest for each matrix row.
Unlike TWC 205.1, we are not removing duplicates.
We could optimize for clarity/readability, or brevity, or large-matrix performance.
sub task1 ( $m ) {
my ( $first, $second, $third, @rest )
= sort { $a <=> $b } map { $_->@* } @{$m};
return $third;
}
sub task1 ( @m ) { @m[*;*].sort.[2] }
Note that the Raku code (on GitHub) has twice as many tests as the Perl code, copying the numeric tests and translating them to alpha. I am using Raku's default .sort, which is based on the generic cmp and so sorts numbers and strings correctly based on type.
There are two other approaches I wanted to explore:
Extending the concept that gives us O(n) performance for .max and .min, we can calculate top N or bottom N in a single pass, without sorting. (I keep intending to write a module to do just that). The benefits would be: using much less space than a copy-sort, often less time, and the availability of running top or bottom intermediate results (like [\+] lines() to get running totals). However, the larger the N in top N, the worse the performance will be, the more complex the code will be, and the more time it takes to verify the logic is correct (or to develop the test cases that expose all the miscoding you might have done). I could not devote enough time this week to pursue this.
The lowest 3 of each row can be computed via many smaller .sorts, greatly reducing memory pressure and also reducing the number of comparisons by half, while still keeping the code simple.
Given a 1000x1000 matrix of 1s, looped 100 times, this approach ran in 1/2 the time that the full flattening sort took.
sub task1_alternate ( @m ) {
my @top;
@top.append( .sort.head(3) ) for @m;
return @top.sort.head(3).tail;
}
# task1 => 209.35s, task1_alternate => 98.74s, x => 2.12
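A Python sketch of the same per-row idea (names are mine), using heapq.nsmallest as the "many smaller sorts":

```python
import heapq

def third_smallest(matrix):
    # Keep only the three smallest of each row, then take the third
    # smallest of that much smaller pool. Duplicates are kept.
    pool = []
    for row in matrix:
        pool.extend(heapq.nsmallest(3, row))
    return heapq.nsmallest(3, pool)[-1]
```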
Given a list of positive integers, find the highest value possible from concatenating those integers.
Solvable via permutation.
At first, this looks efficiently solvable by just reverse-sorting the integers by their stringified values. This works with same-number-of-digits integers, but fails with (1, 10) => 110; the nothingness after the 1 causes it to sort before the 0, and we need the opposite sort for our reversed-sort to work.
Temporarily padding shorter integers with a character of codepoint > digit-codepoints looks like a good fix for the above problem.
(Spoiler: it wasn't.)
First, let's look at the roads not taken.
sub task2 ( @ns --> UInt ) {
return @ns.permutations.map( + *.join ).max;
}
This code is simple, and always returns the correct answer. Eventually!
Changing the number of elements in @ns from 9 to 11 caused a 100x increase in runtime. 20 elements will not finish before Sol consumes the Earth.
sub task2 ( @ns --> Str ) {
my $max_length = @ns».chars.max;
sub padded ( $s ) { $s ~ ( '~' x ($max_length - $s.chars) ) }
return @ns.sort(&padded).reverse.join;
}
The ~ tilde character has a codepoint higher than any ASCII digit, so this code fixes the (1, 10) => 110 test-case mentioned above. It fails a new test-case: (10, 100) => 10100. It turns out that what character you pad with is affected by what you are comparing to, which changes on each comparison, so we must abandon this strategy. (Well, I think we must.)
Playing with this approach, I saw that you can fool yourself into thinking you have a good solution, when you have actually created a tie with no tiebreaker to the ordering, and the test passes just because the input was in the correct order to start with! We must also test (100, 10) => 10100.
As I considered the problem in the shower, I realized that the answer to the different-length orderings could be found by numerically comparing both "ab" and "ba" string-combinations.
Easily written as @^ns.sort({ $^b ~ $^a <=> $^a ~ $^b }).join, I named the less-clear part to ease the burden on the reader.
sub numerically_highest_concatenation ( $a, $b --> Order ) {
$b ~ $a <=> $a ~ $b
}
sub task2 ( @ns --> Str ) {
return @ns.sort(&numerically_highest_concatenation).join;
}
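The same pairwise comparator is available in Python via functools.cmp_to_key; a sketch (function name mine):

```python
from functools import cmp_to_key

def highest_concatenation(ns):
    # a sorts before b exactly when a+b, as a number, beats b+a;
    # this is the same "ba <=> ab" comparison as above.
    def cmp(a, b):
        return int(b + a) - int(a + b)
    return ''.join(sorted(map(str, ns), key=cmp_to_key(cmp)))
```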
By the way, at the Perl and Raku Conference in Toronto, I am giving a talk on Sorting. In the Raku part of the talk, I will show off the superior arity-1 form of .sort (like .sort(*.date_of_last_purchase)), and had planned to show the only two use cases that remained for the old $a <=> $b arity-2 form from Perl. Because of this TWC task, I will be adding a third case, where pairings must be compared to determine ordering.
use v5.36;
sub task2 ( @integers ) {
return join '', sort { "$b$a" <=> "$a$b" } @integers;
}
Note that, where I increased clarity by using a named sort subroutine in Raku, here I named the parameter, and used double-quotes instead of . to stress the concatenation aspect while removing any doubts about operator precedence (. vs <=>).
dash dot dash dash dash dot dash dash dash dash dot dot
dash dot dash dash dash dot dash dash dash dash dot dot
dash dot dash dash dash dot dash dash dash dash dot dot
 "YYZ", in 10⁄8 time, by Rush audio video
Given an array of integers, return the Third Highest if found, otherwise return the maximum.
The last example shows that duplicate values are to be suppressed; task1(5,4,4,3) should return 3.
The very efficient single-pass algorithm for third-highest-or-maximum is very inefficient with respect to programmer time, so I just sorted.
use v5.36;
use List::Util qw<uniq>;
sub task1 ( $ns ) {
my ( $x1, $x2, $x3 ) = sort { $b <=> $a } uniq @{$ns};
return $x3 // $x1;
}
sub task1 ( @ns ) {
(.elems == 3 ?? .head !! .tail) given @ns.sort.squish.tail(3);
}
.squish is more efficient than .unique, and can be used as an exact replacement when the list is already sorted.
given aliases the RHS expression into $_, allowing the concise (elems, head, tail) method calls.
Given an array of integers, find the highest value obtained by XORing any two distinct elements.
use v5.36;
use ntheory qw<forcomb>;
use List::Util qw<max>;
sub task2 ( @ns ) {
my $r = 0;
forcomb {
$r = max( $r, $ns[$_[0]] ^ $ns[$_[1]] )
} @ns, 2;
return $r;
}
sub task2 { @_.unique.combinations(2).map({ [+^] .list }).max }
[+^] is the "reduction" form of the +^ XOR operator; it XORs all the things on its right, which in this case are both of the elements in the two-element list passed to map.
That simple .combinations(2) works fine, with its N*(N-1)/2 performance, until the input array gets large.
With a bit of analysis, we can go much faster!
sub task2_fast ( @ns ) {
sub hibit ( UInt $n ) { $n.log2.floor }
my %grouped = @ns.unique.classify(&hibit);
my ($top_group, @lesser_groups) = %grouped.sort(+*.key)
.map(*.value);
if !@lesser_groups {
my $removes_hibit = 1 +< hibit($top_group[0]);
return task2_fast( $top_group.list »+^» $removes_hibit );
}
else {
return [max] @lesser_groups.map: {
($top_group.list X+^ .list).max;
}
}
}
XOR on single bits: 0^0 == 0, 0^1 == 1, 1^0 == 1, 1^1 == 0.
XOR is commutative (order does not matter): a^b == b^a, so .combinations(2) serves instead of the full @ns Xop @ns Cartesian product.
a^a == 0, which is the minimum possible result, and is only achievable with a^a; x^y == 0 implies x==y. So .unique is safe (and helpful); duplicate elements XOR to zero.
Raku has 3 XOR ops:
xor short-circuiting boolean XOR, low precedence
^^ short-circuiting boolean XOR, high precedence
+^ numeric bitwise XOR (which is what this task needs)
The highest binary bit set in an integer can be found in many ways in C and assembler. Traditionally, the rightmost bit ("ones" position) is 0, next to the left ("twos" position) is 1, next ("fours" position) is 2, and so on.
In Perl, floor(log($N)/log(2)) works, where floor is from the POSIX module, or from the new Perl v5.36 "builtin". In Raku, this is easily done with .log2.floor. e.g. the highest bit in 3 (0b11) is 1, and the highest bit in 4 (0b100) is 2:
$ raku -e 'for sort classify *.log2.floor, ^1024 {
say join "\t", .key, .value.minmax.gist,
.value.minmax.bounds.fmt("0b%b", "..")
}'
HiBit RangeInDecimal RangeInBinary
-Inf 0..0 0b0..0b0
0 1..1 0b1..0b1
1 2..3 0b10..0b11
2 4..7 0b100..0b111
3 8..15 0b1000..0b1111
4 16..31 0b10000..0b11111
5 32..63 0b100000..0b111111
6 64..127 0b1000000..0b1111111
7 128..255 0b10000000..0b11111111
8 256..511 0b100000000..0b111111111
9 512..1023 0b1000000000..0b1111111111
Even if all we know about two integers is the highest bit set in each, we still know something about their XOR. Given A > B:
If A and B have different highest bits, then A has the higher of the two highest bits, and A XOR B will have that highest bit from A. Further, (A XOR B) will always be greater than B.
If A and B have the same highest bit, then A XOR B will clear that bit. Further, (A XOR B) will always be less than A.
We will see concrete examples of these "different highest bits" and "same highest bits" cases later. For now, we posit that some benefit will be gained if we group (via .classify) the input array by the highest bit set.
sub hibit ( UInt $n ) { $n.log2.floor }
my %grouped = @ns.unique.classify(&hibit);
Now if we only had a way to recombine all these groups, that guaranteed we did not miss any combinations of the original @ns elements...
(or, The Whole Is Always Equal To The Sum Of Its Parts, But Only When Using The Correct Definition Of "Sum")
Classifying (sub-grouping) all of some list into groups, and then taking the cross of each .combinations(2) of those groups, plus the .combinations(2) within each group, will exactly equal a generic .combinations(2) of the initial list. Every two-element pairing will still occur exactly once. e.g:
$ raku -e '.say for (1..30).combinations(2)' | wc -l
435 # 30*29/2, as expected
$ raku -e '
my @groups = (1..30).classify(* % 10).sort.map(*.value); # 10 sub-groups, by last-digit for fun
for @groups.combinations(2) -> ($g1, $g2) { # All 2-combinations of groups,
.say for $g1.list X $g2.list; # one group crossed with another group
}
for @groups -> $g1 { # Each group,
.say for $g1.combinations(2); # 2-combinations of only itself
};
' | wc -l
435 # same count as the simple `.combinations(2)`, just sliced in a (potentially) more useful fashion.
The only time we need the 2-combinations (.combinations(2)) within a group is if that group is the only group. With X+^, only the top group needs to be crossed with every other group. Duplicates were already discarded by .unique.
Consider @ns = 876, 920, 930;. All are in "group 9" (512..1023).
876 == 0b1101101100, after bit9 removed: 0b101101100 == 364
920 == 0b1110011000, after bit9 removed: 0b110011000 == 408
930 == 0b1110100010, after bit9 removed: 0b110100010 == 418
876 +^ 920 == 364 +^ 408 == 244
876 +^ 930 == 364 +^ 418 == 206
920 +^ 930 == 408 +^ 418 == 58
So, max_XOR(876,920,930) will have the same result as max_XOR(364,408,418), because all the XORed combinations of the former equal all the XORed combinations of the latter. So, we recurse with the reduced numbers. Again, they are all in the same group, this time "group 8".
364 == 0b101101100, after bit8 removed: 0b01101100 == 108
408 == 0b110011000, after bit8 removed: 0b10011000 == 152
418 == 0b110100010, after bit8 removed: 0b10100010 == 162
So, max_XOR(364,408,418) will have the same result as max_XOR(108,152,162), so we recurse again.
(108,152,162) are not all in one group, so they have "different highest bits", which I needed a bigger example to show.
Consider @ns = flat 1003, 1001, 1..511;
When grouped by hibit, (1003,1001) are in "group 9", because their highest bit is 9: (0b1111101011, 0b1111101001). In this example, I call this group G9.
All the rest are in groups between 0 and 8. In real code, they would be in separate groups, but for this example, I lump them all together, calling the super-group G0_8.
How can we combine these?
G0_8 X+^ G0_8 always has bit 9 unset, because none of them have bit 9 set.
G9 X+^ G9 always has bit 9 unset, because the XOR cleared it.
G9 X+^ G0_8 always has bit 9 set, since the G9 element has bit 9, and nothing in G0_8 has bit 9 to counteract it.
More generally, for any case of groups with "different highest bits", we only need to check the top group against all the lower groups.
In this particular case, the simple .combinations(2) would require 513*512/2 = 131328 XORs, while this grouping only needs 2*511 = 1022 XORs. 131328/1022 == 128.5. Even given the extra work of grouping, that will be quite a speedup!
Worst case would be if the top group was full, and all lower groups were full. Even then, we cut the number of XORs in half.
I expect that a further speedup is possible, by noting that for any individual N in the top group, if a group exists whose hibit matches the first "zero" bit in N, no group lower than that needs searching. E.g. for 943 (group 9) (0b1110101111), the zeros are in bits 6 and 4. All elements of groups 8,7,6 must be XORed, but if group 6 had any elements, groups lower than 6 may be skipped. Otherwise, groups 5,4 must be XORed, but if group 4 had any elements, groups lower than 4 may be skipped. I did not have time to code this supposition into my solution above, nor did I verify my own logic. It might even result in a net loss of speed, given the work to find the positions of zeros in every N in the top group.
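Setting that extra supposition aside, the grouping algorithm itself can be sketched in Python (names are mine): group by highest set bit, strip-and-recurse when everything shares the top bit, otherwise cross only the top group against the rest:

```python
def max_xor(ns):
    # Group by highest set bit. If everything shares the top bit, XOR
    # clears it, so strip that bit and recurse on the smaller numbers.
    # Otherwise, only (top group) X (lower groups) can keep the top bit.
    ns = sorted(set(ns))
    if len(ns) == 1:
        return 0  # only a duplicate pair remains; x ^ x == 0
    hibit = lambda n: n.bit_length() - 1
    top_bit = hibit(ns[-1])
    top = [n for n in ns if hibit(n) == top_bit]
    lesser = [n for n in ns if hibit(n) != top_bit]
    if not lesser:
        return max_xor([n ^ (1 << top_bit) for n in top])
    return max(a ^ b for a in top for b in lesser)
```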
Bud: Well, let's see, we have on the bags, Who's on first, What's on second, I Don't Know is on third…
Lou: That's what I want to find out.
- Abbott & Costello (video)
In Raku, Perl, and C {via Perl's Inline::C}.
(Still editing!)
Count the integers that have no repeating digits, between 1 and $n.
This is the more time-consuming of the two tasks, in both CPU and programmer time.
Since the largest number with all-unique digits is 9_876_543_210, any requested $n higher than that will have the same answer.
Scanning every integer in the range will scale linearly, and since the largest input is in the 9 billions, we should look for a faster algorithm. The problem relates to combinations of individual digits, irrespective of their order, so something from combinatorics might be fruitful.
Using simple oneatatime iteration:
use v5.36;
use List::Util qw<uniq>;
sub task1 ( $n ) {
return 0 + grep { length == uniq split '' } 1..$n;
}
I also could have said: return 0 + grep !/(.).*\1/, 1..$n;, which would have been faster, but less clear.
Also, for large $n, I observe that about 25% of the total time for perl to return control to my terminal happens after perl outputs the answer. I expect this is due to Perl not optimizing the scalar grep into a "count-only" version of itself, and so a huge list really is allocated and built by grep, which must be DESTROYed at END time. Recoding as something like /(.).*\1/ or $c++ for 1..$n recovered all that cleanup time.
This code takes 2m42s to calculate task1(9_876_543_210)==8_877_690 using one-at-a-time iteration. That is nearly 3 minutes, at full C speed; no bouncing between Perl and C is done except at start and end of a calculation.
use v5.36;
use Inline 'C';
# ... Testing code omitted here
__END__
__C__
int is_special(long x) {
int ds[10];
memset(ds, 0, 10*sizeof(int));
while (x) {
if (ds[x % 10]++)
return 0;
x /= 10;
}
return 1;
}
int count_special(long in) {
long x = in > 9876543210 ? 9876543210 : in;
int r = 0;
for ( ; x ; x-- ) {
if ( is_special(x) )
r++;
}
return r;
}
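As a cross-check on the 8_877_690 figure, the per-length counts from OEIS A073531 can be summed directly; a Python sketch (function name mine), using math.perm for Knuth's falling powers:

```python
from math import perm

def count_distinct_digit_numbers(max_len=10):
    # A073531: count of n-digit positive integers with all digits
    # distinct: 9 choices of leading digit, then 9, 8, ... for the rest.
    return sum(9 * perm(9, n - 1) for n in range(1, max_len + 1))
```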
I have two versions that use combinatorics to solve the task. The first rips into the problem in two phases
(Ack! Must finish this explanation soon, but cannot right now.)
# https://oeis.org/A073531 Number of n-digit positive integers with all digits distinct.
constant @ndigitsdistinct = 0, 9, ( 9 X* [\*] (9...1) );
constant @ndigitsdistinctsum = [\+] @ndigitsdistinct;
sub task1 ( UInt $n --> UInt ) {
constant MAX = 9_876_543_210;
return &?ROUTINE(MAX) if $n > MAX;
# Knuth's "falling powers"; kfp(9,3) == 9*8*7
sub kfp ($n, $k) { [*] ( ($n-$k) ^.. $n ) }
my $nc = $n.chars;
my @totals;
push @totals, @ndigitsdistinctsum[$nc - 1];
my SetHash $used;
for $n.comb».Numeric.kv -> UInt $k, UInt $digit {
my UInt $combinations_in_rightward_places
= kfp(9 - $k, $nc - $k - 1);
my Range $space_below_digit = ( 0 + (1 if $k == 0 ))
.. ($digit - (1 if $k < $nc-1));
my Set $using_for_this_digit = $space_below_digit (-) $used;
push @totals, $using_for_this_digit.elems
* $combinations_in_rightward_places;
$used{$digit}++;
}
return @totals.sum;
}
This version is simpler to understand, but does not perform as well. It does the initial optimization to skip over about .log10 places, then generates all the combinations with the correct leading digit, filtering on which ones are less than $n.
sub task1_one_big_skip ( UInt $n --> UInt ) {
constant MAX = 9_876_543_210;
return &?ROUTINE(MAX) if $n > MAX;
my @totals;
my $lead = $n.substr(0, 1);
my $core = $n.chars  1;
push @totals, @ndigitsdistinctsum[$core];
push @totals, +combinations(9,$core) * ([*] 1..$core)
* ($lead - 1);
my $L3 = 0;
for (0..9).grep(* != $lead).combinations($core) -> @comb {
$L3 += +@comb.permutations.grep: { ($lead ~ .join) <= $n };
}
push @totals, $L3;
return @totals.sum;
}
Given a @list of integers, find the most frequent even numbers in the list, with smallest of those most frequent as a tiebreaker. Return -1 if no even numbers are in the list.
@ns.grep( * %% 2 ).Bag.max({ .value, -.key }).?key // -1 ;
Yes, the solution can be expressed in a single, um, expression.
The Bag counts the even numbers, giving a hash of key=original-number => value=count-of-times-seen.
The .max method will receive Pair objects from the Bag, and find the maximum of each Pair's .value (the count), with negative (because we want the lowest in a .max) .key (original number) as tiebreaker.
Now we just need to return the .key of the Pair that .max found, or -1 if grep found no even numbers. But wait! .max returns -Inf when given an empty list; we cannot call .key on -Inf.
The .? method-op is the "Safe call operator". It works as ., but if the left-hand side lacks the requested method, it returns Nil, which is just what we need for // to trigger the -1 return.
use v5.36;
use List::Util qw<min>;
use List::UtilsBy qw<max_by>;
sub task2 (@ns) {
my %bag;
$bag{$_}++ for grep { $_ % 2 == 0 } @ns;
return -1 if not %bag;
return min max_by { $bag{$_} } keys %bag;
}
Compared to the Raku code, having %bag as a separate variable does prevent a single-expression solution, but we gain clarity; returning -1 happens much sooner in the dataflow, and in a place that is simpler to read.
Also, because we can refer to the count via (less efficient) hash-lookup, and because Perl's max_by does return all the participants in a tie, we can feed in only the keys, so min is operating only on the keys.
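A Python sketch of the same count-then-tiebreak logic (function name mine), where a (count, -n) key does the work of max_by plus min:

```python
from collections import Counter

def most_frequent_even(ns):
    # Count only the evens; pick the highest count, breaking ties
    # toward the smallest number; -1 when no evens exist.
    counts = Counter(n for n in ns if n % 2 == 0)
    if not counts:
        return -1
    return max(counts, key=lambda n: (counts[n], -n))
```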
With great pleasure, we announce Bruce Gray as the next Champion of The Weekly Challenge.
- TWC 194, Mohammad S Anwar
The thing about me that's so impressive
is how infrequently I mention all of my successes
- "I'm So Humble" (video) {The only song by The Lonely Island that is Safe For Work}
Old Mr. Kringle is soon gonna jingle
The bells that'll tingle all your troubles away
Everybody's waitin' for the man with the bag
'cause Christmas is coming again
- The Brian Setzer Orchestra
Given time in the format hh:mm with one missing digit, find the highest digit 0-9 that makes it a valid time.
Example #4 says '2?:00' should return 3, which tells us that we should work with 24-hour time, and that 24:00 is not allowed as an alternate expression of 00:00 the-next-day.
There are only 24*60=1440 possible times, but only (10*60)+(3*60)+(24*10)+(24*6) = 1164 possible (valid) inputs.
sub is_time_valid ( Str $s --> Bool ) {
constant $valid_times = ( ^24 X ^60 ).map( *.fmt('%02d', ':') ).Set;
return $s ∈ $valid_times;
}
sub task1 ( Str $s --> UInt ) {
return (9…0).first: { $s.subst( '?', $_ ).&is_time_valid };
}
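A Python sketch of the same try-digits-from-nine-down idea (function name mine), with a direct range check instead of a precomputed Set of valid times:

```python
def highest_missing_digit(s):
    # Try digits from 9 down; the first that yields a valid
    # 24-hour time (00:00 .. 23:59) wins.
    for d in "9876543210":
        t = s.replace("?", d)
        hh, mm = int(t[:2]), int(t[3:])
        if hh < 24 and mm < 60:
            return int(d)
```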
Given a string, determine whether removing only one character can make the frequency of the remaining characters the same.
The test cases led many participants to shortcut the analysis and produce concise solutions that would fail one of these cases:
The count of a (5) is not within striking distance of the count of b (1), but because b is the only solo character, removing it would succeed.
A count of 3 can be reduced to 2 by removing 1, but there are two letters that each have 3, so the frequency cannot be made equal with just one character removed. Alternately, one of the groups has only one letter in it (c), and abs(3-2) == 1, but in the wrong direction; you would have to add a c to equalize.
Removing any of the letters leaves us with equal frequency.
Of interest to me were methods to generate minimal sets of test cases:
raku -e 'my @a = "a".."e"; for ( [X] (@a xx 5) ) { say .join if [le] .list and (.join ~~ /^a+[b+[c+[d+[e+]?]?]?]?$/)}'
aaaaa aaaab aaabb aaabc aabbb aabbc aabcc aabcd abbbb abbbc abbcc abbcd abccc abccd abcdd abcde
Longer code, but more efficient to run:
raku -e 'my $w = 5; my $f = "\%0{$w}b"; for ^(2 ** $w) { my @bin = .fmt($f).comb;
my $c = "a";
my $out = "a";
for @bin {
$c++ if +$_;
$out ~= $c;
}
say $out;
}'
The above is not fully minimal; it misses the nuance that aabbc and aabcc both are "two of two, and one of one". We need partitioning!
perl -MList::Util=zip -MInteger::Partition -wE 'my $i = Integer::Partition->new(shift); while (my $p = $i->next){say map { $_->[0] x $_->[1] } zip [("a".."z")[keys @$p]], $p}' 5
aaaaa
aaaab
aaabb
aaabc
aabbc
aabcd
abcde
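The same partition-driven generation can be sketched in Python without modules (names are mine):

```python
def partitions(n, max_part=None):
    # Yield the integer partitions of n, parts in weakly decreasing order.
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

def canonical_words(n):
    # One test word per partition: 'a' x first part, 'b' x second, ...
    return [''.join(chr(ord('a') + i) * p for i, p in enumerate(parts))
            for parts in partitions(n)]
```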
# Best compromise between performance, complexity, reducing chance to "get it wrong",
# and likelihood of a maintenance programmer to reverse-engineer the original requirements.
sub task2 ( Str $s --> Bool ) {
my BagHash $b = $s.comb.BagHash;
my @k = $b.keys;
for @k > $k {
$b.remove: $k;
return True if $b.values.unique.elems == 0|1;
$b.add: $k;
}
return False;
}
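A Python sketch of the same remove-one-and-check loop (function name mine), with Counter standing in for the BagHash:

```python
from collections import Counter

def can_equalize(s):
    # Try removing one occurrence of each distinct character; succeed
    # if the remaining non-zero counts are all identical.
    counts = Counter(s)
    for c in list(counts):
        counts[c] -= 1
        distinct = {n for n in counts.values() if n != 0}
        if len(distinct) <= 1:
            return True
        counts[c] += 1
    return False
```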
Translation of my Raku solution, with %h as an improvised Bag.
The code grep { $_ != 0 } is needed to adapt to the hash entry not disappearing when the value goes to 0.
use v5.36;
use List::Util qw<uniq>;
sub task2 ($s) {
my %h;
$h{$_}++ for split '', $s;
my @k = keys %h;
for my $k (@k) {
$h{$k}--;
my $c = 0 + grep { $_ != 0 } uniq values %h;
return 1 if $c == 0
or $c == 1;
$h{$k}++;
}
return 0;
}
He'll make this December the one you'll remember
The best and the merriest you ever did have
Everybody's waitin', they're all congregating
Waitin' for the man with the bag
- Voctave {A Cappella, and Breathtaking; We got to see them in concert last week!}
Calculate all possible binary numbers of size $n.
In 7th grade, I noticed that you can construct all the binary numbers by starting with 0 and 1, then prepending 0-and-1 to all the prior numbers, doubling the size of the calculated list each time:
0
1
00
01
10
11
000
001
010
011
100
101
110
111
I used this method in the Perl solution.
Loosely related: This week, my 5-year-old granddaughter showed us how she can count to 2_000_000:
Cora Kate: "one million, two million !!!"
I could have written: sub task1 ( UInt $n ) { ^(2**$n) .fmt("\%0{$n}b") }, but that gives the wrong answer when $n is 0. So instead:
multi sub task1 ( 1 ) { <0 1> }
multi sub task1 ( UInt $n ) { [X~] (<0 1> xx $n) }
The multi is needed because the X cross-operator will operate on a solo <0 1> by unpacking it, translating to '0' ~ '1', which is not what we want.
I could have written: sub task1 ( $n ) { map { sprintf "%0${n}b", $_ } 0 .. (2**$n)-1 }, but that gives the wrong answer when $n is 0. (Sounds familiar.) So instead:
sub task1 ($n) {
my @r = ("");
@r = ( map("0$_", @r),
map("1$_", @r) ) for 1..$n;
return @r;
}
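A Python sketch of the same doubling construction (function name mine):

```python
def binary_strings(n):
    # Start from the empty string and prepend '0' and '1' to every
    # prior result, doubling the list n times.
    r = ['']
    for _ in range(n):
        r = ['0' + s for s in r] + ['1' + s for s in r]
    return r
```

Starting from [''] rather than ['0', '1'] is what makes the n == 0 case fall out naturally.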
All string elements of a given list will have the same "distance between adjacent characters", except one. Return that one string element.
This task is naturally broken into: computing each word's neighbor-distance signature, and finding the element whose signature differs from all the rest.
The solution will probably be clearer if we keep those aspects separate.
Changing "b","e" into 1,4 to get the difference of 3 can be replaced with ord, since the base values don't matter, only their relative distances.
I could have golfed this to:
@L.classify(~*.comb.rotor(2=>1).map({[-] $_».ord})).values.min(+*)
, but 'Tis the season of thankfulness and giving. Also, I am not happy with the assumptions that the task is making, so I structured it to better allow for warnings.
sub oddballs ( @list, &matcher ) {
my %h = @list
.classify(&matcher)
.values
.classify({ .elems == 1 ?? 'Oddball' !! 'Clique' });
warn "Multiple cliques exist" if %h<Clique>.elems > 1;
return %h<Oddball>.list;
}
sub neighbor_distances ( Str $s --> Str ) {
return $s.comb
.map(&ord)
.rotor(2 => 1)
.map({ .[1] - .[0] })
.Str;
}
sub task2 (@list) {
die "Must have at least 3 to have an oddball" if @list.elems < 3;
my @o = oddballs( @list, &neighbor_distances );
warn "No oddballs found" if not @o;
warn "More than one oddball" if @o.elems > 1;
return @o.head;
}
I want to point out that the "neighbor-difference after numeric translation" is a fine way to specify the desired grouping, but an alternative could also serve:
$s .= trans( ['b'..'z'] => ['a'..'y'] ) until $s.contains: 'a';
Reducing each letter along the alphabet until any of them is an a would produce the same results.
Using many modules, to better parallel the Raku solution:
use v5.36;
use List::Util qw<mesh pairvalues>;
use List::MoreUtils qw<slide>;
use List::Categorize qw<categorize>;
sub diffs ( $s ) {
state %A_N = mesh ['a'..'z'], [0..25];
return join ':',
slide { $b - $a }
@A_N{ split '', $s };
}
sub oddballs ( @s ) {
return grep { @{$_} == 1 }
pairvalues
categorize { diffs($_) } @s;
}
sub task2 ( @s ) {
my @r = oddballs(@s);
warn if @r != 1;
return $r[0][0];
}
mesh is Raku's Z.
slide is Raku's .rotor(2 => 1).
categorize is Raku's .classify.
pairvalues makes up for Raku's Pairs being true objects.
I used the @ sigil on a hash to access Perl's "hash slicing", allowing all of the letters to be translated in one go.
( [\+] ( @a X- $target ) )».abs.sum
In Raku, Perl, and Rust.
(Still editing)
Given a positive Int, flip all the bits.
Some possible approaches:
#1: Math: build an all-ones mask as wide as the number's binary form, and XOR.
#2: String: format as binary, split to list of chars, map with translation table, join, change back to int.
Thoughts: in Perl, the map with a translation table is more directly done with tr///.
Approach #2:
use v5.36;
sub task1 ($n) {
return oct '0b' . sprintf('%b', $n) =~ tr/01/10/r;
}
In recent Perl, the tr/// translation accepts the /r modifier, to return a modified copy instead of modifying the original.
Poorly-named oct() does not just translate octal; it looks for other prefixes like 0b or 0x and translates them from binary or hex to decimal.
Approaches #1 and #2:
# Faster, by about 10x
sub all_ones (UInt $n --> UInt) { ( 1 +< ($n.log2.floor + 1) ) - 1 }
sub fast1 (UInt $n --> UInt) { $n +^ all_ones($n) }
# Clearer:
sub task1 (UInt $n --> UInt) {
return $n.base(2)
.trans( <0 1> => <1 0> )
.parsebase(2);
}
The full file has a bonus test that task1() and fast1() match output over the first 16_384 integers.
fn flip_bits(n: u32) -> u32 {
let all_ones = (n + 1).next_power_of_two() - 1;
return n ^ all_ones;
}
The full file has a bonus test that the first 64 integers match a pattern of jumping-then-falling values.
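A Python sketch of approach #1 (function name mine), where int.bit_length() sizes the all-ones mask:

```python
def flip_bits(n):
    # XOR with an all-ones mask exactly as wide as n's binary form.
    return n ^ ((1 << n.bit_length()) - 1)
```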
Given a list of non-negative integers, how many moves of surfeit -> deficit, one at a time, until all are equal? Return -1 (via early exit) if equality is not achievable.
A: 6 3 3 3 3 0 3 3 3 3 0 1 3 3 3 6 5 (the initial array)
B: 3 0 0 0 0 -3 0 0 0 0 -3 -2 0 0 0 3 2 (@a X- $target)
C: 3 3 3 3 3 0 0 0 0 0 -3 -5 -5 -5 -5 -2 0 ([\+])
D: 3 3 3 3 3 0 0 0 0 0 3 5 5 5 5 2 0 (abs)
E: 3 6 9 12 15 15 15 15 15 15 18 23 28 33 38 40 40 ([\+] to view)
A: The initial array. Clearly, the target level is 3.
The excess 3 in that first 6 needs to fill in the first zero, 5 cells away, so that will cost 3*5=15 moves.
The excess 3 in the last 6 moves 5 cells left into the last zero; 3*5=15 moves.
The excess 2 in the last 5 moves 5 cells left into the one; 2*5=10 moves, so 40 moves total.
Cells already holding the target were "neutral" to our calculation. This suggests we should try "redefining the baseline" by removing the target from every cell, making the cells that already held the target become zero, and those with excess change to hold only the excess amount. That means that cells with less than the target will become negative. Would that break our model?
We moved some to the right, and some to the left, which feels like frosting a sheet cake, and would require forward and backward passes (loops over the array).
But, as we learned with electricity, the flow of electrons, and the backwards "flow" of "holes" where each electron has left, are identical in everything but direction (+1/-1 "sign", in our case).
B: Removed 3 from every cell, via @a X- $target (or map { $_ - $target } @a in Perl).
C: Running total of B. Note the negative values in C; they are not just copied down from B, and the zeros show points where the cells to the left are already in balance, unless we weirdly allowed "negative frosting".
First attempt, as I explored the problem:
my $target = @a.sum div +@a;
# Ack! when a low value is in the middle, pull from which side?
# Sweep like icing a cake, either back-and-forth, or circularly via modulo.
# make test cases of 1 2 4 8 and 8 4 2 1
# No, cannot modulo, because task disallows.
my @forward = @a.keys .rotor(2 => -1);
my @backward = @a.keys.reverse .rotor(2 => -1);
my $count = 0;
my $i = 0;
until @a.all == $target {
say "\nstart:", @a;
for @forward -> ( \i, \j ) {
if @a[i] > ( @a[j] & $target ) {
@a[i]--;
@a[j]++;
$count++;
}
}
say "FWD :", @a;
for @backward -> ( \i, \j ) {
if @a[i] > ( @a[j] & $target ) {
@a[i]--;
@a[j]++;
$count++;
}
}
say "BACK :", @a;
}
Intermediate:
for @a.rotor( 2 => -1 ) {
my $diff = .[0] - $target;
.[0] -= $diff;
.[1] += $diff;
$count += $diff.abs;
}
die @a.raku unless @a.all == $target;
Final:
sub task2 ( @a --> Int ) {
return -1 if @a.sum !%% +@a;
my $target = @a.sum div @a.elems;
my ($count, $mound) = 0, 0;
for @a -> $a {
$mound += $a - $target;
$count += $mound.abs;
}
die "Cannot happen: $mound" unless $mound == 0;
return $count;
}
How does that work?
The $mound
is the excess (or deficit) value being pushed from the prior element onto the current element of the array.
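The same single-pass "mound" logic can be sketched in Python (my own illustration, mirroring the Raku sub above):

```python
def task2(a: list[int]) -> int:
    """Count moves to equalize all cells, or return -1 if impossible."""
    total = sum(a)
    if total % len(a):
        return -1
    target = total // len(a)
    count = mound = 0
    for n in a:
        mound += n - target  # surplus/deficit pushed onto the next cell
        count += abs(mound)  # each unit carried one cell costs one move
    return count
```

Running it on the worked example array A above returns 40, matching the final cell of row E.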
use v5.36;
use List::Util qw<sum0 reductions>;
sub functional ( @a ) {
my $s = sum0 @a;
return -1 if $s % @a;
my $target = $s / @a;
return sum0 map { abs $_ }
reductions { $a + $b }
map { $_ - $target }
@a;
}
sub task2 ( @a ) {
return -1 if sum0(@a) % scalar(@a);
my $target = sum0(@a) / scalar(@a);
my ($count, $mound) = 0, 0;
for my $n (@a) {
$mound += $n - $target;
$count += abs $mound;
}
die "Cannot happen: $mound" unless $mound == 0;
return $count;
}
A few tricks used here:
List::Util::reductions() is the Perl equivalent to Raku's .produce()
or "triangular" reduce metaop.
Both task2()
and its functional()
twin are tested by each test case.
fn task2 ( a : Vec<i32> ) -> i32 {
let sum : i32 = a.iter().sum();
let len : i32 = a.len() as i32;
if sum % len != 0 {
return -1;
}
let target = sum / len;
// Raku code: ( [\+] ( @a X- $target ) )».abs.sum
return a.iter()
.scan(0, |state, &x| { *state += x - target;
Some(*state) })
.map(|x| x.abs())
.sum();
}
While this code is generally concise, Rust's scan() feels clumsy to me.
I am not its only critic.
Next door to the Haunted Mansion.
Note: Both of my Perl solutions use v5.36, because it auto-enables strict, warnings, and signatures.
The Raku solutions are registered as mindaltering substances.
Produced in a facility that may have contained peanuts, tree nuts, or Halloween.
This task has a mismatch between the description and the examples.
"...smallest character in the given array lexicographically greater than the target character"
implies an undef
ined answer if nothing in the array is gt
the target.
However, the 5th example expects the target to be returned in that null case.
I chose to code to match the examples, because it was a more interesting problem.
Final answer:
use v5.36;
use List::Util qw<minstr>;
sub task1 ( $aref, $target ) {
my @gt = grep { $_ gt $target } @{$aref};
return @gt ? minstr(@gt) : $target;
}
List::Util::minstr
has the useful property of returning undef
from an empty list, which allows this shorter version that I had wanted in Raku:
return minstr( grep { $_ gt $target } @{$aref} )
// $target;
I decided against that single-statement version, because:
1. it requires knowledge that minstr returns undef when given an empty list, and
2. compared to the two-statement form, // de-emphasizes the ill-defined part of the task.
I also considered increasing the emphasis by splitting the solution into two subs along the faultline of the contention:
sub smallest_gt_target ($a, $t) { minstr grep { $_ gt $t } @{$a} }
sub task1 ($a, $t) { smallest_gt_target($a, $t) // $t }
What do you think, would that increase in emphasis be worth the loss in D.R.Y.ness?
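For comparison, the same two-statement shape in Python (my own illustration; the built-in `min` with a `default` plays the role of `minstr`):

```python
def task1(a: list[str], target: str) -> str:
    """Smallest string in a lexicographically greater than target,
    falling back to target itself when none qualifies."""
    greater = [s for s in a if s > target]
    return min(greater, default=target)
```

As in the Perl version, knowing the empty-list behavior of `min(..., default=...)` is the whole trick.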
Final answer:
sub task1 ( @a, Str $target --> Str ) {
return .elems ?? .min !! $target given @a.grep( * gt $target );
}
I could have used .so
instead of .elems
, but weighed .so
as less well-known.
Similar:
return $_ eq 'Inf' ?? $target !! $_ given @a.grep(* gt $target).min;
I rejected it because:
- .min returns Inf when called on an empty list.
- It uses $_ two times, instead of zero times (.elems, .min) in the final answer.
- Coercing Inf to string in order to compare with 'Inf' via eq felt icky for a Numeric.
- I worried that Inf does not == itself; I was confusing its behavior with NaN, which indeed does not equal itself.

Single-pass too-clever alternative:
sub task1 ( @a, Str $target --> Str ) {
return @a.min({ $_ !gt $target, $_ }) max $target;
}
BWAHAHAHAHA!
How does that work?
- .min, .max, .minmax, and .sort all default to using cmp as the comparator.
- When comparing two Lists, cmp is run on the first element (.[0]) of each List, and if they are Order::Same, proceeds to cmp the second elements, and so on until the tie is broken or the lists are exhausted.
- When comparing Bools, False comes before True, so (True,False).sort would return (Bool::False, Bool::True).
- The !gt operator is the same as le, but used in its negative form here to keep the "greater than" from the original wording, and to make the "greater than target" values "sort" (really the more efficient min) to the top, so we can only get a minimum value le target if no value is gt target.
- If @a contained anything gt target, .min returns it, and it always wins in the fight against max $target, so the result is the intended value.
- If @a contained nothing gt target, .min returns the lowest value in the whole array, and it must always lose in the fight against max $target, so the result is the specified target.

Even though I am writing Raku documentation to compare&contrast sort versus min/max/minmax, and I wrote this code myself, I find this code hard to grok after being away for 5 minutes.
It is particularly egregious "clever code", because it swallows the original problem requirements into its cleverness. You can't read that (uncommented) code and infer the spec. So, we don't really do such things in real life. Really!
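The same too-clever single pass translates to Python, where tuples compare field-by-field just like Raku Lists under cmp (again, my own illustration, not part of the posted solutions):

```python
def task1_clever(a: list[str], target: str) -> str:
    # Tuples compare element-by-element: False (0) sorts before True (1),
    # so candidates greater than target win the min; then max() restores
    # target when no candidate beat it.
    return max(min(a, key=lambda c: (not c > target, c)), target)
```

It has exactly the same grok-after-5-minutes problem as the Raku original.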
This task will involve keeping track of element counts and slice lengths and start indexes and end indexes; all of these are non-negative integers that the developer will be scrutinizing and staring at, past the midnight hour. And yet, even though the problem is exactly the same with arrays filled with any-old-thing, the task specifies non-negative integers for the array contents, and indeed, every example contains nothing but non-negative integers. Non-negative integers everywhere! It is enough to make a person... non-positive.
So, while I have kept the original examples intact, I commented them out during development. Just below them, you will see exact equivalent letter transpositions:
1 => A, 2 => B, etc. Thus, I kept my sanity. (But seem to have increased my verbosity?)
The positions of the element(s) contributing to the degree
are all that matter.
If @a
is 1000 elements, each a single capital letter, and the array's Degree is 99 (due to having 99 B
, 99 F
, 99 Y
and the other letters 98-or-less each),
then our final slice of @a
must include all the B
's or all the F
's or all the Y
's. Otherwise, we have reduced the degree.
Also, whichever of (B
F
Y
) we choose, the slice must start and end with that letter. Otherwise, the slice is not as small as it could be.
Raku has the Bag
type, which can concisely give us the array Degree with @a.Bag.values.max
, but it will turn out that knowledge of the array Degree will not be useful enough to be worth calculating early!
For each element that exists in the array, we only need to know its count, its first position, and its last position (and from those, the span: last_position - first_position + 1).
)Once we have that information, of all the elements with maximum count (Degree), find which has the minimum span, and maybe use first position as a tiebreaker.
For that element, make a Range of the first and last positions, and return a slice of @a[ $first .. $last ]
.
Raku (builtin) and Perl (module) have concise methods to "group" the array indexes by the elements found there. You will see code that does, and does not use them.
Raku's min
/max
do take &by
parameters, but do not return a list when there are ties.
Raku's minpairs
/maxpairs
do return a list when there are ties, but do not take &by
parameters.
The Perl module List::UtilsBy does both, and is a great fit for this problem.
Final answer:
#!/usr/bin/env perl
use v5.36;
use experimental qw<for_list builtin>;
use builtin qw<indexed>;
use List::UtilsBy qw<max_by min_by>;
sub task2 ( @a ) {
my %h;
for my ($k, $v) (indexed @a) {
my $href = ( $h{$v} //= {} );
$href->{KEY } = $v;
$href->{COUNT} += 1;
$href->{FIRST} //= $k;
$href->{LAST } = $k;
}
$_->{SPAN} = $_->{LAST} - $_->{FIRST} + 1 for values %h;
my $best = min_by { $_->{FIRST} }
min_by { $_->{SPAN } }
max_by { $_->{COUNT} }
values %h;
return [ @a[ $best>{FIRST} .. $best>{LAST} ] ];
}
A few tricks used here:
- for_list and indexed are combined to have the same effect as Raku's for @a.kv -> $k,$v {...}.
- my $href = ( $h{$v} //= {} ); just grabs a reference when the sub-hash already exists in %h. Otherwise it creates the sub-hash in %h just before taking the reference. This is a well-known idiom here, but I should have split it into two lines for expository code.
- COUNT uses +=, so adding 1 gives us a running total of the times the element has been seen.
- FIRST uses //=, which only populates it once.
- LAST is overwritten on every sighting, so a single post-loop for can populate SPAN concisely and efficiently with the size of the slice reaching from the first index where the element was seen to the last such index; LAST has no need of //= for efficiency.
- max_by and min_by receive lists, and are called in list context until the last min_by, so ties are passed through. The last one cannot be a tie, because only one element could have any particular FIRST.
- The stacked max_by and min_by act as tie-breakers; if only one element makes up the array's Degree, then it will be the only hashref emitted by max_by, and the two min_by will just pass it through without needing to do other work. If more than one element create the same maximum Degree, then the lowest SPAN is found, and so on.

I also wrote a version that collapsed (most of) the building of %h into a single line using List::Categorize:
use List::Categorize qw<categorize>;
sub task2 ( @a ) {
my %h = categorize { $a[$_] } keys @a;
my $best = min_by { $_->[0] } # First
min_by { $_->[-1] - $_->[0] + 1 } # Span
max_by { 0+@{$_} } # Count
values %h;
my ($head, $tail) = @{$best}[0,-1];
return [ @a[ $head .. $tail ] ];
}
This may be less efficient than the earlier version; categorize
(like .classify
later in the Raku code) will keep all of the locations where each element is found, which is much more info than we strictly need.
More important to me is that handrolling %h
allows me to NAME the slots in each href. The %h
that categorize
populates requires comments to clarify the min_by
and max_by
code blocks.
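The same bookkeeping can be sketched in Python (my own illustration; the dict slots are named like the Perl version, and the negated count in the key tuple plays the role of max_by):

```python
def smallest_degree_slice(a: list) -> list:
    """Shortest contiguous slice whose degree equals the whole array's."""
    info: dict = {}
    for i, v in enumerate(a):
        h = info.setdefault(v, {"COUNT": 0, "FIRST": i, "LAST": i})
        h["COUNT"] += 1
        h["LAST"] = i
    best = min(info.values(),
               key=lambda h: (-h["COUNT"],                 # max count
                              h["LAST"] - h["FIRST"] + 1,  # min span
                              h["FIRST"]))                 # min first
    return a[best["FIRST"]: best["LAST"] + 1]
```

On the letter-transposed example A B C B D A B D A, it picks element B (count 3, span 6) over A (count 3, span 9) and returns B C B D A B.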
Final answer:
sub task2 ( @a ) {
my $best =
@a.pairs
.classify( {.value}, :as{.key} )
.map({ .value })
.max({ .elems, -(.tail - .head + 1), -.head });
return @a[ $best.head .. $best.tail ];
}
How does that work? Well, let's look at almost the same code, with intermediate variables for each line, like the Raku REPL could demonstrate:
my @a = <A B C B D A B D A>;
# 0 1 2 3 4 5 6 7 8 are the indexes.
my @b = @a.pairs;
[0 => A 1 => B 2 => C 3 => B
4 => D 5 => A 6 => B 7 => D 8 => A]
my @c = @b.classify( {.value}, :as{.key} );
[A => [0 5 8] D => [4 7] C => [2] B => [1 3 6]]
my @d = @c.map({ .value });
[[0 5 8] [4 7] [2] [1 3 6]]
(Without the map
of .value
, we would have to precede many methods with .value
in the lines below. Surprisingly, we do not need the .key
at all any more!)
Here we will diverge from the actual code, so you can see the values that .max
will operate on in the real code.
my @e = @d.map: {
my $r = .head .. .tail;
(.elems, -$r.elems, -.head, $r);
};
say @e.sort.reverse; # For illustration only; we will really
# use `max` instead of `sort.reverse`.
# Note that Span and First will be negated.
Count = Number of times it occurred
|
|  Span = size of the range from first to last occurrence
|  |
|  |  First position found
|  |  |
|  |  |  Range of first..last positions found
|  |  |  |
v  v  v  v
( (3 -6 -1 1..6)   # 'B' 3 times, span=6, first in pos 1, over 1..6
  (3 -9  0 0..8)   # 'A' 3 times, span=9, first in pos 0, over 0..8
  (2 -4 -4 4..7)   # 'D' twice,  is less than the degree of A or B
  (1 -1 -2 2..2) ) # 'C' once ,  is less than the degree of A or B
my $best = @e.max;
(3 -6 -1 1..6)
return @a[ $best[3].list ]; # Expand the Range 1..6
# to allow the slicing.
We are relying on the max
/min
/cmp
action on List that we used in Task#1.
The black-cat-level sneakiness here is that we can get the min of numbers by scanning for the max of their negated values.
So, we don't really need that big .map
to build the List of sorting fields; we can specify them inline as the &by
argument to .max
:
.max({ .elems, -(.tail - .head + 1), -.head });
Now $best
is the full list of indexes of the winning element, and we need the full range from the first place that element was found, to the last, even including other intervening elements (or it wouldn't be a "slice").
return @a[ $best.head .. $best.tail ];
By the way, these three produce the same result Pairs. Which do you like better?
@a.keys .classify( { @a[$_] } )
@a.pairs .classify( { .value }, :as{ .key } )
@a .classify( { $_ }, :as{ $++ } )
I'm fixing a hole where the rain gets in
And stops my mind from wandering
Where it will go
(La la la la la)
 Fixing a Hole, by The Beatles
(as covered by George Burns, after being rearranged by the Bee Gees. BWAHAHAHAHAHA!)
This task allowed me to revisit a technique that I was recently considering writing about: using patterns of output of idioms to recognize and visualize the decomposition.
Six is three, three is five, five is four, four is magic.
...
At the bottom of the code, I detail some quirks in the language. For a BASIC implementation, YaBasic works fine; I may even use it for an early introduction to programming with my grandchildren. "If it was good enough for me,...!"
For my own use, I never expect to inflict any variant of BASIC on myself again.
Things I really missed in YaBasic:
- eqv
- map()
- split (???)
- multi-line if; even though YaBasic has a one-line form, it gives no early indication that you are using it, so your eyes keep reading to end-of-line, and hopefully you see that then is missing.

You can tell I missed these things, from all the code I wrote to emulate them.
I am the author of one of the three Raku solutions to this problem, and had studied the other two extensively (12 years ago).
To make up for it, I tried to do some clever things by updating the FP solution with :k
and R+
into a oneliner, and by solving the broader problem of possible equilibrium at halfway points and finding smallest imbalances.
It's a Human insult. It's devastating. You're devastated right now. - Michael, "The Good Place"
(Still editing)
TWC Task #1 - Fortunate Numbers: Produce the first 8 Fortunate Numbers (unique and sorted).
Find the period of the 3rd Pisano Period.
Find all permutations missing from a list.
I have snipped most of the task permutations to make the code fit better in the blog post.
I don't want to write my own permutation code again.
Raku has the built-in .permutations method, and the (-) set-difference operator.
Perl has several CPAN modules for permutations; List::Permutor
is the first one Google returned, and its ->next
method allowed me to write a loop that was different than my Raku solution.
Python has Set
s, and the itertools
library handles permutations. itertools has lots of features that I miss when working outside of Raku, but they cannot be combined as nicely as Raku's, due to the basic(underlying(Python(function(call(syntax()))))).
The partial list of juggled letters is in @in
.
my @in = <PELR PREL ...snip...
Take the first word; we could have used any of them.
With no argument, .comb
splits into a list of single characters.
.permutations
gives all the possible rearrangements of those characters.
The ».
makes a hyper method call that will be run on each item in the list of permuted characters, join
ing them back into words.
my @all = @in[0].comb.permutations».join;
When we do a set operation on a List, it is automatically converted to a Set.
(-) is the ASCII version of the set difference operator; it returns a Set of items present in the left-hand Set that are absent from the right-hand Set.
Iterating over the resulting Set gives us Pair
objects, where the .value
is always True
, and the .key
is the part we are interested in.
say .key for @all (-) @in;
The Perl version of Raku's Set
is a hash,
initialized via my %h = map { $_ => 1 } @stuff;
.
use List::Permutor;
my @in = qw<PELR PREL ...snip...>;
my %in_set = map { $_ => 1 } @in;
my $permutor = List::Permutor->new( split '', $in[0] );
while ( my @letters = $permutor->next() ) {
my $word = join '', @letters;
say $word if not $in_set{$word};
}
The Python code mirrors the Perl solution. When given only a single string, list
breaks it into characters.
In hindsight, I really should have named the last variable word
instead of s
.
from itertools import permutations
input_words = "PELR PREL ...snip...".split()
input_set = set(input_words)
for i in permutations(list(input_words[0])):
s = "".join(i)
if s not in input_set:
print(s)
Compute the first 10 distinct prime Padovan Numbers.
I did not want to write my own is_prime() code again.
being requested then deliberately unused.
.squish
only suppresses consecutive duplicates, so it works efficiently with lazy lists.
constant @Padovan = 1, 1, 1, { sink $^c; $^a + $^b } ... *;
say @Padovan.grep(&is-prime).squish.head(10);
I was very happy to discover List::Lazy, which make easy both the generation of the Padovan Numbers, and filtering them for primes.
It was going to be more trouble than it was worth to make a version of Raku's .squish
that would work with List::Lazy
, so I used the foreknowledge that there is exactly one duplicate, and just called uniq
on the 10+1 numbers returned by >next
.
use List::Util qw<uniq head>;
use List::Lazy qw<lazy_list>;
use ntheory qw<is_prime>;
my $Padovan = lazy_list {
push @$_, $_->[-2] + $_->[-3];
shift @$_;
} [1, 1, 1];
my $prime_pad = $Padovan->grep( sub { is_prime($_) } );
say join ', ', uniq $prime_pad->next(11);
I am pleased with the brevity of the Padovan()
generator.
head()
is from an itertools
recipe; it is not my own code.
Comparing the final print
line to the second line of the Raku code makes me wish for Raku's flexibility of choosing method calls vs function calls.
from sympy import isprime
from itertools import islice
def Padovan():
p = [1, 1, 1]
while True:
p.append(p[-2] + p[-3])
yield p.pop(0)
def squish(a):
last = None
for i in a:
if i != last:
yield i
last = i
def head(n, iterable):
return list(islice(iterable, n))
print(head(10, squish(filter(isprime, Padovan()))))
Primes, Primes, everywhere a Prime.
Making all the Integers, alone or combined
Sieve N, to the square root.
Can't you see it's Prime?
- by the Five Man Electric Band
(Placeholder; still editing)
In which we see that you don't need all the Fibs, and have trouble turning 21.
Given an input $N, generate the first $N numbers for which the sum of their digits is a Fibonacci number.
(i.e. Generate OEIS A028840)
(i.e. Generate OEIS A287298)
Given a number base, derive the largest perfect square with no repeated digits and return it as a string. (For base>10, use ‘A’..‘Z’.)
e
, and realize that first
implies an ordering.
TWC Task #1 - Eban Numbers
Eban: A number that has no letter e
when spelled in English.
Since Lingua::EN::Numbers exists for both Perl and Raku, the solutions are just expanded one-liners. Both output:
2 4 6 30 32 34 36 40 42 44 46 50 52 54 56 60 62 64 66
use Lingua::EN::Numbers;
put grep *.&cardinal.contains('e').not, 0..100;
use Modern::Perl;
use Lingua::EN::Numbers qw<num2en>;
say join ' ', grep { !(num2en($_) =~ /e/) } 0..100;
Note: I missed using Perl's !~
operator, which would have been clearer than negating the whole expression: grep { num2en($_) !~ /e/ }
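Python has no Lingua::EN-style module in its standard library, so the same filter needs a hand-rolled speller for 0..100 (my own sketch, not part of the posted solutions):

```python
ONES = ("zero one two three four five six seven eight nine ten eleven twelve "
        "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
TENS = "~ ~ twenty thirty forty fifty sixty seventy eighty ninety".split()

def spell(n: int) -> str:
    """Spell the integers 0..100 in English."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("" if ones == 0 else "-" + ONES[ones])
    return "one hundred"

ebans = [n for n in range(101) if "e" not in spell(n)]
```

The resulting list matches the Perl/Raku output shown above.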
∛(a + b√c) + ∛(a - b√c) == 1
Most of my calculations are in the README.
Exact wording of the task is "Write a script to generate first 5 Cardano Triplets".
The order that triplets occur depends on the method you use to search for them. Therefore, if more than one method exists to search for Cardano triplets, the output of all the participants might not match each other.
Several methods exist to search for Cardano triplets.
a
,b
,c
are independent and not interchangeable; they each play a different role in the equation. So, we cannot use the common technique of "for a = 1..Inf", then have inner loops that bound b
and c
to <= a
, or we risk missing some triplets.
Whenever searching for Ntuples (in this case, triples) that satisfy some condition, a good approach is to see if one piece of the tuple can be calculated from the other pieces. If so, you can remove a loop!
c
can be calculated from a,b
.
WolframAlpha says that
Solve[CubeRoot[a + b Sqrt[c]] + CubeRoot[a  b Sqrt[c]] == 1, c]
has the solution c = (8a - 1)(a + 1)² / 27b².
.
So, if (8a - 1)(a + 1)²
is evenly divisible by 27b²
, then we have found a triplet and computed c
.
All triplets must have 𝑎 ≡ 2 𝑚𝑜𝑑 3
, which is the same as saying a = 3k+2
where k
is a positive integer.
(Stated halfway through Jean Marie's StackExchange answer to Parametrization of Cardano triplet)
Combining the last two points, we can replace a
with 3k+2
in (8a - 1)(a + 1)²
, and simplify it to 27(8k + 5)(k + 1)²
.
Since that has a factor of 27
, and every triple must be evenly divisible by 27b²
, then every valid a
(that is, 𝑎 ≡ 2 𝑚𝑜𝑑 3
) has at least one triple, where b=1
. If a
has square divisors, then there are multiple triples for that a
.
A straightforward coding of is_Cardano_triplet()
uses floatingpoint math then checks for an equality, which is the classic FP bug. We must compare to some tolerance, so that we do not exclude a valid triple.
say "Bad!" if sqrt(3)**2 != 3; # Output: Bad!
Since the math involves more than just division, even Raku's invisible Rat(s) will not save us; ($r - 1).abs < $some_tolerance, or maybe $r =~= 1.
Raku's isalmostequalto operator, =~=
uses the current $*TOLERANCE
setting, whose default of 1e-15 is too strict to find some triplets.
(Aside: if you do something like my $*TOLERANCE = 1e14;
and omit the minus sign, then suddenly every triple is a Cardano triple, because now one is almost equal to a trillion. Blew my mind until I saw my error.)
I found it concise to use two uncommon techniques: find_Cardano_triplet()
and is_Cardano_triplet()
both take a single argument of a triplet (notice the doubled parenthesis in the sub
declarations), and return either a complete triplet or Nil. You might expect find_Cardano_triplet()
to return c
, is_Cardano_triplet()
to return a Bool
and both to take 3 arguments. This allowed a slick combining of candidate sources and filters/generators via map
and grep
.
While I used $m % $n
in the Perl code to check for divisibility, Raku's Rat types always reduce to GCD, so I can just divide and then examine .denominator == 1
. The .narrow
method is needed so that I don't return a Rat
where one would expect an Int
.
Similar to using Unicode superscript numerals to express exponents ($n²
), you can use Unicode "vulgar fractions" in Raku. This was convenient for cube roots, although special handling was needed for negatives.
say [15 * ⅖, 27 ** ⅓]; # Output: [6 3]
sub cbrt (\n) { n.sign * n.abs ** ⅓ }
# Handles positives and negatives uniformly.
While Raku provides the Cross operator (and metaoperator), it does not work usefully with more than one infinite list, so we can only use it (or the equivalent three nested for
loops) if we already know the answer to the problem. I have always disliked this limitation (across many problems), but I did use the preknowledge this time for comparison, as constants @fixed_X_triplets
and @fixed_X_doublets
with 1..21
ranges.
I invented (probably rediscovered) an algorithm to iterate over Ntuple crossproducts of N infinite lazy lists, never generating (or even having to check for) duplicates. It works well in any language with generators, and is a good fit here, but the ordering that it produces the tuples looks irregular to humans.
I expanded the output count from 5 triplets to 6, to better show the differences in the orderings. Here are the 4 lines that my program produces (plus one from elsewhere).
(2 1 5) (5 2 13) (17 18 5) (17 9 20) ( 8 3 21) (11 4 29) # 1
(2 1 5) (5 2 13) ( 8 3 21) (17 9 20) (17 18 5) # 2
(2 1 5) (5 1 52) ( 5 2 13) ( 8 1 189) ( 8 3 21) (11 1 464) # 3
(2 1 5) (5 1 52) ( 8 1 189) (11 1 464) (14 1 925) (17 1 1620) # 4
(2 1 5) (5 2 13) ( 8 3 21) (11 4 29) (14 5 37) (17 6 45) # 5
#1: triplet_generator().grep( &is_Cardano_triplet )
"Smallest" for minimizing max(a,b,c).
#2: @fixed_X_triplets.grep( &is_Cardano_triplet )
Notice that it omits the 6th triplet, because the (1..21) range was just enough for 5; it "ran out" of triplets before finding the 6th!
#3: @fixed_X_doublets.map(&find_Cardano_triplet)
a,b can be smaller because the calculated c is not restricted by a range. This one will "run out" too.
#4: (^Inf).map({ find_Cardano_triplet( (3 * $_ + 2, 1), ) })
Locks b=1, and walks through all the a=3k+2, since we know they all work with b=1, finding the correct c. This is reasonable; you would get the same answer from a big-enough triple-loop if b was the outer variable; find_Cardano_triplet would return Nil for all the a that were not a ≡ 2 mod 3, and therefore be suppressed.
This line does not come from my challenge solution; Flavio Poletti adapted Jean Marie's complete answer (that I only managed half of, above) to derive a formula that, given b
, produces the smallestworking a
and its corresponding c
. Splendid! And also, yet another ordering, with a reasonable take on "first 5", that gives different output from any of my four. The cost in finding (5 2 13)
so easily, is never seeing other b=2
like (11 2 116)
.
say (1..Inf).map({ 3 * $_ - 1, $_, 8 * $_ - 3 }).head(6)
Inspired by Flavio, I wrote this just before posting. It was experimentally arrived-at (doublet_generator + find_Cardano_triplet + sort, OEIS), not algebraically derived, but it produces correct output (with tolerance=1e-12) for the first 1000 triplets, locking c=5.
say (1..Inf).map(-> \n { (15 * (2 * n - 1)² + 1)/8, (2 * n - 1)*(5 * n² - 5 * n + 2)/2, 5 }).head(6)
Output: ((2 1 5) (17 18 5) (47 80 5) (92 217 5) (152 459 5) (227 836 5))
Six first 5's, all for valid interpretations of "first". Surprising, and Fun!
I simply translated find_Cardano_triplet
from Raku, and called it with a=0..4, b=1
for the most humorous definition of "first 5".
sub find_Cardano_triplet ( $x, $y ) {
my $m = ($x + 1)**2 * (8*$x - 1);
my $n = 27 * $y * $y;
return if $m % $n;
return [ $x, $y, $m / $n ];
}
say sprintf('%3d %3d %3d', @{$_}) for map { find_Cardano_triplet( (3 * $_ + 2, 1), ) } 0..4;
Output is the same triples as #4 in the Raku section.
"Left-truncatable prime" is not fully defined by the task; are leading zeros valid?
e.g. 103 → 03 → 3 ; all are prime, but is 03
considered a "number"?
UPDATE: SF.pm's Raku Study Group just pointed out that task description does say "contains no 0", so the task was fully defined, and I had no need for the "filter" half of the solutions below. Mea culpa!
OEIS has separate pages for each definition, but both start with:
(2, 3, 5, 7, 13, 17, 23, 37, 43, 47):
A033664 …, 53, 67, 73, 83, 97, 103, 107, 113, …
A024785 …, 53, 67, 73, 83, 97, 113, …
Since one definition is more easily written as a filter, and the other definition is best written as a generator, I wrote both.
My Raku program starts with the "filter" approach:
sub is-left-truncatable-prime ( UInt \N --> Bool ) {
return (0 ..^ N.chars) # Start of each substring
.map( { N.substr($_) }) # All left-truncated substrings
.first({ .is-prime.not }) # Find the first non-prime
.defined.not; # If no non-primes, then True
}
constant @LTP_A033664 = grep &is-left-truncatable-prime, ^Inf;
The .first
method, combined with the laziness of the .map
method, allows an early return
without .substr
having to generate every single substring.
Rephrasing to use .all
is only slightly clearer, so I used .first
.
The "generator" approach starts with the single-digit primes as the first "generation", and prepends 1..9 to each element of gen#1 (and filters out non-primes) to create all-double-digit gen#2. Gen#3 will all be triple-digits, and so on.
my @LTP_A024785 = lazy gather loop {
state @current_gen = grep &is-prime, ^10;
.take for @current_gen;
@current_gen = grep &is-prime, ((1..9) X~ @current_gen);
}
Since each number in a generation has the same number of digits, and the first generation is in numeric order, each subsequent (1..9) X~ @current_gen generation will also be in order.
Both arrays are lazy, so they get their elements populated on demand. Final output is just:
put @LTP_A033664.head(20);
put @LTP_A024785.head(20);
2 3 5 7 13 17 23 37 43 47 53 67 73 83 97 103 107 113 137 167
2 3 5 7 13 17 23 37 43 47 53 67 73 83 97 113 137 167 173 197
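The generation-by-generation approach also ports naturally to a Python generator (my own sketch, not one of the posted solutions; is_prime is plain trial division):

```python
from itertools import islice

def is_prime(n: int) -> bool:
    """Trial-division primality check, good enough for small numbers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def left_truncatable_primes():
    """Yield A024785 (the no-zero-digit definition) in increasing order,
    one digit-length generation at a time."""
    gen = [p for p in range(10) if is_prime(p)]
    while gen:
        yield from gen
        # prepend each of 1..9 to every member of the current generation
        gen = [n for d in range(1, 10) for p in gen
               if is_prime(n := int(f"{d}{p}"))]
```

`list(islice(left_truncatable_primes(), 20))` reproduces the second output line above.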
My Perl program is just a conversion of the Raku, with adaptations to loosely replace the laziness that Perl lacks.
The ntheory (Number Theory) module has is_prime
, which saves me much code.
Some obvious problems:
- When checking A+B for presence in the list of pents, the value will not exist in the list yet.
- Raku has a .combinations method that we can invoke with (2), but it will not work with the lazy infinite list that the idiomatic for @pents would need.
.If we did already know how big to prebuild the pents, then the solution would be simple:
constant @pents = map { $_ * (3 * $_ - 1) div 2 }, 1..*;
my %p = @pents.head(2400).Set;
say @pents.head(2400).combinations(2).first: {
%p{ [+] .list } and
%p{ [R-] .list }
};
I don't want to do that.
If we "solve" the pent equation of n(3n-1)/2 = P via the quadratic formula (a=3, b=-1, c=-2P), we can write an is-pentagon-number sub, which would solve the first two problems!
sub is-pentagon-number ( \p ) {
my \inner = 24 * p + 1;
my \near_root = inner.sqrt.round;
return near_root ** 2 == inner
&& near_root % 6 == 5
}
This would work perfectly.
I chose not to do that, either.
Instead, let's call the sum of the two pents "A", and the difference "D". Then rearrange like so:
# Where A,B,C,D are all pentagonal numbers:
# B + C == A , B  C == D Original problem statement
# C == A  B , B  C == D Rearranged as two differences
# C == A  B , B(AB)==D (C,D), expressed only in A and B
So, if we find any two pentagonal numbers A,B where AB is pentagonal and B(AB) is pentagonal, then we have a solution. The desired numbers will be the inner two: (B,C).
With this reorganization, we will always be "looking backwards" into parts of @pents that have already been generated. The cost will be in generating all the way to A; a solution using is-pentagon-number would only need to generate to B.
My Raku program uses for @pents.kv as an outer loop, and for @pents.head(i) as the inner loop, to replicate the disallowed .combinations(2).
.
sub find-first-plus-and-minus-pentagon-numbers ( ) {
constant @pents = map -> \n { n * (3*n - 1) div 2 }, 1..*;
my %p;
for @pents.kv -> \i, \A {
%p{A} = 1;
for @pents.head(i) -> \B {
my \C = A - B;
my \D = B - C;
return B, C if %p{C} and %p{D};
}
}
}
put find-first-plus-and-minus-pentagon-numbers();
The three body lines of the inner loop could be replaced with one line (return B, C if %p{A - B} and %p{B - (A - B)}), and then the whole inner loop could become a return … with first {…} statement, but then I suspect it would "spark joy" in no one.
Aside: SF.pm's Raku Study Group just pointed out that the constant line uses a sigil-less n, which means it gets defined as -> \n, which confusingly looks like a newline character. Good point!
My Perl solution needed almost no structural changes from the Raku, because the lazy generation of the pents can just be appended at the same pace as the outer loop.
sub find_first_plus_and_minus_pentagon_numbers ( ) {
my @pents;
my %p;
for ( my $i = 1 ; ; $i++ ) {
my $A = $i * (3*$i - 1) / 2; # Pentagon number
for my $B (@pents) {
my $C = $A - $B;
my $D = $B - $C;
return $B, $C if $p{$C} and $p{$D};
}
$p{$A} = 1;
push @pents, $A;
}
}
say join ' ', find_first_plus_and_minus_pentagon_numbers();
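The same backward-looking scan can be sketched in Python (my own illustration of the approach above, following the C == A - B, D == B - C rearrangement; it finds the classic pair):

```python
def find_first_plus_and_minus_pentagon_numbers():
    """Scan candidate sums A in increasing pentagonal order; for each
    earlier pentagonal B, check that C = A - B and D = B - C are both
    pentagonal. Then (B, C) sum to A and differ by D."""
    pents, seen = [], set()
    i = 1
    while True:
        A = i * (3 * i - 1) // 2  # next pentagon number
        for B in pents:
            C = A - B
            D = B - C
            if C in seen and D in seen:
                return B, C
        seen.add(A)
        pents.append(A)
        i += 1
```

It returns the well-known pair 7042750 and 1560090, whose sum 8602840 and difference 5482660 are both pentagonal.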
Five is right out. - Monty Python and the Holy Grail