June 2017 Archives

Perl 6: Seqs, Drugs, And Rock'n'Roll (Part 2)

Read this article on Perl6.Party

This is the second part in the series! Be sure you read Part I first where we discuss what Seqs are and how to .cache them.

Today, we'll take the Seq apart and see what's up in it; what drives it; and how to make it do exactly what we want.

PART II: That Iterated Quickly

The main piece that makes a Seq do its thing is an object that does the Iterator role. It's this object that knows how to generate the next value, whenever we try to pull a value from a Seq, or push all of its values somewhere, or simply discard all of the remaining values.

Keep in mind that you never need to use an Iterator's methods directly when making use of a Seq as a source of values; they are called under the hood by various Perl 6 constructs. The time to call those methods yourself is usually when you're making an Iterator that's fed by another Iterator, as we'll see.

Pull my finger...

In its most basic form, an Iterator object needs to provide only one method: .pull-one

my $seq := Seq.new: class :: does Iterator {
    method pull-one {
        return $++ if $++ < 4;
        IterationEnd
    }
}.new;

.say for $seq;

# OUTPUT:
# 0
# 1
# 2
# 3

Above, we create a Seq using its .new method, which expects an instantiated Iterator. For that we use an anonymous class that does the Iterator role and provides a single .pull-one method, which uses a pair of anonymous state variables to generate 4 numbers, one per call, and then returns the IterationEnd constant to signal that the Iterator does not have any more values to produce.
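If the pair of $++ variables looks mysterious: an anonymous state variable keeps its value between calls to the enclosing block, so each call sees the previously incremented number. A tiny standalone demo:

sub counter { $++ }   # anonymous state variable: starts at 0, increments on each call
say counter for ^3;

# OUTPUT:
# 0
# 1
# 2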

The Iterator protocol forbids attempting to fetch more values from an Iterator once it has generated the IterationEnd value, so your Iterator's methods may assume they'll never get called again past that point.

Meet the rest of the gang

The Iterator role defines several more methods, all of which are optional to implement, and most of which have some sort of default implementation. The extra methods are there for optimization purposes that let you take shortcuts depending on how the sequence is iterated over.

Let's build a Seq that hashes a bunch of data using the Crypt::Bcrypt module (run zef install Crypt::Bcrypt to install it). We'll start with the most basic Iterator that provides only a .pull-one method and then we'll optimize it to perform better in different circumstances.

use Crypt::Bcrypt;

sub hash-it (*@stuff) {
    Seq.new: class :: does Iterator {
        has @.stuff;
        method pull-one {
            @!stuff ?? bcrypt-hash @!stuff.shift, :15rounds
                    !! IterationEnd
        }
    }.new: :@stuff
}

my $hashes := hash-it <foo bar ber>;
for $hashes {
    say "Fetched value #{++$} {now - INIT now}";
    say "\t$_";
}

# OUTPUT:
# Fetched value #1 2.26035863
#     $2b$15$ZspycxXAHoiDpK99YuMWqeXUJX4XZ3cNNzTMwhfF8kEudqli.lSIa
# Fetched value #2 4.49311657
#     $2b$15$GiqWNgaaVbHABT6yBh7aAec0r5Vwl4AUPYmDqPlac.pK4RPOUNv1K
# Fetched value #3 6.71103435
#     $2b$15$zq0mf6Qv3Xv8oIDp686eYeTixCw1aF9/EqpV/bH2SohbbImXRSati

In the above program, we wrapped all the Seq-making stuff inside a sub called hash-it. We slurp all the positional arguments given to that sub and instantiate a new Seq with an anonymous class as the Iterator. We use the @!stuff attribute to store the stuff we need to hash. In the .pull-one method we check if we still have @!stuff to hash; if we do, we shift a value off @!stuff and hash it, using 15 rounds to make the hashing algo take some time. Lastly, we added a say statement to measure how long the program has been running at each iteration, using two now calls, one of which is run with the INIT phaser. From the output, we see it takes about 2.2 seconds to hash a single string.

Skipping breakfast

Using a for loop is not the only way to use the Seq returned by our hashing routine. What if some user doesn't care about the first few hashes? For example, they could write a piece of code like this:

my $hash = hash-it(<foo bar ber>).skip(2).head;
say "Made hash {now - INIT now}";
say bcrypt-match 'ber', $hash;

# OUTPUT:
# Made hash 6.6813790
# True

We've used the Crypt::Bcrypt module's bcrypt-match routine to ensure the hash we got matches our third input string, and it does, but look at the timing in the output. It took 6.7s to produce that single hash!

In fact, things look worse the more items the user tries to skip. If the user calls our hash-it with a ton of items and then tries to .skip the first 1,000,000 elements to get at the 1,000,001st hash, they'll be waiting about 25 days for that single hash to be produced!!

The reason is our basic Iterator only knows how to .pull-one, so the skip operation still generates the hashes, just to discard them. Since the values our Iterator generates do not depend on previous values, we can implement one of the optimizing methods to skip iterations cheaply:

use Crypt::Bcrypt;

sub hash-it (*@stuff) {
    Seq.new: class :: does Iterator {
        has @.stuff;
        method pull-one {
            @!stuff ?? bcrypt-hash @!stuff.shift, :15rounds
                    !! IterationEnd
        }
        method skip-one {
            return False unless @!stuff;
            @!stuff.shift;
            True
        }
    }.new: :@stuff
}

my $hash = hash-it(<foo bar ber>).skip(2).head;
say "Made hash {now - INIT now}";
say bcrypt-match 'ber', $hash;

# OUTPUT:
# Made hash 2.2548012
# True

We added a .skip-one method to our Iterator that, instead of hashing a value, simply discards it. It needs to return a truthy value if it was able to skip a value (i.e. we had a value we'd otherwise generate in .pull-one, but we skipped it), or a falsy value if there weren't any values to skip.

Now, the .skip method called on our Seq uses our new .skip-one method to cheaply skip through 2 items and then uses .pull-one to generate the third hash. Look at the timing now: 2.2s; the time it takes to generate a single hash.

However, we can kick it up a notch. While we won't notice a difference with our 3-item Seq, the user who's skipping 1,000,000 items to get at the 1,000,001st hash wouldn't just pay the 2.2s it takes to generate that single hash; they'd also have to wait for 1,000,000 calls to .skip-one and @!stuff.shift. To optimize skipping over a bunch of items, we can implement the .skip-at-least method (for brevity, just our Iterator class is shown):

class :: does Iterator {
    has @.stuff;
    method pull-one {
        @!stuff
            ?? bcrypt-hash( @!stuff.shift, :15rounds )
            !! IterationEnd
    }
    method skip-one {
        return False unless @!stuff;
        @!stuff.shift;
        True
    }
    method skip-at-least (Int \n) {
        n == @!stuff.splice: 0, n
    }
}

The .skip-at-least method takes an Int of the number of items to skip. It should skip as many as it can, and return a truthy value if it was able to skip that many items, or a falsy value if the number of skipped items was fewer. Now, the user who skips 1,000,000 items will only have to suffer through a single .splice call.

For the sake of completeness, there's another skipping method defined by Iterator: .skip-at-least-pull-one. It follows the same semantics as .skip-at-least, except with .pull-one semantics for the return value. Its default implementation just calls those two methods, short-circuiting and returning IterationEnd if .skip-at-least returned a falsy value, and that default implementation is very likely good enough for all Iterators. The method exists as a convenience for users who call methods on Iterators directly and (at the moment) it's not used in core Rakudo Perl 6 by any methods that can be called on users' Seqs.
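For reference, that default behaviour amounts to roughly this (a sketch, not the exact core code):

method skip-at-least-pull-one (Int \n) {
    self.skip-at-least(n)   # try to skip n items first...
        ?? self.pull-one    # ...if that worked, produce the next value
        !! IterationEnd     # ...otherwise we ran out of values while skipping
}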

A so, so count...

There are two more optimization methods—.bool-only and .count-only—that do not have a default implementation. The first one returns True or False, depending on whether there are still items that can be generated by the Iterator (True if yes). The second one returns the number of items the Iterator can still produce. Importantly, these methods must be able to do that without exhausting the Iterator. In other words, after finding these methods implemented, the user of our Iterator can call them and afterwards should still be able to .pull-one all of the items, as if the methods were never called.

Let's make an Iterator that will take an Iterable and .rotate it once per iteration of our Iterator until its tail becomes its head. Basically, we want this:

.say for rotator 1, 2, 3, 4;

# OUTPUT:
# [2 3 4 1]
# [3 4 1 2]
# [4 1 2 3]

This iterator will serve our purpose of studying the two Iterator methods. For a less "made-up" example, look at the implementations of the iterators behind the combinations and permutations routines in the Perl 6 compiler's source code.

Here's a sub that creates our Seq with our shiny Iterator along with some code that operates on it and some timings for different stages of the program:

sub rotator (*@stuff) {
    Seq.new: class :: does Iterator {
        has int $!n;
        has int $!steps = 1;
        has     @.stuff is required;

        submethod TWEAK { $!n = @!stuff − 1 }

        method pull-one {
            if $!n-- > 0 {
                LEAVE $!steps = 1;
                [@!stuff .= rotate: $!steps]
            }
            else {
                IterationEnd
            }
        }
        method skip-one {
            $!n > 0 or return False;
            $!n--; $!steps++;
            True
        }
        method skip-at-least (Int \n) {
            if $!n > all 0, n {
                $!steps += n;
                $!n     −= n;
                True
            }
            else {
                $!n = 0;
                False
            }
        }
    }.new: stuff => [@stuff]
}

my $rotations := rotator ^5000;

if $rotations {
    say "Time after getting Bool: {now - INIT now}";

    say "We got $rotations.elems() rotations!";
    say "Time after getting count: {now - INIT now}";

    say "Fetching last one...";
    say "Last one's first 5 elements are: $rotations.tail.head(5)";
    say "Time after getting last elem: {now - INIT now}";
}

# OUTPUT:
# Time after getting Bool: 0.0230339
# We got 4999 rotations!
# Time after getting count: 26.04481484
# Fetching last one...
# Last one's first 5 elements are: 4999 0 1 2 3
# Time after getting last elem: 26.0466234

First things first, let's take a look at what we're doing in our Iterator. We take an Iterable (in our call to the sub, we use a Range object out of which we can milk 5000 elements), shallow-clone it (using the [ ... ] operator) and keep that clone in the @!stuff attribute of our Iterator. During object instantiation, inside the TWEAK submethod, we also save into the $!n attribute how many rotations we can produce: one fewer than the number of items in @!stuff.

For each .pull-one of the Iterator, we .rotate our @!stuff attribute, storing the rotated result back in it, as well as making a shallow clone of it, which is what we return for the iteration.

We also already implemented the .skip-one and .skip-at-least optimization methods, where we use a private $!steps attribute to alter how many steps the next .pull-one will .rotate our @!stuff by. Whenever .pull-one is called, we simply reset $!steps to its default value of 1 using the LEAVE phaser.
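If the LEAVE phaser is new to you: it runs its code whenever the enclosing block is exited, no matter how. A tiny demo, unrelated to the rotator itself:

sub demo { LEAVE say 'leaving!'; return 42 }
say demo;

# OUTPUT:
# leaving!
# 42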

Let's check out how this thing performs! We store our precious Seq in $rotations variable that we first check for truthiness, to see if it has any elements in it at all; then we tell the world how many rotations we can fish out of that Seq; lastly, we fetch the last element of the Seq and (for screen space reasons) print the first 5 elements of the last rotation.

All three steps (checking .Bool, checking .elems, and fetching the last item with .tail) are timed, and the results aren't that pretty. While .Bool was relatively quick to complete, the .elems call took ages (26s)! That's actually not all of the damage. Recall from PART I of this series that both .Bool and .elems cache the Seq unless special methods are implemented in the Iterator. This means each of those rotations we made is still there in memory, using up space for nothing! What are we to do? Let's try implementing those special methods .Bool and .elems are looking for!

The only thing we need to change is to add two extra methods to our Iterator that determine how many elements we can still generate (.count-only) and whether we have any elements left to generate (.bool-only):

method count-only { $!n     }
method bool-only  { $!n > 0 }

For the sake of completeness, here is our previous example, with these two methods added to our Iterator:

sub rotator (*@stuff) {
    Seq.new: class :: does Iterator {
        has int $!n;
        has int $!steps = 1;
        has     @.stuff is required;

        submethod TWEAK { $!n = @!stuff − 1 }

        method count-only { $!n     }
        method bool-only  { $!n > 0 }

        method pull-one {
            if $!n-- > 0 {
                LEAVE $!steps = 1;
                [@!stuff .= rotate: $!steps]
            }
            else {
                IterationEnd
            }
        }
        method skip-one {
            $!n > 0 or return False;
            $!n--; $!steps++;
            True
        }
        method skip-at-least (\n) {
            if $!n > all 0, n {
                $!steps += n;
                $!n     −= n;
                True
            }
            else {
                $!n = 0;
                False
            }
        }
    }.new: stuff => [@stuff]
}

my $rotations := rotator ^5000;

if $rotations {
    say "Time after getting Bool: {now - INIT now}";

    say "We got $rotations.elems() rotations!";
    say "Time after getting count: {now - INIT now}";

    say "Fetching last one...";
    say "Last one's first 5 elements are: $rotations.tail.head(5)";
    say "Time after getting last elem: {now - INIT now}";
}

# OUTPUT:
# Time after getting Bool: 0.0087576
# We got 4999 rotations!
# Time after getting count: 0.00993624
# Fetching last one...
# Last one's first 5 elements are: 4999 0 1 2 3
# Time after getting last elem: 0.0149863

The code is nearly identical, but look at those sweet, sweet timings! Our entire program runs about 1,733 times faster because our Seq can figure out if and how many elements it has without having to iterate or rotate anything. The .tail call sees our optimization (side note: that's actually very recent) and it too doesn't have to iterate over anything and can just use our .skip-at-least optimization to skip to the end. And last but not least, our Seq is no longer being cached, so the only things kept around in memory are the things we care about. It's a huge win-win-win for very little extra code.

But wait... there's more!

Push it real good...

The Seqs we looked at so far did heavy work: each value took a relatively long time to generate. However, Seqs are quite versatile, and at times you'll find that generating a value is cheaper than calling .pull-one and storing that value somewhere. For cases like that, there are a few more methods we can implement to make our Seq perform better.

For the next example, we'll stick with the basics. Our Iterator will generate a sequence of positive even numbers up to the wanted limit. Here's what the call to the sub that makes our Seq looks like:

say evens-up-to 20; # OUTPUT: (2 4 6 8 10 12 14 16 18)

And here's all of the code for it. The particular operation we'll be doing is storing all the values in an Array, by assigning to it:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one { ($!n += 2) < $!limit ?? $!n !! IterationEnd }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;

say now - INIT now; # OUTPUT: 1.00765440

For a limit of 1.7 million, the code takes around a second to run. However, all we do in our Iterator is add some numbers together, so a lot of the time is likely lost in .pull-oneing the values and adding them to the Array, one by one.

In cases like this, implementing a custom .push-all method in our Iterator can help. The method receives one argument: a reification target. We're pretty close to bare "metal" now, so we can't do anything fancy with the reification target object other than call the .push method on it with a single value to add to the target. The .push-all always returns IterationEnd, since it exhausts the Iterator, so we'll just stick that value right into the return constraint of the method's Signature:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method push-all (\target --> IterationEnd) {
            target.push: $!n while ($!n += 2) < $!limit;
        }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;
say now - INIT now; # OUTPUT: 0.91364949

Our program is now 10% faster; not a lot. However, since we're now doing all the work in a single .push-all call, we no longer need attributes to keep state between method calls, so we can shave off a bit of time by using lexical variables instead of accessing the object's attributes on every iteration. We'll make them native int types for even more speed. Also (at least currently), the += metaoperator is more expensive than a simple assignment and a regular +; since we're trying to squeeze out every last bit of juice here, let's take advantage of that as well. So what we have now is this:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method push-all (\target --> IterationEnd) {
            my int $limit = $!limit;
            my int $n     = $!n;
            target.push: $n while ($n = $n + 2) < $limit;
            $!n = $n;
        }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;
say now - INIT now; # OUTPUT: 0.6688109

There we go. Now our program is 1.5 times faster than the original, thanks to .push-all. The gain isn't as dramatic as what we saw with the other methods, but it can come in quite handy when you need it.

There are a few more .push-* methods you can implement to, for example, do something special when your Seq is used in code like...

for $your-seq -> $a, $b, $c { ... }

...where the Iterator would be asked to .push-exactly three items. The idea behind them is similar to .push-all: you push stuff onto the reification target. Their utility and performance gains are even smaller and useful only in particular situations, so I won't be covering them.

It's worth noting the .push-all can be used only with Iterators that are not lazy, since... well... it expects you to push all the items. And what exactly are lazy Iterators? I'm so glad you asked!

A quick brown fox jumped over the lazy Seq

Let's pare our previous even-numbers Seq down to the basics and make it generate an infinite list of even numbers, using an anonymous state variable:

sub evens {
    Seq.new: class :: does Iterator {
        method pull-one { $ += 2 }
    }.new
}

put evens

Since the list is infinite, it'd take us an infinite amount of time to fetch all of its values. So what exactly happens when we run the code above? It... quite predictably hangs when the put routine is called; it sits and patiently waits for our infinite Seq to complete. The same issue occurs when trying to assign our Seq to a @-sigiled variable:

my @evens = evens # hangs

Or even when trying to pass our Seq to a sub with a slurpy parameter:

sub meows (*@evens) { say 'Got some evens!' }
meows evens # hangs

That's quite an annoying problem. Fortunately, there's a very easy solution for it. But first, a minor detour to the land of naming clarification!

A rose by any other name would laze as sweet

In Perl 6, some things are, or can be made, "lazy". While that evokes the concept of on-demand ("lazy") evaluation, which is ubiquitous in Perl 6, being lazy in Perl 6 isn't just about that. If something is-lazy, it always wants to be evaluated lazily, fetching only as many items as needed, even in "mostly lazy" Perl 6 constructs that would otherwise eagerly consume from sources that merely do on-demand generation.

For example, a sequence of lines read from a file would want to be lazy, as reading them all in at once has the potential to use up all the RAM. An infinite sequence would also want to be is-lazy because an eager evaluation would cause it to hang, as the sequence never completes.

So a thing that is-lazy in Perl 6 can be thought of as being infinite. Sometimes it actually will be infinite, but even if it isn't, it being lazy means it has similar consequences if used eagerly (too much CPU time used, too much RAM, etc).
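You can ask any object whether it considers itself lazy; a couple of quick checks:

say (1..10).is-lazy; # OUTPUT: False
say (1..∞).is-lazy;  # OUTPUT: True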


Now back to our infinite list of even numbers. It sounds like all we have to do is make our Seq lazy, and we do that by implementing an .is-lazy method on our Iterator that simply returns True:

sub evens {
    Seq.new: class :: does Iterator {
        method pull-one { $ += 2 }
        method is-lazy (--> True) {}
    }.new
}

sub meows (*@evens) { say 'Got some evens!' }

put         evens; # OUTPUT: ...
my @evens = evens; # doesn't hang
meows       evens; # OUTPUT: Got some evens!

The put routine now detects it's dealing with something terribly long and just outputs some dots. Assignment to the Array no longer hangs (and will instead reify on demand). And the call to a slurpy doesn't hang either and will also reify on demand.
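For instance, indexing into the lazily-reified Array generates only as many values as are actually needed (assuming the evens sub above):

my @evens = evens;
say @evens[^5]; # OUTPUT: (2 4 6 8 10)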

There's one more Iterator optimization method left that we should discuss...

A Sinking Ship

Perl 6 has sink context, similar to "void" context in other languages, which means a value is being discarded:

42;

# OUTPUT:
# WARNINGS for ...:
# Useless use of constant integer 42 in sink context (line 1)

The constant 42 in the above program is in sink context—its value isn't used by anything—and since it's nearly pointless to have it like that, the compiler warns about it.

Not all sinkage is bad, however, and sometimes you may find that the gorgeous Seq you worked so hard on is ruthlessly being sunk by the user! Let's take a look at what happens when we sink one of our previous examples, the Seq that generates even numbers up to a limit:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

say now - INIT now; # OUTPUT: 5.87409072

Ouch! Iterating our Seq has no side-effects outside of the Iterator that it uses, which means it took the program almost six seconds to do absolutely nothing.
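The slowdown comes from the default .sink-all behaviour, which amounts to roughly this (a sketch, not the exact core code): pull every single value, just to throw it away.

method sink-all (--> IterationEnd) {
    Nil until self.pull-one =:= IterationEnd   # pull (and discard) everything
}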

We can remedy the situation by implementing our own .sink-all method. Pulling every value is a sensible default (since iterating a Seq may have useful side effects), but it's not what we want for our Seq. So let's implement a .sink-all that does nothing!

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method sink-all(--> IterationEnd) {}
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

say now - INIT now; # OUTPUT: 0.0038638

We added a single line of code and made our program 1,520 times faster—the perfect speed up for a program that does nothing!

However, doing nothing is not the only thing .sink-all is good for. Use it for cleanup that would usually be done at the end of iteration (e.g. closing a file handle the Iterator was using). Or simply set the state of the system to what it would be at the end of the iteration (e.g. .seek a file handle to the end, for a sunk Seq that produces lines from it). Or, as an alternative idea, how about warning the user their code might contain an error:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method sink-all(--> IterationEnd) {
            warn "Oh noes! Looks like you sunk all the evens!\n"
                ~ 'Why did you make them in the first place?'
        }
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

# OUTPUT:
# Oh noes! Looks like you sunk all the evens!
# Why did you make them in the first place?
# ...

That concludes our discussion on optimizing your Iterators. Now, let's talk about using Iterators others have made.

It's a marathon, not a sprint

With all the juicy knowledge about Iterators and Seqs we now possess, we can probably see how this piece of code manages to work without hanging, despite being given an infinite Range of numbers:

.say for ^∞ .grep(*.is-prime).map(* ~ ' is a prime number').head: 5;

# OUTPUT:
# 2 is a prime number
# 3 is a prime number
# 5 is a prime number
# 7 is a prime number
# 11 is a prime number

The infinite Range probably is-lazy. That .grep probably .pull-ones until it finds a prime number. The .map .pull-ones each of the .grep's values and modifies them, and .head allows at most 5 values to be .pull-oned from it.

In short what we have here is a pipeline of Seqs and Iterators where the Iterator of the next Seq is based on the Iterator of the previous one. For our study purposes, let's cook up a Seq of our own that combines all of the steps above:

sub first-five-primes (*@numbers) {
    Seq.new: class :: does Iterator {
        has     $.iter;
        has int $!produced = 0;
        method pull-one {
            $!produced++ == 5 and return IterationEnd;
            loop {
                my $value := $!iter.pull-one;
                return IterationEnd if $value =:= IterationEnd;
                return "$value is a prime number" if $value.is-prime;
            }
        }
    }.new: iter => @numbers.iterator
}

.say for first-five-primes ^∞;

# OUTPUT:
# 2 is a prime number
# 3 is a prime number
# 5 is a prime number
# 7 is a prime number
# 11 is a prime number

Our sub slurps up its positional arguments and then calls the .iterator method on the @numbers Iterable. This method is available on all Perl 6 objects and lets us interface with the object using Iterator methods directly.
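As a quick illustration of that interface, here's what calling those methods directly looks like on a plain List:

my $iter := (10, 20, 30).iterator;
say $iter.pull-one; # OUTPUT: 10
say $iter.pull-one; # OUTPUT: 20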

We save the @numbers's Iterator in one of the attributes of our Iterator as well as create another attribute to keep track of how many items we produced. In the .pull-one method, we first check whether we already produced the 5 items we need to produce, and if not, we drop into a loop that calls .pull-one on the other Iterator, the one we got from @numbers Array.

We recently learned that if the Iterator does not have any more values for us, it will return the IterationEnd constant. A constant whose job is to signal the end of iteration is finicky to deal with, as you can imagine. To detect it, we need to ensure we use the binding (:=) operator, not the assignment (=) operator, when storing the value we get from .pull-one in a variable. Assignment would wrap the value in a Scalar container, and the container identity (=:=) check against IterationEnd would then never match; that operator is pretty much the only one that will accept such a monstrosity, so we can't stuff the value we .pull-one into just any container we please.
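Here's a small sketch of the difference, using the iterator of an empty List, whose very first .pull-one is already IterationEnd:

my $assigned = ().iterator.pull-one;  # assignment wraps the value in a Scalar container...
say $assigned =:= IterationEnd;       # OUTPUT: False (the container is not the constant)

my $bound   := ().iterator.pull-one;  # ...while binding keeps the returned value as-is
say $bound   =:= IterationEnd;        # OUTPUT: True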

In our example program, if we do find that we received IterationEnd from the source Iterator, we simply return it to indicate we're done. If not, we repeat the process until we find a prime number, which we then put into our desired string and that's what we return from our .pull-one.

All the rest of the Iterator methods we've learned about can be called on the source Iterator in a similar fashion as we called .pull-one in our example.

Conclusion

Today, we've learned a whole ton of stuff! We now know that Seqs are powered by Iterator objects and we can make custom iterators that generate any variety of values we can dream about.

The most basic Iterator has only a .pull-one method that generates a single value per call and returns IterationEnd when it has no more values to produce. It's not permitted to call .pull-one again once it has generated IterationEnd, so we can write our .pull-one methods with the expectation that this will never happen.

There are plenty of optimization opportunities a custom Iterator can take advantage of. If it can cheaply skip through items, it can implement .skip-one or .skip-at-least methods. If it can know how many items it'll produce, it can implement .bool-only and .count-only methods that can avoid a ton of work and memory use when only certain values of a Seq are needed. And for squeezing the very last bit of performance, you can take advantage of .push-all and other .push-* methods that let you push values onto the target directly.

When your Iterator .is-lazy, things will treat it with extra care and won't try to fetch all of the items at once. And we can use the .sink-all method to avoid work or warn the user of potential mistakes in their code, when our Seq is being sunk.

Lastly, since we know how to make Iterators and what their methods do, we can make use of Iterators coming from other sources and call methods on them directly, manipulating them just how we want to.

We now have all the tools to work with Seq objects in Perl 6. In PART III of this series, we'll learn how to compactify all of that knowledge and skillfully build Seqs with just a line or two of code, using the sequence operator.

Stay tuned!

-Ofun

Perl 6: Seqs, Drugs, And Rock'n'Roll

Read this article on Perl6.Party

I vividly recall my first steps in Perl 6 were just a couple of months before the first stable release of the language in December 2015. Around that time, Larry Wall was making a presentation and showed a neat feature—the sequence operator—and it got me amazed about just how powerful the language is:

# First 12 even numbers:
say (2, 4 … ∞)[^12];      # OUTPUT: (2 4 6 8 10 12 14 16 18 20 22 24)

# First 10 powers of 2:
say (2, 2², 2³ … ∞)[^10]; # OUTPUT: (2 4 8 16 32 64 128 256 512 1024)

# First 13 Fibonacci numbers:
say (1, 1, *+* … ∞)[^13]; # OUTPUT: (1 1 2 3 5 8 13 21 34 55 89 144 233)

The ellipsis (…) is the sequence operator and the stuff it makes is the Seq object. And now, a year and a half after Perl 6's first release, I hope to pass on my amazement to a new batch of future Perl 6 programmers.

This is a 3-part series. In PART I of this article we'll talk about what Seqs are and how to make them without the sequence operator. In PART II, we'll look at the thing-behind-the-curtain of Seqs: the Iterator type, and how to make Seqs from our own Iterators. Lastly, in PART III, we'll examine the sequence operator in all of its glory.

Note: I will be using all sorts of fancy Unicode operators and symbols in this article. If you don't like them, consult the Texas Equivalents page for the ASCII-only way to type those elements.

PART I: What the Seq is all this about?

Seq stands for Sequence, and the Seq object provides a one-shot way to iterate over a sequence of stuff. New values can be generated on demand—in fact, it's perfectly possible to create infinite sequences—and already-generated values are discarded, never to be seen again, although there's a way to cache them, as we'll see.

Sequences are driven by Iterator objects that are responsible for generating values. However, in many cases you don't have to create Iterators directly or use their methods while iterating a Seq. There are several ways to make a Seq, and in this section we'll talk about the gather/take construct.

I gather you'll take us to...

The gather statement and take routine are similar to "generators" and the "yield" statement in some other languages:

my $seq-full-of-sunshine := gather {
    say  'And nobody cries';
    say  'there’s only butterflies';

    take 'me away';
    say  'A secret place';
    say  'A sweet escape';

    take 'meee awaaay';
    say  'To better days'    ;

    take 'MEEE AWAAAAYYYY';
    say  'A hiding place';
}

Above, we have a code block with lines of song lyrics, some of which we say (print to the screen) and others we take (to be gathered). Just like .say, .take can be used as either a method or a subroutine; there's no real difference, merely convenience.
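In other words, both of these forms deliver their value to the enclosing gather (a quick sketch):

my $seq := gather { take 'as a sub'; 'as a method'.take };
say $seq; # OUTPUT: (as a sub as a method)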

Now, let's iterate over $seq-full-of-sunshine and watch the output:

for $seq-full-of-sunshine {
    ENTER say '▬▬▶ Entering';
    LEAVE say '◀▬▬ Leaving';

    say "❚❚ $_";
}

# OUTPUT:
# And nobody cries
# there’s only butterflies
# ▬▬▶ Entering
# ❚❚ me away
# ◀▬▬ Leaving
# A secret place
# A sweet escape
# ▬▬▶ Entering
# ❚❚ meee awaaay
# ◀▬▬ Leaving
# To better days
# ▬▬▶ Entering
# ❚❚ MEEE AWAAAAYYYY
# ◀▬▬ Leaving
# A hiding place

Notice how the say statements we had inside the gather block didn't actually get executed until we needed to iterate over a value that a take took after those particular say lines. The block got stopped and then continued only when more values from the Seq were requested. The last say call didn't have any more takes after it, so it got executed when the iterator was asked for more values after the last take.

That's exceptional!

The take routine works by throwing a CX::Take control exception that will percolate up the call stack until something takes care of it. This means you can feed a gather not just from an immediate block, but from a bunch of different sources, such as routine calls:

multi what's-that (42)                     { take 'The Answer'            }
multi what's-that (Int $ where *.is-prime) { take 'Tis a prime!'          }
multi what's-that (Numeric)                { take 'Some kind of a number' }

multi what's-that   { how-good-is $^it                   }
sub how-good-is ($) { take rand > ½ ?? 'Tis OK' !! 'Eww' }

my $seq := gather map &what's-that, 1, 31337, 42, 'meows';

.say for $seq;

# OUTPUT:
# Some kind of a number
# Tis a prime!
# The Answer
# Eww

Once again, we iterated over our new Seq with a for loop, and you can see that take called from different multies and even nested sub calls still delivered the value to our gather successfully.

The only limitation is you can't gather takes done in another Promise or in code manually cued in the scheduler:

gather await start take 42;
# OUTPUT:
# Tried to get the result of a broken Promise
#   in block <unit> at test.p6 line 2
#
# Original exception:
#     take without gather

gather $*SCHEDULER.cue: { take 42 }
await Promise.in: 2;
# OUTPUT: Unhandled exception: take without gather

However, nothing's stopping you from using a Channel to proxy your data to be taken in a react block.

my Channel $chan .= new;
my $promise = start gather react whenever $chan { .take }

say "Sending stuff to Channel to gather...";
await start {
    $chan.send: $_ for <a b c>;
    $chan.close;
}
dd await $promise;

# OUTPUT:
# Sending stuff to Channel to gather...
# ("a", "b", "c").Seq

Or gathering takes from within a Supply:

my $supply = supply {
    take 42;
    emit 'Took 42!';
}

my $x := gather react whenever $supply { .say }
say $x;

# OUTPUT: Took 42!
# (42)

Stash into the cache

I mentioned earlier that Seqs are one-shot Iterables that can be iterated only once. So what exactly happens when we try to iterate them the second time?

my $seq := gather take 42;
.say for $seq;
.say for $seq;

# OUTPUT:
# 42
# This Seq has already been iterated, and its values consumed
# (you might solve this by adding .cache on usages of the Seq, or
# by assigning the Seq into an array)

An X::Seq::Consumed exception gets thrown. In fact, Seqs do not even do the Positional role, which is why we didn't use the @ sigil, which type-checks for Positional, on the variables we stored Seqs in.
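You can verify that easily enough:

say (gather take 42) ~~ Positional; # OUTPUT: False
say [1, 2, 3]        ~~ Positional; # OUTPUT: True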

The Seq is deemed consumed whenever something asks it for its Iterator after another thing has already grabbed it, like the for loop would. For example, even if the first for loop above had iterated over just one item, we wouldn't be able to resume fetching more items in the next for loop, as it would ask for the Seq's iterator, which was already taken by the first for loop.
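For example (a small sketch of that exact situation):

my $seq := gather { take 1; take 2; take 3 };
for $seq { last }  # this for loop grabs the Seq's iterator and fetches just one item
.say for $seq;     # dies: This Seq has already been iterated, and its values consumed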

As you can imagine, having Seqs always be one-shot would be somewhat of a pain in the butt. A lot of the time you can afford to keep the entire sequence around, which is the price for being able to access its values more than once, and that's precisely what the Seq.cache method does:

my $seq := gather { take 42; take 70 };
$seq.cache;

.say for $seq;
.say for $seq;

# OUTPUT:
# 42
# 70
# 42
# 70

As long as you call .cache before you fetch the first item of the Seq, you're good to go iterating over it until the heat death of the Universe (or until its cache noms all of your RAM). However, often you do not even need to call .cache yourself.

Many methods, such as .elems, .Bool, and positional indexing like $seq[1], will automatically .cache the Seq for you.

There's one more nicety with Seqs losing their one-shotness that you may see referred to as PositionalBindFailover. It's a role that indicates to the parameter binder that the type can still be converted into a Positional, even when it doesn't do the Positional role. In plain English, it means you can do this:

sub foo (@pos) { say @pos[1, 3, 5] }

my $seq := 2, 4 … ∞;
foo $seq; # OUTPUT: (4 8 12)

We have a sub that expects a Positional argument and we give it a Seq, which isn't Positional, yet it all works out, because the binder .caches our Seq and uses the List returned by the .cache method as the Positional, thanks to the Seq doing the PositionalBindFailover role.

Last, but not least, if you don't mind all of your Seq's values being generated and kept around right there and then, you can simply assign the Seq to a @-sigiled variable, which will reify the Seq and store it as an Array:

my @stuff = gather {
    take 42;
    say "meow";
    take 70;
}

say "Starting to iterate:";
.say for @stuff;

# OUTPUT:
# meow
# Starting to iterate:
# 42
# 70

From the output, we can see that say "meow" was executed on assignment to @stuff and not when we actually iterated over the values in the for loop.

Conclusion

In Perl 6, Seqs are one-shot Iterables that don't keep their values around, which makes them very useful for iterating over huge, or even infinite, sequences. However, it's perfectly possible to cache Seq values and re-use them, if that is needed. In fact, many of the Seq's methods will automatically cache the Seq for you.

There are several ways to create Seqs, one of which is to use the gather and take where a gather block will stop its execution and continue it only when more values are needed.

In parts II and III, we'll look at other, more exciting, ways of creating Seqs. Stay tuned!

-Ofun

Perl 6 Release Quality Assurance: Full Ecosystem Toaster

Read this article on Perl6.Party

As some recall, Rakudo's 2017.04 release was somewhat of a trainwreck. It was clear the quality assurance of releases needed to be kicked up a notch. So today, I'll talk about what progress we've made in that area.

Define The Problem

A particular problem that plagued the 2017.04 release was big changes and refactors made in the compiler that passed all of the 150,000+ stresstests, yet still caused issues in some ecosystem modules and users' code.

The upcoming 2017.06 has many, many more big changes:

  • IO::ArgFiles were entirely replaced with the new IO::CatHandle implementation
  • IO::Socket got a refactor and sync sockets no longer use libuv
  • IO::Handle got a refactor with encoding and sync IO no longer uses libuv
  • Sets/Bags/Mixes got optimization polish and op semantics finalizations
  • Proc was refactored to be in terms of Proc::Async

The IO and Proc stuff is especially impactful, as it affects precomp and module loading as well. Merely passing the stresstests just wouldn't give me enough peace of mind for a solid release. It was time to extend the testing.

Going All In

The good news is I didn't actually have to write any new tests. With 836 modules in the Perl 6 ecosystem, the tests were already there for the taking. Best of all, they were mostly written without bias from implementation knowledge of the core code, and they carry the personal style variations of hundreds of different coders. This is all perfect for testing for any regressions in core code. The only problem is running all of that.

While there's a budding effort to get CPANTesters to smoke Perl 6 dists, it's not quite the data I need. I need to smoke a whole ton of modules on a particular pre-release commit, while also smoking them on a previous release on the same box, eliminating setup issues that might contribute to failures, as well as ensuring the results were for the same versions of modules.

My first crude attempt involved firing up a 32-core Google Compute Engine VM and writing a 60-line script that launched 836 Proc::Asyncs—one for each module.

Other than chewing through 125 GB of RAM with a single Perl 6 program, the experiment didn't yield any useful data. Each module had to wait for locks before being installed, and all the Procs were asking zef to install to the same location, so dependency handling was iffy. I needed a more refined solution...

Procs, Kernels, and Murder

So, I started to polish my code. First, I wrote the Proc::Q module that let me queue up a bunch of Procs and scale the number of them running at the same time based on the number of cores the box had. The Supply.throttle core feature made the job a piece of cake.

However, some modules are naughty or broken, and I needed a way to kill Procs that take too long to run. Alas, I discovered that Proc::Async.kill had a bug in it, where trying to simultaneously kill a bunch of Procs was failing. After some digging, I found out the cause: the $*KERNEL.signal method that .kill was using isn't actually thread-safe, and the bug was due to a data race in the initialization of the signal table.

After refactoring Kernel.signal, and fixing Proc::Async.kill, I released Proc::Q module—my first module to require (at the time) the bleedest of bleeding edges: a HEAD commit.

Going Atomic

After cooking up boilerplate DB and Proc::Q code, I was ready to toast the ecosystem. However, it appeared zef wasn't designed, or at least well-tested, for scenarios where up to 40 instances were running module installations simultaneously. I was getting JSON errors from reading ecosystem JSON, broken cache files (due to lack of file locking), and false positives in installations because modules claimed they were already installed.

I initially attempted to solve the JSON errors by looking at an Issue in the ecosystem repo about the updater script not writing atomically. However, even after fixing the updater script, I was still getting invalid JSON errors from zef when reading ecosystem data.

It might be due to something in zef, but instead of investigating further, I followed ugexe++'s advice and told zef not to fetch the ecosystem in each Proc. The broken cache issues were similarly eliminated by disabling caching support. And the false positives were eliminated by telling each zef instance to install the tested module into a separate location.

The final solution involved programmatically editing zef's config file before a toast run to disable auto-updates of the CPAN and p6c ecosystem data, and then, in the individual Procs, the zef module install command ended up being:

«zef --/cached --debug install "$module" "--install-to=inst#$where"»

Where $where is a per-module, per-rakudo-commit location. The final issue was flaky test runs, which I resolved by re-testing failed modules one more time, to see if the new run succeeds.

Time is Everything

Toasting the entire ecosystem on both HEAD and the 2017.05 release took about three hours on a 24-core VM while running unattended. Watching over the run and killing the few hanging modules at the end, without waiting for them to time out, brings a single-commit run down to about 65 minutes.

I also did a toast run on a 64-core VM...

Overall, the run took me 50 minutes, and I had to manually kill some modules' tests. However, looking at the CPU utilization charts, it seems the run sat idle for dozens of minutes before I came along to kill stuff.

So I think after some polish of avoiding hanging modules and figuring out why (apparently) Proc::Async.kill still doesn't kill everything, the runs can be entirely automated and a single run can be completed in about 20-30 minutes.

This means that even with last-minute big changes pushed to Rakudo, I can still toast the entire ecosystem reasonably fast, detect any potential regressions, fix them, and re-test again.

Reeling In The Catch

The Toaster database is available for viewing at toast.perl6.party. As more commits get toasted, they get added to the database. I plan to clear them out after each release.

The toasting runs I did so far weren't just a chance to play with powerful hardware. The very first issue was detected when toasting the Clifford module.

The issue had to do with Lists of Pairs with the same keys being coerced into a MixHash when the final accumulated weight was zero. The issue was introduced on June 7th, and it took me about an hour of digging through the module's guts to find it. Considering it's quite an edge case, I imagine that without the toaster runs it would have taken a lot longer to identify this bug. lizmat++ squashed it hours after identification and it never made it into any releases.

The other issue detected by toasting had to do with the VM-backed decoder serialization introduced during the IO refactor, and jnthn++ fixed it a day after detection. One more bug had to do with the Proc refactor making Proc not synchronous enough. It was mercilessly squashed, while fixing a couple of longstanding issues with Proc.

None of these issues were detected by the 150,000+ tests in the test suite, and while an argument can be made that the tests are sparse in places, there's no doubt the Toaster has paid off the effort of making it by catching bugs that might've otherwise made it into the release.

The Future

The future plans for the Toaster would be first to make it toast on more platforms, like Windows and MacOS. Eventually, I hope to make toast runs continuous, on less-powerful VMs that are entirely automated. An IRC bot would watch for any failures and report them to the dev channel.

Conclusion

The ecosystem Toaster lets core devs test a Rakudo commit on hundreds of software pieces, made by hundreds of different developers, all within a single hour. During its short existence, the Toaster already found issues with ecosystem infrastructure, highly-multi-threaded Perl 6 programs, as well as detected regressions and new bugs that we were able to fix before the release.

The extra testing lets core devs deliver higher-quality releases, which makes Perl 6 more trustworthy to use in production-quality software. The future will see the Toaster improved to test on a wider range of systems, as well as being automated for continued extended testing.

And most importantly, the Toaster makes it possible for any Perl 6 programmer to help core development of Perl 6, by simply publishing a module.

-Ofun

About Zoffix Znet

I blog about Perl.