The Rakudo Book Project

Read this article on Rakudo.Party

When I first joined the Rakudo project, we used to say "there are none right now; check back in a year" whenever someone asked for a book about the language. Today, there's a whole website for picking out a book, and the number of available books seems to multiply every time I look at it.

Still, I feel something is amiss when I talk to folks on our support chat, when I read blog posts about the language, or when I look at our official language documentation. And it's due to that feeling that I wish to join the Rakudo book-writing club and write a few books of my own. I dub it: The Rakudo Book Project.


The Books

The Rakudo Book Project involves 3 main books—The White Book, The Gray Book, and The Black Book—as well as 2 half-books—The Green Book and The Cracked Book.

The White Book will aim to provide introductory material to the Rakudo language. The target audience will benefit from prior programming experience, but it won't be strictly necessary for computer-savvy people. The target audience is "adept beginners", as some might call them.

The book will cover most of Rakudo's features a typical Rakudo programmer might use in their projects, but it won't cover every little thing about each of them. By the end of the book, the readers will have written several programming projects and will be comfortable making useful, real-world Rakudo programs. More in-depth coverage of the language will be provided by The Gray Book, which is what The White Book's readers would read next. The Black Book will reach even deeper, exploring all of the arcane constructs. The progression through the books can be thought of as a plant growing in a flower pot. Initially, the roots extend through a large area of the pot, but they don't go all the way to the walls and are rather sparse. As the plant grows, more and more roots shoot out, covering more and more of the pot's volume. It's the same with the books: reading The White Book alone will let the plant survive, but the root coverage will be sparse. By the end of The Black Book, however, the reader will be an expert Rakudo programmer.

Those three books are the core of my planned project. They're supplemented by two half-books on each end of the knowledge spectrum. The Green Book will target absolute programming beginners and get them up to speed just enough so they would be able to comfortably continue their learning using The White Book. On the other end of the spectrum is The Cracked Book. It's a half-book that follows The Black Book and won't provide more advanced techniques per se, but rather arcane "hacks" or even "bad ideas" that one might not wish to use in real-life code but which nevertheless provide some insight into the language.

The Cracked Book is as yet only a faint glimmer of an idea. Whether it will actually be made will depend on how much more I will want to say after The Black Book is complete. The Green Book is currently a bit amorphous as well. I have a 12-year-old sibling interested in computers, so The Green Book might end up being a Rakudo For Kids.

The likely order in which the books will be produced is White, Gray, Green, Black, and Cracked. It's an ambitious plan, and so I won't be making any promises for producing more than one book at a time. Thus, the current aim is to produce just The White Book.

The Price

The digital versions of the books will be available for free.

Since Rakudo development can always use more funding, I plan to run crowd-funding campaigns during each book's development. 100% of the collected funds will be used to sponsor Rakudo work (sponsoring someone other than me, of course). Each campaign will start once half of the target book has been created, and the backers will get early preview digital copies as the book is developed further, as well as honourable mentions as Rakudo sponsors in the book itself.

Thus, the first Rakudo Core Fundraiser will launch once I have the first half of The White Book finished. I'm hoping that will happen soon.

The Why

Other than the obvious reason why people write books—giving an alternate take on the material—I'd like to do this to cross an item off my bucket list. Having written a terrible non-fiction book, a lackluster fiction book, and a decent illustrated children's book, I hope to add a great technical book to the list to complete it. I figure, with 5 books to attempt it with, I'll be successful.

As for my alternate take, I hope to squash the myth that Rakudo is too big to learn, as well as carve out a well-defined path for learners to follow. Just as I could make a living 10 years ago, when I barely spoke English, so a beginner Rakudo programmer can make useful programs with rudimentary knowledge of the language. The key is to not try to learn everything at once and to have a definite path to walk through. Hence the 5 separate books.

I'm hoping at the end of this journey I will have accomplished all of these goals.

See you at the first Rakudo Core Fundraiser.

On Troll Hugging, Hole Digging, and Improving Open Source Communities

Read this article on Rakudo.Party

While observing a recent split in a large open source community, I did some self-reflection and thought about the state of the Rakudo community that I am a part of. It involved learning about its huggable past, thinking about its undulating present, and looking toward its brighter future.

This article is the outcome. It contains notes to myself on how to be a better human, but I hope they'll have wider appeal and can improve communities I am a part of.

Part I: Digging a Hole

A lot of organizational metaphors involve the act of climbing: you start at the base of a hill or a ladder and climb. The higher you get, the more knowledge, power, and resources you attain. There's a problem with that metaphor: you're facing the backs of the people who came before you, and they're not really paying attention to you.

The people higher up can pull others up to their level, but the problem is they can also push them down, prevent them from climbing, or even accidentally kick some dirt down in their faces. As we get higher and higher, the tip of the hill we're climbing gets narrower and narrower, accommodating fewer and fewer people, until progress stops and everyone freezes, waiting for someone higher up to disappear and free up the space for someone lower down to move up to.

A more useful metaphor, I think, is the direct opposite of a dirt hill: a dirt hole. People dig it.

When you are just starting a project, you're alone. It's just you and a shovel. You dig a few feet down and someone comes to the edge of your hole and looks down on you. You are vulnerable. You offer them a shovel and now there's more than one person digging the hole.

You've been digging longer, so you're a bit further down. You know what the ground is like on that level, and the person above you asks you how to best dig the layers you've already dug through. Once in a while some dirt falls down from their level onto yours, so it's in your best interest to bring them to your level sooner rather than later. Unlike a hill or a ladder, you have no easy way to kick them off; you have to help them. At the same time, you have to ensure more people come to the edge of the hole and start digging along with you. Otherwise, it'd just be a narrow and deep hole, with no easy way in.

There's a parallel to open source development of a large community project like Rakudo: there's a need for a constant supply of fresh users and volunteers, and there's a need for more seasoned members of the community to show the ropes to and mentor the less-skilled members. The veterans are too far from the edge of the hole to really know how easy it is to join, but the newbies are well aware of the challenges that prevent more people from joining. No one is more important than anyone else; for a well-shaped hole, both the veterans and the newcomers need to contribute in their patches of the hole.

Here's a badly shaped hole. The walls are too vertical and are crumbling, and it's tough to navigate the hole.

And here's a well-shaped hole. Everyone's more connected. It's easy to get in and start digging. And even those who dug the deepest can still go and help out those who are about to start.

The hole digging metaphor isn't just about the shape of the hole. It's also about people's position within it.

Those who have been digging the hole the longest are the lowest in it. Anything happening up above has great potential to impact those in the lowest ranks: a careless footstep breaks off some dirt and kicks it down the hole.

If a fight breaks out, the community's most senior members would notice the dirt flying down the hole, and it's in their best interests to calm the fighting down and resolve the conflict peacefully.

In fact, a particularly gruesome conflict kicks down enough dirt to make the hole shallower and, in severe cases, bury it entirely.


Part II: The Seven Hugs for a Better Community

Audrey Tang, now Taiwan's Digital Minister, was a prominent Perl 6 community member who created the concept of Troll Hugging. In a nutshell, it's this: do not feed trolls, but hug them tenderly until they feel comfortable enough to speak about their authentic selves, and then they turn into beautiful princes(ses).

I've never met Audrey in real life and only have her inspiring writing to go by, but I'd like to carry forward the concept of troll hugging, as well as include non-trolls among those we aim to hug.

I thought up some Tips for how to improve things, but Tips is too cliché a name, so how about some Hugs instead? The seven Hugs for a better community.

Hug 1: Gift a Shovel

Always seek to expand our community. Invite people to help us.

A person comes to the edge of the hole you're digging and says: "What the heck are you doing over there?" You explain a few things, the person nods agreeably, wishes you good luck, and continues on their merry way. It was an amicable interaction, but could it have been better?

Instead of walking away, the person could help the hole grow larger by picking a site on its edge and starting to dig their own patch. On occasion, some passerby will realize how awesome your hole-digging idea is and join you on their own initiative, but you can greatly improve the chances of people joining by gifting the curious passerby a shovel and actively asking them to help you. Some won't be able to, but it's a lot easier to start digging if you already have a shovel in hand.

If someone on the help channel is asking a question, it's possible your project's learning resources could be improved. Answer the question and then ask that very same person to help improve the learning resources. Having just received the answer, that person is the most qualified to improve the resources in this situation: they now know the answer and still remember the thinking process that led to their question and to their eventual understanding of the answer.

This works especially well with issues you could fix in less than a minute. It's easy to explain to the person—even to a fresh newcomer—what needs to be done to fix the problem and it gives them experience with working on your project, as well as confidence to try their hand at harder issues in the future.

So invite people to join in. Give them appropriate commit bits and guidance on how to get involved. Even people who think your project sucks could be asked to give a helping hand making it better. They just might.

Hug 2: Feed The Hand That Bites You

Always assume positive intent behind people's words and actions.

The biographical film Temple Grandin depicts Professor Temple Grandin's first steps working at a cattle farm, where cows are constantly prodded and, especially by today's standards, abused. Being autistic, Temple was a lot more sensitive to the environmental stimuli that affected cattle behaviour, and she was able to design a much more efficient and humane holding pen and supporting equipment where cows moved with ease, without prodding and with less stress.

I recall the most infuriating scene in the film, when the old-timer workers came over to Temple's newly built, state-of-the-art holding pen and, confused by the new design, angrily dismantled many of its key pieces. By the time Temple arrived on the job, several cows had drowned on the washing platform, and the workers were pissed off about whatever "idiot" designed this holding pen.

I was hoping Temple would get back at them: get them fired, insult them, anything really! They're clearly too damn dumb to realize just how much better Temple's equipment is and they shouldn't be allowed anywhere near cattle. Am I right? Not really.

Both Temple and the other workers had the same goal: get the cattle washed, dried, and chopped up into delicious steaks and burgers. Without autism, however, the workers didn't have a clue why Temple's design was superior. And lacking that understanding, they went back to what they knew did the job. Temple never got back at the workers, but I've seen others (and myself) get back at the "offenders" in very similar circumstances.

When Rakudo implemented atomic operators that incorporate the atom emoji symbol, over 220 comments were made about them on Reddit. The overall theme was: how the hell am I supposed to type that, and have the Rakudo people lost their minds, using an emoji as an operator? These comments came from programmers who've been using ASCII symbols in their code for decades. Just like Temple's cattle workers, programmers who never learned how to easily type fancy Unicode characters could, understandably, be baffled that an emoji could ever be efficient to use.

In Temple's position, we could lash back at these programmers and ridicule them for lacking the required extra knowledge, or we could patiently explain the missing pieces (like Rakudo's ASCII-only alternatives to all the fancy Unicode ops).

If we spend time to patiently explain the missing information, we get potential new community members. If we merely try to prove who's right and who's wrong, at best we'd just be right. Just like Temple and the workers had a common goal, so do we and many of the people we interact with. If you perceive someone as attacking and dismantling your work, perhaps all they're trying to do is understand how it helps us achieve our common goal. Assume positive intent and respond positively.

Hug 3: We All Leave Footprints

What you do today, the others will follow and do tomorrow.

There's a famed experiment on chimps that demonstrates an interesting quirk in thinking that humans likely possess as well. In a room with several chimps, a bundle of bananas is placed. Whenever any chimp tries to reach for a banana, all of the chimps get sprayed with water. The chimps quickly learn not to reach for bananas.

A new chimp is placed into the room. When it tries to reach for bananas, the other chimps who know they will get sprayed with water actually attack the new chimp and prevent it from reaching the bananas. Now, slowly, one-by-one, start removing the chimps who were sprayed with water in the past and replace them with new chimps who weren't. The pattern remains: whenever a new chimp tries to reach for bananas, all the rest attack it, including the chimps who weren't ever sprayed with water.

The surprising discovery of this experiment is that eventually you end up with a room full of chimps, none of whom were ever sprayed with water, who will avoid reaching for the bananas and attack any new chimp that tries. There are two lessons we can learn from these findings.

First, be mindful of your actions; the new chimps will follow your lead. If all the newbie questions are answered with snark and contempt, the people who manage to stick around and learn things will likely continue to respond with snark and contempt to all the new newbies, perpetuating the cycle of negativity. How we treat newcomers, how we treat old-timers, and how we treat members of other communities are all patterns that show new members of the community how to act. Ensure the patterns you leave behind for others to emulate are positive ones.

Second, avoid attacking chimps who try to reach for bananas. In other words, avoid telling people they can't do something or that something is very hard or impossible. A common pattern is someone says "I'm going to try doing X" and the immediate response is "you can't" or "X is useless". Now the first person's enthusiasm is curbed; they doubt they can succeed. If the first person perceives the naysayer as the expert, they might not even question the judgment and give up right away. And worse yet, the chimp has learned to attack new chimps when they try to reach for the same bananas.

A similar issue exists when you claim something can only be done by the superstar chimp. The claim carries the inherent assumption that the task is so hard it'd be foolish for other chimps to even attempt it. Yes, some tasks are tougher than others, but the only sure way to fail at them is to never try them at all.

Hug 4: Speak Up

Point out unwanted behaviour, regardless of who you are and who the offender is.

If a friend ever invites you to participate in an experiment studying authority, you probably should decline, as you might kill someone.

The experiment is this: the man in a lab coat tells you to turn the dial and press the button that gives the person next to you an electric shock. The man in a lab coat writes something down, then tells you to dial in higher numbers and give a larger shock. It's a little fun at first, but as you keep dialing in larger and larger numbers, the person you're shocking appears to be in more and more distress, showing visible signs of severe pain. The scientist tells you to keep going, and you do, shocking your hapless victim with currents far above lethal, until the victim dies. Or rather, until it's revealed the victim is an actor who was faking it all along.

So what's going on? Why did you just fake-kill a guy? The answer is: authority. You perceived the scientist as an authority in this situation and trusted their judgement of the situation more than your own. A similar experiment showed that when you're jaywalking at a busy intersection, more people will follow and jaywalk with you if you look like an authority (e.g. wearing a business suit and carrying a briefcase).

Similar factors are at play when a support chat's "regular" is being abusive to a "newbie". The regular says parsing HTML with regex is wrong and the newbie should use an HTML parser. The newbie, on the other hand, has struggled the whole day to get half the regex working and feels that learning to use an HTML parser is far beyond their current skill, so they keep asking for regex help. Tempers flare. Feelings get hurt. Meanwhile, the rest of the people silently look on.

Two things can improve such situations. First, if you're a perceived authority, be mindful of your actions, as they set an example for others to follow (see Hug 3: We All Leave Footprints above).

Second, and even more important: speak up, regardless of who you are. Question the judgement of the scientist who's applying lethal electric shocks. It's important to point out abusive behaviour and request that the person stop it. It's quite possible they don't realize just how negative their actions are, for reasons ranging from something as simple as being too tired to something much more complex like drug addiction or mental illness.

Speak up. It's beneficial for all parties involved.

Hug 5: Simply a Hug

A simple hug is a positive interruption.

The aforementioned Professor Temple Grandin had another useful contribution to humanity: a hugging machine.

It's a therapeutic aid for autistic people that, in its crude form, consists of two boards and a lever that brings the boards together, pushing them against the person lying in the middle of the machine. When you have autism, being touched by other humans can be unpleasant, distressing, or even scary. The relaxing and pleasant feeling from the pressure of the machine's boards is likely similar to how neurotypical people experience a hug from another human.

I built my own hugging machine! Now, I'm not good at carpentry, so my machine is entirely digital, but on the bright side, anyone can use it:

It's a bot on #perl6 support chat. Type .hug to hug everyone, or type .hug SomeOne to hug SomeOne. It's a silly, simple thing, but a hug wedged in the middle of a heated, unproductive discussion can quickly shift the tone to something more positive and remind the participants to be kinder to each other.

There's not much more to say about this. It's simply a hug.

Hug 6: Love Others

People are more important than code.

Think back on the last few heated arguments you had with someone. You can likely easily recall who you were arguing with. What you were arguing about is a lot foggier. And perhaps you don't remember the other party's counter-arguments at all. You remember the person, but the argument faded into unimportance.

It's easy to get caught up in the moment and defend your position to the death; after all, there are specifications, studies, and all sorts of best practices you could link to. It's easy to overestimate the importance of the thing being argued about in the grand scheme of things. It's also easy to push too far, to the point where people no longer want to dig the dirt hole with you.

Always remember that people are more important than code. The argument you so desperately tried to win won't build more code, won't train more people, and won't write more blog posts. At least until the robot uprising, those things get done by people. You need to care for them.

First, consider if the argument you're participating in is something you even care about. Does it even affect you if the other person tries doing things their way? You'd be surprised how often you'd realize you can just walk away from the argument, without care. But when you can't walk away, consider the impact of your emotional state on the clarity of the discussion. You always have the option to re-schedule and ask to discuss it later.

You need people to dig the hole. Cherish them.

Hug 7: Go For The Third Option

Instead of me being right and you being wrong, we could both be right-ish.

When you're in a discussion trying to decide something, or giving criticism, or receiving it, there's a trick you can use to make the process more friendly and palatable. I call it going for the third option.

Suppose you don't like something I tend to do. You ask me to stop. You grasp for words, trying to put the request as softly as you can, while I blush and hold back the tears, realizing that I, the "I", am a terrible human being. The discussion looks something like this:

However, there's no single "I". Since time is involved, the "I" being reprimanded for the offending behaviour is the person in the past. If you're over 30, you can probably easily recall the "you" from a decade or more ago and see that the past "you" and the current "you" differ vastly on many ideas. The two "you"s are different people.

With that in mind, when discussing my offending behaviour, you and the "I" from the present can work on the third "I", the one in the future. Under this paradigm, the discussion looks like this:

You no longer have the need to be reserved about your criticism and perhaps can discuss things you were originally planning to hold back; things that still matter. And I no longer feel that I'm being attacked—after all, we're examining the past me to figure out how the future me could be better.

The same technique applies to discussions about issues we might disagree on. Instead of trying to list all the things you're right about and all the things I am wrong about and trying to figure out whose solution "wins", we could work on an entirely new third option that combines the best of our ideas, leaving behind the parts either of us thinks are problematic. In the end, we get something we both feel we had a hand in creating. We both win.


Conclusion

At the time of this writing, I've been applying the ideas I discussed in this article for about a week. I think they have something real behind them, as I feel a lot happier now than a week ago and I see some positive changes around me that I think I could attribute to these ideas.

I saw new faces appear in our community, who were gifted shovels and invited to join in the hole digging. I no longer dread reading negative comments on our project's articles, as I know I can look for the third option in any feedback given, and realize that negative feedback might only be a misunderstanding. I no longer get too wrapped up in decisions that barely affect me.

Working from Audrey's Troll Hugging concept that seeds a positive framework for our community, I think we can expand on it and start hugging each other, as well as the trolls.

I think we can build something pretty damn good.

Let's grab our shovels and get digging.

You're invited: Community Bug SQUASHathon

Rakudo and other repositories in the perl6 GitHub org have plenty of open bug tickets. We decided it would be neat to give them an extra push with some concentrated effort, which is why we'd like to organize a monthly, one-day virtual event where we pick a repository and everyone works on its open tickets.

The day will be the first Saturday of every month. This month we'll be hacking on the Issues of the github.com/perl6/doc repository.

Whether you're a seasoned Rakudo developer or just starting out, join us this Saturday in the #perl6 channel on irc.freenode.net (no specific time) and contribute! If you'd like to simply hang out, you're welcome too; we love company!

See also: our SQUASHathon Wiki or talk to a human about this.

The Hot New Language Named Rakudo

This article represents my own thoughts on the matter alone; it is not an official statement on behalf of the Rakudo team and may not even represent the majority opinion.


When I came to Perl 6 around its first stable Christmas 2015 release, "The Name Issue" was in hot debate. Put simply: Perl 6 is not a replacement for Perl; Perl 6 is not the "next" Perl; Perl 6 is a very different language from Perl; so why does it still have 'Perl' in its name?

From what I understand, the debate raged on for years prior to my arrival, so the topic always felt taboo to talk about, because it always ended up in a heated discussion without a solution at the end. However, we do need that solution.

The major argument I heard (and often peddled myself) for why Perl 6 had 'Perl' in the name was brand recognition. The hypothesis was that fewer people would bother to use an unknown language "Foo" than a recognizable language "Perl". Now, two years later, we can examine whether that hypothesis was true and beneficial, and act accordingly.

Fo6.d for Thought

The Perl 6 language—which I shall refer to as the Rakudo language for the rest of the article—is versioned separately from its implementations and is defined by the specification. The current version is 6.c "Christmas" and the upcoming version is 6.d "Diwali".

As some know, despite slinging a lot of code in my spare time, I earn my bread under the banner of Multi-Media Designer. While one of the "media" I work with is the Web, and so I do get to write some code once in a while, my office for the past 8-ish years has been located squarely in the Marketing Department, not I.T.

As the Rakudo core team was recently penning down the dates for the 6.d release, I got excited to have the opportunity to do some design and marketing for something quite different from the products at my job. However, I very quickly hit a roadblock. The name "Perl 6" isn't quite marketable.

Ignoring trolls and people whose knowledge of Perl ends with the line-noise quips, Perl is the Grandfather of the Web, the Queen of Backwards Compatibility, and the Gluest language of all the Glues. Perl is installed by default on many systems, and if you're worthy enough to wield its arcane magic, it's quite damn performant.

The Rakudo language, on the other hand, is none of those things. It's a young and hip teenager who doesn't mind breaking the long-held status quo. Rakudo is the King of Unicode and the Queen of Concurrency. It's a "4th generation" language, and if you take the time to learn its many features, it's quite damn concise.

Trying to market the Rakudo language as a "Perl 6" language is like holding a great Poker hand while playing Blackjack—even with a Royal Flush, you still lose the game that's actually being played. The truly distinguishing features of Rakudo don't get any attention, while at the same time people get disappointed when a "Perl" language no longer does things Perl used to do.

So did the hypothesis about Perl brand-name recognition hold true? Yes, but the Rakudo language has very different strengths from those the brand represents, which leads to a lot of confusion, disappointment, and annoyance.

As the 6.d language release nears, and with it the ability to make large changes, I think it would benefit us to reflect on the issues of the past two years and improve.

"Just Rename It"

Even if the entire Rakudo community were to decide a different name is good, there's a teeny-tiny problem of existing infrastructure. Need documentation? You go to perl6.org, not rakudo.org. Need a live, squishy human to help you out? You go to the #perl6 IRC channel, not #rakudo. Need a Rakudo book? Why, then go to perl6book.com and pick any of the books with "Perl 6" in their titles.

This is one of the major things that derailed my thinking on the subject in the past: people saying "just rename it," when clearly it's no easy task. Domain names, email addresses, bug trackers, Reddit subreddits, Facebook groups, Twitter feeds, GitHub orgs, IRC channels, presentations, books, blog posts, videos, hell, even names of some variables ($*PERL) and env vars (PERL6_TEST_DIE_ON_FAIL) would all need to change for a thorough rename job.

Not only would all those things need a rename, the old versions in many cases would need to be able to redirect to the new name. "Just renaming" the perl6.party website and its contents will take me some effort and has already incurred a minor expense for a new domain name. The effort required to do the same everywhere would be monumental, and in the end we'd still go to The Perl Conference and get sponsored by grants from The Perl Foundation.

I think the ship for "just renaming" it sailed a few years before the first stable language release. However, we don't have to be at the mercy of all-or-nothing tactics when there are clear benefits to reap from a name tweak.

Rakudo Perl 6

Rakudo is the name of a mature—and, to date, the only usable—implementation of the language. If Wikipedia is to be believed, the name means "The Way of The Camel" or "Paradise."

It's also the name that's ripe for the picking to be the name of the language: those who use the language have already heard the name, so it's familiar; the compiler's repo is rakudo/rakudo, not perl6/rakudo; newcomers are told to install "Rakudo Star," not "Perl 6 Star"; and having an already-bikeshedded name can cut down on irrelevant discussions when the need for change itself is controversial.

While it's true that re-using the compiler's name for the language creates an ambiguity, it can be resolved by using all-lowercase letters for the compiler and title case for the language—Perl 5 has been doing that for years. In addition, if the executable were to be renamed from perl6 to rakudo, there'd be fewer accidents of running Rakudo scripts with the perl command, something that is currently actively fought against by the recommendation to put use v6 in all programs.

The "Rakudo Perl 6" name for the language was suggested by lizmat++, so I assume there's at least one other core team member who's open to the language name tweak. And I do precicely mean tweak, not change. While change would be more preferable, it stands opposed by existing infrastructure naming and, of course, those who believe Perl 6 is a fine name and should be kept unchanged. So by tweaking the language name to be "Rakudo Perl 6," we get the benefit of marketing a new release of a hot new language "Rakudo 6.d" instead of a new release of same-name-but-totally-not-Perl-5 "Perl 6.d"; we get to keep using "perl6" ticket queue on RT, without raising too many confused eyebrows; we get to publish Rakudo blog posts that don't get knee-jerk reactions form non-Perl users; we get to attend The Perl Conference without feeling we don't belong; we get to mention how awesome Rakudo is to our peers without fearing yet-another pointless "Perl is dead" discussion; we save the trees by not reprinting all of the existing "Perl 6" books"; yet we get to... start anew.

It's The Beginning, Not The End

Humans are funny creatures. We don't like to change our minds, lest we appear to not have a clue. We cling to past decisions and things said because abandoning them is admitting you were wrong. However, looking at the past two years, it's very clear to me the name of "Perl 6" has been detrimental to the language. I'm not afraid to admit I was wrong in defending the "Perl 6" name.

It's an indicator that something's wrong when you spend days writing an amazing technical post but have anxiety about posting it to r/programming, because it'll inevitably end up with quips and jokes about Perl being late to the party. It's an indicator that something's wrong when you're apprehensive about joining a tech discussion to mention how easy a task is to do in "Perl 6," because even well-meaning people have a hard time realizing Perl 6 is an entirely new language.

I'm under no delusion that merely changing the name would instantly make everyone love the language. There are still performance problems to tackle. More bugs to fix. More documentation and tests to write. All these things need humans to work on them and humans care about perception. The assumption that many humans will start using Rakudo simply because it's a better product just does not match reality.

It would be beneficial to change the perception of the Rakudo language. Ignoring the problem won't do that. Including boilerplate text about Perl 6 being a new language that's totally different from Perl 5 at the start of every conversation won't do it either. Tweaking the language name to be unique will. It doesn't have to be a dramatic event, but...

I can't do it alone

Last night I registered rakudo.party and changed my Twitter bio to no longer refer to the language as "Perl." In the coming days, I'll update all mentions of "Perl 6" on rakudo.party to read "Rakudo" or "Rakudo language" where it's ambiguous with the rakudo compiler. My IRC hostmask and module descriptions on GitHub will follow suit. My conversations, Twitter hashtags, Facebook posts... all will refer to Rakudo instead of Perl 6, just as I've been doing in this post.

However, that is about the end of my unilateral control of the whole thing. I can't change docs.perl6.org or the next blog post you'll write, which is why I strongly encourage those who care about The Name Issue, and especially those who care about the success of the Rakudo language, to make the same active language name tweak I'm making.

Acknowledge that the language's full name is "Rakudo Perl 6". Yes, there's a compiler with a similar name, but it's the next best thing after nothing. Shorten the full name of the language to just "Rakudo," to differentiate it from THE Perl; you don't even have to worry about spacing issues if you do! Tell people about Rakudo's unique features, not about how it's trying to catch up to the things Perl 5 does well.

Rakudo has many strengths, but they get muted when we call it "Perl 6". Perl is a brand name for a product with different strengths, and pretending for the past 2 years that Rakudo shares those strengths has proved to be a failed strategy. I believe a name tweak can help these issues and start us on a path with a more solid footing. A path that invites newcomers, instead of scaring them with knee-jerk reactions and fear of using an outmoded product.

I may be wrong about it. I may be the only fucking idiot on the planet with a "#Rakudo" hashtag in their Twitter bio. But... I think I'm right about it, and I hope you'll join me and use the tweaked language name.

-Ofun

Perl 6: Seqs, Drugs, And Rock'n'Roll (Part 2)

Read this article on Perl6.Party

This is the second part in the series! Be sure to read Part I first, where we discuss what Seqs are and how to .cache them.

Today, we'll take the Seq apart and see what's up in it; what drives it; and how to make it do exactly what we want.

PART II: That Iterated Quickly

The main piece that makes a Seq do its thing is an object that does the Iterator role. It's this object that knows how to generate the next value, whenever we try to pull a value from a Seq, or push all of its values somewhere, or simply discard all of the remaining values.

Keep in mind that you never need to use an Iterator's methods directly when making use of a Seq as a source of values. They are called indirectly, under the hood, by various Perl 6 constructs. The time when you would call those methods yourself is usually when you're making an Iterator that's fed by another Iterator, as we'll see.

Pull my finger...

In its most basic form, an Iterator object needs to provide only one method: .pull-one

my $seq := Seq.new: class :: does Iterator {
    method pull-one {
        return $++ if $++ < 4;
        IterationEnd
    }
}.new;

.say for $seq;

# OUTPUT:
# 0
# 1
# 2
# 3

Above, we create a Seq using its .new method, which expects an instantiated Iterator. For that we use an anonymous class that does the Iterator role and provides a single .pull-one method, which uses a pair of anonymous state variables to generate 4 numbers, one per call, and then returns the IterationEnd constant to signal that the Iterator does not have any more values to produce.

The Iterator protocol forbids attempting to fetch more values from an Iterator once it has generated the IterationEnd value, so your Iterator's methods may assume they'll never be called again past that point.
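
If you're curious what that looks like from the consumer's side, here's a small illustrative sketch (not something you'd normally need to write yourself) that grabs the Seq's underlying Iterator and pulls values by hand until IterationEnd shows up:

my $seq := Seq.new: class :: does Iterator {
    method pull-one {
        return $++ if $++ < 4;
        IterationEnd
    }
}.new;

my $iter := $seq.iterator;           # the underlying Iterator
loop {
    my \value = $iter.pull-one;      # what `for` does for us under the hood
    last if value =:= IterationEnd;  # compare by identity, not by value
    say value;
}

# OUTPUT:
# 0
# 1
# 2
# 3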

Meet the rest of the gang

The Iterator role defines several more methods, all of which are optional to implement, and most of which have some sort of default implementation. The extra methods are there for optimization purposes that let you take shortcuts depending on how the sequence is iterated over.

Let's build a Seq that hashes a bunch of data using the Crypt::Bcrypt module (run zef install Crypt::Bcrypt to install it). We'll start with the most basic Iterator that provides a .pull-one method, and then we'll optimize it to perform better in different circumstances.

use Crypt::Bcrypt;

sub hash-it (*@stuff) {
    Seq.new: class :: does Iterator {
        has @.stuff;
        method pull-one {
            @!stuff ?? bcrypt-hash @!stuff.shift, :15rounds
                    !! IterationEnd
        }
    }.new: :@stuff
}

my $hashes := hash-it <foo bar ber>;
for $hashes {
    say "Fetched value #{++$} {now - INIT now}";
    say "\t$_";
}

# OUTPUT:
# Fetched value #1 2.26035863
#     $2b$15$ZspycxXAHoiDpK99YuMWqeXUJX4XZ3cNNzTMwhfF8kEudqli.lSIa
# Fetched value #2 4.49311657
#     $2b$15$GiqWNgaaVbHABT6yBh7aAec0r5Vwl4AUPYmDqPlac.pK4RPOUNv1K
# Fetched value #3 6.71103435
#     $2b$15$zq0mf6Qv3Xv8oIDp686eYeTixCw1aF9/EqpV/bH2SohbbImXRSati

In the above program, we wrapped all the Seq-making stuff inside a sub called hash-it. We slurp all the positional arguments given to that sub and instantiate a new Seq with an anonymous class as the Iterator. We use the @!stuff attribute to store the stuff we need to hash. In the .pull-one method we check if we still have @!stuff to hash; if we do, we shift a value off @!stuff and hash it, using 15 rounds to make the hashing algo take some time. Lastly, we added a say statement to measure how long the program has been running at each iteration, using two now calls, one of which is run with the INIT phaser. From the output, we see it takes about 2.2 seconds to hash a single string.
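
In case the timing idiom is unfamiliar: INIT now captures an Instant once, at program startup, so subtracting it from a fresh now gives the elapsed run time. Here's a tiny standalone illustration (the output figure is just what sleep produces here, plus a sliver of startup overhead):

sleep 0.5;
say now - INIT now;  # OUTPUT: 0.503... (elapsed seconds since the program started)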

Skipping breakfast

Using a for loop is not the only way to use the Seq returned by our hashing routine. What if some user doesn't care about the first few hashes? For example, they could write a piece of code like this:

my $hash = hash-it(<foo bar ber>).skip(2).head;
say "Made hash {now - INIT now}";
say bcrypt-match 'ber', $hash;

# OUTPUT:
# Made hash 6.6813790
# True

We've used the Crypt::Bcrypt module's bcrypt-match routine to ensure the hash we got matches our third input string, and it does; but look at the timing in the output. It took 6.7s to produce that single hash!

In fact, things look worse the more items the user tries to skip. If the user calls our hash-it with a ton of items and then tries to .skip the first 1,000,000 elements to get at the 1,000,001st hash, they'll be waiting about 25 days (roughly 2.2s per hash × 1,000,000 hashes) for that single hash to be produced!!

The reason is our basic Iterator only knows how to .pull-one, so the skip operation still generates the hashes, just to discard them. Since the values our Iterator generates do not depend on previous values, we can implement one of the optimizing methods to skip iterations cheaply:

use Crypt::Bcrypt;

sub hash-it (*@stuff) {
    Seq.new: class :: does Iterator {
        has @.stuff;
        method pull-one {
            @!stuff ?? bcrypt-hash @!stuff.shift, :15rounds
                    !! IterationEnd
        }
        method skip-one {
            return False unless @!stuff;
            @!stuff.shift;
            True
        }
    }.new: :@stuff
}

my $hash = hash-it(<foo bar ber>).skip(2).head;
say "Made hash {now - INIT now}";
say bcrypt-match 'ber', $hash;

# OUTPUT:
# Made hash 2.2548012
# True

We added a .skip-one method to our Iterator that, instead of hashing a value, simply discards it. It needs to return a truthy value if it was able to skip a value (i.e. we had a value we'd otherwise generate in .pull-one, but we skipped it), or a falsy value if there weren't any values to skip.

Now, the .skip method called on our Seq uses our new .skip-one method to cheaply skip through 2 items and then uses .pull-one to generate the third hash. Look at the timing now: 2.2s; the time it takes to generate a single hash.

However, we can kick it up a notch. While we won't notice a difference with our 3-item Seq, that user who was attempting to skip 1,000,000 items still won't get the single 2.2s wait for their 1,000,001st hash: they would also have to sit through 1,000,000 calls to .skip-one and @!stuff.shift. To optimize skipping over a bunch of items, we can implement the .skip-at-least method (for brevity, just our Iterator class is shown):

class :: does Iterator {
    has @.stuff;
    method pull-one {
        @!stuff
            ?? bcrypt-hash( @!stuff.shift, :15rounds )
            !! IterationEnd
    }
    method skip-one {
        return False unless @!stuff;
        @!stuff.shift;
        True
    }
    method skip-at-least (Int \n) {
        n == @!stuff.splice: 0, n
    }
}

The .skip-at-least method takes an Int of items to skip. It should skip as many as it can, returning a truthy value if it was able to skip that many items, and a falsy value if the number of skipped items was fewer. Now, the user who skips 1,000,000 items will only have to suffer through a single .splice call.

For the sake of completeness, there's another skipping method defined by Iterator: .skip-at-least-pull-one. It follows the same semantics as .skip-at-least, except with .pull-one semantics for return values. Its default implementation involves just calling those two methods, short-circuiting and returning IterationEnd if .skip-at-least returned a falsy value, and that default implementation is very likely good enough for all Iterators. The method exists as a convenience for users who call methods on Iterators directly, and (at the moment) it's not used in core Rakudo Perl 6 by any methods that can be called on users' Seqs.
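
Based on that description, the default behaviour amounts to something like this sketch (an illustration of the semantics described above, not a copy of Rakudo's actual source):

method skip-at-least-pull-one (Int \n) {
    self.skip-at-least(n)    # skip n items...
        ?? self.pull-one     # ...and produce the next one if we could
        !! IterationEnd      # ...or signal exhaustion if we couldn't
}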

A so, so count...

There are two more optimization methods—.bool-only and .count-only—that do not have a default implementation. The first one returns True or False, depending on whether there are still items that can be generated by the Iterator (True if yes). The second one returns the number of items the Iterator can still produce. Importantly, these methods must be able to do that without exhausting the Iterator. In other words, after finding these methods implemented, the user of our Iterator can call them and afterwards should still be able to .pull-one all of the items, as if the methods were never called.

Let's make an Iterator that will take an Iterable and .rotate it once per iteration of our Iterator until its tail becomes its head. Basically, we want this:

.say for rotator 1, 2, 3, 4;

# OUTPUT:
# [2 3 4 1]
# [3 4 1 2]
# [4 1 2 3]

This iterator will serve our purpose of studying the two Iterator methods. For a less "made-up" example, try to find the implementations of the iterators for the combinations and permutations routines in the Perl 6 compiler's source code.

Here's a sub that creates our Seq with our shiny Iterator along with some code that operates on it and some timings for different stages of the program:

sub rotator (*@stuff) {
    Seq.new: class :: does Iterator {
        has int $!n;
        has int $!steps = 1;
        has     @.stuff is required;

        submethod TWEAK { $!n = @!stuff − 1 }

        method pull-one {
            if $!n-- > 0 {
                LEAVE $!steps = 1;
                [@!stuff .= rotate: $!steps]
            }
            else {
                IterationEnd
            }
        }
        method skip-one {
            $!n > 0 or return False;
            $!n--; $!steps++;
            True
        }
        method skip-at-least (Int \n) {
            if $!n > all 0, n {
                $!steps += n;
                $!n     −= n;
                True
            }
            else {
                $!n = 0;
                False
            }
        }
    }.new: stuff => [@stuff]
}

my $rotations := rotator ^5000;

if $rotations {
    say "Time after getting Bool: {now - INIT now}";

    say "We got $rotations.elems() rotations!";
    say "Time after getting count: {now - INIT now}";

    say "Fetching last one...";
    say "Last one's first 5 elements are: $rotations.tail.head(5)";
    say "Time after getting last elem: {now - INIT now}";
}

# OUTPUT:
# Time after getting Bool: 0.0230339
# We got 4999 rotations!
# Time after getting count: 26.04481484
# Fetching last one...
# Last one's first 5 elements are: 4999 0 1 2 3
# Time after getting last elem: 26.0466234

First things first, let's take a look at what we're doing in our Iterator. We take an Iterable (in the sub call above, we use a Range object out of which we can milk 5000 elements), shallow-clone it (using the [ ... ] operator), and keep that clone in the @!stuff attribute of our Iterator. During object instantiation, inside the TWEAK submethod, we also save into the $!n attribute how many rotations we'll be able to produce (one fewer than the number of items in @!stuff).

For each .pull-one of the Iterator, we .rotate our @!stuff attribute, storing the rotated result back in it, as well as making a shallow clone of it, which is what we return for the iteration.

We also already implemented the .skip-one and .skip-at-least optimization methods, where we use a private $!steps attribute to alter how many steps the next .pull-one will .rotate our @!stuff by. Whenever .pull-one is called, we simply reset $!steps to its default value of 1 using the LEAVE phaser.

Let's check out how this thing performs! We store our precious Seq in $rotations variable that we first check for truthiness, to see if it has any elements in it at all; then we tell the world how many rotations we can fish out of that Seq; lastly, we fetch the last element of the Seq and (for screen space reasons) print the first 5 elements of the last rotation.

All three steps—checking .Bool, checking .elems, and fetching the last item with .tail—are timed, and the results aren't that pretty. While .Bool completed relatively quickly, the .elems call took ages (26s)! And that's actually not all of the damage. Recall from PART I of this series that both .Bool and .elems cache the Seq unless special methods are implemented in the Iterator. This means that each of those rotations we made is still there in memory, using up space for nothing! What are we to do? Let's try implementing those special methods .Bool and .elems are looking for!

The only thing we need to change is to add two extra methods to our Iterator that determine how many elements we can generate (.count-only) and whether we have any elements to generate (.bool-only):

method count-only { $!n     }
method bool-only  { $!n > 0 }

For the sake of completeness, here is our previous example, with these two methods added to our Iterator:

sub rotator (*@stuff) {
    Seq.new: class :: does Iterator {
        has int $!n;
        has int $!steps = 1;
        has     @.stuff is required;

        submethod TWEAK { $!n = @!stuff − 1 }

        method count-only { $!n     }
        method bool-only  { $!n > 0 }

        method pull-one {
            if $!n-- > 0 {
                LEAVE $!steps = 1;
                [@!stuff .= rotate: $!steps]
            }
            else {
                IterationEnd
            }
        }
        method skip-one {
            $!n > 0 or return False;
            $!n--; $!steps++;
            True
        }
        method skip-at-least (\n) {
            if $!n > all 0, n {
                $!steps += n;
                $!n     −= n;
                True
            }
            else {
                $!n = 0;
                False
            }
        }
    }.new: stuff => [@stuff]
}

my $rotations := rotator ^5000;

if $rotations {
    say "Time after getting Bool: {now - INIT now}";

    say "We got $rotations.elems() rotations!";
    say "Time after getting count: {now - INIT now}";

    say "Fetching last one...";
    say "Last one's first 5 elements are: $rotations.tail.head(5)";
    say "Time after getting last elem: {now - INIT now}";
}

# OUTPUT:
# Time after getting Bool: 0.0087576
# We got 4999 rotations!
# Time after getting count: 0.00993624
# Fetching last one...
# Last one's first 5 elements are: 4999 0 1 2 3
# Time after getting last elem: 0.0149863

The code is nearly identical, but look at those sweet, sweet timings! Our entire program runs about 1,733 times faster, because our Seq can figure out whether and how many elements it has without having to iterate or rotate anything. The .tail call sees our optimization (side note: that's actually very recent) and it too doesn't have to iterate over anything; it can just use our .skip-at-least optimization to skip to the end. And last but not least, our Seq is no longer being cached, so the only things kept around in memory are the things we care about. It's a huge win-win-win for very little extra code.

But wait... there's more!

Push it real good...

The Seqs we've looked at so far did heavy work: each generated value took a relatively long time to generate. However, Seqs are quite versatile, and at times you'll find that generating a value is cheaper than calling .pull-one and storing that value somewhere. For cases like that, there are a few more methods we can implement to make our Seq perform better.

For the next example, we'll stick with the basics. Our Iterator will generate a sequence of positive even numbers up to the wanted limit. Here's what the call to the sub that makes our Seq looks like:

say evens-up-to 20; # OUTPUT: (2 4 6 8 10 12 14 16 18)

And here's all of the code for it. The particular operation we'll be doing is storing all the values in an Array, by assigning to it:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one { ($!n += 2) < $!limit ?? $!n !! IterationEnd }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;

say now - INIT now; # OUTPUT: 1.00765440

For a limit of 1.7 million, the code takes around a second to run. However, all we do in our Iterator is add some numbers together, so a lot of the time is likely lost in .pull-oneing the values and adding them to the Array, one by one.

In cases like this, implementing a custom .push-all method in our Iterator can help. The method receives one argument: a reification target. We're pretty close to bare "metal" now, so we can't do anything fancy with the reification target object other than call the .push method on it with a single value to add to the target. The .push-all method always returns IterationEnd, since it exhausts the Iterator, so we'll just pop that value right into the return value of the method's Signature:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method push-all (\target --> IterationEnd) {
            target.push: $!n while ($!n += 2) < $!limit;
        }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;
say now - INIT now; # OUTPUT: 0.91364949

Our program is now 10% faster; not a lot. However, since we're doing all the work in .push-all now, we no longer need to deal with state inside the method's body, so we can shave off a bit of time by using lexical variables instead of accessing the object's attributes all the time. We'll make them use native int types for even more speed. Also (at least currently), the += meta operator is more expensive than a simple assignment and a regular +; since we're trying to squeeze out every last bit of juice here, let's take advantage of that as well. So what we have now is this:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method push-all (\target --> IterationEnd) {
            my int $limit = $!limit;
            my int $n     = $!n;
            target.push: $n while ($n = $n + 2) < $limit;
            $!n = $n;
        }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;
say now - INIT now; # OUTPUT: 0.6688109

There we go. Now our program is 1.5 times faster than the original, thanks to .push-all. The gain isn't as dramatic as what we saw with the other methods, but it can come in quite handy when you need it.

There are a few more .push-* methods you can implement to, for example, do something special when your Seq is used in code like...

for $your-seq -> $a, $b, $c { ... }

...where the Iterator would be asked to .push-exactly three items. The idea behind them is similar to .push-all: you push stuff onto the reification target. Their utility and performance gains are even smaller and useful only in particular situations, so I won't be covering them.

It's worth noting the .push-all can be used only with Iterators that are not lazy, since... well... it expects you to push all the items. And what exactly are lazy Iterators? I'm so glad you asked!

A quick brown fox jumped over the lazy Seq

Let's pare our previous even-number-generating Seq down to the basics and make it generate an infinite list of even numbers, using an anonymous state variable:

sub evens {
    Seq.new: class :: does Iterator {
        method pull-one { $ += 2 }
    }.new
}

put evens

Since the list is infinite, it'd take us an infinite amount of time to fetch them all. So what exactly happens when we run the code above? It... quite predictably hangs when the put routine is called; it sits and patiently waits for our infinite Seq to complete. The same issue occurs when trying to assign our Seq to a @-sigiled variable:

my @evens = evens # hangs

Or even when trying to pass our Seq to a sub with a slurpy parameter:

sub meows (*@evens) { say 'Got some evens!' }
meows evens # hangs

That's quite an annoying problem. Fortunately, there's a very easy solution for it. But first, a minor detour to the land of naming clarification!

A rose by any other name would laze as sweet

In Perl 6, some things are, or can be made, "lazy". While that evokes the concept of on-demand or "lazy" evaluation, which is ubiquitous in Perl 6, things that are lazy in Perl 6 aren't just about that. If something is-lazy, it means it always wants to be evaluated lazily, fetching only as many items as needed, even in "mostly lazy" Perl 6 constructs that would otherwise eagerly consume even from sources that generate values on demand.

For example, a sequence of lines read from a file would want to be lazy, as reading them all in at once has the potential to use up all the RAM. An infinite sequence would also want to be is-lazy because an eager evaluation would cause it to hang, as the sequence never completes.

So a thing that is-lazy in Perl 6 can be thought of as being infinite. Sometimes it actually will be infinite, but even if it isn't, it being lazy means it has similar consequences if used eagerly (too much CPU time used, too much RAM, etc).
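
You can ask a thing directly whether it considers itself lazy by calling .is-lazy on it. A quick illustration:

say (1..∞).is-lazy;  # OUTPUT: True   (an infinite Range insists on lazy evaluation)
say (1..10).is-lazy; # OUTPUT: False  (a finite Range is fine to consume eagerly)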


Now back to our infinite list of even numbers. It sounds like all we have to do is make our Seq lazy, and we do that by implementing an .is-lazy method on our Iterator that simply returns True:

sub evens {
    Seq.new: class :: does Iterator {
        method pull-one { $ += 2 }
        method is-lazy (--> True) {}
    }.new
}

sub meows (*@evens) { say 'Got some evens!' }

put         evens; # OUTPUT: ...
my @evens = evens; # doesn't hang
meows       evens; # OUTPUT: Got some evens!

The put routine now detects it's dealing with something terribly long and just outputs some dots. Assignment to the Array no longer hangs (it will instead reify on demand), and the call to the sub with a slurpy parameter doesn't hang either, reifying on demand as well.
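
To see that on-demand reification in action, we can index into the lazy array. In a fresh program that uses the evens sub above, only as many values as are needed get computed:

my @evens = evens;  # nothing gets computed yet
say @evens[3];      # OUTPUT: 8     (reifies just the first four evens: 2, 4, 6, 8)
say @evens[9];      # OUTPUT: 20    (reifies more of them, up to the tenth even number)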

There's one more Iterator optimization method left that we should discuss...

A Sinking Ship

Perl 6 has sink context, similar to "void" context in other languages, which means a value is being discarded:

42;

# OUTPUT:
# WARNINGS for ...:
# Useless use of constant integer 42 in sink context (line 1)

The constant 42 in the above program is in sink context—its value isn't used by anything—and since it's nearly pointless to have it like that, the compiler warns about it.

Not all sinkage is bad, however, and sometimes you may find that gorgeous Seq you worked so hard on being ruthlessly sunk by the user! Let's take a look at what happens when we sink one of our previous examples, the Seq that generates even numbers up to a limit:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

say now - INIT now; # OUTPUT: 5.87409072

Ouch! Iterating our Seq has no side-effects outside of the Iterator that it uses, which means it took the program almost six seconds to do absolutely nothing.

We can remedy the situation by implementing our own .sink-all method. The default implementation keeps calling .pull-one until the end of the Seq (since iterating a Seq may have useful side effects), which is not what we want for our Seq. So let's implement a .sink-all that does nothing!

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method sink-all(--> IterationEnd) {}
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

say now - INIT now; # OUTPUT: 0.0038638

We added a single line of code and made our program 1,520 times faster—the perfect speed up for a program that does nothing!

However, doing nothing is not the only thing .sink-all is good for. Use it for clean-up that would usually happen at the end of iteration (e.g. closing a file handle the Iterator was using). Or simply set the state of the system to what it would be at the end of the iteration (e.g. .seek a file handle to its end, for a sunk Seq that produces lines from it). Or, as an alternative idea, how about warning the user their code might contain an error:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method sink-all(--> IterationEnd) {
            warn "Oh noes! Looks like you sunk all the evens!\n"
                ~ 'Why did you make them in the first place?'
        }
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

# OUTPUT:
# Oh noes! Looks like you sunk all the evens!
# Why did you make them in the first place?
# ...
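
To illustrate the clean-up use case, here's a sketch of a hypothetical lines-of sub whose Iterator produces lines from a file and closes its handle both when iteration finishes normally and when the Seq gets sunk outright (the name and details are mine; error handling omitted):

sub lines-of (Str $path) {
    Seq.new: class :: does Iterator {
        has $.fh is required;
        method pull-one {
            my $line = $!fh.get;   # a line of text, or Nil once we reach the end of the file
            return $line if $line.defined;
            $!fh.close;            # finished normally: tidy up before signalling the end
            IterationEnd
        }
        method sink-all (--> IterationEnd) {
            $!fh.close             # sunk: skip the reading and just close the handle
        }
    }.new: fh => $path.IO.open
}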

That concludes our discussion on optimizing your Iterators. Now, let's talk about using Iterators others have made.

It's a marathon, not a sprint

With all the juicy knowledge about Iterators and Seqs we now possess, we can probably see how this piece of code manages to work without hanging, despite being given an infinite Range of numbers:

.say for ^∞ .grep(*.is-prime).map(* ~ ' is a prime number').head: 5;

# OUTPUT:
# 2 is a prime number
# 3 is a prime number
# 5 is a prime number
# 7 is a prime number
# 11 is a prime number

The infinite Range probably is-lazy. That .grep probably .pull-ones until it finds a prime number. The .map .pull-ones each of the .grep's values and modifies them, and .head allows at most 5 values to be .pull-oned from it.

In short, what we have here is a pipeline of Seqs and Iterators where the Iterator of the next Seq is based on the Iterator of the previous one. For our study purposes, let's cook up a Seq of our own that combines all of the steps above:

sub first-five-primes (*@numbers) {
    Seq.new: class :: does Iterator {
        has     $.iter;
        has int $!produced = 0;
        method pull-one {
            $!produced++ == 5 and return IterationEnd;
            loop {
                my $value := $!iter.pull-one;
                return IterationEnd if $value =:= IterationEnd;
                return "$value is a prime number" if $value.is-prime;
            }
        }
    }.new: iter => @numbers.iterator
}

.say for first-five-primes ^∞;

# OUTPUT:
# 2 is a prime number
# 3 is a prime number
# 5 is a prime number
# 7 is a prime number
# 11 is a prime number

Our sub slurps up its positional arguments and then calls the .iterator method on the @numbers Iterable. This method is available on all Perl 6 objects and lets us interface with the object using Iterator methods directly.

We save @numbers's Iterator in one of the attributes of our own Iterator and create another attribute to keep track of how many items we've produced. In the .pull-one method, we first check whether we've already produced the 5 items we need, and if not, we drop into a loop that calls .pull-one on the other Iterator, the one we got from the @numbers Array.

We recently learned that if an Iterator does not have any more values for us, it will return the IterationEnd constant. A constant whose job is to signal the end of iteration is finicky to deal with, as you can imagine. To detect it, we need to ensure we use the binding operator (:=), not the assignment operator (=), when storing the value we get from .pull-one in a variable. The reason is that the container identity operator (=:=) checks whether its two operands are the very same object: if we assigned the value instead, it would end up inside a brand-new Scalar container, and comparing that container against the IterationEnd constant would never succeed. In other words, we can't stuff the value we .pull-one into just any container we please.
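
Here's the difference in a nutshell, as a tiny illustration; the container identity check succeeds only when the name is bound directly to the constant:

my $assigned  = IterationEnd;     # assignment: the value ends up inside a Scalar container
my $bound    := IterationEnd;     # binding: the name refers to the constant itself
say $assigned =:= IterationEnd;   # OUTPUT: False
say $bound    =:= IterationEnd;   # OUTPUT: True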

In our example program, if we do find that we received IterationEnd from the source Iterator, we simply return it to indicate we're done. If not, we repeat the process until we find a prime number, which we then put into our desired string and that's what we return from our .pull-one.

All the rest of the Iterator methods we've learned about can be called on the source Iterator in a similar fashion as we called .pull-one in our example.
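
For instance, nothing stops us from driving an Iterator by hand, outside of any class. A small illustration using a plain Range:

my $iter := (1..10).iterator;  # any Iterable will hand us its Iterator
say $iter.pull-one;            # OUTPUT: 1
$iter.skip-at-least(5);        # discard the next five values (2 through 6)
say $iter.pull-one;            # OUTPUT: 7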

Conclusion

Today, we've learned a whole ton of stuff! We now know that Seqs are powered by Iterator objects and we can make custom iterators that generate any variety of values we can dream about.

The most basic Iterator has only a .pull-one method that generates a single value and returns IterationEnd when it has no more values to produce. It's not permitted to call .pull-one again once it has generated IterationEnd, and we can write our .pull-one methods with the expectation that this will never happen.

There are plenty of optimization opportunities a custom Iterator can take advantage of. If it can cheaply skip through items, it can implement the .skip-one or .skip-at-least methods. If it can know how many items it'll produce, it can implement .bool-only and .count-only methods that avoid a ton of work and memory use when all the user wants to know is whether the Seq has any values or how many of them it has. And for squeezing out the very last bit of performance, you can take advantage of .push-all and the other .push-* methods that let you push values onto the reification target directly.

When your Iterator .is-lazy, things will treat it with extra care and won't try to fetch all of the items at once. And we can use the .sink-all method to avoid work or warn the user of potential mistakes in their code, when our Seq is being sunk.

Lastly, since we know how to make Iterators and what their methods do, we can make use of Iterators coming from other sources and call methods on them directly, manipulating them just how we want to.

We now have all the tools to work with Seq objects in Perl 6. In Part III of this series, we'll learn how to compactify all of that knowledge and skillfully build Seqs with just a line or two of code, using the sequence operator.

Stay tuned!

-Ofun