6lang: The Naming Discussion Update

Read this article on 6lang.Party

When I rekindled the naming debate a couple of months ago—the discussion on whether "Perl 6" should be renamed—I didn't expect anything more than a collective groan. That wasn't the case, and today I figured I'd post a progress report and list the salient happenings, all the way to my current status as the proud owner of the 6lang.party domain name.

The "Rakudo" Language

The "new" name I mentioned in my original post was Rakudo. As many quickly pointed out, it wasn't the greatest of names because it was the name of an implementation. Yes, I agree, but originally I thought few, if any, would be on board with a new name, or extended name, and Rakudo was basically the only name people already were using, so it stood out as something that could be "hijacked."

The Blog Post Fallout

There was quite a bit of discussion on r/perl, r/perl6, and blogs.perl.org. The general mood among the Perl community members who aren't avid 6lang users was that an entirely new name was a good idea. However, the 6lang users, and especially the core devs, overall argued that "Perl 6" still had some recognition benefits and should not be dropped entirely.

A middle ground was then proposed: extend the language name. The "official" name would be something along the lines of "Blah Perl 6"; users opposed to the 4-letter swear word would just use the name extension on its own, while those who feel the original name has benefits could still reap them.

The decision on the naming extension was placed on the 6.d language release agenda, with the final call on whether, and with what, the name should be extended to be made by Larry when we cut the 6.d language release.

The 6lang

Fast-forward two months. A kind soul (thank you, by the way!) asked Larry what he thought about the naming debate during the last Perl Conference:

Larry opined that we could have other terms by which Perl versions or Perl distributions are marketed. That gives us the option to pick an alternative name to serve as a second name with "official" standing. Personally, I really like this idea, even more than a name extension, because should there indeed be more benefit to a name without "Perl" in it, the alternative name will naturally become the most-used one.

Another core dev, AlexDaniel++, coined an alternative name: spelled 6lang, it can be pronounced as slang, if you want to be fancy. I really liked the name, so I jumped in and registered 6lang.party.

<AlexDaniel> Zoffix++ for making me recognize the need for
     alternative name. For a long time I was against
<AlexDaniel> and honestly, I can start using something like 6lang
     right away. “Rakudo Perl 6” is infringing on
     language/compiler distinction so I'm feeling reluctant
<Zoffix> OK, I'll too start using 6lang
* Zoffix is now a proud owner of 6lang.party :D
<timotimo> wow
<AlexDaniel> that was quick

And a couple of hours later, our Marketing Department churned out a new poster:

The drawback is that the name can't be used as an identifier… and Larry doesn't think it's a terribly sexy name.

* TimToady notes that 6lang isn't gonna work anywhere an identifier
     needs a leading alpha
<TimToady> it's also not a terribly sexy name
<TimToady> I could go for something more like psix, "where the p is silent
     if you want it to be" :)

Although, on the plus side, the name has the benefit that alphabetically it sorts earlier than pretty much any other language.

<AlexDaniel> If we see “6lang” as a more marketable alternative, then
     the fact that some things may not parse it as an identifier
     practically does not matter. However, this little bit is quite useful:
<AlexDaniel> m: <perl5 golang c# 6lang ruby>.sort.say
<camelia> rakudo-moar 39a4b7: OUTPUT: «(6lang c# golang perl5 ruby)␤»
<AlexDaniel> :)
<AlexDaniel> .oO( AAAlang – batteries included )

To 6.d Release And Beyond

So that's where things stand so far. No official decisions have been made yet, but we're thinking about the idea and playing with it. The decision on the naming debate is to be made during the 6.d release.

Having learned a painful lesson from The Christmas release, we're reluctant to put down any dates for the 6.d release, but I suspect it'll be somewhere between the upcoming New Year's and It's-Ready-When-It's-Ready.

See you then \o

The Rakudo Book Project

Read this article on Rakudo.Party

When I first joined the Rakudo project, we used to say "there are none right now; check back in a year" whenever someone asked for a book about the language. Today, there's a whole website for picking out a book, and the number of available books seems to multiply every time I look at it.

Still, I feel something is amiss when I talk to folks on our support chat, when I read blog posts about the language, or when I look at our official language documentation. It's due to that feeling that I wish to join the Rakudo book-writing club and write a few books of my own. I dub it: The Rakudo Book Project.


The Books

The Rakudo Book Project involves 3 main books—The White Book, The Gray Book, and The Black Book—as well as 2 half-books—The Green Book and The Cracked Book.

The White Book will aim to provide introductory material to the Rakudo language. The target audience will benefit from prior programming experience, but it won't be strictly necessary for computer-savvy people. The target audience is "adept beginners", as some might call them.

The book will cover most of Rakudo's features a typical Rakudo programmer might use in their projects, but it won't cover every little thing about each of them. By the end of the book, the readers will have written several programming projects and will be comfortable making useful, real-world Rakudo programs.

More in-depth coverage of the language will be provided by The Gray Book, which is what The White Book's readers would read next. The Black Book will reach even deeper, exploring all of the arcane constructs.

The progression through the books can be thought of as a plant growing in a flower pot. Initially, the roots extend through a large area of the pot, but they don't go all the way to all the walls and are rather sparse. As the plant grows, more and more roots shoot out, covering more and more volume of the pot. The same is true of the books: while reading The White Book alone will let the plant survive, the root coverage will be sparse. However, by the end of The Black Book, the reader will be an expert Rakudo programmer.

Those three books are the core of my planned project. They're supplemented by two half-books on each end of the knowledge spectrum. The Green Book will target absolute programming beginners and get them up to speed just enough so they would be able to comfortably continue their learning using The White Book. On the other end of the spectrum is The Cracked Book. It's a half-book that follows The Black Book and won't provide more advanced techniques per se, but rather arcane "hacks" or even "bad ideas" that one might not wish to use in real-life code but which nevertheless provide some insight into the language.

The Cracked Book is as yet a faint glimmer of an idea. Whether it will actually be made will depend on how much more I will want to say after The Black Book is complete. The Green Book is currently a bit amorphous as well. I have a 12-year-old sibling interested in computers, so The Green Book might end up being a Rakudo For Kids.

The likely order in which the books will be produced is White, Gray, Green, Black, and Cracked. It's an ambitious plan, and so I won't be making any promises for producing more than one book at a time. Thus, the current aim is to produce just The White Book.

The Price

The digital versions of the books will be available for free.

Since Rakudo development can always use more funding, I plan to run crowd-funding campaigns during each book's development. 100% of the collected funds will be used to sponsor Rakudo work (sponsoring someone other than me, of course). The campaigns will start once half of the target book has been created, and the backers will get early preview digital copies as the book is developed further, as well as honourable mentions as Rakudo sponsors in the book itself.

Thus, the first Rakudo Core Fundraiser will launch once I have the first half of The White Book finished. I'm hoping that will happen soon.

The Why

Other than the obvious reason why people write books—giving an alternate take on the material—I'd like to do this to cross an item off my bucket list. Having written a terrible non-fiction book, a lackluster fiction book, and a decent illustrated children's book, I hope to add a great technical book to the list to complete it. I figure, with 5 books in which to attempt it, I'll be successful.

As for my alternate take, I hope to squash the myth that Rakudo is too big to learn, as well as carve out a well-defined path for learners to follow. Just as I could make a living 10 years ago, when I barely spoke English, so a beginner Rakudo programmer can make useful programs with rudimentary knowledge of the language. The key is not to try to learn everything at once and to have a definite path to walk through. Hence the 5 separate books.

I'm hoping at the end of this journey I will have accomplished all of these goals.

See you at the first Rakudo Core Fundraiser.

On Troll Hugging, Hole Digging, and Improving Open Source Communities

Read this article on Rakudo.Party

While observing a recent split in a large open source community, I did some self-reflection and thought about the state of the Rakudo community that I am a part of. It involved learning of its huggable past; thinking of its undulating present; as well as looking for its brighter future.

This article is the outcome. It contains notes to myself on how to be a better human, but I hope they'll have wider appeal and can improve communities I am a part of.

Part I: Digging a Hole

A lot of organizational metaphors involve the act of climbing. You start at the base of a hill or a ladder and you start climbing. The higher you get, the more knowledge, power, and resources you attain. There's a problem with that metaphor: you're facing the backs of the people who came before you, and they're not really paying attention to you.

The people higher up can pull others up to their level, but the problem is they can also push them down, prevent them from climbing, or even accidentally kick down some dirt in the face. As we get higher and higher, the tip of the hill we're climbing gets narrower and narrower, accommodating fewer and fewer people, until progress stops and everyone freezes, waiting for someone higher up to disappear and free up the space for someone lower down to move up to.

A more useful metaphor I think is directly the opposite of a dirt hill: a dirt hole. People dig it.

When you are just starting a project, you're alone. It's just you and a shovel. You dig a few feet down and someone comes to the edge of your hole and looks down on you. You are vulnerable. You offer them a shovel and now there's more than one person digging the hole.

You've been digging longer, so you're a bit further down. You know what the ground is like on that level, and the person above you asks you how to best dig the layers you've already dug through. Once in a while some dirt falls down from their level onto yours, so it's in your best interest to bring them to your level sooner rather than later. Unlike a hill or a ladder, you have no easy way to kick them off; you have to help them. At the same time, you have to ensure more people come to the edge of the hole and start digging along with you. Otherwise, it'd just be a narrow and deep hole, with no easy way in.

There's a parallel to open source development of a large community project like Rakudo: there's a need for a constant supply of fresh users and volunteers, and there's a need for more seasoned members of the community to show the ropes and mentor the less-skilled members. The veterans are too far from the edge of the hole to really know how easy it is to join, but the newbies are well aware of the challenges that prevent more people from joining. No one is more important than anyone else; for a well-shaped hole, both the veterans and the newcomers need to contribute, each in their own patch of the hole.

Here's a badly shaped hole. The walls are too vertical and are crumbling, and it's tough to navigate the hole.

And here's a well-shaped hole. Everyone's more connected. It's easy to get in and start digging. And even those who dug the deepest can still go and help out those who are about to start.

The hole digging metaphor isn't just about the shape of the hole. It's also about people's position within it.

Those who have been digging the hole the longest are the lowest in it. Anything happening up above has great potential to impact those in the lowest ranks: a careless footstep breaks off some dirt and kicks it down the hole.

If a fight breaks out, the community's most senior members would notice the dirt flying down the hole, and it's in their best interests to calm the fighting down and resolve the conflict peacefully.

In fact, a particularly gruesome conflict kicks down enough dirt to make the hole shallower and, in severe cases, bury it entirely.


Part II: The Seven Hugs for a Better Community

Audrey Tang, now Taiwan's Digital Minister, was a prominent Perl 6 community member who created the concept of Troll Hugging. In a nutshell, it's this: Do not feed trolls, but hug them tenderly until they feel comfortable enough to speak about their authentic selves, and then they turn into beautiful princes(ses).

I've never met Audrey in real life and only have her inspiring writing to go by, but I'd like to carry forward the concept of troll hugging, as well as include non-trolls in those we aim to hug.

I thought up some Tips for how to improve things; but Tips is too cliché a name, so how about some Hugs instead? The seven Hugs for a better community.

Hug 1: Gift a Shovel

Always seek to expand our community. Invite people to help us.

A person comes to the edge of the hole you're digging and says: "What the heck are you doing over there?" You explain a few things, the person nods agreeably, wishes you good luck, and continues on their merry way. It was an amicable interaction, but could it be better?

Instead of walking away, the person can help the hole grow larger, by picking a site on the edge of it and starting to dig their own patch. On occasion, some passerby will realize how awesome your hole digging idea is and join you on their own initiative, but you can greatly improve the chances of people joining by gifting the curious passerby a shovel and actively asking them to help you. Some won't be able to, but it's a lot easier to start digging if you already have a shovel in hand.

If someone on the help channel is asking a question, it's possible your project's learning resources could be improved. Answer the question, and then ask that very same person to help improve the learning resources. Now that you've answered the question, that person is the most qualified to improve the learning resources in this situation: they now both know the answer and still remember the thinking process that led to their question and the eventual understanding of the answer.

This works especially well with issues you could fix in less than a minute. It's easy to explain to the person—even to a fresh newcomer—what needs to be done to fix the problem and it gives them experience with working on your project, as well as confidence to try their hand at harder issues in the future.

So invite people to join in. Give them appropriate commit bits and guidance on how to get involved. Even people who think your project sucks could be asked to give a helping hand making it better. They just might.

Hug 2: Feed The Hand That Bites You

Always assume positive intent behind people's words and actions.

The biographical film Temple Grandin depicts Professor Temple Grandin's first steps working at a cattle farm, where cows are constantly prodded and, especially by today's standards, abused. Being autistic, Temple was a lot more sensitive to the environmental stimuli that affected cattle behaviour, and she was able to design a much more efficient and humane holding pen and supporting equipment where cows moved with ease, without prodding and with less stress.

I recall the most infuriating scene in the film, when the old-timer workers came over to Temple's newly built, state-of-the-art holding pen and, confused about the new design, angrily dismantled many of its key pieces. By the time Temple arrived on the job, several cows had drowned on the washing platform, and the workers were pissed off about whatever "idiot" designed this holding pen.

I was hoping Temple would get back at them: get them fired, insult them, anything really! They're clearly too damn dumb to realize just how much better Temple's equipment is and they shouldn't be allowed anywhere near cattle. Am I right? Not really.

Both Temple and the other workers had the same goal: get the cattle washed, dried, and chopped up into delicious steaks and burgers. Without autism, however, the workers didn't have a clue why Temple's design was superior. And lacking that understanding, they went back to what they knew did the job. Temple never got back at the workers, but I've seen others (and myself) get back at the "offenders" in very similar circumstances.

When Rakudo implemented atomic operators that incorporate the atom emoji symbol (⚛), over 220 comments were made about them on Reddit. The overall theme was: how the hell am I supposed to type that, and have the Rakudo people lost their minds, using an emoji as an operator? These comments came from programmers who've been using ASCII symbols in their code for decades. Just like Temple's cattle workers, programmers who never learned how to easily type fancy Unicode characters could, understandably, be baffled that an emoji could ever be efficient to use.

Temple could lash back at these programmers and ridicule them for not being autistic enough to have the required extra knowledge, or she could patiently explain the missing pieces (like Rakudo's ASCII-only alternatives to all fancy Unicode ops).
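For the curious, here's a minimal sketch of what those operators look like in practice; this assumes a Rakudo release that ships the atomic operators (2017.09 or later):

```raku
# A shared counter incremented from several threads at once.
my atomicint $count = 0;
await (^4).map: { start { $count⚛++ for ^1000 } }
say $count;  # always 4000, since the increments are atomic

# The same increment, spelled with the ASCII-only alternative:
atomic-fetch-inc($count);
```

Without the atomic increment, the four threads would race and clobber each other's updates; the ASCII spelling does exactly the same thing as the ⚛ form, for those who'd rather not type the emoji.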

If we spend time to patiently explain the missing information, we get potential new community members. If we merely try to prove who's right and who's wrong, at best we'd just be right. Just like Temple and the workers had a common goal, so do we and many of the people we interact with. If you perceive someone as attacking and dismantling your work, perhaps all they're trying to do is understand how it helps us achieve our common goal. Assume positive intent and respond positively.

Hug 3: We All Leave Footprints

What you do today, the others will follow and do tomorrow.

There's a famed experiment on chimps that demonstrates an interesting quirk in thinking that humans likely possess as well. In a room with several chimps, a bundle of bananas is placed. Whenever any chimp tries to reach for a banana, all of the chimps get sprayed with water. The chimps quickly learn not to reach for bananas.

A new chimp is placed into the room. When it tries to reach for bananas, the other chimps, who know they will get sprayed with water, actually attack the new chimp and prevent it from reaching the bananas. Then, slowly, one by one, the chimps who were sprayed with water in the past are removed and replaced with new chimps who weren't. The pattern remains: whenever a new chimp tries to reach for bananas, all the rest attack it, including the chimps who were never sprayed with water.

The surprising discovery of this experiment is that eventually you end up with a room full of chimps, none of which were ever sprayed with water, who will avoid reaching for the bananas and attack any new chimp that tries to. There are two lessons we can learn from these findings.

First, be mindful of your actions; the new chimps will follow your lead. If all the newbie questions are answered with snark and contempt, the people who manage to stick around and learn things will likely continue to respond with snark and contempt to all the new newbies, perpetuating the cycle of negativity. How we treat newcomers, how we treat old timers, how we treat members of other communities, are all patterns that show new members of the community how to act. Ensure the patterns you leave behind to emulate are positive ones.

Second, avoid attacking chimps who try to reach for bananas. In other words, avoid telling people they can't do something or that something is very hard or impossible. A common pattern is someone says "I'm going to try doing X" and the immediate response is "you can't" or "X is useless". Now the first person's enthusiasm is curbed; they doubt they can succeed. If the first person perceives the naysayer as the expert, they might not even question the judgment and give up right away. And worse yet, the chimp has learned to attack new chimps when they try to reach for the same bananas.

A similar issue exists when you claim something can only be done by the super-star chimp. The claim carries the inherent assumption that the task is so hard it'd be foolish for other chimps to even attempt it. Yes, some tasks are tougher than others, but the only sure way to fail at them is to never attempt them at all.

Hug 4: Speak Up

Point out unwanted behaviour, regardless of who you are and who the offender is.

If a friend ever invites you to participate in an experiment studying authority, you probably should decline, as you might kill someone.

The experiment is this: a man in a lab coat tells you to turn the dial and press the button that gives the person next to you an electric shock. The man in the lab coat writes something down, then tells you to dial in higher numbers and give a larger shock. It's a little fun at first, but as you keep dialing in larger and larger numbers, the person you're shocking appears to be in more and more distress, showing visible signs of severe pain. The scientist tells you to keep going, and you do, shocking your hapless victim with currents far above lethal, until the victim dies. Or rather, until it's revealed the victim is an actor who was faking it all along.

So what's going on? Why did you just fake-kill a guy? The answer is: authority. You perceived the scientist as an authority in this situation and trusted their judgement of the situation more than your own. A similar experiment showed that when you're jaywalking at a busy intersection, more people will follow and jaywalk with you if you look like an authority (e.g. wearing a business suit and carrying a briefcase).

Similar factors are at play when a support chat's "regular" is being abusive to a "newbie". The regular says parsing HTML with regex is wrong and the newbie should use an HTML parser. The newbie, on the other hand, struggled for the whole day to get half the regex working and feels learning to use an HTML parser is far beyond their current skill, so they keep asking for regex help. Tempers flare. Feelings get hurt. Meanwhile, the rest of the people silently look on.

Two things can improve such situations. First, if you're a perceived authority, be mindful of your actions, as they set an example for others to follow (see above, Hug 3: We All Leave Footprints).

Second, and even more important: speak up, regardless of who you are. Question the judgement of the scientist who's applying lethal electric shocks. It's important to point out abusive behaviour and ask the person to stop. It's quite possible they don't realize just how negative their actions are, for reasons ranging from something as simple as being too tired to something much more complex, like drug addiction or mental illness.

Speak up. It's beneficial for all parties involved.

Hug 5: Simply a Hug

A simple hug is a positive interruption.

The aforementioned Professor Temple Grandin had another useful contribution to humanity: a hugging machine.

It's a therapeutic aid for autistic people that, in its crude form, consists of two boards and a lever that brings the boards together, pushing them against the person lying in the middle of the machine. When you have autism, being touched by other humans can be unpleasant, distressing, or even scary. The relaxing and pleasant feeling from the pressure of the machine's boards is likely similar to how neurotypical people experience a hug from another human.

I built my own hugging machine! Now, I'm not good at carpentry, so my machine is entirely digital, but on the bright side, anyone can use it:

It's a bot on #perl6 support chat. Type .hug to hug everyone, or type .hug SomeOne to hug SomeOne. It's a silly, simple thing, but a hug wedged in the middle of a heated, unproductive discussion can quickly shift the tone to something more positive and remind the participants to be kinder to each other.

There's not much more to say about this. It's simply a hug.

Hug 6: Love Others

People are more important than code.

Think back on the last few heated arguments you had with someone. You likely can easily recall who you were arguing with. What you were arguing about is a lot more foggy. And perhaps you don't remember the other party's counter-arguments at all. You remember the person, but the argument faded into unimportance.

It's easy to get caught up in the moment and defend your position to the death; after all, there are specifications, studies, and all sorts of best practices you could link to. It's easy to overestimate the importance of the thing being argued about in the grand scheme of things. It's also easy to push too far and people will not want to dig the dirt hole with you any more.

Always remember that people are more important than code. The argument you so desperately tried to win won't build more code, won't train more people, and won't write more blog posts. At least until the robot uprising, those things get done by people. You need to care for them.

First, consider whether the argument you're participating in is something you even care about. Does it even affect you if the other person tries doing things their way? You'd be surprised how often you'd realize you can just walk away from the argument without a care. But when you can't walk away, consider the impact of your emotional state on the clarity of the discussion. You always have the option to reschedule and ask to discuss it later.

You need people to dig the hole. Cherish them.

Hug 7: Go For The Third Option

Instead of me being right and you being wrong we both could be right-ish.

When you're in a discussion trying to decide something; or giving criticism; or receiving it; there's a trick you can use to make the process more friendly and palatable. I call it, going for the third option.

Suppose you don't like something I tend to do. You ask me to stop. You grasp for words, trying to put the request as softly as you can, while I blush and hold back the tears, realizing that I, the "I", am a terrible human being. The discussion looks something like this:

However, there's no single "I". Since a time parameter is involved, the "I" being reprimanded for the offending behaviour is the person in the past. If you're over 30 years old, you can probably easily recall the "you" from a decade or more ago and see that the past "you" and the current "you" differ vastly on many ideas. The two "you"s are different people.

With that in mind, when discussing my offending behaviour, you and the "I" from the present can work on the third "I", the one in the future. Under this paradigm, the discussion looks like this:

You no longer have the need to be reserved about your criticism and perhaps can discuss things you were originally planning to hold back; things that still matter. And I no longer feel that I'm being attacked—after all, we're examining the past me to figure out how the future me could be better.

The same technique applies to discussions about issues we might disagree about. Instead of trying to list all the things you're right about and all the things I am wrong about and trying to figure out whose solution "wins", we could work on an entirely new third option that combines the best of our ideas, leaving behind the parts either of us thinks are problematic. In the end, we get something we both feel we had a hand in creating. We both win.


Conclusion

At the time of this writing, I've been applying the ideas I discussed in this article for about a week. I think they have something real behind them, as I feel a lot happier now than a week ago and I see some positive changes around me that I think I could attribute to these ideas.

I saw new faces appear in our community, who were gifted shovels and invited to join in the hole digging. I no longer dread reading negative comments on our project's articles, as I know I can view the third option in any feedback given, as well as realize the negative feedback might only be a misunderstanding. I no longer get too wrapped up in decisions that barely affect me.

Working from Audrey's Troll Hugging concept that seeds a positive framework for our community, I think we can expand on it and start hugging each other, as well as the trolls.

I think we can build something pretty damn good.

Let's grab our shovels and get digging.

You're invited: Community Bug SQUASHathon

Rakudo and other repositories in the perl6 GitHub org have plenty of open bug tickets. We decided it would be neat to give them an extra push with concentrated effort, which is why we'd like to organize a monthly, 1-day virtual event where we pick a repository and everyone works on open tickets in that repository.

The day will be the first Saturday of every month. This month we'll be hacking on the Issues of the github.com/perl6/doc repository.

Whether you're a seasoned Rakudo developer or just starting out, join us this Saturday in the #perl6 channel on irc.freenode.net (no specific time) and contribute! If you'd like to simply hang out, you're welcome too; we love company!

See also: our SQUASHathon Wiki or talk to a human about this.

The Hot New Language Named Rakudo

This article represents my own thoughts on the matter alone and is not an official statement on behalf of the Rakudo team or, perhaps, is not even representative of the majority opinion.


When I came to Perl 6 around its first stable Christmas 2015 release, "The Name Issue" was in hot debate. Put simply: Perl 6 is not a replacement to Perl; Perl 6 is not the "next" Perl; Perl 6 is a very different language to Perl; so why does it still have 'Perl' in its name?

From what I understand, the debate raged on for years prior to my arrival, so the topic always felt taboo to talk about, because it always ended up in a heated discussion without a solution in the end. However, we do need that solution.

The major argument I heard (and often peddled myself) for why Perl 6 had 'Perl' in the name was because of brand recognition. The hypothesis was that fewer people would bother to use an unknown language "Foo" than a recognizable language "Perl". Now, two years later, we can examine whether that hypothesis was true and beneficial and act accordingly.

Fo6.d for Thought

The Perl 6 language—which I shall refer to as the Rakudo language for the rest of the article—is versioned separately from its implementations and is defined by the specification. The current version is 6.c "Christmas" and the upcoming version is 6.d "Diwali".

As some know, despite slinging a lot of code in my spare time, I earn my bread under the banner of Multi-Media Designer. While one of the "media" I work with is the Web, and so I do get to write some code once in a while, my office for the past 8-ish years has been located squarely in the Marketing Department, not I.T.

As the Rakudo core team was recently penning down the dates for the 6.d release, I got excited to have the opportunity to do some design and marketing for something quite different from the products at my job. However, I very quickly hit a roadblock. The name "Perl 6" isn't quite marketable.

Ignoring trolls and people whose knowledge of Perl ends with the line-noise quips, Perl is the Grandfather of Web, the Queen of Backwards Compatibility, and Gluest language of all the Glues. Perl is installed by default on many systems, and if you're worthy enough to wield its arcane magic, it's quite damn performant.

Rakudo language, on the other hand, is none of those things. It's a young and hip teenager who doesn't mind breaking long held status quo. Rakudo is the King of Unicode and Queen of Concurrency. It's a "4th generation" language, and if you take the time to learn its many features, it's quite damn concise.

Trying to market the Rakudo language as a "Perl 6" language is like holding a great Poker hand while playing Blackjack—even with a Royal Flush, you still lose the game that's actually being played. The truly distinguishing features of Rakudo don't get any attention, while at the same time people get disappointed when a "Perl" language no longer does things Perl used to do.

So did the hypothesis about Perl brand name recognition hold true? Yes, but the Rakudo language has very different strengths than those the brand represents, which leads to a lot of confusion, disappointment, and annoyance.

As the 6.d language release nears, and with it the ability to make large changes, I think it would benefit us to reflect on the issues of the past two years and improve.

"Just Rename It"

Even if the entire Rakudo community decided a different name was good, there's a teeny-tiny problem of existing infrastructure. Need documentation? You go to perl6.org, not rakudo.org. Need a live, squishy human to help you out? You go to the #perl6 IRC channel, not #rakudo. Need a Rakudo book? Why, then go to perl6book.com and pick any of the books with "Perl 6" in their titles.

This is one of the major things that derailed my thinking on the subject in the past: people saying "just rename it," when clearly it's no easy task. Domain names, email addresses, bug trackers, Reddit subreddits, Facebook groups, Twitter feeds, GitHub orgs, IRC channels, presentations, books, blog posts, videos, hell, even names of some variables ($*PERL) and env vars (PERL6_TEST_DIE_ON_FAIL) would all need to change for a thorough rename job.

Not only would all those things need a rename, the old versions in many cases would need to be able to redirect to the new name. "Just renaming" the perl6.party website and its contents will take me some effort and has already incurred a minor expense for a new domain name. The effort required to do the same everywhere would be monumental, and in the end we'd still go to The Perl Conference and get sponsored by grants from The Perl Foundation.

I think the ship for "just renaming" it sailed a few years before the first stable language release. However, we don't have to be at the mercy of all-or-nothing tactics when there are clear benefits to reap from a name tweak.

Rakudo Perl 6

Rakudo is the name of a mature—and to date, the only one that's usable—implementation of the language. If Wikipedia is to be believed, the name means "The Way of The Camel" or "Paradise."

It's also the name that's ripe for the picking to be the name of the language: those who use the language already have heard the name, so it's familiar; the compiler's repo is rakudo/rakudo, not perl6/rakudo; newcomers are told to install "Rakudo Star," not "Perl 6 Star"; and having an already bikesheded name can cut down on irrelevant discussions when the need for change itself is controversial.

While it's true that re-using the compiler's name for the language creates an ambiguity, it can be resolved by using all-lowercase letters for the compiler and title case for the language—Perl 5 has been doing that for years. In addition, if the executable were renamed from perl6 to rakudo, there'd be fewer accidents of running Rakudo scripts with the perl command, which is currently actively fought against by the recommendation to put use v6 in all programs.

The "Rakudo Perl 6" name for the language was suggested by lizmat++, so I assume there's at least one other core team member who's open to the language name tweak. And I do precisely mean tweak, not change. While a change would be preferable, it stands opposed by existing infrastructure naming and, of course, by those who believe Perl 6 is a fine name that should be kept unchanged. So by tweaking the language name to be "Rakudo Perl 6," we get the benefit of marketing a new release of a hot new language "Rakudo 6.d" instead of a new release of same-name-but-totally-not-Perl-5 "Perl 6.d"; we get to keep using the "perl6" ticket queue on RT without raising too many confused eyebrows; we get to publish Rakudo blog posts that don't get knee-jerk reactions from non-Perl users; we get to attend The Perl Conference without feeling we don't belong; we get to mention how awesome Rakudo is to our peers without fearing yet another pointless "Perl is dead" discussion; we save the trees by not reprinting all of the existing "Perl 6" books; yet we get to... start anew.

It's The Beginning, Not The End

Humans are funny creatures. We don't like to change our minds, lest we appear to not have a clue. We cling to past decisions and things said because abandoning them is admitting you were wrong. However, looking at the past two years, it's very clear to me the name of "Perl 6" has been detrimental to the language. I'm not afraid to admit I was wrong in defending the "Perl 6" name.

It's an indicator that something's wrong when you've spent days writing an amazing technical post but have anxiety about posting it to r/programming, because it'll inevitably end up with quips and jokes about Perl being late to the party. It's an indicator that something's wrong when you're apprehensive about joining a tech discussion to mention how easy the task is to do in "Perl 6," because even well-meaning people have a hard time realizing Perl 6 is an entirely new language.

I'm under no delusion that merely changing the name would instantly make everyone love the language. There are still performance problems to tackle. More bugs to fix. More documentation and tests to write. All these things need humans to work on them and humans care about perception. The assumption that many humans will start using Rakudo simply because it's a better product just does not match reality.

It would be beneficial to change the perception of the Rakudo language. Ignoring the problem won't do that. Including boilerplate text about Perl 6 being a new language that's totally different from Perl 5 at the start of every conversation won't do it. Tweaking the language name to be unique will. It doesn't have to be a dramatic event, but...

I can't do it alone

Last night I registered rakudo.party and changed my Twitter bio to no longer refer to the language as "Perl." In the coming days, I'll update all mentions of "Perl 6" on rakudo.party to read "Rakudo" or "Rakudo language" where it's ambiguous with the rakudo compiler. My IRC hostmask and module descriptions on GitHub will follow suit. My conversations, Twitter hashtags, Facebook posts... all will refer to Rakudo instead of Perl 6, just as I've been doing in this post.

However, that is about the end of my unilateral control of the whole thing. I can't change docs.perl6.org or the next blog post you'll write, which is why I strongly encourage those who care about The Name Issue and especially those who care about success of the Rakudo language to do the same active language name tweak I'm doing.

Acknowledge that the language's full name is "Rakudo Perl 6". Yes, there's a compiler with a similar name, but it's the next best thing after nothing. Shorten the full name of the language to just "Rakudo," to differentiate it from THE Perl; you don't even have to worry about spacing issues if you do! Tell people about Rakudo's unique features, not about how it's trying to catch up to the things Perl 5 does well.

Rakudo has many strengths, but they get muted when we call it "Perl 6". Perl is a brand name for a product with different strengths, and for the past two years pretending Rakudo has the same strengths has proved to be a failed strategy. I believe a name tweak can help these issues and start us on a path with more solid footing. A path that invites newcomers, rather than scaring them off with knee-jerk reactions and fear of using an outmoded product.

I may be wrong about it. I may be the only fucking idiot on the planet with a "#Rakudo" hashtag in their Twitter bio. But... I think I'm right about it, and I hope you'll join me and use the tweaked language name.

-Ofun

Perl 6: Seqs, Drugs, And Rock'n'Roll (Part 2)

Read this article on Perl6.Party

This is the second part in the series! Be sure you read Part I first where we discuss what Seqs are and how to .cache them.

Today, we'll take the Seq apart and see what's up in it; what drives it; and how to make it do exactly what we want.

PART II: That Iterated Quickly

The main piece that makes a Seq do its thing is an object that does the Iterator role. It's this object that knows how to generate the next value, whenever we try to pull a value from a Seq, or push all of its values somewhere, or simply discard all of the remaining values.

Keep in mind that you never need to use an Iterator's methods directly when making use of a Seq as a source of values; they are called under the hood by various Perl 6 constructs. The use case for calling those methods yourself usually arises when we're making an Iterator that's fed by another Iterator, as we'll see.

Pull my finger...

In its most basic form, an Iterator object needs to provide only one method: .pull-one

my $seq := Seq.new: class :: does Iterator {
    method pull-one {
        return $++ if $++ < 4;
        IterationEnd
    }
}.new;

.say for $seq;

# OUTPUT:
# 0
# 1
# 2
# 3

Above, we create a Seq using its .new method, which expects an instantiated Iterator. For that we use an anonymous class that does the Iterator role and provides a single .pull-one method, which uses a pair of anonymous state variables to generate 4 numbers, one per call, and then returns the IterationEnd constant to signal the Iterator does not have any more values to produce.

The Iterator protocol forbids attempting to fetch more values from an Iterator once it generated the IterationEnd value, so your Iterator's methods may assume they'll never get called again past that point.

Meet the rest of the gang

The Iterator role defines several more methods, all of which are optional to implement, and most of which have some sort of default implementation. The extra methods are there for optimization purposes that let you take shortcuts depending on how the sequence is iterated over.

Let's build a Seq that hashes a bunch of data using the Crypt::Bcrypt module (run zef install Crypt::Bcrypt to install it). We'll start with the most basic Iterator that provides a .pull-one method, and then we'll optimize it to perform better in different circumstances.

use Crypt::Bcrypt;

sub hash-it (*@stuff) {
    Seq.new: class :: does Iterator {
        has @.stuff;
        method pull-one {
            @!stuff ?? bcrypt-hash @!stuff.shift, :15rounds
                    !! IterationEnd
        }
    }.new: :@stuff
}

my $hashes := hash-it <foo bar ber>;
for $hashes {
    say "Fetched value #{++$} {now - INIT now}";
    say "\t$_";
}

# OUTPUT:
# Fetched value #1 2.26035863
#     $2b$15$ZspycxXAHoiDpK99YuMWqeXUJX4XZ3cNNzTMwhfF8kEudqli.lSIa
# Fetched value #2 4.49311657
#     $2b$15$GiqWNgaaVbHABT6yBh7aAec0r5Vwl4AUPYmDqPlac.pK4RPOUNv1K
# Fetched value #3 6.71103435
#     $2b$15$zq0mf6Qv3Xv8oIDp686eYeTixCw1aF9/EqpV/bH2SohbbImXRSati

In the above program, we wrapped all the Seq-making stuff inside a sub called hash-it. We slurp all the positional arguments given to that sub and instantiate a new Seq with an anonymous class as the Iterator. We use the @!stuff attribute to store the stuff we need to hash. In the .pull-one method we check if we still have @!stuff to hash; if we do, we shift a value off @!stuff and hash it, using 15 rounds to make the hashing algo take some time. Lastly, we added a say statement to measure how long the program has been running for each iteration, using two now calls, one of which is run with the INIT phaser. From the output, we see it takes about 2.2 seconds to hash a single string.

Skipping breakfast

Using a for loop is not the only way to use the Seq returned by our hashing routine. What if some user doesn't care about the first few hashes? For example, they could write a piece of code like this:

my $hash = hash-it(<foo bar ber>).skip(2).head;
say "Made hash {now - INIT now}";
say bcrypt-match 'ber', $hash;

# OUTPUT:
# Made hash 6.6813790
# True

We've used the Crypt::Bcrypt module's bcrypt-match routine to ensure the hash we got matches our third input string, and it does; but look at the timing in the output. It took 6.7s to produce that single hash!

In fact, things look worse the more items the user tries to skip. If the user calls our hash-it with a ton of items and then tries to .skip the first 1,000,000 elements to get at the 1,000,001st hash, they'll be waiting for about 25 days for that single hash to be produced!!

The reason is our basic Iterator only knows how to .pull-one, so the skip operation still generates the hashes, just to discard them. Since the values our Iterator generates do not depend on previous values, we can implement one of the optimizing methods to skip iterations cheaply:

use Crypt::Bcrypt;

sub hash-it (*@stuff) {
    Seq.new: class :: does Iterator {
        has @.stuff;
        method pull-one {
            @!stuff ?? bcrypt-hash @!stuff.shift, :15rounds
                    !! IterationEnd
        }
        method skip-one {
            return False unless @!stuff;
            @!stuff.shift;
            True
        }
    }.new: :@stuff
}

my $hash = hash-it(<foo bar ber>).skip(2).head;
say "Made hash {now - INIT now}";
say bcrypt-match 'ber', $hash;

# OUTPUT:
# Made hash 2.2548012
# True

We added a .skip-one method to our Iterator that, instead of hashing a value, simply discards it. It needs to return a truthy value if it was able to skip a value (i.e. we had a value we'd otherwise generate in .pull-one, but we skipped it), or a falsy value if there weren't any values to skip.

Now, the .skip method called on our Seq uses our new .skip-one method to cheaply skip through 2 items and then uses .pull-one to generate the third hash. Look at the timing now: 2.2s; the time it takes to generate a single hash.

However, we can kick it up a notch. While we won't notice a difference with our 3-item Seq, the user who was attempting to skip 1,000,000 items still won't get the 2.2s time to generate the 1,000,001st hash: they'd also have to wait for 1,000,000 calls to .skip-one and @!stuff.shift. To optimize skipping over a bunch of items, we can implement the .skip-at-least method (for brevity, just our Iterator class is shown):

class :: does Iterator {
    has @.stuff;
    method pull-one {
        @!stuff
            ?? bcrypt-hash( @!stuff.shift, :15rounds )
            !! IterationEnd
    }
    method skip-one {
        return False unless @!stuff;
        @!stuff.shift;
        True
    }
    method skip-at-least (Int \n) {
        n == @!stuff.splice: 0, n
    }
}

The .skip-at-least method takes an Int of items to skip. It should skip as many as it can, and return a truthy value if it was able to skip that many items, or a falsy value if the number of skipped items was fewer. Now, the user who skips 1,000,000 items will only have to suffer through a single .splice call.

For the sake of completeness, there's another skipping method defined by Iterator: .skip-at-least-pull-one. It follows the same semantics as .skip-at-least, except with .pull-one semantics for return values. Its default implementation involves just calling those two methods, short-circuiting and returning IterationEnd if the .skip-at-least returned a falsy value, and that default implementation is very likely good enough for all Iterators. The method exists as a convenience for code that calls methods on Iterators directly and (at the moment) it's not used in core Rakudo Perl 6 by any methods that can be called on users' Seqs.
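That contract can be sketched like this (my own illustration of the documented behaviour, not Rakudo's actual source), using a bare-bones Iterator that counts 0 through 4:

```raku
# A bare-bones Iterator counting 0..4, with an explicit
# .skip-at-least-pull-one that mirrors the documented default:
# skip n items, then pull one, short-circuiting to IterationEnd.
my $iter := class :: does Iterator {
    has int $!n = 0;
    method pull-one { $!n < 5 ?? $!n++ !! IterationEnd }
    method skip-at-least-pull-one (Int \n) {
        self.skip-at-least(n) ?? self.pull-one !! IterationEnd
    }
}.new;

say $iter.skip-at-least-pull-one(3);                  # OUTPUT: 3
say $iter.skip-at-least-pull-one(9) =:= IterationEnd; # OUTPUT: True
```

The second call returns IterationEnd because the skip falls short of 9 items, so no value is pulled at all.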

A so, so count...

There are two more optimization methods—.bool-only and .count-only—that do not have a default implementation. The first one returns True or False, depending on whether there are still items that can be generated by the Iterator (True if yes). The second one returns the number of items the Iterator can still produce. Importantly these methods must be able to do that without exhausting the Iterator. In other words, after finding these methods implemented, the user of our Iterator can call them and afterwards should still be able to .pull-one all of the items, as if the methods were never called.

Let's make an Iterator that will take an Iterable and .rotate it once per iteration of our Iterator until its tail becomes its head. Basically, we want this:

.say for rotator 1, 2, 3, 4;

# OUTPUT:
# [2 3 4 1]
# [3 4 1 2]
# [4 1 2 3]

This iterator will serve our purpose of studying the two Iterator methods. For a less "made-up" example, try to find the implementations of the iterators for the combinations and permutations routines in the Perl 6 compiler's source code.

Here's a sub that creates our Seq with our shiny Iterator along with some code that operates on it and some timings for different stages of the program:

sub rotator (*@stuff) {
    Seq.new: class :: does Iterator {
        has int $!n;
        has int $!steps = 1;
        has     @.stuff is required;

        submethod TWEAK { $!n = @!stuff − 1 }

        method pull-one {
            if $!n-- > 0 {
                LEAVE $!steps = 1;
                [@!stuff .= rotate: $!steps]
            }
            else {
                IterationEnd
            }
        }
        method skip-one {
            $!n > 0 or return False;
            $!n--; $!steps++;
            True
        }
        method skip-at-least (Int \n) {
            if $!n > all 0, n {
                $!steps += n;
                $!n     −= n;
                True
            }
            else {
                $!n = 0;
                False
            }
        }
    }.new: stuff => [@stuff]
}

my $rotations := rotator ^5000;

if $rotations {
    say "Time after getting Bool: {now - INIT now}";

    say "We got $rotations.elems() rotations!";
    say "Time after getting count: {now - INIT now}";

    say "Fetching last one...";
    say "Last one's first 5 elements are: $rotations.tail.head(5)";
    say "Time after getting last elem: {now - INIT now}";
}

# OUTPUT:
# Time after getting Bool: 0.0230339
# We got 4999 rotations!
# Time after getting count: 26.04481484
# Fetching last one...
# Last one's first 5 elements are: 4999 0 1 2 3
# Time after getting last elem: 26.0466234

First things first, let's take a look at what we're doing in our Iterator. We take an Iterable (in the sub call above, we use a Range object out of which we can milk 5000 elements), shallow-clone it (using the [ ... ] operator), and keep that clone in the @!stuff attribute of our Iterator. During object instantiation, inside the TWEAK submethod, we also save into the $!n attribute the number of rotations we'll produce (one fewer than the number of items in @!stuff).

For each .pull-one of the Iterator, we .rotate our @!stuff attribute, storing the rotated result back in it, as well as making a shallow clone of it, which is what we return for the iteration.

We also already implemented the .skip-one and .skip-at-least optimization methods, where we use a private $!steps attribute to alter how many steps the next .pull-one will .rotate our @!stuff by. Whenever .pull-one is called, we simply reset $!steps to its default value of 1 using the LEAVE phaser.

Let's check out how this thing performs! We store our precious Seq in $rotations variable that we first check for truthiness, to see if it has any elements in it at all; then we tell the world how many rotations we can fish out of that Seq; lastly, we fetch the last element of the Seq and (for screen space reasons) print the first 5 elements of the last rotation.

All three steps—checking .Bool, checking .elems, and fetching the last item with .tail—are timed, and the results aren't that pretty. While the .Bool call completed relatively quickly, the .elems call took ages (26s)! That's actually not all of the damage. Recall from PART I of this series that both .Bool and .elems cache the Seq unless special methods are implemented in the Iterator. This means that each of those rotations we made is still there in memory, using up space for nothing! What are we to do? Let's try implementing those special methods .Bool and .elems are looking for!

The only thing we need to change is to add two extra methods to our Iterator that determine how many elements we can generate (.count-only) and whether we have any elements to generate (.bool-only):

method count-only { $!n     }
method bool-only  { $!n > 0 }

For the sake of completeness, here is our previous example, with these two methods added to our Iterator:

sub rotator (*@stuff) {
    Seq.new: class :: does Iterator {
        has int $!n;
        has int $!steps = 1;
        has     @.stuff is required;

        submethod TWEAK { $!n = @!stuff − 1 }

        method count-only { $!n     }
        method bool-only  { $!n > 0 }

        method pull-one {
            if $!n-- > 0 {
                LEAVE $!steps = 1;
                [@!stuff .= rotate: $!steps]
            }
            else {
                IterationEnd
            }
        }
        method skip-one {
            $!n > 0 or return False;
            $!n--; $!steps++;
            True
        }
        method skip-at-least (\n) {
            if $!n > all 0, n {
                $!steps += n;
                $!n     −= n;
                True
            }
            else {
                $!n = 0;
                False
            }
        }
    }.new: stuff => [@stuff]
}

my $rotations := rotator ^5000;

if $rotations {
    say "Time after getting Bool: {now - INIT now}";

    say "We got $rotations.elems() rotations!";
    say "Time after getting count: {now - INIT now}";

    say "Fetching last one...";
    say "Last one's first 5 elements are: $rotations.tail.head(5)";
    say "Time after getting last elem: {now - INIT now}";
}

# OUTPUT:
# Time after getting Bool: 0.0087576
# We got 4999 rotations!
# Time after getting count: 0.00993624
# Fetching last one...
# Last one's first 5 elements are: 4999 0 1 2 3
# Time after getting last elem: 0.0149863

The code is nearly identical, but look at those sweet, sweet timings! Our entire program runs about 1,733 times faster because our Seq can figure out if and how many elements it has without having to iterate or rotate anything. The .tail call sees our optimization (side note: that's actually very recent) and it too doesn't have to iterate over anything and can just use our .skip-at-least optimization to skip to the end. And last but not least, our Seq is no longer being cached, so the only things kept around in memory are the things we care about. It's a huge win-win-win for very little extra code.

But wait... there's more!

Push it real good...

The Seqs we looked at so far did heavy work: each generated value took a relatively long time to generate. However, Seqs are quite versatile and at times you'll find that generation of a value is cheaper than calling .pull-one and storing that value somewhere. For cases like that, there're a few more methods we can implement to make our Seq perform better.

For the next example, we'll stick with the basics. Our Iterator will generate a sequence of positive even numbers up to the wanted limit. Here's what the call to the sub that makes our Seq looks like:

say evens-up-to 20; # OUTPUT: (2 4 6 8 10 12 14 16 18)

And here's all of the code for it. The particular operation we'll be doing is storing all the values in an Array, by assigning to it:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one { ($!n += 2) < $!limit ?? $!n !! IterationEnd }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;

say now - INIT now; # OUTPUT: 1.00765440

For a limit of 1.7 million, the code takes around a second to run. However, all we do in our Iterator is add some numbers together, so a lot of the time is likely lost in .pull-oneing the values and adding them to the Array, one by one.

In cases like this, implementing a custom .push-all method in our Iterator can help. The method receives one argument that is a reification target. We're pretty close to bare "metal" now, so we can't do anything fancy with the reification target object other than call .push method on it with a single value to add to the target. The .push-all always returns IterationEnd, since it exhausts the Iterator, so we'll just pop that value right into the return value of the method's Signature:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method push-all (\target --> IterationEnd) {
            target.push: $!n while ($!n += 2) < $!limit;
        }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;
say now - INIT now; # OUTPUT: 0.91364949

Our program is now 10% faster; not a lot. However, since we're doing all the work in .push-all now, we no longer need to deal with state inside the method's body, so we can shave off a bit of time by using lexical variables instead of accessing the object's attributes all the time. We'll make them use native int types for even more speed. Also (at least currently), the += metaoperator is more expensive than a simple assignment and a regular +; since we're trying to squeeze out every last bit of juice here, let's take advantage of that as well. So what we have now is this:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method push-all (\target --> IterationEnd) {
            my int $limit = $!limit;
            my int $n     = $!n;
            target.push: $n while ($n = $n + 2) < $limit;
            $!n = $n;
        }
    }.new: :$^limit
}

my @a = evens-up-to 1_700_000;
say now - INIT now; # OUTPUT: 0.6688109

There we go. Now our program is 1.5 times faster than the original, thanks to .push-all. The gain isn't as dramatic as what we saw with the other methods, but it can come in quite handy when you need it.

There are a few more .push-* methods you can implement to, for example, do something special when your Seq is used in code like...

for $your-seq -> $a, $b, $c { ... }

...where the Iterator would be asked to .push-exactly three items. The idea behind them is similar to .push-all: you push stuff onto the reification target. Their utility and performance gains are even smaller and useful only in particular situations, so I won't be covering them.

It's worth noting the .push-all can be used only with Iterators that are not lazy, since... well... it expects you to push all the items. And what exactly are lazy Iterators? I'm so glad you asked!

A quick brown fox jumped over the lazy Seq

Let's pare our previous even-number-generating Seq down to the basics and make it generate an infinite list of even numbers, using an anonymous state variable:

sub evens {
    Seq.new: class :: does Iterator {
        method pull-one { $ += 2 }
    }.new
}

put evens

Since the list is infinite, it'd take us an infinite time to fetch them all. So what exactly happens when we run the code above? It... quite predictably hangs when the put routine is called; it sits and patiently waits for our infinite Seq to complete. The same issue occurs when trying to assign our seq to a @-sigiled variable:

my @evens = evens # hangs

Or even when trying to pass our Seq to a sub with a slurpy parameter:

sub meows (*@evens) { say 'Got some evens!' }
meows evens # hangs

That's quite an annoying problem. Fortunately, there's a very easy solution for it. But first, a minor detour to the land of naming clarification!

A rose by any other name would laze as sweet

In Perl 6 some things are or can be made "lazy". While it evokes the concept of on-demand or "lazy" evaluation, which is ubiquitous in Perl 6, things that are lazy in Perl 6 aren't just about that. If something is-lazy, it means it always wants to be evaluated lazily, fetching only as many items as needed, even in "mostly lazy" Perl 6 constructs that would otherwise eagerly consume even from sources that do on-demand generation.

For example, a sequence of lines read from a file would want to be lazy, as reading them all in at once has the potential to use up all the RAM. An infinite sequence would also want to be is-lazy because an eager evaluation would cause it to hang, as the sequence never completes.

So a thing that is-lazy in Perl 6 can be thought of as being infinite. Sometimes it actually will be infinite, but even if it isn't, it being lazy means it has similar consequences if used eagerly (too much CPU time used, too much RAM, etc).


Now back to our infinite list of even numbers. It sounds like all we have to do is make our Seq lazy and we do that by implementing .is-lazy method on our Iterator that simply returns True:

sub evens {
    Seq.new: class :: does Iterator {
        method pull-one { $ += 2 }
        method is-lazy (--> True) {}
    }.new
}

sub meows (*@evens) { say 'Got some evens!' }

put         evens; # OUTPUT: ...
my @evens = evens; # doesn't hang
meows       evens; # OUTPUT: Got some evens!

The put routine now detects it's dealing with something terribly long and just outputs some dots. Assignment to the Array no longer hangs (and will instead reify on demand). And the call to the sub with a slurpy doesn't hang either and will also reify on demand.

There's one more Iterator optimization method left that we should discuss...

A Sinking Ship

Perl 6 has sink context, similar to "void" context in other languages, which means a value is being discarded:

42;

# OUTPUT:
# WARNINGS for ...:
# Useless use of constant integer 42 in sink context (line 1)

The constant 42 in the above program is in sink context—its value isn't used by anything—and since it's nearly pointless to have it like that, the compiler warns about it.

Not all sinkage is bad, however, and sometimes you may find that the gorgeous Seq you worked so hard on is ruthlessly being sunk by the user! Let's take a look at what happens when we sink one of our previous examples, the Seq that generates even numbers up to a limit:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

say now - INIT now; # OUTPUT: 5.87409072

Ouch! Iterating our Seq has no side-effects outside of the Iterator that it uses, which means it took the program almost six seconds to do absolutely nothing.

We can remedy the situation by implementing our own .sink-all method. Its default implementation .pull-ones until the end of the Seq (since Seqs may have useful side effects), which is not what we want for our Seq. So let's implement a .sink-all that does nothing!

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method sink-all(--> IterationEnd) {}
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

say now - INIT now; # OUTPUT: 0.0038638

We added a single line of code and made our program 1,520 times faster—the perfect speed up for a program that does nothing!

However, doing nothing is not the only thing .sink-all is good for. Use it for cleanup that would usually be done at the end of iteration (e.g. closing a file handle the Iterator was using). Or simply set the state of the system to what it would be at the end of the iteration (e.g. .seek a file handle to the end, for a sunk Seq that produces lines from it). Or, as an alternative idea, how about warning users that their code might contain an error:

sub evens-up-to {
    Seq.new: class :: does Iterator {
        has int $!n = 0;
        has int $.limit is required;
        method pull-one {
            ($!n += 2) < $!limit ?? $!n !! IterationEnd
        }
        method sink-all(--> IterationEnd) {
            warn "Oh noes! Looks like you sunk all the evens!\n"
                ~ 'Why did you make them in the first place?'
        }
    }.new: :$^limit
}

evens-up-to 5_000_000; # sink our Seq

# OUTPUT:
# Oh noes! Looks like you sunk all the evens!
# Why did you make them in the first place?
# ...

That concludes our discussion on optimizing your Iterators. Now, let's talk about using Iterators others have made.

It's a marathon, not a sprint

With all the juicy knowledge about Iterators and Seqs we now possess, we can probably see how this piece of code manages to work without hanging, despite being given an infinite Range of numbers:

.say for ^∞ .grep(*.is-prime).map(* ~ ' is a prime number').head: 5;

# OUTPUT:
# 2 is a prime number
# 3 is a prime number
# 5 is a prime number
# 7 is a prime number
# 11 is a prime number

The infinite Range .is-lazy. The .grep .pull-ones values until it finds a prime number. The .map .pull-ones each of the .grep's values and modifies them, and .head allows at most 5 values to be .pull-oned from it.

In short what we have here is a pipeline of Seqs and Iterators where the Iterator of the next Seq is based on the Iterator of the previous one. For our study purposes, let's cook up a Seq of our own that combines all of the steps above:

sub first-five-primes (*@numbers) {
    Seq.new: class :: does Iterator {
        has     $.iter;
        has int $!produced = 0;
        method pull-one {
            $!produced++ == 5 and return IterationEnd;
            loop {
                my $value := $!iter.pull-one;
                return IterationEnd if $value =:= IterationEnd;
                return "$value is a prime number" if $value.is-prime;
            }
        }
    }.new: iter => @numbers.iterator
}

.say for first-five-primes ^∞;

# OUTPUT:
# 2 is a prime number
# 3 is a prime number
# 5 is a prime number
# 7 is a prime number
# 11 is a prime number

Our sub slurps up its positional arguments and then calls the .iterator method on the @numbers Iterable. This method is available on all Perl 6 objects and lets us interface with the object using Iterator methods directly.

We save the @numbers's Iterator in one of the attributes of our Iterator as well as create another attribute to keep track of how many items we produced. In the .pull-one method, we first check whether we already produced the 5 items we need to produce, and if not, we drop into a loop that calls .pull-one on the other Iterator, the one we got from @numbers Array.

We recently learned that when an Iterator does not have any more values for us, it returns the IterationEnd constant. A constant whose job is to signal the end of iteration is finicky to deal with, as you can imagine: to detect it, we must compare against it with the container identity (=:=) operator, and that check only works if we store the value we get from .pull-one using the binding (:=), not the assignment (=), operator. Assignment would wrap the value in a fresh Scalar container, which would no longer be identical to IterationEnd, so we can't stuff the value we .pull-one into just any container we please.

In our example program, if we do find that we received IterationEnd from the source Iterator, we simply return it to indicate we're done. If not, we repeat the process until we find a prime number, which we then put into our desired string and that's what we return from our .pull-one.

All the rest of the Iterator methods we've learned about can be called on the source Iterator in a similar fashion as we called .pull-one in our example.
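
For instance, here's a tiny sketch that drives a Range's Iterator entirely by hand, using the methods we've covered:

```perl6
my $iter = (1..3).iterator;  # any Iterable can give us its Iterator

$iter.skip-one;                       # cheaply discard the 1
my $value := $iter.pull-one;          # bind, don't assign!
say $value;                           # OUTPUT: 2
say $iter.pull-one =:= IterationEnd;  # OUTPUT: False (the 3 is still there)
say $iter.pull-one =:= IterationEnd;  # OUTPUT: True  (now we're done)
```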

Conclusion

Today, we've learned a whole ton of stuff! We now know that Seqs are powered by Iterator objects and we can make custom iterators that generate any variety of values we can dream about.

The most basic Iterator has only a .pull-one method that generates a single value and returns IterationEnd when it has no more values to produce. It's not permitted to call .pull-one again once it has generated IterationEnd, so we can write our .pull-one methods with the expectation that this will never happen.

There are plenty of optimization opportunities a custom Iterator can take advantage of. If it can cheaply skip through items, it can implement .skip-one or .skip-at-least methods. If it can know how many items it'll produce, it can implement .bool-only and .count-only methods that can avoid a ton of work and memory use when only certain values of a Seq are needed. And for squeezing the very last bit of performance, you can take advantage of .push-all and other .push-* methods that let you push values onto the target directly.

When your Iterator .is-lazy, things will treat it with extra care and won't try to fetch all of the items at once. And we can use the .sink-all method to avoid work or warn the user of potential mistakes in their code, when our Seq is being sunk.

Lastly, since we know how to make Iterators and what their methods do, we can make use of Iterators coming from other sources and call methods on them directly, manipulating them just how we want to.

We now have all the tools to work with Seq objects in Perl 6. In PART III of this series, we'll learn how to compactify all of that knowledge and skillfully build Seqs with just a line or two of code, using the sequence operator.

Stay tuned!

-Ofun

Perl 6: Seqs, Drugs, And Rock'n'Roll

Read this article on Perl6.Party

I vividly recall my first steps in Perl 6, just a couple of months before the first stable release of the language in December 2015. Around that time, Larry Wall was giving a presentation and showed a neat feature—the sequence operator—and it got me amazed at just how powerful the language is:

# First 12 even numbers:
say (2, 4 … ∞)[^12];      # OUTPUT: (2 4 6 8 10 12 14 16 18 20 22 24)

# First 10 powers of 2:
say (2, 2², 2³ … ∞)[^10]; # OUTPUT: (2 4 8 16 32 64 128 256 512 1024)

# First 13 Fibonacci numbers:
say (1, 1, *+* … ∞)[^13]; # OUTPUT: (1 1 2 3 5 8 13 21 34 55 89 144 233)

The ellipsis (…) is the sequence operator and the stuff it makes is the Seq object. And now, a year and a half after Perl 6's first release, I hope to pass on my amazement to a new batch of future Perl 6 programmers.

This is a 3-part series. In PART I of this article, we'll talk about what Seqs are and how to make them without the sequence operator. In PART II, we'll look at the thing behind the curtain of Seqs: the Iterator type and how to make Seqs from our own Iterators. Lastly, in PART III, we'll examine the sequence operator in all of its glory.

Note: I will be using all sorts of fancy Unicode operators and symbols in this article. If you don't like them, consult with the Texas Equivalents page for the equivalent ASCII-only way to type those elements.

PART I: What the Seq is all this about?

The Seq stands for Sequence and the Seq object provides a one-shot way to iterate over a sequence of stuff. New values can be generated on demand—in fact, it's perfectly possible to create infinite sequences—and already-generated values are discarded, never to be seen again, although there's a way to cache them, as we'll see.
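
For a quick taste of that on-demand behaviour, here's an infinite Seq (made with an arbitrary .map call; nothing special about squares) whose values get computed only when we ask for them:

```perl6
# Creating the Seq is instant: no squares are computed yet
my $squares := (1..∞).map: * ** 2;

# Asking for the first five values computes just those five
say $squares[^5]; # OUTPUT: (1 4 9 16 25)
```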

Sequences are driven by Iterator objects that are responsible for generating values. However, in many cases you don't have to create Iterators directly or use their methods while iterating a Seq. There are several ways to make a Seq, and in this section we'll talk about the gather/take construct.

I gather you'll take us to...

The gather statement and take routine are similar to "generators" and "yield" statement in some other languages:

my $seq-full-of-sunshine := gather {
    say  'And nobody cries';
    say  'there’s only butterflies';

    take 'me away';
    say  'A secret place';
    say  'A sweet escape';

    take 'meee awaaay';
    say  'To better days'    ;

    take 'MEEE AWAAAAYYYY';
    say  'A hiding place';
}

Above, we have a code block with lines of song lyrics, some of which we say (print to the screen) and others we take (to be gathered). Just like .say, .take can be used as either a method or a subroutine; there's no real difference, merely convenience.
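
Here's a quick one-liner (separate from our lyrics example) showing the method form, with .take called on the $_ topical variable:

```perl6
# .take as a method is the same as `take $_`
my $seq := gather for 1..3 { .take };
say $seq; # OUTPUT: (1 2 3)
```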

Now, let's iterate over $seq-full-of-sunshine and watch the output:

for $seq-full-of-sunshine {
    ENTER say '▬▬▶ Entering';
    LEAVE say '◀▬▬ Leaving';

    say "❚❚ $_";
}

# OUTPUT:
# And nobody cries
# there’s only butterflies
# ▬▬▶ Entering
# ❚❚ me away
# ◀▬▬ Leaving
# A secret place
# A sweet escape
# ▬▬▶ Entering
# ❚❚ meee awaaay
# ◀▬▬ Leaving
# To better days
# ▬▬▶ Entering
# ❚❚ MEEE AWAAAAYYYY
# ◀▬▬ Leaving
# A hiding place

Notice how the say statements we had inside the gather block didn't actually get executed until we needed to iterate over a value taken after those particular say lines. The block's execution got paused and resumed only when more values from the Seq were requested. The last say call didn't have any more takes after it, so it got executed when the iterator was asked for more values after the last take.

That's exceptional!

The take routine works by throwing a CX::Take control exception that will percolate up the call stack until something takes care of it. This means you can feed a gather not just from an immediate block, but from a bunch of different sources, such as routine calls:

multi what's-that (42)                     { take 'The Answer'            }
multi what's-that (Int $ where *.is-prime) { take 'Tis a prime!'          }
multi what's-that (Numeric)                { take 'Some kind of a number' }

multi what's-that   { how-good-is $^it                   }
sub how-good-is ($) { take rand > ½ ?? 'Tis OK' !! 'Eww' }

my $seq := gather map &what's-that, 1, 31337, 42, 'meows';

.say for $seq;

# OUTPUT:
# Some kind of a number
# Tis a prime!
# The Answer
# Eww

Once again, we iterated over our new Seq with a for loop, and you can see that take called from different multies and even nested sub calls still delivered the value to our gather successfully.

The only limitation is you can't gather takes done in another Promise or in code manually cued in the scheduler:

gather await start take 42;
# OUTPUT:
# Tried to get the result of a broken Promise
#   in block <unit> at test.p6 line 2
#
# Original exception:
#     take without gather

gather $*SCHEDULER.cue: { take 42 }
await Promise.in: 2;
# OUTPUT: Unhandled exception: take without gather

However, nothing's stopping you from using a Channel to proxy your data to be taken in a react block.

my Channel $chan .= new;
my $promise = start gather react whenever $chan { .take }

say "Sending stuff to Channel to gather...";
await start {
    $chan.send: $_ for <a b c>;
    $chan.close;
}
dd await $promise;

# OUTPUT:
# Sending stuff to Channel to gather...
# ("a", "b", "c").Seq

Or gathering takes from within a Supply:

my $supply = supply {
    take 42;
    emit 'Took 42!';
}

my $x := gather react whenever $supply { .say }
say $x;

# OUTPUT: Took 42!
# (42)

Stash into the cache

I mentioned earlier that Seqs are one-shot Iterables that can be iterated only once. So what exactly happens when we try to iterate them the second time?

my $seq := gather take 42;
.say for $seq;
.say for $seq;

# OUTPUT:
# 42
# This Seq has already been iterated, and its values consumed
# (you might solve this by adding .cache on usages of the Seq, or
# by assigning the Seq into an array)

An X::Seq::Consumed exception gets thrown. In fact, Seqs do not even do the Positional role, which is why we didn't use the @ sigil that type-checks for Positional on the variables we stored Seqs in.

The Seq is deemed consumed whenever something asks it for its Iterator after another thing grabbed it, like the for loop would. For example, even if in the first for loop above we would've iterated over just 1 item, we wouldn't be able to resume taking more items in the next for loop, as it'd try to ask for the Seq's iterator that was already taken by the first for loop.
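
We can see that in action with a short snippet: even though the first loop iterates over just one item, the second loop still blows up, because the Seq's iterator is already gone:

```perl6
my $seq := gather { .take for 1..3 };
for $seq { last }  # grab the iterator, pull one item, bail out
.say for $seq;     # throws X::Seq::Consumed
```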

As you can imagine, having Seqs always be one-shot would be somewhat of a pain in the butt. A lot of the time you can afford to keep the entire sequence around, which is the price for being able to access its values more than once, and that's precisely what the Seq.cache method does:

my $seq := gather { take 42; take 70 };
$seq.cache;

.say for $seq;
.say for $seq;

# OUTPUT:
# 42
# 70
# 42
# 70

As long as you call .cache before you fetch the first item of the Seq, you're good to go iterating over it until the heat death of the Universe (or until its cache noms all of your RAM). However, often you do not even need to call .cache yourself.

Many methods will automatically .cache the Seq for you.

There's one more nicety with Seqs losing their one-shotness that you may see referred to as PositionalBindFailover. It's a role that indicates to the parameter binder that the type can still be converted into a Positional, even when it doesn't do the Positional role. In plain English, it means you can do this:

sub foo (@pos) { say @pos[1, 3, 5] }

my $seq := 2, 4 … ∞;
foo $seq; # OUTPUT: (4 8 12)

We have a sub that expects a Positional argument and we give it a Seq which isn't Positional, yet it all works out, because the binder .caches our Seq and uses the List returned by the .cache method as the Positional, thanks to the Seq doing the PositionalBindFailover role.

Last, but not least, if you don't care about all of your Seq's values being generated and cached right there and then, you can simply assign it to a @ sigiled variable, which will reify the Seq and store it as an Array:

my @stuff = gather {
    take 42;
    say "meow";
    take 70;
}

say "Starting to iterate:";
.say for @stuff;

# OUTPUT:
# meow
# Starting to iterate:
# 42
# 70

From the output, we can see say "meow" was executed on assignment to @stuff and not when we actually iterated over the value in the for loop.

Conclusion

In Perl 6, Seqs are one-shot Iterables that don't keep their values around, which makes them very useful for iterating over huge, or even infinite, sequences. However, it's perfectly possible to cache Seq values and re-use them, if that is needed. In fact, many of the Seq's methods will automatically cache the Seq for you.

There are several ways to create Seqs, one of which is to use gather and take, where a gather block will stop its execution and continue it only when more values are needed.

In parts II and III, we'll look at other, more exciting, ways of creating Seqs. Stay tuned!

-Ofun

Perl 6 Release Quality Assurance: Full Ecosystem Toaster

Read this article on Perl6.Party

As some recall, Rakudo's 2017.04 release was somewhat of a trainwreck. It was clear the quality assurance of releases needed to be kicked up a notch. So today, I'll talk about what progress we've made in that area.

Define The Problem

A particular problem that plagued the 2017.04 release was big changes and refactors made in the compiler that passed all of the 150,000+ stresstests yet still caused issues in some ecosystem modules and users' code.

The upcoming 2017.06 has many, many more big changes:

  • IO::ArgFiles were entirely replaced with the new IO::CatHandle implementation
  • IO::Socket got a refactor and sync sockets no longer use libuv
  • IO::Handle got a refactor with encoding and sync IO no longer uses libuv
  • Sets/Bags/Mixes got optimization polish and op semantics finalizations
  • Proc was refactored to be in terms of Proc::Async

The IO and Proc changes are especially impactful, as they affect precomp and module loading as well. Merely passing the stresstests wouldn't give me enough peace of mind for a solid release. It was time to extend the testing.

Going All In

The good news is I didn't actually have to write any new tests. With 836 modules in the Perl 6 ecosystem, the tests were already there for the taking. Best of all, they were mostly written without bias from knowledge of how the core code is implemented, and they carry the personal style variations of hundreds of different coders. This is all perfect for catching regressions in core code. The only problem is running all of that.

While there's a budding effort to get CPANTesters to smoke Perl 6 dists, it's not quite the data I need. I need to smoke a whole ton of modules on a particular pre-release commit, while also smoking them on a previous release on the same box, eliminating setup issues that might contribute to failures, as well as ensuring the results were for the same versions of modules.

My first crude attempt involved firing up a 32-core Google Compute Engine VM and writing a 60-line script that launched 836 Proc::Asyncs—one for each module.

Other than chewing through 125 GB of RAM with a single Perl 6 program, the experiment didn't yield any useful data. Each module had to wait for locks before being installed, and all the Procs were asking zef to install to the same location, so dependency handling was iffy. I needed a more refined solution...

Procs, Kernels, and Murder

So, I started to polish my code. First, I wrote the Proc::Q module that let me queue up a bunch of Procs and scale how many of them run at the same time, based on the number of cores the box has. The Supply.throttle core feature made the job a piece of cake.

However, some modules are naughty or broken, and I needed a way to kill Procs that take too long to run. Alas, I discovered that Proc::Async.kill had a bug in it, where trying to simultaneously kill a bunch of Procs was failing. After some digging, I found the cause: the $*KERNEL.signal method that .kill uses isn't actually thread-safe, and the bug was due to a data race in the initialization of the signal table.

After refactoring Kernel.signal, and fixing Proc::Async.kill, I released Proc::Q module—my first module to require (at the time) the bleedest of bleeding edges: a HEAD commit.

Going Atomic

After cooking up boilerplate DB and Proc::Q code, I was ready to toast the ecosystem. However, it appeared zef wasn't designed, or at least well-tested, for scenarios where up to 40 instances are running module installations simultaneously. I was getting JSON errors from reading ecosystem JSON, broken cache files (due to lack of file locking), and false positives in installations because modules claimed they were already installed.

I initially attempted to solve the JSON errors by looking at an Issue in the ecosystem repo about the updater script not writing atomically. However, even after fixing the updater script, I was still getting invalid JSON errors from zef when reading ecosystem data.

It might be due to something in zef, but instead of investigating further, I followed ugexe++'s advice and told zef not to fetch the ecosystem in each Proc. The broken cache issues were similarly eliminated by disabling caching support. And the false positives were eliminated by telling each zef instance to install the tested module into a separate location.

The final solution involved programmatically editing zef's config file before a toast run to disable auto-updates of CPAN and p6c ecosystem data; the zef module install command in each individual Proc then ended up being:

«zef --/cached --debug install "$module" "--install-to=inst#$where"»

Where $where is a per-module, per-rakudo-commit location. The final issue was flaky test runs, which I resolved by re-testing failed modules one more time, to see if the new run succeeds.

Time is Everything

The toasting of the entire ecosystem on HEAD and 2017.05 releases took about three hours on a 24-core VM, while being unattended. While watching over it and killing the few hanging modules at the end without waiting for them to time out makes a single-commit run take about 65 minutes.

I also did a toast run on a 64-core VM...

Overall, the run took me 50 minutes, and I had to manually kill some modules' tests. However, looking at CPU utilization charts, it seems the run sat idle for dozens of minutes before I came along to kill stuff:

So I think after some polish of avoiding hanging modules and figuring out why (apparently) Proc::Async.kill still doesn't kill everything, the runs can be entirely automated and a single run can be completed in about 20-30 minutes.

This means that even with last-minute big changes pushed to Rakudo, I can still toast the entire ecosystem reasonably fast, detect any potential regressions, fix them, and re-test again.

Reeling In The Catch

The Toaster database is available for viewing at toast.perl6.party. As more commits get toasted, they get added to the database. I plan to clear them out after each release.

The toasting runs I did so far weren't just a chance to play with powerful hardware. The very first issue was detected when toasting Clifford module.

The issue had to do with Lists of Pairs with the same keys being coerced into a MixHash, when the final accumulated weight was zero. The issue was introduced on June 7th and it took me about an hour of digging through the module's guts to find it. Considering it's quite an edge case, I imagine without the toaster runs it would have taken a lot longer to identify this bug. lizmat++ squashed this bug hours after identification and it never made it into any releases.

The other issue detected by toasting had to do with the VM-backed decoder serialization introduced during IO refactor and jnthn++ fixed it a day after detection. One more bug had to do with Proc refactor making Proc not synchronous-enough. It was mercilessly squashed, while fixing a couple of longstanding issues with Proc.

All of these issues weren't detected by the 150,000+ tests in the testsuite and while an argument can be made that the tests are sparse in places, there's no doubt the Toaster has paid off for the effort in making it by catching bugs that might've otherwise made it into the release.

The Future

The future plans for the Toaster are first to make it toast on more platforms, like Windows and MacOS. Eventually, I hope to make toast runs continuous and entirely automated, on less-powerful VMs. An IRC bot would watch for any failures and report them to the dev channel.

Conclusion

The ecosystem Toaster lets core devs test a Rakudo commit on hundreds of software pieces, made by hundreds of different developers, all within a single hour. During its short existence, the Toaster already found issues with ecosystem infrastructure, highly-multi-threaded Perl 6 programs, as well as detected regressions and new bugs that we were able to fix before the release.

The extra testing lets core devs deliver higher-quality releases, which makes Perl 6 more trustworthy to use in production-quality software. The future will see the Toaster improved to test on a wider range of systems, as well as being automated for continued extended testing.

And most importantly, the Toaster makes it possible for any Perl 6 programmer to help core development of Perl 6, by simply publishing a module.

-Ofun

COMPLETION Report / Perl 6 IO TPF Grant

This document is the May, 2017 progress report for TPF Standardization, Test Coverage, and Documentation of Perl 6 I/O Routines grant. I believe I reasonably satisfied the goals of the grant and consider it completed. This is the final report and may reference some of the work/commits previously mentioned in monthly reports.

Thank You!

I'd like to thank all the donors that support The Perl Foundation who made this grant possible. It was a wonderful learning experience for me, and it brings me joy to look back and see Perl 6 improved due to this grant.

Thank You!

Completeness Criteria

Here are the original completeness criteria (in bold) that are listed on the original grant proposal and my comments on their status:

  • rakudo repository will contain the IO Action Plan document and it will be fully implemented. The promised document exists. It's fully implemented except for three items that I listed on the IO Action Plan, but which are currently a bit beyond my skill level to implement. I hope to do them eventually, but outside the scope of this grant. They are:
    • IO::Handle's Closed status. My original proposal would cause some performance issues, so it was decided to improve MoarVM errors instead.
    • Optimize multiple stat calls. This involves creating a new nqp op, with code for it implemented in MoarVM and JVM backends.
    • Use typed exceptions instead of X::AdHoc. I made typed exceptions be thrown wherever I could. The rest require VM-level exceptions and are on the same level as the handle closed status issue (first item above).
  • All of the I/O routines will have tests in roast and documented on docs.perl6.org. If any of the currently implemented but unspecced routines are decided against being included in Perl 6 Language, their implementation will no longer be available in Rakudo. To the best of my knowledge, this is completed in full.
  • The test coverage tool will report all I/O routines as covered and the information will be visible on perl6.wtf (Perl 6's Wonderful Test Files) website. Note: due to current experimental status of the coverage tool, its report may still show some lines or conditionals untested despite them actually being tested; however, it will show the lines where routines' names are specified as covered. To the best of my knowledge, all IO routines currently have tests covering them. Due to its experimental status, the coverage tool shows some attributes as uncovered. I did manually verify all the attributes/routines whose names the tool shows as uncovered contain tests for them. One exception is IO::Notification type (and IO::Path.watch method). While it has full coverage for OSX operating system, it lacks it for other OSes. I tried writing some tests for it, but it looks like the behaviour of the nqp op handling these is broken on Linux and the class needs more work.

Extra Deliverables

I produced these extra deliverables while working on the grant:

  • The Definitive I/O Guide. Providing tutorial-like documentation for Perl 6's I/O, including documenting some of the bad practices I noticed in the ecosystem (and even a Perl 6 book!) and the correct way to perform those tasks. (N.B. as I write this report, the guide could still use a few extra sections to be considered "The Definitive"; I'll write them in upcoming weeks)
  • Performance improvements. I made 23 performance-enhancing commits, with many commits making things more than 200% faster, and the highest improvement making a routine 6300% faster.
  • Trait::IO module. Provides does auto-close pseudo-trait to simplify closing of IO handles.
  • IO::Path::ChildSecure module. Due to large ecosystem usage, IO::Path.child was left as is until 6.d language, at which point it will be made secure (as outlined in the IO Plan). This module provides the secure version in the mean time.
  • IO::Dir module. Provides IO::Path.dir-like functionality, with ability to close open directory without needing to fully exhaust the returned Seq.
  • Die module. Implements Perl-5-like behaviour for &die routine.
  • The "Map of Perl 6 Routines" (or rather the "table") is available on map.perl6.party with its code in perl6/routine-map repo. In near future, I plan to use it to identify incorrect or incomplete entries in our documentation

In addition, I plan to complete these modules some time in the future; the ideas for them were birthed while working on the grant:

  • NL module. Targeted for use in one liners, the module will provide a $*NL dynvar that behaves like Perl 5's $. variable (providing the current $*ARGFILES's file's line number). Its implementation became possible thanks to the newly-implemented IO::CatHandle type.
  • FastIO module. A re-imagination of core IO, the biggest part of which will be the removal of (user-exposed) use of IO::Spec::* types and the $*SPEC variable, which—it is believed—will provide improved performance over core IO. The module is a prototype for some of the proposals that were made during the IO grant and if it offers significant improvements over core IO, its ideas will be used by core IO in future language versions.

Work Performed in May

For the work done in May, many of my commits went into going through the IO routine list, and adding missing tests and documentation, along with fixing bugs (and reporting new ones I found).

The major work was implementation of the IO::CatHandle class that fixed all of the bugs and NYIs with the $*ARGFILES. This work saw the addition of 372 lines of code, 800 lines of tests and 793 lines of documentation.

Work by Other Core Members

jnthn++ completed the handle encoding refactor that will eventually let us get rid of using libuv for synchronous IO and, more importantly, allow us to support user-defined encoders/decoders.

Along with fixing a bunch of bugs, this work altered the performance landscape for IO operations (i.e. some operations may now be a bit faster, others a bit slower), though overall the performance appeared to stay the same.

Tickets Fixed

Grant Commits

During this grant, I've made 417 commits: 134 Rakudo commits, 23 performance-enhancing Rakudo commits, 114 Perl 6 Specification commits, and 146 documentation commits.

Performance Rakudo Commits

I've made 23 performance-enhancing commits to Rakudo's repository:

  • 4032953 Make IO::Handle.open 75% faster
  • dcf1bb2 Make IO::Spec::Unix.rel2abs 35% faster
  • c13480c IO::Path.slurp: make 12%-35% faster; propagate Failures
  • 0e36bb2 Make IO::Spec::Win32!canon-cat 2.3x faster
  • c6fd736 Make IO::Spec::Win32.is-absolute about 63x faster
  • 894ba82 Make IO::Spec::Win32.split about 82% faster
  • 277b6e5 Make IO::Spec::Unix.rel2abs 2.9x faster
  • 74680d4 Make IO::Path.is-absolute about 80% faster
  • ff23416 Make IO::Path.is-relative about 2.1x faster
  • d272667 Make IO::Spec::Unix.join about 40% faster
  • 50429b1 Make IO::Handle.put($x) about 5%-35% faster
  • 204ea59 Make &say(**@args) 70%− faster
  • 6d7fc8e Make &put(**@args) up to 70% faster
  • 76af536 Make 1-arg IO::Handle.say up to 2x faster
  • aa72bde Remove dir's :absolute and :Str; make up to 23% faster
  • 48cf0e6 Make IO::Spec::Cygwin.is-absolute 21x faster
  • c96727a Fix combiners on SPEC::Win32.rel2abs; make 6% faster
  • 0547979 Make IO::Spec::Unix.path consistent and 4.6x faster
  • 8992af1 Fix IO::Spec::Win32.path and make 26x faster
  • 7d6fa73 Make IO::Spec::Win32.catpath 47x faster
  • 494659a Make IO::Spec::Win32.join 26x faster
  • 6ca702f Make IO::Spec::Unix.splitdir 7.7x faster
  • 2816ef7 Make IO::Spec::Win32.splitdir 25x faster

Non-Performance Rakudo Commits

Other than perf commits, I've also made 134 commits to the Rakudo's repository:

  • dd4dfb1 Fix crash in IO::Special .WHICH/.Str
  • 76f7187 Do not cache IO::Path.e results
  • 212cc8a Remove IO::Path.Bridge
  • a01d679 Remove IO::Path.pipe
  • 55abc6d Improve IO::Path.child perf on *nix
  • 4fdebc9 Make IO::Spec::Unix.split 36x Faster
  • 0111f10 Make IO::Spec::Unix.catdir 3.9x Faster
  • fa9aa47 Make R::I::SET_LINE_ENDING_ON_HANDLE 4.1x Faster
  • c360ac2 Fix smartmatch of Cool ~~ IO::Path
  • 0c7e4a0 Do not capture args in .IO method
  • 9d8d7b2 Log all changes to plan made during review period
  • 87987c2 Remove role IO and its .umask method
  • 36ad92a Remove 15 methods from IO::Handle
  • a5800a1 Implement IO::Handle.spurt
  • aa62cd5 Remove &tmpdir and &homedir
  • a0ef2ed Improve &chdir, &indir, and IO::Path.chdir
  • ca1acb7 Fix race in &indir(IO::Path …)
  • 2483d68 Fix regression in &chdir's failure mode
  • 5464b82 Improve &*chdir
  • 4c31903 Add S32-io/chdir-process.t to list of test files to run
  • cb27bce Clean up &open and IO::Path.open
  • 099512b Clean up and improve all spurt routines
  • b62d1a7 Give $*TMPDIR a container
  • b1e7a01 Implement IO::Path.extension 2.0
  • 15a25da Fix ambiguity in empty extension vs no extension
  • 50aea2b Restore IO::Handle.IO
  • 966a7e3 Implement IO::Path.concat-with
  • 94a6909 Clean up IO::Spec::Unix.abs2rel a bit
  • a432b3d Remove IO::Path.abspath (part 2)
  • 954e69e Fix return value of IO::Special methods
  • 67f06b2 Run S32-io/io-special.t test file
  • a0b82ed Make IO::Path::* actually instantiate a subclass
  • 0c8bef5 Implement :parent in IO::Spec::Cygwin.canonpath
  • 0a442ce Remove type constraint in IO::Spec::Cygwin.canonpath
  • b4358af Delete code for IO::Spec::Win32.catfile
  • e681498 Make IO::Path throw when path contains NUL byte
  • 6a8d63d Implement :completely param in IO::Path.resolve
  • b6838ee Remove .f check in .z
  • 184d499 Make IO::Handle.Supply respect handle's mode
  • f1b4af7 Implement IO::Handle.slurp
  • 90da80f Rework read methods in IO::Path/IO::Handle
  • 8c09c84 Fix symlink and link routines
  • da1dea2 Fix &symlink and &link
  • 7f73f92 Make IO::Path.new-from-absolute-path private
  • ff97083 Straighten up rename, move, and copy
  • 0d9ecae Remove multi-dir &mkdir
  • 6ee71c2 Coerce mode in IO::Path.mkdir to Int
  • d46e8df Add IO::Pipe .path and .IO methods
  • c01ebea Make IO::Path.mkdir return invocant on success
  • 1f689a9 Fix up IO::Handle.Str
  • 490ffd1 Do not use self.Str in IO::Path errors
  • 40217ed Swap .child to .concat-with in all the guts
  • fd503f8 Revert "Remove role IO and its .umask method"
  • c95c4a7 Make IO::Path/IO::Special do IO role
  • 214198b Implement proper args for IO::Handle.lock
  • 9a2446c Move Bool return value to signature
  • 51e4629 Amend rules for last part in IO::Path.resolve
  • b8458d3 Reword method child for cleaner code
  • 1887114 Implement IO::Path.child-secure
  • 9d8e391 Fix IO::Path.resolve with combiners; timotimo++
  • 0b5a41b Rename IO::Path.concat-with to .add
  • a98b285 Remove IO::Path.child-secure
  • 8bacad8 Implement IO::Path.sibling
  • 7112a08 Add :D on invocant for file tests
  • b2a64a1 Fix $*CWD inside IO::Path.dir's :test Callable
  • 6fa4bbc Straighten out &slurp/&spurt/&get/&getc/&close
  • 34b58d1 Straighten out &lines/&words
  • d0cd137 Make dir take any IO(), not just Cool
  • 7412184 Make $*HOME default to Nil, not Any
  • 475d9bc Fix display of backslashes in IO::Path.gist
  • 6ef2abd Revert "Fix display of backslashes in IO::Path.gist"
  • 134efd8 Fix .perl for IO::Path and subclasses
  • 69320e7 Fix .IO on :U of IO::Path subclasses
  • eb8d006 Make IO::Handle.iterator a private lines iterator
  • 08a8075 Fix IO::Path.copy/move when source/target are same
  • 973338a Fix IO::Handle.comb/.split; make them .slurp
  • b43ed18 Make IO::Handle.flush fail with typed exceptions
  • 276d4a7 Remove .tell info in IO::Handle.gist
  • f4309de Fix IO::Spec::Unix.is-absolute for combiners on /
  • 06d8800 Fix crash when setting .nl-in ...
  • 7e9496d Make IO::Handle.encoding settable via .new
  • 95e49dc Make IO::Handle.open respect attribute values
  • 6ed14ef Remove :directory from IO::Spec::*.split
  • 9021a48 Make IO::Path.parts a Map instead of Hash
  • a282b8c Fix IO::Handle.perl.EVAL roundtrippage
  • a412788 Make IO::Path.resolve set CWD to $!SPEC.dir-sep
  • 84502dc Implement $limit arg for IO::Handle.words
  • 613bdcf Make IO::Handle.print/.put sig consistent
  • 0646d3f Allow no-arg &prompt
  • 4a8aa27 Implement IO::CatHandle.close
  • 4ad8b17 Implement IO::CatHandle.get
  • 3b668b6 Implement IO::CatHandle.getc
  • 25b664a Implement IO::CatHandle.words
  • 7ebc386 Implement IO::CatHandle.slurp
  • 52b34b7 Implement IO::CatHandle.comb/.split
  • beaa925 Implement IO::CatHandle.read
  • ccc90fd Implement IO::CatHandle.readchars
  • 40f4dc9 Implement IO::CatHandle.Supply
  • 0c9aea7 Implement IO::CatHandle.encoding
  • ee1e185 Implement IO::CatHandle.eof
  • 80686a7 Implement IO::CatHandle.t/.path/.IO/.native-descriptor
  • 993de50 Implement IO::CatHandle.gist/.Str/.opened/.open
  • 677c4ea Implement IO::CatHandle.lock/.unlock/.seek/.tell
  • e657ed1 Implement IO::CatHandle.chomp/.nl-in
  • a452e42 Implement IO::CatHandle.on-switch
  • f539a62 Swap IO::ArgFiles to IO::CatHandle impl
  • fa7aa1c Implement IO::CatHandle.perl method
  • 21fd2c4 Remove IO::Path.watch
  • 65941b2 Revert "Remove IO::Path.watch"
  • a47a78f Remove useless :SPEC/:CWD on some IO subs
  • d13d9c2 Throw out IO::Path.int

Perl 6 Specification Commits

I've made 114 commits to the Perl 6 Specification (roast) repository:

  • 63370fe Test IO::Special .WHICH/.Str do not crash
  • 465795c Test IO::Path.lines(*) does not crash
  • 091931a Expand &open tests
  • 8d6ca7a Cover IO::Path.ACCEPTS
  • 14b6844 Use Numeric instead of IO role in dispatch test
  • 5a7a365 Expand IO::Spec::*.tmpdir tests
  • f48198f Test &indir
  • bd46836 Amend &indir race tests
  • 04333b3 Test &indir fails with non-existent paths by default
  • 73a5448 Remove two fudged &chdir tests
  • 86f79ce Expand &chdir tests
  • 430ab89 Test &*chdir
  • 86c5f9c Delete qp{} tests
  • 3c4e81b Test IO::Path.Str works as advertised
  • ba3e7be Merge S32-io/path.t and S32-io/io-path.t
  • 79ff022 Expand &spurt and IO::Path.spurt tests
  • 1d4e881 Test $*TMPDIR can be temped
  • b23e53e Test IO::Path.extension
  • 2f09f18 Fix incorrect test
  • 305f206 Test empty-string extensions in IO::Path.extension
  • 0e47f25 Test IO::Path.concat-with
  • e5dc376 Expand IO::Path.accessed tests
  • 43ec543 Cover methods of IO::Special
  • bd8d167 Test IO::Path::* instantiate a subclass
  • d8707e7 Cover IO::Spec::Unix.basename
  • c3c51ed Cover IO::Spec::Win32.basename
  • 896033a Cover IO::Spec::QNX.canonpath
  • 7c7fbb4 Cover :parent arg in IO::Spec::Cygwin.canonpath
  • 8f73ad8 Change \0 roundtrip test to \t roundtrip test
  • b16fbd3 Add tests to check nul byte is rejected
  • ee7f05b Move is-path sub to top so it can be reused
  • a809f0f Expand IO::Path.resolve tests
  • feecaf0 Expand file tests
  • a4c53b0 Use bin IO::Handle to test its .Supply
  • 7e4a2ae Swap .slurp-rest to .slurp
  • d4353b6 Rewrite .l on broken symlinks test
  • 416b746 Test symlink routines
  • 8fa49e1 Test link routines
  • 637500d Spec IO::Pipe.path/.IO returns IO::Path type object
  • 64ff572 Cover IO::Path/IO::Pipe's .Str/.path/.IO
  • 4194755 Test IO::Handle.lock/.unlock
  • a716962 Amend rules for last part in IO::Path.resolve
  • f3c5dae Test IO::Path.child-secure
  • 92217f7 Test IO::Path.child-secure with combiners
  • 39677c4 IO::Path.concat-with got renamed to .add
  • 7a063b5 Fudge .child-secure tests
  • 3b36d4d Test IO::Path.sibling
  • 41b7f9f Test $*CWD in IO::Path.dir(:test) Callable
  • 18d9c04 Cover IO::Handle.spurt
  • 8f78ca6 Test &words with IO::ArgFiles
  • ea137f6 Cover IO::Handle.tell
  • 71a6423 Add $*HOME tests
  • 95d68a2 Test IO::Path.gist does escapes of backslashes
  • de89d25 Revert "Test IO::Path.gist does escapes of backslashes"
  • 9e8b154 Test IO::Handle.close can be...
  • 853f76f Test IO::Pipe.close returns pipe's Proc
  • d543e75 Test IO::Handle.DESTROY closes the handle
  • 1ed18b4 Add test for .perl.EVAL roundtrip with combiners
  • 704210c Test we can roundtrip IO::Path.perl
  • 2689eb1 Test .IO on :U of IO::Path subclasses
  • 40353f1 Test for IO::Handle:D { ... } loops over handle
  • 4fdb850 Test IO::Path.copy/move when source/target are same
  • 98917dc Test IO::Path.dir's absoluteness behaviour
  • 71eebc7 Test IO::Spec::Unix.extension
  • 4495615 Test IO::Handle.flush
  • 60f5a6d Test IO::Handle.t when handle is a TTY
  • 31e3993 Test IO::Path*.gist
  • c481433 Test .is-absolute method for / with combiners
  • 8ee0a0a Test IO::Spec::Win32.rel2abs with combiners
  • a41027f Test IO::Handle.nl-in can be set
  • e82b798 Test IO::Handle.open respects attributes
  • 2c29150 Test IO::Handle.nl-in attribute
  • 03ce93b Test IO::Handle.encoding can be set
  • 8ae81c0 Test no-arg candidate of &note
  • fb61306 Test IO::Path.parts attribute
  • 7266522 Test return type of IO::Spec::Unix.path
  • 6ac3b4a Test IO::Spec::Win32.path
  • dbbea15 Test IO::Handle.perl.EVAL roundtrips
  • 5eb513c Test IO::Path.resolve sets CWD to $!SPEC.dir-sep
  • b0c4a7a Test &words, IO::Handle.words, and IO::Path.words
  • f3d1f67 Test $limit arg with &lines/IO::*.lines
  • 4f5589b Add test for handle leak in IO::Path.lines
  • 4d0f97a Add &put/IO::Handle.put tests
  • 125fe18 Add &prompt tests
  • 939ca8d Test IO::CatHandle.close
  • 9833012 Test IO::CatHandle.get
  • 2f65a72 Test IO::CatHandle.getc
  • a4a7eaa Test IO::CatHandle.words
  • 1131c09 Add &put/IO::Handle.put tests
  • 80de9b6 Add &prompt tests
  • bacfd9f Test IO::CatHandle.slurp
  • e78e3c0 Test IO::CatHandle.comb/.split
  • f1c1125 Test IO::CatHandle.read
  • e9e78e1 Test IO::CatHandle.readchars
  • 0479087 Test IO::CatHandle.Supply
  • 71953e3 Test IO::CatHandle.encoding
  • db4847e Test IO::CatHandle.eof
  • 175ba45 Test IO::CatHandle.t/.path/.IO/.native-descriptor
  • c6cc66a Test IO::CatHandle.gist/.Str/.opened/.open
  • dcdac1a Test IO::CatHandle.lock/.unlock/.seek/.tell
  • f48c26e Test IO::CatHandle.chomp/.nl-in
  • 8afd758 Test IO::CatHandle.DESTROY
  • c7eff2b Test IO::CatHandle.on-switch
  • e87e20d Test IO::CatHandle.next-handle
  • 28717f0 Test IO::CatHandle.perl method
  • 432bf94 Test IO::Path.watch
  • ce1b637 Test IO::Handle.say
  • 0bb6298 Test IO::Handle.print-nl
  • 47c88ab Test IO::Pipe.proc attribute
  • 945621d Test IO::Path.SPEC attribute
  • 5fb4b63 Test IO::Path.CWD/.path attributes
  • d0e5701 Test IO::Path.Numeric and other .numeric methods
  • 94d7133 Test 0-arg &say/&put/&print
  • 38c61cd Test &slurp() and &slurp(IO::Handle)

Perl 6 Documentation Commits

I've made 146 commits to the Perl 6 Documentation repository:

  • fd7a41b Improve code example
  • 110efb4 No need for .ends-with
  • 69d32da Remove IO::Handle.z
  • d02ae7d Remove IO::Handle.rw and .rwx
  • ccae74a Fix incorrect information for IO::Path.absolute
  • 3cf943d Expand IO::Path.relative
  • cc496eb Remove mention of IO.umask
  • 335a98d Remove mention of role IO
  • cc6539b Remove 8 methods from IO::Handle
  • 0511e07 Document IO::Spec::*.tmpdir
  • db36655 Remove tip to use $*SPEC to detect OS
  • 839a6b3 Expand docs for $*HOME and $*TMPDIR
  • d050d4b Remove IO::Path.chdir prose
  • 1d0e433 Document &chdir
  • 3fdc6dc Document &*chdir
  • e1a299c Reword "defined as" for &*chdir
  • e5225be Fix URL to &*chdir
  • bf377c7 Document &indir
  • 5aa614f Improve suggestion for Perl 5's opendir
  • a53015a Clarify value of IO::Path.path
  • bdd18f1 Fix desc of IO::Path.Str
  • b78d4fd Include type names in links to methods
  • b8fba97 Point out my $*CWD = chdir … is an error
  • d5abceb Write docs for all spurt routines
  • b9e692e Document new IO::Path.extension
  • 65cc372 Document IO::Path.concat-with
  • 24a6ea9 Toss all of the TODO methods in IO::Spec*
  • 1f75ddc Document IO::Spec*.abs2rel
  • cc62dd2 Kill IO::Path.abspath
  • 1973010 Document IO::Path.ACCEPTS
  • b3a9324 Expand/fix up IO::Path.accessed
  • 1cd7de0 Fix up type graph
  • 56256d0 Minor formatting improvements in IO::Special
  • 184342c Document IO::Special.what
  • 6bd0f98 Dissuade readers from using IO::Spec*
  • 7afd9c4 Remove unrelated related classes
  • a43ecb9 Document IO::Path's $.SPEC and $.CWD
  • e9b6809 Document IO::Path::* subclasses
  • 9102b51 Fix up IO::Path.basename
  • 5c1d3b6 Document IO::Spec::Unix.basename
  • a1cb80b Document IO::Spec::Win32.basename
  • 28b6283 Document IO::Spec::*.canonpath
  • 50e5565 Document IO::Spec::*.catdir and .catfile
  • dbdc995 Document IO::Spec::*.catpath
  • 0ca2295 Reword/expand IO::Path intro prose
  • 45e84ad Move IO::Path.path to attributes
  • b9de84f Remove DateTime tutorial from IO::Path docs
  • 69b2082 Document IO::Path.chdir
  • d436f3c Document IO::Spec::* don't do any validation
  • 4090446 Improve chmod docs
  • 1527d32 Document :completely arg to IO::Path.resolve
  • 372545c Straighten up file test docs
  • a30fae6 Avoid potential confusion with use of word "object"
  • 2aa3c9f Document new behaviour of IO::Handle.Supply
  • 56b50fe Document IO::Handle.slurp
  • 017acd4 Improve docs for IO::Path.slurp
  • 0f49bb5 List Rakudo-supported encodings in open()
  • e60da5c List utf-* alias examples too since they're common
  • f83f78c Use idiomatic Perl 6 in example
  • fff866f Fix docs for symlink/link routines
  • aeeec94 Straighten up copy, move, rename
  • 923ea05 Straighten up mkdir docs
  • 47b0526 Explicitly spell out caveats of IO::Path.Str
  • 60b9227 Change return value for mkdir
  • 8d95371 Expand IO::Handle/IO::Pipe.path docs
  • fd8a5ed Document IO::Pipe.path
  • bd4fa68 Document IO::Handle/IO::Pipe.IO
  • 2aaf12a Document IO::Handle.Str
  • 53f2b99 Document role IO's new purpose
  • 160c6a2 Document IO::Handle.lock/.unlock
  • 3145979 Document IO::Path.child-secure
  • c5524ef Rename IO::Path.concat-with to .add
  • 81a5806 Amend IO::Path.resolve: :completely
  • 6ca67e4 Start sketching out Definitive IO Guide™
  • b9c9117 Toss IO::Path.child-secure
  • 61cb776 Document IO::Path.sibling
  • 0fc39a6 Fix typegraph
  • 9a63dc4 Document IO::Path.cleanup
  • 2387ce3 Re-write IO::Handle.close docs
  • 0def0d1 Amend IO::Handle.close docs
  • c7e32e2 Document IO::Spec::Unix.curupdir
  • fe489dc Document IO::Spec::Unix.curdir
  • 83d5de0 Document IO::Spec::Unix.updir
  • 4804128 Document IO::Handle.DESTROY
  • c991862 Add warning to dir about...
  • eca21ff Document copy/move behaviour for same target/source
  • 6c2b8b2 Document IO::Path/IO::Handle.comb
  • fb29e04 Include exception used in IO::Path.resolve
  • 69d473f Document IO::Spec::*.devnull
  • 994d671 List IO::Dir as one of the means...
  • 4432ef3 Finish up IO::Path.dir docs
  • 64355c8 Document IO::Spec::*.dir-sep
  • 914c100 Finish up IO::Path.dirname
  • 8d5e31c Document IO::Handle.encoding
  • d5c36aa Finish off IO::Handle.eof
  • e9de97e Document IO::Spec::*.extension
  • bf7ec00 Document IO::Handle.flush
  • 25bce38 Document IO::Path.succ
  • 8233960 Improve IO::Handle.t docs
  • b4006a2 Be explicit what IO::Handle.opened returns
  • c4f27a7 Document IO::Path.pred
  • 860333f Remove entirely-invented "File test operators"
  • ab0bd7a Document IO::Path.Numeric/.Int
  • 4f81f08 Improve IO::Handle.get docs
  • c45d389 Finish off IO::Handle.getc/&getc docs
  • a4012e0 Document IO::Handle.gist
  • d15b0c7 Document IO::Path.gist
  • 1cf6932 Document IO::Spec::*.is-absolute
  • 4e88b84 Finish up IO::Path.is-absolute
  • 497e7f7 Finish off IO::Path.is-relative
  • f7e75c1 Document IO::Handle.nl-in
  • e309ddd Finish up &note
  • 81900cb Finish off IO::Path.parent
  • 59cbc38 Finish off IO::Path.parts
  • b99a666 Finish off IO::Path.path/.IO
  • b070999 Document IO::Spec::*.path
  • bace8ff Document IO::Path*.perl
  • dfdd845 Add "The Basics" section to TDIOG
  • cdc701e Add "What's an IO::Path Anyway?" section to TDIOG
  • 0d6d058 Add "Writing into files" Section to TDIOG
  • a6365f3 Document IO::Handle.words/&words
  • 2e25c82 Document IO::Spec::*.join
  • 49e58bd Document IO::Handle.lines
  • 1744820 Document IO::Path.lines
  • f3f70a0 Document IO::Path.words
  • 509f0e8 Fix incorrect suggested routine
  • a6f1cbf Fix up IO::Handle.print
  • 8f53830 Fix up IO::Handle.print-nl
  • dc50211 Fix &prompt
  • 98965b3 Fix up IO::Handle.split
  • bd702e2 Fix up IO::Handle.comb
  • 6dd92b8 Document IO::CatHandle
  • edeb069 Document IO::Path.split
  • 2d96596 Document IO::Spec::*.split
  • 129c097 Document IO::Spec::*.splitdir
  • b946960 Document IO::Spec::*.splitpath
  • dcd7490 Fix rmdir docs
  • 2a7bd17 Document IO::Spec::*.rel2abs
  • f45241f Document IO::Spec::*.rootdir
  • 70a80ec Document IO::Handle.put
  • 6f58ed0 Polish IO::Handle.say
  • 3790a0f Polish &put/&print/&say
  • ebb6f53 Document IO::Handle.nl-out attribute
  • 53c9c91 Document IO::Handle.chomp attribute
  • ca2a3a0 Improve &open/IO::Handle.open docs
  • 856e846 Add Reading From Files section to TDIOG

Perl 6 IO TPF Grant: Monthly Report (April, 2017)

This document is the April 2017 progress report for the TPF Standardization, Test Coverage, and Documentation of Perl 6 I/O Routines grant.

Timing

As proposed to and approved by the Grant Manager, I've extended the due date for this grant by one extra month, in exchange for doing some extra optimization work on IO routines at no extra cost. The new completion date is May 22nd, right after the next Rakudo compiler release.

Communications

I've created and published three notices as part of this grant, informing users of what is changing and how best to upgrade their code, where needed:

IO Action Plan Progress

Most of the IO Action Plan has been implemented and shipped in Rakudo's 2017.04.2 release. The remaining items are:

  • Implement a better way to signal closed handle status (was omitted from the release because the original suggestion for doing this was not ideal; it is likely better to do this on the VM end)
  • Implement IO::CatHandle as a generalized IO::ArgFiles (was omitted from the release because it was decided to make it mostly usable wherever IO::Handle can be used, and IO::ArgFiles is far from that, having only a handful of methods implemented)
  • Optimization of the way we perform stat calls for multiple file tests (entirely internal that requires no changes to users' code)
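To give a flavour of the IO::CatHandle item above, here is a hedged sketch of how such a generalized handle could be used, assuming the interface that later shipped in Rakudo (the file names are hypothetical):

```perl6
# Sketch only: assumes IO::CatHandle accepts a list of path sources
# and exposes the usual IO::Handle reading methods.
my $cat = IO::CatHandle.new: 'log1.txt', 'log2.txt';

# .lines iterates over all sources in order, switching to the
# next handle transparently when the current one reaches EOF:
.say for $cat.lines;
$cat.close;
```

The point of the design is that code written against IO::Handle's reading API should work unchanged when handed an IO::CatHandle.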

Documentation and Coverage Progress

In my spreadsheet of all the IO routines and their candidates, the totals show that 40% have been documented and tested. Some of the remaining 60% may already have had tests/docs added while implementing the IO Action Plan or earlier, and merely need checking and verification.

Optimizations

Some of the optimizations I promised to deliver in exchange for grant deadline extension were already done on IO::Spec::Unix and IO::Path routines and have made it into the 2017.04.2 release. Most of the optimizations that will be done in the upcoming month will be done in IO::Spec::Win32 and will largely affect Windows users.

IO Optimizations in 2017.04.2 Done by Other Core Members:

  • Elizabeth Mattijsen made .slurp 2x faster rakudo/b4d80c0
  • Samantha McVey made nqp::index—which is used in path operations—2x faster rakudo/f1fc879
  • IO::Pipe.lines was made 3.2x faster by relying on work done by Elizabeth Mattijsen rakudo/0c62815

Tickets Resolved

The following tickets have been resolved as part of the grant:

Possibly more tickets were addressed by the IO Action Plan implementation, but they still need further review.

Bugs Fixed

  • Fixed a bug in IO::Path.resolve with combiners tucked onto the path separator. Fix in rakudo/9d8e391f3b; tests in roast/92217f75ce. The bug was identified by Timo Paulssen while testing the secure implementation of IO::Path.child
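For context, here is a hedged sketch of the class of bug involved. In Perl 6's grapheme-aware (NFG) strings, a combining character attaches to the preceding character, which can be the path separator itself; the path shown is purely illustrative, not the exact failing case from the ticket:

```perl6
# Sketch only: a combining diaeresis (\x[308]) sitting right after
# the separator attaches to "/" as a single grapheme.
my $path = "/\x[308]foo".IO;

# The fix ensures such combiners survive path operations like
# .resolve instead of being silently dropped or mangled:
say $path.resolve;
```

The roast tests referenced above assert this kind of round-tripping for several routines, not just .resolve.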

IO Bug Fixes in 2017.04.2 Done by Other Core Members:

  • Timo Paulssen fixed a bug with IO::Path types not being accepted by the is native NativeCall trait rakudo/99840804
  • Elizabeth Mattijsen fixed an issue in assignment to dynamics, which made it possible to temp the $*TMPDIR variable rakudo/1b9d53
  • Jonathan Worthington fixed a crash when slurping large files in binary mode with &slurp or IO::Path.slurp rakudo/d0924f1a2
  • Jonathan Worthington fixed a bug with binary slurp reading zero bytes when another thread is causing a lot of GC rakudo/756877e

Commits

So far, I've committed 192 IO grant commits to the rakudo/roast/doc repos.

Rakudo

69 IO grant commits:

  • c6fd736 Make IO::Spec::Win32.is-absolute about 63x faster
  • 7112a08 Add :D on invocant for file tests
  • 8bacad8 Implement IO::Path.sibling
  • a98b285 Remove IO::Path.child-secure
  • 0b5a41b Rename IO::Path.concat-with to .add
  • 9d8e391 Fix IO::Path.resolve with combiners; timotimo++
  • 1887114 Implement IO::Path.child-secure
  • b8458d3 Reword method child for cleaner code
  • 51e4629 Amend rules for last part in IO::Path.resolve
  • 9a2446c Move Bool return value to signature
  • 214198b Implement proper args for IO::Handle.lock
  • c95c4a7 Make IO::Path/IO::Special do IO role
  • fd503f8 Revert "Remove role IO and its .umask method"
  • 0e36bb2 Make IO::Spec::Win32!canon-cat 2.3x faster
  • 40217ed Swap .child to .concat-with in all the guts
  • 490ffd1 Do not use self.Str in IO::Path errors
  • 1f689a9 Fix up IO::Handle.Str
  • c01ebea Make IO::Path.mkdir return invocant on success
  • d46e8df Add IO::Pipe .path and .IO methods
  • 6ee71c2 Coerce mode in IO::Path.mkdir to Int
  • 0d9ecae Remove multi-dir &mkdir
  • ff97083 Straighten up rename, move, and copy
  • 7f73f92 Make IO::Path.new-from-absolute-path private
  • da1dea2 Fix &symlink and &link
  • 8c09c84 Fix symlink and link routines
  • 90da80f Rework read methods in IO::Path/IO::Handle
  • c13480c IO::Path.slurp: make 12%-35% faster; propagate Failures
  • f1b4af7 Implement IO::Handle.slurp
  • 184d499 Make IO::Handle.Supply respect handle's mode
  • b6838ee Remove .f check in .z
  • 6a8d63d Implement :completely param in IO::Path.resolve
  • e681498 Make IO::Path throw when path contains NUL byte
  • b4358af Delete code for IO::Spec::Win32.catfile
  • 0a442ce Remove type constraint in IO::Spec::Cygwin.canonpath
  • 0c8bef5 Implement :parent in IO::Spec::Cygwin.canonpath
  • a0b82ed Make IO::Path::* actually instantiate a subclass
  • 67f06b2 Run S32-io/io-special.t test file
  • 954e69e Fix return value of IO::Special methods
  • a432b3d Remove IO::Path.abspath (part 2)
  • 94a6909 Clean up IO::Spec::Unix.abs2rel a bit
  • 966a7e3 Implement IO::Path.concat-with
  • 50aea2b Restore IO::Handle.IO
  • 15a25da Fix ambiguity in empty extension vs no extension
  • b1e7a01 Implement IO::Path.extension 2.0
  • b62d1a7 Give $*TMPDIR a container
  • 099512b Clean up and improve all spurt routines
  • cb27bce Clean up &open and IO::Path.open
  • 4c31903 Add S32-io/chdir-process.t to list of test files to run
  • 5464b82 Improve &*chdir
  • 2483d68 Fix regression in &chdir's failure mode
  • ca1acb7 Fix race in &indir(IO::Path …)
  • a0ef2ed Improve &chdir, &indir, and IO::Path.chdir
  • aa62cd5 Remove &tmpdir and &homedir
  • a5800a1 Implement IO::Handle.spurt
  • 36ad92a Remove 15 methods from IO::Handle
  • 87987c2 Remove role IO and its .umask method
  • 9d8d7b2 Log all changes to plan made during review period
  • 0c7e4a0 Do not capture args in .IO method
  • c360ac2 Fix smartmatch of Cool ~~ IO::Path
  • fa9aa47 Make R::I::SET_LINE_ENDING_ON_HANDLE 4.1x Faster
  • 0111f10 Make IO::Spec::Unix.catdir 3.9x Faster
  • 4fdebc9 Make IO::Spec::Unix.split 36x Faster
  • dcf1bb2 Make IO::Spec::Unix.rel2abs 35% faster
  • 55abc6d Improve IO::Path.child perf on *nix
  • 4032953 Make IO::Handle.open 75% faster
  • a01d679 Remove IO::Path.pipe
  • 212cc8a Remove IO::Path.Bridge
  • 76f7187 Do not cache IO::Path.e results
  • dd4dfb1 Fix crash in IO::Special .WHICH/.Str

Perl 6 Specification

47 IO grant commits:

  • 3b36d4d Test IO::Path.sibling
  • 7a063b5 Fudge .child-secure tests
  • 39677c4 IO::Path.concat-with got renamed to .add
  • 92217f7 Test IO::Path.child-secure with combiners
  • f3c5dae Test IO::Path.child-secure
  • a716962 Amend rules for last part in IO::Path.resolve
  • 4194755 Test IO::Handle.lock/.unlock
  • 64ff572 Cover IO::Path/IO::Pipe's .Str/.path/.IO
  • 637500d Spec IO::Pipe.path/.IO returns IO::Path type object
  • 8fa49e1 Test link routines
  • 416b746 Test symlink routines
  • d4353b6 Rewrite .l on broken symlinks test
  • 7e4a2ae Swap .slurp-rest to .slurp
  • a4c53b0 Use bin IO::Handle to test its .Supply
  • feecaf0 Expand file tests
  • a809f0f Expand IO::Path.resolve tests
  • ee7f05b Move is-path sub to top so it can be reused
  • b16fbd3 Add tests to check nul byte is rejected
  • 8f73ad8 Change \0 roundtrip test to \t roundtrip test
  • 7c7fbb4 Cover :parent arg in IO::Spec::Cygwin.canonpath
  • 896033a Cover IO::Spec::QNX.canonpath
  • c3c51ed Cover IO::Spec::Win32.basename
  • d8707e7 Cover IO::Spec::Unix.basename
  • bd8d167 Test IO::Path::* instantiate a subclass
  • 43ec543 Cover methods of IO::Special
  • e5dc376 Expand IO::Path.accessed tests
  • 0e47f25 Test IO::Path.concat-with
  • 305f206 Test empty-string extensions in IO::Path.extension
  • 2f09f18 Fix incorrect test
  • b23e53e Test IO::Path.extension
  • 1d4e881 Test $*TMPDIR can be temped
  • 79ff022 Expand &spurt and IO::Path.spurt tests
  • ba3e7be Merge S32-io/path.t and S32-io/io-path.t
  • 3c4e81b Test IO::Path.Str works as advertised
  • 86c5f9c Delete qp{} tests
  • 430ab89 Test &*chdir
  • 86f79ce Expand &chdir tests
  • 73a5448 Remove two fudged &chdir tests
  • 04333b3 Test &indir fails with non-existent paths by default
  • bd46836 Amend &indir race tests
  • f48198f Test &indir
  • 5a7a365 Expand IO::Spec::*.tmpdir tests
  • 14b6844 Use Numeric instead of IO role in dispatch test
  • 8d6ca7a Cover IO::Path.ACCEPTS
  • 091931a Expand &open tests
  • 465795c Test IO::Path.lines(*) does not crash
  • 63370fe Test IO::Special .WHICH/.Str do not crash

Documentation

76 IO grant commits:

  • 61cb776 Document IO::Path.sibling
  • b9c9117 Toss IO::Path.child-secure
  • 6ca67e4 Start sketching out Definitive IO Guide™
  • 81a5806 Amend IO::Path.resolve: :completely
  • c5524ef Rename IO::Path.concat-with to .add
  • 3145979 Document IO::Path.child-secure
  • 160c6a2 Document IO::Handle.lock/.unlock
  • 53f2b99 Document role IO's new purpose
  • 2aaf12a Document IO::Handle.Str
  • bd4fa68 Document IO::Handle/IO::Pipe.IO
  • fd8a5ed Document IO::Pipe.path
  • 8d95371 Expand IO::Handle/IO::Pipe.path docs
  • 60b9227 Change return value for mkdir
  • 47b0526 Explicitly spell out caveats of IO::Path.Str
  • 923ea05 Straighten up mkdir docs
  • aeeec94 Straighten up copy, move, rename
  • fff866f Fix docs for symlink/link routines
  • f83f78c Use idiomatic Perl 6 in example
  • e60da5c List utf-* alias examples too since they're common
  • 0f49bb5 List Rakudo-supported encodings in open()
  • 017acd4 Improve docs for IO::Path.slurp
  • 56b50fe Document IO::Handle.slurp
  • 2aa3c9f Document new behaviour of IO::Handle.Supply
  • a30fae6 Avoid potential confusion with use of word "object"
  • 372545c Straighten up file test docs
  • 1527d32 Document :completely arg to IO::Path.resolve
  • 4090446 Improve chmod docs
  • d436f3c Document IO::Spec::* don't do any validation
  • 69b2082 Document IO::Path.chdir
  • b9de84f Remove DateTime tutorial from IO::Path docs
  • 45e84ad Move IO::Path.path to attributes
  • 0ca2295 Reword/expand IO::Path intro prose
  • dbdc995 Document IO::Spec::*.catpath
  • 50e5565 Document IO::Spec::*.catdir and .catfile
  • 28b6283 Document IO::Spec::*.canonpath
  • a1cb80b Document IO::Spec::Win32.basename
  • 5c1d3b6 Document IO::Spec::Unix.basename
  • 9102b51 Fix up IO::Path.basename
  • e9b6809 Document IO::Path::* subclasses
  • a43ecb9 Document IO::Path's $.SPEC and $.CWD
  • 7afd9c4 Remove unrelated related classes
  • 6bd0f98 Dissuade readers from using IO::Spec*
  • 184342c Document IO::Special.what
  • 56256d0 Minor formatting improvements in IO::Special
  • 1cd7de0 Fix up type graph
  • b3a9324 Expand/fix up IO::Path.accessed
  • 1973010 Document IO::Path.ACCEPTS
  • cc62dd2 Kill IO::Path.abspath
  • 1f75ddc Document IO::Spec*.abs2rel
  • 24a6ea9 Toss all of the TODO methods in IO::Spec*
  • 65cc372 Document IO::Path.concat-with
  • b9e692e Document new IO::Path.extension
  • d5abceb Write docs for all spurt routines
  • b8fba97 Point out my $*CWD = chdir … is an error
  • b78d4fd Include type names in links to methods
  • bdd18f1 Fix desc of IO::Path.Str
  • a53015a Clarify value of IO::Path.path
  • 5aa614f Improve suggestion for Perl 5's opendir
  • bf377c7 Document &indir
  • e5225be Fix URL to &*chdir
  • e1a299c Reword "defined as" for &*chdir
  • 3fdc6dc Document &*chdir
  • 1d0e433 Document &chdir
  • d050d4b Remove IO::Path.chdir prose
  • 839a6b3 Expand docs for $*HOME and $*TMPDIR
  • db36655 Remove tip to use $*SPEC to detect OS
  • 0511e07 Document IO::Spec::*.tmpdir
  • cc6539b Remove 8 methods from IO::Handle
  • 335a98d Remove mention of role IO
  • cc496eb Remove mention of IO.umask
  • 3cf943d Expand IO::Path.relative
  • ccae74a Fix incorrect information for IO::Path.absolute
  • d02ae7d Remove IO::Handle.rw and .rwx
  • 69d32da Remove IO::Handle.z
  • 110efb4 No need for .ends-with
  • fd7a41b Improve code example

About Zoffix Znet

I blog about Perl.