Perl 6 IO TPF Grant: Monthly Report (April, 2017)

This document is the April, 2017 progress report for TPF Standardization, Test Coverage, and Documentation of Perl 6 I/O Routines grant


As proposed to and approved by the Grant Manager, I've extended the due date for this grant by one extra month, in exchange for doing some extra optimization work on IO routines at no extra cost. The new completion date is May 22nd, right after the next Rakudo compiler release.


I've created and published three notices as part of this grant, informing users of what is changing and how best to upgrade their code, where needed:

IO Action Plan Progress

Most of the IO Action Plan has been implemented and shipped in Rakudo's 2017.04.2 release. The remaining items are:

  • Implement a better way to signal closed handle status (omitted from the release because the original suggestion for how to do it was not ideal; it's likely better to do this on the VM end)
  • Implement IO::CatHandle as a generalized IO::ArgFiles (omitted from the release because it was decided to make it mostly usable wherever IO::Handle can be used, and IO::ArgFiles is far from that, having only a handful of methods implemented)
  • Optimize the way we perform stat calls for multiple file tests (entirely internal; requires no changes to users' code)

Documentation and Coverage Progress

In my spreadsheet of all the IO routines and their candidates, the totals show that 40% have been documented and tested. Some of the remaining 60% may already have had tests/docs added while implementing the IO Action Plan or earlier, and merely need checking and verification.


Some of the optimizations I promised to deliver in exchange for the grant deadline extension have already been made to IO::Spec::Unix and IO::Path routines and made it into the 2017.04.2 release. Most of the optimizations planned for the upcoming month are in IO::Spec::Win32 and will largely affect Windows users.

IO Optimizations in 2017.04.2 Done by Other Core Members:

  • Elizabeth Mattijsen made .slurp 2x faster rakudo/b4d80c0
  • Samantha McVey made nqp::index—which is used in path operations—2x faster rakudo/f1fc879
  • IO::Pipe.lines was made 3.2x faster by relying on work done by Elizabeth Mattijsen rakudo/0c62815

Tickets Resolved

The following tickets have been resolved as part of the grant:

Possibly more tickets were addressed by the IO Action Plan implementation, but they still need further review.

Bugs Fixed

  • Fixed a bug in IO::Path.resolve with combiners tucked on the path separator. Fix in rakudo/9d8e391f3b; tests in roast/92217f75ce. The bug was identified by Timo Paulssen while testing secure implementation of IO::Path.child

IO Bug Fixes in 2017.04.2 Done by Other Core Members:

  • Timo Paulssen fixed a bug with IO::Path types not being accepted by is native NativeCall trait rakudo/99840804
  • Elizabeth Mattijsen fixed an issue in assignment to dynamics. This made it possible to temp $*TMPDIR variable rakudo/1b9d53
  • Jonathan Worthington fixed a crash when slurping large files in binary mode with &slurp or IO::Path.slurp rakudo/d0924f1a2
  • Jonathan Worthington fixed a bug with binary slurp reading zero bytes when another thread is causing a lot of GC rakudo/756877e


So far, I've committed 192 IO grant commits to the rakudo/roast/doc repos.


69 IO grant commits:

  • c6fd736 Make about 63x faster
  • 7112a08 Add :D on invocant for file tests
  • 8bacad8 Implement IO::Path.sibling
  • a98b285 Remove IO::Path.child-secure
  • 0b5a41b Rename IO::Path.concat-with to .add
  • 9d8e391 Fix IO::Path.resolve with combiners; timotimo++
  • 1887114 Implement IO::Path.child-secure
  • b8458d3 Reword method child for cleaner code
  • 51e4629 Amend rules for last part in IO::Path.resolve
  • 9a2446c Move Bool return value to signature
  • 214198b Implement proper args for IO::Handle.lock
  • c95c4a7 Make IO::Path/IO::Special do IO role
  • fd503f8 Remove role IO and its .umask method
  • 0e36bb2 Make IO::Spec::Win32!canon-cat 2.3x faster
  • 40217ed Swap .child to .concat-with in all the guts
  • 490ffd1 Do not use self.Str in IO::Path errors
  • 1f689a9 Fix up IO::Handle.Str
  • c01ebea Make IO::Path.mkdir return invocant on success
  • d46e8df Add IO::Pipe .path and .IO methods
  • 6ee71c2 Coerce mode in IO::Path.mkdir to Int
  • 0d9ecae Remove multi-dir &mkdir
  • ff97083 Straighten up rename, move, and copy
  • 7f73f92 Make private
  • da1dea2 Fix &symlink and &link
  • 8c09c84 Fix symlink and link routines
  • 90da80f Rework read methods in IO::Path/IO::Handle
  • c13480c IO::Path.slurp: make 12%-35% faster; propagate Failures
  • f1b4af7 Implement IO::Handle.slurp
  • 184d499 Make IO::Handle.Supply respect handle's mode
  • b6838ee Remove .f check in .z
  • 6a8d63d Implement :completely param in IO::Path.resolve
  • e681498 Make IO::Path throw when path contains NUL byte
  • b4358af Delete code for IO::Spec::Win32.catfile
  • 0a442ce Remove type constraint in IO::Spec::Cygwin.canonpath
  • 0c8bef5 Implement :parent in IO::Spec::Cygwin.canonpath
  • a0b82ed Make IO::Path::* actually instantiate a subclass
  • 67f06b2 Run S32-io/io-special.t test file
  • 954e69e Fix return value of IO::Special methods
  • a432b3d Remove IO::Path.abspath (part 2)
  • 94a6909 Clean up IO::Spec::Unix.abs2rel a bit
  • 966a7e3 Implement IO::Path.concat-with
  • 50aea2b Restore IO::Handle.IO
  • 15a25da Fix ambiguity in empty extension vs no extension
  • b1e7a01 Implement IO::Path.extension 2.0
  • b62d1a7 Give $*TMPDIR a container
  • 099512b Clean up and improve all spurt routines
  • cb27bce Clean up &open and
  • 4c31903 Add S32-io/chdir-process.t to list of test files to run
  • 5464b82 Improve &*chdir
  • 2483d68 Fix regression in &chdir's failure mode
  • ca1acb7 Fix race in &indir(IO::Path …)
  • a0ef2ed Improve &chdir, &indir, and IO::Path.chdir
  • aa62cd5 Remove &tmpdir and &homedir
  • a5800a1 Implement IO::Handle.spurt
  • 36ad92a Remove 15 methods from IO::Handle
  • 87987c2 Remove role IO and its .umask method
  • 9d8d7b2 Log all changes to plan made during review period
  • 0c7e4a0 Do not capture args in .IO method
  • c360ac2 Fix smartmatch of Cool ~~ IO::Path
  • fa9aa47 Make R::I::SET_LINE_ENDING_ON_HANDLE 4.1x Faster
  • 0111f10 Make IO::Spec::Unix.catdir 3.9x Faster
  • 4fdebc9 Make IO::Spec::Unix.split 36x Faster
  • dcf1bb2 Make IO::Spec::Unix.rel2abs 35% faster
  • 55abc6d Improve IO::Path.child perf on *nix
  • 4032953 Make 75% faster
  • a01d679 Remove IO::Path.pipe
  • 212cc8a Remove IO::Path.Bridge
  • 76f7187 Do not cache IO::Path.e results
  • dd4dfb1 Fix crash in IO::Special .WHICH/.Str

Perl 6 Specification

47 IO grant commits:

  • 3b36d4d Test IO::Path.sibling
  • 7a063b5 Fudge .child-secure tests
  • 39677c4 IO::Path.concat-with got renamed to .add
  • 92217f7 Test IO::Path.child-secure with combiners
  • f3c5dae Test IO::Path.child-secure
  • a716962 Amend rules for last part in IO::Path.resolve
  • 4194755 Test IO::Handle.lock/.unlock
  • 64ff572 Cover IO::Path/IO::Pipe's .Str/.path/.IO
  • 637500d Spec IO::Pipe.path/.IO returns IO::Path type object
  • 8fa49e1 Test link routines
  • 416b746 Test symlink routines
  • d4353b6 Rewrite .l on broken symlinks test
  • 7e4a2ae Swap .slurp-rest to .slurp
  • a4c53b0 Use bin IO::Handle to test its .Supply
  • feecaf0 Expand file tests
  • a809f0f Expand IO::Path.resolve tests
  • ee7f05b Move is-path sub to top so it can be reused
  • b16fbd3 Add tests to check nul byte is rejected
  • 8f73ad8 Change \0 roundtrip test to \t roundtrip test
  • 7c7fbb4 Cover :parent arg in IO::Spec::Cygwin.canonpath
  • 896033a Cover IO::Spec::QNX.canonpath
  • c3c51ed Cover IO::Spec::Win32.basename
  • d8707e7 Cover IO::Spec::Unix.basename
  • bd8d167 Test IO::Path::* instantiate a subclass
  • 43ec543 Cover methods of IO::Special
  • e5dc376 Expand IO::Path.accessed tests
  • 0e47f25 Test IO::Path.concat-with
  • 305f206 Test empty-string extensions in IO::Path.extension
  • 2f09f18 Fix incorrect test
  • b23e53e Test IO::Path.extension
  • 1d4e881 Test $*TMPDIR can be temped
  • 79ff022 Expand &spurt and IO::Path.spurt tests
  • ba3e7be Merge S32-io/path.t and S32-io/io-path.t
  • 3c4e81b Test IO::Path.Str works as advertised
  • 86c5f9c Delete qp{} tests
  • 430ab89 Test &*chdir
  • 86f79ce Expand &chdir tests
  • 73a5448 Remove two fudged &chdir tests
  • 04333b3 Test &indir fails with non-existent paths by default
  • bd46836 Amend &indir race tests
  • f48198f Test &indir
  • 5a7a365 Expand IO::Spec::*.tmpdir tests
  • 14b6844 Use Numeric instead of IO role in dispatch test
  • 8d6ca7a Cover IO::Path.ACCEPTS
  • 091931a Expand &open tests
  • 465795c Test IO::Path.lines(*) does not crash
  • 63370fe Test IO::Special .WHICH/.Str do not crash


76 IO grant commits:

  • 61cb776 Document IO::Path.sibling
  • b9c9117 Toss IO::Path.child-secure
  • 6ca67e4 Start sketching out Definitive IO Guide™
  • 81a5806 Amend IO::Path.resolve: :completely
  • c5524ef Rename IO::Path.concat-with to .add
  • 3145979 Document IO::Path.child-secure
  • 160c6a2 Document IO::Handle.lock/.unlock
  • 53f2b99 Document role IO's new purpose
  • 2aaf12a Document IO::Handle.Str
  • bd4fa68 Document IO::Handle/IO::Pipe.IO
  • fd8a5ed Document IO::Pipe.path
  • 8d95371 Expand IO::Handle/IO::Pipe.path docs
  • 60b9227 Change return value for mkdir
  • 47b0526 Explicitly spell out caveats of IO::Path.Str
  • 923ea05 Straighten up mkdir docs
  • aeeec94 Straighten up copy, move, rename
  • fff866f Fix docs for symlink/link routines
  • f83f78c Use idiomatic Perl 6 in example
  • e60da5c List utf-* alias examples too since they're common
  • 0f49bb5 List Rakudo-supported encodings in open()
  • 017acd4 Improve docs for IO::Path.slurp
  • 56b50fe Document IO::Handle.slurp
  • 2aa3c9f Document new behaviour of IO::Handle.Supply
  • a30fae6 Avoid potential confusion with use of word "object"
  • 372545c Straighten up file test docs
  • 1527d32 Document :completely arg to IO::Path.resolve
  • 4090446 Improve chmod docs
  • d436f3c Document IO::Spec::* don't do any validation
  • 69b2082 Document IO::Path.chdir
  • b9de84f Remove DateTime tutorial from IO::Path docs
  • 45e84ad Move IO::Path.path to attributes
  • 0ca2295 Reword/expand IO::Path intro prose
  • dbdc995 Document IO::Spec::*.catpath
  • 50e5565 Document IO::Spec::*.catdir and .catfile
  • 28b6283 Document IO::Spec::*.canonpath
  • a1cb80b Document IO::Spec::Win32.basename
  • 5c1d3b6 Document IO::Spec::Unix.basename
  • 9102b51 Fix up IO::Path.basename
  • e9b6809 Document IO::Path::* subclasses
  • a43ecb9 Document IO::Path's $.SPEC and $.CWD
  • 7afd9c4 Remove unrelated related classes
  • 6bd0f98 Dissuade readers from using IO::Spec*
  • 184342c Document IO::Special.what
  • 56256d0 Minor formatting improvements in IO::Special
  • 1cd7de0 Fix up type graph
  • b3a9324 Expand/fix up IO::Path.accessed
  • 1973010 Document IO::Path.ACCEPTS
  • cc62dd2 Kill IO::Path.abspath
  • 1f75ddc Document IO::Spec*.abs2rel
  • 24a6ea9 Toss all of the TODO methods in IO::Spec*
  • 65cc372 Document IO::Path.concat-with
  • b9e692e Document new IO::Path.extension
  • d5abceb Write docs for all spurt routines
  • b8fba97 Point out my $*CWD = chdir … is an error
  • b78d4fd Include type names in links to methods
  • bdd18f1 Fix desc of IO::Path.Str
  • a53015a Clarify value of IO::Path.path
  • 5aa614f Improve suggestion for Perl 5's opendir
  • bf377c7 Document &indir
  • e5225be Fix URL to &*chdir
  • e1a299c Reword "defined as" for &*chdir
  • 3fdc6dc Document &*chdir
  • 1d0e433 Document &chdir
  • d050d4b Remove IO::Path.chdir prose
  • 839a6b3 Expand docs for $*HOME and $*TMPDIR
  • db36655 Remove tip to use $*SPEC to detect OS
  • 0511e07 Document IO::Spec::*.tmpdir
  • cc6539b Remove 8 methods from IO::Handle
  • 335a98d Remove mention of role IO
  • cc496eb Remove mention of IO.umask
  • 3cf943d Expand IO::Path.relative
  • ccae74a Fix incorrect information for IO::Path.absolute
  • d02ae7d Remove and .rwx
  • 69d32da Remove IO::Handle.z
  • 110efb4 No need for .ends-with
  • fd7a41b Improve code example

Perl 6 IO TPF Grant: Monthly Report (March, 2017)

This document is the March, 2017 progress report for TPF Standardization, Test Coverage, and Documentation of Perl 6 I/O Routines grant


My delivery of the Action Plan was one week later than I originally expected. The delay let me assess some of the big-picture consistency issues, which led to a proposal to remove 15 methods from IO::Handle and to iron out the naming and argument format of several other routines.

I still hope to complete all the code modifications prior to the end of the weekend of April 15, so all of them can be included in the next Rakudo Star release. A week after that, I plan to complete the grant.

Note: to minimize user impact, some of the changes may be included only in the 6.d language, which will be available in the 2017.04 release only if the user enables the use v6.d.PREVIEW pragma.

IO Action Plan

I finished the IO Action Plan, placed it into /doc of rakudo's repository, and made it available to other core devs for review for a week (period ends on April 1st). The Action Plan is 16 pages long and contains 26 sections detailing proposed changes.

Overall, I proposed many more smaller changes than I originally expected and fewer larger, breaking changes than I originally expected. This has to do with a much better understanding of how rakudo's IO routines are "meant to" be used, so I think the improvement of the documentation under this grant will be much greater than I originally anticipated.

A lot of this has to do with the lack of explanatory documentation on how to manipulate and traverse paths. The effect was that users were using the $*SPEC object (157 instances of its use in the ecosystem!) and its routines for that goal, which is rather awkward. The concept is prevalent enough that I even wrote the SPEC::Func module in the past, due to user demand, and certain books whose draft copies I read used $*SPEC as well.

In reality, $*SPEC is an internal-ish thing and unless you're writing your own IO abstractions, you never need to use it directly. The changes and additions to the IO::Path methods done under this grant will make traversing paths even more pleasant, and the new tutorial documentation I plan to write under this grant will fully describe the Right Way™ to do it all.

In fact, removal of $*SPEC in future language versions is currently under consideration...
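To make the contrast concrete, here is a hedged sketch (path names hypothetical) of the awkward $*SPEC style seen in the ecosystem next to the plain IO::Path style, using the .add method shipped in 2017.04.2:

```perl6
# Ecosystem-style path building via the low-level $*SPEC object:
my $dir = $*SPEC.catdir('logs', '2017', '04').IO;

# The same path built with IO::Path methods alone — no $*SPEC needed:
my $same = 'logs'.IO.add('2017').add('04');

say $same.add('report.txt');   # logs/2017/04/report.txt on Unix-like systems
```

The second form reads as path traversal rather than string assembly, and leaves OS-specific separator handling entirely to IO::Path.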

Removal of $*SPEC

lizmat++ pointed out that we can gain significant performance improvements by removing the $*SPEC infrastructure and moving it into module-space. For example, a benchmark of slurping a 10-line file shows that removing all the path-processing code makes the benched program run more than 14x faster. When benching IO::Path creation, the dynamic variable lookup alone takes up 14.73% of the execution time.

The initial plan was to try to make IO routines handle all OSes in a unified way (e.g. using / on Windows); however, it was found that this would create several ambiguities and would be buggy, even if fast.

However, I think there are still a lot of improvements that can be gained by making $*SPEC infrastructure internal. So we'd still have the IO::Spec-type modules but they'll have a private API we can optimize freely, and we'll get rid of the dynamic lookups, consolidate what code we can into IO::Path, while keeping the functionality that differs between OSes in the ::Spec modules.

Since this all sounds like guesstimation and there's significant-ish use of $*SPEC in the ecosystem, the plan now is to implement it all in a module first and see whether it works well and offers any significant performance improvements. If it does, I believe it should be possible to swap IO::Path to the fast version in the 6.d language, while still leaving the $*SPEC dynvar and its modules in core, as deprecated, for removal in 6.e.

This won't be done under this grant, and while trying not to over-promise, I hope to release this module some time in May-June. So keep an eye out for it; I already picked out a name: FastIO

newio Branch

As per the original goals of the grant, I reviewed the code in Rakudo's 2014–2015 newio branch to look for any salvageable ideas. I did not have any master-plan design documents to go with it, and of the few commits I tried, none was both free of merge conflicts and able to compile (the build kept complaining about ModuleLoader), so my understanding of the branch comes solely from reading the source code and may differ from what the original author intended.

The major difference between newio and Rakudo's current IO structure is the type hierarchy and the removal of $*SPEC. newio provides IO::Pathy and PIO roles, which are done by the IO::File, IO::Dir, IO::Local, IO::Dup, IO::Pipe, and IO::Huh classes that represent various IO objects. Rakudo's current system has fewer abstractions: IO::Path represents a path to an IO object and IO::Handle provides read/write access to it, with IO::Pipe handling pipes and no special objects for directories (their contents are obtained via the IO::Path.dir method and their attributes are modified via IO::Path methods).

Since the 6.d language is additive to 6.c, completely revamping the type hierarchy may be challenging and messy. I'm also not entirely sold on what appears to be one of the core design ideas in newio: most of the abstractions represent IO objects as they were at instantiation time. An IO::Pathy object represents an IO item that exists, despite there being no guarantee that it actually does. Thus, IO::File's .f and .e methods always return True, while its .d method always returns False. This undoubtedly gives a performance enhancement; however, if $ rm foo were executed after the IO::File object's creation, the .e method would no longer return correct data, and if $ mkdir foo were then executed, both the .f and .d methods would be returning incorrect data.

Until recently, Rakudo cached the result of the .e call, and that produced behaviour users did not expect. I think the issue would be greatly exacerbated if this sort of caching were extended to entire objects and many of their methods.
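A minimal sketch of that staleness problem (file name hypothetical): any cached filesystem answer can be invalidated between calls by another process, or even by our own code:

```perl6
my $path = 'cache-demo.txt'.IO;   # hypothetical file name
spurt $path, 'hello';
say $path.e;    # True — the file exists at this moment
unlink $path;
say $path.e;    # must be False; a cached .e would still claim True
```

Extending caching from a single boolean to entire IO::File/IO::Dir objects, as newio does, multiplies the number of methods that can silently return stale answers.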

However, I do think the removal of $*SPEC is a good idea. And as described in previous section I will try to make a FastIO module, using ideas from newio branch, for possible inclusion in future language versions.

Experimental MoarVM Coverage Reporter

As mentioned in my grant proposal, the coverage reporter was busted by the change in the information returned by the .file and .line methods on core routines. MasterDuke++ made several commits fixing numerous issues in the coverage parser, and last night I identified the final piece of the breakage. The annotations and hit reports now use the new SETTING::src/core/blah file format, and the setting file has SETTING::src/core/blah markers inside of it. The parser, however, still thinks it's being fed the old gen/moar/CORE.setting filenames, so once I teach it to calculate proper offsets into the setting file, we'll have coverage reports back up and running, and I'll be able to use them to judge the IO routine test coverage required for this grant.

Performance Improvements

Although not planned by the original grant, I was able to make the following performance enhancements to IO routines. So hey! Bonus deliverables \o/:

  • rakudo/fa9aa47 Make R::I::SET_LINE_ENDING_ON_HANDLE 4.1x Faster
  • rakudo/0111f10 Make IO::Spec::Unix.catdir 3.9x Faster
  • rakudo/4fdebc9 Make IO::Spec::Unix.split 36x Faster
    • Affects IO::Path's .parent, .parts, .volume, .dirname, and .basename
    • Measurement of first call to .basename shows it's now 6x-10x faster
  • rakudo/dcf1bb2 Make IO::Spec::Unix.rel2abs 35% faster
  • rakudo/55abc6d Improve IO::Path.child perf on *nix:
    • make IO::Path.child 2.1x faster on *nix
    • make IO::Spec::Unix.join 8.5x faster
    • make IO::Spec::Unix.catpath 9x faster
  • rakudo/4032953 Make 75% faster
  • rakudo/4eef6db Make about 4.4x faster
  • rakudo/ae5e510 Make 7% faster when creating from Str
  • rakudo/0c6281 Make IO::Pipe.lines use IO::Handle.lines for 3.2x faster performance

Performance Improvements Made By Other Core Developers

lizmat++ also made these improvements in the IO area:

Along with the commits above, she also made IO::Handle.lines faster and eliminated a quirk that required a custom .lines implementation in IO::Pipe (which is a subclass of IO::Handle). Thanks to that, I was able to remove the old IO::Pipe.lines implementation and have it use the new-and-improved IO::Handle.lines, which made the method about 3.2x faster.


Will (attempt to) fix as part of the grant

  • IO::Pipe inherits the .t method from IO::Handle, which checks whether the handle is a TTY; however, attempting to call it causes a segfault. MasterDuke++ has already found a candidate for the offending code (MoarVM/Issue#561) and this should be resolved by the time the grant is completed.

Don't think I will be able to fix these as part of the grant

  • Found a strange error generated when IO::Pipe's buffer is filled up. This is too deep in the guts for me to know how to resolve yet, so I filed it as RT#131026

Already Fixed

  • Found that IO::Path had a vestigial .pipe method that delegated to a non-existent IO::Handle method. Removed in rakudo/a01d67
  • Fixed IO::Pipe.lines not accepting a Whatever as limit, which is accepted by all other .lines. rakudo/0c6281 Tests in roast/465795 and roast/add852
  • Fixed issues due to caching of IO::Handle.e. Reported as RT#130889. Fixed in rakudo/76f718. Tests in roast/908348
  • Rejected rakudo PR#666 and resolved RT#126262 by explaining why the methods return Str objects instead of IO::Path on ticket/PR and improving the documentation by fixing mistakes (doc/ccae74) and expanding (doc/3cf943) on what the methods do exactly.
  • IO::Path.Bridge was defunct, as it was trying to call .Bridge on Str, which does not exist. Resolved the issue by deleting this method in rakudo/212cc8
  • Per demand, made IO::Path.dir a multi, so module-space can augment it with other candidates that add more functionality. rakudo/fbe7ace

Perl 6 IO TPF Grant: Monthly Report (February, 2017)

This document is the February, 2017 progress report for TPF Standardization, Test Coverage, and Documentation of Perl 6 I/O Routines grant


I'm currently running slightly behind the schedule outlined in the grant. I expect to complete the Action Plan and have it ratified by other core members by March 18th, which is the date of the 2017.03 compiler release. Then, I'll implement all of the Action Plan (and complete the grant) by the 2017.04 compiler release on April 15th. This is also the release the next Rakudo Star distribution will be based on, and so the regular end users will receive better IO there and then.

Some members of the Core Team voiced concerns over implementing any changes that could break users' code, even if the changes do not break the 6.c-errata specification tests. Once the full set of changes is known, they will be reviewed on a case-by-case basis, and some of them may be implemented under the 6.d.PREVIEW pragma, to be included in the 6.d language version, leaving the 6.c language untouched. Note that changes decided to be 6.d material may delay the completion of this grant, due to the not-fully-fleshed-out infrastructure for supporting multiple language versions. The April 15th deadline stated above applies only to changes to the 6.c language; a new deadline will be ascertained for completion of the 6.d changes.

User Communications

I wrote and disseminated an advance notice of the changes to be made under this grant, to prepare users to expect some code to break (some routines were found to be documented despite being entirely absent from the Specification and not officially part of the language).

The notice can be seen at:

It is possible the Core Team will decide to defer all breaking changes to 6.d language version, to be currently implemented under v6.d.PREVIEW pragma.

Bonus Deliverable

The bonus deliverable—the Map of Perl 6 Routines—is now usable. The code is available in the perl6/routine-map repository, and a rendered version is available online. Its current state is sufficient to serve the intended purpose of this grant, but I'll certainly add improvements sometime in the future, such as linking to docs, linking to routines' source code, having an IRC bot look stuff up in it, etc.

It'll also be fairly easy to use the Map to detect undocumented routines, or ones that are documented under the incorrect type.

Identified Issues/Deficiencies with IO Routines

These points, issues, and ideas were identified this month and will be included for consideration in the Action Plan.

  • Calling practically any method on a closed IO::Handle results in an LTA (Less Than Awesome) error message that reads <something> requires an object with REPR MVMOSHandle, where <something> is sometimes the name of the method called by the user and at other times some internal method invoked indirectly. We need better errors for closed file handles; and not something that would require an is-fh-closed() type of conditional called in all the methods, which would be a hefty performance hit.
  • Several routines have been identified which in other languages return useful information: number of bytes actually written or current file position, whereas in Perl 6 they just return a Bool (.print, .say, .write) or a Mu type object (.seek). Inconsistently, .printf does appear to return the number of bytes written. It should be possible to make other routines similarly useful, although I suspect some of it may have to wait until 6.d language release.
  • The .seek routine takes the seek location as one of three Enum values. Not only are they quite lengthy to type, they're globally available for no good reason and .seek is virtually unique in using this calling convention. I will seek to standardize this routine to take mutually-exclusive named arguments instead, preferably with much shorter names, but those are yet to be bikeshed.
  • IO.umask routine simply shells out to umask. This fails terribly on OSes that don't have that command, especially since the code still tries to decode the received input as an octal string, even after the failure. Needs improvement.
  • link's implementation and documentation confuses what a "target" is. Luckily (or sadly?) there are exactly zero tests for this routine in the Perl 6 Specification, so we can change it to match the behaviour of ln Linux command and the foo $existing-thing, $new-thing argument order of move, rename, and other similar routines.
  • When using run(:out, 'some-non-existant-command').out.slurp-rest it will silently succeed and return an empty string. If possible, this should be changed to return the failure or throw at some point.
  • chdir's :test parameter for directory permissions test is taken as a single string parameter. This makes it extremely easy to mistakenly write broken code: for example, "/".IO.chdir: "tmp", :test<r w> succeeds, while "/".IO.chdir: "tmp", :test<w r> fails with a misleading error message saying the directory is not readable/writable. I will propose for :test parameter to be deprecated in favour of using multiple named arguments to indicate desired tests. By extension, similar change will be applied to indir, tmpdir, and homedir routines (if they remain in the language).
  • Documentation: several inaccuracies in the documentation were found. I won't be identifying these in my reports/Action Plan, but will simply ensure the documentation matches the implementation once the Action Plan is fully implemented.
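To make the .seek point above concrete, here is the current enum-based calling convention next to the general shape of the proposal; the named-argument form is purely illustrative, since the final names are still to be bikeshed (the file name is hypothetical):

```perl6
my $fh = open 'data.bin', :r, :bin;   # hypothetical file
$fh.seek(42, SeekFromBeginning);      # current: a lengthy, globally visible enum
say $fh.tell;                         # 42 — position after the seek

# Proposed shape (illustrative only; names not yet decided):
# $fh.seek: 42, :from-start;
```

Mutually exclusive named arguments would also let .seek reject nonsensical combinations at the signature level instead of at runtime.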

Discovered Bugs

The hunt for 6-legged friends has these finds so far:

Will (attempt to) fix as part of the grant

  • indir() has a race condition where the actual dir it runs in ends up being wrong. Using indir '/tmp/useless', { qx/rm -fr */ } in one thread and backing up your precious data in another has the potential to result in some spectacular failurage.
  • perl6 -ne '@ = lines' crashes after the first iteration, crying about the MVMOSHandle REPR. I suspect the code is failing to follow the iterator protocol somewhere and is attempting to read from an already-closed handle. I expect to be able to resolve this, and the related RT#128047, as part of the grant.
  • .tell incorrectly always returns 0 on files opened in append mode
  • link mixes up target and link name in its error message

Don't think I will be able to fix these as part of the grant

  • seek() with SeekFromCurrent as location fails to seek correctly if called after .readchars, but only on MoarVM. This appears to occur due to some sort of buffering. I filed this as RT#130843.
  • On JVM, .readchars incorrectly assumes all chars are 2 bytes long. This appears to be just a naive substitute for nqp::readcharsfh op. I filed this as RT#130840.

Already Fixed

  • While making the Routine Map, I discovered that the .WHICH and .Str methods on IO::Special were defined only for the :D subtype, resulting in a crash when using, say, the infix:<eqv> operator on the type object, instead of the Mu.WHICH/.Str candidates getting invoked. This bug was an easy fix: I already committed it in rakudo/dd4dfb14d3 and tests to cover it in roast/63370fe054

Auxiliary Bugs

While doing the work for this grant, I also discovered some non-IO related bugs (some of which I fixed):

I Botched a Perl 6 Release And Now a Robot Is Taking My Job

Deconfusion note: if you're just a regular Perl 6 user, you likely use and only ever heard of Rakudo Star, which is a distribution that includes the Rakudo Perl 6 Compiler, some modules, and the docs. This post details a release of that compiler only, which gets released more often than Rakudo Star. So please don't think there's a new release, if Star is all you use.

Part I: Humans Make Errors

Today is the third Saturday of the month, which is an awesome day! It's when the Rakudo Perl 6 Compiler sees its monthly release. I was at the helm today, so I chugged along through the Rakudo release guide and the NQP release guide, whose release is part of the process as well.

We're All Green To Go

As I was nearing the end of the process, the first hint of a problem surfaced when a user joined the #perl6-dev IRC channel:

<nwc10> the tag 2016.08 seems to be missing from the nqp github repository
<Zoffix> nwc10, it was created about 44 minutes ago. Ensure you got the
latest everything
<nwc10> very strange. because if I pull again in my nqp checkout I still don't see it
<Zoffix> Did you add --tags? it's git pull --tags or git fetch --tags or something like that
<nwc10> Zoffix: correct. no I didn't. thanks
<Zoffix> \o/
<nwc10> OK, why do I usually not need to do that?

Good question. I pulled in a fresh checkout and ran the full test suite again, just to be sure. Everything passed. Luckily, I got the answer rather quickly:

<nwc10> git log --graph --decorate origin/master 2016.08
<nwc10> how that it and master have diverged
<nwc10> the tag is for a commit which is not a parent of nqp master

Here's the story: I have a local NQP checkout. I bump its version and its MoarVM dependency. Tag the release. And start the full build and test suite run. That's a lengthy process, so while it is happening, I go to GitHub and use the online editor to make a minor tweak to the text of NQP's release guide.

The tests finish. I try to push my tag and the version bump and get the usual error saying the online repo has diverged. Instinctively, I run my gr alias for git pull --rebase to bring in the new changes, and then push all of the work into the repo.

The problem is that --rebase doesn't move my tag, so it's still referencing the old commit. If you clone the repo, everything appears to work for that tag, but if you pull the changes into an existing repo, you don't get the tag. Also, git describe (which we use when doing mid-release NQP/MoarVM version bumps) was now missing my tag, because the commit the tag points at is no longer an ancestor of master's HEAD.
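The failure mode is easy to reproduce in a throwaway repository. In this sketch, `git commit --amend` stands in for the history rewrite that `git pull --rebase` performed (the repository and tag names are made up for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email release-demo@example.com
git config user.name  "Release Demo"

echo 2016.08 > VERSION
git add VERSION
git commit -qm "bump VERSION to 2016.08"
git tag 2016.08                       # tag the release commit

# Rewriting history replaces the commit; the tag stays on the old one:
git commit -q --amend -m "bump VERSION to 2016.08 (rebased)"
git merge-base --is-ancestor 2016.08 HEAD || echo "tag is orphaned"

# The local fix is to move the tag (the remote then needs a forced tag push):
git tag -f 2016.08 >/dev/null
git merge-base --is-ancestor 2016.08 HEAD && echo "tag is back on the branch"
```

After the retag, `git describe` sees the tag again, and fresh pulls in existing checkouts pick it up once the forced tag update reaches the remote.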

At this point, I don't yet know about the breakage for folks who are doing git pull in an existing branch, and so I continue with the release, since all of my tests are green. Finish up. Then relax by departing to fly in No Man's Sky. Awesome!

The Robocide

The first hint (wait, the second hint?) of trouble showed up when the latest commits did not appear in our eval bot that's supposed to update itself to HEAD:

<Zoffix> m: dd [ $*VM.version, $*PERL.compiler.version ]
<camelia> rakudo-moar c201a7: OUTPUT«[v2016.07.24.g.31.eccd.7,
<Zoffix> Oh. I guess camelia doesn't build every commit right away.

I did not create camelia, so I was unfamiliar with her workings, but there was another bot that I did create, and I knew for sure it was supposed to update to HEAD every hour... it didn't:

<Zoffix> s: dd $*VM.version, $*PERL.compiler.version
<SourceBaby> Zoffix, Something's wrong: ␤ERR: v2016.07.24.g.31.eccd.7
v2016​.07.1.243.gc.201.a.76␤Cannot resolve caller sourcery(Nil); none of
these signatures match:␤ ($thing, Str:D $method, Capture $c)␤
($thing, Str:D $method)␤ (&code)␤ (&code, Capture $c)␤
in block at -e line 6␤␤
* Zoffix gets a bad feeling

The Fallout

I didn't have to wonder about what the hell was going on for too long, since shortly thereafter a user turned up in the #perl6 channel, saying they couldn't build:

<cuonglm> Hi, anyone got trouble when building 2016.08 rakudo
release from source I got `error: pathspec '2016.08' did not match
any file(s) know to git. Sounds like git history was overriden

After a short conversation with the person, it became obvious that while cloning and building the repo (or building from the release tarball) worked fine, simply running git pull in an existing checkout ran into the issue with the tag.

It was obvious there was an issue and it needed fixing. The question was: how. The first suggestions seemed scary:

<mst> you can always burn the tag with fire and replace it with a fixed one.
<mst> the advantages may outweigh the risks

We already had users with broken tags in their checkouts asking for help in the chat channel, and since I'd already sent the release announcement email, many more users might have been experiencing the same issue. I didn't know what to do, so the first thing I did was panic and attempt to find someone else to fix my mess:

<Zoffix> jnthn, [Coke], TimToady are you around? I made a booboo with the tag

A lot of the core devs are in Europe, so there's a bit of timezone conflict with me, and with YAPC::Europe happening, some of them were not even in the chat at the time. Left to my own devices, I joined #git on Freenode, and the kind folks in there reaffirmed that replacing the tag might be a bad idea. Victory... I didn't have to learn how to do that:

<Zoffix> What if I delete the tag and create a new one.
What happens to people who have cloned the branch with the incorrect tag?
<_ikke_> Zoffix: A good question. Tags are kind of special as
in they're not expecting to change (and git somewhat ignores changing tags from the remote)
<_ikke_> Zoffix: man git tag has a section about it ("On retagging")
<gitinfo> Zoffix: the git-tag manpage is available at
<Zoffix> OK, then I think I'll leave it as it is, and wait for
someone smarter than me to figure out a fix :P
<Zoffix> Haha "Just admit you screwed up, and use
a different name." Yup, I'll go with that :P
<Zoffix> Thanks for the help!
<_ikke_> YW ;-)

With tag mangling out of the question and users having issues, the next answer was to make an emergency point release with a corrected tag, so I proceeded to do so:

<Zoffix> OK. I'm gonna do a 2016.08.1 'cause I'm not 100% sure
just changing the tag will fix everything and won't introduce new problems.

Making another NQP release, with a correct tag, seemed to resolve the issue on my box, but I wasn't sure there wasn't any more fallout about to happen with the current Rakudo release, so I made a point release of the compiler as well, just to be safe, and to sync up versions with NQP (so folks don't think a 2016.08.01 Rakudo release is missing, when they see a 2016.08.1 NQP release).

Jumping into the release guides at mid-point proved challenging and, having just done a release, so did keeping track of which steps I'd already performed. Tired. Embarrassed. And too sober for the occasion. I headed to the pub, knowing this was the last time I'd make a release like this.

Part II: I, Robot

If you've heard of The Joel Test, you may've noticed Perl 6's release process described above currently fails item #2:

2. Can you make a build in one step?

Painfully aware of the issue, I was thinking about improving the situation ever since the first time I stepped up to cut a release. If you follow the #perl6-dev channel, you may have even seen me show off a couple of prototypes. Such as my Rele6sr web app that lets you keep track of release-blocking bug tickets:

Or my Rele6sr bot that reminds you about an upcoming release, giving a list of new tickets since last release, as well as the ChangeLog entries to review:

<Zoffix> Rele6sr: reminder
<Rele6sr> 🎺🎺🎺 Friends! I bear good news! Rakudo's release will
happen in just 1 days! Please update the ChangeLog with anything you worked
on that should be known to our users. 🎺🎺🎺
<Rele6sr> 🎺🎺🎺 Here are still-open new RT tickets since last release: And here is the git log output for commits
since last release: 🎺🎺🎺

Today's incident indicated to me that it's time to stop messing around with prototypes and put out a complete plan of action.

Spread It Out

There are two things the Release Guide contains that may make it seem like automating releases is a tough nut to crack: reviewing bug tickets for blockers and populating the ChangeLog with needed items. We can't make a robot reliably do those things (yet?), so the largest part of the release process requires a human. But... what if we spread that job throughout the entire month between releases?

This month, I will create a web app that will keep track of all the reported tickets and will let the release manager log in and mark the tickets they reviewed as either release blockers or non-blockers. As a bonus, the prototype buggable bug-queue interfacing bot that some of you may have seen will also use the app to do its thing more correctly, and the app will also serve as a nicer interface to tickets for people who don't like our original RT website.

The same principle will apply to ChangeLog entries too, where the site will keep track of the commits the release manager has reviewed and added to the ChangeLog. As such, by "keeping state" around, the web app will let the release manager spend only a couple of minutes every few days reviewing changes and bugs, instead of cramming all of the work into a single session on the release day.

Build In One Step

Provided the release manager keeps up with the above steps throughout the month, on the release day they will have a single thing to do: issue the release command to the Rele6sr bot:

<Zoffix> Rele6sr: release
<Rele6sr> Yey! It's that time! Starting the release.
You can watch the progress on
Tell me to abort at any time to abort the process.

Since the issue review process has been subtracted from the release instructions, all that's left is grunt work that can be entirely automated. The bot will use Google Compute Engine API to fire up my 24-core dev box. It will then ssh into it and clone the repos, update versions, run the builds, run the tests, tag stuff, generate tarballs, test the tarballs, scp the tarballs to Rakudo's website, email the release announcement, and update the Wikipedia page (if there's an API for that). The human can watch all of that happen on a page that feeds a websocket with the output from Proc::Async and abort if anything goes wrong. The bot will automatically abort if any of the tests in the test suite fail. As a bonus, I'm hoping to make the bot fire up instances of VMs with several different OSes, to add an extra layer of testing in different environments.

Granted, at least for the first several releases, there will be safety stops and verification steps where a human can ensure the robot is doing the right thing and abort the process if needed. But there's no technical limitation preventing the bot from correctly cutting a release after being issued a single command.

Exterminate All Humans

Automating complex build jobs eliminates mistakes. A robot doesn't care if it has to rebuild from scratch a hundred times while you're trying to debug a release-blocker (OK, I think it doesn't care... any robots' rights activists in the audience to tell me I'm wrong?).

Both today's mistake and the difficulty of fixing it could've been avoided if the release process were entirely automated. The fewer things you have to do manually, the fewer places a mistake can creep into.

The Joel Test, while an amusing read, has a lot of truth and simplicity to it. I suggest you re-evaluate any of the points your project fails the test on, and I believe you'll save yourself a lot of headache if you do so.


Today's events were a great learning experience and a catalyst for improvement. Humans are squishy and fallible and they make mistakes, especially out of habit.

Trying to follow a regular list of instructions during an irregular situation can prove difficult and introduce more mistakes, making a bad situation worse.

If you haven't yet, try to automate your release process as much as you can. I know Perl 6 will follow that advice. This was the last time I cut a release. My job was taken by a robot.

The Awesome Errors of Perl 6

If you're following tech stuff, you probably know by now about the folks in Rust land working on some totally awesome error reporting capabilities. Since Perl 6 is also known for its Awesome Errors, mst asked for some examples to show off to the Rustaceans, and unfortunately I drew a blank...

Errors are something I try to avoid and rarely read in full. So I figured I'd hunt down some cool examples of Awesome Errors and write about them. While I could just bash my head on the keyboard and paste the output, that'd be quite boring to read, so I'll talk about some of the tricky errors that might not be obvious to beginners, and how to fix them.

Let the head bashing begin!

The Basics

Here's some code with an error in it:

say "Hello world!;
say "Local time is {}";

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Two terms in a row (runaway multi-line "" quote starting at line 1 maybe?)
# at /home/zoffix/test.p6:2
# ------> say "⏏Local time is {}";
#     expecting any of:
#         infix
#         infix stopper
#         postfix
#         statement end
#         statement modifier
#         statement modifier loop

The first line is missing the closing quote on the string, so everything until the opening quote on the second line is still considered part of the string. Once the supposed closing quote is found, Perl 6 sees the word Local, which it identifies as a term. Since two terms in a row are not allowed in Perl 6, the compiler throws an error, offering some suggestions on what it was expecting, and since it detects we're inside a string, it suggests we check whether we forgot a closing quote on line 1.

The ===SORRY!=== part doesn't mean you're running the Canadian version of the compiler, but rather that the error is a compile-time (as compared to a run-time) error.


Here's an amusing error. We have a subroutine that returns things, so we call it and use a for loop to iterate over the values:

sub things { 1 … ∞ }

for things {
    say "Current stuff is $_";
}

# ===SORRY!===
# Function 'things' needs parens to avoid gobbling block
# at /home/zoffix/test.p6:5
# ------> }⏏<EOL>
# Missing block (apparently claimed by 'things')
# at /home/zoffix/test.p6:5
# ------> }⏏<EOL>

Perl 6 lets you omit parentheses when calling subroutines. The error talks about gobbling blocks: the block we were hoping to give to the for loop is actually being passed to the subroutine as an argument instead. The second error in the output corroborates this by saying the for loop is missing its block (and suggests it was claimed by our things subroutine).

The first error tells us how to fix the issue: Function 'things' needs parens, so our loop needs to be:

for things() {
    say "Current stuff is $_";
}

However, were our subroutine actually expecting a block to be passed, no parentheses would be necessary. Two code blocks side by side would result in the "two terms in a row" error we've seen above, so Perl 6 knows to pass the first block to the subroutine and use the second block as the body of the for loop:

sub things (&code) { code }

for things { 1 … ∞ } {
    say "Current stuff is $_";
}
Did You Mean Levestein?

Here's a cool feature that will not only tell you you're wrong, but also point out what you might've meant:

sub levenshtein {}
levestein;

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Undeclared routine:
#     levestein used at line 2. Did you mean 'levenshtein'?

When Perl 6 encounters names it doesn't recognize, it computes the Levenshtein distance to the things it does know about, to try to offer a useful suggestion. In the instance above, it encountered an invocation of a subroutine it didn't know about. It noticed we do have a similarly named subroutine, so it offered it as an alternative. No more staring at the screen, trying to figure out where you made the typo!

The feature doesn't consider everything under the moon each time it's triggered, however. Were we to capitalize the sub's name to Levenshtein, we would no longer get the suggestion, because for things that start with a capital letter, the compiler figures it's likely a type name and not a subroutine name, so it checks for those instead:

class Levenshtein {};
Lvnshtein.new;

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Undeclared name:
#    Lvnshtein used at line 2. Did you mean 'Levenshtein'?

Once You Go Seq, You Never Go Back

Let's say you make a short sequence of Fibonacci numbers. You print it and then you'd like to print it again, but this time square each member. What happens?

my $seq = (1, 1, * + * … * > 100);
$seq             .join(', ').say;
$seq.map({ $_² }).join(', ').say;

# 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
# This Seq has already been iterated, and its values consumed
# (you might solve this by adding .cache on usages of the Seq, or
# by assigning the Seq into an array)
#   in block <unit> at test.p6 line 3

Ouch! A run-time error. What's happening is that the Seq type we get from the sequence operator doesn't keep stuff around. Each time you iterate over it, it gives you a value and then discards it, so once you've iterated over the entire Seq, you're done.

Above, we're attempting to iterate over it again, and so the Rakudo runtime cries and complains, because it can't do it. The error message does offer two possible solutions. We can either use the .cache method to obtain a List we'll iterate over:

my $seq = (1, 1, * + * … * > 100).cache;
$seq             .join(', ').say;
$seq.map({ $_² }).join(', ').say;

Or we can use an Array from the get go:

my @seq = 1, 1, * + * … * > 100;
@seq             .join(', ').say;
@seq.map({ $_² }).join(', ').say;

And even though we're storing the Seq in an Array, it won't get reified until it's actually needed:

my @a = 1 … ∞;
say @a[^10];

# (1 2 3 4 5 6 7 8 9 10)

These Aren't The Attributes You're Looking For

Imagine you have a class. In it, you have some private attributes and you've got a method that does a regex match using the value of that attribute as part of it:

class {
    has $!prefix = 'foo';
    method has-prefix ($text) {
        so $text ~~ /^ $!prefix/;
    }
}

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Attribute $!prefix not available inside of a regex, since regexes are methods on Cursor.
# Consider storing the attribute in a lexical, and using that in the regex.
# at /home/zoffix/test.p6:4
# ------>         so $text ~~ /^ $!prefix⏏/;
#     expecting any of:
#         infix stopper

Oops! What happened?

It's useful to understand that as far as the parser is concerned, Perl 6 is actually braided from several languages: Perl 6, Quote, and Regex languages are parts of that braiding. This is why stuff like this Just Works™:

say "foo { "bar" ~ "meow" } ber ";

# foo barmeow ber

Despite the interpolated code block using the same " quotes to delimit the strings within it as the quotes on our original string, there's no conflict. However, the same mechanism presents us with a limitation in regexes, because in them, the looked up attributes belong to the Cursor object responsible for the regex.

To avoid the error, simply use a temporary variable to store the $!prefix in—as suggested by the error message—or use the given block:

class {
    has $!prefix = 'foo';
    method has-prefix ($text) {
        given $!prefix { so $text ~~ /^ $_/ }
    }
}


Ever tried to access an element of a list that's out of range?

my @a = <foo bar ber>;
say @a[*-42];

# Effective index out of range. Is: -39, should be in 0..Inf
#  in block <unit> at test.p6 line 2

In Perl 6, to index an item from the end of a list, you use funky syntax: [*-42]. That's actually a closure that takes an argument (which is the number of elements in the list), subtracts 42 from it, and the return value is used as an actual index. You could use @a[sub ($total) { $total - 42 }] instead, if you were particularly bored.

In the error above, that index ends up being 3 - 42, or -39, which is the value we see in the error message. Since indexes cannot be negative, we receive the error, which also tells us the index must be between 0 and positive infinity (any index beyond what the list actually contains returns Any when looked up).

A Rose By Any Other Name, Would Code As Sweet

If you're an active user of Perl 6's sister language, Perl 5, you may sometimes write Perl-5-isms in your Perl 6 code:

say "foo" . "bar";

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Unsupported use of . to concatenate strings; in Perl 6 please use ~
# at /home/zoffix/test.p6:1
# ------> say "foo" .⏏ "bar";

Above, we're attempting to use Perl 5's concatenation operator to concatenate two strings. The error mechanism is smart enough to detect such usage and to recommend the use of the correct ~ operator instead.

This is not the only case of such detection. There are many. Here's another example, detecting accidental use of Perl 5's diamond operator, with several suggestions of what the programmer may have meant:

while <> {}

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Unsupported use of <>; in Perl 6 please use lines() to read input, ('') to
# represent a null string or () to represent an empty list
# at /home/zoffix/test.p6:1
# ------> while <⏏> {}

Heredoc, Theredoc, Everywheredoc

Here's an evil error and there's nothing awesome about it, but I figured I'd mention it, since it's easy to debug if you know about it, and quite annoying if you don't. The error is evil enough that it may have been already improved if you're reading this far enough in the future from when I'm writing this.

Try to spot what the problem is... read the error at the bottom first, as if you were the one who wrote (and so, are familiar with) the code:

my $stuff = qq:to/END/;
Blah blah blah
END;

for ^10 {
    say 'things';
}

for ^20 {
    say 'moar things';
}

sub foo ($wtf) {
    say 'oh my!';
}

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# Variable '$wtf' is not declared
# at /home/zoffix/test.p6:13
# ------> sub foo (⏏$wtf) {

Huh? It's crying about an undeclared variable, but it's pointing to a signature of a subroutine. Of course it won't be declared. What sort of e-Crack is the compiler smoking?

For those who didn't spot the problem: it's the spurious semicolon after the closing END of the heredoc. The heredoc ends where the closing delimiter appears on a line all by itself. As far as the compiler is concerned, END; is not that delimiter, so it continues parsing as if it were still inside the heredoc. A qq heredoc lets you interpolate variables, so when the parser gets to the $wtf variable in the signature, it has no idea it's in the signature of actual code and not just some random text, so it cries about the variable being undeclared.

Won't Someone Think of The Reader?

Here's a great error that prevents you from writing horrid code:

my $a;
sub {
    $a.say;
    $^a.say;
}

# ===SORRY!=== Error while compiling /home/zoffix/test.p6
# $a has already been used as a non-placeholder in the surrounding sub block,
#   so you will confuse the reader if you suddenly declare $^a here
# at /home/zoffix/test.p6:4
# ------>         $^a⏏.say;

Here's a bit of background: you can use the $^ twigil on variables to create an implicit signature. To make it possible to use such variables in nested blocks, this syntax actually creates the same variable without the twigil, so $^a and $a are the same thing, and the signature of the sub above is ($a).

In our code, we also have an $a in outer scope and supposedly we print it first, before using the $^ twigil to create another $a in the same scope, but one that contains the argument to the sub... complete brain-screw! To avoid this, just rename your variables to something that doesn't clash. How about some Thai?

my $ความสงบ = 'peace';
sub {
    $ความสงบ.say;
    $^a.say;
}('to your variables');

# peace
# to your variables

Well, Colour Me Errpressed!

If your terminal supports it, the compiler will emit ANSI codes to colourize the output a bit:

for ^5 {
    say meow";
}

That's all nice and spiffy, but if you're, say, capturing output from the compiler to display it elsewhere, you may get the ANSI code as is, like 31m===[0mSORRY![31m===[0m.

That's awful, but luckily, it's easy to disable the colours: just set the RAKUDO_ERROR_COLOR environment variable to 0:

RAKUDO_ERROR_COLOR=0 perl6 test.p6

You can set it from within the program too. You just have to do it early enough, so put it somewhere at the start of the program and use the BEGIN phaser to set it as soon as the assignment is compiled:


BEGIN %*ENV&lt;RAKUDO_ERROR_COLOR&gt; = 0;

for ^5 {
    say meow";
}

An Exceptional Failure

Perl 6 has a special exception—Failure—that doesn't explode until you try to use it as a value, and you can even defuse it entirely by using it in boolean context. You can produce your own Failures by calling the fail subroutine and Perl 6 uses them in core whenever it can.

Here's a piece of code where we define a prefix operator for calculating the circumference of an object, given its radius. If the radius is negative, it calls fail, returning a Failure object:

sub prefix:<⟳> (\𝐫) {
    𝐫 < 0 and fail 'Your object warps the Universe a new one';
    τ × 𝐫;
}

say 'Calculating the circumference of the mystery object';
my $cₘ = ⟳ −𝑒;

say 'Calculating the circumference of the Earth';
my $cₑ = ⟳ 6.3781 × 10⁶;

say 'Calculating the circumference of the Sun';
my $cₛ = ⟳ 6.957 × 10⁸;

say "The circumference of the largest object is {max $cₘ, $cₑ, $cₛ} metres";

# Calculating the circumference of the mystery object
# Calculating the circumference of the Earth
# Calculating the circumference of the Sun
# Your object warps the Universe a new one
#   in sub prefix:<⟳> at test.p6 line 2
#   in block <unit> at test.p6 line 7
# Actually thrown at:
#   in block <unit> at test.p6 line 15

We're calculating the circumference for a negative radius on line 7, so if it were just a regular exception, our code would die there and then. Instead, from the output we can see that we continue to calculate the circumference of the Earth and the Sun, until we get to the last line.

There we try to use the Failure in $cₘ variable as one of the arguments to the max routine. Since we're asking for the actual value, the Failure explodes and gives us a nice backtrace. The error message includes the point where our Failure blew up (line 15), where we received it (line 7) as well as where it came from (line 2). Pretty sweet!


Useful, descriptive errors are becoming the industry standard, and the Perl 6 and Rust languages are leading that effort. The errors must go beyond merely telling you the line number to look at. They should point to the piece of code you wrote. They should make a guess at what you meant. They should reference your code, even if they originate in some third-party library you're using.

Most of Perl 6 errors display the piece of code containing the error. They use algorithms to offer valid suggestions when you mistyped a subroutine name. If you're used to other languages, Perl 6 will detect your "accent," and offer the correct way to pronounce your code in Perl 6. And instead of immediately blowing up, Perl 6 offers a mechanism to propagate errors right to the code the programmer is writing.

Perl 6 has Awesome Errors.