Veure's Test Suite

We're still hacking away on the Veure MMORPG and things are moving forward nicely, but I thought some folks would like to hear more about our development process. This post is about our test suite. I'd love to hear how it compares to yours.

Here's the full output:

$ prove -l t
t/001sanity.t ... ok   
t/perlcritic.t .. ok     
t/sqitch.t ...... ok     
t/tcm.t ......... ok       
All tests successful.
Files=4, Tests=740, 654 wallclock secs ( 1.57 usr  0.20 sys + 742.40 cusr 15.79 csys = 759.96 CPU)
Result: PASS

Let's break that down so you can see what we've set up. You'll note that what we've built strongly follows my Zen of Application Test Suites recommendations.

First, the 001sanity.t bails out of the test suite if any critical problems are found:

  • A security check (yeah, this is vague)
  • sqitch database changes not fully deployed
  • Redis is not running
  • Output is not UTF-8

Any of the above issues is considered "stop everything and fix." When one of these checks fails, the developer will usually also get a useful diagnostic message telling them the likely cause of the failure and how to resolve it.

The perlcritic test ensures we pass severity 5 (gentle) checks. We're close to passing at severity 4, but we have a strong enough test suite that I'm not terribly worried about it.

The t/sqitch.t test lets us test our sqitch changes. I get very tired of broken database migrations, but we're finding more and more issues with a standard database migration strategy (particularly the tension between data and its structure, but more on that in a later post).

Finally, we have the t/tcm.t tests. The core of our test suite is built on top of Test::Class::Moose. Once you get used to it, it really speeds up test development time and full test suite run time. However, running an individual test class can be a bit slower because it has to load the full app. It's a trade-off, but one I'm happy with.
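For context, a t/tcm.t driver for Test::Class::Moose is typically just a few lines; the t/lib directory below is my assumption about where the test classes live:

use Test::Class::Moose::Load 't/lib';    # loads every test class under t/lib
use Test::Class::Moose::Runner;

Test::Class::Moose::Runner->new->runtests;

Loading all test classes into one process is exactly why the full run is fast (the app loads once) while a single class still pays the full startup cost.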

You'll note that the summary line only claims 740 tests, but that's not true. For example, here is a regression test for a bug from the Veure Archives functionality. A particular URL should be available to players, but if you're not logged in, it should redirect you to the root URL of the archives rather than generating a 500:

sub test_travel_doesnt_500_when_logged_out {
    my $test = shift;
    my $mech = $test->test_mech;

    # The URL here is illustrative; the real test fetches an archive
    # travel page for a station the character has visited.
    $mech->get('/archive/travel/some-station');
    ok $mech->success,
        'We should be able to get travel pages for stations we have visited';
    isnt $mech->status, 500,
        '... but if we are logged out, it should not be a 500';
    is $mech->path, '/archive',
        '... and we should instead be redirected to the /archive';
}
You probably count three tests there, but in true xUnit philosophy, those three tests are for one feature, so they're counted as one test. If you count individual tests, we have about 6,000 tests for our system and we're at 85% test coverage. What's nice is that these are real tests, not just "I happened to run this code" tests. When we break something, we find out pretty quickly.

The test suite also has no warnings, something we've been hyper-vigilant about. We do have some front-end exposure so we're getting Selenium tests in place and that will help with a major concern we've had.

Our fixture handling is also pretty easy to work with. Here are the fixture declarations for our combat tests:

with qw(
    Veure::TestRole::Fixtures::Combat
    Veure::TestRole::Fixtures::Characters
);

(The role names above are illustrative; the original list didn't survive.)
As a bonus, those fixtures are loaded on demand. Consuming the roles merely declares an intent to use those fixtures.
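A sketch of how on-demand loading can work with Moose roles (the package and helper names here are hypothetical, not Veure's real API): each role exposes a lazy attribute, so nothing is built until a test first reads it.

package Veure::TestRole::Fixtures::Combat;    # hypothetical name
use Moose::Role;

# Lazy attribute: consuming the role declares intent, but the
# fixture data is only built the first time a test asks for it.
has combat_fixtures => (
    is      => 'ro',
    lazy    => 1,
    builder => '_build_combat_fixtures',
);

sub _build_combat_fixtures {
    my $test = shift;
    return $test->load_fixtures('combat');    # hypothetical loader
}

1;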

The end result of all of this? We have multiple developers working on Veure, and we find out fast if something has gone off the rails. Devs feel a heck of a lot more confident that their code does what it's supposed to do, and merely having this level of quality and coverage puts tremendous social pressure on devs not to drop the ball here. If they're not clear how to test something, they can hop on Slack and ask, but in practice they check the docs/developer/testing.pod document, which tells them the vast majority of what they need to know.

Once we launch, I'm curious to see if we can maintain this level of quality.

Feel free to ask any questions and I'll answer what I can.


Nice write-up. I hope you can find time to update us concerning the game at some point. I'm eagerly awaiting its release.

A little extra info:

Without going into too many specifics about security, there was a period when running Devel::NYTProf showed that the password hashing code we use was - by far - consuming the most time in the test suite. As you can see from Ovid's output above, the tests already take a while to run, so finding a way to reduce the time we spend hashing was pretty critical for us.

In many of our tests, a typical workflow resembles:

  • Log in a character
  • Move the character someplace
  • Do stuff

Log in a player several thousand times over the course of a test suite and it's easy to see how the time for password hashing adds up.

How did we overcome this? We moved all testing of hashed passwords to t/001sanity.t, and run those tests once (rationale being that if password hashing is broken, everything needs to stop immediately, and someone should jump on that!). For our test characters, there are no hashed passwords, just plain text ones, so the hashing code is never called.
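A sketch of what that short-circuit might look like (the method and attribute names are hypothetical; the post doesn't show the real code):

sub check_password {
    my ( $self, $candidate ) = @_;

    # Test characters store plain text passwords, so the expensive
    # hashing code is never called for them.
    return $candidate eq $self->password
        if $self->password_is_plain_text;

    return $self->_check_hashed_password($candidate);    # the slow path
}

The key point is that the slow path still exists and is still exercised, just once, in t/001sanity.t, rather than thousands of times across the suite.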

How well did it work? On my Mac, the entire test suite ran in about 10 minutes before we made the change, and in about 8 minutes afterwards. It took some time for one of our developers to get that hashed out (see what I did there?), but the 20% decrease in the time it took to run the test suite was more than worth it.


The upcoming game blog you mentioned would be great! Good luck! As a big fan of text games and space I'm really excited. There are quite a few fantasy MMO text games like Achaea, but nothing space/scifi of this level so it will be unique. Keep up these little posts on the game's creation as well. Very interesting.


About Ovid

Freelance Perl/Testing/Agile consultant and trainer. See for our services. If you have a problem with Perl, we will solve it for you. And don't forget to buy my book!