"Functional core & Imperative shell" : OO design, and isolated tests without mocks

Following Ovid's "Sick of being mocked by unit tests" and its link to the "Is TDD Dead?" discussion between Kent Beck and others, I found this talk, which seems like a promising solution. I'm writing this post to share it with you all, but also to clarify the ideas for myself.

The original goal is to do isolated unit testing without using Test Doubles (mocks, stubs, etc.), and the method is called "Functional core & Imperative shell". According to its creator, it also leads to cleaner design, which in the end is more important than the testing. So we're killing two birds with one stone.

Here is how to do it in theory:

The program is divided between Services, which interact with the outside world, and a Functional core, which performs the business logic. Both are called by a thin Imperative shell layer which doesn't make any decisions: it just passes data between Services and elements of the Functional core. It always has only one path (no "if ... else"), so it doesn't need unit tests; it will be exercised by integration tests and end-to-end tests.
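
Here is a minimal, self-contained sketch of that layering (every package, method and field name below is invented for illustration, not taken from the talk):

use strict;
use warnings;

# Functional core: pure business logic, values in, values out.
package Core::Pricing {
    sub total_in_cents {
        my ( $class, @items ) = @_;    # each item: { price_in_cents => ..., qty => ... }
        my $total = 0;
        $total += $_->{price_in_cents} * $_->{qty} for @items;
        return $total;
    }
}

# Services: the only code that touches the outside world.
package Service::Cart {
    sub fetch_line_items {
        # A real Service would hit a database or a web service here.
        return (
            { price_in_cents => 250, qty => 2 },
            { price_in_cents => 100, qty => 1 },
        );
    }
}

package Service::Printer {
    sub print_total {
        my ( $class, $cents ) = @_;
        printf "Total: %.2f\n", $cents / 100;
    }
}

# Imperative shell: one straight path, no decisions, just plumbing.
package Shell {
    sub checkout {
        my ($class) = @_;
        my @items = Service::Cart->fetch_line_items;          # outside world in
        my $total = Core::Pricing->total_in_cents(@items);    # pure core
        Service::Printer->print_total($total);                # outside world out
        return;
    }
}

package main;
Shell->checkout;    # prints "Total: 6.00"

The Shell knows the order of operations but none of the pricing rules; the core knows the rules but never touches the outside world.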

Inside the Functional core, Entities communicate with their neighbours by passing only data to each other: it's values in, values out.

What Entities pass to each other are Value Objects. A Value Object can be a string, a number, a list, or a hash. It's data without behaviour.

By extension, an object whose attributes are all Value Objects is itself a Value Object. It can have methods that perform operations, as long as those methods are pure functions of the object's values. For example, it can give you information about the data ("how long is this string?").
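
For instance (FullName is a made-up class, written with Moo and read-only attributes):

package FullName;
use Moo;

has first => ( is => 'ro', required => 1 );
has last  => ( is => 'ro', required => 1 );

# Both methods are pure functions of the object's values:
# they change nothing and always give the same answer for the same data.
sub as_string  { my $self = shift; join ' ', $self->first, $self->last }
sub char_count { length( shift->as_string ) }

1;

So FullName->new( first => 'Larry', last => 'Wall' )->char_count returns 10, and calling it never changes the object.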

To make things work smoothly, Value Objects should be immutable; otherwise they can be modified from the outside and things become complicated. Once you have passed a value to an Entity, you must know that it will never change. This links to this discussion about how to achieve immutability with Perl. Perhaps we could write a ValueObject role (and/or MooX::ValueObject).
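
Here is one rough sketch of what such a role might look like (ValueObject and Point are hypothetical names; the idea is read-only Moo attributes plus Hash::Util's lock_hash to catch direct writes to the object's hash):

use strict;
use warnings;

package ValueObject {
    use Moo::Role;
    use Hash::Util qw(lock_hash);

    # Once the object is built, lock its underlying hash so that even a
    # direct poke like $obj->{x} = ... dies instead of silently mutating it.
    # Caveat: this won't play well with lazy attributes, which write later.
    sub BUILD {}
    after BUILD => sub { lock_hash( %{ $_[0] } ) };
}

package Point {
    use Moo;
    with 'ValueObject';

    has x => ( is => 'ro', required => 1 );
    has y => ( is => 'ro', required => 1 );
}

package main;

my $p = Point->new( x => 1, y => 2 );
eval { $p->{x} = 99 };
print $@ ? "immutable\n" : "oops, mutable\n";    # prints "immutable"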

So we end up with four types of components:


  • Services, which interface with the outside world;

  • Entities, which perform the business logic and send data to each other;

  • Value Objects, which carry data around;

  • the Imperative Shell, a thin layer that wires Services to Entities by passing Values around.

I knew about separating the outside world (Services) from my own code with a thin layer, but I had never heard about keeping a Functional core for the business logic.

Passing immutable data around between Entities makes it so much simpler to test them in isolation: no need for stubs ("canned return values") any more; just construct your incoming Value Objects!
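
For instance, an isolated test of an Entity could look like this (Order and Discount are made-up names, and the discount rule is invented just for the example):

use strict;
use warnings;
use Test::More;

package Order {
    use Moo;
    has total_in_cents => ( is => 'ro', required => 1 );
}

package Discount {
    # Pure business logic: a Value Object in, a plain value out. Nothing to mock.
    sub rate_for {
        my ( $class, $order ) = @_;
        return $order->total_in_cents >= 10_000 ? 0.10 : 0;
    }
}

package main;

my $big_order = Order->new( total_in_cents => 25_000 );
is( Discount->rate_for($big_order), 0.10, 'large orders get a 10% discount' );

my $small_order = Order->new( total_in_cents => 500 );
is( Discount->rate_for($small_order), 0, 'small orders get no discount' );

done_testing;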

And apparently it improves design. Here is why (well, that's my understanding of it right now):

Within Entities, behaviours are naturally isolated. Instead of the artificial boundary of mocking, you have a real boundary in the form of values. This kind of well-defined interface makes it easier for Entities to vary independently, which facilitates software maintenance. It should also lead to simpler debugging: all I need to do is look at the data at different points in the program to know what's going on; there is no need to understand how a given object got into a given state, which can sometimes be very complex and unintuitive.

One trade-off of such a functional style is that you end up putting more complexity into the data structures that you pass from one method to another. That can become constricting over time, as you get a lot of methods with intimate knowledge of what those Value Objects are expected to look like. This is the downside of getting "smarter data and dumber code", which is generally considered a good thing, as pointed out by Ovid and by quite a few other people... (I don't have enough experience to confirm this myself; I will try this method and hopefully figure it out.)


How does this sound?
Have you tried something similar? What did you think about it?

What could be the downsides and trade-offs of this design style, in theory?
What could be the downsides and trade-offs specific to implementing it with Perl5? And with Perl6?

I think I want to try it soon on a small project. Unfortunately, I'm working on mostly non-programming aspects of my activity at the moment. I can't wait to get back to coding :-)

PS: I have now also posted some examples.

PPS: apparently Clojure embodies these principles, but it has downsides which make me still prefer to code in Perl, for now. I might post an article about this soon.

13 Comments

MooX::Struct might be close to what you want for value objects.

One class can happily pass a Point3D object to a second class. The second class can happily call the Point3D accessors and methods. However, the second class can't (easily) construct objects using the same Point3D definition from the first class.

MooX::Struct can also define methods, e.g.:

use Math::Trig qw(pi);
use MooX::Struct Circle => [
   qw( radius colour ),
   area => sub {
      my $r = shift->radius;
      pi * ($r ** 2);
   },
];
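
For illustration (this usage is not part of the original comment), the struct defined above could then be used like so:

my $circle = Circle->new( radius => 2, colour => 'red' );
printf "%.2f\n", $circle->area;    # prints 12.57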

Other than an unreliable external service, why mock anything up? One argument I hear is "speed", but I'd rather have my tests correct than fast.

Another argument I hear is "unit testing", but that's only because some people have an arbitrary definition of "unit" that somehow precludes fully exercising our code because we don't want it to talk to other code. I don't understand that.

Yet another argument has been "this object is so hard to instantiate that we're just going to create a mock." I did that when I was a much younger programmer, not realizing that this is also known as a "code smell." I wish I could go back and smack myself.

But mocks are so useful, right? Layers and layers of mocks, never allowing your actual code to touch other code. And after a while, someone notices that the mocked behavior doesn't match the real behavior but now you've tightly coupled your tests to the implementation rather than the interface and life becomes No Fun.

I sometimes mock time or, as I said, external dependencies that I cannot rely on. Other than that, can someone explain to me why mocking something is good? This is a serious question; I don't want to rely on the above straw man arguments.

Also, I should point out that while your title says "without mocks", aren't "test doubles" just the same thing? I guess I should watch that presentation now :)

Other than an unreliable external service, why mock anything up?

Ovid, I linked you Gary Bernhardt’s Test Isolation Is About Avoiding Mocks in your own comments already – you really ought to read it. The (here deliberately over-summarised) punchline is that trying to test in isolation reveals invisible coupling. But really you ought to read the thing.

What external service is not unreliable? Any web service your code speaks to is by nature unreliable.
Mocking up garbage responses, no responses, or super laggy responses is a pretty good way to test how your application will respond.
Say for example your app sends emails. You write tests that send real emails. That's great, until the SMTP server goes offline. You've only tested against the live server when it was working, so you have no coverage for this.
Of course, if it was your SMTP server you could turn it off and see how your code responds.
If it isn't in your control you have no such recourse. (And if you test with an SMTP server that is in your control, that's a mock.)
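
For example, a test could force that "server offline" path with a mock along these lines (Test::MockModule is just one option, and the module and host names here are only illustrative):

use strict;
use warnings;
use Test::More;
use Test::MockModule;
use Net::SMTP;

# Pretend the SMTP server is offline: Net::SMTP->new now fails to connect.
my $mock = Test::MockModule->new('Net::SMTP');
$mock->mock( new => sub { return undef } );

# Whatever code sends mail can now be exercised on its failure path.
ok( !Net::SMTP->new('smtp.example.com'), 'connection failure is reproducible' );

done_testing;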

Over-mocking is bad, and one should always develop against live services when possible, but I think throwing out the idea of mocks completely is wrong as well.

