"Functional core & Imperative shell" : OO design, and isolated tests without mocks
Following Ovid's Sick of being mocked by unit tests and its link to the "Is TDD Dead?" discussion between Kent Beck and others, I found this talk, which seems like a promising solution. I'm writing this post to share it with you all, but also to clarify the ideas for myself.
The original goal is to do isolated unit testing without using Test Doubles (mocks, stubs, etc.), and the method is called "Functional core & Imperative shell". According to its creator, it also leads to cleaner design, which in the end is more important than the testing. So we're fetching two stones with one bird.
Here is how to do it in theory:
The program is divided between Services, which interact with the outside world, and a Functional core, which performs the business logic. Both get called by a thin Imperative shell layer which makes no decisions: it just passes data between Services and elements of the Functional core. It always has only one path (no "if ... else"), so it doesn't need unit tests; it will be exercised by integration tests and end-to-end tests.
Inside the Functional core, Entities communicate with their neighbours by passing only data to each other: it's values in, values out.
What Entities pass to each other are Value Objects. A Value Object can be a string, a number, a list, a hash. It's data without behaviour.
By extension, an object whose attributes are all Value Objects is itself a Value Object. It can have methods that perform operations, as long as those methods are pure functions of the object's values. For example, it can give you information about the data ("how long is this string?").
To make things work smoothly, Value Objects should be immutable; otherwise they can be modified from the outside and things become complicated. Once you have passed a value to an Entity, you must know that it will never change. This links to this discussion about how to achieve immutability with Perl. Perhaps we could write a ValueObject role (and/or MooX::ValueObject).
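To make the idea concrete, here is a minimal sketch of an immutable Value Object with pure methods, written in Python as a language-neutral stand-in (the Rectangle class and its methods are invented for illustration):

```python
from dataclasses import dataclass

# frozen=True makes instances immutable: assigning to an attribute
# afterwards raises dataclasses.FrozenInstanceError.
@dataclass(frozen=True)
class Rectangle:
    width: float
    height: float

    # Methods are fine as long as they are pure functions of the values.
    def perimeter(self) -> float:
        return 2 * (self.width + self.height)

    def surface_area(self) -> float:
        return self.width * self.height
```

Once a Rectangle has been passed to an Entity, nobody can change its width behind that Entity's back; you can only construct a new Rectangle.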
So we end up with four types of components:
- Services, which interface with the outside world;
- Entities, which perform the business logic and send data to each other;
- Value Objects, which carry data around;
- the Imperative Shell, a thin layer which wires Services to Entities by passing Values around.
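Here is how the four components might line up in a tiny sketch (Python as a language-neutral stand-in; the discount rule, the names, and the hard-coded Service are all invented for illustration):

```python
from dataclasses import dataclass

# Value Object: immutable data, no behaviour beyond pure functions of its values.
@dataclass(frozen=True)
class Order:
    customer_id: int
    total: float

# Functional core (Entity): pure business logic, values in, values out.
def discounted_total(order: Order, loyalty_points: int) -> float:
    """Apply a 10% discount for loyal customers; purely a function of its inputs."""
    if loyalty_points >= 100:
        return round(order.total * 0.9, 2)
    return order.total

# Service: talks to the outside world (stubbed out with a constant here).
def fetch_loyalty_points(customer_id: int) -> int:
    return 120  # imagine a database or API call

# Imperative shell: one straight path, no branching; it just wires things together.
def checkout(order: Order) -> float:
    points = fetch_loyalty_points(order.customer_id)   # Service
    return discounted_total(order, points)             # Functional core
```

All the decisions live in `discounted_total`, which needs nothing but values to be tested; `checkout` has a single path and is left to integration tests.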
I knew about separating the outside world (Services) from my own code with a thin layer; but I had never heard about keeping a Functional core for the business logic.
Passing immutable data around between entities makes it so much simpler to test them in isolation: no need for stubs ("canned return values") any more, just construct your incoming Value Objects!
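For instance (an invented example, sketched in Python): to test this Entity-style function in isolation, we just build the Value Objects it consumes; no stub has to be trained with canned return values.

```python
from dataclasses import dataclass

# A Value Object carrying the data (immutable).
@dataclass(frozen=True)
class LineItem:
    name: str
    price: float
    quantity: int

# An Entity-style function: pure business logic, values in, value out.
def invoice_total(items):
    return sum(i.price * i.quantity for i in items)

# The isolated test: construct real values instead of stubbing collaborators.
items = (LineItem("widget", 2.50, 4), LineItem("gadget", 10.0, 1))
assert invoice_total(items) == 20.0
```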
And apparently it improves design. Here is why (well, that's my understanding of it right now):
Within Entities, behaviours are naturally isolated. Instead of the artificial boundary of mocking, you have a real boundary in the form of values. This kind of well-defined interface makes it easier for entities to vary independently, which facilitates software maintenance. It should also lead to simpler debugging: all I need is to look at the data at different points in the program to know what's going on; there is no need to understand how a given object got into a given state, which can sometimes be very complex and unintuitive.
One trade-off of such a functional style is that you end up putting more complexity into the data structures that you pass from one method to another. That can become constricting over time, as you end up with a lot of methods that have intimate knowledge of what those value objects are expected to look like. This is the downside of getting "smarter data and dumber code", which is generally considered a good thing, as pointed out by Ovid and by quite a few other people. (I don't have enough experience to confirm this myself; I will try this method and hopefully figure it out.)
How does this sound?
Have you tried something similar? What did you think about it?
What could be the downsides and trade-offs of this design style, in theory?
What could be the downsides and trade-offs, specific to implementing it with Perl5? And Perl6?
I think I want to try it soon on a small project. Unfortunately I'm working on mostly non-programming aspects of my activity at the moment. I can't wait to get back to coding :-)
PS: I have now also posted some examples.
PPS: apparently Clojure embodies these principles, but it has downsides which make me still prefer to code in Perl, for now. I might post an article about this soon.
MooX::Struct might be close to what you want for value objects.
I don't think MooX::Struct would work, because "your module can create a 'Point3D' struct, and some other module can too, and they won't interfere with each other". Value objects represent an interface between two classes; they must be defined in one single place, so that these two classes can understand it.
Also, I'm not sure whether MooX::Struct objects can have methods. That's quite important: I want a Rectangle to tell me its ->perimeter() and ->surface_area().
Something derived from Mo objects might do the trick, or Moo objects if we extended them in a helpful way.
I think the properties we want to add in MooX::ValueObject are:
- all attributes must be either a scalar, an arrayref, a hashref, or a value object (is that do-able with a MooX extension?);
- immutability: using Sub::Trigger::Lock for all attributes might be enough, as all arguments are going to be value objects anyway.
One extra thing that could be dangerous but useful would be to declare some existing Moo(se) classes as Value Objects, so they can be incorporated into a value object. It could either be declared with:
# I trust this attribute to behave as a value object
has 'some_attribute' => ( value_object => 1 );
And/or it could be a Role used as a tag. For instance, because DateTime::Moonpig is an immutable version of DateTime, we could indicate that it's a value object by composing the ValueObject::Tag role into it. We would then be allowed to carry it around in a value object.
It's dangerous because if you declare some mutable class as a ValueObject, you are trusting an untrustworthy class. But it could be a good way to use a fair part of the existing CPAN modules while using this coding style.
Am I imagining an interesting subset of the Perl language, or am I hopelessly swimming against the current?
What I strive for is a subset of Perl that encourages simpler designs. What I mean by "simpler" is very well expressed in this video: http://www.infoq.com/presentations/Simple-Made-Easy
I will post a concrete example tomorrow: it's much easier to understand when you see it. And it's probably quite close to what lots of people (who put some care in their design) do already.
My article is full of specific vocabulary, which makes it sound pretty esoteric I guess.
One class can happily pass a Point3D object to a second class. The second class can happily call the Point3D accessors and methods. However, the second class can't (easily) construct objects using the same Point3D definition from the first class.
MooX::Struct can define methods, e.g. via coderefs included in the struct definition.
Cheers Toby. They could qualify then, but I would still prefer to manipulate immutable structures.
Example coming tomorrow! (hopefully)
Other than an unreliable external service, why mock anything up? One argument I hear is "speed", but I'd rather have my tests correct than fast.
Another argument I hear is "unit testing", but that's only because some people have an arbitrary definition of "unit" that somehow precludes fully exercising our code because we don't want it to talk to other code. I don't understand that.
Yet another argument has been "this object is so hard to instantiate that we're just going to create a mock." I did that when I was a much younger programmer, not realizing that this is also known as a "code smell." I wish I could go back and smack myself.
But mocks are so useful, right? Layers and layers of mocks, never allowing your actual code to touch other code. And after a while, someone notices that the mocked behavior doesn't match the real behavior but now you've tightly coupled your tests to the implementation rather than the interface and life becomes No Fun.
I sometimes mock time or, as I said, external dependencies that I cannot rely on. Other than that, can someone explain to me why mocking something is good? This is a serious question; I don't want to rely on the above straw man arguments.
Also, I should point out that while your title says "without mocks", aren't "test doubles" just the same thing? I guess I should watch that presentation now :)
The answer lies in my Test Doubles link, where Martin Fowler explains that there are 4 kinds of test doubles:
- Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
- Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
- Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'.
- Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
So, Stubs are used to test state ("I send this data in, what comes out?"), while Mocks let us test behavior ("if I send this data in, does the Logger ->log()?").
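Here is that distinction in code (a Python sketch with invented RateStub/LoggerMock names): the stub test asserts on the value that comes out, while the mock test asserts on the calls that were made.

```python
# Stub: a canned answer; the test asserts on state (what comes out).
class RateStub:
    def current_rate(self):
        return 1.25  # canned return value

def convert(amount, rate_source):
    return amount * rate_source.current_rate()

assert convert(100, RateStub()) == 125.0  # state-based test

# Mock: records the calls it receives; the test asserts on behaviour.
class LoggerMock:
    def __init__(self):
        self.calls = []
    def log(self, message):
        self.calls.append(message)

def convert_and_log(amount, rate_source, logger):
    result = convert(amount, rate_source)
    logger.log(f"converted {amount} -> {result}")
    return result

logger = LoggerMock()
convert_and_log(100, RateStub(), logger)
assert logger.calls == ["converted 100 -> 125.0"]  # behaviour-based test
```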
Several CPAN modules called Mock are in fact Stubs. We have very few mock modules in Perl.
Explanation with examples here: https://blogs.perl.org/users/mascip/2014/06/functional-core-imperative-shell-explanation-with-code.html
Ovid, I linked you Gary Bernhardt’s Test Isolation Is About Avoiding Mocks in your own comments already – you really ought to read it. The (here deliberately over-summarised) punchline is that trying to test in isolation reveals invisible coupling. But really you ought to read the thing.
Oh, I hadn't seen Ovid's previous comment. I'm in agreement with you on mocks Ovid.
I think that learning to avoid mocks, while still testing lots of code in isolation, is what led Gary to this idea of a functional core and an imperative shell.
And then he realized that it also simplifies the design, in the sense that it keeps concerns properly separated.
The best hour I've spent recently was watching the video "Simple Made Easy" by the creator of Clojure. The link is at the bottom of my first comment. It's the best explanation I've heard of what good design is. And it fits very well with Gary's functional core.
What external service is not unreliable? Any web service your code speaks to is by nature unreliable.
Mocking up garbage responses, empty responses, or super laggy responses is a pretty good way to test how your application will respond.
Say for example your app sends emails. You write tests that send real emails. That's great, until the SMTP server goes offline. You've only tested against the live server when it was working, so you have no coverage for this.
Of course, if it was your SMTP server you could turn it off and see how your code responds.
If it isn't in your control you have no such recourse. (And if you test with an SMTP server that is in your control, that's a mock.)
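For example (a Python sketch with invented names), a stub standing in for a dead SMTP server gives the failure path the coverage that a live server can't:

```python
# A stub standing in for an unreachable SMTP server.
class DownSMTPStub:
    def send(self, message):
        raise ConnectionError("SMTP server offline")

def notify(smtp, message):
    """Returns True on success, False if the mail service is down."""
    try:
        smtp.send(message)
        return True
    except ConnectionError:
        return False

# The offline behaviour is now exercised deterministically.
assert notify(DownSMTPStub(), "hello") is False
```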
Over-mocking is bad, and one should always develop against live services when possible, but I think throwing out the idea of mocks completely is wrong as well.
Good point Samuel! Agreed.
I think what people throw out at the moment is mocking as a default for isolated testing:
mocking most relationships in your tests, for the sake of isolation, is just wrong.
Especially as isolated testing can be done without mocking; it's a matter of design.