I'm curious since I haven't used mocks that work that
way.
My gut reaction is that, given a change to a complex
object under test, you'd have to make more changes to
the test than I would. ;) Purely because your
expectations are embedded in code, whereas mine live in
a text log: when the test breaks, the mock can write out
a new list of expectations, so all you need to do is
check that it's correct and switch to using the new
log...
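For concreteness, that log-based approach might look something like this sketch (the class and method names are illustrative, not from any particular framework): every call gets recorded, the test compares the recording against a baseline, and on a mismatch the new recording can be written out as the next baseline.

```python
# Minimal sketch of a log-comparing mock (hypothetical names).
class LoggingMock:
    def __init__(self):
        self.log = []

    def __getattr__(self, name):
        def recorder(*args):
            # Record every call as a readable line, e.g. "GetData(42)".
            self.log.append(f"{name}({', '.join(map(repr, args))})")
        return recorder

    def matches(self, expected_lines):
        # In the real setup, expected_lines would be read from a text file,
        # and on failure self.log would be saved as the new baseline.
        return self.log == expected_lines


mock = LoggingMock()
mock.GetData(42)
mock.DoThing()
print(mock.matches(["GetData(42)", "DoThing()"]))  # prints True
```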
I rather like the readability of the
"Expect" syntax over comparing logs.
- When the expected log text lives on disk, the test
reader has to open the text file manually to see the
series of events that is expected to happen.
- Also, because the mock always logs *all* calls, the
expected log text usually contains events from totally
different scenarios, making it less readable and less
focused on the current test. That means that if I want
to know that my object gets the "X"
method called, I don't care that "Z"
and "Y" are called as well, but with
the log I'd have to write those down too or my test
would break.
-With "Expect" you can surgically
select what you want to happen - which discards the test
reader from asking themselves "Why is he
expecting 'Y' in this test?" and also makes for
the intent of the test be more visible. Also both test
and expectations are in the same place, and expectations
are shorter.
In summary, in both cases, when tests break you need to
change something: either the test code or, in the log
case, the expected log text. Taking that into account,
I'd rather use "Expect" if it's not too much of a
hassle.
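The inline style could be sketched with Python's unittest.mock (my choice of framework for illustration; the thread doesn't name one). The expectation sits right next to the scenario, so the reader never has to open a separate log file:

```python
from unittest.mock import Mock

service = Mock()

# ... the object under test would use the service here; simulated directly:
service.GetData("customers")

# Inline expectation: exactly one GetData call with this argument.
service.GetData.assert_called_once_with("customers")
```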
Fair enough.
I don't quite understand the 'surgically select' part?
Are you saying that if you're testing a scenario in
which an object uses a service provider to
"GetData()" you can tell your mock to
expect a single call to "GetData()"
and then if the object under test actually calls
"DoThing()" on the mock before and
after the call to "GetData()" the test
can still pass? And you'd consider that a valid way for
the test to pass?
If so, why aren't you interested in all of the
interaction that occurs during the scenario under test?
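The scenario being asked about can be made concrete with Python's unittest.mock (my choice of framework, not the thread's): a non-strict mock records everything, but the test asserts only on the one call it cares about, so surrounding calls don't fail it.

```python
from unittest.mock import Mock

provider = Mock()

# Simulating the object under test, which calls DoThing() before and
# after the call we actually care about:
provider.DoThing()
provider.GetData()
provider.DoThing()

# The test still passes: we assert only on GetData() and ignore the rest.
provider.GetData.assert_called_once()
```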
I find that the fact that the mocks involved log all
interaction for a particular scenario (or state of a
scenario, if I validate the logs at various points in
the test case) helps immensely to lock down object
interaction and draw attention to unexpected behaviour.
I agree that having the expectations inline helps make
the intent of the test more obvious. That was always the
main issue that I had with the logs.
Still, horses for courses. I'm working on some auto
generated mocks at present and I'm currently adding a
form of 'expect' syntax to them as it's reasonably easy
to generate alongside the logging functionality.
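One way that generation could work, sketched with hypothetical names: the 'expect' layer is nothing more than a query over the call log the mock already keeps, so it can be emitted alongside the existing logging code.

```python
class LoggingMock:
    def __init__(self):
        self.log = []

    def __getattr__(self, name):
        def recorder(*args):
            self.log.append((name, args))  # the existing logging behaviour
        return recorder

    # The generated 'expect' layer is just a query over the same log:
    def expect_called(self, name, times=1):
        count = sum(1 for called, _ in self.log if called == name)
        assert count == times, f"{name} called {count} time(s), expected {times}"


m = LoggingMock()
m.GetData(1)
m.DoThing()
m.expect_called("GetData")  # passes; DoThing() is simply ignored
```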
Len:
"And you'd consider that a valid way for the
test to pass?
If so, why aren't you interested in all of the
interaction that occurs during the scenario under test?
"
I usually try to test only one thing in my unit tests.
That one thing may be that a correct and *single*
interaction occurs on the class under test.
Other interactions may occur because of different
application logic that should or should not happen, and
that logic is probably tested elsewhere.
The point is: I can choose to ignore, or not to ignore,
the other interaction.
It all depends on the scenario I'm testing. Some
scenarios might call to test lots of interactions on the
same object and some not.
An example might be in order: Say you have an object
that, upon calling method "foo" should
do several things:
- Write a log to a logger object (with complex logging
logic)
- Send an email if some parameters are valid (using an
emailer object)
- Throw an exception if one of several validations fails
(one test each?)
- Pass down a message to a data layer object with
various parameters.
With logging, practically all of these (except maybe
the third one, as it's not an interaction) have to be
verified throughout all the tests for that method. There
is no way to write a test for just one of those
constraints.
With mocks I can certainly do that: test each rule with
one or more tests and ignore the others. That way, if
one of the "rules" fails to correctly
do its job, only some of my tests will fail and the rest
will pass (telling me exactly what I should be focusing
on).
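A sketch of that, using Python's unittest.mock (the foo logic and service names below are hypothetical, matching the example above): one rule per test, with the other collaborators ignored entirely.

```python
from unittest.mock import Mock

def foo(logger, mailer, data_layer, params):
    # Hypothetical object-under-test logic from the example above.
    logger.Write("foo called")
    if params.get("send_email"):
        mailer.SendEmail(params["recipient"])
    data_layer.Save(params)

# Check only the email rule; the logger and data-layer
# interactions happen but are not asserted on.
logger, mailer, data_layer = Mock(), Mock(), Mock()
foo(logger, mailer, data_layer,
    {"send_email": True, "recipient": "a@b.example"})
mailer.SendEmail.assert_called_once_with("a@b.example")
```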
Usually mocks have the ability to be
"strict" or non-strict, where in strict
mode only the expected interactions may occur and
anything unexpected throws an exception. Non-strict
mode allows me to "select" the
interaction I care about.
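Strict mode can be sketched like this (illustrative, not any specific library): any interaction outside the expected set raises immediately.

```python
class StrictMock:
    def __init__(self, expected):
        self.expected = set(expected)

    def __getattr__(self, name):
        if name not in self.expected:
            # Strict mode: any unexpected interaction fails the test.
            raise AssertionError(f"unexpected call: {name}")
        return lambda *args, **kwargs: None


strict = StrictMock(expected=["GetData"])
strict.GetData()      # fine: this interaction was expected
try:
    strict.DoThing()  # unexpected, so strict mode raises
except AssertionError as err:
    print(err)        # prints: unexpected call: DoThing
```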
OK, I understand. Good example. Though I would imagine
we'd be talking about several mocks, one to represent
each of the services that's being used. In that
situation testing just the email functionality would
mean ignoring the logs from the other mocks during that
test...
Anyway, I understand where you're coming from and I'm
interested enough to investigate adding this kind of
functionality to some of my mocks to see how I get on
with it (step one, work out how to do it in such a way
that I don't have to spend an age writing it by hand for
each mock that I want to use it with ;) )
Len:
Yes - you would probably be using several mocks in the
above example which makes it a little irrelevant.
However, it's quite easy to make that into an example
where there are several logical expectations on one mock
object, that you would much rather separate into several
different unit tests. For example - make sure that the
Mailer object is called with the
"SendEmail" method, but that the
"AddAttachement" method is called or
not called depending on various logical behavior etc.
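Those separate tests might look like this with Python's unittest.mock (the send_report logic is hypothetical, and "AddAttachement" is spelled as in the example above):

```python
from unittest.mock import Mock

def send_report(mailer, attach):
    # Hypothetical logic: SendEmail always happens;
    # AddAttachement is conditional, mirroring the example above.
    if attach:
        mailer.AddAttachement("report.pdf")
    mailer.SendEmail("boss@example.com")

# Test 1: SendEmail is always called.
m1 = Mock()
send_report(m1, attach=False)
m1.SendEmail.assert_called_once()

# Test 2: AddAttachement is called when the condition holds...
m2 = Mock()
send_report(m2, attach=True)
m2.AddAttachement.assert_called_once_with("report.pdf")

# Test 3: ...and not called otherwise.
m3 = Mock()
send_report(m3, attach=False)
m3.AddAttachement.assert_not_called()
```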