Deep Dive into TDD Revisited

Hi, everyone. I haven’t posted any serious technical content on this blog for a long time now. The reason is that I’m now a pointy-haired boss most of the time. I spend my days teaching, mentoring, coaching, and occasionally pairing with someone on another team. I miss coding… I really do.

However, I’ve been digging into Interaction Based Testing over the past few weeks, and I’ve found it fascinating. The road I took to get here involved trying to learn more about what Behavior Driven Development is, and why so many people I know and respect seem to like it, or at least appreciate it. One of the techniques that BDD uses is something called Interaction Based Testing, or IBT for short.

Interaction Based Testing

IBT is different from traditional TDD in that it defines and verifies the interactions between objects as they take place, rather than verifying that some input state is successfully translated into some output state. That latter kind of testing, called State Based Testing, or SBT for short, is what I had always done when I did TDD (for the most part). IBT involves using a mock object framework that lets you set expectations on the objects your class under test is going to call, and then helps you verify that each of those calls took place. Here is a short example:

using NUnit.Framework;
using Rhino.Mocks;
using Rhino.Mocks.Constraints;

[TestFixture]
public class IBTExample
{
    [Test]
    public void SampleIBTTest()
    {
        MockRepository mocks = new MockRepository();

        // The framework builds the test double for me.
        IListener listener = mocks.CreateMock<IListener>();
        Repeater repeater = new Repeater(listener);

        // Record mode: this call defines an expectation; it doesn't run anything.
        listener.Hear("");
        LastCall.On(listener).Constraints(Is.NotNull()).Repeat.Once();

        mocks.ReplayAll();

        repeater.Repeat("");

        mocks.VerifyAll();
    }
}

The basic problem I’m trying to solve here is writing a method, Repeat(), on a class called Repeater, such that when I call Repeat(), it repeats whatever it was passed to its IListener. The setup is more complicated than what I would use in a state-based test, but it keeps my test free of details that are irrelevant to it (like explicit data).

What this test does is create the system and set expectations on the IListener that define how the Repeater class is going to use it at the appropriate time. The MockRepository class represents the mock object framework I’m using, which in this case is Rhino Mocks. I new one of these up, and it handles all the mocking and verification activities this test requires. On the next line, you see me creating a mock object to stand in for an IListener. I typically would have created a state-based stub for this listener that simply remembered what it was told, for my test to interrogate later. In this case, the framework creates a testing version of the interface for me, so I don’t have to build my own stub. Next, I create the class under test and wire it together with the listener. Nothing fancy there.
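For contrast, here is roughly what that hand-rolled, state-based stub would have looked like. This is just a sketch; RememberingListener is a name I’m making up for this post, not code from anywhere else:

public class RememberingListener : IListener
{
    // State-based stub: it just remembers what it was told so the
    // test can interrogate it afterward.
    public string WhatIHeard;

    public void Hear(string whatToHear)
    {
        WhatIHeard = whatToHear;
    }
}

[Test]
public void StateBasedVersion()
{
    RememberingListener listener = new RememberingListener();
    Repeater repeater = new Repeater(listener);

    repeater.Repeat("hello");

    // Verify the resulting state rather than the interaction itself.
    Assert.AreEqual("hello", listener.WhatIHeard);
}

Notice that this version forces me to pick an actual string, “hello”, even though the particular value doesn’t matter. That is exactly the clutter the constraint-based expectation avoids.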

The LastCall line in the interaction-based test looks a little strange, and it is. It’s an artifact of how this particular mocking framework works, but it is easily understood. While it may look like I’m calling my listener’s Hear() method on the line before it, I’m actually not. When you create an instance of the mocking framework, it starts out in recording mode. While recording, every time you invoke a method on a mocked-out object, you are actually calling a proxy for that object and defining an expectation for how that object will be called by your regular code later. In this case (admittedly, not the simplest case), listener.Hear() is a void method, so I have to split the setting of the expectation across two lines. On the first line, I call the proxy, and the framework makes a mental note that I called it. On the next line, I say to the framework, “Hey, remember that method I just called? Well, in my real code, when I call it, I expect that I am going to pass it some kind of string that will never be null, and I’ll call that method exactly once. If I do these things, please allow my test to pass. If I don’t do them, then fail it miserably”.
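As an aside, the two-line dance is only necessary because Hear() returns void. For a method that returns a value, the recording can be done in one line with Expect.Call(), which also tells the mock what to return during replay. Here is a sketch using a hypothetical ITranslator interface that I’m inventing just for this illustration; it isn’t part of the Repeater example:

public interface ITranslator
{
    string Translate(string input); // hypothetical non-void method
}

[Test]
public void NonVoidExpectationSketch()
{
    MockRepository mocks = new MockRepository();
    ITranslator translator = mocks.CreateMock<ITranslator>();

    // Record mode: the call and its expectation fit on one line, and
    // Return() supplies the canned answer for replay mode.
    Expect.Call(translator.Translate("hello")).Return("HELLO");

    mocks.ReplayAll();

    // Real application code would normally make this call; I'm calling
    // the mock directly here just to show the playback.
    Assert.AreEqual("HELLO", translator.Translate("hello"));

    mocks.VerifyAll();
}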

Back in the Repeater test: after I set up the single expectation I have on the code I’m going to be calling, I exit record mode and enter replay mode. In this mode, the framework allows me to run my real code and plays back my expectations while that code executes. The framework keeps track of what is going on, and when I finally call my application method, Repeater.Repeat() in this case, followed by mocks.VerifyAll(), it checks to make sure that all expectations were met. If they were, I’m cool; otherwise my test fails.

I hope that was at least a little clear. It was very confusing to me, but I sat down with a few folks at the agile conference two weeks ago, and they showed me how this worked. I’m still very new at it, so I’m likely to do things that programmers experienced with this kind of testing would find silly. If any of you see something I’m doing that doesn’t make sense, please tell me!

Here is the code this test is forcing me to write:

public class Repeater
{
    private readonly IListener listener;

    public Repeater(IListener listener)
    {
        this.listener = listener;
    }

    public void Repeat(string whatToRepeat)
    {
        listener.Hear(whatToRepeat);
    }
}

public interface IListener
{
    void Hear(string whatToHear);
}

Advantages of IBT-style TDD

There are several things about this that I really like:

  • It allows me to write tests that completely ignore the data being passed around. In most state-based tests, the actual data is irrelevant; you’re forced to provide some values just so you can see whether your code worked, and those values obscure what’s happening. IBT lets me keep any data that isn’t completely relevant out of a test, which helps me focus on what the test is saying.
  • It allows me to defer making decisions until much later. You can’t see it in this example, but I’m finding that I’m much better able to defer choices about things until I truly need to make them. You’ll see examples of this in the blog entries to follow (more about this below).
  • I get much simpler code than state-based testing would lead me to.
  • My workflow changes. I used to:
    1. Write a test
    2. Implement it in simple, procedural terms
    3. Refactor the hell out of it

With IBT, I’m finding that it is really hard to write expectations on procedural code, so my code much more naturally tends toward lots of really small, simple objects that collaborate nicely. I’m refactoring less frequently, and usually because I’ve changed my mind about something rather than as part of my normal workflow. This is new and interesting to me.
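Here is a hedged sketch of the kind of pressure I mean. IFormatter and FormattingRepeater are names I’m inventing purely for illustration; they aren’t part of this series. If formatting logic lived procedurally inside Repeat(), there would be nothing to set an expectation on, so the style nudges me to pull it out into its own small collaborator:

public interface IFormatter
{
    string Format(string input);
}

public class FormattingRepeater
{
    private readonly IFormatter formatter;
    private readonly IListener listener;

    public FormattingRepeater(IFormatter formatter, IListener listener)
    {
        this.formatter = formatter;
        this.listener = listener;
    }

    public void Repeat(string whatToRepeat)
    {
        // Each step is a visible interaction between small objects.
        listener.Hear(formatter.Format(whatToRepeat));
    }
}

[Test]
public void RepeatsTheFormattedMessage()
{
    MockRepository mocks = new MockRepository();
    IFormatter formatter = mocks.CreateMock<IFormatter>();
    IListener listener = mocks.CreateMock<IListener>();

    // Both interactions are recorded as expectations.
    Expect.Call(formatter.Format("hi")).Return("HI");
    listener.Hear("HI");

    mocks.ReplayAll();

    new FormattingRepeater(formatter, listener).Repeat("hi");

    mocks.VerifyAll();
}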

There are some warts that I’m seeing with it, and I’ll get to those as well, as I write further in this series. I’m also very certain that this technique has its time and place. One of the things I want to learn is where that time and place is. Anyhow, here are my plans for this:

Revisiting my Deep Dive

I want to redo the example I did a couple of years ago when I solved the Payroll problem in a 6-part blog series. I want to solve the same problem in an IBT way, and let you see where it leads me. I’ve done this once already, part of the way, just to learn how this worked, and the solution I came up with was very different from the one I arrived at the first time. I’m going to do this new series the exact same way as the old one, talking through what I’m doing and what I’m thinking the whole time. I’m personally very curious to see where it goes.

Once we’re finished, I want to explore some other stories that will force me to refactor some of my basic design assumptions, because one of the knocks against IBT is that it makes refactoring harder by defining the interactions inside both your tests and your code. We’ll find out.

Please ask questions

I’m learning this stuff as I go, so I’m very eager to hear criticisms of what I’ve done and answer questions about why I’ve done things. Please feel free to post comments on the blog about this and the following entries. I’m really looking forward to this, and I hope you readers are, too.

— bab

Agile 2007

I’m eagerly looking forward to Agile 2007 next week. I count on this conference every year to give me the energy to get through the year that follows. It’s so much fun to see all my agile friends for a week, hang out, talk about what we’ve learned, drink the occasional beer, and just catch up.

If you’re going to DC this year, look me up. I’m in the conference hotel.

I’d also like to suggest a BDD BoF session, if anyone going is interested in that. I’ll try to sign up for it some night there.

Finally, I’m giving two talks this year. The first is an Experience Report on Tuesday morning about building trust with a customer as a key success factor. On Friday morning, Peter Provost and I are giving a talk called “Agile is more than ‘Monkey See, Monkey Do’”. The point of that talk is that there is more to agile than just the practices: it’s possible to be agile without doing ANY of the practices, and doing the practices doesn’t necessarily make you agile.

See you all there!

— bab

How to create a maintainable system

Ayende had an interesting post on his blog today about the only metric that really counts: maintainability. He joked about measuring this property of a system by the intensity of the groans that emanate from the team when they’re asked to change something, which made me laugh.

It did bring up a more significant question in my mind, one that I’ve thought about before and have been telling my TDD classes about lately. The question always comes up in class: why don’t we just design our systems right in the first place, for some value of right? There are several parts to my answer, but one of them is always that we build our systems simply and change them as we need to, so that we get to practice changing our systems before our big moment. That big moment happens when the customer comes in and tells us that he really, really, really needs this new feature, and he needs it by next Thursday.

If we had never practiced changing our system, or even considered changing it, that might be a pretty scary thing. But if you build your system such that you practice changing it in new and interesting ways from the first day, then when that change request comes in, you yawn and make the change.

In short, you get a maintainable system by practicing maintenance from the first day.

— bab