Test driving a WPF App – Part 2: Adding some data

This is part two of the blog series about creating a WPF application using TDD to drive the functionality and MVVM to split that functionality away from the view into something more testable. You may want to read part one if you haven’t already.

In this part, we’ll see what it takes to display data on the front page about each player on the softball team. We’re not going to display all their stats, since we don’t need to for the example, but it would be very easy to extend this to do so. (N.B. I have a more complete version of this that I’ll post later, which includes a database backend with more data in it than what I’m going to show here. The data backend is irrelevant to the layer being discussed here, so I’m going to ignore those details.)

The end result

At the end of this process, what we’re going to have is a list of players and their numbers. We could easily extend it to total stats for each of them, but that’s outside the point of the example, and is left as an exercise for the reader (I hated when college textbooks said that!).

[Screenshot: the front page listing the players and their numbers]

So anyhow, you can see the four players listed and their numbers. This data came from the data repository that I created behind the view, in a class called PlayerRepository. I’m not going to go into the details of how this repository class was created, as the database layer is outside the scope of this article; we’ll just assume that we created it when it was needed, and I won’t include discussion or tests for it. The important part for us to think about is how the data got into the view model and how it got from there onto the view. And that begins our (short) story.

Step 1 – Get data to the view

In order to get data to the view, the view model needs to expose an ObservableCollection that the view can bind to. I decided to create a property called Players on the TeamStatControlViewModel, driving it through tests. I created a test fixture in the ViewModelFixtures assembly called TeamStatControlPlayersDataBindingFixture, and I put my test in there. (As an aside, I’m trying a different style of fixtures here, naming fixtures after behaviors or features and putting all the tests for those features in that fixture. I’m intentionally trying to break out of the app class <-> fixture class model and create tests around the behaviors of the system, regardless of where those behaviors may lie. Let me know if you like it, please.)

    [Fact]
    public void ViewModelExposesObservableCollectionOfPlayers()
    {
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null);

        Assert.IsType<ObservableCollection<Player>>(viewModel.Players);
    }

We have to add the Players property to the view model and create an empty Player class in our Model assembly to get this test to compile. Once it runs and fails, we implement the property in the view model to return an empty ObservableCollection<Player> collection. Run again, and the test passes.

    public ObservableCollection<Player> Players
    {
        get { return new ObservableCollection<Player>(); }
    }

Step 2 – Get the data to the view model

Now that we have the data available to be shown on the view, we need to provide that data to the view model. We do this by giving the view model access to the PlayerRepository we spoke of earlier and letting it query the repository for the data as needed. Again, very simple. Here is the test in that same test fixture:

   1: [Fact]
   2: public void DataIsPulledFromModelWhenRetrievingPlayersFromViewModel()
   3: {
   4:     var playerRepository = new Mock<IPlayerRepository>();
   5:     TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, playerRepository.Object);
   6:     playerRepository.Setup(pr => pr.GetAllPlayers()).Returns(new List<Player>());
   7: 
   8:     ObservableCollection<Player> players = viewModel.Players;
   9: 
  10:     playerRepository.Verify(pr => pr.GetAllPlayers());
  11: }

The test starts driving out the behavior our system is going to need. We already know that the view model is going to need a reference to the repository, so we add the repository to the constructor arguments for the view model on line 5. (I like working backwards in a situation like this to discover the objects needed. I’ll write the constructor signature first and use that to drive object creation on the previous lines, like you see here.) This forces us to create a mock version of the repository, which we do on line 4 by discovering an IPlayerRepository interface, details to be fleshed out. On line 6, we set up the behavior that we want the view model to invoke on the repository, the GetAllPlayers method, which for our purposes needs to return an empty collection of the appropriate type. Finally, we invoke the Players property and verify that the repository’s GetAllPlayers method is indeed called.

In getting this to compile, we create the IPlayerRepository interface and modify the signature of TeamStatControlViewModel to take the new IPlayerRepository parameter. Run test, test fails, and we finally implement:

    public interface IPlayerRepository
    {
        IList<Player> GetAllPlayers();
    }

and

    public class TeamStatControlViewModel : IApplicationExitAdapter
    {
        private readonly IApplicationController applicationController;
        private readonly IPlayerRepository playerRepository;

        public TeamStatControlViewModel(IApplicationController applicationController, IPlayerRepository playerRepository)
        {
            this.applicationController = applicationController;
            this.playerRepository = playerRepository;
            ApplicationExit = new ApplicationExitCommand(this);
        }

        public void ExitApplication()
        {
            applicationController.ExitApplication();
        }

        public ICommand ApplicationExit { get; set; }

        public ObservableCollection<Player> Players
        {
            get { return new ObservableCollection<Player>(playerRepository.GetAllPlayers()); }
        }
    }

At this point, we have data coming from the repository and available to the view. All that’s left is to hook up to the view.

Step 3 – Hooking up to the real view

I’m going to cheat a bit here: rather than define a DataTemplate, I’ll directly bind to the two columns I’m going to define, Name and Number. I have a ListView in the middle of my window, as you can see in the screen capture at the top of this post. I define a couple of GridViewColumns in it and bind them to Name and Number:

    <ListView IsSynchronizedWithCurrentItem="True" Grid.Row="1" VerticalAlignment="Stretch" ItemsSource="{Binding Players}">
        <ListView.View>
            <GridView>
                <GridViewColumn Header="Player" DisplayMemberBinding="{Binding Name}"/>
                <GridViewColumn Header="Number" DisplayMemberBinding="{Binding Number}" />
            </GridView>
        </ListView.View>
    </ListView>

Obviously I wouldn’t do this on a real project; I’d use a DataTemplate. But for now, this will do. The ItemsSource is set to the Players property, and the two columns are set to the fields I want to show.
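
For comparison, a DataTemplate version might look something like the following. This is only a sketch of the approach, not markup from this app:

```xml
<ListView ItemsSource="{Binding Players}">
    <ListView.ItemTemplate>
        <DataTemplate>
            <!-- Each Player's fields are bound inside the template,
                 so the row layout is defined in one reusable place. -->
            <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding Name}" Margin="0,0,10,0" />
                <TextBlock Text="{Binding Number}" />
            </StackPanel>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>
```

The template could then be moved into a ResourceDictionary and shared by any view that displays players.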

We don’t have a real instance of our IPlayerRepository yet, so let’s build one. In real life, this would be a repository over the top of a database, but we don’t need to go that far yet. Let’s just create the simplest repository we can for now:

    public class PlayerRepository : IPlayerRepository
    {
        public IList<Player> GetAllPlayers()
        {
            return new List<Player>
                       {
                           new Player { Name = "Linsey", Number = "42" },
                           new Player { Name = "Michelle", Number = "31" },
                           new Player { Name = "Susan", Number = "17" },
                           new Player { Name = "Joan", Number = "26" }
                       };
        }
    }

One thing we’re still doing is exposing the Player class to the view, which is not necessarily a best practice, since the Player class is defined in the Model layer. The danger is that we may end up needing to add INotifyPropertyChanged behavior to the Player class, which would pollute our domain model with view-specific code. If that were to happen, it would force us to refactor the view model to return an observable collection of something else, like an ObservableCollection<PlayerViewModel>, so that we would have a place to put our view-specific code. We don’t need to yet, so we’re not going to bother. This is a potential refactoring to come, though.
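
If that refactoring ever became necessary, the wrapper might look roughly like this. To be clear, this is a hypothetical sketch; there is no PlayerViewModel class in the code as it stands:

```csharp
using System.ComponentModel;

// Hypothetical wrapper that keeps INotifyPropertyChanged out of the domain model.
public class PlayerViewModel : INotifyPropertyChanged
{
    private readonly Player player;

    public PlayerViewModel(Player player)
    {
        this.player = player;
    }

    public string Name
    {
        get { return player.Name; }
        set { player.Name = value; OnPropertyChanged("Name"); }
    }

    public string Number
    {
        get { return player.Number; }
        set { player.Number = value; OnPropertyChanged("Number"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        // Raise the change notification so WPF bindings refresh.
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

The view model’s Players property would then return an ObservableCollection<PlayerViewModel> built from the repository’s players, and the domain Player class would stay free of view concerns.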

And our final change is to add the PlayerRepository type into our Application_Startup method, so that Unity knows how to build the repository and pass it into the TeamStatControlViewModel:

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        container.RegisterInstance<IApplicationController>(this);

        container.RegisterType<IPlayerRepository, PlayerRepository>(new ContainerControlledLifetimeManager());
        container.RegisterType<TeamStatControlViewModel>(new ContainerControlledLifetimeManager());
        container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());

        container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());

        Current.MainWindow = container.Resolve<MainWindow>();
        Current.MainWindow.Content = container.Resolve<TeamStatControl>();
        Current.MainWindow.Show();
    }

Compile and run, and all should work.

Conclusion

This was a fairly easy step to take. We exposed a property on our view model to let our view see the data we want to publish. Our view model has a repository injected into it to let it get the data as needed. That was really all there was to it. Two tests, and we got to where we needed to be.

The next step along the way will involve navigating from the front page to a player detail page. Stay tuned for the next installment, coming right up!

— bab

Test Driving a WPF application using MVVM and TDD

This is a preview of the material I’m going to be presenting at the St. Louis Day of Dot Net in a couple weeks.

Introduction

For those of you who are reading this blog for the first time, I’m pretty much the St. Louis Agile Zealot. Long hair, flowing robes… you get the picture 🙂 I’ve been advocating Agile and XP in St. Louis since around 2000, through speaking at various user groups and local conferences, teaching at local companies, giving talks in the park, dropping leaflets from airplanes, and so on. I live Agile in my day job at Asynchrony Solutions, where I lead their Agile training and mentoring group. I am totally sold on Test Driven Development as the best way for me to write my code, and I’ve been occasionally known to share that opinion with others.

As far as WPF experience goes, I’m still pretty early in my learning curve. I’ve been playing with a project on my own for a few months, trying new things, reading, and experimenting. When I first started out, I essentially wrote a Winforms app in WPF: I used a full-on MVP/MVC architecture, had my button click methods in my window classes, etc. A complete mess… Then I started reading about MVVM, and something finally clicked. This was a pattern that gave me a lot of the advantages of the MVP-like architectures, such as being able to test my UI without actually running the UI, but without the overhead of wiring all that junk up myself.

What follows are the lessons I’ve learned in the process of rethinking my original app and in writing this sample app for my talk.

The Application

My daughter plays select softball for one of the organizations here in St. Louis. They have a web site that lists all their stats throughout the year, and I thought it would be fun to mimic that site as a desktop app. The application would list all the girls’ total stats on the front page and give you the ability to drill down into detailed stats for each game on subsequent pages. For fun, I wanted to be able to add stats for new games, and add and delete players. With that set of features, it seemed that I’d be able to drive out a lot of interesting opportunities to change around the design to let me write the code in a TDD manner.

Here is the main page of the application, in all its glory. Remember that I am but a poor developer, with a developer’s eye for style 🙂

[Screenshot: the main page of the application]

The display of stats isn’t complete yet, but it was enough to validate that my data binding was working. Let’s look at what it took to get this far…

Note: If you want to skip the setup stuff and get right to the parts where behavior is added, jump ahead to Step 3.

Our Roadmap

As a quick review, before we dive in, here is an overview of the MVVM design pattern and the players involved.

[Diagram: MVVM architecture]

The View represents the surface that the user interacts with. It is the WPF code, the UI, the XAML… all that stuff that users see and that is hard to test in an automated fashion. Our goal is to take as much logic out of there as possible, to give us the highest confidence and the greatest test coverage we can achieve.

Next to the View is the ViewModel. The ViewModel represents the entire set of data and behavior contained in the View. It has properties to expose all the different pieces of data that the View will need to display, and it exposes another set of properties for all the behavior that the View will need to invoke.

The View and ViewModel are tied together in two separate, but very similar, ways. Through Data Binding, data travels between the View and ViewModel automatically. WPF, through the wonders of its internal magic, keeps the data shown in the View consistent with the data held in the ViewModel. There is a bit of work that we have to do to ensure this happens, but it’s mostly automatic. The Commands are representations of the behavior that users invoke through the View. Individual commands are exposed through properties on the ViewModel and tied to controls on the View. When the user presses a button, for example, the command attached to that button fires and something happens. You can think of Commands as little controllers, each focused on a specific topic, whose responsibility is to respond to a user action in a particular manner by coordinating the response to that action between the View and ViewModel.
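
For example, a button in the View can be tied to a ViewModel command entirely in markup. This is a sketch rather than markup from this app, and it assumes the control’s DataContext has been set to the ViewModel instance:

```xml
<!-- The Command binding resolves against the DataContext, so pressing
     the button invokes the ViewModel's ApplicationExit command. -->
<Button Content="Exit" Command="{Binding ApplicationExit}" />
```

No code-behind is needed; the gesture-to-behavior wiring lives entirely in the binding.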

Our goal here is to put all logic about the View into the ViewModel. The View should be very thin and code-less, and the Commands should be thin shims that merely provide an execution conduit between the gesture being made and the behavior being invoked in the ViewModel. Code in the View or in Commands should be looked upon with suspicion, with an eye towards moving it someplace more testable. Sometimes you can move that code, and sometimes it’s entirely appropriate where it is, but the question should always be asked.

Step 1 – Modify the template to support TDD

When you create a WPF application using the Visual Studio New Project dialog, you get the framework of an empty application. There is no code anywhere, other than an InitializeComponent() call here and there. Most of the interesting behavior, at this point, takes place through XAML – things like instantiating the Application object, creating the main window and causing it to display, and other basic things like that. While this works, and works fine for the empty application, we’re going to need more control over how this initiation process occurs, for reasons we’ll discuss later in this post or a subsequent one.

So, to control the initial creation, there are a few changes we need to make. The first step is to put ourselves in control of when and where the main window is created. As time goes on, there are aspects of this creation process that we’re going to want to control, like wiring together other objects, so we’ll need full control over how windows and controls are born.

App.xaml

When first created, App.xaml looks like this:

    <Application x:Class="WpfApplication1.App"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        StartupUri="Window1.xaml">
        <Application.Resources>

        </Application.Resources>
    </Application>

The StartupUri attribute of the Application element is what causes the class defined in Window1.xaml to be instantiated. We want to prevent this from happening until we’re ready for it, so we have to take this out. We’ll still need to create this window, however, so we’ll need to arrange for another way to create objects as the system is starting. It turns out that WPF exposes an event that is perfect for this initialization, the Startup event. By providing a handler for the Startup event, we have a place to put all our initialization code.

App.xaml now looks like this:

    <Application x:Class="WpfApplication1.App"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Startup="Application_Startup">
        <Application.Resources>

        </Application.Resources>
    </Application>

Now we have to add some code to the App.xaml.cs file to create the objects we need and wire them together. Please note that I’m not writing tests for this code, because this is configuration code, not business logic. I view logic like this the same way I view a Main method, and I tend not to test Mains. It is where objects are created, wired together, and then told to go on their merry way. This is not meant to be reusable code and should be just simple declarations of objects and assignments of variables. If I manage to screw up Main, the application isn’t going to work anyhow, and I’m pretty sure I’ll know it 🙂

    using System.Windows;

    namespace WpfApplication1
    {
        public partial class App : Application
        {
            private void Application_Startup(object sender, StartupEventArgs e)
            {
                Current.MainWindow = new MainWindow();
                Current.MainWindow.Show();
            }
        }
    }

The final step is to rename the template-provided window name of Window1 to MainWindow and make sure I instantiate it with the proper class name in the Application_Startup method, the handler for the Startup event. Once that is done, you can start the project, and an empty window should appear.

[Screenshot: the empty main window]

 One more change…

Before going further, I’d like to introduce Unity into the codebase. Unity is the dependency injection library from Microsoft, and I’ve found it to be very useful in my WPF applications. Here is the previous code after injecting Unity (injecting Unity – HA! I kill myself).

    using System.Windows;
    using Microsoft.Practices.Unity;

    namespace WpfApplication1
    {
        public partial class App : Application
        {
            private readonly IUnityContainer container = new UnityContainer();

            private void Application_Startup(object sender, StartupEventArgs e)
            {
                container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());

                Current.MainWindow = container.Resolve<MainWindow>();
                Current.MainWindow.Show();
            }
        }
    }

Briefly, the point of this code is to register a type, MainWindow, with the Unity container, and then use the container to instantiate the MainWindow object as the application’s main window. The reason I introduced Unity is to separate myself from the semantics of object creation. Unity will take care of instantiating any objects I ask for, as well as any objects that the class being created needs. It’s just a way of relieving myself of having to worry about constructor signatures, basically.

Step 2 – Adding the initial content

Its time to fill in the empty window with some content. We’ll avoid adding any behavior yet, we just want to see something. The strategy that we’re going to use is to keep the MainWindow class as the one window in our application, and we’ll swap in and out different pieces of content to show in it. In my projects I’ve found this pattern to be useful, because it lets me layer content to mimic a workflow in the application, and have the content always show up in the same place, be the same size, and not take extra work to manage.

So our goal now is to set the MainWindow’s content to be the initial view we want to show people when they first open the application. The design for this looks like this:

[Diagram: design with TeamStatControl and its view model]

And the implementation is easy, too.

First, I create my user control, TeamStatControl:

[Screenshot: the TeamStatControl user control]

At the same time, I create a ResourceDictionary to hold styles that I’ll be using around the app and merge that dictionary into my UserControl.Resources section. I create a menu, a somewhat attractive header, and a couple of buttons. There’s no behavior in it yet, but that’s coming shortly.

Now, for the fun part, let’s make the simple change to make the content show up in the MainWindow. Since we’re creating a new object, our TeamStatControl, we have to make a few simple changes in App.Application_Startup. We need to register the user control with Unity so that it can create it as needed, and then resolve an instance of it as the Content property of the MainWindow. Once that is done, WPF takes care of making it appear in the window.

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());
        container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());

        Current.MainWindow = container.Resolve<MainWindow>();
        Current.MainWindow.Content = container.Resolve<TeamStatControl>();
        Current.MainWindow.Show();
    }

And, by the way, I also changed the Width and Height properties in MainWindow to be 600 and 1000 respectively so that the entire user control would be visible.

Step 3 – Adding behavior

Now that the shell is in place, we can add the behavior that we want. First, let’s add the basic behavior of exiting the application in an orderly way. We want to do this in a way that keeps all the code we write testable, and that is going to require a bit of extra work over and above just writing the simplest code. This is the price one pays when writing code using TDD, but the extra effort pays off in systems that work.

Let’s start by writing our first test. This test will be a test of the Command object, which is the object that the view will interact with. An underlying force of the MVVM pattern is that the ViewModel encapsulates the visible data and behavior of the View. And the Command object acts as the conduit through which the gestures of the user are translated into tangible actions in the ViewModel. Here is a sequence diagram showing this behavior:

[Sequence diagram: command execution]

Clicking the button causes the Command.Execute method to fire, which calls a method in the ViewModel. That method does the real work of this system, with the calls coming before it just part of the plumbing to cause things to happen.

So let’s write a test to show that the above sequence is what will actually happen – that calling the Execute method of some Command object will invoke some sort of behavior that causes the application to exit.

Our first test:

   1: namespace ViewModelFixtures
   2: {
   3:     public class ApplicationExitBehaviorFixture
   4:     {
   5:         [Fact]
   6:         public void ApplicationExitingBehaviorInvokedWhenExecuteCalled()
   7:         {
   8:             var exiter = new Mock<IApplicationExitAdapter>();
   9:             ApplicationExitCommand command = new ApplicationExitCommand(exiter.Object);
  10:             exiter.Setup(e => e.ExitApplication()).Verifiable();
  11: 
  12:             command.Execute(null);
  13: 
  14:             exiter.Verify(e => e.ExitApplication());
  15:         }
  16:     }
  17: }

For those of you unfamiliar with the tools used in this test, I’m using Moq as my mocking framework and xUnit for my tests. These are my two favorite tools for TDD as of now, and I highly recommend both of them.

The test, beginning on line 7, defines the behavior of the system when my ApplicationExitCommand.Execute method is called. I’m defining the interface that the command is going to talk to, IApplicationExitAdapter, and I’m doing this in total and complete ignorance of what class is ever going to implement that interface (OK, I’m lying – it’s going to be my view model, but let’s keep that our secret for now).

And the code that we write to get this test to compile:

    namespace ViewModels
    {
        public interface IApplicationExitAdapter
        {
            void ExitApplication();
        }
    }

    namespace ViewModels
    {
        public class ApplicationExitCommand : ICommand
        {
            public ApplicationExitCommand(IApplicationExitAdapter exitAdapter)
            {
                throw new NotImplementedException();
            }

            public void Execute(object o)
            {
                throw new NotImplementedException();
            }

            public bool CanExecute(object parameter)
            {
                throw new NotImplementedException();
            }

            public event EventHandler CanExecuteChanged;
        }
    }

Strictly speaking, I went a little further than I had to here, choosing to implement ICommand, but all commands that are hooked through a View as we’re discussing need to implement this interface (or one of a couple of others that are more advanced), so I just went ahead and did it.

Short Aside – Projects and Solution Structure

Before we go further, we should discuss the projects I’ve chosen to use in this solution, and why they were chosen. I have one project that holds the view and a separate project that holds the view models. I intentionally kept the Views and ViewModels separate to enforce the direction of the relationship between them. The Application class and the other WPF controls need to have access to the ViewModels, if only to create them and assign them to DataContexts. ViewModels, however, should be 100% ignorant of their views. The simplest way to enforce this constraint is to put all view model code into another assembly, which is what I’ve done.

Now it’s a simple matter of implementing the code to make the test pass:

    public class ApplicationExitCommand : ICommand
    {
        private readonly IApplicationExitAdapter exitAdapter;

        public ApplicationExitCommand(IApplicationExitAdapter exitAdapter)
        {
            this.exitAdapter = exitAdapter;
        }

        public void Execute(object o)
        {
            exitAdapter.ExitApplication();
        }

        public bool CanExecute(object parameter)
        {
            return true;
        }

        public event EventHandler CanExecuteChanged
        {
            add { CommandManager.RequerySuggested += value; }
            remove { CommandManager.RequerySuggested -= value; }
        }
    }

I also went ahead and added some boilerplate code for the CanExecute method and the CanExecuteChanged event inside this class. This boilerplate is what you start with every time you write a command. You can think of it as creating a command from a code template inside Visual Studio.

At this point, the command should be invoking the behavior that we want, as seen in this diagram:

[Sequence diagram: ApplicationExitCommand execution]

Now that the behavior is being called through IApplicationExitAdapter, we’ll make the giant leap and assume that it is, indeed, our view model class that is going to implement that interface. This only makes sense: the Command is invoking behavior on some class that represents the behavior of the View, and the class that represents a View’s behavior is that View’s ViewModel. Q.E.D. Once we’ve crossed that intellectual chasm, we’ll go ahead and implement the functionality inside the view model.

For that, we write this test:

    [Fact]
    public void ApplicationControllerInvokedWhenViewModelExitApplicationCalled()
    {
        var applicationController = new Mock<IApplicationController>();
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(applicationController.Object);
        applicationController.Setup(ac => ac.ExitApplication()).Verifiable();

        viewModel.ExitApplication();

        applicationController.Verify(ac => ac.ExitApplication());
    }

This test causes us to make a few design decisions. First, we’ve discovered an interface called IApplicationController that is going to be responsible for controlling application-wide behavior, which certainly includes things like exiting at the appropriate time. We’re also creating the view model class, TeamStatControlViewModel, since we have to add behavior directly to it now. And finally, we’ve discovered that the view model needs a reference to the IApplicationController interface, to allow it to invoke functionality through that interface. This leads us to create a constructor for the view model through which we can inject the interface at creation.

Here is the resulting code. First, the simple IApplicationController interface:

    namespace ViewModels
    {
        public interface IApplicationController
        {
            void ExitApplication();
        }
    }

and now the TeamStatControlViewModel:

    namespace ViewModels
    {
        public class TeamStatControlViewModel : IApplicationExitAdapter
        {
            private readonly IApplicationController applicationController;

            public TeamStatControlViewModel(IApplicationController applicationController)
            {
                this.applicationController = applicationController;
            }

            public void ExitApplication()
            {
                applicationController.ExitApplication();
            }
        }
    }

That’s just enough code to make that test pass, so we’re momentarily happy. However, our system isn’t going to work yet, because we haven’t found anyone to implement the IApplicationController interface. That’s our next major decision.

ClassDiagramExceptForFinalClass

I’m going to choose to put this interface onto the App class, at least for now, since it seems to be the one place in our application that has knowledge of the entire application. It may turn out later that we need to move this responsibility somewhere else, but we’ll leave it here for now.

We’ll go ahead and make the easy changes to App.xaml.cs, which consist of making it implement IApplicationController and writing the ExitApplication method, which simply calls Current.Shutdown().

   1: public partial class App : Application, IApplicationController
   2: {
   3:     private readonly IUnityContainer container = new UnityContainer();
   4: 
   5:     private void Application_Startup(object sender, StartupEventArgs e) {...}
   6: 
   7:     public void ExitApplication()
   8:     {
   9:         Current.Shutdown();
  10:     }
  11: }

The important point to note is that the IApplicationController interface lives in the ViewModel assembly, even though it is being implemented by the App class in the main WPF assembly. This means that the main WPF application project has to have a reference to the ViewModel project, which is the direction we want the dependencies to flow. By moving the ViewModel classes and interfaces into a project separate from the view classes, we’re enforcing our dependency management design as we described earlier.

This now leaves us with this for a design:

ClassDiagramFinal

So we’re in the home stretch now. We have all our code built – all that’s lacking now are the few scattered bits and pieces needed to wire this whole thing together. There are several of these pieces, and we’ll look at them one at a time.

Let’s start with the object creation changes needed in App.Application_Startup. Over these last few changes, we’ve created a new interface for the App class, so we need to tell Unity that when we ask for an instance of IApplicationController, it should pass back a reference to the App object. You can see this code on line 3 below. We also created a TeamStatControlViewModel, which we tell Unity about on line 5.

   1: private void Application_Startup(object sender, StartupEventArgs e)
   2: {
   3:     container.RegisterInstance<IApplicationController>(this);
   4: 
   5:     container.RegisterType<TeamStatControlViewModel>(new ContainerControlledLifetimeManager());
   6:     container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());
   7: 
   8:     container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());
   9: 
  10:     Current.MainWindow = container.Resolve<MainWindow>();
  11:     Current.MainWindow.Content = container.Resolve<TeamStatControl>();
  12:     Current.MainWindow.Show();
  13: }

Next, we know that TeamStatControlViewModel is going to be the DataContext for the TeamStatControl, so let’s make that happen in code. The easiest way is to change the constructor of TeamStatControl to take the view model and assign it as the DataContext. The really cool part of this is that we don’t have to make any changes in App.Application_Startup based on this constructor changing. Unity will just build all the dependent objects for us and call whatever constructors it can find, and it all just works.

   1: public TeamStatControl(TeamStatControlViewModel viewModel)
   2: {
   3:     DataContext = viewModel;
   4:     InitializeComponent();
   5: }

Now that we have the objects built and wired up, we need to do the data binding to put the commands into the controls on the interface. The first place to do this is in TeamStatControl, where we define the Quit button. We need to assign an instance of the ApplicationExitCommand to the Command attribute of the button. Here is the XAML code that will do this for the Quit button:

   1: <Button Command="{Binding ApplicationExit}" Grid.Column="1" Content="Quit" HorizontalAlignment="Left" Style="{StaticResource NormalButton}"/>

We do this by creating a binding to the ApplicationExit property of our data context, the TeamStatControlViewModel. The view is now expecting the view model to expose a property called ApplicationExit, of type ICommand, through which it can get the command. We don’t have that property right now, so let’s write a test that makes us create it.

   1: [Fact]
   2: public void ViewModelExposesApplicationExitCommandAsICommand()
   3: {
   4:     TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null);
   5: 
   6:     Assert.IsAssignableFrom<ApplicationExitCommand>(viewModel.ApplicationExit);
   7: }

This test says that we can get a command from the view model, and that the command we get is actually an ApplicationExitCommand, which is what we need. We can implement that by creating a simple auto-property in the view model, which satisfies the binding we defined in the XAML. Finally, we construct the command in the constructor of the TeamStatControlViewModel class, passing an instance of the view model to the command’s constructor to wire it up, and we’re set!

   1: public TeamStatControlViewModel(IApplicationController applicationController)
   2: {
   3:     this.applicationController = applicationController;
   4:     ApplicationExit = new ApplicationExitCommand(this);
   5: }

After all this, running the application and clicking the Quit button should cause the app to vanish silently, slipping into the night, never to be seen again. Whew!
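One piece we haven’t looked at here is the ApplicationExitCommand class itself. As a refresher, here is a minimal sketch of what it might look like, assuming it simply forwards Execute to the IApplicationExitAdapter interface that TeamStatControlViewModel implements (the details are my sketch, not necessarily the exact class from part one):

```csharp
using System;
using System.Windows.Input;

// The adapter interface the view model implements (seen in the
// TeamStatControlViewModel declaration above).
public interface IApplicationExitAdapter
{
    void ExitApplication();
}

// Sketch: an ICommand that is always executable and simply
// delegates Execute to the adapter it was constructed with.
public class ApplicationExitCommand : ICommand
{
    private readonly IApplicationExitAdapter adapter;

    public ApplicationExitCommand(IApplicationExitAdapter adapter)
    {
        this.adapter = adapter;
    }

    // This command never changes executability, so the event is inert.
    public event EventHandler CanExecuteChanged { add { } remove { } }

    public bool CanExecute(object parameter)
    {
        return true;
    }

    public void Execute(object parameter)
    {
        adapter.ExitApplication();
    }
}
```

The auto-property mentioned above would then just be `public ICommand ApplicationExit { get; private set; }` on the view model.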

A Bonus

Now that all this hard work is done, we can also hook this same logic to the menu item. If we go into the XAML and find the line where we define the menu item, we just set its Command attribute to be a binding to ApplicationExit as well, and now it works, too.
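The menu item binding might look something like this (the Header text is a guess on my part, since the actual menu XAML isn’t shown in this post):

```xml
<MenuItem Header="E_xit" Command="{Binding ApplicationExit}" />
```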

Conclusion

As I said at the beginning, this blog entry, and the couple more that will follow this week, will form the basis for my St. Louis Day of Dot Net talk on this same subject. I’d appreciate any and all feedback on this article, especially including things that weren’t clear, design leaps I made without a clear reason, and alternative ways to do the same things I did.

You can find the complete project for this on my website. You may need to install Unity to get the solution to compile.

Thanks for reading, and I hope it helps!

 

— bab

I’m thinking of an example…

I’m writing an example for a workshop on estimation in an agile context, and I’m considering using the idea of preparing a multi-course meal. I think I like this because it is entirely non-technical, so I can give it to developers, customers, QA, or anyone else without regard to previous technical knowledge. I also like it because there are obvious tie-ins to concepts like freshness, inventory, rapid delivery, and so on. That is as close to the real-world considerations of a software project as I can think of while keeping it approachable for everyone.

So, here goes:

I have 100 guests showing up in 6 hours for an end of project celebration. This is the entire project team and their families. They did a great job on their last delivery, so we want to reward them with a feast. The menu is top secret, but we want it to surprise and delight them as much as possible. Here is the schedule:

12:00PM – 1:00PM – Guests start arriving. I need hors d’oeuvres ready and passed around through the crowd as they gather.

1:00PM – 2:00PM – Salad and appetizer course served

3:00PM – 4:00PM – Lunch

5:00PM – 5:30PM – Dessert and coffee

That gives us 4 different courses to serve. I have a kitchen filled with everything you might need: a well-stocked pantry, tons of pots and pans, spices, herbs, and three ovens and stoves. I just need you and your team of world-class chefs to create something for me and my guests.

I’d like to have 4 or 5 different hors d’oeuvres, 3 different kinds of salads, and 2 kinds of appetizers. Lunch should be a buffet of 3 meats, 2 pasta dishes, 3 vegetables, and a rice dish. I’d like 3 different desserts: one decadently sweet, one fruit dish, and a big cake.

So, off you go. Give me a selection of great food at each of these times. Bear in mind that things can go wrong, like ovens not working, menus changing, or schedules shortening, so you’ll have to constantly keep your eye on the time and on delivering the best culinary value for the given effort.

So, how did that sound? I intend to mix things up by adding vegetarian dishes to the lunch menu, an oven going out, changing the schedule, running out of something… Basically doing whatever it takes to have them adapt, replan, and go forward.

Does this sound close enough to a real software project to be useful? Does the exercise sound like anything that would teach estimation and prioritization? Any sort of feedback would be greatly appreciated.

bab


Agile in 6 words

My good friend, Matt Philip, wrote an interesting blog entry the other day about Six-Word Memoirs By Writers Famous and Obscure. I thought it might be interesting to write about the agile methods, their values, and their practices, following a similar style. So here goes, my own take on Six-Word Memoirs on Agile Values, People, and Activities…

Agile: Business plans, developers do, people thrive
Agile: Don’t want “resources”, give me people!
Agile: Project success comes from working together
Agile: Engine converting features to software daily
Agile: Solve biggest problem. Lather, rinse, repeat

QA: Hey, I made the team!!! Yea, QA!!!
QA: Building it right the first time
QA: Equal parts developer, tester, and customer
QA: The glue that holds ‘em together
QA: Helping customers define what they want

Scrum Master: Equal parts facilitator, friend, and psychologist
Scrum Master: My job is to listen carefully
Scrum Master: If I do it, you don’t!
Scrum Master: Fixing problems is not the solution

Customer: Content is mine, method is yours
Customer: I chart the course to success
Customer: You let me worry about that
Customer: Word of mouth, my best tool
Customer: Don’t read specs, ask me questions

Developer: I eat user stories for breakfast
Developer: I trust my customer and team
Developer: It feels good to be valued
Developer: Method is mine, content is yours
Developer: Best architectures owned by whole team
Developer: Ivory tower architects need not apply
Developer: Red, green, refactor. Lather, rinse, repeat.
Developer: It’s more fun done in pairs

Delivery: Running, tested features – what really counts
Delivery: Scope is negotiable, quality is not
Delivery: Not over, under promising, just delivering
Delivery: Reliably delighting the customer every week

I could do more, but let’s leave it there. Any more?

—bab

Obvious comment of the day – TDD makes Pair Programming easier

A fairly obvious observation hit me today…

If you are trying to pair program without also doing test driven development, when do you change roles? When doing TDD with pairing, there is a rhythm to when the roles switch – see Micro Pairing. But if you’re not doing TDD, the person typing is frequently lost in solving a fairly large problem, balancing a bunch of things in their head, and has to finish a big thought before they could possibly swap the keyboard with their pair. So, while the typist is solving these big problems, what does the other person do? Just sit there? It just seems pretty painful…

I’m sure it’s not impossible, but it sure seems like TDD is a near necessity for Pair Programming.

Thoughts?

— bab

A Story Splitting Story

This is a true story from a true client I’m working with. The names and details have been changed to protect the innocent…

Story splitting

Several of us were attending a pre-sprint planning meeting yesterday, trying to flesh out the user stories that the product owner was planning to bring to sprint planning in a few days. They are still pretty early in their agile adoption as well as their technology adoption, so there are lots of questions floating around about process and tools.

A story came up in the meeting that described the interaction that a user would take when visiting the site the first time, more specifically around the user agreement presented to them the first time they logged into the site. The story was something like:

“As a user, I want to be able to accept the user agreement so that I can access my content”

The details of this story included a specific workflow to follow if the user didn’t agree, including errors to show them, and a need to refrain from showing the user agreement again after agreeing to it the first time.

Conversation around this story went on for a while, mainly focusing on the second part: how to handle remembering what the user did the last time they visited the site. There were technical details about where this information needed to be stored that hadn’t been agreed to yet, and the team spun a bit around this issue.

The suggestion was made to split the story into two. The first story would be the same as the original, “As a user, I want to be able to accept the user agreement so that I can access my content”, and it included all the acceptance criteria other than the remembering part. The second story would be “As a previously logged in user, I will not be presented with the user agreement, since I had already agreed to it”, which picked up the remembering part.

By splitting the story in this way, they were able to work out the important part for the customer, which was the user agreement workflow, and defer the technical issues over where to store the agreement status until they could discuss the issue more.

Moral of the story

When discussing a story, if there is contention about how part of it is done, it is sometimes possible to split the story so that the understood part is separated from the not understood part, allowing progress to be made. In this case, we knew how to write the workflow, but not how to prevent the user from seeing the agreement each time. We split the story at that particular edge, which allowed the team to build something useful, and to defer what they didn’t know yet until they knew more about it later.

— bab

What do you think of this code?

I recently finished 6 weeks of coding for a client, and it was heaven! I actually got a chance to code every day, for 6 solid weeks. It was a chance for me to learn C# 3.0, and a chance to work on testing things that are hard to test. It was great!

Out of that work came several interesting observations and coding techniques, all rooted in C# 3.0. Since no one at work has any experience with these new idioms I “invented”, “discovered”, or just “copied”, I’d love to get some reader feedback. I’ll start with this one trick I tried, and follow on with more as the mood strikes me over time.

Trick 1: Using extension methods and a marker interface in place of implementation inheritance

I had an instance of code duplication in two parallel hierarchies of classes, and I wanted to find a way to share the code. One option would be to use inheritance, factoring out another base class above BaseResponse and BaseRequest where methods common to requests and responses could live. But using inheritance as a way to reuse code in a single-inheritance language is a pretty heavyweight thing to do; I’d rather find a way to use delegation, since that preserves the SRP in my class hierarchy. Instead, I decided to try an extension method, and just use that method where I needed it. To avoid polluting Object with unnecessary methods, however, I came up with the idea of using a marker interface on the classes I wanted to have these extension methods on, limiting the scope where these extra methods were visible. (No idea if anyone else has done this yet or not.)

ClassDiagram1

For each request and response class in the two parallel hierarchies, my client’s requirements made it necessary to add an XmlRoot attribute to tell the XmlSerializer that the object was the root of an XML document and to specify the runtime name of that element. To let me get the runtime name of each request and response object, for auditing and logging purposes, both hierarchies had a CommandName property containing the exact same code. This was the code in question that I was trying to share.

As a simple exercise, I created an extension method to deal with this:

    internal static class SssMessageExtensionMethods
    {
        public static string GetCommandNameFromXmlRootAttribute(this object message)
        {
            object[] attributes = message.GetType().GetCustomAttributes(typeof(XmlRootAttribute), true);
            if (attributes.Length == 0) return message.GetType().Name;

            XmlRootAttribute xmlRootAttribute = attributes[0] as XmlRootAttribute;

            return xmlRootAttribute.ElementName;
        }
    }

This solution worked just fine, and the code ran correctly, but I still wasn’t happy with it. The problem I was sensing was that I was adding yet another extension method to Object, and Object’s neighborhood was already pretty crowded with all the Linq methods in there. I wanted my extension methods to show up only on those classes to which I wanted to apply them.

The solution that I came up with was to use a marker interface whose sole purpose is to limit the visibility of the extension methods to the classes I intend to apply them to. In this case, I made BaseRequest and BaseResponse each implement ISssMessageMarker, an interface with no methods. And I changed the extension method to be:

    internal static class SssMessageExtensionMethods
    {
        public static string GetCommandNameFromXmlRootAttribute(this ISssMessageMarker message)
        {
            object[] attributes = message.GetType().GetCustomAttributes(typeof(XmlRootAttribute), true);
            if (attributes.Length == 0) return message.GetType().Name;

            XmlRootAttribute xmlRootAttribute = attributes[0] as XmlRootAttribute;

            return xmlRootAttribute.ElementName;
        }
    }

Now I have the same extension method defined, but it only appears on those classes that implement the marker.
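To see the effect end to end, here’s a self-contained sketch. AddPlayerRequest and PlainRequest are hypothetical stand-ins I made up for this illustration; the real request/response hierarchies aren’t shown in the post:

```csharp
using System;
using System.Xml.Serialization;

// Marker interface: no methods, exists only to scope the extension method.
public interface ISssMessageMarker { }

// Hypothetical message class with an explicit XML root element name.
[XmlRoot(ElementName = "addPlayer")]
public class AddPlayerRequest : ISssMessageMarker { }

// Hypothetical message class with no XmlRoot attribute.
public class PlainRequest : ISssMessageMarker { }

internal static class SssMessageExtensionMethods
{
    // Visible only on ISssMessageMarker implementers, not on every object.
    public static string GetCommandNameFromXmlRootAttribute(this ISssMessageMarker message)
    {
        object[] attributes = message.GetType().GetCustomAttributes(typeof(XmlRootAttribute), true);
        if (attributes.Length == 0) return message.GetType().Name;

        return ((XmlRootAttribute)attributes[0]).ElementName;
    }
}
```

Calling `new AddPlayerRequest().GetCommandNameFromXmlRootAttribute()` returns "addPlayer", while PlainRequest falls back to its type name; a plain string or int, meanwhile, never sees the method at all.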

What do you think of this technique? In a more powerful language, like Ruby or C++ (ducking and running for cover!), this kind of trickery wouldn’t be needed. But C# can only get you so far, so I felt this was a good tradeoff between adding the methods for needed functionality and making the most minimal change in my classes to hide these methods so that only those places that needed them could see them.

— bab

The downside of coding alone…

I had what was probably an obvious insight the other day while I was working on my project alone. I’m a team of one, which kind of gets in the way when it comes to pairing. This, unfortunately, has an effect on my final code.


Good pairs are adversarial


When you find yourself pairing with someone really good, it can almost feel adversarial. What I mean by that is that you can get into a rhythm where one person writes a test, intending to lead his partner down the road of writing a particular piece of code. His partner, however, can write something entirely different that still causes the test to pass.


This back and forth dance between tester and implementer forms the basis of good micro pairing sessions. In these sessions the tester/driver intends to lead the implementer down a particular path, but the implementer has the option of following another way, forcing the test writer to write another test, trying to drive the implementer down the intended path, and so on.


This leads to particularly good code, as the code that is written is usually the least code possible to implement the functionality, and the tests that are written thoroughly cover the functionality that was intended. It’s really cool to watch this work.


If you’re a pair of one…


If you happen to be working by yourself, it is very difficult to simulate this tension between test authoring and application implementation. At least, from my point of view, what happens is that I write the code I want the test to lead me to, regardless of whether or not there is a simpler way to get the test to pass. I think it is natural to do this, since you’re trying to play both sides of the partnership.


I think code I write without a pair is inferior to code I create with a partner, for this exact reason. We didn’t fight over the minimal implementation, which leads to code that is still good, but not the glory that is fully paired/TDD code.


There ain’t nothing better.


— bab

Episode 2 – The InputReader and the start of the Processor

OK, so this stuff is different. Really different. So different that I feel like a TDD rookie all over again. I find myself questioning everything that I do, and wondering if I’m going in the right direction. But it’s fun learning something new…

When I last left you…

When we finished episode 1, we had created a couple of customer tests and had used Interaction Based Testing to drive out the basics of our architecture. In looking back at what we drove out, I wonder about some of those classes. I can see that I have an input side, a processing middle, and an output side, but I see an awful lot of generalization having happened already. I’m going to watch out for this throughout the rest of this exercise. It is possible that this style of writing tests drives you towards early generalization, but I’m pretty sure it is just my unfamiliarity with how to drive code through these tests that is making this happen.

The Input Side

According to the first test I wrote, this is the interface that the input side needs to have:

public interface IInputReader
{
    List<BatchInput> ReadAllInputs();
}

I don’t know anything about a BatchInput yet, or how to read the input lines, but I think I may be about to find out.

So my job now is to drive out how the IInputReader will be implemented by some class. As it turns out, this is really not very interesting. The interaction-based testing that we’ve been doing has been very useful at driving out interactions, but the InputReader seems to stand alone. It is at the very end of the call chain, which means that it doesn’t interact with anything else. This means that state-based tests will do just fine to test this piece of the system.

Here is the first test I wrote for this class:

[Test]
public void InputReaderImplementsIInputReader()
{
    Assert.IsInstanceOfType(typeof(IInputReader), new InputReader(null));
}

I’ve started writing tests like these to force me to make the class I’m writing implement a specific interface. I started doing this because I’ve found myself going along writing a class, knowing full well that it has to implement some interface, but forgetting to actually do it. I end up writing the whole class against the wrong API, and I have to go back and refactor the API to match what it should be. Hence, I write this test now to force myself to implement the right interface.

Here are the rest of the tests:

[Test]
public void EmptyInputStreamGeneratesEmptyOutputList()
{
    StringReader reader = new StringReader(String.Empty);
    InputReader inputReader = new InputReader(reader);

    List<BatchInput> input = inputReader.ReadAllInputs();

    Assert.AreEqual(0, input.Count);
}

[Test]
public void SingleCommandInputStringGeneratesSingleElementOutputList()
{
    StringReader reader = new StringReader("a|b" + System.Environment.NewLine);
    InputReader inputReader = new InputReader(reader);

    List<BatchInput> input = inputReader.ReadAllInputs();

    Assert.AreEqual(1, input.Count);
    Assert.AreEqual("a|b", input[0].ToString());
}

[Test]
public void MultipleCommandInputStringGeneratesMultipleElementsInOutputList()
{
    StringReader reader = new StringReader("a|b" + System.Environment.NewLine + "b|c" + Environment.NewLine);
    InputReader inputReader = new InputReader(reader);

    List<BatchInput> input = inputReader.ReadAllInputs();

    Assert.AreEqual(2, input.Count);
    Assert.AreEqual("a|b", input[0].ToString());
    Assert.AreEqual("b|c", input[1].ToString());
}

These tests follow the usual 0, 1, many pattern for implementing functionality. Make sure something works for 0 elements, which fleshes out the API; then make sure it works for a single element, which puts the business logic in; and then make it work for multiple elements, which adds the looping logic. Here is the oh-so-complicated code to implement these tests:

public class InputReader : IInputReader
{
    private readonly TextReader reader;

    public InputReader(TextReader reader)
    {
        this.reader = reader;
    }

    public List<BatchInput> ReadAllInputs()
    {
        List<BatchInput> inputData = new List<BatchInput>();
        ReadAllLines().ForEach(delegate(string newLine) 
            { inputData.Add(new BatchInput(newLine)); });
        return inputData;
    }

    private List<string> ReadAllLines()
    {
        List<string> inputLines = new List<string>();
        while (reader.Peek() != -1)
        {
            inputLines.Add(reader.ReadLine());
        }

        return inputLines;
    }
}

And that should pretty well handle the input side of this system.

On to the Processor

The Processor class takes the input BatchInput list and converts it into ProcessOutput objects, which are then written to the output section of the program. Here is the interface again that rules this section of code:

public interface IProcessor
{
    List<ProcessOutput> Process(List<BatchInput> inputs);
}

First of all, let’s make sure that my class is going to implement the correct interface:

[Test]
public void ProcessorImplementsIProcessor()
{
    Assert.IsInstanceOfType(typeof(IProcessor), new Processor(null, null));
}

Now, the responsibilities that seem to have to happen here are that each BatchInput object needs to be turned into something representing a payroll input line, and that new object needs to be executed in some way. Those thought processes lead me to this test:

[Test]
public void SingleBatchInputCausesStuffToHappenOnce()
{
    MockRepository mocks = new MockRepository();

    IPayrollProcessorFactory factory = mocks.CreateMock<IPayrollProcessorFactory>();
    IPayrollExecutor executor = mocks.CreateMock<IPayrollExecutor>();
    Processor processor = new Processor(factory, executor);
    PayrollCommand commonPayrollCommand = new PayrollCommand();

    List<BatchInput> batches = TestDataFactory.CreateBatchInput();

    using (mocks.Record())
    {
        Expect.Call(factory.Create(batches[0])).Return(commonPayrollCommand).Repeat.Once();
        executor.Execute(commonPayrollCommand);
        LastCall.Constraints(Is.Equal(commonPayrollCommand)).Repeat.Once();
    }

    using (mocks.Playback())
    {
        processor.Process(batches);
    }
}

There is tons of stuff to see in this test method now. Immediately after I create my MockRepository, I create my system. This consists of three objects:

  • IPayrollProcessorFactory — responsible for converting the BatchInput object into a PayrollCommand
  • IPayrollExecutor — responsible for executing the PayrollCommand after it is created
  • Processor — the driver of the system

Together, these three classes make up this portion of our system. If I were doing SBT (state-based testing), I’m not at all sure that I would have these two embedded objects yet. I would probably have written code and then refactored to get to where I am now. But with IBT, you have to think in terms of which collaborators the method under test is going to use, and jump to having those collaborators now rather than later. In fact, the whole IPayrollExecutor seems kind of contrived at this point, but I need to have something there to interact with so I can write an IBT for this.

On the next line, I create an instance of my PayrollCommand. I specifically use this instance as the returned value from the factory.Create call in the first expectation and as a constraint to the executor.Execute in the second expectation. This was something I was struggling with earlier in my experimentation with IBT. What I want to have happen is that I want to force my code to take the object that is returned from the Create call and pass it to the Execute call. By having a common object that I use in the expectations, and having the Is.Equal constraint in the second expectation, I can actually force that to happen. It took me a while to figure this out, and I’m pretty sure that this is a Rhino Mocks thing, rather than a generic IBT thing, but I found this to be helpful.

Then I drop into the record section, where I set some expectations on the objects I’m collaborating with. The first expectation says that I expect an instance of a BatchInput to be provided to the Create method when called. Please note, and it took me a while to really grasp this intellectually: the batches[0] that I’m passing to the Create method is really just a placeholder. This is the weird part. I’m not actually calling the factory.Create method here; I’m signaling the mocking framework that this is a method I’m about to set some expectations on.

I could have just as easily, in this case, passed in null in place of the argument, but I thought null didn’t communicate very clearly. What I do mean is that I expect some instance of a BatchInput to be provided to this method. Maybe I would have done better by new’ing one up in place of using batches[0]? It is not the value or identity of the object that matters here at all; it is the type, and only because a) the compiler needs it and b) it communicates test intent.

The rest of that expectation states that I’m only going to expect this method to be called once, and allows me to specify what object will be returned when the method is called. This last part was one of the hardest parts for me to initially grasp. I was unsure whether the framework was asserting that the mocked method would return the value I passed it, or whether it was allowing me to set up the value that would be returned when it was called. In looking back, the second option is the only one that makes any sense at all, since these expectations are being set on methods that are 100% mocked out and have no ability to return anything without me specifying it in some way. Doh!

The second expectation is where I set up the fact that I expect the same object that was returned from the Create call to be the object passed to this call. Again, I do this through the Constraint, not through the value actually passed to the executor.Execute() method. I could just as easily have passed in null there, but it wouldn’t have communicated as clearly.

Finally, I get to the playback section, call my method, and the test is over.

This is the code that I wrote to make this test pass:

public List<ProcessOutput> Process(List<BatchInput> batches)
{
    List<ProcessOutput> results = new List<ProcessOutput>();

    PayrollCommand command = factory.Create(batches[0]);
    executor.Execute(command);

    return results;
}

I know I’m not handling the results at all yet, but I’m pretty sure I can flesh out what will happen with those at some point soon.

In my second test, I’ll worry about how to handle multiple BatchInput objects. Again, this is a very common pattern for me, starting with one of something to get the logic right, and then moving on to multiple, to put in any looping logic I need. Here is the second test:

[Test]
public void MultipleBatchInputsCausesStuffToHappenMultipleTimes()
{
    MockRepository mocks = new MockRepository();

    IPayrollProcessorFactory factory = mocks.CreateMock<IPayrollProcessorFactory>();
    IPayrollExecutor executor = mocks.CreateMock<IPayrollExecutor>();
    Processor processor = new Processor(factory, executor);
    PayrollCommand commonPayrollCommand = new PayrollCommand();

    List<BatchInput> batches = TestDataFactory.CreateMultipleBatches(2);

    using (mocks.Record())
    {
        Expect.Call(factory.Create(batches[0])).
                Constraints(List.OneOf(batches)).Return(commonPayrollCommand).Repeat.Twice();
        executor.Execute(commonPayrollCommand);
        LastCall.Constraints(Is.Equal(commonPayrollCommand)).Repeat.Twice();
    }

    using (mocks.Playback())
    {
        processor.Process(batches);
    }
}

Almost all of this test is exactly the same, except that I add two BatchInput objects to my list. The only other thing I need to enforce is that the object passed to the factory.Create method is a BatchInput that is a member of the list I passed in, which I do with the List.OneOf constraint on the first expectation.

Here is the modified Processor code:

public List<ProcessOutput> Process(List<BatchInput> batches)
{
    List<ProcessOutput> results = new List<ProcessOutput>();

    foreach (BatchInput batch in batches)
    {
        PayrollCommand command = factory.Create(batch);
        executor.Execute(command);
    }

    return results;
}

Object Mother

In both of these tests, you'll see a reference to TestDataFactory. This is a class whose responsibility is to create test data for me when asked. I use it to remove irrelevant details about test data from my tests and move them someplace else. This is called the Object Mother pattern.
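The real TestDataFactory isn't shown in this post, but a minimal sketch of the idea might look like the following. The contents of BatchInput here are hypothetical stand-ins; the whole point of the Object Mother is that the tests don't care what's inside.

```csharp
using System.Collections.Generic;

// Hypothetical stand-in for the real domain class. Its contents are
// irrelevant to the tests above, which is exactly why the Object Mother
// hides them.
public class BatchInput
{
    public string Name;

    public BatchInput(string name)
    {
        Name = name;
    }
}

public static class TestDataFactory
{
    // Builds the requested number of distinct batches with placeholder
    // contents, keeping test-data details out of the tests themselves.
    public static List<BatchInput> CreateMultipleBatches(int count)
    {
        List<BatchInput> batches = new List<BatchInput>();
        for (int i = 0; i < count; i++)
        {
            batches.Add(new BatchInput("Batch " + i));
        }
        return batches;
    }
}
```

If the test data ever needs to get more elaborate, only this one class has to change, not every test that uses it.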

In the next episode…

That’s about enough for now. If any of this wasn’t clear, please let me know, and I’ll update the text to be better. In the next episode, I’ll go ahead and build the factory using SBT (state-based testing), since it isn’t going to interact with anything, and then dive into the Processor code, which should prove interesting.

Overall, I’m pretty happy with how IBT (interaction-based testing) is allowing me to focus on the interactions between objects and to ignore details like the contents of my domain classes entirely, until I get to a class that actually manipulates those contents.

My biggest question lies in the area of premature generalization. Am I thinking too much and ignoring YAGNI? Do these tests reflect the simplest thing that could possibly work? I’m truly not sure. I tried to do better in this episode by focusing on just payroll stuff and not making generic classes like the IInputReader. I have a PayrollProcessorFactory, for example, instead of a ProcessorFactory. Those refactorings will come, and I want to wait for the code to tell me about them. IBT, I think, makes it easier to see those abstractions ahead of time, but I need to resist!

Please write with questions and comments. This continues to be an interesting journey for me, and I’m not at all sure where I’m going yet! But it is fun!

— bab

Interesting difference using nested test suites in JUnit versus NUnit

My friend Jim Newkirk introduced me to a very nice way of partitioning programmer tests for a class as you write them. Most developers write a single test class for a single application class and just dump all the tests for that class in the same place. This is not as correct as it could be (that’s Consultant-Speak for “that’s just plain wrong”).

The accepted best practice is to group tests that share the same setup/teardown logic into the same test fixture, which can lead to having multiple fixtures for a single class. When I build a Stack class, for example, I generally create a different fixture for each of the states my Stack can be in, and I put each test into the fixture representing its starting state. I might have states corresponding to

  • an empty stack
  • stack with a single element
  • stack with multiple elements
  • stack that is full

and so on. I would create a new fixture for each of these states, and use setup and teardown to push my system into the given state for that fixture. I know that this is a departure from my previous advice about Assiduously Avoid Setup and Teardown, but I think I like where this leads me. I promise to post an example of writing tests like this over the next few days, but that example is not part of what I’m talking about here.

What I am talking about is an arrangement like this:

[TestFixture]
public class StackFixture
{
    [TestFixture]
    public class EmptyFixture
    {
        [Test]
        public void ATest() {}
    }

    [TestFixture]
    public class SingleElementFixture
    {
        [Test]
        public void AnotherTest() {}
    }
}

and so on. The main reason I like this arrangement of nested fixtures is that it lets me separate the tests for different behaviors of my class into different fixtures, which makes them easier to find and makes it easier to decide where new tests belong, while still letting me run all the tests for a particular class together. If I had several independent test fixtures instead, I would have no automated way of ensuring that I ran all of them together. The closest I could come would be to use categories, which is rather manual and error prone.

Now, what I just tried to do was to replicate this arrangement in Java, and it was harder to make it work. Using JUnit 3.8.1, I tried this:

public class StackFixture extends TestCase {
    public static class EmptyFixture extends TestCase {
        public void testATest() {}
    }
}

This ran two tests, one failing: JUnit found the test in the inner class, but it reported an error about no tests being found in the outer fixture, StackFixture, so my tests could never all pass. I tried removing static from the inner class declaration, and then JUnit didn’t find the test in the inner fixture at all, and I still had the failure for no tests found in the outer one. Clearly not possible here.

Then I tried JUnit 4, which, like NUnit 2 and beyond, uses attributes to identify tests. In Java, they call them annotations, but they seem to be the same thing. Here is what I wrote:

public class JUnitFourFixture {
    public class StackEmptyFixture {
        @Test
        public void EmptyAtCreation() {
            Stack stack = new Stack();

            assertTrue(stack.isEmpty());
        }
    }
}

When I ran this, I got an error popup saying that there were no tests found. Not a winner 🙁 But when I added static to the inner class declaration, things finally worked beautifully. Here is the code that worked, with an extra fixture added just to be sure, and the inner classes made static. (BTW, for those of you who don’t know, there are two kinds of nested classes in Java. Inner classes, without the static modifier, belong to a particular instance of the enclosing class, so they can only be instantiated through an instance of the outer class, and they have access to that outer object’s members. With the static modifier, the nested class is entirely independent of any outer instance, just like nested classes in C#.)

public class JUnitFourFixture {
    public static class StackEmptyFixture {
        @Test
        public void EmptyAtCreation() {
            Stack stack = new Stack();

            assertTrue(stack.isEmpty());
        }
    }

    public static class SingleElementFixture {
        @Test
        public void AnotherTest() {
            assertTrue(true);
        }
    }
}
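The static-versus-inner distinction that tripped up the test runners can be seen in plain Java, with no JUnit involved. Here is a quick sketch (the class names are mine, purely for illustration):

```java
// Demonstrates why JUnit 4 could only find tests in static nested
// classes: a non-static inner class needs an enclosing instance,
// while a static nested class can be instantiated on its own.
public class Outer {
    static class StaticNested {
        String who() { return "static nested"; }
    }

    class Inner {
        String who() { return "inner"; }
    }

    public static void main(String[] args) {
        // No Outer instance is needed for the static nested class.
        StaticNested nested = new StaticNested();

        // The inner class must be created through an Outer instance.
        Outer outer = new Outer();
        Inner inner = outer.new Inner();

        System.out.println(nested.who() + " / " + inner.who());
    }
}
```

A test runner that instantiates fixture classes reflectively has no Outer instance to hand, which is presumably why only the static form works.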

I don’t know how many of you didn’t know this, or never had a reason to care about this, but I am teaching a Java TDD course this week. Before I taught this, I wanted to make sure it worked!

— bab