Test driving a WPF App – Part 2: Adding some data

This is part two of the blog series about creating a WPF application using TDD to drive the functionality and MVVM to split that functionality away from the view into something more testable. You may want to read part one if you haven’t already.

In this part, we’ll see what it takes to display data on the front page about each player on the softball team. We’re not going to display all their stats, since we don’t need to for the example, but it would be very easy to extend this to do so. (N.B. I have a more complete version of this that I’ll post later, which includes a database backend with more data than what I’m going to show here. The data backend is irrelevant to the layer being discussed here, so I’m going to ignore those details.)

The end result

At the end of this process, what we’re going to have is a list of players and their numbers. We could easily extend it to total stats for each of them, but that’s outside the point of the example, and is left as an exercise for the reader (I hated when college textbooks said that!).

image

So anyhow, you can see the four players listed and their numbers. This data came from the data repository that I created behind the view, in a class called PlayerRepository. I’m not going to go into the details of how this repository class was created, as the database layer is outside the scope of this article, so we’ll just assume that we created it when it was needed, and I won’t include discussion or tests for it. The important part for us to think about is how the data got into the view model and how it got from there onto the view. And that begins our (short) story.

Step 1 – Get data to the view

In order to get data to the view, the view model needs to expose an ObservableCollection that the view can bind to. I decided to create a property called Players on the TeamStatControlViewModel, driving it through tests. I created a test fixture in the ViewModelFixtures assembly called TeamStatControlPlayersDataBindingFixture, and I put my test in there. (As an aside, I’m trying a different style of fixture here, naming fixtures after behaviors or features and putting all the tests for those features in that fixture. I’m intentionally trying to break out of the app class <-> fixture class model and create tests around the behaviors of the system, regardless of where those behaviors may lie. Let me know if you like it, please.)

[Fact]
public void ViewModelExposesObservableCollectionOfPlayers()
{
    TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null);

    Assert.IsType<ObservableCollection<Player>>(viewModel.Players);
}

We have to add the Players property to the view model and create an empty Player class in our Model assembly to get this test to compile. Once it runs and fails, we implement the property in the view model to return an empty ObservableCollection<Player> collection. Run again, and the test passes.

public ObservableCollection<Player> Players
{
    get { return new ObservableCollection<Player>(); }
}
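The Player class itself stays trivial at this point. Its definition isn’t shown in the listings here, but based on the Name and Number bindings used later in this post, a minimal sketch might look like this (the exact property set is my inference, not code copied from the project):

```csharp
// Minimal Player model, inferred from the Name/Number bindings used later.
// The real class would grow to carry stats; both properties are simple strings here.
public class Player
{
    public string Name { get; set; }
    public string Number { get; set; }
}
```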

Step 2 – Get the data to the view model

Now that we have the data available to be shown on the view, we need to provide that data to the view model. We do this by giving the view model access to the PlayerRepository we spoke of earlier and letting it query the repository for the data as needed. Again, very simple. Here is the test in that same test fixture:

   1: [Fact]
   2: public void DataIsPulledFromModelWhenRetrievingPlayersFromViewModel()
   3: {
   4:     var playerRepository = new Mock<IPlayerRepository>();
   5:     TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, playerRepository.Object);
   6:     playerRepository.Setup(pr => pr.GetAllPlayers()).Returns(new List<Player>());
   7:
   8:     ObservableCollection<Player> players = viewModel.Players;
   9:
  10:     playerRepository.Verify(pr => pr.GetAllPlayers());
  11: }

The test starts driving out the behavior our system is going to need. We already know that the view model is going to need a reference to the repository, so we add the repository to the constructor arguments for the view model on line 5. (I like working backwards in a situation like this to discover the objects needed. I’ll write the constructor signature first and use that to drive object creation on previous lines, like you see here.) This forces us to create a mock version of the repository, which we do on line 4 by discovering an IPlayerRepository interface, details to be fleshed out. On line 6, we set up the behavior that we want the view model to invoke on the repository: the GetAllPlayers method, which for our purposes needs to return an empty collection of the appropriate type. Finally, we invoke the Players property and verify that the repository’s GetAllPlayers method is indeed called.

In getting this to compile, we create the IPlayerRepository interface and modify the signature of TeamStatControlViewModel to take the new IPlayerRepository parameter. Run test, test fails, and we finally implement:

public interface IPlayerRepository
{
    IList<Player> GetAllPlayers();
}

and

public class TeamStatControlViewModel : IApplicationExitAdapter
{
    private readonly IApplicationController applicationController;
    private readonly IPlayerRepository playerRepository;

    public TeamStatControlViewModel(IApplicationController applicationController, IPlayerRepository playerRepository)
    {
        this.applicationController = applicationController;
        this.playerRepository = playerRepository;
        ApplicationExit = new ApplicationExitCommand(this);
    }

    public void ExitApplication()
    {
        applicationController.ExitApplication();
    }

    public ICommand ApplicationExit { get; set; }

    public ObservableCollection<Player> Players
    {
        get { return new ObservableCollection<Player>(playerRepository.GetAllPlayers()); }
    }
}

At this point, we have data coming from the repository and available to the view. All that’s left is to hook up to the view.

Step 3 – Hooking up to the real view

I’m going to cheat a bit here: rather than define a DataTemplate, I’ll bind directly to the two columns I’m going to define, Name and Number. I have a ListView in the middle of my window, as you can see in the screen capture at the top of this post. I define a couple of GridViewColumns in it and bind them to Name and Number:

<ListView IsSynchronizedWithCurrentItem="True" Grid.Row="1" VerticalAlignment="Stretch" ItemsSource="{Binding Players}">
    <ListView.View>
        <GridView>
            <GridViewColumn Header="Player" DisplayMemberBinding="{Binding Name}"/>
            <GridViewColumn Header="Number" DisplayMemberBinding="{Binding Number}" />
        </GridView>
    </ListView.View>
</ListView>

Obviously I wouldn’t do this on a real project; I’d use a DataTemplate. But for now, this will do. The ItemsSource is set to the Players property, and the two columns are set to the fields I want to show.

We don’t have a real instance of our IPlayerRepository yet, so let’s build one. In real life, this would be a repository over the top of a database, but we don’t need to go that far. Let’s just create the simplest repository we can for now:

public class PlayerRepository : IPlayerRepository
{
    public IList<Player> GetAllPlayers()
    {
        return new List<Player>
        {
            new Player { Name = "Linsey", Number = "42" },
            new Player { Name = "Michelle", Number = "31" },
            new Player { Name = "Susan", Number = "17" },
            new Player { Name = "Joan", Number = "26" }
        };
    }
}

One thing we’re still doing is exposing the Player class to the view. This is not necessarily a best practice, since the Player class is defined in the Model layer. The danger is that we may end up needing to add INotifyPropertyChanged behavior to the Player class, which would pollute our domain model with view-specific code. If that were to happen, it would force us to refactor the view model to return an observable collection of something else, like an ObservableCollection<PlayerViewModel>, so that we would have a place to put our view-specific code. We don’t need to yet, so we’re not going to bother. This is a potential refactoring to come, though.
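To make the shape of that potential refactoring concrete, here is a hedged sketch of what such a wrapper might look like. The PlayerViewModel class and its INotifyPropertyChanged plumbing are my illustration of the idea, not code from the project:

```csharp
using System.ComponentModel;

// Player as defined in the Model assembly (a simple data holder).
public class Player
{
    public string Name { get; set; }
    public string Number { get; set; }
}

// Hypothetical PlayerViewModel: wraps the domain Player so the
// INotifyPropertyChanged plumbing stays out of the Model layer.
public class PlayerViewModel : INotifyPropertyChanged
{
    private readonly Player player;

    public PlayerViewModel(Player player)
    {
        this.player = player;
    }

    public string Name
    {
        get { return player.Name; }
        set
        {
            player.Name = value;
            OnPropertyChanged("Name");
        }
    }

    public string Number
    {
        get { return player.Number; }
        set
        {
            player.Number = value;
            OnPropertyChanged("Number");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

The view would then bind to ObservableCollection<PlayerViewModel> instead of ObservableCollection<Player>, and WPF would pick up property changes through the event.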

And our final change is to add the PlayerRepository type into our Application_Startup method, so that Unity knows how to build the repository and pass it into the TeamStatControlViewModel:

private void Application_Startup(object sender, StartupEventArgs e)
{
    container.RegisterInstance<IApplicationController>(this);

    container.RegisterType<IPlayerRepository, PlayerRepository>(new ContainerControlledLifetimeManager());
    container.RegisterType<TeamStatControlViewModel>(new ContainerControlledLifetimeManager());
    container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());

    container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());

    Current.MainWindow = container.Resolve<MainWindow>();
    Current.MainWindow.Content = container.Resolve<TeamStatControl>();
    Current.MainWindow.Show();
}

Compile and run, and all should work.

Conclusion

This was a fairly easy step to take. We exposed a property on our view model to let our view see the data we want to publish. Our view model has a repository injected into it to let it get the data as needed. That was really all there was to it. Two tests, and we got to where we needed to be.

The next step along the way will involve navigating from the front page to a player detail page. Stay tuned for the next installment, coming right up!

— bab

Test Driving a WPF application using MVVM and TDD

This is a preview of the material I’m going to be presenting at the St. Louis Day of Dot Net in a couple weeks.

Introduction

For those of you who are reading this blog for the first time, I’m pretty much the St. Louis Agile Zealot. Long hair, flowing robes… you get the picture 🙂 I’ve been advocating Agile and XP in St. Louis since around 2000, through speaking at various user groups and local conferences, teaching at local companies, giving talks in the park, dropping leaflets from airplanes, and so on. I live Agile in my day job at Asynchrony Solutions, where I lead their Agile training and mentoring group. I am totally sold on Test Driven Development as the best way for me to write my code, and I’ve been occasionally known to share that opinion with others.

As far as WPF experience goes, I’m still pretty new in my learning curve. I’ve been playing with a project on my own for a few months, trying new things, reading, and experimenting. When I first started out, I basically wrote a Winforms app in WPF. I used a full-on MVP/MVC architecture, had my button click methods in my window classes, etc. A complete mess… Then I started reading about MVVM, and something clicked: this finally made sense. This was something that gave me a lot of the advantages of the MVP-like architectures, such as being able to test my UI without actually running the UI, but without the overhead of wiring all that junk up myself.

What follows are the lessons I’ve learned in the process of rethinking my original app and in writing this sample app for my talk.

The Application

My daughter plays select softball for one of the organizations here in St. Louis. They have a web site that lists all their stats throughout the year, and I thought it would be fun to mimic that site as a desktop app. The application would list all the girls’ total stats on the front page and give you the ability to drill down into detailed stats for each game on subsequent pages. For fun, I wanted to be able to add stats for new games, and add and delete players. With that set of features, it seemed that I’d be able to drive out a lot of interesting opportunities to change around the design to let me write the code in a TDD manner.

Here is the main page of the application, in all its glory. Remember that I am but a poor developer, with a developer’s eye for style 🙂

image

The display of stats isn’t complete yet, but it was enough to validate that my data binding was working. Let’s look at what it took to get this far…

Note: If you want to skip the setup stuff and get right to the parts where behavior is added, jump ahead to Step 3.

Our Roadmap

As a quick review, before we dive in, here is an overview of the MVVM design pattern and the players involved.

MVVM Architecture

The View represents the surface that the user interacts with. It is the WPF code, the UI, and XAML… all that stuff that users see and that is hard to test in an automated fashion. Our goal is to take as much logic out of there as possible, to allow us to have the highest amount of confidence and the greatest amount of test coverage that we can achieve.

Next to the View is the ViewModel. The ViewModel represents the entire set of data and behavior contained in the View. It has properties to expose all the different pieces of data that the View will need to display, and it exposes another set of properties for all the behavior that the View will need to invoke.

The View and ViewModel are tied together in two separate, but very similar, ways. Through Data Binding, data travels between the View and ViewModel automatically. WPF, through the wonders of its internal magic, keeps the data shown in the View consistent with the data held in the ViewModel. There is a bit of work that we have to do to ensure this happens, but it is mostly automatic.

Commands are representations of the behavior that users invoke through the View. Individual commands are exposed through properties on the ViewModel and tied to controls on the View. When the user presses a button, for example, the command attached to that button fires and something happens. You can think of Commands as little controllers, focused on a specific topic, whose responsibility is to respond to a user action in a particular manner by coordinating the response to that action between the View and ViewModel.

Our goal here is to put all logic about the View into the ViewModel. The View should be very thin and code-less, and the Commands should be thin shims that merely provide an execution conduit between the gesture being made and the behavior being invoked in the ViewModel. Code in the View or in Commands should be looked upon with suspicion, with an eye towards moving it someplace more testable. Sometimes you can move that code, and sometimes it’s entirely appropriate where it is, but the question should always be asked.

Step 1 – Modify the template to support TDD

When you create a WPF application using the Visual Studio New Project dialog, you get the framework of an empty application. There is no code anywhere, other than an InitializeComponent() call here and there. Most of the interesting behavior, at this point, takes place through XAML: things like instantiating the Application object, creating the main window, and causing it to display. While this works, and works fine for the empty application, we’re going to need more control over how this initialization process occurs, for reasons we’ll discuss later in this post or a subsequent one.

So, to control the initial creation, there are a few changes we need to make. The first step is to put ourselves in control of when and where the main window is created. As time goes on, there are aspects of this creation process that we’re going to want to control, like wiring together other objects, so we’ll need full control over how windows and controls are born.

App.xaml

When first created, App.xaml looks like this:

<Application x:Class="WpfApplication1.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    StartupUri="Window1.xaml">
    <Application.Resources>

    </Application.Resources>
</Application>

The StartupUri attribute of the Application element is what causes the class defined in Window1.xaml to be instantiated. We want to prevent this from happening until we’re ready for it, so we have to take this out. We’ll still need to create this window, however, so we’ll need to arrange for another way to create objects as the system is starting. It turns out that WPF exposes an event that is perfect for this initialization, the Startup event. By providing a handler for the Startup event, we have a place to put all our initialization code.

App.xaml now looks like this:

<Application x:Class="WpfApplication1.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Startup="Application_Startup">
    <Application.Resources>

    </Application.Resources>
</Application>

Now we have to add some code into the App.xaml.cs file to create the objects we need and to wire them together. Please note that I’m not writing tests for this code, because this is configuration code, not business logic. I view logic such as this the same way I view a Main method, and I tend not to test Mains. It is where objects are created, wired together, and then told to go on their merry way. This is not meant to be reusable code and should be just simple declarations of objects and assignments of variables. If I manage to screw up Main, the application isn’t going to work anyhow, and I’m pretty sure I’ll know it 🙂

using System.Windows;

namespace WpfApplication1
{
    public partial class App : Application
    {
        private void Application_Startup(object sender, StartupEventArgs e)
        {
            Current.MainWindow = new MainWindow();
            Current.MainWindow.Show();
        }
    }
}

The final step is to rename the template-provided window name of Window1 to MainWindow and make sure I instantiate it with the proper class name in the Application_Startup method, the handler for the Startup event. Once that is done, you can start the project, and an empty window should appear.

image

One more change…

Before going further, I’d like to introduce Unity into the codebase. Unity is the dependency injection library from Microsoft, and I’ve found it to be very useful in my WPF applications. Here is the previous code after injecting Unity (injecting Unity – HA! I kill myself).

using System.Windows;
using Microsoft.Practices.Unity;

namespace WpfApplication1
{
    public partial class App : Application
    {
        private readonly IUnityContainer container = new UnityContainer();

        private void Application_Startup(object sender, StartupEventArgs e)
        {
            container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());

            Current.MainWindow = container.Resolve<MainWindow>();
            Current.MainWindow.Show();
        }
    }
}

Briefly, the point of this code is to register a type, MainWindow, with the Unity container, and then use the container to instantiate the MainWindow object as the application’s main window. The reason I introduced Unity is to separate myself from the semantics of object creation. Unity will take care of instantiating any objects I ask for, as well as any objects that the class being created needs. It’s just a way of relieving myself of having to worry about constructor signatures, basically.
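To make concrete what that buys us, here is a small sketch of the kind of constructor wiring Unity performs. The Engine and Car types are my own hypothetical examples, not part of this project: when you ask the container for a Car, Unity inspects Car’s constructor, sees it needs an Engine, and builds one for you.

```csharp
// Hypothetical Engine/Car types illustrating constructor injection.
public class Engine { }

public class Car
{
    public Engine Engine { get; private set; }

    // Unity inspects this constructor; Resolve<Car>() would create an Engine
    // and pass it in automatically, with no extra registration needed for
    // concrete types.
    public Car(Engine engine)
    {
        Engine = engine;
    }
}

// With Unity, the equivalent of "new Car(new Engine())" becomes:
//   var container = new UnityContainer();
//   var car = container.Resolve<Car>();   // Engine created and injected for us
```

As constructor signatures change over time, the Resolve call stays the same, which is exactly the chore we no longer do by hand.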

Step 2 – Adding the initial content

It’s time to fill in the empty window with some content. We’ll avoid adding any behavior yet; we just want to see something. The strategy that we’re going to use is to keep the MainWindow class as the one window in our application, and we’ll swap different pieces of content in and out of it. In my projects I’ve found this pattern to be useful, because it lets me layer content to mimic a workflow in the application, and have the content always show up in the same place, be the same size, and not take extra work to manage.

So our goal now is to set the MainWindow’s content to be the initial view we want to show people when they first open the application. The design looks like this:

DesignWithTeamStatControlAndViewModel

And the implementation is easy, too.

First, I create my user control, TeamStatControl:

image

At the same time, I create a ResourceDictionary to hold styles that I’ll be using around the app and merge that dictionary into my UserControl.Resources section. I create a menu, a somewhat attractive header, and a couple of buttons. There’s no behavior in it yet, but that’s coming shortly.

Now, for the fun part, let’s make the simple change to make the content show up in the MainWindow. Since we’re creating a new object, our TeamStatControl, we have to make a few simple changes in App.Application_Startup. We need to register the user control with Unity so that it can create it as needed, and then resolve an instance of it as the Content property of the MainWindow. Once that is done, WPF takes care of making it appear in the window.

private void Application_Startup(object sender, StartupEventArgs e)
{
    container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());
    container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());

    Current.MainWindow = container.Resolve<MainWindow>();
    Current.MainWindow.Content = container.Resolve<TeamStatControl>();
    Current.MainWindow.Show();
}

And, by the way, I also changed the Width and Height properties in MainWindow to be 600 and 1000 respectively so that the entire user control would be visible.

Step 3 – Adding behavior

Now that the shell is in place, we can add the behavior that we want. First, let’s add the basic behavior of exiting the application in an orderly way. We want to do this in a way that keeps all the code we write testable, and that is going to require a bit of extra work over and above just writing the simplest code. This is the price one pays when writing code using TDD, but the extra effort pays off in systems that work.

Let’s start by writing our first test. This test will be a test of the Command object, which is the object that the view will interact with. An underlying force of the MVVM pattern is that the ViewModel encapsulates the visible data and behavior of the View. And the Command object acts as the conduit through which the gestures of the user are translated into tangible actions in the ViewModel. Here is a sequence diagram showing this behavior:

CommandExecutionSequenceDiagram

Clicking the button causes the Command.Execute method to fire, which calls a method in the ViewModel. That method does the real work of this system; the calls that come before it are just part of the plumbing that causes things to happen.

So let’s write a test to show that the above sequence is what will actually happen: that calling the Execute method of some Command object will invoke some sort of behavior that causes the application to exit.

Our first test:

   1: namespace ViewModelFixtures
   2: {
   3:     public class ApplicationExitBehaviorFixture
   4:     {
   5:         [Fact]
   6:         public void ApplicationExitingBehaviorInvokedWhenExecuteCalled()
   7:         {
   8:             var exiter = new Mock<IApplicationExitAdapter>();
   9:             ApplicationExitCommand command = new ApplicationExitCommand(exiter.Object);
  10:             exiter.Setup(e => e.ExitApplication()).Verifiable();
  11:
  12:             command.Execute(null);
  13:
  14:             exiter.Verify(e => e.ExitApplication());
  15:         }
  16:     }
  17: }

For those of you unfamiliar with the tools used in this test, I’m using Moq as my mocking framework and xUnit for my tests. These are my two favorite tools for TDD as of now, and I highly recommend both of them.

The test, beginning on line 7, defines the behavior of the system when my ApplicationExitCommand.Execute method is called. I’m defining the interface that the command is going to talk to, IApplicationExitAdapter, and I’m doing this in total and complete ignorance of what class is ever going to implement that interface (OK, I’m lying – it’s going to be my view model, but let’s keep that our secret for now).

And the code that we write to get this test to compile:

namespace ViewModels
{
    public interface IApplicationExitAdapter
    {
        void ExitApplication();
    }
}

namespace ViewModels
{
    public class ApplicationExitCommand : ICommand
    {
        public ApplicationExitCommand(IApplicationExitAdapter exitAdapter)
        {
            throw new NotImplementedException();
        }

        public void Execute(object o)
        {
            throw new NotImplementedException();
        }

        public bool CanExecute(object parameter)
        {
            throw new NotImplementedException();
        }

        public event EventHandler CanExecuteChanged;
    }
}

Strictly speaking, I went a little further than I had to here, choosing to implement ICommand, but all commands that are hooked through a View as we’re discussing need to implement this interface (or one of a couple of others that are more advanced), so I just went ahead and did it.

Short Aside – Projects and Solution Structure

Before we go further, we should discuss the projects I’ve chosen to use in this solution, and why they were chosen. I have one project that holds the view and a separate project that holds the view models. I intentionally kept the Views and ViewModels separate to enforce the direction of the relationship between them. The Application class and the other WPF controls need to have access to the ViewModels, if only to create them and assign them to DataContexts. ViewModels, however, should be 100% ignorant of their views. The simplest way to enforce this constraint is to put all view model code into another assembly, which is what I’ve done.

Now it’s a simple matter of implementing the code to make the test pass:

public class ApplicationExitCommand : ICommand
{
    private readonly IApplicationExitAdapter exitAdapter;

    public ApplicationExitCommand(IApplicationExitAdapter exitAdapter)
    {
        this.exitAdapter = exitAdapter;
    }

    public void Execute(object o)
    {
        exitAdapter.ExitApplication();
    }

    public bool CanExecute(object parameter)
    {
        return true;
    }

    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

I also went ahead and added some boilerplate code to the CanExecute method and CanExecuteChanged event inside this class. This boilerplate is what you start with every time you write a command. You can think of it as creating a command from a code template inside Visual Studio.

At this point, the command should be invoking the behavior that we want, as seen in this diagram:

ApplicationExitCommandSequenceDiagram

Now that the behavior is being called through the IApplicationExitAdapter, we’ll need to make the giant leap and assume that it is, indeed, our view model class that is going to implement that interface. This only makes sense, since the Command is invoking behavior on some class that represents the behavior of the View, and the class that represents a View’s behavior is that View’s ViewModel. Q.E.D. Once we’ve crossed that intellectual chasm, we’ll go ahead and implement the functionality inside the view model.

For that, we write this test:

[Fact]
public void ApplicationControllerInvokedWhenViewModelExitApplicationCalled()
{
    var applicationController = new Mock<IApplicationController>();
    TeamStatControlViewModel viewModel = new TeamStatControlViewModel(applicationController.Object);
    applicationController.Setup(ac => ac.ExitApplication()).Verifiable();

    viewModel.ExitApplication();

    applicationController.Verify(ac => ac.ExitApplication());
}

This test causes us to make a few design decisions. First, we’ve discovered an interface called IApplicationController that is going to be responsible for controlling application-wide behavior, which certainly includes things like exiting at the appropriate time. We’re also creating the view model class, TeamStatControlViewModel, since we have to add behavior directly to it now. And finally, we’ve discovered that the view model needs a reference to the IApplicationController interface, to allow it to invoke functionality through that interface. This leads us to create a constructor for the view model through which we can inject the interface at creation.

Here is the resulting code. First, the simple IApplicationController interface:

namespace ViewModels
{
    public interface IApplicationController
    {
        void ExitApplication();
    }
}

and now the TeamStatControlViewModel:

namespace ViewModels
{
    public class TeamStatControlViewModel : IApplicationExitAdapter
    {
        private readonly IApplicationController applicationController;

        public TeamStatControlViewModel(IApplicationController applicationController)
        {
            this.applicationController = applicationController;
        }

        public void ExitApplication()
        {
            applicationController.ExitApplication();
        }
    }
}

That’s just enough code to make that test pass, so we’re momentarily happy. However, our system isn’t going to work yet, because we haven’t found anyone to implement the IApplicationController interface. That’s our next major decision.

[Class diagram: the design so far, except for the final class]

I’m going to choose to put this interface onto the App class, at least for now, since it seems to be the one place in our application that has knowledge of the entire application. It may turn out later that we need to move this responsibility somewhere else, but we’ll leave it here for now.

We’ll go ahead and make the easy changes to App.xaml.cs, which consist of making it implement IApplicationController and writing the ExitApplication method, which simply calls Current.Shutdown().

public partial class App : Application, IApplicationController
{
    private readonly IUnityContainer container = new UnityContainer();

    private void Application_Startup(object sender, StartupEventArgs e) {...}

    public void ExitApplication()
    {
        Current.Shutdown();
    }
}

The important point to note is that the IApplicationController interface lives in the ViewModel assembly, even though it is implemented by the App class in the main WPF assembly. This means that the main WPF application project has to have a reference to the ViewModel project, which is the direction we want the dependencies to flow. By moving the ViewModel classes and interfaces into a project separate from the view classes, we’re enforcing the dependency management design we described earlier.

This now leaves us with this for a design:

[Class diagram: the final design]

So we’re in the home stretch now. We have all our code built – what is lacking now are the few scattered bits and pieces needed to wire this whole thing together. There are several of these pieces, and we’ll look at them one at a time.

Let’s start with the object creation changes needed in App.Application_Startup. Over these last few changes, we’ve created a new interface for the App class, so we need to tell Unity that when we ask for an instance of an IApplicationController, it should pass back a reference to the App class. You can see this code on line 3 below. We also created a TeamStatControlViewModel, which we tell Unity about on line 5.

   1: private void Application_Startup(object sender, StartupEventArgs e)
   2: {
   3:     container.RegisterInstance<IApplicationController>(this);
   4: 
   5:     container.RegisterType<TeamStatControlViewModel>(new ContainerControlledLifetimeManager());
   6:     container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());
   7: 
   8:     container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());
   9: 
  10:     Current.MainWindow = container.Resolve<MainWindow>();
  11:     Current.MainWindow.Content = container.Resolve<TeamStatControl>();
  12:     Current.MainWindow.Show();
  13: }

Next, we know that TeamStatControlViewModel is going to be the DataContext for the TeamStatControl, so let’s make that happen in code. The easiest way is to change the constructor of the TeamStatControl to take the view model and assign it as the DataContext. The really cool part is that we don’t have to make any changes in App.Application_Startup based on this constructor changing. Unity will just build all the dependent objects for us, call whatever constructors it can find, and it all just works.

public TeamStatControl(TeamStatControlViewModel viewModel)
{
    DataContext = viewModel;
    InitializeComponent();
}

Now that we have the objects built and wired up, we need to do the data binding to put the commands into the controls on the interface. The first place to do this is in TeamStatControl, where we define the Quit button. We need to assign an instance of the ApplicationExitCommand to the Command attribute of the button. Here is the XAML code that will do this for the Quit button:

<Button Command="{Binding ApplicationExit}" Grid.Column="1" Content="Quit" HorizontalAlignment="Left" Style="{StaticResource NormalButton}"/>

We do this by creating a binding to the ApplicationExit property of our data context, the TeamStatControlViewModel. The view now expects the view model to expose a property called ApplicationExit, of type ICommand, through which it can get the command. We don’t have that property right now, so let’s write a test that makes us create it.

[Fact]
public void ViewModelExposesApplicationExitCommandAsICommand()
{
    TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null);

    Assert.IsAssignableFrom<ApplicationExitCommand>(viewModel.ApplicationExit);
}

This test says that we can get an ICommand from the view model, and that the ICommand we get is actually an ApplicationExitCommand, which is what we need. We can implement it by creating a simple auto-property in the view model, which satisfies the binding we defined in the XAML. Now we construct the command in the constructor of the TeamStatControlViewModel class, passing an instance of the view model to the command’s constructor to wire it up, and we’re set!

public TeamStatControlViewModel(IApplicationController applicationController)
{
    this.applicationController = applicationController;
    ApplicationExit = new ApplicationExitCommand(this);
}

After all this, running the application and clicking the Quit button should cause the app to vanish silently, slipping into the night, never to be seen again. Whew!
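As a coda, the shape of this wiring doesn’t depend on WPF at all. Here’s a framework-free Java sketch of the same relay (all names are hypothetical, loosely mirroring the C# classes above): the command knows only an exit adapter, and invoking it delegates straight through to whatever plays the application controller role.

```java
// Hypothetical names mirroring the C# design above; a sketch of the
// command -> adapter -> controller relay, not the real WPF implementation.
interface ApplicationExitAdapter {
    void exitApplication();
}

class ApplicationExitCommand {
    private final ApplicationExitAdapter adapter;

    ApplicationExitCommand(ApplicationExitAdapter adapter) {
        this.adapter = adapter;
    }

    // WPF would invoke this through the Command binding; here we call it directly.
    void execute() {
        adapter.exitApplication();
    }
}

public class CommandWiringSketch {
    public static void main(String[] args) {
        final boolean[] exited = { false };
        // The view model's role here: an adapter that records the exit request.
        ApplicationExitCommand command =
                new ApplicationExitCommand(() -> exited[0] = true);
        command.execute();
        System.out.println("exit requested: " + exited[0]);
    }
}
```

The point of the sketch is the dependency direction: the command never sees a concrete application class, only the narrow adapter interface.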

A Bonus

Now that all this hard work is done, we can also hook this same logic to the menu item. If we go into the XAML, find the line where we define the menu item, and set its Command attribute to a binding to ApplicationExit as well, it now works, too.

Conclusion

As I said at the beginning, this blog entry, and the couple more that will follow this week, will form the basis for my St. Louis Day of .Net talk on this same subject. I’d appreciate any and all feedback on this article, especially things that weren’t clear, design leaps I made without a clear reason, and alternative ways to do the same things I did.

You can find the complete project for this on my website. You may need to install Unity to get the solution to compile.

Thanks for reading, and I hope it helps!

 

— bab

What do you think of this code?

I recently finished 6 weeks of coding for a client, and it was heaven! I actually got a chance to code every day, for 6 solid weeks. It was a chance for me to learn C# 3.0, and a chance to work on testing things that are hard to test. It was great!

Out of that work came several interesting observations and coding techniques, all rooted in C# 3.0. Since no one at work has any experience with these new idioms I “invented”, “discovered”, or just “copied”, I’d love to get some reader feedback. I’ll start with this one trick I tried, and follow on with more as the mood strikes me over time.

Trick 1: Using extension methods and a marker interface in place of implementation inheritance

I had an instance of code duplication in two parallel hierarchies of classes, and I wanted to find a way to share the code. One option would be to use inheritance, factoring out another base class above BaseResponse and BaseRequest, where methods common to requests and responses could live. But using inheritance just to reuse code in a single-inheritance language is a pretty heavyweight thing to do. I’d rather find a way to use delegation, since that preserves the SRP in my class hierarchy. Instead, I decided to try an extension method, and just use that method where I needed it. To avoid polluting Object with unnecessary methods, however, I came up with the idea of using a marker interface on the classes that should have these extension methods, limiting the scope where the extra methods were visible. (No idea if anyone else has done this yet or not.)

[Class diagram: the parallel request and response hierarchies]

For each request and response class in the two parallel hierarchies, my client requirements made it necessary to add an XmlRoot attribute to tell the XmlSerializer that the object was the root of an XML document and to specify the runtime name of that element. To let me get the runtime name of each request and response object, for auditing and logging purposes, both hierarchies had a CommandName property containing the exact same code. This was the code I was trying to share.

As a simple exercise, I created an extension method to deal with this:

    internal static class SssMessageExtensionMethods
    {
        public static string GetCommandNameFromXmlRootAttribute(this object message)
        {
            object[] attributes = message.GetType().GetCustomAttributes(typeof(XmlRootAttribute), true);
            if (attributes.Length == 0) return message.GetType().Name;

            XmlRootAttribute xmlRootAttribute = attributes[0] as XmlRootAttribute;

            return xmlRootAttribute.ElementName;
        }
    }

This solution worked just fine, and the code ran correctly, but I still wasn’t happy with it. The problem I was sensing was that I was adding yet another extension method to Object, and Object’s neighborhood was already pretty crowded with all the Linq methods in there. I wanted my extension methods to show up only on those classes to which I wanted to apply them.

The solution I came up with was to use a marker interface whose sole purpose is to limit the visibility of the extension methods to the classes I intend to apply them to. In this case, I made BaseRequest and BaseResponse each implement ISssMessageMarker, an interface with no methods. And I changed the extension method to be:

    internal static class SssMessageExtensionMethods
    {
        public static string GetCommandNameFromXmlRootAttribute(this ISssMessageMarker message)
        {
            object[] attributes = message.GetType().GetCustomAttributes(typeof(XmlRootAttribute), true);
            if (attributes.Length == 0) return message.GetType().Name;

            XmlRootAttribute xmlRootAttribute = attributes[0] as XmlRootAttribute;

            return xmlRootAttribute.ElementName;
        }
    }

Now I have the same extension method defined, but it only appears on those classes that implement the marker.

What do you think of this technique? In a more powerful language, like Ruby or C++ (ducking and running for cover!), this kind of trickery wouldn’t be needed. But C# can only get you so far, so I felt this was a good tradeoff between adding the methods for needed functionality and making the most minimal change in my classes to hide these methods so that only those places that needed them could see them.
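For what it’s worth, Java can’t do this trick the same way, since it has no extension methods at all, but a default method on a marker-style interface gets a similar “visible only on implementors” effect. Here’s a hedged sketch, where the XmlRoot annotation and the message classes are hypothetical stand-ins for the C# versions above:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical stand-in for .NET's XmlRootAttribute.
@Retention(RetentionPolicy.RUNTIME)
@interface XmlRoot {
    String elementName();
}

// The marker interface carries the shared behavior as a default method,
// so the method shows up only on classes that implement the marker.
interface MessageMarker {
    default String commandNameFromXmlRoot() {
        XmlRoot root = getClass().getAnnotation(XmlRoot.class);
        return root != null ? root.elementName() : getClass().getSimpleName();
    }
}

@XmlRoot(elementName = "payroll-request")
class BaseRequest implements MessageMarker {}

class BaseResponse implements MessageMarker {} // no annotation: falls back to the class name

public class MarkerSketch {
    public static void main(String[] args) {
        System.out.println(new BaseRequest().commandNameFromXmlRoot());
        System.out.println(new BaseResponse().commandNameFromXmlRoot());
    }
}
```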

— bab

Slides from my powershell talk

I gave an introduction-to-PowerShell talk at the St. Louis .Net UG meeting on Monday, October 29th, to about 70-80 people. I introduced basic concepts of PowerShell, talked about a few problems I had solved with it, and showed some simple scripts.

At the end of the presentation, I promised to post the slides by the end of this week, and I’m making it Wednesday night. Believe me when I say that me getting something finished early is a minor miracle 🙂

— bab

Episode 2 – The InputReader and the start of the Processor

OK, so this stuff is different. Really different. So different that I feel like a TDD rookie all over again. I find myself questioning everything that I do, and wondering if I’m going in the right direction. But it’s fun learning something new…

When I last left you…

When we finished episode 1, we had created a couple of customer tests and had used Interaction Based Testing to drive out the basics of our architecture. In looking back at what we drove out, I wonder about some of those classes. I can see that I have an input side, a processing middle, and an output side, but I also see an awful lot of generalization having happened already. I’m going to watch out for this throughout the rest of this exercise. It is possible that this style of writing tests drives you toward early generalization, but I’m pretty sure it is just my unfamiliarity with how to drive code through these tests that is making this happen.

The Input Side

According to the first test I wrote, this is the interface that the input side needs to have:

public interface IInputReader
{
    List<BatchInput> ReadAllInputs();
}

I don’t know anything about a BatchInput yet, or how to read the input lines, but I think I may be about to find out.

So my job now is to drive out how the IInputReader will be implemented by some class. As it turns out, this is really not very interesting. The interaction-based testing that we’ve been doing has been very useful at driving out interactions, but the InputReader seems to stand alone. It is at the very end of the call chain, which means that it doesn’t interact with anything else. This means that state-based tests will do just fine to test this piece of the system.

Here is the first test I wrote for this class:

[Test]
public void InputReaderImplementsIInputReader()
{
    Assert.IsInstanceOfType(typeof(IInputReader), new InputReader(null));
}

I’ve started writing tests like these to force me to make the class I’m writing implement a specific interface. I started doing this because I’ve found myself going along writing a class, knowing full well that it has to implement some interface, but forgetting to actually do it. I end up writing the whole class to the wrong API, and then have to go back and refactor it to match what it should be. Hence, I now write this test to force me to implement the right interface.

Here are the rest of the tests:

[Test]
public void EmptyInputStreamGeneratesEmptyOutputList()
{
    StringReader reader = new StringReader(String.Empty);
    InputReader inputReader = new InputReader(reader);

    List<BatchInput> input = inputReader.ReadAllInputs();

    Assert.AreEqual(0, input.Count);
}

[Test]
public void SingleCommandInputStringGeneratesSingleElementOutputList()
{
    StringReader reader = new StringReader("a|b" + System.Environment.NewLine);
    InputReader inputReader = new InputReader(reader);

    List<BatchInput> input = inputReader.ReadAllInputs();

    Assert.AreEqual(1, input.Count);
    Assert.AreEqual("a|b", input[0].ToString());
}

[Test]
public void MultipleCommandInputStringGeneratesMultipleElementsInOutputList()
{
    StringReader reader = new StringReader("a|b" + System.Environment.NewLine + "b|c" + Environment.NewLine);
    InputReader inputReader = new InputReader(reader);

    List<BatchInput> input = inputReader.ReadAllInputs();

    Assert.AreEqual(2, input.Count);
    Assert.AreEqual("a|b", input[0].ToString());
    Assert.AreEqual("b|c", input[1].ToString());
}

These tests follow the usual 0, 1, many pattern for implementing functionality. Make sure something works for 0 elements, which fleshes out the API, then make sure it works for a single element, which puts the business logic in, and then make it work for multiple elements, which adds the looping logic. Here is the oh, so complicated code to implement these tests:

public class InputReader : IInputReader
{
    private readonly TextReader reader;

    public InputReader(TextReader reader)
    {
        this.reader = reader;
    }

    public List<BatchInput> ReadAllInputs()
    {
        List<BatchInput> inputData = new List<BatchInput>();
        ReadAllLines().ForEach(delegate(string newLine) 
            { inputData.Add(new BatchInput(newLine)); });
        return inputData;
    }

    private List<string> ReadAllLines()
    {
        List<string> inputLines = new List<string>();
        while (reader.Peek() != -1)
        {
            inputLines.Add(reader.ReadLine());
        }

        return inputLines;
    }
}

And that should pretty well handle the input side of this system.

On to the Processor

The Processor class takes the input BatchInput list and converts it into ProcessOutput objects, which are then written to the output section of the program. Here is the interface again that rules this section of code:

public interface IProcessor
{
    List<ProcessOutput> Process(List<BatchInput> inputs);
}

First of all, let’s make sure that my class is going to implement the correct interface:

[Test]
public void ProcessorImplementsIProcessor()
{
    Assert.IsInstanceOfType(typeof(IProcessor), new Processor(null, null));
}

Now, the responsibilities here seem to be that each BatchInput object needs to be turned into something representing a payroll input line, and that this new object then needs to be executed in some way. That thought process leads me to this test:

[Test]
public void SingleBatchInputCausesStuffToHappenOnce()
{
    MockRepository mocks = new MockRepository();

    IPayrollProcessorFactory factory = mocks.CreateMock<IPayrollProcessorFactory>();
    IPayrollExecutor executor = mocks.CreateMock<IPayrollExecutor>();
    Processor processor = new Processor(factory, executor);
    PayrollCommand commonPayrollCommand = new PayrollCommand();

    List<BatchInput> batches = TestDataFactory.CreateBatchInput();

    using (mocks.Record())
    {
        Expect.Call(factory.Create(batches[0])).Return(commonPayrollCommand).Repeat.Once();
        executor.Execute(commonPayrollCommand);
        LastCall.Constraints(Is.Equal(commonPayrollCommand)).Repeat.Once();
    }

    using (mocks.Playback())
    {
        processor.Process(batches);
    }
}

There is a lot to see in this test method. Immediately after I create my MockRepository, I create my system. This consists of three objects:

  • IPayrollProcessorFactory — responsible for converting the BatchInput object into a PayrollCommand
  • IPayrollExecutor — responsible for executing the PayrollCommand after it is created
  • Processor — the driver of the system

Together, these three classes make up this portion of our system. If I were doing SBT (state-based testing), I’m not at all sure I would have these two embedded objects yet. I would probably have written code and then refactored to get to where I am now. But with IBT, you have to think in terms of which collaborators the method under test is going to use, and jump to having those collaborators now rather than later. In fact, the whole IPayrollExecutor seems kind of contrived at this point, but I need something there to interact with so I can write an IBT for this.

On the next line, I create an instance of my PayrollCommand. I specifically use this instance as the returned value from the factory.Create call in the first expectation and as a constraint to the executor.Execute in the second expectation. This was something I was struggling with earlier in my experimentation with IBT. What I want to have happen is that I want to force my code to take the object that is returned from the Create call and pass it to the Execute call. By having a common object that I use in the expectations, and having the Is.Equal constraint in the second expectation, I can actually force that to happen. It took me a while to figure this out, and I’m pretty sure that this is a Rhino Mocks thing, rather than a generic IBT thing, but I found this to be helpful.

Then I drop into the record section, where I set some expectations on the objects I’m collaborating with. The first expectation says that I expect an instance of a BatchInput to be provided to the Create method when it is called. Please note, and it took me a while to really grasp this: the batches[0] that I’m passing to the Create method is just a placeholder. This is the weird part. I’m not actually calling factory.Create here, I’m signaling the mocking framework that this is a method I’m about to set some expectations on. I could just as easily have passed null in place of the argument, but I thought null didn’t communicate very clearly. What I mean is that I expect some instance of a BatchInput to be provided to this method. Maybe I would have done better by new’ing one up in place of using batches[0]? It is not the value or identity of the object that matters here at all, only the type, and that only because a) the compiler needs it and b) it communicates test intent. The rest of the expectation states that I expect this method to be called exactly once, and lets me specify what object will be returned when it is called. This last part was one of the hardest for me to initially grasp. I was unsure whether the framework was asserting that the mocked method would return the value I passed it, or allowing me to set up the value that would be returned when it was called. Looking back, the second option is the only one that makes any sense, since these expectations are being set on methods that are 100% mocked out and have no ability to return anything without me specifying it in some way. Doh!

The second expectation is where I set up the fact that I expect the same object that was returned from the Create call to be passed to the Execute call. Again, I do this through the constraint, not through the value actually passed to executor.Execute(). I could just as easily have passed null there, but it wouldn’t have communicated as clearly.

Finally, I get to the playback section, call my method, and the test is over.

This is the code that I wrote to make this test pass:

public List<ProcessOutput> Process(List<BatchInput> batches)
{
    List<ProcessOutput> results = new List<ProcessOutput>();

    PayrollCommand command = factory.Create(batches[0]);
    executor.Execute(command);

    return results;
}

I know I’m not handling the results at all yet, but I’m pretty sure I can flesh out what will happen with those at some point soon.

In my second test, I’ll worry about how to handle multiple BatchInput objects. Again, this is a very common pattern for me, starting with one of something to get the logic right, and then moving on to multiple, to put in any looping logic I need. Here is the second test:

[Test]
public void MultipleBatchInputsCausesStuffToHappenMultipleTimes()
{
    MockRepository mocks = new MockRepository();

    IPayrollProcessorFactory factory = mocks.CreateMock<IPayrollProcessorFactory>();
    IPayrollExecutor executor = mocks.CreateMock<IPayrollExecutor>();
    Processor processor = new Processor(factory, executor);
    PayrollCommand commonPayrollCommand = new PayrollCommand();

    List<BatchInput> batches = TestDataFactory.CreateMultipleBatches(2);

    using (mocks.Record())
    {
        Expect.Call(factory.Create(batches[0])).
                Constraints(List.OneOf(batches)).Return(commonPayrollCommand).Repeat.Twice();
        executor.Execute(commonPayrollCommand);
        LastCall.Constraints(Is.Equal(commonPayrollCommand)).Repeat.Twice();
    }

    using (mocks.Playback())
    {
        processor.Process(batches);
    }
}

Almost all of this test is exactly the same, except that I add two BatchInput objects to my list. The only other thing I need to enforce is that each object passed to the factory.Create method is a member of the list I passed in, which I do with the List.OneOf constraint on the first expectation.

Here is the modified Processor code:

public List<ProcessOutput> Process(List<BatchInput> batches)
{
    List<ProcessOutput> results = new List<ProcessOutput>();

    foreach (BatchInput batch in batches)
    {
        PayrollCommand command = factory.Create(batch);
        executor.Execute(command);
    }

    return results;
}
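If the record/playback mechanics above feel opaque, the property they enforce (the command returned by Create is the very instance handed to Execute) can also be checked with hand-rolled fakes. Here’s a Java sketch with hypothetical names mirroring the C# interfaces:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical Java mirrors of the C# interfaces above.
class PayrollCommand {}

interface PayrollProcessorFactory {
    PayrollCommand create(String batchInput);
}

interface PayrollExecutor {
    void execute(PayrollCommand command);
}

class Processor {
    private final PayrollProcessorFactory factory;
    private final PayrollExecutor executor;

    Processor(PayrollProcessorFactory factory, PayrollExecutor executor) {
        this.factory = factory;
        this.executor = executor;
    }

    void process(List<String> batches) {
        for (String batch : batches) {
            // The interaction under test: what Create returns goes to Execute.
            executor.execute(factory.create(batch));
        }
    }
}

public class InteractionSketch {
    public static void main(String[] args) {
        PayrollCommand expected = new PayrollCommand();
        List<PayrollCommand> executed = new ArrayList<>();

        // Fake factory always returns the known instance; the fake executor
        // records what it receives, so we can check identity afterwards.
        Processor processor = new Processor(batch -> expected, executed::add);
        processor.process(Arrays.asList("a|b", "b|c"));

        System.out.println("calls: " + executed.size());
        System.out.println("same instance: " + (executed.get(0) == expected));
    }
}
```

A mocking framework buys you the same check with less ceremony, but the fake makes explicit what the Is.Equal constraint is actually verifying.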

Object Mother

In both of these tests, you’ll see a reference to TestDataFactory. This is a class whose responsibility is to create test data for me when asked. I use it to remove irrelevant details about test data from my tests and move them someplace else. This is called the Object Mother pattern.
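A minimal Object Mother can be sketched in a few lines. This Java version uses hypothetical names matching the TestDataFactory mentioned above; the point is only that tests ask the mother for ready-made data and never see the construction details:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the BatchInput and TestDataFactory used above.
class BatchInput {
    private final String line;

    BatchInput(String line) { this.line = line; }

    @Override
    public String toString() { return line; }
}

class TestDataFactory {
    // Tests call these; the details of what "valid test data" looks like
    // live here, in one place, instead of being repeated in every test.
    static List<BatchInput> createBatchInput() {
        return createMultipleBatches(1);
    }

    static List<BatchInput> createMultipleBatches(int count) {
        List<BatchInput> batches = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            batches.add(new BatchInput("field" + i + "|value" + i));
        }
        return batches;
    }
}

public class ObjectMotherSketch {
    public static void main(String[] args) {
        System.out.println(TestDataFactory.createMultipleBatches(2));
    }
}
```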

In the next episode…

That’s about enough for now. If any of this wasn’t clear, please let me know, and I’ll update the text. In the next episode, I’ll build the factory using SBT, since it isn’t going to interact with anything, and then dive into the Processor code, which should prove interesting.

Overall, I’m pretty happy with how IBT is allowing me to focus on interactions between objects and ignore details like the contents of my domain classes entirely, until I get to a class that manipulates the contents of those domain classes.

My biggest question lies in the area of premature generalization. Am I thinking too much and ignoring YAGNI? Do these tests reflect the simplest thing that can possibly work? I’m truly not sure. I tried to do better in this episode to focus on just payroll stuff and not make generic classes, like the IInputReader. I have a PayrollProcessorFactory, for example, instead of a ProcessorFactory. Those refactorings will come, and I want to wait for the code to tell me about them. IBT, I think, makes it easier to see those abstractions ahead of time, but I need to resist!

Please write with questions and comments. This continues to be an interesting journey for me, and I’m not at all sure where I’m going yet! But it is fun!

— bab

Interesting difference using nested test suites in JUnit versus NUnit

My friend, Jim Newkirk, introduced me to a very nice way of partitioning programmer tests for a class as you write them. Most developers write a single test class for a single application class, and just dump all tests for that class in the same place. This is not as correct as it could be (that’s Consultant-Speak for “that’s just plain wrong”).

The accepted best practice is to group together tests that have the same setup/teardown logic into the same test fixtures, which can lead to having multiple fixtures for a single class. For example, when I build a Stack class, I generally have different fixtures for each of the different states that my Stack class can have, and I put a test into the correct fixture representing its starting state. For example, I might have states corresponding to

  • an empty stack
  • stack with a single element
  • stack with multiple elements
  • stack that is full

and so on. I would create a new fixture for each of these states, and use setup and teardown to push my system into the given state for that fixture. I know that this is a departure from my previous advice about Assiduously Avoid Setup and Teardown, but I think I like where this leads me. I promise to post an example of writing tests like this over the next few days, but that example is not part of what I’m talking about here.

What I am talking about is an arrangement like this:

[TestFixture]
public class StackFixture
{
    [TestFixture]
    public class EmptyFixture
    {
        [Test]
        public void ATest() {}
    }

    [TestFixture]
    public class SingleElementFixture
    {
        [Test]
        public void AnotherTest() {}
    }
}

and so on. The main reason I like this arrangement of nested fixtures is that it lets me separate tests for different behaviors of my class into different fixtures, which makes them easier to find and makes it easier to decide where to put new tests, while still letting me run all the tests for a particular class together. If I had several independent test fixtures, I would have no automated way of ensuring that I ran all of them together. The closest I could come would be to use categories, which is rather manual and error prone.

Now, what I just tried to do was to replicate this arrangement in Java, and it was harder to make it work. Using JUnit 3.8.1, I tried this:

public class StackFixture extends TestCase {
    public static class EmptyFixture extends TestCase {
        public void testATest() {}
    }
}

This ran two tests with one failure: JUnit found the test in the inner class, but reported an error that no tests were found in the outer fixture, StackFixture, so my tests could never all pass. I tried removing static from the inner class declaration, and then JUnit didn’t find the test in the inner fixture at all, and I still had the failure for no tests found. Clearly not possible here.

Then I tried JUnit 4, which, like NUnit 2 and beyond, uses attributes to identify tests. In Java, they call them annotations, but they seem to be the same thing. Here is what I wrote:

public class JUnitFourFixture {
    public class StackEmptyFixture {
        @Test
        public void EmptyAtCreation() {
            Stack stack = new Stack();

            assertTrue(stack.isEmpty());
        }
    }
}

When I ran this, I got an error popup saying that there were no tests found. Not a winner 🙁 But when I added static to the class declaration for the inner class, things did finally work beautifully. Here is the code that worked, with an extra fixture added just to be sure, and the inner classes made static. (BTW, for those of you who don’t know, there are two kinds of inner classes in Java. Inner classes without the static prefix belong to a particular instance of the enclosing class, so when you instantiate the outer class, you’re instantiating the inner class as well, and it has access to stuff in the outer object. If you have the static prefix, then the inner class is entirely independent of the outer class, just like inner classes in C#.)

public class JUnitFourFixture {
    public static class StackEmptyFixture {
        @Test
        public void EmptyAtCreation() {
            Stack stack = new Stack();

            assertTrue(stack.isEmpty());
        }
    }

    public static class SingleElementFixture {
        @Test
        public void AnotherTest() {
            assertTrue(true);
        }
    }
}

I don’t know how many of you already knew this, or never had a reason to care, but I’m teaching a Java TDD course this week, and before teaching it, I wanted to make sure this worked!

— bab

Another powershell quickie – removing all bin and obj directories beneath VS.Net projects

gci -recurse -include bin,obj . | ri -recurse

I was playing around with how to get this to work, and I couldn’t seem to figure out why these commands didn’t find the same locations to delete:

  • gci -recurse -include bin,obj .
  • ri -recurse -force -include bin,obj -whatif .

I finally got so baffled that I RTFMed remove-item. There, in the fine print, nestled away in an example that did what I was looking for and in the documentation for the -recurse parameter, was my answer…

-recurse <SwitchParameter>
Deletes the items in the specified locations and in all child items of the locations.

The Recurse parameter in this cmdlet does not work properly.

Ah ha! That’s when I went to the format that I finally settled on, and everything worked.
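Out of curiosity, the same sweep can be expressed in plain Java with java.nio.file; this is only a sketch for comparison, not a replacement for the one-liner. It collects every bin and obj directory first and then deletes each one depth-first, so the walker never trips over directories removed out from under it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CleanBuildDirs {
    public static void main(String[] args) throws IOException {
        // Build a tiny demo tree so the sketch is self-contained.
        Path root = Files.createTempDirectory("solution");
        Files.createDirectories(root.resolve("ProjectA/bin/Debug"));
        Files.createDirectories(root.resolve("ProjectA/obj"));
        Files.createDirectories(root.resolve("ProjectA/src"));

        // Find every bin/obj directory first, then delete afterwards.
        List<Path> targets;
        try (Stream<Path> paths = Files.walk(root)) {
            targets = paths.filter(Files::isDirectory)
                           .filter(p -> {
                               String name = p.getFileName().toString();
                               return name.equals("bin") || name.equals("obj");
                           })
                           .collect(Collectors.toList());
        }
        for (Path dir : targets) {
            deleteRecursively(dir);
        }

        System.out.println("bin exists: " + Files.exists(root.resolve("ProjectA/bin")));
        System.out.println("src exists: " + Files.exists(root.resolve("ProjectA/src")));
    }

    // Depth-first delete: children before their parent directories.
    static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> contents = Files.walk(dir)) {
            for (Path p : contents.sorted(Comparator.reverseOrder())
                                  .collect(Collectors.toList())) {
                Files.delete(p);
            }
        }
    }
}
```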

Another blog posting mostly written to help me remember how I solved this problem the next time I encounter it!

— bab

Asynchrony Solutions is hiring!

Asynchrony Solutions is looking for several developers to join our company. Our immediate requirements are:

  • C/C++/Unix/realtime/embedded developer — This is for an exciting, very long-term project where you would have the opportunity to write code in C, Java, and C++
  • Java or C#/.Net/ASP.Net developer for any one of a number of projects
  • Agile mentors and trainers

In addition to technical skills, agile experience is a definite plus. Even if you haven’t ever worked in an agile environment, you should be very open to learning agile skills and working in such an environment.

I joined the company in May of this year, and I have been very happy there. I’m the VP of Engineering, and the thing that convinced me to join was the company’s commitment to an agile way of thinking. There are 4 principal owners, and each of them understands, appreciates, and values agility, and this shows in how the company markets its services, how projects are run, and how people are valued. Our project teams are generally 5–10 people who work together in a war-room atmosphere. We actively encourage agile management and development practices and, more importantly, an agile belief system. I could go on, but it is just a great place to work. If you have questions about the company or the working environment, just let me know and I’ll be happy to answer.

For more information about our company, please see our web site. If you’d like to learn more about these opportunities, please contact me directly through this blog.

Thanks!

— bab

 

A Real World Example of Refactoring

I’m leading an agile team through developing a web site. This means that I spend most of my time managing, but on this one occasion I had the opportunity to write some code.

The problem

We had an image stored in a database that was always either 800×600 (landscape) or 600×800 (portrait). We had a need to render that image on the site either as-is or reduced to one of two other sizes. You can think of this as a thumbnail, a details image, and a full-size image. We were (of course) writing the code test first, and the tests were focusing on getting the sizes right and not so much on checking the generated images. We ended up getting it working, but we were thoroughly disgusted with the code that we produced 🙂 At least we knew the code was bad, and resolved to fix it when we had a chance.

Before we get into the application code, let’s look at the tests we wrote, and I’ll explain the evolution of the code and what the classes involved are.

As you can see, the class under test is called ImageWriter. It came into being because we were being careful about resource management, so we didn’t want to resize an image and expose it to the world, where it might not get disposed. So, our concept was to create this class, whose purpose in life is to write a properly sized image to a Stream. It would ensure that the image was sized correctly, it was put into the stream properly, and the resources were reclaimed. Sounds pretty simple, and it was, other than some ugly switch logic.

We wrote the first three tests you see below, starting with just taking in the full-sized landscape image and writing that to the stream. This wasn’t a hard test to get working, as you might expect. We followed that up with writing the detail-sized image, which forced us to write a conditional statement to choose between the two sizes. And then we wrote the third test, which caused us to add another branch to choose thumbnail-sized images. At the fourth test, it started to get really ugly when we had to decide whether the image was portrait or landscape, which added a totally different conditional statement. Very rapidly this code was becoming unwieldy. We quickly wrote the fifth and sixth tests, just to get the functionality working, since we were just following an already existing pattern, ugly though it was. Once we were finished, though, we knew we needed to refactor this beast before checking it in.

[TestFixture]
public class ImageWriterFixture
{
    private Image fullSizeLandscapeImage;
    private Image fullSizePortraitImage;
    private MemoryStream imageInStream;

    [SetUp]
    public void SetUp()
    {
        fullSizeLandscapeImage = new Bitmap(800, 600);
        fullSizePortraitImage = new Bitmap(600, 800);
        imageInStream = new MemoryStream();
    }

    [TearDown]
    public void ReleaseResources()
    {
        imageInStream.Dispose();
        fullSizeLandscapeImage.Dispose();
    }

    [Test]
    public void ImageWriteWillWriteFullSizeLandscapeImages()
    {
        ImageWriter writer = ImageWriter.GetFullSizeWriter(imageInStream);
        writer.Write(fullSizeLandscapeImage);

        Image rereadImage = ResizeImageFromStream();

        Assert.AreEqual(fullSizeLandscapeImage.Height, rereadImage.Height);
        Assert.AreEqual(fullSizeLandscapeImage.Width, rereadImage.Width);
    }

    private Image ResizeImageFromStream()
    {
        imageInStream.Seek(0, SeekOrigin.Begin);
        return Image.FromStream(imageInStream);
    }

    [Test]
    public void ImageWriterWillWriteDetailsLandscapeImages()
    {
        ImageWriter writer = ImageWriter.GetDetailsSizeWriter(imageInStream);
        writer.Write(fullSizeLandscapeImage);

        Image rereadImage = ResizeImageFromStream();

        Assert.AreEqual(306, rereadImage.Height);
        Assert.AreEqual(408, rereadImage.Width);
    }

    [Test]
    public void ImageWriterWillWriteThumbnailLandscapeImages()
    {
        ImageWriter writer = ImageWriter.GetThumbnailSizeWriter(imageInStream);
        writer.Write(fullSizeLandscapeImage);

        Image rereadImage = ResizeImageFromStream();

        Assert.AreEqual(105, rereadImage.Height);
        Assert.AreEqual(140, rereadImage.Width);
    }

    [Test]
    public void ImageWriterWillWriteFullSizePortraitImages()
    {
        ImageWriter writer = ImageWriter.GetFullSizeWriter(imageInStream);
        writer.Write(fullSizePortraitImage);

        Image rereadImage = ResizeImageFromStream();

        Assert.AreEqual(800, rereadImage.Height);
        Assert.AreEqual(600, rereadImage.Width);
    }

    [Test]
    public void ImageWriterWillWriteDetailsSizePortraitImages()
    {
        ImageWriter writer = ImageWriter.GetDetailsSizeWriter(imageInStream);
        writer.Write(fullSizePortraitImage);

        Image rereadImage = ResizeImageFromStream();

        Assert.AreEqual(306, rereadImage.Height);
        Assert.AreEqual(230, rereadImage.Width);
    }

    [Test]
    public void ImageWriterWillWriteThumbnailSizePortraitImages()
    {
        ImageWriter writer = ImageWriter.GetThumbnailSizeWriter(imageInStream);
        writer.Write(fullSizePortraitImage);

        Image rereadImage = ResizeImageFromStream();

        Assert.AreEqual(105, rereadImage.Height);
        Assert.AreEqual(79, rereadImage.Width);
    }
}

Now here is the finished, but mostly unrefactored, source code. During the process of writing the tests, we did do some refactoring to clean up the code a bit, make things a bit more readable, etc., but we held off on the Replace Conditional with Polymorphism refactoring that we could both see coming. And that refactoring is what I want to eventually share here. So, here is our ugly code:

public class ImageWriter : IImageWriter
{
    private Stream stream;
    private readonly ImageSize desiredImageSize;
    private int desiredHeight;
    private int desiredWidth;

    private enum ImageSize
    {
        THUMBNAIL,
        DETAILS,
        FULLSIZE
    };

    private readonly Rectangle LandscapeFullSize = new Rectangle(0, 0, 800, 600);
    private readonly Rectangle LandscapeDetailsSize = new Rectangle(0, 0, 408, 306);
    private readonly Rectangle LandscapeThumbnailSize = new Rectangle(0, 0, 140, 105);
    private readonly Rectangle PortraitFullSize = new Rectangle(0, 0, 600, 800);
    private readonly Rectangle PortraitDetailsSize = new Rectangle(0, 0, 230, 306);
    private readonly Rectangle PortraitThumbnailSize = new Rectangle(0, 0, 79, 105);

    public static ImageWriter GetFullSizeWriter(Stream imageInStream)
    {
        return new ImageWriter(imageInStream, ImageSize.FULLSIZE);
    }

    public static ImageWriter GetDetailsSizeWriter(Stream imageInStream)
    {
        return new ImageWriter(imageInStream, ImageSize.DETAILS);
    }

    public static ImageWriter GetThumbnailSizeWriter(Stream imageInStream)
    {
        return new ImageWriter(imageInStream, ImageSize.THUMBNAIL);
    }

    private ImageWriter(Stream stream, ImageSize imageSize)
    {
        this.stream = stream;
        desiredImageSize = imageSize;
    }

    public void Write(Image image)
    {
        int width = image.Width;
        int height = image.Height;

        if (width < height) // isPortrait
        {
            switch (desiredImageSize)
            {
                case ImageSize.FULLSIZE:
                    desiredHeight = PortraitFullSize.Height;
                    desiredWidth = PortraitFullSize.Width;
                    break;

                case ImageSize.DETAILS:
                    desiredHeight = PortraitDetailsSize.Height;
                    desiredWidth = PortraitDetailsSize.Width;
                    break;

                case ImageSize.THUMBNAIL:
                    desiredHeight = PortraitThumbnailSize.Height;
                    desiredWidth = PortraitThumbnailSize.Width;
                    break;
            }
        }
        else // isLandscape
        {
            switch (desiredImageSize)
            {
                case ImageSize.FULLSIZE:
                    desiredHeight = LandscapeFullSize.Height;
                    desiredWidth = LandscapeFullSize.Width;
                    break;

                case ImageSize.DETAILS:
                    desiredHeight = LandscapeDetailsSize.Height;
                    desiredWidth = LandscapeDetailsSize.Width;
                    break;

                case ImageSize.THUMBNAIL:
                    desiredHeight = LandscapeThumbnailSize.Height;
                    desiredWidth = LandscapeThumbnailSize.Width;
                    break;
            }
        }

        using (Image resized = image.GetThumbnailImage(desiredWidth, desiredHeight, null, IntPtr.Zero))
        {
            resized.Save(stream, ImageFormat.Jpeg);
        }
    }
}

The bright idea we had while we were writing it was that we would use the Rectangles to hold the dimensions of an image of the proper size, to make it easier to identify what the magic numbers for height and width meant. This also made it easier for us to write the body of each leg of the case statements. Clearly, however, this was only a short term workaround for a more proper solution later.

Beginning the refactoring

OK, so we’re about to start this. I truly have never attempted the refactoring that we’re going to try here on this code, so I’m going to be doing it essentially live for you. I’ll try to share with you any mistakes I make, what thoughts are going through my semi-sentient head, and what I’m feeling about the code as it progresses. Our goal is to end up in a situation where any conditional behavior is moved out of a procedural if/then/else block and into some sort of polymorphic dispatch, but to do that in small, orderly steps, such that we’re always pretty close to having working code.

To begin, I’m planning on opening up my refactoring book to the section on Replace Conditional with Polymorphism. As I tell my students in every TDD course I teach, please open your Fowler Refactoring books and follow the steps, as Martin makes these things easy once you figure out which refactoring to use. So, to follow my own advice, I’m going to open the book and use it as I go.

First step — Are the responsibilities in the right place?

The first thing I notice when I look at the ImageWriter’s Write method is that I see policy and details happening in the same place. The policy in that class can be summed up as, "Determine the dimensions of the final image, resize the image, and then write the image to the stream", and the details in that method are concerned with how those dimensions are determined. In order for us to do anything at all to simplify this system, we’re going to have to pull the dimension calculations out into another method at least, and possibly into another class after that. So let’s start with an ExtractMethod refactoring to get those dimension calculations out of there.

As my first step in doing this, I noticed that the member variables desiredHeight and desiredWidth weren’t really doing anything good for me, and I could get rid of them by using height and width, the local variables declared in the Write method, in their place, as such:

public void Write(Image image)
{
    int width = image.Width;
    int height = image.Height;

    if (width < height) // isPortrait
    {
        switch (desiredImageSize)
        {
            case ImageSize.FULLSIZE:
                height = PortraitFullSize.Height;
                width = PortraitFullSize.Width;
                break;

            case ImageSize.DETAILS:
                height = PortraitDetailsSize.Height;
                width = PortraitDetailsSize.Width;
                break;

            case ImageSize.THUMBNAIL:
                height = PortraitThumbnailSize.Height;
                width = PortraitThumbnailSize.Width;
                break;
        }
    }

and so on. I think this sets me up very nicely to get rid of the individual width and height variables and replace them with a Rectangle object. I’m going to try that in the code and see where that takes me. I’m not going to make this whole change all at once, because that’s too large of a change. Instead, I’m going to refactor each leg of the switch to contain the assignment to the desiredRectangle reference and then take advantage of that rectangle to set the height and width repeatedly.

48 public void Write(Image image)
49 {
50     Rectangle desiredRectangle;
51
52     int width = image.Width;
53     int height = image.Height;
54
55     if (width < height) // isPortrait
56     {
57         switch (desiredImageSize)
58         {
59             case ImageSize.FULLSIZE:
60                 desiredRectangle = PortraitFullSize;
61                 height = desiredRectangle.Height;
62                 width = desiredRectangle.Width;
63                 break;

As you can see, I added a new Rectangle reference on line 50 which is going to hold the rectangle with the desired dimensions. And the smallest change I could make to start making use of this was to rewrite the body of the case statement starting on line 60 as you can see. For those of you new to refactoring, this is one of the most important pieces of the process — take steps that are as small as possible. By doing this, you keep your risk down as low as possible while you’re changing your code. If you take big steps and mess something up, it could take you a lot of time to get back to having something working. If you take a small step and mess something up, you can just back up a bit to where things worked. I’m taking a small step here, and just changing this leg of the switch. And after doing this, I ran my tests, and they worked. I’m going to change the rest of the legs now, running my tests between each change. I’ll do this privately, as it doesn’t seem very interesting to show you each step along this way.

<time passes>

OK, I did that, and all my tests worked, and each of the legs of the switches looks just like the sample code above, except for a different equivalent of line 60 for each case. Now that I’ve done this, I believe I can factor out the setting of the height and width in each leg and do that at the bottom of the method, right before actually doing the resizing. That will leave the code looking like this:

public void Write(Image image)
{
    Rectangle desiredRectangle = new Rectangle();

    if (image.Width < image.Height) // isPortrait
    {
        switch (desiredImageSize)
        {
            case ImageSize.FULLSIZE:
                desiredRectangle = PortraitFullSize;
                break;

            case ImageSize.DETAILS:
                desiredRectangle = PortraitDetailsSize;
                break;

            case ImageSize.THUMBNAIL:
                desiredRectangle = PortraitThumbnailSize;
                break;
        }
    }
    else // isLandscape
    {
        // extra stuff elided
    }

    int height = desiredRectangle.Height;
    int width = desiredRectangle.Width;

    using (Image resized = image.GetThumbnailImage(width, height, null, IntPtr.Zero))
    {
        resized.Save(stream, ImageFormat.Jpeg);
    }
}

So each leg of the switches has become simpler, and we’re breaking out the height and width individually only at the end. During the next refactoring, I’ll probably do an Inline refactoring on height and width, as they’re really not helping much, which would shrink this method down even more.

Now that we’re at this point, I think I can do the ExtractMethod I talked about previously on the switch logic, moving it out into another method so that the Write method is concerned only with the higher-level, more abstract steps of how this process works, while the details of how the dimensions are calculated live in a method of their own. After this refactoring, another ExtractMethod to pull out the mechanics of writing to the stream, and a couple of renamings to clarify what the rectangle dimension calculations actually mean, Write looks like this, which is just about right 🙂

public void Write(Image image)
{
    Rectangle imageDimensions = CalculateScaledImageDimensions(image);
    WriteScaledImage(image, imageDimensions);
}

A decision needs to be made

I’m at somewhat of a crossroads here. In looking back at the Write method, I’ve decided I really don’t like it. Something just seems strange to me about it. I know I need to do something with scaled images, but instead I’m working with the dimensions of that scaled image. To me, the calculations of the dimensions of the scaled images and storing those dimensions into a rectangle seems like an implementation detail of how I did this. What the code really needs to be using, and written in terms of, are the scaled images themselves. In doing this, I think the method will now read better, since both lines of code in it will be using the same abstraction. Instead of using a Rectangle to represent something about the scaled image, now I can deal with the scaled image throughout the method. I like this better. This leads us to this code:

public void Write(Image image)
{
    using (Image scaledImage = GenerateScaledImage(image))
    {
        WriteScaledImage(scaledImage);
    }
}

I like this a lot better, as it seems like both of the lines of this method are dealing with the same abstraction now, a ScaledImage. This leads me to think that there is a ScaledImage class or hierarchy of classes trying to find its way out, which was our goal when we started this — we were looking for the right place to put our polymorphic logic, and this ScaledImage class seems like the right place. The final code for this class looks like this:

public class ImageWriter : IImageWriter
{
    private Stream stream;
    private readonly ImageSize desiredImageSize;

    private enum ImageSize
    {
        THUMBNAIL,
        DETAILS,
        FULLSIZE
    };

    private readonly Rectangle LandscapeFullSize = new Rectangle(0, 0, 800, 600);
    private readonly Rectangle LandscapeDetailsSize = new Rectangle(0, 0, 408, 306);
    private readonly Rectangle LandscapeThumbnailSize = new Rectangle(0, 0, 140, 105);
    private readonly Rectangle PortraitFullSize = new Rectangle(0, 0, 600, 800);
    private readonly Rectangle PortraitDetailsSize = new Rectangle(0, 0, 230, 306);
    private readonly Rectangle PortraitThumbnailSize = new Rectangle(0, 0, 79, 105);

    public static ImageWriter GetFullSizeWriter(Stream imageInStream)
    {
        return new ImageWriter(imageInStream, ImageSize.FULLSIZE);
    }

    public static ImageWriter GetDetailsSizeWriter(Stream imageInStream)
    {
        return new ImageWriter(imageInStream, ImageSize.DETAILS);
    }

    public static ImageWriter GetThumbnailSizeWriter(Stream imageInStream)
    {
        return new ImageWriter(imageInStream, ImageSize.THUMBNAIL);
    }

    private ImageWriter(Stream stream, ImageSize imageSize)
    {
        this.stream = stream;
        desiredImageSize = imageSize;
    }

    public void Write(Image image)
    {
        using (Image scaledImage = GenerateScaledImage(image))
        {
            WriteScaledImage(scaledImage);
        }
    }

    private void WriteScaledImage(Image scaledImage)
    {
        scaledImage.Save(stream, ImageFormat.Jpeg);
    }

    private Image GenerateScaledImage(Image image)
    {
        Rectangle imageDimensions = new Rectangle();

        if (image.Width < image.Height) // isPortrait
        {
            switch (desiredImageSize)
            {
                case ImageSize.FULLSIZE:
                    imageDimensions = PortraitFullSize;
                    break;

                case ImageSize.DETAILS:
                    imageDimensions = PortraitDetailsSize;
                    break;

                case ImageSize.THUMBNAIL:
                    imageDimensions = PortraitThumbnailSize;
                    break;
            }
        }
        else // isLandscape
        {
            switch (desiredImageSize)
            {
                case ImageSize.FULLSIZE:
                    imageDimensions = LandscapeFullSize;
                    break;

                case ImageSize.DETAILS:
                    imageDimensions = LandscapeDetailsSize;
                    break;

                case ImageSize.THUMBNAIL:
                    imageDimensions = LandscapeThumbnailSize;
                    break;
            }
        }

        return image.GetThumbnailImage(imageDimensions.Width, imageDimensions.Height, null, IntPtr.Zero);
    }
}

Next time

This entry is getting pretty long, so I’m going to end it here. When I pick it up again next time, I’ll do the refactoring to pull the conditional logic out into the ScaledImage hierarchy we’re going to create and see where the code takes us after that. I suspect that the WriteScaledImage method is going to find its way into there as well, given its name. One big hint that methods want to be grouped together is when you discover that they have a common abstraction as part of their name. GenerateScaledImage and WriteScaledImage seem to both be crying out to be in a ScaledImage class to me, but we’ll have to see.
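To make that direction a little more concrete, here is a rough sketch of one shape the ScaledImage hierarchy could take. To be clear, all the names here (ScaledImage, ThumbnailImage, WidthFor, HeightFor) are my own guesses, not code from the project; the dimensions are the ones used in the tests above. The idea is that each subclass owns its target dimensions for both orientations, so both switch statements collapse into polymorphic dispatch, leaving only the single orientation check, written once:

```csharp
// Speculative sketch: each scaled-image size becomes a class that knows its
// own target dimensions, replacing the two switch statements in ImageWriter.
public abstract class ScaledImage
{
    // Target dimensions for portrait (width < height) source images.
    protected abstract int PortraitWidth { get; }
    protected abstract int PortraitHeight { get; }

    // Target dimensions for landscape source images.
    protected abstract int LandscapeWidth { get; }
    protected abstract int LandscapeHeight { get; }

    // The one remaining conditional: orientation selection, in one place.
    public int WidthFor(int sourceWidth, int sourceHeight)
    {
        return sourceWidth < sourceHeight ? PortraitWidth : LandscapeWidth;
    }

    public int HeightFor(int sourceWidth, int sourceHeight)
    {
        return sourceWidth < sourceHeight ? PortraitHeight : LandscapeHeight;
    }
}

public class ThumbnailImage : ScaledImage
{
    protected override int PortraitWidth { get { return 79; } }
    protected override int PortraitHeight { get { return 105; } }
    protected override int LandscapeWidth { get { return 140; } }
    protected override int LandscapeHeight { get { return 105; } }
}

// DetailsImage (230x306 portrait, 408x306 landscape) and FullSizeImage
// (600x800 portrait, 800x600 landscape) would follow the same pattern.
```

The three static factory methods on ImageWriter would then hand the right ScaledImage subclass to the constructor instead of an enum value, and GenerateScaledImage would simply ask it for the dimensions. Again, this is only my guess at where the refactoring lands; the real conclusion comes in the next installment.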

I’m so sorry that I haven’t posted any worthwhile content in a long time, but I was tied up managing a huge waterfall-ish project all fall. That project is over, and I’m working with several different agile teams now with varying levels of involvement, which should give me more time to blog. I’m also working on 4 different proposals for Agile 2007 and one for the PMI National Congress. More on those as they get more fully formed.

As always, if you’ve made it this far, thanks for reading, and please let me know if you have any comments. I’ve had to disable comments on the blog as the spammers have taken over the comment logs, so send the emails to me directly. I’ll post a summary of the best questions and my answers in my next post.

— bab

Brian’s Handy Dandy Rules of Framework Development

The whole basis of my talk at TechEd is that there are some non-technical rules around which creating good frameworks should revolve. Since I mined those rules from my earlier poll on this blog, I thought I should share my results with you. I give you

Brian’s Handy Dandy Rules for Framework Development

  • Clients come before frameworks
  • Ease of use trumps ease of implementation
  • Quality, Quality, Quality
  • Be an enabler
  • It’s a people problem

Clients come before frameworks

This was easily the most commonly heard bit of advice my poll revealed, and it matches up very nicely with my own experiences. The best frameworks are not created, they are mined out of existing code. The worst frameworks seem to have sprung from the twisted imagination of a rogue architect somewhere…

Ease of use trumps ease of implementation

As the author of a framework, you need to keep your eye on the ball of making others’ jobs easier. If you ever have to make a tradeoff between changing something to make it easier to implement and changing something to make it easier for your framework clients to use, always choose the latter. Write good documentation, ship your unit tests, and exercise your APIs by writing lots of in-house client code. Lots of eyes on the API helps as well.

Quality, Quality, Quality

Use Test Driven Development on your framework code, and enable framework users to use it on theirs. This means avoiding things like sealed classes and overzealous use of Code Access Security (CAS). ‘Nuff said.

Be an enabler

You can’t read people’s minds, so don’t presume to know what their key uses of your framework will be. Allow people to embrace and extend your framework through extension points. Document these extension points, give lots of examples, and create automation to help users with these tasks.
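As a small illustration of what an extension point can look like in practice (the names here are invented for the example, not from any real framework), a framework class can publish a narrow interface for users to plug into, plus a virtual hook for subclasses, so behavior can be extended without the framework author having to anticipate every use:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of two common extension-point styles.
public interface IRequestFilter
{
    string Apply(string request);
}

public class RequestPipeline
{
    private readonly List<IRequestFilter> filters = new List<IRequestFilter>();

    // Extension point 1: users register their own filters at runtime.
    public void AddFilter(IRequestFilter filter)
    {
        filters.Add(filter);
    }

    // Extension point 2: subclasses may override the final step.
    protected virtual string Finish(string request)
    {
        return request;
    }

    public string Process(string request)
    {
        // Run every user-supplied filter in registration order.
        foreach (IRequestFilter filter in filters)
        {
            request = filter.Apply(request);
        }
        return Finish(request);
    }
}
```

Documenting those two seams, with examples like this, is what turns a library from something people fight into something they build on.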

It’s a people problem

Building a framework is decidedly not a difficult technical problem. We’ve built these things before, we’ll build them again. It is primarily an issue of balancing people and their conflicting interests. Good architects know this and are adept at balancing these concerns.

Summary

In a nutshell, that’s my talk. I have a lot more detail, a lot more stories, and a lot more to say. If you want to hear it, come see me Friday morning, from 9–10:30 at TechEd 🙂

See you there,

bab