A Life Changing Event

After almost 8 years at Asynchrony, I resigned my position as VP of Engineering on Thursday to take a new position working with some of my old friends. The times at Asynchrony were some of the best times in my career, and I feel privileged to have worked with so many bright, driven, passionate people. The reason for this move has nothing to do with Asynchrony – I consider them to be one of the good guys in the industry. They get Agile there, they get Lean/Kanban, and they get delivering quality software. I will miss them, I will miss the challenges, and I will miss calling myself an Asynchronite.

But, old chapters close in our lives, and new chapters open, and this is one of those times.

I have accepted a position with Tier3, working alongside my old Patterns and Practices friends, Jim Newkirk, Scott Densmore, Tim Shakarian, Brad Wilson, Ade Miller (I think there are one or two more that I’m leaving out, but I only heard the whole list once. Apologies if I left someone off), as well as other teammates I’m very eager to meet. In my new role, I will be Product Owner, community leader, and one of the public faces of Tier3’s Platform as a Service family of offerings. And I even get to write some code!

I have a ton to learn, about the space, the product, the team and people, and that’s probably just touching the surface. I’ve been consulting, mentoring, and training for half of my career, and I’m excited about moving back into the product world. I get to work with world class engineers and an exciting company to create, to quote Ron Jeffries’ words from oh, so long ago, Something Magnificent.

new Chapter().Next().Open();

— bab

Re-energizing a Standup Meeting with Kanban

Problem: Standup meetings are more like status meetings than a collaboration meeting for the team.

Symptoms: 

  • The whole standup meeting is run by the Scrum Master, with no one doing or saying anything until called upon.
  • People on the team report their status back to the Scrum Master or team lead. No one or few people speak to the rest of the team, and interaction between people is pretty rare.
  • It’s difficult for people to gain an integrated picture of team status and progress across all the user stories. Individuals report around a circle, describing their progress against individual tasks on user stories, but no one is reporting on or focusing on the progress of the stories themselves.

In summary, all the data reported in the standup meeting is focused on reporting task status back to the Scrum Master, little collaborative conversation occurs, and no complete picture of progress in delivering user stories is to be found.

Solution: Change the standup meeting to focus on collaboration and the flow of value through the team. Change the level of conversation from talking about tasks, which represent effort, to talking about the user stories promised for the current iteration. Doing this promotes cross-team member discussion about the work and how they can complete it. This takes the Scrum Master out of the middle of the conversations and lets the team talk about what matters to them.

Method: The way I help teams focus more on the flow of user stories and value is to make the stories more visible parts of the standup. The Scrum Task Board is fine for showing the flow of tasks through their workflow, but user stories have other elements to their workflow, such as any sort of code reviews or business focused reviews. After getting the team in the habit of talking about tasks and their states, just to get them used to talking about their work, I like to introduce a Kanban board.

Kanban boards are used to represent both the course user stories take from the time a team starts working on them through the time that a team delivers that story, and the current status of all of the stories in play for that sprint. Having this visual representation of the work has a drastic effect on the standup and the team. They stop looking at me, the Scrum Master, and they start looking at the board. They start talking about the stories: where they are, what issues they need help with, when they’ll be finished with their current phase of work and ready to be moved forward… In other words, they begin collaborating. It’s magical 🙂

For those of you who have never seen a Kanban board, this is what one looks like:

image

This board models the workflow defined by the team. In this case, stories committed to the sprint begin in the Sprint Backlog column, and then move through Story Review, Development, Testing, Review, and then finally Done. During a sprint, small cards that represent each user story move across the board from left to right representing the work done to each item. Here is a board populated with work:

image

Here we see user stories in various stages of completion. Having this board as the focus of the standup allows the team to talk about the progress of user stories, and not get overly concerned with low level details that confuse the big picture.

Conclusion: On the team that I’ve been coaching for the past couple of months, introducing a Kanban board was the key to changing the standup meeting from being an exercise in status reporting to a productive, useful collaborative event. We learned how to focus on issues and the flow of value throughout our sprint, and to ignore the noise of personal status. Unanimously throughout the team, we agree that our new standups are much more productive in helping us deliver on our promises each sprint.

“You get what you measure” versus “What you measure, you can manage” – The Agile Metrics Dichotomy

(I just reinstalled Windows Live Writer and reconnected it to my blog, and it pulled down some drafts I wrote years ago. I have no idea when I actually wrote this or what context I had in mind, but it seems like a decent article, so I’m posting it :))

I’ve long been a fan of the “You get what you measure” school of metrics doubters, especially on agile projects. It’s really easy to decide to measure something, have the people being measured act in a totally predictable and rational way, and screw up a team beyond belief. For example, if I measure how productive individual programmers are, then it’s to the advantage of individuals to focus on their own work and spend less (or no!) time helping others. Completely kills teamwork 🙁

On the other hand, in our environment, where we are responsible for delivering projects to clients on time, on budget, etc, we have a responsibility and an obligation to know what is happening in each of our project teams. Management needs to know if we’re going to make our deadlines, customers need to know what is coming, project leaders need to be able to manage and set expectations, we need to satisfy contractual obligations, etc.

The challenge is to come up with a set of metrics and an unobtrusive way to gather data that allows metrics to be reported to interested stakeholders, and yet does not disrupt the team.

This is my challenge. And what follows are the basic metrics I’ve been using as a way of meeting that challenge. These metrics are balanced between measuring things that meaningfully describe what’s happening inside teams, and creating the right environment and behaviors that will help the team reach its goals rather than incent poor behavior.

How Fast Are We Going?

The gold standard of progress on an agile team is its velocity. This can be measured in story points per week for teams following a more “traditional” agile process like Scrum or XP, or it can be measured in cycle time for those teams that have evolved to a Lean or Kanban approach. The team I’m coaching now is somewhere in the middle, closer to Scrumban.

So our standard metric is points per week. We measure weekly, and our measurements are reasonably consistent, varying between 25 and 30 points a week. We have the occasional week when we’re significantly below this, and some weeks where we way overachieve. Our historical velocity chart looks like this:

At first, our velocity varied a lot, mostly because we were still learning the codebase (this project has a large legacy codebase that we are adding functionality to). As of late, though, we’ve become a lot more consistent in what we finish each week. Not perfect, but a lot better.
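To make the points-per-week arithmetic concrete, here is a small sketch of how a velocity history could be summarized. The class, method names, and numbers are entirely mine, invented for illustration; the idea is simply that the average and the obvious outlier weeks are the conversation starters, not judgments.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: summarize a weekly story-point history.
public static class VelocitySummary
{
    public static double AveragePointsPerWeek(IEnumerable<int> weeklyPoints) =>
        weeklyPoints.Average();

    // Weeks far from the average are prompts for a conversation at the
    // retrospective, not a performance rating.
    public static IEnumerable<int> OutlierWeeks(IList<int> weeklyPoints, double tolerance)
    {
        double average = weeklyPoints.Average();
        return Enumerable.Range(0, weeklyPoints.Count)
                         .Where(week => Math.Abs(weeklyPoints[week] - average) > tolerance);
    }
}
```

With a history like { 27, 25, 30, 12, 28 }, the one unusually slow week stands out as something to ask the team about.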

But What About Quality?

Measuring velocity is not enough, as teams can find ways to go faster when pressed to do so. This is a great example of measuring something that is going to create the wrong behavior. If a team is rated on how fast they go, they are going to go as fast as they need to. They’ll either cut corners and quality will slip, or they’ll inflate the estimates to make the numbers start to look better without actually accomplishing anything more.

What I’ve found is that the best way to avoid incenting the team to cut corners is to keep an eye on quality, too. So, in addition to tracking velocity, I also track defect counts. That chart looks like this:

image

This shows our weekly quality over time since the beginning of our project. We started off pretty high, put in some good effort to get our quality back under control, and have done a decent job of keeping it under control most of the time.

Looking at the Bigger Picture

What I do is use these two metrics together to help me understand what’s happening on the team and to ask questions about what may be causing ebbs and flows in velocity or quality. We did a good job of keeping the defect count pretty low during February and March, but the count crept back up, pretty quickly, to 20 or so active defects by the beginning of April. It was towards the end of this relatively quiet period that the team began to feel pressure to deliver more functionality more quickly. They did what they thought was best and focused more on getting stories “finished” without spending quite as much time on making sure everything worked. They worked to what was being measured.

Up steps Project Manager Man (that’s me, for those of you following along at home), and I was able to point out what was happening with quality and, with the team’s help, relate that back to the pressure they had been feeling. Together, we came up with the idea that velocity and quality have to improve together or teams create false progress. I helped the team manage this mini-issue on their own by pointing it out to them through metrics.

The Final Velocity/Quality Metric

As a way to keep an eye on the general trend of where the work is going on the team, I also track time spent working on Acceptance Tests versus time spent fixing defects (defects include stories that are kicked back from QA or the Product Owners prior to acceptance) versus time spent developing. My feeling is that if we can maximize development effort and minimize defect remediation time, then the team is operating as effectively as it can, given its circumstances.
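A sketch of that three-way split might look like the following. The class and property names are my own invention, purely illustrative of the bookkeeping described above:

```csharp
// Hypothetical sketch of the effort-split metric: hours spent developing
// vs. writing acceptance tests vs. fixing defects (including kickbacks).
public class EffortSplit
{
    public double DevelopmentHours { get; set; }
    public double AcceptanceTestHours { get; set; }
    public double DefectRemediationHours { get; set; }

    private double TotalHours =>
        DevelopmentHours + AcceptanceTestHours + DefectRemediationHours;

    // The stated goal: maximize this share while the defect share shrinks.
    public double DevelopmentShare =>
        TotalHours == 0 ? 0 : DevelopmentHours / TotalHours;

    public double DefectRemediationShare =>
        TotalHours == 0 ? 0 : DefectRemediationHours / TotalHours;
}
```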

The Most Important Point About Metrics

Metrics should never be used to judge a team or used to form conclusions about what is happening. They should be treated as a signpost along the way, an indicator of things that are not easily observed in real time but can be measured to point out trends. They should always serve as the starting point for a conversation, the beginnings of understanding an issue. If a manager or executive begins to use metrics as a way to judge a team, the metrics will be gamed and played from that point on. They will become useless, because they will be a reflection of what reality the manager wants to see, rather than what is truly happening.

In each and every one of these cases, I have used the metrics to confirm what I thought I was seeing, as the starting point of a conversation with the team about what they saw happening, and a way to measure the impact of issues on the team. After speaking with the team, the metrics are also useful to judge whether the changes chosen by the team to improve are effective.

Metrics are no substitute for being on the ground with your team, in their team room, pairing with devs and testers, working with product owners, feeling the vibe in the room. Remember, it’s people over process, right? 🙂

— bab

New blog home

Hey, sorry for the inconvenience. We lost our hosting at our previous site (Thanks, Peter, for doing it for over 5 years!), so I’m reloading all content here, setting up a nice theme, etc. It might take a while, but it’ll eventually get done.

Thanks for your patience…

— bab

Test Driving a WPF App – Part 3: Switching to new window

This is the third part in what I hope to be a four-part series about creating a WPF application using TDD to drive the functionality and MVVM to keep everything testable as we go. The idea is to use view models to split away functionality from the view and have it available for testing while not losing any capabilities along the way. If you haven’t read the first two parts, you may want to do that.

In this part, we’ll see what it takes to have a user gesture, in our case either a double-click on a row in our ListView or clicking a button, take us to a different page in our application. We’ll write the code to get to the other window in this post and leave the corresponding code to get back as an exercise for readers, should they want to confirm they understand how this all works. So, here we go.

The end result

At the conclusion of this process, we’re going to be able to start from the opening page, showing players and their numbers:

image

and navigate to this second page:

image

About this time, I’m feeling pretty good about my screen design abilities as well, as these pages are beautiful! (One thing I’ve learned about myself over the years is that I have very little taste when it comes to designing UIs :))

Part 1 – Writing the command logic to switch the pages

As before, when we’re adding new behavior based on a user gesture, we start by writing the ICommand class. We do this both because it fleshes out the details of that class and because it drives us down the road of creating the interface that the command class is going to use. This interface will eventually be implemented by the view model class, the TeamStatControlViewModel in our case. We don’t “officially” know that at this point, though, so we work through this interface, which represents one of the growing number of roles our view model fulfills. For more information on this style of development, read this paper by Steve Freeman.

Let’s begin by writing our first test for the command class. This is the test that invokes the behavior that the role we’re collaborating with makes available to us. In keeping with my experiment of creating lots of little fixtures, each of which represents a complete, but individual, behavior of my system, I’ve created a fixture called InterPageNavigationFixtures. My idea is that this fixture will hold all tests that document how navigating between pages works. Our first test:

    [Fact]
    public void SwitchToPlayerViewStartedWhenViewPlayerCommandExecutes()
    {
        var viewPlayerSwitcher = new Mock<IViewPlayerModeSwitcher>();
        ViewPlayerModeSwitchCommand cmd = new ViewPlayerModeSwitchCommand(viewPlayerSwitcher.Object);

        cmd.Execute(null);

        viewPlayerSwitcher.Verify(s => s.SwitchToPlayerViewMode());
    }

Like the other test we created for a command, we define the interface that represents the role we’re going to use, IViewPlayerModeSwitcher, and then our command class. The rest of the test ensures that the correct behavior happens when we invoke the command’s Execute method.

To make this test compile, we have to create some things, including the interface, the command, and some very minimal method signatures in both.

In the ViewModels project, we add the interface:

    namespace ViewModels
    {
        public interface IViewPlayerModeSwitcher
        {
            void SwitchToPlayerViewMode();
        }
    }

as well as the skeleton of the command:

    namespace ViewModels
    {
        public class ViewPlayerModeSwitchCommand
        {
            public ViewPlayerModeSwitchCommand(IViewPlayerModeSwitcher modeSwitcher)
            {
            }

            public void Execute(object parameter)
            {
            }
        }
    }

Run test, test fails, implement minimal behavior to make test pass:

    public class ViewPlayerModeSwitchCommand
    {
        private readonly IViewPlayerModeSwitcher modeSwitcher;

        public ViewPlayerModeSwitchCommand(IViewPlayerModeSwitcher modeSwitcher)
        {
            this.modeSwitcher = modeSwitcher;
        }

        public void Execute(object parameter)
        {
            modeSwitcher.SwitchToPlayerViewMode();
        }
    }

At this point, the test passes, and we can invoke the switching behavior that will be implemented in our view model shortly.

The last thing I want to do here before moving on is to expose the command through the view model so the view can bind to it. A simple test, much like others we’ve seen before:

    [Fact]
    public void ViewModelExposesSwitchToPlayerViewModeCommandAsICommand()
    {
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, null, null);

        Assert.IsAssignableFrom<ViewPlayerModeSwitchCommand>(viewModel.PlayerViewMode);
    }

and we implement this by simply adding an auto-property to our view model called PlayerViewMode.
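The change is tiny; assuming the constructor we build out later in this post, it amounts to something like this (a sketch of mine, with the unrelated members elided):

```csharp
public class TeamStatControlViewModel : IApplicationExitAdapter, IViewPlayerModeSwitcher
{
    // Auto-property the view binds to; this is what the test above asserts on.
    public ViewPlayerModeSwitchCommand PlayerViewMode { get; private set; }

    public TeamStatControlViewModel(/* other dependencies elided */)
    {
        // The view model itself plays the IViewPlayerModeSwitcher role.
        PlayerViewMode = new ViewPlayerModeSwitchCommand(this);
    }

    public void SwitchToPlayerViewMode()
    {
        // Implemented in the next part.
    }
}
```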

Part 2 – Making the view model change application modes

In this next part, the view model will begin to switch modes in the application between the Player Summary mode and View Player mode. I’m intentionally calling these modes because I don’t want the view model to know about windows, views, and other UI-ish specifics. As far as it is concerned, the application has two modes in which it can work, and someone else is going to translate from information about modes and transitions to swapping out views and controls. This is purely an example of separating concerns, where the view model only cares about the modes of the application, and another class, more than likely in the same assembly as the view logic, will know how to act upon modes switching.

Starting down this road, we’ll write a test that drives out the interface for how we’ll tell something outside of the view model that a mode is changing. We’ll be talking to a role I chose to call the ModeController, commanding it through its PlayerViewMode method. Here is the test:

    [Fact]
    public void ModeControllerCommandedToSwitchToPlayerViewByViewModel()
    {
        var modeController = new Mock<IModeController>();
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, modeController.Object, null);

        viewModel.SwitchToPlayerViewMode();

        modeController.Verify(nc => nc.PlayerViewMode(It.IsAny<Player>()));
    }

Since the TeamStatControlViewModel needs access to the mode controller, we had to modify its constructor to take an argument of that type. We call the method we’re trying to drive through the tests, SwitchToPlayerViewMode, and then comes our assertion. Our assertion is a bit interesting here – maybe I’m thinking too far ahead, but I’m considering how the view that is going to eventually be rendered is going to work. It’s going to need to know about the player that was selected, so it can show it. In fact, that player is probably going to become its DataContext. To give the new view, which will be rendered by someone else at some other time, access to the player, I’m passing it along here. I’m not writing the code that will actually cause it to be passed yet, as you can see by the It.IsAny&lt;Player&gt; argument in the Verify statement, but I’m declaring that an object of that type is going to be passed to it when the system runs for real. Let’s implement this small step:

First, the IModeController, which sits in the ViewModels assembly:

    namespace ViewModels
    {
        public interface IModeController
        {
            void PlayerViewMode(Player player);
        }
    }

and then the minimal code in the view model:

    public class TeamStatControlViewModel : IApplicationExitAdapter, IViewPlayerModeSwitcher
    {
        private readonly IApplicationController applicationController;
        private readonly IModeController modeController;
        private readonly IPlayerRepository playerRepository;

        public TeamStatControlViewModel(
            IApplicationController applicationController,
            IModeController modeController,
            IPlayerRepository playerRepository)
        {
            this.applicationController = applicationController;
            this.modeController = modeController;
            this.playerRepository = playerRepository;
            ApplicationExit = new ApplicationExitCommand(this);
            PlayerViewMode = new ViewPlayerModeSwitchCommand(this);
        }

        public void SwitchToPlayerViewMode()
        {
            modeController.PlayerViewMode(new Player());
        }
    }

Please note that I’m only showing the interesting bits here, and removing code that didn’t change from the last time we looked at this class. The bits that did change are an additional parameter to the constructor of this class and the addition of the SwitchToPlayerViewMode method. Since that method came from the IViewPlayerModeSwitcher, I also went ahead and implemented the interface here. (Note – since this isn’t a lesson in TDD, I skipped the step of how I used the Add Method Parameter refactoring to slowly add the new constructor and move away from the old one. I didn’t do this in one big step; I did it in small steps, using the tests to guide me to what to change next.)

The next step is to get the player that is selected in the ListView and pass that along in the call, instead of passing an empty player. Here is the test used to drive that:

    [Fact]
    public void SamePlayerPassedToModeControllerAsSetInViewModel()
    {
        var modeController = new Mock<IModeController>();
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, modeController.Object, null);
        Player player = new Player();
        viewModel.SelectedPlayer = player;

        viewModel.SwitchToPlayerViewMode();

        modeController.Verify(nc => nc.PlayerViewMode(player));
    }

There are only small differences between this test and the one before it. The main one is that I’m defining a new property on the view model, SelectedPlayer. A ListView sets this property every time the selected item changes in its view, and this property will allow our view model to be informed about which player is selected at any time. Now we can pass along that player in the PlayerViewMode call, and all is well. Here are the minor changes in TeamStatControlViewModel:

    public Player SelectedPlayer { get; set; }

    public void SwitchToPlayerViewMode()
    {
        modeController.PlayerViewMode(SelectedPlayer);
    }

Part 3 – Hooking up what we have

Despite the fact that I write a lot of tests for my code, I still like to see it work every now and then! This would be one of those times. We’re at the point now where we should be able to wire up everything we have and get to the point where the mode controller would be able to swap out controls, were we to have implemented that last piece of functionality. Let’s hook up what we’ve got and see what works.

Let’s start by making a couple of quick changes in the XAML. We’ll add a SelectedItem attribute and binding in the ListView and we’ll hook up the ViewPlayer button to the command:

    <ListView IsSynchronizedWithCurrentItem="True" Grid.Row="1" VerticalAlignment="Stretch"
        SelectedItem="{Binding SelectedPlayer}" ItemsSource="{Binding Players}">

    <Button Command="{Binding PlayerViewMode}" Grid.Column="0" Content="View Player Stats"
            HorizontalAlignment="Right" Style="{StaticResource NormalButton}"/>

That should be enough to have the user gestures be sent to our view model.

The view model itself is complete, but we need to make some class implement the IModeController. Again, I think the best place, at least for now, is the App class, so we’ll put that interface there – I do suspect that we’ll eventually pull that out of the App class and make it its own class, so that we can test it, but we haven’t had that need yet. Someday, maybe…

That leaves some very minor changes in App.xaml.cs. We add the interface to the class definition, implement the PlayerViewMode method, and tell the container that App implements the IModeController interface (line 8), which allows it to pass an object of that type to the view model’s constructor.


     1: public partial class App : Application, IApplicationController, IModeController
     2: {
     3:     private readonly IUnityContainer container = new UnityContainer();
     4:
     5:     private void Application_Startup(object sender, StartupEventArgs e)
     6:     {
     7:         container.RegisterInstance<IApplicationController>(this);
     8:         container.RegisterInstance<IModeController>(this);
     9:
    10:         container.RegisterType<IPlayerRepository, PlayerRepository>(new ContainerControlledLifetimeManager());
    11:         container.RegisterType<TeamStatControlViewModel>(new ContainerControlledLifetimeManager());
    12:         container.RegisterType<TeamStatControl>(new ContainerControlledLifetimeManager());
    13:
    14:         container.RegisterType<MainWindow>(new ContainerControlledLifetimeManager());
    15:
    16:         Current.MainWindow = container.Resolve<MainWindow>();
    17:         Current.MainWindow.Content = container.Resolve<TeamStatControl>();
    18:         Current.MainWindow.Show();
    19:     }
    20:
    21:     public void ExitApplication()
    22:     {
    23:         Current.Shutdown();
    24:     }
    25:
    26:     public void PlayerViewMode(Player player)
    27:     {
    28:         ;
    29:     }
    30: }

At this point, we’re good to try it out, so start up the application, click on a name, press View Player, and… OK, still nothing happens because we don’t have another window yet. I tested this by putting a breakpoint on line 28 and debugging it to see that I made it to here, which I did, and that the correct player was being sent here. I know my tests told me that this should work, but in the immortal words of Ronald Reagan, “Trust, but verify”.

Part 4 – Showing the other view

We’re in the home stretch now. Let’s show the other view. First, let’s just assume that I created another view. You can see it way back at the top – it’s the second window, with a big name and number in it. Once you’re at this point, all you need to do to enable the app to switch between the main view and this view is to implement the App.PlayerViewMode method correctly. I couldn’t find a way to test this, since I’m manipulating the UI directly, so I just wrote the code:

    public void PlayerViewMode(Player player)
    {
        Current.MainWindow.Content = null;

        Current.MainWindow.Content = container.Resolve<PlayerDetailControl>();
        ((PlayerDetailControl) Current.MainWindow.Content).DataContext = player;
    }

A couple of things to note here. First, I didn’t register the type, PlayerDetailControl, with the container. The Unity container will do its best to create any object that hasn’t been registered with it by calling whatever constructor it can find whose arguments it knows how to build. In our case, the class has a default constructor, so Unity knew how to build it. Since we didn’t register it, Unity will just create a new instance of the control every time I ask for it. I can then assign the player to be its DataContext, and we’re done. That turned out to be very easy 🙂

At this point, we should be able to select a player, click on the View Player button, and the other view should appear. Success! But we’re not quite finished yet… sorry! Two more points to make.

Part 5 – Double clicking and CanExecute for the command

Before we finish, I want to cover two other things. The first is how to enable double click behavior for the ListView. There isn’t a simple way to attach a command behavior to the ListView in the version of WPF I’m using. I know that AttachedBehaviors are part of the new WPF version that’s in development now, but I don’t have that, so you’re stuck with this explanation 🙂

Since there wasn’t a way to attach a command to the double-click event, I had to put code into the code-behind page, which is the first time I’ve had to do that in this application! I wired up an event handler to the double-click event through a small change in the XAML for the ListView:

    <ListView IsSynchronizedWithCurrentItem="True" Grid.Row="1" VerticalAlignment="Stretch"
        MouseDoubleClick="ListView_MouseDoubleClick" SelectedItem="{Binding SelectedPlayer}" ItemsSource="{Binding Players}">

and added a trivial handler in the TeamStatControl.xaml.cs file:

    private void ListView_MouseDoubleClick(object sender, MouseButtonEventArgs e)
    {
        TeamStatControlViewModel viewModel = (TeamStatControlViewModel) DataContext;
        viewModel.PlayerViewMode.Execute(null);
    }

All this method does is call the Execute method of the same command that the button runs. Trivial code. So now we can double-click on a player in the list and the right thing happens.

But what happens if no player is selected (OK, I couldn’t figure out a way to make this happen for the real list view, but it seemed like an interesting example and thought experiment!). We need to implement the CanExecute behavior for the ViewPlayerModeSwitchCommand. As usual, we do this through tests.

In our first test, we confirm that the command cannot execute if no player is selected:

    [Fact]
    public void ViewModelFunctionalityIsNotAvailableWhenNoPlayerIsSelected()
    {
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, null, null);
        ICommand playerViewModeCommand = viewModel.PlayerViewMode;

        Assert.False(playerViewModeCommand.CanExecute(null));
    }

and we implement CanExecute as simply as we can to make the test pass:

    public bool CanExecute(object parameter)
    {
        return false;
    }

Our next test is a little more interesting, as it requires that CanExecute return true if a player is selected:

    [Fact]
    public void ViewModelFunctionalityIsAvailableWhenPlayerIsSelected()
    {
        TeamStatControlViewModel viewModel = new TeamStatControlViewModel(null, null, null);
        ICommand playerViewModeCommand = viewModel.PlayerViewMode;
        viewModel.SelectedPlayer = new Player();

        Assert.True(playerViewModeCommand.CanExecute(null));
    }

The only real modification here is that we use the SelectedPlayer property of the view model to set the player that would be selected through the real view. We expect that the command class is able to query this SelectedPlayer property through its reference to the view model, using the IViewPlayerModeSwitcher interface. Well, that’s what we’d expect, anyhow…

    public bool CanExecute(object parameter)
    {
        return modeSwitcher.SelectedPlayer != null;
    }

The SelectedPlayer property isn’t available through that interface, so as our final step along this long and tortuous path, we add it to the interface, our implementation code compiles, our tests all run, and we celebrate by going home early for the day.
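The post doesn’t show the widened interface itself, so here is my reconstruction of what it looks like after that final step:

```csharp
namespace ViewModels
{
    public interface IViewPlayerModeSwitcher
    {
        void SwitchToPlayerViewMode();

        // Added so the command can ask whether a player is selected
        // without knowing about the concrete view model class.
        Player SelectedPlayer { get; }
    }
}
```

The view model’s existing auto-property, `public Player SelectedPlayer { get; set; }`, already satisfies this get-only interface member, so no further change is needed there.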

Conclusion

I hope people are finding this case study to be useful. I found a lot of different resources on the web about doing bits and pieces of your WPF/MVVM application test-first, but I still had a lot of unanswered questions. First and foremost on that list was how to do simple navigation like I’m doing. By putting together most of a whole application, I hope it’s clearer to readers how the different pieces fit together.

I do plan on doing one more step along the way, probably posted tomorrow morning (8/25/2009). I want to show how to handle pop-up windows, including windows dialog boxes as well as custom WPF windows. I had to deal with this on my own project, and it took a bit of thinking to figure out how to do it in a testable way.

Until tomorrow!

— bab

Using Unity to make Singleton-ness an implementation detail

The most common, canonical use of the Singleton pattern irritates me. Every time I see it, I think, “The singleton-ness of a class is an implementation detail, and shouldn’t leak out through that class’s interface.” Yet, that’s what it always does.

In the usual implementation, someone decides that some new class they’ve written is so resource critical, or so standalone, or so useful that they can have only one instance of that class and that instance has to be visible everywhere. Ignoring the intricacies of threading and so on, a Singleton usually looks exactly like this:

public class MyImportantResource
{
    private static MyImportantResource instance;

    private MyImportantResource()
    {
        // ...
    }

    public static MyImportantResource Instance()
    {
        if (instance == null) instance = new MyImportantResource();
        return instance;
    }
}

And it gets used everywhere like this:

public class Consumer
{
    public Consumer()
    {
        var importantResource = MyImportantResource.Instance();
        importantResource.FooTheBar();
    }
}

Now what happens when I decide I want two of these? Or I decide that I want to let people create as many of these resources as possible? Or, even more commonly, what happens when I want my Consumer class to use a mocked out version of MyImportantResource? I’m scre^h^h^h^hin trouble.

The trouble comes from the fact that Consumer is reaching out into the ether and grabbing the resource it needs on its own. It isn’t giving me, the developer, control over what type of ImportantResource to use or where the ImportantResource is coming from. The Inversion of Control principle is being violated by having the class gather its own resources rather than having its creator decide and pass resources to it.

My typical solution

What I typically have been telling people is that it is OK to have a singleton, treat it as a singleton, and love the singleton pattern… within reason. At the point in your code where you’re creating all your objects and wiring them up, you should feel perfectly free to use singletons. But no layer beneath that should have any knowledge of the singleton-ness of a class. Grab that singleton and pass it into whoever needs it, preferably as an interface, and you’ve restored order and testability to your system by again returning to the Inversion of Control principle.
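In code, that advice might look something like the sketch below: the singleton is touched only at the top-level wiring point, and everything beneath it sees an interface. (The IImportantResource interface and the Consumer constructor that accepts it are assumptions here; the singleton call is the one from above.)

```csharp
// Sketch: only the wiring code knows MyImportantResource is a singleton.
// Assumes MyImportantResource implements an IImportantResource interface
// and Consumer takes that interface in its constructor.
public static class Wiring
{
    public static Consumer BuildConsumer()
    {
        IImportantResource resource = MyImportantResource.Instance();
        return new Consumer(resource);   // Consumer never sees the singleton-ness
    }
}
```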

My DI solution

Dependency Injection containers solve this problem for you automatically. I’ve been using Unity lately, and I’ve grown to really like it. One of the things I really like about it, and I’m sure all DI containers have this property, is that I can register a type with the container and tell it that I want it to create only a single instance of that type and hand it out whenever someone asks for that type. Bam! I have a singleton again, but I’ve completely separated the singleton-ness of the class from its singleton-like usage. Major win. Here is the code that lets me do that.

First, the interface over the important resource:

public interface IImportantResource
{
    void FooTheBar();
}

Now the changes in the MyImportantResource class that bring it back to being a regular class. Note that all mention of singleton-ness is gone from this class. It’s just a plain old class again, which is how we like it.

public class MyImportantResource : IImportantResource
{
    public MyImportantResource()
    {
        // ...
    }

    public void FooTheBar()
    {
        throw new NotImplementedException();
    }
}

Next, the changes in the Consumer class that promote the loose coupling encouraged by Inversion of Control, by injecting an IImportantResource at construction:

public class Consumer
{
    public Consumer(IImportantResource importantResource)
    {
        importantResource.FooTheBar();
    }
}

And finally, the object construction code I can write:

public static class Configurator
{
    private static IUnityContainer container;

    public static void Main(string[] args)
    {
        container = new UnityContainer();

        container.RegisterType<IImportantResource, MyImportantResource>(new ContainerControlledLifetimeManager());

        Consumer consumer = container.Resolve<Consumer>();
    }
}

The interesting bit of code here is the RegisterType call, where I register MyImportantResource. I’m telling the container that whenever someone asks for an IImportantResource, it should give them a MyImportantResource object, and that it should hand back the same instance every time (that’s what the new ContainerControlledLifetimeManager() says). I’ve magically turned MyImportantResource into a singleton class without polluting that class’s code with static state and an Instance() method.
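To convince yourself of the singleton behavior, you can resolve the interface twice and compare the results:

```csharp
// Continuing inside Main, after the registration above:
IImportantResource first = container.Resolve<IImportantResource>();
IImportantResource second = container.Resolve<IImportantResource>();
// With ContainerControlledLifetimeManager, both variables refer to the same
// object, so ReferenceEquals(first, second) is true. Swap in a
// TransientLifetimeManager (or no lifetime manager at all) and each Resolve
// would hand back a fresh instance instead.
```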

Conclusion

Operationally, nothing has changed. Assuming people always go to the container for their object instances, MyImportantResource is still a singleton: I will have only one instance of it, and it is still accessible from a single place. Should I ever decide to allow 4 instances of it, I can create my own lifetime manager class that will create up to four instances and manage them for me. But now all of my classes are just that, classes. The decision of how many instances of the resource exist is separated from the operation of the resource, which allows the two different design issues to evolve separately. And I’ve applied Dependency Injection to the Consumer class, which allows mock versions of the resource to be injected, enabling testing scenarios that were more difficult when it was using the singleton.

— bab

Handling interactions between ViewModel classes

I’ve been puzzling over what seemed to be an insurmountable problem lately. It seemed that the guys who came up with the MVVM design pattern totally missed the boat on anything more than a simple, single-page app. It couldn’t be me, since I’m a SuperGenius (TM).

Well, OK, it did turn out to be me. Once I stopped flailing about in WPF-land and actually thought about the problem, it became easy.

The Problem

What I was trying to do seemed pretty simple, and I couldn’t figure out how to make it work, which is why I was sure it was them and not me – how wrong I would be…

Basically, I had an app with multiple views. In one view, I was doing something that caused some data to be updated. I wanted to show the updated data in another view. Since the two views were using two different ViewModels, I couldn’t figure out how data binding by itself could solve the problem. Then it came to me – the two different ViewModels were sharing the idea of the data they were both showing, and data is supposed to live in the Model layer. Duh!

Once this epiphany drilled its way through my head, I figured out how to solve the problem (Note – I’m 100% new to WPF, so there is every chance there is an easier way to do this. If you know it, please let me know!)

The Solution

The key to this was to make both my ViewModels implement INotifyPropertyChanged to hook them up to propagate changes to and from their Views. Then I created an event in the Model that would be raised whenever the underlying data they were sharing was changed. My model looked like this:

public class Model
{
    private String modelValue;

    public delegate void ModelChangedEvent(object sender, ModelChangedEventArgs e);
    public event ModelChangedEvent ModelChanged;

    public void SetValue(string newValue)
    {
        // Capture the old value before overwriting it, so OldValue is accurate.
        string oldValue = modelValue;
        modelValue = newValue;
        if (ModelChanged != null)
        {
            ModelChanged(this, new ModelChangedEventArgs { OldValue = oldValue, NewValue = newValue });
        }
    }
}
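The ModelChangedEventArgs class isn’t shown in the post; a minimal sketch, inferred from the object-initializer usage in SetValue, would be:

```csharp
// Minimal sketch: just the two string properties SetValue populates.
public class ModelChangedEventArgs : EventArgs
{
    public string OldValue { get; set; }
    public string NewValue { get; set; }
}
```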

And I had my target ViewModel listen for this event. In the event handler for the ModelChangedEvent, the ViewModel used the newValue to set the value it needed to show on the View, which caused the PropertyChanged event to be raised, and everything worked like a champ. Here is the target ViewModel:

public class ReflectedViewModel : INotifyPropertyChanged
{
    private string reflectedValue;
    private Model model;

    public event PropertyChangedEventHandler PropertyChanged;

    public Model Model
    {
        get { return model; }
        set
        {
            if (model != null) throw new InvalidOperationException("Attempted to set model more than once.");
            model = value;
            model.ModelChanged += model_ModelChanged;
        }
    }

    void model_ModelChanged(object sender, ModelChangedEventArgs e)
    {
        ReflectedValue = e.NewValue;
    }

    public String ReflectedValue
    {
        get { return reflectedValue; }
        set
        {
            if (value.Equals(reflectedValue) == false)
            {
                reflectedValue = value;
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs("ReflectedValue"));
                }
            }
        }
    }
}
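The ViewModel on the editing side isn’t shown in the post either. A hypothetical version that closes the loop might look like this; the class and property names here are my invention, but the pattern is just a View-bound property that pushes each edit into the shared Model:

```csharp
// Hypothetical source ViewModel: its View data-binds to EnteredValue, and each
// set pushes the new value into the shared Model, which raises ModelChanged
// and so updates the ReflectedViewModel on the other view.
public class SourceViewModel
{
    private Model model;

    public Model Model
    {
        get { return model; }
        set { model = value; }
    }

    public string EnteredValue
    {
        set { model.SetValue(value); }
    }
}
```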

You can download the entire example from here.

Conclusion

This was not a hard problem to solve, once I stopped and actually thought about it. I got so wrapped up in the new framework and toys (WPF/XAML) that I forgot about everything else I knew for a bit 🙂

As usual, any and all comments appreciated. Comments telling me about easier and more idiomatically correct ways of writing this are 100% welcomed!

— bab

Using Powershell to diagnose installation failures

I was trying to install the Application Block Software Factory, part of Enterprise Library 3.1, the other day, and I ran into a problem. During the installation, I got a failure stating that the necessary installer types could not be found in “c:\program files\Microsoft Visual Studio 8\common7\ide\Microsoft.Practices.EnterpriseLibrary.BlockFactoryInstaller.dll”. I was instructed to see the LoaderExceptions property of the exception for details.

Huh? How in the world was I supposed to see this property of an exception that I didn’t even have access to?

Powershell to the rescue

Hmmm, I thought. Based on a previous blog posting, I remember that I found a way to load an assembly from a file, and I knew that I could inspect the types in an assembly once it was loaded. Maybe I could follow this process to learn something about what was happening.

So, I fired up powershell, and typed in the following command (note that I’m at home and not at work in front of the PC where I did this. The paths are as close as I can remember…)

$assembly = [System.Reflection.Assembly]::LoadFile(“c:\program files\Microsoft Visual Studio 8\common7\ide\Microsoft.Practices.EnterpriseLibrary.BlockFactoryInstaller.dll”)

Once I had the assembly loaded like this, I used its GetTypes() method to inspect the types in the assembly, and that’s when I got the same exception as before.

After a little investigation, I came across the $Error special variable, which seems to hold an array of the last exceptions thrown during powershell execution. I was able to get to the exact exception I saw at the command line through this variable by typing $Error[0].

I investigated further by using the get-member cmdlet, as

$Error[0] | get-member

which told me that the object returned from $Error[0] had an Exception property on it. I followed on a bit, and looked at the members of the exception I could get to using $Error[0].Exception. Here, it turned out, there was a property called LoaderExceptions, which was the exact property that I had been instructed to see by the error message.

Looking at that property as:

$Error[0].Exception.LoaderExceptions[0]

gave me the exact right answer. It was looking for Microsoft.Practices.Recipes.dll, an assembly loaded by GAX, but it couldn’t find it. I searched for that assembly, and I did eventually find it, but it was installed beneath Orcas, not Whidbey, both of which I had installed on my machine.

The solution

So, to make a long story short, I reinstalled GAX, this time installing it for VS.Net 2005, and all was well. I was able to install the Enterprise Library in its entirety, and I was able to proceed.

But without powershell’s ability to let me iteratively understand what was happening, and to explore the objects involved, I don’t know how I would have solved this problem.

— bab

Episode 2 – Filtering by Genre

Update — Craig Buchek pointed out something about one of my tests that he didn’t like. My test entitled PresortedBookListWithSomeBooksInCorrectGenreOnlyReturnsBooksInCorrectGenre made an assumption about data ordering that was stronger than was necessary for the intent of the test. I didn’t need to show that the elements came out of the filtered list in sorted order for that test — I only needed to show that the elements were present in the filtered list. I’ve updated the area around that test with new content explaining the interesting lesson that Craig taught me. Thanks to Craig and all the participants on the XPSTL list.

OK, time for story number 2 —

I want to see just those books that are in a specific genre, sorted by title, so that I can survey which books I have on particular topics.

Fortunately for us, this turns out to be trivial in .Net 2.0 (Sorry, Java guys!).

First case — test for 0

As I said in the previous episode, when faced with the challenge of implementing a story that involves multiple results, follow the rule of 0, 1, many. So here is my test for getting no books back when there are no books in my genre:

[Test]
public void NoBooksReturnedWhenFilteringEmptyListOfBooksForGenre()
{
    BookListManager manager = new BookListManager();

    List<Book> filteredBookList = manager.GetBooksInGenre("Genre");

    Assert.IsEmpty(filteredBookList);
}

I puzzled a bit about the name of our new member function, GetBooksInGenre. Should it be FilterBooksByGenre, GetAllBooksInGenre, or any one of several other candidates? I played with each of them, trying them out for size, thinking about whether or not I liked the way the API felt. The fact that I was thinking about the API now, before I implemented the functionality, is actually pretty important. This is one of the key differences between Test Driven Development and Test-at-the-same-time Development or Test After Development. In the latter two kinds of development, you write the code, putting a stake in the ground representing a potentially significant amount of work. It is only after the code is written that you start exercising the interface that you’ve written. In the former, using Test Driven Development, you play with the interface first, without regard to how the code will be written, and have the opportunity to get things as right as you can now, before that stake gets planted.

So I finally settled on the API I show in the test. Let’s write enough code to make that test compile but fail:

public List<Book> GetBooksInGenre(string genre)
{
    return null;
}

and the final code:

public List<Book> GetBooksInGenre(string genre)
{
    return bookList;
}

Again, remember to watch the test fail to make sure that the test can fail, and that your code is making it pass.

Second case — test for 1

Next test is to create a list with one book in it, and that book should have the correct genre being searched for, and filter for that genre:

[Test]
public void SingleBookReturnedWhenFilteringListWithOneBookInCorrectGenre()
{
    BookListManager manager = new BookListManager();
    manager.Add(new Book("Title", "Genre"));

    List<Book> filteredBookList = manager.GetBooksInGenre("Genre");

    Assert.AreEqual(1, filteredBookList.Count);
    Assert.AreEqual("Title", filteredBookList[0].Title);
    Assert.AreEqual("Genre", filteredBookList[0].Genre);
}

OK, so first of all, this test requires a change to our Book class. Previously, Book only had a single argument to its constructor, just a title. Now it needs a genre, which is going to require a change to the Book class to make this test compile. It also needs a Genre property to allow us to test that we get the right book back.

If we start off by changing the constructor for Book directly, we’ll break other tests. What we need to do is find a way to make this change in such a way that we keep our code working and can slowly migrate to the new constructor. Being able to do this is a critical part of learning to write code using TDD. Each and every change we make to our code needs to be made in steps as small as possible. This lets us stay close to working code, adding functionality quickly, and then simplifying things immediately afterwards.

The path I’d take is to ignore this new test for a moment. We wrote it, and the act of writing it informed us that we need a different signature for our constructor. So let’s create that constructor and make sure that everything still works after making that change. Once we finish that, we can go back to this test and implement the functionality it is requiring.

Step one in implementing this refactoring is to create a new constructor that takes the two arguments defined in our newest test. Change the old constructor to call the new constructor, passing in a dummy value for the genre in all cases where the old constructor is being called:

public class Book : IComparable<Book>
{
    private readonly string title;

    public Book(string title) : this(title, "Unspecified")
    {
    }

    public Book(string title, string genre)
    {
        this.title = title;
    }

    public string Title
    {
        get { return title; }
    }

    public string Genre
    {
        get { return null; }
    }

    public int CompareTo(Book other)
    {
        return title.CompareTo(other.title);
    }
}

I also created a property called Genre to let the new test compile. After making this change, all my tests still pass. It is important to note that I didn’t add anything but the absolute minimum I needed to keep the old tests working.

Do not add functionality while refactoring. Resist, resist, resist. Do not add it now, but remember to add tests to force you to add it later.

Now that we have the two constructors, we realize that we don’t need the original constructor any more, so we can get rid of it, one call site at a time. This is important, since we don’t have to make a big-bang change but can change one thing at a time. That’s the sign of a well executed refactoring.

Change one site, re-run tests, ensure things still work. Once you’ve found and changed all call sites, remove the old method, recompile and re-run tests. Done! Now, back to that test…

As of right now, we have the new constructor in place, as well as the empty Genre property. We run the test, and the test fails. Now let’s implement the code to get this test working, which consists only of adding the code behind the Genre property to Book:

public class Book : IComparable<Book>
{
    private readonly string title;
    private readonly string genre;

    public Book(string title, string genre)
    {
        this.title = title;
        this.genre = genre;
    }

    public string Title
    {
        get { return title; }
    }

    public string Genre
    {
        get { return genre; }
    }

    public int CompareTo(Book other)
    {
        return title.CompareTo(other.title);
    }
}

I guess the case of filtering a list with a single book wasn’t that interesting after all. Let’s write another test that filters another list containing a single book, but let’s make that book have a genre different from the one we’re searching for. Maybe that will cause us to write some code.

[Test]
public void NoBookReturnedWhenFilteringListWithOneBookInDifferentGenre()
{
    BookListManager manager = new BookListManager();
    manager.Add(new Book("Title", "Good"));

    List<Book> filteredBookList = manager.GetBooksInGenre("Bad");

    Assert.IsEmpty(filteredBookList);
}

Run this test, it compiles the first time, but it fails. I guess we do get to write some code…

public List<Book> GetBooksInGenre(string genre)
{
    return bookList.FindAll(delegate(Book book)
                                {
                                    return book.Genre == genre;
                                });
}

This code uses the .Net 2.0 anonymous delegate syntax that allows you to create a delegate in place and pass it as a function into the method you’re calling. What happens in the List.FindAll method is that it takes the delegate and applies it to all the elements in the collection, one at a time. In this case, if the Predicate delegate passed in executes and returns true, then the code inside the FindAll method adds the element into a new collection. Once the iteration is finished, the new list is returned to the caller.

The powerful part about doing this is that the new delegate has access to all the state that existed at the point at which it was defined. In other words, even though the delegate we’re defining is being passed as a parameter to the FindAll method and will be invoked later, in a completely different context, it still has access to the methods, members, parameters, and local variables that were in existence at the place where the delegate was defined, which is in our GetBooksInGenre method. Pretty cool, eh? It is an example of how you can create and use closures in .Net.
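A tiny standalone example (mine, not from the original project) makes the capture visible: the delegate below keeps using a local variable from its defining scope, just as our delegate captures the genre parameter.

```csharp
List<int> numbers = new List<int> { 1, 5, 10, 20 };
int threshold = 8;

// The anonymous delegate captures 'threshold' from the enclosing scope,
// even though FindAll invokes it later, inside the List<T> implementation.
List<int> big = numbers.FindAll(delegate(int n) { return n > threshold; });
// big now holds 10 and 20
```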

A bit of test refactoring before we carry on

As Book is growing larger than a single property, we’re going to find ourselves inspecting its list of properties in our tests over and over. When faced with this, I tend to create an Equals method for the object I’m manipulating in my tests. This lets me directly compare the objects in a single Assert.AreEqual. Since the Equals method is code, however, I have to write tests for it, and it usually takes several tests. Here is the code for Book.Equals:

public override bool Equals(object obj)
{
    Book other = obj as Book;
    if(other == null) return false;

    return title == other.title &&
           genre == other.genre;
}

and the completed tests for it, in the BookFixture (obviously):

[Test]
public void TwoBooksWithSameGenreAndTitleAreEqual()
{
    Book first = new Book("Title", "Genre");
    Book second = new Book("Title", "Genre");

    Assert.AreEqual(first, second);
}

[Test]
public void DifferentTitlesMakeBooksNotEqual()
{
    Book first = new Book("A", "Genre");
    Book second = new Book("B", "Genre");

    Assert.AreNotEqual(first, second);
}

[Test]
public void DifferentGenresMakeBooksNotEqual()
{
    Book first = new Book("Title", "A");
    Book second = new Book("Title", "B");

    Assert.AreNotEqual(first, second);
}

[Test]
public void NullObjectToCompareToCausesObjectsToBeUnequal()
{
    Assert.IsFalse(new Book("", "").Equals(null));
}

[Test]
public void WrongKindOfObjectPassedToEqualsCausesObjectsToBeUnequal()
{
    Assert.AreNotEqual(new Book("", ""), "");
}
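One caveat worth noting: overriding Equals without also overriding GetHashCode earns a compiler warning, and it breaks hash-based collections such as Dictionary<Book, V>. A matching implementation, built from the same two fields Equals compares, keeps the equals/hashcode contract intact (this sketch is mine, not from the original post):

```csharp
public override int GetHashCode()
{
    // Combine the same fields Equals compares, guarding against null fields.
    return (title ?? "").GetHashCode() ^ (genre ?? "").GetHashCode();
}
```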

I personally find an Equals method difficult to implement test first. In my standard working model, I would like to slowly build up the complete set of functionality needed to make something work. I don’t know how to do this with Equals. The problem is that it is difficult to incrementally add behavior to an Equals method and keep a set of growing tests working for it.

For example, I could have defined a test called BooksWithSameTitleCompareEqual, something like this:

[Test]
public void BooksWithSameTitleCompareEqual()
{
    Book first = new Book("A", "");
    Book second = new Book("A", "");

    Assert.AreEqual(first, second);
}

This would lead to a trivial implementation of my Equals method that just compared the titles to see if the objects were equal. Next, I’d add a test comparing the genre of books, but it would have to be crafted in such a way that the titles would be the same as well. And, in the previous test, I would have to have created the test data in such a way that the test wouldn’t break as I added code to compare the genre. My second test would have looked like this:

[Test]
public void BooksWithSameGenreAndTitleCompareEqual()
{
    Book first = new Book("A", "G");
    Book second = new Book("A", "G");

    Assert.AreEqual(first, second);
}

Note that the test has to have the intent of proving that not just the genres are equal, but that all fields we’ve written tests for up to this point are equal. So if we had 5 fields in our class, we’d have to have 5 tests, each growing by one field each time, but still having all other fields “equal” in some way, like the empty genres in my first test. This all feels very contrived and pedantic to me. What I usually do is to write a single test with all fields being equal, and write the positive test case for equality in one fell swoop. The exception to this is when calculating equality is more complex than just comparing fields. If there are loop comparisons involved or something, I will write individual tests for that.

Once this is finished, however, I do tend to write negative test cases for each, individual field, to ensure I haven’t missed something, and tests for passing null and passing the wrong kinds of objects.

That gives me 3 + n tests to write the Equals method for any class, at a minimum, where “n” is the number of fields a class has. When your fairly trivial Equals method is only about 5 lines long, and you’ve spent 15 minutes writing all those tests for it, you have to ask whether or not it was worth it. Well, my answer is that it is worth it, if you’re writing your Equals method by hand.

I prefer to let my tool generate it for me, in which case I don’t write any tests for it 🙂

Third case — multiple books

OK, so we’re finally at the point of handling multiple books at a time in our filtering. To get the multi case working, I think I see two separate steps. The first step is to implement filtering across multiple books, while the second step is to make sure that the filtered list is sorted by title. We should break this down into two separate tests.

In the first test, we’re going to check that we’re filtering correctly. In this test, we need to prove that we can filter the list for books having the correct genre. We are not concerned with the ordering of the books here, so we’ll just confirm that the right books are contained in the filtered list (this is the change from Craig). Here it is:

[Test]
public void PresortedBookListWithSomeBooksInCorrectGenreOnlyReturnsBooksInCorrectGenre()
{
    BookListManager manager = new BookListManager();
    Book firstMatchingBook = new Book("A", "G1");
    Book secondMatchingBook = new Book("C", "G1");
    Book nonMatchingBook = new Book("B", "G2");
    manager.Add(firstMatchingBook);
    manager.Add(nonMatchingBook);
    manager.Add(secondMatchingBook);

    List<Book> filteredBookList = manager.GetBooksInGenre("G1");

    Assert.AreEqual(2, filteredBookList.Count);
    Assert.Contains(firstMatchingBook, filteredBookList);
    Assert.Contains(secondMatchingBook, filteredBookList);
}

The trick I just learned in writing this test is that you have to be conscious of making the assertions in the test match the intent of the test as described in the name. In this case, we are intending to prove that we can filter the list, so our assertions should support this functionality exactly. They should be strong enough to confirm that this behavior is actually happening, but not so strong that they assert behavior that hasn’t been written yet. In our case, that means assertions about filtering, but none about sorting. This is why I’m using Assert.Contains in that test.

While this sounds like a tremendously good plan, unfortunately this test passed the first time I ran it. I understand why it happened — the List.FindAll(Predicate) method I called finds all elements that satisfy the predicate when called. It was the simplest way to get that particular test working, and I got the looping logic for free.

Now for the test of making sure that the filtered list I get is sorted:

[Test]
public void UnsortedBookListWithSomeBooksInCorrectGenreOnlyReturnsBooksInCorrectGenre()
{
    BookListManager manager = new BookListManager();
    Book firstMatchingBook = new Book("A", "G1");
    Book secondMatchingBook = new Book("C", "G1");
    manager.Add(secondMatchingBook);
    manager.Add(new Book("B", "G2"));
    manager.Add(firstMatchingBook);

    List<Book> filteredBookList = manager.GetBooksInGenre("G1");

    Assert.AreEqual(2, filteredBookList.Count);
    Assert.AreEqual(firstMatchingBook, filteredBookList[0]);
    Assert.AreEqual(secondMatchingBook, filteredBookList[1]);
}

All I did was switch up the order in which I added the books. The output, once the functionality is implemented, should be the same as in the previous test. I implement the functionality in BookListManager like this:

public List<Book> GetBooksInGenre(string genre)
{
    return GetSortedBookList().FindAll(delegate(Book book)
                                {
                                    return book.Genre == genre;
                                });
}

All I had to do was pre-sort the list! I love when I can build on existing functionality. This happens a lot when you build small, fine-grained methods. You end up being able to use them in new and interesting combinations to implement new functionality easily.
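GetSortedBookList was built in the previous episode and isn’t reproduced here; a version consistent with this episode (Book implements IComparable&lt;Book&gt; on title) would be roughly this sketch:

```csharp
// Sketch of the earlier method: copy the list so callers can't disturb the
// manager's internal order, then sort using Book.CompareTo (which compares titles).
public List<Book> GetSortedBookList()
{
    List<Book> sorted = new List<Book>(bookList);
    sorted.Sort();
    return sorted;
}
```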

Conclusion

When I sat down to write this episode, I truly thought it was going to be 2 or 3 tests, a couple hundred words, and a bit of code. When I got into it, though, I found several other tests to write, a need to write and use the Equals method for Book, and a bit of refactoring. I always find it interesting how complexity just shows up and how easily it is handled.

As an observation, you may have noticed that I have 68 lines of source code, almost all of it completely and totally trivial methods, and about 190 lines of test code. That ratio is a little skewed, because I haven’t had any really good functionality to implement, but I’m not upset about it at all. I’m certain that the 68 lines of source work, and I can continue to leverage the 190 lines of test code forever to confirm that the lines continue to work. I consider that to be an investment well worth the effort.

The next episode will add another user story to the mix. I think I’ll do the last one in the list, retrieving a list of books sorted by genre and title.

Until we meet again!

— bab