Real Test Driven Development with Visual Studio Team System

There has been a lot of controversy lately, both in the blogosphere and on the Yahoo TestDrivenDevelopment group, about whether or not it is possible to do true Test Driven Development (TDD) using VSTS. I’m here to tell you that it is absolutely, completely, totally possible. But it ain’t easy. The point of the rest of this post is to explain why, and to show you another, easier, way.

My definition of TDD

I’m pretty hard-core about what I consider to be Test Driven Development. I believe that I have to write a test first, before any application code is present. I believe I have to write enough code to make the test compile and fail, and then make the test pass. Then I need to refactor my code. All the while, I need to be able to run my tests easily, jump to failures as they happen, and navigate between code and test easily.

Starting off — Creating our base solution and projects

As in all projects, we need to start by defining our solution and our first project.

NewSolution

In VSTS, to begin doing TDD, you need to create a Test Project. Test projects are special kinds of projects, with a special, double-secret probation GUID hidden in them. This GUID identifies them to VSTS as a project that houses tests, which I guess is used to help the built-in test runner, test viewer, and test manager find the assemblies to reflect over to find the tests. It also serves to taint the definition of this kind of project so that it cannot be opened in Visual Studio Professional. Rumor has it on the web that you can remove this GUID and the Pro-level tool can open the project again, but then the tests won’t be found. Pick your poison…

Anyhow, in the Solution Items folder above, you see two other files. The first, localtestrun.testrunconfig, defines meta-information that tells VSTS how to run your unit tests. There are a bunch of special-purpose attributes that you can set in this file that allow you to do fancy things in your tests, which we’ll talk about later. The TDDWithVSTS.vsmdi file contains the definitions of any lists of tests that you’ve defined with the included TestManager view (which I believe only ships with the VSTS for Testers SKU). I typically completely and totally ignore this file and its contents, because it doesn’t help me to define lists of related tests. I tend to run my tests either one at a time, at the class level, at the project level, or at the solution level. The test lists let you define arbitrary sets of tests to run together, a feature that just doesn’t excite me.

Creating our first test

Here is the first test I created. I’m just doing the simple Stack exercise, as the TDD process isn’t the centerpiece of this article.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace SampleTestProject
{
    /// <summary>
    /// Summary description for StackFixture
    /// </summary>
    [TestClass]
    public class StackFixture
    {
        public StackFixture()
        {
            //
            // TODO: Add constructor logic here
            //
        }

        #region Additional test attributes
        //
        // You can use the following additional attributes as you write your tests:
        //
        // Use ClassInitialize to run code before running the first test in the class
        // [ClassInitialize()]
        // public static void MyClassInitialize(TestContext testContext) { }
        //
        // Use ClassCleanup to run code after all tests in a class have run
        // [ClassCleanup()]
        // public static void MyClassCleanup() { }
        //
        // Use TestInitialize to run code before running each test 
        // [TestInitialize()]
        // public void MyTestInitialize() { }
        //
        // Use TestCleanup to run code after each test has run
        // [TestCleanup()]
        // public void MyTestCleanup() { }
        //
        #endregion

        [TestMethod]
        public void TestMethod1()
        {
            //
            // TODO: Add test logic here
            //
        }
    }
}

Here we can see the basic outline of a test, along with some extra, optional testing attributes that the Create Unit Tests wizard built for me. I don’t need any of the extra attributes, so I’m going to remove the region from the above code and actually write my first test. If I never wanted to see this stuff at all, I could certainly find the template that defines this base test and remove all the noise.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace SampleTestProject
{
    [TestClass]
    public class StackFixture
    {
        [TestMethod]
        public void StackIsEmptyAtCreation()
        {
            Stack stack = new Stack();
            Assert.IsTrue(stack.IsEmpty);
        }
    }
}

Compile, and you get the expected compiler error about the Stack class not being defined. Now I can go build the skeleton of that class in typical TDD fashion. Note that I have to do this by hand — there is no fancy QuickFix-type wizard to build this class for me. To build it, I have to create another project, this time a regular class library project, and create a new Stack class there. The project creation is a one-time cost, so I’m not really worried about it, but I’d sure like some sort of feature to create the class for me instead of making me type the boilerplate code. This is my first hint that this tool is really meant to help you test already existing code, rather than to help you write tests first and let them drive the rest of the code. In fact, had I written my Stack class first, there are all kinds of wizards in place to auto-generate my tests for me, but that is the wrong direction to interest me.
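
For completeness, the hand-written skeleton is nothing fancy; something like the class below is all it takes to give the test project a type to reference. The namespace name here is just illustrative, so use whatever your class library project is called.

namespace SampleLibrary
{
    // Just enough class to satisfy a project reference; IsEmpty comes in a later step.
    public class Stack
    {
    }
}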

One quick note before going on — VSTS does not allow you to place tests in source assemblies, and you’re discouraged from putting source in test assemblies. This forces a source-code organizational scheme on you where your tests and code live in different assemblies. This has its pros and cons, but I think I like the resulting layout. We had our tests inside our code assemblies in Enterprise Library 1.0, and we had to have something like 6 different compiler targets to build all the different variations we needed. With the tests in separate assemblies, you don’t need to play those games, and things are much simpler. But now you’re forced to make things public in order to test them (or use InternalsVisibleTo, which is a topic for another posting coming soon).
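
As a small teaser for that future post, InternalsVisibleTo is a single assembly-level attribute in the code-under-test project that grants a named test assembly access to its internal members. A minimal sketch, assuming the test assembly is called SampleTestProject:

// In any source file of the class library (AssemblyInfo.cs is the usual spot).
// "SampleTestProject" is the name of the test assembly being granted access;
// strong-named assemblies also need the public key appended to this string.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("SampleTestProject")]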

After building my minimal Stack class, I have to add the project reference to my test project references, so that the compiler can find my class. Once I do that, VSTS has a handy keystroke (Shift-Alt F10) that will automatically generate the using statement for me that I need to compile.

AutoUsingGenerate

Once I generate the using for the class, I then have to go manually create the stub for the IsEmpty property. Now, if this were a method and not a property, apparently, VSTS would offer to create a stub for me, populated with a single-line body that throws an Exception. I’m not sure why properties and methods are treated differently, but they clearly are. I’d expect at least a dialog that asked me whether IsEmpty is a field or a property. Anyhow, I go create the property by hand, recompile, and I’m finally ready to run my test.
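
The hand-written stub looks something like this, deliberately wrong so I can watch the test fail first:

namespace SampleLibrary
{
    public class Stack
    {
        // Hard-coded to the wrong answer on purpose; we want to see red before green.
        public bool IsEmpty
        {
            get { return false; }
        }
    }
}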

What I’d really like to find is an easy way to say, “VSTS, please run this test I’m currently editing for me”. Unfortunately, there is no easy way to run a single test. About the best you can do, if you are looking for something easy, is to run all tests in a single TestProject at once. If a file that belongs to a test project is the currently active edit window, you can either hit a toolbar button or hit Shift-Ctrl-X to run all tests in that project. I know that in my development cycle, I tend to run smaller sets of tests as part of my rapid TDD rhythm, and expand outwards as I get closer to completion or as enough time passes that I want to run a larger set of tests. Perhaps this is a smell resulting from our suite of 1800 tests taking almost 10 minutes to run through VSTS, or perhaps this is a normal rhythm that most people use. In any case, the only way to find an individual test to run, or to run just the tests in a single test class, is to open the TestView window, which reflects over all the TestProject assemblies in your solution and displays a listbox of all your tests. I’ve included the TestView window as it is initially displayed for the latest Enterprise Library.

TestView

Now, as I said, we have 1800 tests, organized into roughly 20 different TestProjects. So if I want to run a particular test, the only way I can do it is to find it in this window, right click on it, and run it from there. This is a pain to do when you have lots of tests, and it is also rather keyboard-unfriendly. You can group the items by different criteria, like into a single solution node, into project nodes, etc., but you have to do this grouping each time you open the TestView window, as the Group By choice isn’t remembered. This is one of the usability issues that is typically overlooked during reviews of this tool by people who are new to it. This view works just great if you have a few tests, but it starts to fall apart as your number of tests goes up. When first opening this window, for example, it takes over 30 seconds on my laptop to scan through all the assemblies and find all 1800 tests. After that it opens right up, but it makes the act of running your tests for the first time rather painful.

Now, let’s look at the results of running our tests. The results are displayed in the Test Results window. This window looks very much like the Test View window, but its purpose in life is to show you the outcome of the tests that you have run. As a simple example of this, here are the results from our Stack test run —

TestResults

We see all kinds of stuff in this window, and most of it is good. All of the information in this window is important in figuring out the status of the tests that we just ran. We see that our single test ran but failed. We see the name of the test, which project it was in, and the error message that was generated by the failure. All good stuff. In a perfect world, what I’d really like to happen now is to be able to double click on the failed test and be taken to the point of failure. But what I get when I double-click on that test is the Test Results pane —

TestResultsPane

This pane tells me everything I ever wanted to know about this test run. It tells me what test it was, when I ran it, how long it took, and why it failed. It tells me exactly where the failure happened. Unfortunately, it doesn’t give me any way to get to the failure. You see, the stack trace is not live at all. I have to examine it, interpret it, find the particular line I want to look at, open the file, navigate to the line, and then I can finally examine my failure. I frankly cannot believe that they left this out, as this is the most basic function of a stack trace in a test tool. The fact that this trace is just dead text boggles my mind. Sorry, VSTS Team, I know you worked hard on this tool, but come on…

So let’s just suppose I do navigate to the failed line, figure out how to fix the code to make the test pass, and try to rerun my test. As we saw in the Test Results window two figures above, there is a link in there to Rerun Original Tests. Let’s just say that I clicked this. In my mind, the logical thing to happen would be for the code to get recompiled as needed and the failed tests to be run again against the new code. Well, it turns out that’s not the case. What happens is that VSTS saves each test run in its own folder on your disk, including all the results files generated by the tool and all the assemblies needed to reproduce those results, so as far as I can tell, Rerun Original Tests reruns the tests against those saved assemblies rather than against your newly fixed code. If you want to recompile and rerun the one test that failed, unfortunately, you need to navigate back to that test in the Test View window and run it again from there.

However, I now really can rerun my test, watch it pass, and feel happy. I did it! I wrote a test in a Test Driven fashion using VSTS as my IDE of choice. It was painful, but I was able to do it.

And for the second test and beyond…

There isn’t much point in showing you how to write more tests in this single class, as the process is the same. As long as you have relatively few tests and a reasonably simple system, this testing system works pretty well. I’ve used it for teaching many times, and I can do anything I want to with it. Students sometimes chuckle as I go from window to window to window to make fairly simple things work, but I can do it. Once you get used to the keyboard shortcuts, you can start to move reasonably quickly.

However, once you get to a thousand tests or so, things get a bit dicey. The Test View window is extremely inconvenient to navigate, and the Test Results window, while it does a good job of showing you which tests failed, falls down on helping you figure out why they failed. And the biggest problem I have with this framework is that the built-in test runner is slow. I can’t give benchmarks (as far as I know, I’m not allowed to), but it seems like it takes at least twice as long to run our tests through it as it does through NUnit (our tests are written to run in either framework through judicious use of using aliases).

How I really use VSTS for unit testing

I think I made my point that I have no real problem with the testing framework itself, other than a few quirks that I haven’t gone into. But the test runner/viewer/manager aspect is pretty tough to use. Fortunately for all of us, there is a very simple, free alternative. Jamie Cansdale has written a tool called TestDriven.Net that will run your unit tests for you in a much more reasonable fashion. Once you download and install TD.Net, you can right click on a test, a test class, or a test namespace and run that set of tests. You can also right click on test files, test projects, solution folders, or the entire solution in Solution Explorer and run those sets of tests.

TDNetPopup

The results do not show up in a graphical manner, but instead show up in your Output window. But when tests fail, the stack traces that are printed out are live and clickable, and take you to the line of code where the failure happened. You can also explore the stack trace of the failed test, popping up and down the callers and callees to figure out where you went wrong. With the combination of VSTS and TD.Net, you can do anything!

So I’d have to say that my TDD process, with TD.Net installed, works like the following. I do this every day, so I’m pretty sure that this process works well enough for anyone committed to doing Test Driven Development with VSTS.

  1. Create the appropriate solutions and projects as needed. I am currently just adding functionality to Enterprise Library, so my solutions and projects are already existing for the most part.
  2. Decide on the test to write and write it. If I need a new test fixture, use VSTS to create it for me. If I need a new test, create it where it should be.
  3. Compile and use the built-in VSTS auto-import and method-generation features as much as possible to get the test to compile.
  4. Right click on the test, run it, and watch it fail.
  5. Double click on the stack trace where the code needs to be fixed and fix the code.
  6. Right click on the test, run it, and watch it pass.
  7. Right click on a class, namespace, project, or solution to run a larger set of tests.
  8. Repeat forever.

Conclusion (finally!)

When all is said and done, TD.Net makes the actual testing framework that I’m using almost irrelevant. I’m just as productive using VSTS as I am using NUnit with this as my test runner. The VSTS team did a decent job creating a nice testing framework (Abstract Test Case issues aside), and Jamie filled in the holes with a great runner. Between the two of them, I’m able to be productive, happy, and test-infected.

(N.B. — I have completely ignored the refactoring half of TDD in this article. VSTS does have built-in refactoring tools that work in a pretty expected way for the most part. Assume that I do the refactorings at the appropriate times during this article, and we’ll all sleep better.)

— bab

Now playing: Rush – A Show Of Hands – The Rhythm Method

Enterprise Library Caching Block and Exception Safety

Someone recently sent me some email after looking through my old Enterprise Library January 2005 Caching block code, and asked me to say a few words about the design of the exception safety of that block. Exception safety was one of the flaws in the original caching implementation that this newer block replaced: it had some thread safety issues, and it did not function well in the face of exceptions. Therefore, exception safety became one of the major design goals for my new implementation.

What I wanted to do was set things up so that if an exception happened, I’d recover from it gracefully, restore everything to the same state that it was in before the exception happened, and throw the exception back to the caller. Unfortunately, I couldn’t do that, for reasons I’ll talk about later. What I ended up doing was recovering from an exception by deleting the user’s data stored at that caching key, leaving the system in a predictable state but with some limited data loss.

Before we talk about why I had to make that choice, I want to give you a bit of background on exceptions.

What are exceptions?

Exceptions represent a major advancement in handling errors that occur while a program is running. Being an old Unix hacker, I remember making a function call, getting back -1 as the return code, and checking the global errno value for the failure reason. Then you’d go check in errno.h to see if the error code held by errno represented a system-defined error. If so, you could call strerror to get the error text associated with that error. This text was not customizable at all, but just a bunch of standard strings that shipped with your operating system.

There were a few downsides to this design —

  • You had to explicitly check for error returns from function calls. If you didn’t check, the error went unnoticed and unhandled
  • It was not remotely thread safe. Since errno was a global variable, it could be changed by any thread at any time. They eventually improved on this by changing errno from being an int into being a C macro that hid some thread-safe logic. Still, pretty ugly.

Contrast this with today’s style of error checking, which involves creating, throwing, and catching exceptions, and you’re left with a much more structured approach to handling errors in your system. Now, some piece of code can detect that some problem has happened, and it can raise a big, red, unignorable flag that someone higher up in the call chain has to deal with. You are allowed to customize the information contained in your exception through standard OO means, you can catch different kinds of errors differently, leading to better encapsulation of error handling logic, and you can also centralize all your error handling in one place. And because of how exception throwing works, it is necessarily thread safe.
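
To make that concrete, here is a tiny sketch (the exception type and the calling code are invented for illustration) of what those “standard OO means” look like: a custom exception that carries its own context, and a caller that handles only the failures it knows how to deal with.

using System;

// An invented exception type: it carries whatever context the handler needs,
// instead of an integer code mapped to a canned string from errno.h.
public class CacheWriteException : Exception
{
    private readonly string key;

    public CacheWriteException(string key, Exception inner)
        : base("Failed to write cache item '" + key + "'", inner)
    {
        this.key = key;
    }

    public string Key
    {
        get { return key; }
    }
}

public class CachingCaller
{
    public void AddToCache(string key, object value)
    {
        try
        {
            // ... call into the cache here ...
        }
        catch (CacheWriteException ex)
        {
            // Handle the one failure we know how to recover from locally...
            Console.WriteLine("Could not cache item " + ex.Key);
        }
        // ...and let everything else fly up the call chain, unignorable, to
        // someone who knows more about what to do with it.
    }
}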

I know all of you already know most of this, but I like showing off my old, crusty Unix roots.

What is exception safety?

So let’s think about what happens in your application when an exception is thrown. Basically, processing stops right then and there in that particular thread. Whatever that code was doing, it is interrupted, and the search for a matching catch block begins. The bad part about this is that any sort of resource management that you have carefully crafted is likely to be interrupted as well. So if you had any data structures you were manipulating, or if you had any files opened, monitors locked, or whatever, you leave them in whatever state they were in when the exception happened, unless your code takes active measures to prevent this from happening. And the degree to which you go to prevent this dictates how strong an exception guarantee you can provide to your callers.

Exception safety guarantees

There are several levels of exception safety, describing the state in which a system is left after an exception is handled.

The first level is offering no guarantee. At this level, which is, unfortunately, where most code lives, the state of the system is undefined after an exception is thrown. The code throwing the exception made no attempt to clean up, so any data structures that were in use could be corrupted, resources could have been leaked, and your disk could be reformatting right now…

In the second kind of exception safety guarantee, called the Basic Exception Safety Guarantee, the system is left in a stable condition, but something in it may have changed. This change is probably due to the application trying to do some cleanup action, based on recovering from the error. This is the kind of safety I promise in the caching block, and I’ll explain why shortly.

The third kind of exception safety guarantee is the Strong Exception Safety Guarantee. With this promise, if an exception is thrown during an operation, the state of the system is left unchanged from its original state. In other words, operations either succeed completely or have no effect at all. This is the best kind of guarantee, but it is pretty hard to provide.

Finally, the most strict exception safety guarantee is the No Throw Guarantee. This simply promises that exceptions will never leak out of a method. Any exceptions that are thrown internally are handled internally, and no one is the wiser. This is the hardest one to provide reliably, mainly because it is not always clear how to react to an exception at the point at which it is thrown. I personally use this form of guarantee the least, since I don’t like mixing up the exception detection and reporting with the exception handling code. My feeling is that someone higher in the food chain probably knows more about how to handle this exception than I do, and I should let them.
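
To make the difference between the basic and strong guarantees concrete, here is a trivial sketch (the class is invented purely for illustration) of the usual recipe for the strong guarantee: do everything that can throw before touching any state, and make the final commit an operation that cannot fail.

using System;

// Invented purely for illustration; the point is the shape, not the class.
public class Account
{
    private string name;

    public void Rename(string newName)
    {
        // Everything that can throw happens first, before the object is touched...
        if (newName == null || newName.Trim().Length == 0)
        {
            throw new ArgumentException("Name must not be empty", "newName");
        }

        // ...and the commit is a single assignment that cannot fail, so the object
        // is either fully renamed or left exactly as it was.
        this.name = newName.Trim();
    }
}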

Caching and the DataBackingStore

Here is the basic algorithm I wanted to implement:

public void Add(string key, object newValue)
{
    if (ItemIsAlreadyInCache(key))
    {
        inMemoryCache.Remove(key);
        databaseBackingStore.Remove(key);
    }

    inMemoryCache.Add(key, newValue);
    databaseBackingStore.Add(key, newValue);
}

The issue with this code is that it doesn’t do anything in the face of exceptions. My caching block stores all its items in two places — in an in-memory hashtable for quick access, and in some form of persistent storage to allow the cache to survive an app recycle or the like (think smart client). The first backing store I implemented used a database, so let’s consider that one. Imagine that we ran the code as written above. If an exception happened after removing the item from the in-memory cache (say, while removing it from the database), I’d be stuck with the cache in an inconsistent state: the in-memory representation would be missing an item that was still present in the backing store. This is bad.

We can get around this problem by reversing the order of those two calls —

public void Add(string key, object newValue)
{
    if (ItemIsAlreadyInCache(key))
    {
        databaseBackingStore.Remove(key);
        inMemoryCache.Remove(key);
    }

    inMemoryCache.Add(key, newValue);
    databaseBackingStore.Add(key, newValue);
}

We can do this because we can depend on the database used in the backing store to abide by a strong exception guarantee. After all, data integrity is what databases are very good at. Since we can rely on the database to either completely remove the item or not touch it at all, we can make that call first, and only remove the object from our in-memory cache if the first remove worked. And if it didn’t work, an exception would be thrown, and we’d be left right where we started. But what about the two Add calls — is there an order that we can put them in to make the add operation strongly exception safe as well? As a matter of fact, we need to do a little more restructuring to make that work, since the recovery operation is a bit more complicated.

Let’s assume that we switch around the order to be like this —

public void Add(string key, object newValue)
{
    if (ItemIsAlreadyInCache(key))
    {
        databaseBackingStore.Remove(key);
        inMemoryCache.Remove(key);
    }

    databaseBackingStore.Add(key, newValue);
    inMemoryCache.Add(key, newValue);
}

If an exception happens during the databaseBackingStore.Add call, we obviously shouldn’t add the new element to the in-memory hash table, since that is a recipe for inconsistency. What we should do, in fact, is add the old item back in! If we’re going to promise exception safety from this method, we need to arrange things so that the state of the underlying cache is unchanged after the exception. But what happens if the add of the old item to the databaseBackingStore fails? Now we’re in really big trouble! Fortunately, what we can do is to defer the remove/add functionality to a single stored procedure with transactional characteristics, which would mean our code could now look like this —

public void Add(string key, object newValue)
{
    databaseBackingStore.Add(key, newValue);
    inMemoryCache.Remove(key);
    inMemoryCache.Add(key, newValue);
}

As long as we assume that the Remove and Add calls on the hashtable can never fail (and as long as the key isn’t null, they can’t), we’re finished! Yea!!!
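
The real block pushes the remove/add pair down into a stored procedure; purely to illustrate the all-or-nothing idea, here is a rough ADO.NET sketch of the same shape using an explicit transaction (the connection string, table, and column names are all invented):

using System.Data.SqlClient;

public class DataBackingStoreSketch
{
    // A rough, illustrative sketch only -- not the actual Caching block code.
    public void Add(string key, byte[] serializedValue)
    {
        using (SqlConnection connection = new SqlConnection("...connection string..."))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                SqlCommand remove = new SqlCommand(
                    "DELETE FROM CacheItems WHERE ItemKey = @key", connection, transaction);
                remove.Parameters.AddWithValue("@key", key);
                remove.ExecuteNonQuery();

                SqlCommand add = new SqlCommand(
                    "INSERT INTO CacheItems (ItemKey, ItemValue) VALUES (@key, @value)",
                    connection, transaction);
                add.Parameters.AddWithValue("@key", key);
                add.Parameters.AddWithValue("@value", serializedValue);
                add.ExecuteNonQuery();

                // Either both statements take effect, or neither does; if an exception
                // escapes before Commit, disposing the transaction rolls everything back.
                transaction.Commit();
            }
        }
    }
}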

Enter the second backing store

In addition to supporting persistence through a database, I had a requirement to support something called Isolated Storage, or IS for short. IS is a special place on your hard drive that Windows arranges to always be writable for you, almost regardless of your current identity and permissions (not quite, but close enough for our purposes). IS acts just like its own little file system, with the ability to create directories and files in those directories. You have a limited set of operations that you can perform: you can create directories and files, and you can remove directories and files. As we will see shortly, this is entirely insufficient for our needs.
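
If you haven’t used it before, Isolated Storage lives in System.IO.IsolatedStorage, and the surface really is about that small. A quick sketch (the directory and file names are just illustrative):

using System.IO;
using System.IO.IsolatedStorage;

public class IsolatedStorageSketch
{
    public void WriteItem(byte[] serializedValue)
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        {
            // Create a directory, then create and write a file inside it.
            store.CreateDirectory("CacheItems");
            using (IsolatedStorageFileStream stream = new IsolatedStorageFileStream(
                       "CacheItems/item1.dat", FileMode.Create, store))
            {
                stream.Write(serializedValue, 0, serializedValue.Length);
            }
        }
    }
}

Note that there is nothing resembling a Move or Rename operation in that API (at least in the version of the framework I was using), which is exactly the limitation the rest of this section turns on.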

Let’s start from this as our base attempt at getting an IsolatedStorageBackingStore to work —

public void Add(string key, object newValue)
{
    if (ItemIsAlreadyInCache(key))
    {
        isoStorageBackingStore.Remove(key);
        inMemoryCache.Remove(key);
    }

    isoStorageBackingStore.Add(key, newValue);
    inMemoryCache.Add(key, newValue);
}

And let’s assume that there is an exception thrown for some reason during the isoStorageBackingStore.Add method call. What kinds of recovery actions can we take? In a perfect world, we’d like to do as we did for the database backing store and reset things to how they were before we started fiddling, but in this case, we can’t. We could imagine some code that looks like this as our attempt at error handling —

public void Add(string key, object newValue)
{
    object originalObject = null;

    if (ItemIsAlreadyInCache(key))
    {
        originalObject = inMemoryCache[key];   // remember the old item so we can try to put it back
        isoStorageBackingStore.Remove(key);
        inMemoryCache.Remove(key);
    }

    try
    {
        isoStorageBackingStore.Add(key, newValue);
        inMemoryCache.Add(key, newValue);
    }
    catch (Exception)
    {
        if (originalObject != null)
        {
            isoStorageBackingStore.Add(key, originalObject);
            inMemoryCache.Add(key, originalObject);
        }
        throw;
    }
}

And this would leave the cache how we found it. But we have no assurances whatsoever that the re-adding of the original object is going to succeed. If it fails, we’re left with a cache in an unpredictable state, which clearly violates our Strong Exception Safety guarantee.

Going back to my Unix roots, I learned long ago that there is only one safe way to deal with replacing files in a file system, and it involves a 3-step operation:

  1. Rename the original file to a new name
  2. Insert the new file
  3. Remove the old file

And if the insertion of the new file failed, you simply rename the old file back to its original name, and you’re back where you started, and everyone is happy. You really need the rename functionality to make this work, as it doesn’t use up file system resources, other than the equivalent of a directory slot (inode). Renames are considered to be pretty safe things to do, in that they don’t fail, or they fail really quickly. If you don’t have permission to rename the file, the rename fails immediately. If you are out of inodes in your file system, the call fails immediately. And so on. If you contrast this with an Add operation, there are lots of reasons they can fail. But if the add does fail, you clean up the resources it used, which should leave enough resources to be able to rename the old file back. And IS lacks this rename capability. Without it, it is impossible to create a backing store that can promise a strong guarantee. Which is exactly why I had to change to offering the Basic Exception Safety guarantee.
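
Translated into C# against an ordinary file system (not Isolated Storage), that three-step dance looks something like the sketch below; the writeNewFile delegate is just a stand-in for however you produce the replacement file.

using System;
using System.IO;

public static class SafeFileReplacer
{
    // A sketch of the rename-based replacement described above, for a normal file system.
    public static void Replace(string path, Action<string> writeNewFile)
    {
        string backupPath = path + ".bak";

        File.Move(path, backupPath);      // 1. rename the original out of the way (cheap, fails fast)
        try
        {
            writeNewFile(path);           // 2. write the replacement under the original name
        }
        catch
        {
            File.Delete(path);            // drop any partial file the failed write left behind
            File.Move(backupPath, path);  // put the original back; we're right where we started
            throw;
        }
        File.Delete(backupPath);          // 3. only now remove the old file
    }
}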

And this is also why the Caching Block promises that it will remove any traces of any object that is being added or removed from the cache when an exception happens. The basic algorithm for adding something to the cache becomes something like this —

public void Add(string key, object value, CacheItemPriority scavengingPriority, ICacheItemRefreshAction refreshAction, params ICacheItemExpiration[] expirations)
{
    ValidateKey(key);
    EnsureCacheInitialized();

    CacheItem cacheItemBeforeLock = LockItemInCache();
    try
    {
        CacheItem newCacheItem = new CacheItem(key, value, scavengingPriority, refreshAction, expirations);
        try
        {
            backingStore.Add(newCacheItem);
            cacheItemBeforeLock.Replace(value, refreshAction, scavengingPriority, expirations);
            inMemoryCache[key] = cacheItemBeforeLock;
        }
        catch
        {
            backingStore.Remove(key);
            inMemoryCache.Remove(key);
            throw;
        }
    }
    finally
    {
        Monitor.Exit(cacheItemBeforeLock);
    }
}

And the backingStore.Add looks like this, refactored into a class called BaseBackingStore —

public virtual void Add(CacheItem newCacheItem)
{
    try
    {
        RemoveOldItem(newCacheItem.Key.GetHashCode());
        AddNewItem(newCacheItem.Key.GetHashCode(), newCacheItem);
    }
    catch
    {
        Remove(newCacheItem.Key.GetHashCode());
        throw;
    }
}

Please try to ignore the ugly implementation details above. There were some implementation issues with IsolatedStorage and path lengths that caused some problems. The essence of the solution, however, is that if anything goes wrong, immediately begin removing whatever we’ve touched, so we can at least leave the cache in a predictable state so someone can continue to use it.

The biggest downside of the inability of IS to do what I needed is that I had to change the DataBackingStore class to work the same way. I’d much rather offer the strong guarantee for the DataBackingStore, but then the two backing stores I implemented wouldn’t be substitutable for each other, and Barbara Liskov would be very unhappy with me!

Conclusion

Boy, that was probably my longest blog posting yet! I wanted to share the whole thought process of how I tried to get the best exception guarantee I could out of my code, and why I had to compromise. If I’ve left any questions unanswered, or wasn’t clear about something, please let me know, and I’ll add even more to this tome. Thanks for reading, and I hope it was interesting.

— bab

 

This isn’t how I learned TDD… — Updated!

Updated – 11/23/2005

The content on this page has been taken down. I want to thank those of you who voted on this topic to send the correct message to Microsoft, and those of you inside of Microsoft who took the complaints seriously and acted on them.

– bab

Original Post

This link has been making its way through the blogosphere over the past couple of days. Please take a minute and visit that link, and then come back here.

<Brief musical interlude>

I learned about this from Michael Feathers, who learned about it from Scott Bellware. Ladies and gentlemen, this piece of advice is slightly less than correct. The problem here is not that the advice is wrong (ok, that’s a problem, but we could overlook that since there is so much other good TDD information out there), it is that it is coming from Microsoft. Given that there are billions of MS developers, and they tend to go look to MS for advice about how to develop, this advice carries instant weight and credibility. And yet, it is not correct.

Fortunately, there is a feedback mechanism that you can use should you wish to. At the bottom of that page, there is a place to vote on the quality of information on that page. There is even a place to put comments, gently encouraging the content to be changed.

Why the advice is wrong

Now that the rant is over, I’d like to discuss what is so wrong with the advice. All you experienced TDDers will have picked up on it immediately after reading the page. The problem is that if you develop your software in the way described in that page, you remove the opportunity to learn from feedback. And after all, the whole point of TDD is to poke at your code, learn from it, and use that learning to poke at it again.

The advice as given is (slightly paraphrased with a few steps omitted):

  1. Make a list of tests that will verify the requirements. A complete list of tests for a particular feature area describes the requirements of that feature area unambiguously and completely.
  2. Create work items in Team Foundation Server for each test that needs to be written
  3. Write the shells of the classes and interfaces you’ll need to implement these tests
  4. Generate test stubs for each class and interface
  5. Carefully examine each test to make sure it tests what you think it should
  6. Run all tests and watch them fail
  7. Go through each test and update it to test what you actually want it to. This is where you implement each test fully
  8. Run all tests and watch them pass
  9. Fix bugs

Contrast this with a different approach to writing your code:

  1. Make a list of tests that will verify the requirements. Don’t worry if the list isn’t complete — you can add to it as you go and learn
  2. Write a test from your list that illustrates some behavior of your system that you consider important but can be implemented quickly (5 minutes?)
  3. Try to compile it and watch it fail
  4. Write just enough code to make that single test compile but fail. This may involve creating new classes and interfaces
  5. Run this test and watch it fail
  6. Make the test pass
  7. Run all tests and watch them pass
  8. If any failed, fix the code and rerun tests
  9. Refactor
  10. If feature is implemented and all refactoring is finished, then goto end
  11. Consider if there are new tests to add to your list
  12. Goto step 2

The key difference between these two approaches is that the top piece of advice encourages you to plot your entire course through unknown territory up front. You create your entire test list, commit it to stone through TFS work items, write all your classes and interfaces, generate all your tests, implement everything, and then look to see if things pass. If, in this case, you decide you were wrong about something, or some requirement changes, or some other strange set of cosmic events occurs, changing your mind is going to be painful. If you learn something new that makes you take a different course, you have to go back and change all this infrastructure you’ve built up around a guess about how something might work, and it just isn’t going to happen. In this style of development, I believe you’re going to take your first stab at the problem and then do whatever it takes to make that stab work.

In the bottom style of development, you’re encouraged to work entirely piecemeal. Try something, make it work, learn from it, and try the next thing based on what you’ve learned. If you change your mind, the only baggage you are responsible for bringing along with you is that baggage you’ve already created. You have nothing invested in your guess about how this thing might work, so learning and changing your mind is easy, quick, and cheap.

These are two different styles of development, and only one of them is TDD.

— bab

 

Announcing the Enterprise Library November CTP — available now!!

patterns & practices is proud to release to the web Enterprise Library for .NET Framework 2.0: November 2005 CTP (that’s a mouthful!). This version of the library has been recoded to take advantage of the new .NET Framework 2.0 features, including database providers, better support for logging and tracing, and the new System.Configuration subsystem. Blocks included in this release are:

  • Caching
  • Data
  • Logging
  • Exception Handling
  • Security
  • Security Caching


This version of the library also features a new Instrumentation subsystem and an entirely new dependency injection-based configuration system.

You can expect a series of webcasts discussing the various new features of the library to be coming soon. Keep an eye on this blog for updates on this.

The final version of the library will be released (if all goes well!) someplace around the middle of December, 2005.

— bab

Want a great job??? patterns & practices is hiring

One of the best gigs I’ve had in my whole life has been working with the great people at Microsoft’s patterns & practices group. These people are responsible for solving interesting problems in interesting new ways, and sharing their code and learnings with you, fellow Microsoft developers.

At this point, they’re looking to hire another developer to join their team. I warn you — the bar is high. They interview many and accept only the best. I’m not sure how they let me in — probably because I’m only a consultant, not an employee!

So, go ahead and swallow the blue pill. Check out Darrell Snow’s blog for details.

Updated: Corrected Darrell’s blog address. Stupid fingers…

— bab

Now playing: idobi Radio

What do you find hard to test?

OK, folks. I’m asking you —

What kinds of problems do you find hard to conquer through TDD?

I’m interested in writing a continuing series on my blog about things that are difficult to develop using TDD. I plan on taking the suggestions that I get, both through email and as comments on this blog post, and begin solving them using C# and .Net. The intent of this series is twofold: the first result I hope to get from this is the obvious one — a set of lessons about how to TDD hard problems. The second result I hope for is more important — I want to show that seemingly difficult problems are not as hard as they seem if you change the problem around to suit your needs. In other words, if something is hard to test, maybe you’re not thinking about it the right way.

Here is a suggestion to get you started:

  • A filesystem walker — walk a filesystem starting at a directory and do something or other for each file found. Maybe we’re searching for a particular file, or looking for a set of files matching a certain pattern, or whatever. The essence of this is that you need to walk a filesystem tree, and develop this code test first.
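
To give a flavor of what I mean by changing the problem around, one common way to make a walker like that testable (not necessarily the one I’ll use in the series) is to put the file system behind a small interface, so tests can drive the walking logic with an in-memory fake:

using System;

// A sketch only: the walker depends on an abstraction, so a test can hand it a fake
// IFileSystem that returns canned directory listings instead of touching the disk.
public interface IFileSystem
{
    string[] GetFiles(string directory);
    string[] GetDirectories(string directory);
}

public class FileSystemWalker
{
    private readonly IFileSystem fileSystem;

    public FileSystemWalker(IFileSystem fileSystem)
    {
        this.fileSystem = fileSystem;
    }

    public void Walk(string root, Action<string> visitFile)
    {
        foreach (string file in fileSystem.GetFiles(root))
        {
            visitFile(file);
        }
        foreach (string directory in fileSystem.GetDirectories(root))
        {
            Walk(directory, visitFile);
        }
    }
}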

I know many of you want to see GUIs and database code developed test first, but I intentionally haven’t mentioned them as examples above. I want to hear specifics about what makes them hard for you. Given that knowledge, we’ll focus on just those areas and solve those problems.

You can either respond through my blog here, or you can email your suggestions to TDD-problems@agilestl.com.

I’m looking forward to hearing your suggestions!

— bab

 

Long time, no see…

I know I haven’t posted for a long time, hopefully to the dismay of some of you. I’ve been involved with remaking myself physically, which has taken a tremendous amount of my time and energy, but has also paid off very well. During my time off from blogging, however, I’ve still been generating ideas on what I should write about. Here are some of the entries that I have coming in the next days, weeks, and months, in no particular order:

  • A continuing series on TDDing things that are hard to test. This series will examine how to use TDD to develop things that are generally considered difficult to write tests for as you go. Things that come to mind are applications that make extensive use of the file system, things that rely on databases, UIs, web forms, etc., multi-threaded systems, and more. Soon I’ll be posting a request for some of you to send me ideas about things that are hard to develop test-first, and I’ll take the best suggestions (or maybe the most difficult suggestions) and approach them from a strict TDD point of view.
  • My take on using the newly released VSTS as a TDD engine. I’ve been doing that now since pre-beta2, and I have some pretty strong opinions on things that work well, and things that really get in the way when trying to do TDD with the new version of VS.Net. I want to confirm that the issues I’ve had with previous versions are still present in the RTM version before I post. I promise that this review will be very interesting, challenging, and potentially controversial.
  • In depth discussions of new features and concepts in Enterprise Library for .Net 2.0 (actually not sure what the final name of this release is supposed to be). I personally rewrote the instrumentation subsystem for EntLib, and hopefully made it much easier and less intrusive to use. Along the way I learned a lot about performance issues with respect to reflection, how to design APIs for usability, and how to test things that were hard to test.

As for why I haven’t been blogging lately, the fact is that I just haven’t had the energy. You see, I’m 40 years old, and my body had started to look it and feel it. I’ve always been pretty athletic, but I’m beginning to understand why professional athletic careers are winding down and mostly finished by my age. I’m 5’ 11” tall, and I had just topped out at around 215 lbs, which put me on the border of being clinically obese. I was very unhappy with how I looked and felt, but also felt powerless to change it. But through a strange set of events, I began to make progress back towards a more healthy weight and attitude.

What I did was to travel with my family to Europe on vacation (Paris for 2 days, and different parts of England for 14 days). Without knowing it, my journey back towards health began there. Starting on the very first day, our eating habits changed, as we ate typical Paris fare, and walked everywhere. We ate less food, ate healthier food, and burned more calories. No more American super-sized portions! My 5 year-old son couldn’t keep up with all the walking every day, so I had him on my shoulders most of the time, burning more calories. And we did this for 2 weeks…

So, to sum up, for two weeks, we ate less and exercised more. All without knowing it!

Once we got home, my wife noticed that I didn’t seem as huge as I once did (she put it in a nicer way than that). I finally got the courage to weigh myself, and found that I had lost almost 10 pounds without trying. After a little reflection, I realized that we had changed our lifestyle during our trip (duh!), and I resolved to keep living like this from then on. And I have. I eat less food, eat better food, and exercise. I started walking everywhere I could, including to and from work at Microsoft in Redmond. Just that walking was about 4–1/2 miles a day. At my peak, I was walking twice a day for a total of about 40 miles a week. And the weight kept coming off! I lost 10 lbs, 15 lbs, 20 lbs, and I was really starting to feel good. So I started to mix in some running.

At first, I would run a mile or so every other day, then every day, then I would bump up my every-other-day long runs by a mile each week. After a bit, I found myself where I am now, 30 lbs lighter, running between 5 miles and 10k per day, 6 days a week, at about 5AM. And in the afternoon, I ride an exercise bike and lift weights. (Thus the reason for the lack of blogging). I just completed a 10k race in St. Louis this past weekend in 52 minutes, for a pace of 8 min, 25 sec per mile. My next goal is a half marathon in St. Louis on April 8, which means it’s time to start serious training. I’m considering doing a marathon as well, but that seems like too much pounding for my tastes.

I hope some of you made it this far in this entry, and I apologize for the entirely personal tone of it, but I wanted to brag a bit — I think I’ve earned it! Now that I’ve almost reached my goal weight (10 lbs to go to 175), I have a bit more time and energy available for technical topics, and I promise to be back blogging again.

— bab

Now playing: Rush – Rush (from vinyl) – In The Mood