Legacy Refactoring Series – Part 1

As promised, this is the first installment of a series of articles on refactoring a piece of legacy code that I have been given. The purpose of this particular piece of code is to use reflection to find all the types in an assembly, determine the relationships and associations between those types, and spit the resulting model out in XMI. For those of you who are unfamiliar with XMI, as I was/am, it is a standard interchange format, supported by UML tools, that allows models built in one tool to be viewed in another.

Current State of Codebase

In its current state, the codebase is not as healthy as it could be. There are three classes in this system. The first class is the GUI, which is pretty simple and self-contained. There might be some logic in it that could be refactored out into a Controller-type class, but I haven’t looked at it closely enough yet to know. In the same file as the GUI is the worker method for the thread where all the work happens. Not sure if I like this or not yet, but I’ll visit it at some point. There is a class called Assembly2UML, which is the main worker class for this tool. This class is responsible for finding all the types and their relationships, and it has one real method in it, Serialize, which is 200 lines long. We’ll revisit this Serialize method shortly, as it is our original target for refactoring. Finally, there is another file called XMIServices, which houses both the interface IUMLFormatter and its derived type XMIFormatter. It is the responsibility of this XMIFormatter to be called occasionally from the Serializer to output the growing XMI model as it forms.

That’s the basic architecture. As it is, it would be very difficult to test, so I’d like to make a few minor modifications to let us get it under some very simple tests.

Why is it hard to get under test?

I’d like to explain why this particular method is difficult to test, and also make a point about how we can design interfaces to make methods more easily testable. In this case, our problem is that the signature of the Serialize method looks like this:

        public bool Serialize(Assembly assembly, SerializationProgressHandler progressHandler)

The immediate issue with this is that in order to test this beast, we have to pass it an Assembly. This is a pretty high bar for us to get over to let us call this method — we have to have a compiled assembly on our disk somewhere that has just the right set of classes in it to let us test what we’re trying to look at.

If we look through the code, however, the method doesn’t use any fancy features of an assembly. All it does is ask the Assembly for an array of its Types and for its name. If we, perchance, were to pass in the name of the assembly and an array of types, we could avoid having to climb the hurdle of creating special test-only assemblies to allow us to test this method. That would be very cool, and would make our lives easier.

This leads me to my first point. When you’re creating methods in your system, please think about the requirements you’re putting on the callers of your methods, and make those requirements as lax as you can. In this case, we had to have an entire Assembly compiled and on disk before we could call this method, when we really only needed an array of types. If this code had been written in TDD style, I doubt this situation would have arisen, but it frequently happens in legacy code. We should try to make our methods as easy as possible for others to call, just to make our maintainers’ lives easier later.

Our first step

As Michael Feathers describes in his book, Working Effectively with Legacy Code (WELC), before we can test a method, we have to be able to sense what it does. What Michael means by that is that we have to have something to look at to know if the method does what it is supposed to. That can be a return code from a method, or a side-effect from it touching some other object during its execution. In this case, the authors of the original code gave us our hook for sensing in the IUMLFormatter interface. An implementation of this interface is passed in upon creation of an Assembly2UML object, so we’re all set with our sensor.

Now, the one implementation of IUMLFormatter that they provide is a pretty complicated one, the XMIFormatter, and I really don’t want to parse through the entire XMI output for a model to know whether or not our method still worked, so I’ll probably define a much simpler implementation of IUMLFormatter to use as a sensor in our tests. It’ll just make our asserts easier to write and read.
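Something like this sketch is what I have in mind. It’s hypothetical, since the only IUMLFormatter member you’ll see in this article is Initialize; the real interface has more members, each of which would record its call the same way:

public class RecordingUMLFormatter : IUMLFormatter
{
    // Every call the Serializer makes gets recorded here, so our asserts
    // can check what happened without parsing any XMI
    public StringCollection RecordedCalls = new StringCollection();

    public void Initialize(string assemblyName)
    {
        RecordedCalls.Add("Initialize: " + assemblyName);
    }

    // ...one similar recording method for each remaining IUMLFormatter member...
}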

But the first thing to do to make our tests easier to write is a simple, safe refactoring of the Serialize method. What we need to do is create a second Serialize method that is much friendlier toward unit testing. The original method, which is waaaay too long to show here, uses two bits of information out of the Assembly it is passed — the assembly name and its list of types. So, my first step in making this easier to test is to extract variables to hold these two values. This way, I only access them once each, and I can use the variables in place of the property calls throughout the rest of the method. This leaves the top of the Serialize method looking like this:

        public bool Serialize(Assembly assembly, SerializationProgressHandler progressHandler)
        {
            string fullName = assembly.FullName;
            Type[] typesInAssembly = assembly.GetTypes();

            int maxBound = typesInAssembly.Length * 3;
            int currentProgress = 0;
            if(progressHandler != null)
                if(progressHandler(this, 0, maxBound, currentProgress, String.Empty) == ProgressStatus.Abort)
                    return false;

            umlFormater.Initialize(fullName);

            //Figure out ranges and call on progress
            //First pass through types and create unique namespaces list:
            StringCollection namespacesList = new StringCollection();
            foreach(Type type in typesInAssembly)
            {
                if(!namespacesList.Contains(type.Namespace))
                    namespacesList.Add(type.Namespace);
            }

            // ...the rest of the 200-line method continues from here...

Now that I have these variables in place, and I’m using them throughout the rest of the method, I can do an Extract Method refactoring on everything after those two variable initializations and put it all into the Serialize method I really want to test:

        public bool Serialize(Type[] typesInAssembly, SerializationProgressHandler progressHandler, string fullName)

And now the old method, in its entirety, looks like this:

        public bool Serialize(Assembly assembly, SerializationProgressHandler progressHandler)
        {
            string fullName = assembly.FullName;
            Type[] typesInAssembly = assembly.GetTypes();

            return Serialize(typesInAssembly, progressHandler, fullName);
        }

And our new method, the one with the Type[] array, is completely and easily testable. Step one accomplished!

In the next installment

The next thing we need to do is to start characterizing what this method does with tests. This is another thing out of WELC — before you can start refactoring or changing what a method does, you have to get it under tests, and this first set of tests serves to characterize the existing behavior of the method. We’re going to wrap progressively bigger tests around this method, until we feel that it’s pretty tightly controlled by these tests. And then we’re going to write one more that exposes a bug that I see, recognize, understand, and already know how to fix. But I can’t do it until I can extract it out someplace where I can directly sense it, which means we have some refactoring to do first.
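To give you a taste of where we’re headed, the first characterization test might look something like this. It’s a sketch, built on the hypothetical RecordingUMLFormatter from earlier, and the expected values will be whatever the method actually does today, which is the whole point of a characterization test:

        [Test]
        public void SerializeTellsFormatterToInitializeWithAssemblyName()
        {
            RecordingUMLFormatter formatter = new RecordingUMLFormatter();
            Assembly2UML serializer = new Assembly2UML(formatter);

            // No compiled assembly needed any more -- just an array of types
            Type[] typesInAssembly = new Type[] { typeof(string) };
            serializer.Serialize(typesInAssembly, null, "MyTestAssembly");

            Assert.AreEqual("Initialize: MyTestAssembly", formatter.RecordedCalls[0]);
        }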

Until next time…

— bab

Learn about the infamous Delta Lounge — Where all the Enterprise Library dirty deeds were done…

Ron Jacobs has up Part 1 of a 2-part podcast series that Peter Provost, Scott Densmore, and I did about our working environment during Enterprise Library V1 development. I consider the fact that we were all put into the same room to be the critical success factor in letting us get Enterprise Library out the door in the time that we did, with the quality it has.

Our conversations in the 2-part series roam through how and why we used the Delta Lounge, who was in there, why it was important that devs, architects, and test were all in there at the same time, how our choices in music helped draw us together as a team, and more.

It was a lot of fun to do, so I have to believe it’s going to be a lot of fun to listen to. So, hear it from the horses’ mouths here!

— bab

Explaining TDD to a non-techie

I was explaining what was different about the agile approach to software to a non-technical person the other day, and I chanced upon an explanation that I kind of liked. She did, too, as it made sense to her.

So, this is the explanation I used:

Imagine you have to write a 10-page paper. One way of doing it would be to sit down and outline the whole thing carefully, and then write it. You don’t look back at any of it — you just write. Any editing that’s necessary will be done at the end. You’re proceeding along like this, about 1/2 of the way to your deadline, happy as can be, because you know what you’re going to write, you know where you are in terms of page count (never edited, though!), and then the news comes. Your project is being canceled, and you need to deliver right now!

So, what do you have now? You have a great plan for writing the entire paper (your outline), but only 1/2 of your paper written. You’ve introduced your subject and begun developing a few details, but haven’t come close to any solutions or recommendations yet. And you have some unknown amount of editing to do before your paper is even readable. Not a good place to be.

Or…

You could rough out the topics you wanted to cover first. Then you could write your first sentence. That sentence could express the most important thought of the entire paper. Not a lot of detail, no depth, not a lot of value, but it at least makes sense and makes your most critical point. Then you could write your second sentence, re-examining the first sentence to make the two sentences work together. Then you could write a little more, editing as you go, and so on. When you’re writing a paper like this, the paper is always finished as far as the quality of the existing writing goes; it just might not explore the subject in enough depth. And as you go further into the project, you are adding depth and details to a complete paper, reorganizing as you go. And when that magic day comes when your project is canceled, you’re left with a complete paper, edited and ready to go.

That explanation made sense to her, and everybody left happy 🙂

— bab

 

Now playing: Rush – Exit…Stage Left – Xanadu

TDD defeats Programmer’s Block — Film at 11

Cheesy title, but it caught your eye, eh? The real point of this article is to describe a TDD experience I had the other day (Friday), and how it changed my life (OK, a bit over the top, but hey, it kept you reading…)

Here is the scene. I’m working with Peter Provost and Jim Newkirk, building a Continuous Integration system to be used internally in patterns & practices. Peter and I have been the devs on it, and Jim provided the customer’s point of view. It has taken longer than we thought, mostly because things were harder than they seemed at first 🙂 (Does that ever happen to you???) But we’re winding up the initial implementation of this system, with one final piece left before we can put this thing into use.

Continuum

The general architecture of this thing is built from several disconnected services. There is a piece that monitors several different kinds of source code repositories, including Subversion and an internal one, called Source Depot. This was originally written as a trigger that lived in the same place as each repository, but we have lately come to realize that we’re not going to be able to have co-located code on the repository machines. So I have to convert our triggers into pollers, which is where I am right now. These triggers, soon to be pollers, talk to a web service called StarterService, which uses the ConfigurationService web service to look up config info on each project to build, and then sends a BuildCommand to a Windows service, BuildService, which runs a BuildWorker that queues all builds for a particular project and executes them one by one. Whew! That’s the system in a nutshell. As of right now, with triggers, this all works great. But I have to create these monitors, which is where my knock-me-over-the-head moment happened.

Where TDD Comes In

So, I was sitting at my little table in Scott Densmore’s office, my home away from home, with my hands on the keyboard, and no idea what to type. I had all kinds of thoughts running through my head, trying to figure out what to start with. I knew I wanted to find a way to reuse the existing trigger code in my poller, so I could keep my dev time down. I had promised to finish this thing this week, so I was feeling a lot of time pressure. And I didn’t know what to test. My head started to hurt.

Then, all of a sudden, a calm came over me. I remembered something Kent Beck had said a while ago — “What would you do if you had all the time in the world? Do that.” Simple words from a Master. I stopped worrying about the existing code, the deadline, and everything else. I just thought about what my poller needed to do. And I wrote this test:

        [Test]
        public void StartingBuildWithNoPreviousStateOnlyStartsBuildForLastChange()
        {
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor();
            monitor.StartBuilds();
        }

Now I was at least thinking in terms of the system I was trying to build, and how it would work. In order to test something, though, I had to find a way to sense (Michael Feathers’ word in his great book, WELC) what happened. I knew that my SourceDepotRepositoryMonitor had to talk to a repository of some sort, and I didn’t want to be colored by any previous implementation decisions. So I created a MockSourceDepotTrigger and an ISourceDepotTrigger, but I didn’t put any methods in either of them yet, because I didn’t know what they did. Yet. That led the test to change to this:

        [Test]
        public void StartingBuildWithNoPreviousStateOnlyStartsBuildForLastChange()
        {
            MockSourceDepotTrigger trigger = new MockSourceDepotTrigger("localhost:9876");
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor(trigger);
            monitor.StartBuilds();

            Assert.AreEqual("localhost:9876", trigger.GivenRepositoryLocation);
            Assert.AreEqual(19, trigger.GivenRevisionNumber);
        }

Now I had something I could test, and I was feeling better. I had quit worrying about what was and started thinking about what I wanted it to be. I still had a problem, because the SourceDepotRepositoryMonitor had to talk to a repository to get the data that it needed. I guess I should explain what this monitor is supposed to do. Its job is to ping the repository server every now and then to ask what the latest change number is. Then it has to kick off a build for each revision that has not been built yet. That’s it. So our Monitor has to have a Repository to talk to, so it can ask it the “What’s the last change number?” question. That means that the Monitor needed a SourceDepotRepository in order to talk to something. I already had this class built, and it had an interface that was usable, so I changed it a bit to let me create a mock to avoid the actual round trip to the repository service. This led to a MockSourceDepotRepository that allowed me to prime the pump with the fake data that I needed. Now my test looked like this:

        [Test]
        public void StartingBuildWithNoPreviousStateOnlyStartsBuildForLastChange()
        {
            MockSourceDepotTrigger trigger = new MockSourceDepotTrigger("localhost:9876");
            MockSourceDepotRepository repository = new MockSourceDepotRepository("localhost:9876", 19);
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor(trigger, repository);
            monitor.StartBuilds();

            Assert.AreEqual("localhost:9876", trigger.GivenRepositoryLocation);
            Assert.AreEqual(19, trigger.GivenRevisionNumber);
        }

And I could write the code to make that test pass.

On to the next test! This first test let me build a system that could start out fresh, without any notion of saved state, and start a build for the most recently checked in revision. But our goal is to maintain some state, so that we can kick off all the builds needed to bring our system up to date. So I started to think about how I might manage this previous build state. It really is just a number, so do I need any special processing for it at all? Isn’t it silly to wrap an int with a class? It sure felt like it was overkill.

But the fact was that I didn’t know where this number came from. I didn’t know where it had to go. I didn’t know how I was going to change it to reflect the newer builds that had just occurred. Questions, questions, questions. Since I didn’t have the answers to these questions, but I wanted to preserve the right to answer them later, I went ahead, almost against my better judgment, and created a PreviousBuildState class, along with its corresponding mock, MockPreviousBuildState. This led to this test:

        [Test]
        public void StartingBuildWithPreviousStateOfOneBeforeCurrentStartsBuildForLastChange()
        {
            MockPreviousBuildState previousBuildState = new MockPreviousBuildState(12);
            MockSourceDepotTrigger trigger = new MockSourceDepotTrigger("localhost:9876");
            MockSourceDepotRepository repository = new MockSourceDepotRepository("localhost:9876", 13);
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor(previousBuildState, trigger, repository);
            monitor.StartBuilds();

            Assert.AreEqual("localhost:9876", trigger.GivenRepositoryLocation);
            Assert.AreEqual(13, trigger.GivenRevisionNumber);
        }

The point of this test is to prove that if the previous build number is one before this newest build number, we would kick off exactly one build. I was able to directly implement this behavior, and I was really starting to like where this was going. I still wasn’t sure about the PreviousBuildState class, but I was willing to give it some time. The next test to write was to try to start several builds, and I was able to do that with the same test code, just changing the previous build state to a few before the build number that was being requested. Not a big deal to implement. This left me only one more thing to test before I got to testing error conditions — I had to make sure that I was updating the PreviousBuildState after each build kicked off. It was at this point that the sheer genius of the PreviousBuildState class became apparent to me 🙂 Since I had wrapped this number with a class, I had something I could tell to update itself, and I didn’t have to care how it happened. I knew that I was going to store it in a file for now, send it back to our ConfigurationService later, and maybe eventually store it in a database. This class was going to be my ticket to doing all that without changing any existing code! I had actually made the right choice, even though it felt wrong at that point. This led to the following test:

        [Test]
        public void PreviousBuildNumberIsIncrementedAfterSuccessfullyStartedBuild()
        {
            MockPreviousBuildState previousBuildState = new MockPreviousBuildState(12);
            MultiMockSourceDepotTrigger trigger = new MultiMockSourceDepotTrigger("localhost:9876");
            MockSourceDepotRepository repository = new MockSourceDepotRepository("localhost:9876", 14);
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor(previousBuildState, trigger, repository);
            monitor.StartBuilds();

            Assert.AreEqual(14, previousBuildState.GetLastBuildRevision());
        }

All that was left to do after this was some housekeeping stuff, like making sure that I didn’t increment the PreviousBuildState if I failed to start the build correctly for some reason. Those tests looked like this:

        [Test]
        public void FailedBuildDoesNotIncrementLastBuiltStateForOneRequestedBuild()
        {
            MockPreviousBuildState previousBuildState = new MockPreviousBuildState(12);
            ThrowingMockSourceDepotTrigger trigger = new ThrowingMockSourceDepotTrigger("localhost:9876", 0);
            MockSourceDepotRepository repository = new MockSourceDepotRepository("localhost:9876", 13);
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor(previousBuildState, trigger, repository);
            monitor.StartBuilds();

            Assert.AreEqual(12, previousBuildState.GetLastBuildRevision());
        }

        [Test]
        public void LastBuildFailingLeavesLastBuildSetToPreviousBuild()
        {
            MockPreviousBuildState previousBuildState = new MockPreviousBuildState(12);
            ThrowingMockSourceDepotTrigger trigger = new ThrowingMockSourceDepotTrigger("localhost:9876", 2);
            MockSourceDepotRepository repository = new MockSourceDepotRepository("localhost:9876", 15);
            SourceDepotRepositoryMonitor monitor = new SourceDepotRepositoryMonitor(previousBuildState, trigger, repository);
            monitor.StartBuilds();

            Assert.AreEqual(14, previousBuildState.GetLastBuildRevision());
        }

The implementation for these tests led to this class and associated interfaces:

using System;
using System.Collections.Generic;
using System.Text;
using Continuum.Triggers.Common;

namespace Continuum.SourceDepotTrigger
{
    public class SourceDepotRepositoryMonitor
    {
        PreviousBuildState previousBuildState;
        ISourceDepotTrigger trigger;
        SourceDepotRepository repository;

        public SourceDepotRepositoryMonitor(PreviousBuildState previousBuildState, ISourceDepotTrigger trigger, SourceDepotRepository repository)
        {
            this.previousBuildState = previousBuildState;
            this.trigger = trigger;
            this.repository = repository;
        }

        public void StartBuilds()
        {
            int latestRevisionNumber = repository.GetLatestRevisionNumber();
            int lastSuccessfullyBuiltRevision = previousBuildState.GetLastBuildRevision();

            int lastRevisionStartedSuccessfully = BuildAllRevisions(lastSuccessfullyBuiltRevision, latestRevisionNumber);

            previousBuildState.UpdateLastBuildRevision(lastRevisionStartedSuccessfully);
        }

        private int BuildAllRevisions(int lastSuccessfullyBuiltRevision, int lastRevisionToBuild)
        {
            int firstRevisionToBuild = (lastSuccessfullyBuiltRevision == PreviousBuildState.NoPreviousBuildStateAvailable)
                ? lastRevisionToBuild
                : lastSuccessfullyBuiltRevision + 1;
            int lastBuildStartedSuccessfully = lastSuccessfullyBuiltRevision;

            for (int revisionToBuild = firstRevisionToBuild; revisionToBuild <= lastRevisionToBuild; revisionToBuild++)
            {
                try
                {
                    trigger.StartBuild(revisionToBuild);
                    lastBuildStartedSuccessfully = revisionToBuild;
                }
                // Exception handled in trigger. We just have this here to maintain our counts
                catch { }
            }

            return lastBuildStartedSuccessfully;
        }
    }
}

I’m not too happy with the BuildAllRevisions() method, but I don’t know how to refactor it better yet. I’ll come back to it later when I have a few spare cycles to make it more readable. I’m wondering if there is a Build class hiding in there somewhere that would manage a lot of the stuff in this method. Or maybe a BuildableCollection that would have a ForEach method that would have the loop in it, or something like that.
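Just to capture the idea before I lose it, here’s a rough sketch of what that Build class might look like. This is purely speculative, not what the code does today:

public class Build
{
    int revision;
    ISourceDepotTrigger trigger;

    public Build(int revision, ISourceDepotTrigger trigger)
    {
        this.revision = revision;
        this.trigger = trigger;
    }

    // Returns true if the trigger accepted the build; the exception
    // itself is still handled inside the trigger
    public bool TryStart()
    {
        try
        {
            trigger.StartBuild(revision);
            return true;
        }
        catch
        {
            return false;
        }
    }
}

BuildAllRevisions() would then shrink to a loop that creates a Build per revision and remembers the last one whose TryStart() returned true.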

This is the PreviousBuildState class. It is an abstract class because I wanted to have a constant in there, and I couldn’t if it was an interface.

using System;
using System.Collections.Generic;
using System.Text;

namespace Continuum.Triggers.Common
{
    public abstract class PreviousBuildState
    {
        public const int NoPreviousBuildStateAvailable = -1;

        public abstract int GetLastBuildRevision();
        public abstract void UpdateLastBuildRevision(int lastBuiltRevision);
    }
}
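The mock version isn’t shown here, but reconstructing it from how the tests use it, it would look close to this. The constructor argument seeds the “previous” revision, and UpdateLastBuildRevision just remembers what it was told so the tests can assert on it:

public class MockPreviousBuildState : PreviousBuildState
{
    int lastBuildRevision;

    // Seed the mock with whatever "previous" revision the test needs
    public MockPreviousBuildState(int lastBuildRevision)
    {
        this.lastBuildRevision = lastBuildRevision;
    }

    public override int GetLastBuildRevision()
    {
        return lastBuildRevision;
    }

    // Just remember the new value so the test can check it later
    public override void UpdateLastBuildRevision(int lastBuiltRevision)
    {
        lastBuildRevision = lastBuiltRevision;
    }
}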

This is the interface over the trigger class that I expect to adapt to the currently implemented SourceDepotRepositoryTrigger class.

using System;
using System.Collections.Generic;
using System.Text;

namespace Continuum.SourceDepotTrigger
{
    public interface ISourceDepotTrigger
    {
        void StartBuild(int revisionNumber);
    }
}
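The trigger mocks aren’t shown in this post either, but their behavior is implied by the tests, so here is a plausible reconstruction. The details are my guesses rather than the actual code, and I’ll skip the MultiMock variant, which presumably records every StartBuild call rather than just the last one:

public class MockSourceDepotTrigger : ISourceDepotTrigger
{
    string repositoryLocation;
    int givenRevisionNumber;

    public MockSourceDepotTrigger(string repositoryLocation)
    {
        this.repositoryLocation = repositoryLocation;
    }

    public string GivenRepositoryLocation
    {
        get { return repositoryLocation; }
    }

    // Remembers the last revision the monitor asked us to build
    public int GivenRevisionNumber
    {
        get { return givenRevisionNumber; }
    }

    public virtual void StartBuild(int revisionNumber)
    {
        givenRevisionNumber = revisionNumber;
    }
}

// Succeeds for a set number of StartBuild calls, then throws, which is
// how the failed-build tests above drive the error path
public class ThrowingMockSourceDepotTrigger : MockSourceDepotTrigger
{
    int buildsBeforeThrowing;

    public ThrowingMockSourceDepotTrigger(string repositoryLocation, int buildsBeforeThrowing)
        : base(repositoryLocation)
    {
        this.buildsBeforeThrowing = buildsBeforeThrowing;
    }

    public override void StartBuild(int revisionNumber)
    {
        if (buildsBeforeThrowing-- <= 0)
            throw new InvalidOperationException("Simulated build start failure");
        base.StartBuild(revisionNumber);
    }
}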

And finally the SourceDepotRepository class is a concrete class, as I described above, but I made the method that actually talks to the repository a virtual method, and overrode it in the Mock version, so I wouldn’t have to make the round trip to the SD box.
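In other words, the mock repository looks roughly like this. Again, a sketch; I’m assuming the real constructor takes the repository location, since that’s what the tests pass in:

public class MockSourceDepotRepository : SourceDepotRepository
{
    int latestRevisionNumber;

    public MockSourceDepotRepository(string repositoryLocation, int latestRevisionNumber)
        : base(repositoryLocation)   // assuming the real ctor takes the location
    {
        this.latestRevisionNumber = latestRevisionNumber;
    }

    // Overrides the virtual method that would otherwise make the round
    // trip to the Source Depot server, returning canned data instead
    public override int GetLatestRevisionNumber()
    {
        return latestRevisionNumber;
    }
}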

Once I finished with these tests, I knew exactly how to proceed to finish this task. I just had to implement the real versions of the Mock classes I had, wrap the SourceDepotRepositoryMonitor in a thread, stick it in a service, and I was finished. Those were tasks I knew how to do easily, so my crisis had passed.

And it was the calmness that came from remembering Kent’s words, and from just writing that first test without regard to what I already had, that let me make any progress at all. I consider this a complete and total victory for TDD over my panic, and I was able to leave for home feeling good that I could finish this task very soon.

— bab

The Dispose pattern, Finalizers, and Debug.Assert?

Peter Provost, Scott Densmore, and I were talking about some additions to our project coding guidelines, and I proposed a new one that we wanted to get some feedback on from the community.

According to the Dispose pattern, you’re supposed to create classes that look like this:

public class BuildProcess : IDisposable
{
    ~BuildProcess()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // free managed resources here; cleanup of unmanaged
            // resources, if any, goes below, outside this if
        }
    }
}

Now the issue here is what should happen if your class is disposable and the finalizer gets called. Is this an error? I believe that it is. The only way for the finalizer to get called for this class is for me to have forgotten to dispose of the instance when I was finished with it. So my proposal for a change to the coding guidelines was to put a Debug.Assert() in the finalizer, so that I’ll know when I forget to call Dispose.
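Concretely, the finalizer would change to something like this (it needs a using System.Diagnostics at the top of the file):

    ~BuildProcess()
    {
        // If we ever get here, someone forgot to call Dispose(); make
        // that failure visible in debug builds
        Debug.Assert(false, "BuildProcess was finalized without being disposed");
        Dispose(false);
    }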

What does the community think about this? Is this reasonable? Is there some deep, dark .Net secret that would prevent me from doing this? Any opinions greatly appreciated.

— bab

New continuing series starting

Consider this a teaser of things to come 🙂

A group I am a member of, the C# Design Patterns Group, had a piece of software donated to it, and we are now responsible for its maintenance and upkeep. It just so happens that the software is not in the best possible shape, as far as design quality goes. Our plan is to fix a few bugs, refactor it a bit, and eventually add some new features to it. But we gotta write some tests for it first.

I plan on sharing how I write these tests with you, as I examine Michael Feathers’ new book, Working Effectively with Legacy Code. I’ll be talking about some new terms that Michael explains and working through some of the test-introduction patterns he describes. In the end, we’ll have a nicely factored system, and a good example of what to do when presented with an existing code-base.

Excited yet? I know I am!

The next article will explain the existing architecture, talk about what the program does, and show some examples of it working. Then we’ll get this beast into a test harness and start fixing a simple bug that results in an exception.

— bab

 

The difference between OEM and Retail when buying computer components

I may be in the process of learning a not-inexpensive lesson here, and I wanted to share it with others in the hopes that you may avoid my pitfall.

There are two ways to buy most computer components. You can buy full retail boxes, which are exactly as produced by the factory for the part’s manufacturer, or you can buy an OEM part. OEM stands for Original Equipment Manufacturer, which is someone who builds systems and buys that part to include in their system for resale. Sometimes the discount places on the web get batches of these OEM parts from OEMs who bought too many. In these cases, these discount places resell these parts to you, the end customer, at a price lower than the retail price.

The problem with that, as I’m finding out, is that the warranty for these parts is sometimes sold along with the OEM part to the OEM. So when that part dies, you have no one to go to for a warranty replacement. This is bad.

I had a hard drive die on me last week, and it was only about 6 months old, well within the warranty period. Unfortunately, I bought an OEM drive, and the original drive manufacturer no longer claims any responsibility for it. I’m contacting the place where I bought it now to see if they’ll replace it for me, and I have no idea if they will or not. Time will tell.

Either way, be aware of what you’re buying. I’m pretty sure I’m only buying retail boxes from now on, just for the peace of mind. It’ll cost a few more bucks, but that money is insurance against something going wrong later.

— bab

 

Building a new development machine

Update — Brad Wilson has given me some advice about RAM and video cards, and Sean Malloy advised me away from 25ms response time LCDs, so it’s back to the drawing board for those. I may also change out the Zalman cooler for one from Silenx, as well as a couple of new case fans. The Zalman cooler is actually about the same in terms of cost and noise, but the Silenx one doesn’t have the freaky blue LEDs in it 🙂 The Silenx case fans are supposed to be really quiet, so I may upgrade to those after the fact.

I have a little clarity now into my short-to-medium-term future, which seems to be centered on Windows development for a while. And I’m getting tired of doing all my work on my little laptop keyboard and monitor. So I want to build a desktop machine that I can use while I’m home. The only complication is that I want to be able to take my work with me when I go on the road. So here’s what I decided to do:

First of all, I’m going to jump on the Virtual Server bandwagon. I intend to build a PC that will allow me to do my development work in VPCs. These VPCs will live on an external 7200 RPM disk most of the time, but can be copied to my desktop machine for work when I’m home. I’ve become very pleased with the VPC style of development, especially in light of what happened to me last week. I was happily working away in VS.Net 2005 on a VPC and all of a sudden, everything locked up. I tried all kinds of things to get the VPC working again, until I finally tried to just copy the image someplace else on the external drive. When this failed, I realized that my drive had failed.

No big deal at all. Peter Provost keeps a bunch of sysprepped VPC images based on our common dev environment, so I replaced the hard drive, copied over a new VPC, set up the extra “stuff” that was needed, reinstalled VS.Net 2005, reloaded our dev software, and got our tests running. Start to finish of this process — 3 hours. Not bad 🙂

I figure I want to build the biggest, baddest PC I can, so it will support all this VPC stuff, and will also be upgradable over time. These are the parts I chose:

I’d really love any comments about these choices. I intend this to be the best dev PC I can build for the money, and game playing, etc., is completely ancillary. If anyone has any suggestions about different components I should use, or has advice about these components, I’d love to hear it.

I’m also looking for a decent, but inexpensive, 2-port DVI/USB KVM. The Belkin one is supposed to be “less than good”, and the others are >$400 🙁

— bab

Pointer to posts mentioned in my webcast

Hi, all,

I hope some of you had a chance to listen to my webcast yesterday on the Enterprise Library Caching block. I did an OK job — I was a little nervous at first and felt very rushed during it. Anyhow, I hope the content I wanted to get in was understandable…

At one point, someone in the talk asked about thread safety and performance of the block, and I mentioned that I had blogged about how the locking scheme in the caching block evolved over time. I wanted to post links to those previous articles to make them easier to find:

Scott and I also chatted very briefly about statics. Here is a link to Scott talking about an email that I sent him very early in my time at p&p.

Let me know if I can answer any questions about these posts or anything else I talked about during my webcast.

Scott Densmore and I talked about doing a few podcasts with Ron Jacobs covering other aspects of our experiences on Enterprise Library. We want to bring in Peter Provost and talk about our Triangle Lounge bullpen and our TDD process. Ron and I also talked about filming a video of an Enterprise Library presentation I’ve done in the past where I show about 10 lines of code and spend the rest of the hour using the configuration tool to play with them. Look for these pnplive shows in the coming month or so.

— bab