Upcoming talks in St. Louis

If you happen to be in St. Louis any time over the next few weeks, I’m talking at a couple of different user groups. On Wednesday, Sept 1, from 7-9PM, I’m talking at the XPSTL meeting in St. Charles. And I’m also talking at the St. Louis OOSIG meeting on Thursday, Sept 16th. The topic of both talks is the same:

Using Programmer Tests as Agile Documentation.

For more information, and an outline of the talk, please see the previous entry in my weblog.

— bab

Feedback sought on using Offline Application Block

Has anyone out there written any code using the Offline Application Block? If so, Microsoft would love to talk with you. patterns&practices is actively looking at what changes should be made to the OAB for version 2, so any feedback would be great.

In particular, we are looking for information on how easy the API was to use and understand, if there were any things that you were trying to do with the block that were impossible or needlessly difficult, and any new features or tooling that you’d like to see. The best avenue for giving this feedback is to join the Offline Application Block GotDotNet workspace.

— bab

Using programmer tests in place of some or all documentation?

I’m writing an article for an agile software magazine about how to use programmer tests in place of some or all written documentation, and I’d appreciate some feedback on a very early outline for the article. This is what I intend to cover:


  1. Different audiences for documentation


    1. Application programmers — users of our libraries

    2. Maintenance programmers — modifiers and extenders of our libraries


  2. Components of a good test


    1. Good names

    2. Clearly defined setup, processing, and asserting sections


  3. Test sufficiency — do I tell the whole story with my tests?


    1. Capturing design decisions

    2. Covering interesting use cases

    3. Showing error behavior

    4. Interactions with the environment

    5. Test organization for different audiences


      1. Application Programmers want to find common scenarios easily

      2. Maintenance Programmers want to follow original developer’s train of thought



  4. What prose and UML are needed to supplement these tests

  5. Future directions in Tests as Documentation (Automation topics mostly)
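To make item 2 concrete, here's a hypothetical example of the style I have in mind: an intention-revealing test name, plus clearly separated setup, processing, and asserting sections. The SimpleCache class and its API are invented purely for illustration; the test itself is written against NUnit.

```csharp
using System.Collections;
using NUnit.Framework;

namespace Example
{
    // Invented class under test, just enough to give the test something to say
    internal class SimpleCache
    {
        private Hashtable items = new Hashtable();

        public void Add(string key, object value) { items[key] = value; }
        public object Get(string key) { return items[key]; }
    }

    [TestFixture]
    public class SimpleCacheFixture
    {
        [Test]
        public void AddedItemCanBeRetrievedByItsKey()
        {
            // setup
            SimpleCache cache = new SimpleCache();

            // processing
            cache.Add("color", "blue");

            // asserting
            Assert.AreEqual("blue", cache.Get("color"));
        }
    }
}
```

The name alone tells an application programmer what the cache promises, and the three sections let a maintenance programmer see at a glance what state is required, what is exercised, and what is guaranteed.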

Obviously this is still pretty rough. I intend to use the tests for a Caching block I just wrote for Microsoft Enterprise Library, along with pseudocode for the library, to show the design decisions. The goal is to show devs how to create tests that will serve as documentation.

Does this sound remotely interesting? Any suggestions about other things to cover?

And for those of you in St. Louis, I’m going to be giving a talk on this subject at this month’s XPSTL (9/1).

— bab

Button’s Law of Design Maturity

The prevalence of Singletons in a design is inversely proportional to the maturity of the design, and the designers.

Be it known, that from this day forward, this shall be known as Button’s Law. (Just kidding)

There is a note of truth here, however. I've talked to a lot of the best designers that I know in the field, and we all share a common opinion. (Not that I'm putting myself in the same class as these developers. I'm merely stating that we share the same opinion!!!) Singletons can be a useful pattern, but they are also the most abused pattern seen in the wild. You can read about my personal feelings about singletons in Scott Densmore's blog.

Joshua Kerievsky has written an excellent summary of this point of view in his new book, Refactoring to Patterns. Reading this chapter in his book is what brought this rant to mind.

I now return you to your regularly scheduled programming.

— bab

Quick trip report from XP/Agile Universe

I just got back from the 4th annual XP/Agile Universe conference, held this year in Calgary. I was only able to attend the first 2 days due to other obligations, but I wanted to share what happened while I was there.

It started Sunday, with a day full of tutorials. I've been the tutorial chair for the last three years, and I've noticed a shift in the attendance patterns. The first year, we had more tutorials focused on basic agile programming techniques, like TDD and refactoring. Now the tutorials seem more focused on project management and process details, and the seats sold definitely favored the PM and process topics over the more technical practices.

Monday morning, the conference officially opened with three keynotes. In keeping with tradition at this conference, one of the keynotes came from an "outsider", someone not part of the XP/Agile movement. This year it was Chris Avery, who talked about Ultimate Agility: how to change yourself without actually changing. This fit in very nicely with the psychological bent of many of the original XP'ers. I've personally always had a bit of a problem with the touchy-feely side of the agile methods, but it is an integral part of the change to agility.

The next two talks were more to my liking personally. Each of them described a particular part of "Crossing the Chasm", the process of bringing XP and the agile methods into the mainstream. Mary Poppendieck talked about how to sell XP to management, which is the challenge that we face now. We are at the edge of the chasm with respect to the adoption of agility: the early adopters and visionaries are already on board with XP, but we now have to start talking to the pragmatists. These are people who aren't interested in technology, or in adopting new things just for the sake of their newness; they want to find out what other pragmatists are doing with it. This means that we can't sell XP to this group the same way we sold it to the early adopters. Instead, we have to show them others in their shoes who are using the agile methods and succeeding with them.

And, finally, Brian Marick talked about how TDD has already crossed that chasm. Writing code test first and refactoring have made it into the mainstream, as demonstrated by the prevalence of xUnit-like tools built into many IDEs. Writing unit tests is becoming part of the everyday job for more and more people, and now we just have to ride that wave and try to stay in front of its crest. (Jim Newkirk has an excellent summary of Brian's talk.)

As with most conferences, the best part is the hallway conversations that happen between sessions and in the bars at night. This conference was no exception, and I had a great time seeing and talking to old friends. One thing that did happen at this conference is that a lot of my friends had new books either on sale or close to being released. I bought a couple of them, and I’m looking forward to the rest. These are just a few of the new books I saw:

  • Joshua Kerievsky — Refactoring to Patterns. This book looks like a follow-on to Fowler's Refactoring book, but at a higher level.
  • J. B. Rainsberger — JUnit Recipes. I can't wait to read this one. It's full of practical advice on how to test lots of different situations.
  • Michael Feathers — Working Effectively with Legacy Code. Michael's specialty is teaching teams how to tame their legacy code bases, and he has put together a book very much like Fowler's and Kerievsky's in that it is a book of refactoring recipes.
  • Herb Sutter — Exceptional C++ Style. A follow-on to Exceptional C++ and More Exceptional C++. I try to read everything Herb writes, and Stan Lippman recommended it in his blog, too!

I’m sure there were a few other books I forgot, but the universe of agile books out there is getting larger and better. (I need to write a book!)

Finally, it was announced at this conference that the Agile Alliance is taking over and merging the two North American agile conferences, the Agile Development Conference and XP/Agile Universe. The merged conference's new name will be announced at the end of the week, and it will be held in Denver in 2005.

— bab

Background Noise in a Team Workspace

I've been working on a project with about 10 other people in a really small conference room at Microsoft that we've dubbed The Triangle Lounge. The room could comfortably fit about 5 people, but we've been cramming 8-10 in there every day. Why do we do it? Because we find that we are much more productive as a team if we all sit in the same place.

When we all sit in the same room, we have many more opportunities for spontaneous conversations about whatever kind of problem comes up. We've also found that having everyone in the same room has made us into a team. We've gotten to know each other, so we joke around, and when someone starts quoting some movie, the entire room finishes the quote. We can also handle the mini-crises that come up every day during development. Immediately. Without calling a meeting. Because we're all there in the same room.

As you might expect with so many people in a small room, it can get really noisy. This noise was initially a pretty big problem to those on our team who were not used to working in an open workspace environment. They had problems concentrating on their tasks, and they would get drawn into every conversation. But over time, we all learned Project Selective Hearing.

Project Selective Hearing

When a crowd of people are all in the same room, all carrying on different conversations, it is really difficult to focus on a task. Your mind hears snippets of many conversations and tries to listen to them all. The net effect of this is that your mental energies are spent listening rather than concentrating on what you’re really trying to do.

That all changes, however, if the conversations all center around your task. As you work for longer in an environment like this, your mind learns to process what it is hearing in the background. Somehow, and I don’t know how, you begin to subconsciously listen to everything that is said without being distracted. And you always seem to hear the right word at the right time to join in the right conversation. But you also don’t hear the extra words that you don’t need to hear.

It happened on our team. We went from being constantly distracted by all the background conversations to hearing only the ones that really involved us. And once we heard them, we were able to turn around and join in. Working together in an open workspace and developing this skill of Project Selective Hearing has allowed us to work together closely without being overwhelmed.

Have your teams noticed this same effect? I’d love to hear about it.

— bab

Microsoft Enterprise Library

One thing I haven't mentioned about myself yet is that I'm part of the Microsoft Enterprise Library team. I've been working on this project since its first day, back in February, and MS has finally taken the wraps off the project. This has freed us up to start talking about it, and talking I intend to do! You can see the blogs of the other Enterprise Library team members in the list at the far left of my blog. All of them are recommended reading, as they are all very bright, energetic, agile programmers.

What we've been doing is taking the Patterns and Practices Application Blocks written by Microsoft, merging them with similar functionality from an MS partner, and releasing them as a coherent library. The library has a consistent coding style, consistent content and documentation, and tooling to help with configuration. We've been working as a joint team in Redmond for almost 6 months now, and we're ready for our first release.

You can learn all you want about what is in Enterprise Library V1.0 here, and at a GotDotNet workspace coming soon, but I want to tell you about my part in this, and what I'll be blogging about over the next few weeks or months.

First of all, I was one of the developers on the project. Given that I have 17 years of experience developing software, I was one of the more senior people on the team, but I was still a developer. I consider this to be a great situation, since it lets me combine the two loves of my professional life: writing code, and helping younger developers grow and improve.

As a developer on this project, I am, of course, responsible for my own deliverables. As of today, those have consisted of the Data Access Block and a reimplementation of the Caching Block to address some issues that came up with the original implementation. I've also done code reviews, fixed bugs around the system, and written and reviewed documentation. In the beginning, I was also one of the main drivers towards agility on this project, insisting on Test Driven Development as a large part of our process, and helping all the other developers on the team learn how to write code test first. (To be entirely fair, Jim Newkirk of NUnit fame is the Microsoft Development Lead in Patterns and Practices, and his voice was heard much more loudly and clearly than mine on this topic. Jim did a great job of starting off the agile approach, and did a lot throughout the development cycle to encourage all levels of the team to stick with it every day. Scott Densmore, another MS dev, was the driving force in getting developers to adopt it internally, and I was the in-the-room resource for helping folks learn this stuff, as well as the Pain in the Butt coach who helped them stick with it daily.)

What I intend to do over the next little bit is write about our development process, how being agile helped us deliver a tremendous amount of content in a short amount of time, and how we grew as a team over the project. I've already started writing a bit about this, as the previous post about Project Selective Hearing was taken from our experience in the Triangle Lounge, which is what we call our little work room in Redmond. I'll also talk about technical issues faced and overcome during the implementation, which I've also begun doing with my post about the Active Object pattern.

My main goal in writing about these topics is to share how it is possible to take a bunch of very different, very strong-willed, and intelligent people and help them form a well-jelled team. Through our actions, both overt and covert, we came together to work towards our common goal, but not without a lot of pain, especially in the beginning. But we made it, and I want to tell you how. More on this later…

— bab

The Present I Promised

As promised in the previous blog entry, here is the code I used to implement the Active Object pattern on a .Net project I’ve been working on.

Here’s the setup for the pattern again. I was working on rewriting some caching functionality on this project, and I had some housekeeping operations that had to happen in the background. This meant that I had to have multiple threads operating at the same time, but I had several choices as to how to do this.

The default way of doing work on multiple threads in .Net is to kick off the processing on a ThreadPool thread. This works out well if you don't need to actively manage the threads, but just want them to operate on their own. Callers start the work using the usual asynchronous method invocation machinery in .Net. The problem with this is that the threading policy is exposed to the entire world. If you ever wanted to change to programmatically controlled threads rather than thread pool threads, or to call the methods on the caller's own thread, you'd have to change all of the calling code in the entire application. This is Bad.
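Here's a minimal sketch of what that default approach looks like (the names and the scavenging example are invented for illustration). Notice that the caller itself decides the work goes to the ThreadPool, so every call site bakes in the threading policy:

```csharp
using System;
using System.Threading;

namespace Example
{
    // Sketch of the "default" approach: each caller talks to the
    // ThreadPool directly, so the threading policy leaks everywhere.
    internal class CallerSketch
    {
        private static ManualResetEvent done = new ManualResetEvent(false);
        public static bool Scavenged = false;

        public static void Main()
        {
            // The caller, not the library, chooses the threading policy here
            ThreadPool.QueueUserWorkItem(new WaitCallback(DoScavenging));

            // Wait for the pool thread to finish before exiting
            done.WaitOne();
        }

        private static void DoScavenging(object state)
        {
            // scavenging work would go here, running on a pool thread
            Scavenged = true;
            done.Set();
        }
    }
}
```

Every place that looks like Main above would need to change if you ever wanted a dedicated thread instead of the pool, which is exactly the problem.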

What you’d rather do is to encapsulate the threading policy inside a class. That way, changing your mind is easy and cheap.

Now that we've decided where to put the logic, we have to decide how to represent it. What we'd really like is for clients to talk to our class as if they were directly calling a method, but to have our class invoke the method for us, on our own thread of control. This is the essence of Active Object: it decouples the invocation of a method from that method's execution. The advantage is that it allows you to serialize these method invocations, eliminating threading concerns from the invoked code. Single threaded code is much simpler to write, much less error prone, and generally to be preferred whenever possible.

So, how is this done? It’s actually pretty easy. The overall process involves a few moving parts.

  • Target Class — class on which methods are called
  • Command Classes — Part of communication mechanism between threads
  • Command Queue — Transfers command objects between threads

Target Class

The target class needs two different sets of methods. The first set makes up the public API and is used by outside callers. This API doesn't actually cause the behavior to happen, but it does arrange for the other set, the internal set, to be called through the communication mechanism. In my case, this class was called BackgroundScheduler, and it was responsible for allowing scavenging-type operations to happen when needed.

using System;
using System.Threading;

namespace Example
{
    internal class BackgroundScheduler
    {
        private ProducerConsumerQueue inputQueue = new ProducerConsumerQueue();
        private Thread inputQueueThread;
        private ScavengerTask scavenger;

        public BackgroundScheduler(ScavengerTask scavenger)
        {
            this.scavenger = scavenger;

            ThreadStart queueReader = new ThreadStart(QueueReader);
            inputQueueThread = new Thread(queueReader);
            inputQueueThread.IsBackground = true;

            // start the queue-reader thread; without this, nothing ever runs
            inputQueueThread.Start();
        }

        public void StartScavenging()
        {
            inputQueue.Enqueue(new StartScavengingMsg(this));
        }

        internal void DoStartScavenging()
        {
            scavenger.DoScavenging();
        }

        private void QueueReader()
        {
            while (true)
            {
                IQueueMessage msg = inputQueue.Dequeue() as IQueueMessage;
                if (msg == null)
                {
                    // Dequeue returns null if the reader thread was interrupted
                    continue;
                }

                try
                {
                    msg.Run();
                }
                catch (ThreadAbortException)
                {
                }
            }
        }
    }
}

BackgroundScheduler is responsible for orchestrating all the activities in this little drama. It owns a ProducerConsumerQueue, which holds the IQueueMessage objects as they are passed between threads. It also owns and manages its own internal thread, which is where the processing actually happens. When a request to do something comes in, the BackgroundScheduler takes that request, wraps it in a concrete IQueueMessage, queues that message, and returns to the caller. Later, the internal thread runs, picks the message up from the other side of the ProducerConsumerQueue, and executes it. The nice part about this is that each operation runs to completion before the next background task starts. Single threaded code!

Queue Messages


The IQueueMessage interface is really very simple, consisting of only a single Run method.

namespace Example
{
    internal interface IQueueMessage
    {
        void Run();
    }
}

This interface is implemented by StartScavengingMsg.

namespace Example
{
    internal class StartScavengingMsg : IQueueMessage
    {
        private BackgroundScheduler callback;

        public StartScavengingMsg(BackgroundScheduler callback)
        {
            this.callback = callback;
        }

        public void Run()
        {
            callback.DoStartScavenging();
        }
    }
}

When the Run method of this class is called, it just calls back into the BackgroundScheduler, invoking the internal API that causes the behavior to run. At construction, each IQueueMessage instance is given a reference to the object to call back, so that it can invoke the behavior. One criticism I received during the code review was that I could have used a delegate instead of the IQueueMessage interface and saved myself from writing these simple command classes, but I thought the explicit interface communicated better than a delegate. Maybe that's my old C++ background shining through, but I think it is easier to read like this, so I kept it as you see it.
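For the curious, here's roughly what that delegate-based alternative might look like. This is only a sketch: the names are invented, and a plain Queue stands in for the ProducerConsumerQueue so the example stands alone. The queue carries delegates instead of command objects, so no IQueueMessage hierarchy is needed.

```csharp
using System;
using System.Collections;

namespace Example
{
    // A delegate plays the role that IQueueMessage.Run played
    internal delegate void QueueTask();

    internal class DelegateSchedulerSketch
    {
        // Plain Queue used here purely to keep the sketch self-contained
        private Queue inputQueue = new Queue();
        public int ScavengeCount = 0;

        public void StartScavenging()
        {
            // Enqueue the work itself rather than a command object
            inputQueue.Enqueue(new QueueTask(DoStartScavenging));
        }

        private void DoStartScavenging()
        {
            // scavenging work would go here
            ScavengeCount++;
        }

        // Stands in for the background thread's read loop
        public void DrainQueue()
        {
            while (inputQueue.Count > 0)
            {
                QueueTask task = (QueueTask)inputQueue.Dequeue();
                task();
            }
        }
    }
}
```

It's certainly fewer moving parts, but the delegate version gives the reader no named type to grep for when tracing what kinds of requests flow through the queue, which is the readability point at issue.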

Producer Consumer Queue


The final piece of the puzzle is the ProducerConsumerQueue. This is a special kind of queue where producers can store messages into one side from any thread in the system, but the queue is drained by a single consumer on its own thread. The consumer waits until there is a message to read from the queue, then reads it and returns it.

using System;
using System.Collections;
using System.Threading;

namespace Example
{
    internal class ProducerConsumerQueue
    {
        private object lockObject = new Object();
        private Queue queue = new Queue();

        public int Count
        {
            // lock here too, since Count reads state shared across threads
            get { lock (lockObject) { return queue.Count; } }
        }

        public object Dequeue()
        {
            lock (lockObject)
            {
                while (queue.Count == 0)
                {
                    if (WaitUntilInterrupted())
                    {
                        return null;
                    }
                }

                return queue.Dequeue();
            }
        }

        public void Enqueue(object o)
        {
            lock (lockObject)
            {
                queue.Enqueue(o);
                Monitor.Pulse(lockObject);
            }
        }

        private bool WaitUntilInterrupted()
        {
            try
            {
                Monitor.Wait(lockObject);
            }
            catch (ThreadInterruptedException)
            {
                return true;
            }

            return false;
        }
    }
}

This class is actually pretty simple. Enqueue and Dequeue synchronize on a shared lock object through the Monitor class. In Dequeue, the code waits for the lock object to be pulsed. (Monitor.Wait releases the lock while it waits, which is what lets producers get in.) When Dequeue receives the pulse, it pulls an object off the queue and returns it to the caller. In the meantime, from any other thread in the system, other code is free to add messages to the back of the queue. Each of these enqueue operations pulses the lock object, triggering the dequeue in the other thread.

Obviously, given the multithreaded nature of this class, all operations that touch the shared state of the ProducerConsumerQueue must be protected by locking on the lockObject. This is the only code in the entire design that explicitly has to understand threading issues.
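To see the queue's contract in action, here's a small usage sketch. It assumes the ProducerConsumerQueue class above is in scope, and the QueueDemo name is invented. A producer thread enqueues a few messages while the consumer blocks in Dequeue and receives them in FIFO order:

```csharp
using System;
using System.Threading;

namespace Example
{
    // Tiny exercise of the ProducerConsumerQueue shown above
    internal class QueueDemo
    {
        private static ProducerConsumerQueue queue = new ProducerConsumerQueue();
        public static string Consumed = "";

        private static void Produce()
        {
            // runs on its own thread; Enqueue is safe to call from anywhere
            for (int i = 0; i < 3; i++)
            {
                queue.Enqueue("message " + i);
            }
        }

        public static void Main()
        {
            Thread producer = new Thread(new ThreadStart(Produce));
            producer.IsBackground = true;
            producer.Start();

            // Dequeue blocks until each message arrives, in FIFO order
            for (int i = 0; i < 3; i++)
            {
                Consumed += (string)queue.Dequeue() + ";";
            }
        }
    }
}
```

The consumer never has to poll or sleep; the Monitor pulse inside Enqueue wakes it exactly when there is work to do.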

Conclusion


That’s all there is to it. Using these few simple classes, I was able to allow callers to invoke behavior that my implementation chose to run in the background. The caller didn’t know anything about my threading policy, as it was entirely contained in my BackgroundScheduler object, so I was free to change that policy on a whim. I was able to keep the code inside the ScavengerTask (not shown) single threaded, since I was guaranteed that only one instance of it would be running at a time, and I was able to control when and where that code ran.

The only improvement I'd like to make to this little grouping is that I'd like to find a way to pass only an interface to the callback methods to the IQueueMessage objects. In C++, I'd do this by creating a private base class and passing a reference to it to the objects, but I can't figure out a similar solution in .Net. There are times when I long for the expressiveness and power that is C++ 🙂

Hopefully some of you actually read down to here, and if so, thanks for listening!

— bab

A Pattern and a Present

What’s In a Name


It's 1995, and the GoF book has just been published. For those of you who do not know what the GoF book is, it is the original Design Patterns book, written by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. At that point in time, very few developers had ever heard of patterns as reusable problem solutions. I was fortunate to be hooked in with some very smart people, so I was pushed towards reading this book, and it has served me well to this day.

Cut to 2004, almost 10 years after this book was published, and still, very few programmers have read this book. This book contains such important knowledge that it should be read cover to cover by anyone developing OO software. In addition to the patterns described in this book, there are thousands more patterns available on the web, in books, in magazines, and in the heads of your fellow developers. Each of these patterns represents some practical knowledge that you can use to solve an interesting problem the next time you face it.

For example…

I was implementing a cache library the other day, very much like the ASP.Net cache, with a few more features to it. The basic operation of this cache is pretty simple, in that it just takes keys and values provided by the user and stores them both into an in-memory table, and into a replaceable backing store. The backing store can be nothing, to provide only in-memory functionality, or it can be SQLServer, Isolated Storage, files, etc. Everything happens in the same thread as the caller, with respect to adding, removing, finding, etc.

But there are background operations that go on as well. Every so often, a piece of the cache wakes up, wanders through the items in the cache, and checks to see if any of them consider themselves expired, according to criteria and policies provided by the user. And after each add is done, the cache goes through and makes room for subsequent adds based on a scavenging algorithm. As I said, both of these operations go on in the background while the user is adding, removing, and finding stuff in the cache from multiple application threads. Given the opportunity, I could easily have multiple expirations and scavengings happening at the same time, which would be a nightmare to manage. So I was looking for a way to serialize these requests without adding a ton of extra complexity to the code, and without changing anything in the client code.

What really needed to happen was for the client code to call the existing background task scheduler the same way as always, while the background scheduler itself got a little smarter. This was the perfect place to apply the Active Object pattern. Active Object is a tool you can use to allow a server object, like our background scheduler, to receive requests from callers, in the caller's thread, and execute those requests in the server's own thread. It does this through some extra machinery that reifies each request into a Command object (Command is one of those GoF patterns; I'll show an example in a minute), stores it into a special kind of queue called a Producer/Consumer Queue, and has the scheduler pull the request off the queue in the scheduler's own thread. Now the scheduler can execute the requests one at a time, without having to implement some sort of complicated locking scheme.
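Just to give a taste of the reify-and-queue idea, here's a throwaway sketch. All the names are invented, and everything runs on one thread here, purely to show the shape: the request becomes an object, sits in a queue, and gets executed later, one at a time.

```csharp
using System;
using System.Collections;

namespace Example
{
    // Bare-bones GoF Command: the request is reified into an object
    // whose Execute method can be stored and invoked later
    internal interface ICommand
    {
        void Execute();
    }

    internal class ExpireItemsCommand : ICommand
    {
        public bool Ran = false;

        public void Execute()
        {
            // the expiration sweep would go here
            Ran = true;
        }
    }

    internal class CommandRunner
    {
        private Queue pending = new Queue();

        public void Submit(ICommand command)
        {
            // invocation: just remember the request and return
            pending.Enqueue(command);
        }

        public void RunAll()
        {
            // execution: commands run one at a time, in order, which is
            // what lets the invoked code stay single threaded
            while (pending.Count > 0)
            {
                ((ICommand)pending.Dequeue()).Execute();
            }
        }
    }
}
```

In the real thing, Submit happens on the caller's thread and RunAll's job belongs to the scheduler's own thread, with the Producer/Consumer Queue bridging the two.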

This blog entry has gotten pretty long, so I’m going to end it here. I’ll start putting up code samples later, to better explain what I was talking about.

Online example of TDD’ed code

Over the years, I’ve gotten a lot of requests for non-trivial examples of code entirely written using TDD and Simple Design. I can finally give you that example.

This code is for the Offline Application Block, part of Microsoft’s Smart Client initiative. Basically, it implements a framework that will allow client code to operate in much the same way, whether it is connected to the internet or not. It was written over a period of about 12 weeks, and was done entirely test first.

Due to legal restrictions, Microsoft is unable to release the unit tests along with the application block, but they are available through the GotDotNet Smart Client workspace. You'll have to join the workspace to see them, but if you want examples, it should be worth it.

I’d also welcome any kinds of questions about the design, the tests, or how the block was written.

— bab