Sometimes you accomplish nothing but you learn everything. That’s what happened to me yesterday in the Triangle Lounge. Quick background — I’m working on the Enterprise Library project in the Microsoft patterns & practices group. We’ve been working on consolidating many of the current MS Application Blocks into a single, coherent offering. We all sit in one room, called the Triangle Lounge, including the 3 remaining developers and the 3 testers.

This post centers on our relationship with the testers. They sit in the same room as us, laugh with us, talk with us, but I don’t feel like they’re really part of the team. They’re the testers. While we’re busy developing code and writing tests, they are still manually inspecting and testing everything that we do, clearly apart from our culture. This bothers me on several levels.

First of all, there is a natural tension in our relationship, since they are busy attacking everything that we do. That’s their job, and they are the best I’ve ever met at doing it. But I feel like I’m on the defensive when I’m talking to them, justifying my design and code. The words I most hate hearing are, “Brian, can I ask you a question?”, because they always lead to an issue found in the code, and to me feeling defensive. I don’t like that, and I’m not sure what to do about it.

Second, I don’t like that they manually test everything. We spend all this time writing automated tests for our software, and they write small test programs that they drive manually. All of their knowledge about what they’ve tested and how they’ve found bugs is wrapped up either in their heads or bug reports, and we can’t easily get at it or use it. This is how they were trained, and I’m not seeing any inclination to change. The problem with it is that it makes any changes to the software harder to justify, since there has to be a separate, manual QA pass through the code once we’re finished. If their tests were automated… (Side note — would I feel less defensive about my code if I were presented with an automated test that I could run and inspect, rather than just a bug report? Interesting question)

Third, they write too many bugs. I mean this seriously. I feel that the testing team is an island, cut off from input from our customer, the Product Manager. They write bugs on any issue they see, whether it is major or exceedingly minor, and the onus is on someone else to figure out whether the bug is worth fixing. I guess this is probably the role of backend QA on a project, and I need to look at how we can improve our process to keep us from spending time fixing things that don’t need to be fixed. But that additional step feels aggravating to me. And that’s where this story begins (finally!).

This post is really about what happens on a team when two cultures interact. I’m trying really hard to get the developers to walk into our customer’s office whenever we have a question about how something should work, and get the answer straight from the horse’s mouth (sorry, Tom, I’m not calling you a horse!). After all, he is the guy who understands the context where our product will live, and he is ultimately responsible for its content. And we, as a development team, are doing a really good job of that. To be honest, the job has become easier over time, as the development team makeup has changed and we’ve gotten smaller and filled with people who are more devoutly agile. Our team is now three developers: me, Scott Densmore, and Peter Provost. We just lost TSHAK, who was a bit younger, but easily test-infected and really good. With Scott and Peter on the team, we’re definitely feeling pretty agile about ourselves, so we’re doing a lot better at involving Tom. But the testing team just seems different. They are an island. There have been questions raised before where interaction with our customer would have sped things up tremendously, but I’ve never seen it happen. The testers log issues in our bug tracking tool and wait until someone notices them before anyone starts talking. And this is what got me yesterday.

There was a bug logged that, in certain exceptional circumstances, would cause an exception to be logged to the event log twice. It would happen exceedingly rarely, there was a pretty easy workaround for it, it was minimally harmful, and it was only found by manually injecting exception-causing code into the application code. Finding it was a great catch, but I’m not sure the bug was handled properly. I hope it doesn’t seem that I’m dumping on the testers, because I don’t want to. The same tester who wrote this bug also helped me tremendously in getting the Caching Application Block tested to the level of quality where it is today. I respect the work that our testers do tremendously (Prashant, Rohit, and Mani); I just really want to change how they work.

Anyhow, this bug was written, and it was given a severity of 2, which is pretty high. In fact, we’re not supposed to ship with any Sev 2’s, so I spent all day yesterday trying to write a test to reproduce it, so I could fix it (remember TDD? Write a failing test, then make it pass. Words to live by). The best way I could figure out to write a test exposing this bug involved writing some CAS-modifying code (CAS being Code Access Security), so that when the application code tried to access the EventLog, it would fail and send me down the appropriate error chain to cause the bug. I wrote this test, I was all proud of it, cuz I was being really clever, and then I tried to run it.

Security Exception at line XXX. Additional information: Request failed.

Well, that was helpful. I spent the next 5 hours trying to figure out why this was happening, asked questions of everyone I could find, had Peter call in favors from guys he knew across the country, and involved the Program Manager in charge of CAS for the entire .NET Framework, along with his test team. Finally, after hours of struggling, it came out that I was seeing an interaction between strongly named assemblies, explicit CAS Deny requests, and reflection. It turns out that strongly named assemblies have an implicit LinkDemand for FullTrust in front of all their public interfaces. When a type from such an assembly is instantiated through reflection, the LinkDemand is converted to a regular Demand for FullTrust, which ran into the Deny in my test, caused the Demand stack walk to fail, and gave me that lovely error message. But it took most of the combined resources of Microsoft to figure this out. And the way to solve it was to add [assembly: AllowPartiallyTrustedCallers] to the AssemblyInfo file of any assembly whose types were being created through reflection. Without this attribute, the creation would always fail unless the calling code had FullTrust.
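For the curious, the shape of the test that triggered all this looked roughly like the following. This is a from-memory sketch of .NET 1.1-era CAS (Deny and RevertDeny were removed from later versions of .NET, so treat it as illustrative only), and the assembly and type names are hypothetical:

```csharp
// Sketch of a test that denies EventLog access to force the
// error-handling path containing the double-logging bug.
// Hypothetical names throughout; .NET 1.1-era CAS only.
using System;
using System.Diagnostics;
using System.Reflection;
using System.Security;
using System.Security.Permissions;

public class DoubleLoggingTest
{
    public void ExceptionIsLoggedToEventLogOnlyOnce()
    {
        // Deny EventLog access so the logging call fails, driving
        // execution down the error chain that causes the bug.
        new EventLogPermission(PermissionState.Unrestricted).Deny();
        try
        {
            // Creating a type from a strongly named assembly through
            // reflection turns its implicit LinkDemand for FullTrust
            // into a full Demand; the Deny above makes that Demand's
            // stack walk fail with "Request failed" -- unless the
            // assembly is marked [assembly: AllowPartiallyTrustedCallers].
            Assembly block = Assembly.Load("HypotheticalLoggingBlock");
            object sink = Activator.CreateInstance(
                block.GetType("HypotheticalLoggingBlock.EventLogSink"));

            // ... exercise the failing path and assert that exactly one
            // entry reached the event log ...
        }
        finally
        {
            CodeAccessPermission.RevertDeny();
        }
    }
}
```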

So, the serendipity in all of this (remember the name of this lengthy article?) was that it raised the issue of how our project would interact with partially trusted code. No one had considered this before, but my day spent doing this raised a number of issues and problems, which I talked with Tom and the test team about, and now we are considering what to do about partially trusted code.

The other part of this, the part that really bothers me and ties this whole post together, is that I also explained to Tom the bug I had spent an entire day fixing, and he was amazed that I had put so much effort into something that was clearly not worth it. He told me to resolve it as “Won’t fix”; he had better things for me to do with an entire day’s worth of work. So I reverted all of my code, closed the bug, and moved on.

And why did this bother me? Because, due to a lack of communication on all parts, I wasted a day’s work. Our bugs go through a triage process, and Tom needs to be involved with it, to make sure that bug prioritization matches his business prioritization. The test team needs a better definition of what they should and should not prioritize as important. And I need to be more diligent about getting input whenever I have a question about a bug. I can try to affect the first two, but I’m definitely going to change the last point through my own actions.

I wrote this post because I care about agile development processes, and we clearly were not being agile here. We followed our process to the letter, but it ended up wasting time because we didn’t interact as people. That, to me, was the most important bug of the day.

Sorry for the rambling.

— bab


Programmer Tests as Agile Documentation talk at St. Louis OOSIG last night

I gave another talk on this subject last night at the St. Louis OOSIG meeting. Not a huge crowd, as the announcement came a little late (that’s my story and I’m sticking to it :)). The presentation went really well, I thought, and I’m really getting my thoughts and ideas in focus around this subject. I do think there is a tool and a technique missing from my bag of tricks that would make life easier when using tests as docs.

There has got to be a way to navigate from a source method to the list of tests for that method. I believe that this map should be built by annotating each test with the name(s) of the method(s) it exercises. And I believe that the documentation generators should build a map that lists all the methods of an application class as hotlinks. Choosing one of these hotlinks takes you to another page with the list of tests that go along with that method.

I think that would make life a lot easier when it came time to find out how to use a particular method.
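Here is a sketch of how that annotation might look. The [TestsMethod] attribute, the fixture, and the map builder below are all hypothetical, not an existing tool:

```csharp
// Sketch: annotate each test with the method(s) it exercises,
// then build a method -> tests map by reflection. All names here
// are hypothetical.
using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class TestsMethodAttribute : Attribute
{
    public string MethodName { get; }
    public TestsMethodAttribute(string methodName) { MethodName = methodName; }
}

public class CacheManagerFixture
{
    [TestsMethod("CacheManager.Add")]
    public void AddedItemCanBeRetrieved() { /* ... */ }

    [TestsMethod("CacheManager.Add")]
    [TestsMethod("CacheManager.Flush")]
    public void FlushRemovesAddedItems() { /* ... */ }
}

public static class TestMapBuilder
{
    // Maps "Class.Method" -> names of the tests that exercise it.
    public static Dictionary<string, List<string>> Build(Assembly testAssembly)
    {
        var map = new Dictionary<string, List<string>>();
        foreach (Type fixture in testAssembly.GetTypes())
            foreach (MethodInfo test in fixture.GetMethods())
                foreach (TestsMethodAttribute a in
                         test.GetCustomAttributes(typeof(TestsMethodAttribute), false))
                {
                    if (!map.ContainsKey(a.MethodName))
                        map[a.MethodName] = new List<string>();
                    map[a.MethodName].Add(fixture.Name + "." + test.Name);
                }
        return map;
    }
}
```

A doc generator could then render each key in the map as a hotlink whose target page lists the tests.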

Anyway, here is a link to the slides I used last night. As always, comments are appreciated.

— bab

Simple Solution To Hashtables and Thread Safety

Shortly after I posted the original entry on this, I figured out a much simpler way than using version numbers, etc.

Item item = null;
bool lockWasSuccessful = false;

while(true) {
    lock(hashtable) {
        item = (Item)hashtable[key];
        lockWasSuccessful = Monitor.TryEnter(item);
    }

    if(lockWasSuccessful == false)
        continue;  // someone else holds the item's lock; try again

    // If we reach here, the item was successfully locked
    try {
        // Application code goes here
    }
    finally {
        Monitor.Exit(item);
    }
    break;
}

So, the secret here is that I use TryEnter to try to lock the object while the hashtable is locked. I then drop the lock on the hashtable and check to see if the TryEnter worked. If it did not, I just repeat the process. If it did, then I’m free to go do application stuff, remembering to drop the lock when I finish. Très simple!

Does that solution make sense? It is the simplest one I can think of that satisfies my performance requirements, and much simpler than the version number scheme I was using.

Any comments?

— bab


Now playing: Dream Theater – Six Degrees Of Inner Turbulence (Disc 2) – VIII. Losing Time – Grand Finale

Thread-safety and hash tables

I’m storing information in a hashtable. The information is keyed off a string, and the contents are volatile: items can be removed, added, replaced, etc., in the hash table. The problem I’m studying is how to manipulate this information in a thread-safe way without locking the entire hash table all the time and killing performance.

Solution number 1 was to lock the entire hash table during each operation on the data in the hash table. I had code like this:

lock(hashtable) {
  Item item = (Item)hashtable[key];
  // ... operate on the item while holding the table lock ...
}

The problem with this is that I couldn’t take advantage of SQL Server’s ability to handle several database operations simultaneously. In the presence of lots of threads, my CPU usage in this style never got above 50% or so, indicating that contention for the lock was preventing parallelism entirely (kind of the point of threads, eh???)

Next attempt was the naive way:

Item item = (Item)hashtable[key];
lock(item) {
  // ... operate on the item ...
}

Obviously not thread safe. Between the first and second line, someone can come in and delete or replace the item in the hash table, so I could be dealing with old data, possibly persisting data to the database that is stale and overwriting the new data. This is bad.

Next attempt was to be a little clever and wrap the data in the cache with an object that was longer-lived than an individual cache item. This way the above attempt was OK to do, because I was always guaranteed that the wrapper was the same and I could lock it to prevent changes to its contents. It looked something like this:

ItemWrapper wrapper = (ItemWrapper)hashtable[key];
lock(wrapper) {
  Item item = wrapper.Item;
  // ... operate on the item; the wrapper, not the item, is locked ...
}

The problem here was that I could never get rid of these wrappers; once in the hashtable, they had to live there forever. If I did start getting rid of them, I’d be right back where I was before, with the race condition between getting items out of the cache and locking them.

What I really wanted was a single method call that I could make on a hashtable that would return me an Item from the hashtable already locked, and locked in a thread-safe way. And I couldn’t lock the hashtable while I was trying to lock the Item, because that would lead to the same performance problems that I had originally. So what was I to do????

Well, I started looking at Optimistic Locking as a way around this. I embedded a version number inside my Item object and started doing various manipulations to that version number in an attempt to satisfy all my requirements. But this is getting very complicated, and I’m still not convinced of my thread safety.
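The shape of what I was attempting looked something like the following. This is only a sketch (names are illustrative, and the retry policy is left to the caller), but it shows where the complication creeps in: snapshot the data and its version, do the slow work outside the table lock, then commit only if the version is unchanged.

```csharp
// Sketch of an optimistic, version-number scheme for items in a
// Hashtable. Illustrative only; every caller must be prepared to
// retry when TryUpdate returns false.
using System;
using System.Collections;

public class VersionedItem
{
    public int Version;
    public string Data;
}

public class OptimisticCache
{
    private readonly Hashtable hashtable = new Hashtable();

    public void Put(string key, string data)
    {
        lock (hashtable) { hashtable[key] = new VersionedItem { Data = data }; }
    }

    public string Get(string key)
    {
        lock (hashtable)
        {
            VersionedItem item = (VersionedItem)hashtable[key];
            return item == null ? null : item.Data;
        }
    }

    // Returns false when a concurrent change was detected (or the
    // key is gone); the caller is expected to retry.
    public bool TryUpdate(string key, Func<string, string> compute)
    {
        string snapshot;
        int versionSeen;
        lock (hashtable)
        {
            VersionedItem item = (VersionedItem)hashtable[key];
            if (item == null) return false;
            snapshot = item.Data;
            versionSeen = item.Version;
        }

        string newData = compute(snapshot); // slow work, outside the lock

        lock (hashtable)
        {
            VersionedItem current = (VersionedItem)hashtable[key];
            if (current == null || current.Version != versionSeen)
                return false; // item was replaced or updated underneath us
            current.Data = newData;
            current.Version++;
            return true;
        }
    }
}
```

The complication I was fighting shows up in the retry loop every caller now needs, and in deciding what "stale" means when the item itself has been replaced rather than merely updated.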

So, my question for you, my loyal reader (all 1 of you :))… Have I missed something? Is there some other, simpler way of approaching this problem that I missed? Any comments and hints appreciated.

— bab


Now playing: Dream Theater – Hollow Years

Another name for unit tests…

There are so many names being used for TDD’ed unit tests right now:

  • Unit Tests (of course :))
  • Programmer Tests
  • Technology Facing Tests
  • and more I can’t think of right now

But, my friend Michael Hill (who doesn’t blog and I wish he would) came up with a different name for them at the XP Universe conference a few years ago. We were talking at the bar (where else would you find Hill???) with a bunch of people. For some reason, during the conversation, we started to think of different names for unit tests, probably to try to guide people away from the “test” aspect of them, and move them more towards what we considered to be the more important reasons to have them.

Hill came up with calling them Mobility Tests, because they allow you to move your code wherever it needs to go, at will. This name brings up a totally different picture of unit/programmer/technology-facing tests to me. It brings to mind that the biggest benefit of having these tests after the code is written is that they let you say YES when asked to make a change. They give you the freedom to make changes you would never make otherwise.

Mobility tests. I really like that name.

— bab (first post through BlogJet. I hope it works!)

Updated outline for Agile Tests as Documentation

I’ve updated my outline a bit and fleshed it out some. This is the new outline, so please feel free to comment.

  1. Introduction

    1. problem

      1. Documentation is an expensive anchor around a team’s neck
      2. Lots of money to produce (40% on current project)
      3. Expensive to change (increases inertia and cost of change)
      4. Difficult to make comprehensive (need source anyways)

    2. Need something different that minimizes inertia and cost while remaining accurate and comprehensive.

    3. Agile Developers write tests for code as it is developed

      1. Tests assert behavior of system (create invariant)
      2. Tests provide record of development stream (thought processes of developer)
      3. Tests change as code changes

    4. Can tests be used as documentation?

  2. Who is our audience and what do they need?

    1. Application programmers — users of our libraries

      1. Concerned with finding out what they want to know and getting back to work quickly.
      2. Concerned with most common usage scenarios most times, but still care about exception cases. Order of tests not important

    2. Maintenance programmers — modifiers and extenders of our libraries

      1. Concerned with understanding underlying design and decisions
      2. Need guide through code. Order of tests important to them.

    3. Evaluators — tire kickers

      1. Want to get moving quickly
      2. Less concerned with tests than sample code and quick starts
      3. Still want to understand architecture and design as part of evaluation, so will act as maintenance programmer in some ways.

  3. My contention is that tests can do part of the job, but some text is still needed
  4. Description of Caching design — equivalent to short text and few UML diagrams

    1. Basic functions (add, remove, get, flush)
    2. Factories and equivalency of CacheManagers
    3. Background operations
    4. Liveliness of cache references (missing test???)

  5. Components of a good test

    1. 3 A-s (Bill Wake)

      1. Arrange
      2. Act
      3. Assert

    2. Strong names for everything

      1. Test name tells what is being tested. Makes interesting tests easier to find
      2. Good variable names inside tests. Makes code easier to read

    3. Clear assertions. Defines purpose of test. Assertion should assert strongest thing that can be said about test. Should show thought into what underlying code does
    4. Suite name should tell reader what task fixture is testing/documenting

  6. Test sufficiency — do I tell the whole story with my tests

    1. Capturing design decisions
    2. Covering interesting use cases
    3. Showing error behavior
    4. Interactions with the environment
    5. Test organization for different audiences

      1. Application Programmers want to find common scenarios easily
      2. Maintenance Programmers want to follow original developer’s train of thought

  7. Conclusion: What prose and UML are needed to supplement these tests
  8. Future directions in Tests as Documentation (Automation topics mostly)
  9. Final question — Did using tests as documentation save me anything?
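To make item 5 concrete, a test following those guidelines might look like the example below. The attributes are NUnit-style, and the CacheManager class here is a minimal stand-in I wrote so the example is self-contained; the real Caching Application Block API may differ:

```csharp
// Example of the "components of a good test": 3 A's, strong test
// and variable names, and the strongest assertion we can make.
using NUnit.Framework;

// Minimal stand-in for a cache so the example compiles on its own;
// not the real Caching Application Block.
public class CacheManager
{
    private readonly System.Collections.Hashtable store =
        new System.Collections.Hashtable();
    public void Add(string key, object value) { store[key] = value; }
    public object GetData(string key) { return store[key]; }
}

[TestFixture]
public class CacheManagerAddFixture   // suite name says what is documented
{
    [Test]
    public void AddedItemCanBeRetrievedByItsKey()  // test name tells the story
    {
        // Arrange: build the object under test, name values clearly
        CacheManager cache = new CacheManager();
        string expectedValue = "cached value";

        // Act: exercise exactly one behavior
        cache.Add("page.title", expectedValue);
        string retrievedValue = (string)cache.GetData("page.title");

        // Assert: the strongest claim we can make is that we get back
        // exactly what we put in
        Assert.AreEqual(expectedValue, retrievedValue);
    }
}
```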