Fast forward to August of 2004, when I took a new job with a consulting firm in Ohio. I was hired and sent to a two-week boot camp with one of their most senior consultants, where I learned how they develop software. On the list of things to learn, and the one I struggled with the most, was TDD. Mostly by force, I learned how to do it. It was awful, and I thought: I'm going to ditch this practice as soon as I get out of here.
But then something strange happened. After the boot camp I was put on a team and asked to clean up after another consultant. Apparently one of the guys on the team felt the way I did, that TDD was nuts, and hadn't done it. Worse, though, he hadn't even done TAD (Test After Development). So we had some pretty hefty swaths of code with no test coverage at all. I spent about three weeks cleaning up his mess, all the while thinking that getting test coverage was a good idea, and that I was glad I didn't have to do that TDD thing. When I was done, a rude surprise was waiting for me. I thought I was going off to another project, but instead I was given work on this team: work that involved developing new software that had to be TDD'd.
The tech lead on the project was keeping a pretty close eye on me, as was my mentor, who happened to be on the project. I was called out repeatedly for not formatting my code correctly, and it was embarrassing. But I was smart enough to commit my tests with my code, and as a consequence I tried really hard to use TDD. So over those next three weeks I forced myself to use TDD as much as I could. I learned to love it.
For several years after this experience I travelled the country writing code for various organizations, using TDD as a technique. I was very pleased with the outcomes, and I became very proficient with the tooling and the practice. But when asked why I was using this technique (I was surrounded by naysayers), my only answer was: to ensure that the code is thoroughly tested, and to provide a suite of regression tests to prevent accidental breakage. Later I evolved my answer to mention Continuous Integration, and tried to explain how we could use TDD and CI to automatically detect issues caused by changes.
Those are all great answers to why we do TDD, but I don't think they hit on the essence of it. The real, deep purpose of TDD, in my mind, is this: we use TDD to figure out what needs to be done. All those other things are happy consequences of having done TDD.
Here is how that works. Given a problem statement like "I need code that finds the difference between two object graphs," how do we design and develop a solution? No matter what technique we are using, BDUF (Big Design Up Front), TDD, or something in between, we have to ask a bunch of questions. Here are some we might ask:
- Are we comparing the object graphs by type and value, or only one or the other?
- What constitutes a difference? Is 1 == 1.0?
- What should the output of the difference look like?
- Who is the 'source' of the difference? (the left argument or the right?)
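This is where TDD earns its keep: each of those questions becomes a test, and writing the test forces you to commit to an answer. As a minimal sketch, here is how a first, hypothetical pass at the diff problem might look in Python (the `diff` function, its dict-based output format, and the design decisions it encodes are all my illustrative assumptions, not anything prescribed by the problem statement):

```python
def diff(left, right, path=""):
    """Return {path: (left_value, right_value)} for every difference.

    Design decisions pinned down by the tests below:
    - comparison is by type AND value, so 1 and 1.0 count as different
    - the first element of each pair always comes from the left argument
    - nested differences are reported with a dotted path
    """
    differences = {}
    if isinstance(left, dict) and isinstance(right, dict):
        # Walk the union of keys so additions and removals both show up.
        for key in sorted(set(left) | set(right)):
            child = f"{path}.{key}" if path else key
            if key not in left:
                differences[child] = (None, right[key])
            elif key not in right:
                differences[child] = (left[key], None)
            else:
                differences.update(diff(left[key], right[key], child))
    elif type(left) is not type(right) or left != right:
        differences[path] = (left, right)
    return differences


# Each test encodes the answer to one of the questions above.
assert diff({"a": 1}, {"a": 1}) == {}                              # identical graphs
assert diff({"a": 1}, {"a": 1.0}) == {"a": (1, 1.0)}               # type matters: 1 != 1.0
assert diff({"a": {"b": 2}}, {"a": {"b": 3}}) == {"a.b": (2, 3)}   # nested, dotted path
assert diff({"a": 1}, {}) == {"a": (1, None)}                      # left is the 'source'
```

Whether or not these are the right answers, the point stands: you can't write the first test without deciding them, and that deciding is the design work TDD drags out of you.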