First off, I should say that, as with most methodologies, there isn't one normative approach to doing TDD, so I'll try to say "GOOS" when what I'm discussing comes from the book, and "TDD" when it appears to be generic. In the foreword to the book, Kent Beck says, in effect, "this is a good book. It's not how I would do it, but I learned from it." At first, I thought this was a case of what you might call "dissing with faint praise." Perhaps this is because TDDers are so often dogmatic: "TDD IS GOOD FOR YOU. DO IT NOW! BECAUSE I SAID SO," or "you can't call yourself a professional if you don't use it." Then if someone raises objections, the answer is too often "well, there are some hopeless ideologues you just can't convince." Good grief! Why not just say "it made me a more effective professional," a statement that should be perfectly capable of making other professionals take notice without all the acrimony? But in any case, I think it is healthy that there are different views of TDD because differences help to drive out what is meaningful. I admit it -- I'm a bit of a Hegelian at heart.
What is missing in the discussions I've seen of TDD, and what the GOOS book tries to answer, is the rationale behind doing it. I'm not looking for "it's X% more effective." I want to understand why it is more effective -- what makes it work? This is a fundamental difference, and it's not academic either. The fact is, when you take on a new project, you have to take on an approach to that project. Saying "I'll use TDD" doesn't get you far, any more than saying "I'll use OO." TDD needs to be applied. Understanding how to apply TDD comes from understanding what TDD is good for, how it works, what sort of things in the project setup to look for. This is not so easy -- the TDD guys are right to argue that you can't give canned answers. But that's precisely why it is important to have a philosophy of TDD as well as a methodology.
OK, so to the beginnings of clearing up my conceptual misunderstandings with TDD. In the following points, I'll list the misconception first, then discuss why it's a misconception.
- TDD = no design up front. If GOOS is right, this is nonsense. There's plenty of design up front. There are meetings with stakeholders, rough designs drawn on whiteboards, state diagrams, CRC cards, discussions of what constitutes the first "slice," discussions of what test technologies to use, and likely more. It was surprising to me, actually, to see how many times the authors of GOOS used the phrase "after discussions" or some similar idea. It's irrelevant to me whether design is documented in some fancy $150k tool or on a napkin. The point is, someone has thought about it. Someone has worked through a preliminary version of what needs to happen -- there is a direction. Perhaps anti-design-up-front TDDers prefer to call this "planning." Fine. I don't care.
- TDD = writing unit tests. I am sure I have read TDD folks saying something to the effect that "unit tests are all you need." That's such a bizarre statement, I can't believe I would have randomly made it up. In fact, having worked on some decent-sized systems, I found the "unit tests only" idea to be one of my biggest problems with TDD. How on earth can unit tests alone be sufficient for a system with a million lines of code? Rubbish! All the interesting problems are integration problems; unit testing is trivial by comparison. The same goes for working with external libraries or applications. GOOS takes the sensible perspective that tests need to exist at multiple levels, with "end-to-end" tests at the highest level, just as the conventional development methodologies I've used would indicate, though arrived at via a different approach. Sanity! Thank goodness for that!
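As a loose illustration of what "tests at multiple levels" means in practice (all names here are invented for the sketch, not taken from GOOS): a unit test pins down one object in isolation, while a higher-level test drives the assembled pieces through their outer interface.

```python
class PriceParser:
    """Low-level object: turns a raw quote string into integer cents."""
    def parse(self, raw):
        dollars, cents = raw.lstrip("$").split(".")
        return int(dollars) * 100 + int(cents)

class QuoteService:
    """The 'assembled system' for this sketch: wires the parser to a source."""
    def __init__(self, source):
        self.source = source
        self.parser = PriceParser()
    def latest_price(self, symbol):
        return self.parser.parse(self.source[symbol])

# Unit level: one class, no collaborators.
def test_parser_handles_dollar_sign():
    assert PriceParser().parse("$12.50") == 1250

# Higher level: the assembled object graph, driven only through
# its public interface -- the integration is what's under test.
def test_service_returns_latest_price():
    service = QuoteService({"ACME": "$12.50"})
    assert service.latest_price("ACME") == 1250

test_parser_handles_dollar_sign()
test_service_returns_latest_price()
```

The point of the sketch is only that the two tests answer different questions: the first can pass while the wiring is still wrong, which is exactly why the unit level alone is not enough.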
- TDD = start by writing test cases. One of my problems with TDD has been where to start. In the past when I've tried TDD, I thought "OK, so I need to write tests first. What do I start with?" Then, typically, I picked an object I thought I understood (usually a low-level object) and wrote tests for it. Then, after exhaustively testing all the operators, assignment possibilities, etc., I'd end up with a nice unit test which needed to be extensively rewritten when I tried to fit the object in with other objects. GOOS is quite clear on this point: doing this is a bad idea. More than that, it's fundamentally a misconception of what TDD is supposed to accomplish. In the GOOS way of thinking, tests are intended to drive the design of the system by pushing development from areas of knowledge into areas where knowledge is insufficient. This implies you must start from knowledge. But at the beginning, where does the knowledge come from? It must come from constraints like client requirements, available technologies, and preliminary design team discussions, and in the beginning, that means you have only high-level details. So "beginning" means gathering requirements and priorities, holding preliminary design discussions, and working out a plan which includes identifying a first "slice" of capability from which the application can grow. Only then can you start with the first test.
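A hedged sketch of the difference, with names I've invented for illustration: suppose the agreed first slice is "a user can store a note and read it back." The first test then expresses that slice as a whole, rather than exhaustively exercising whatever low-level object I happen to understand.

```python
class NoteStore:
    """Just enough implementation to satisfy the first slice."""
    def __init__(self):
        self._notes = {}
    def save(self, key, text):
        self._notes[key] = text
    def fetch(self, key):
        return self._notes[key]

def test_first_slice_store_and_read_back():
    # The test states the slice's whole promise, which is the knowledge
    # we actually have at the start -- not the details of some low-level
    # object's operators and assignment semantics.
    store = NoteStore()
    store.save("todo", "buy milk")
    assert store.fetch("todo") == "buy milk"

test_first_slice_store_and_read_back()
```

Written this way, the test survives design changes underneath it; my old operator-by-operator unit tests did not.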
- TDD = bottom-up. Given that the first word of their title is "growing," you might think that GOOS advocates starting at the bottom with a seed of code, and adding more bits of code until the project is done. What this fails to take into account, however, are the different levels at which TDD operates. As mentioned in the previous point, GOOS begins with a slice of capability. This is at the application level, and the first test is an end-to-end test which I think is intended to exercise the whole life cycle of the application and its interfaces, though in a minimal way -- it might just instantiate an object, connect to a database, retrieve one thing, and shut down, for example. Adding capability involves driving down into the support classes by adding tests that require those supports, then making those tests pass. Thus, if I am understanding this correctly, GOOS is inherently top-down, setting up the application framework first, then pushing down to lower levels.
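That minimal end-to-end test might look something like the following sketch. The `App` class and its schema are hypothetical, and an in-memory SQLite database stands in for whatever real store the project would use; the shape of the test (instantiate, connect, retrieve one thing, shut down) is the point, not the details.

```python
import sqlite3

class App:
    """Hypothetical walking skeleton: the thinnest assembled application."""
    def __init__(self, dsn=":memory:"):
        # Connect to the database and seed one row so there is
        # something to retrieve end-to-end.
        self.conn = sqlite3.connect(dsn)
        self.conn.execute("CREATE TABLE IF NOT EXISTS items (name TEXT)")
        self.conn.execute("INSERT INTO items VALUES ('first')")
    def fetch_one(self):
        return self.conn.execute("SELECT name FROM items").fetchone()[0]
    def shutdown(self):
        self.conn.close()

def test_walking_skeleton():
    app = App()                          # instantiate the application
    assert app.fetch_one() == "first"    # retrieve one thing end-to-end
    app.shutdown()                       # and shut down cleanly

test_walking_skeleton()
```

Everything this test forces into existence -- the connection, the schema, the query path -- is application framework, which is why starting here is top-down rather than bottom-up.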
- TDD = getting good code coverage. Code coverage is certainly likely to be a benefit of a TDD approach, but it's not the point of TDD. Rather, TDD is a model of programming that proceeds by placing constraints on the solution space, and increasing those constraints until all requirements are satisfied. This is really the fundamental difference between a TDD approach (or in principle any test-first approach) and a conventional test-last approach. In a test-last approach, there are, of course, constraints; they are simply implicit and embedded in the code. Testing is done at the end to ensure that the implicit constraints meet the requirements. In a test-first approach, constraints are explicit, and are themselves developed as part of the design as the software grows. A potential problem with test-first is that you might spend a lot of time formally expressing and revising those constraints in test cases. A potential problem with test-last is that implicit constraints might be hard to test. And yes, I'm calling these "potential" problems. I'm in no position yet to say whether the effort to do TDD is justified, or gets better results than well-designed test-last code.
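To make the "constraints on the solution space" idea concrete, here is a toy sketch with an invented discount rule: each assertion is an explicit constraint, and the implementation is only as elaborate as the constraints accumulated so far demand.

```python
def discount(total):
    # The shape of this function is dictated entirely by the
    # constraints below; before constraint 2 was added, a bare
    # "return total" would have satisfied the whole suite.
    if total >= 100:
        return total * 0.9
    return total

# Constraint 1: small orders pass through unchanged.
assert discount(50) == 50

# Constraint 2, added later: orders of 100 or more get 10% off.
# Adding it narrowed the solution space and forced the branch above.
assert discount(200) == 180.0
```

In a test-last style, the same two rules would exist only implicitly in the `if` statement until someone wrote checks for them at the end; here they are the visible, growing specification.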