Thinking through what I need to do for the plot capability I'm writing, I've had a realization. I thought I was writing a library. That might, in fact, still be the form that the plot stuff takes. But thinking about how plot might be used, there's a bifurcation. On the one hand, I might want to construct a plot instance for each cycle, do the drawing, and throw it away. On the other hand, I might want a plot instance that persists and can be updated. I realized that if I take the first route, plot doesn't have a state -- it's really a function, not an object. How did I get to this conclusion? Because writing tests forced me to construct a plot, and think about how it is used rather than just focusing on the object capabilities.
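To make the bifurcation concrete, here's a rough sketch of the two shapes it could take. None of this is settled; PlotSpec, Image, and DataPoint are placeholder names, not interfaces I've actually written:

    // A rough sketch of the two shapes the bifurcation could take. PlotSpec,
    // Image, and DataPoint are placeholder names, not decisions I've made yet.
    struct PlotSpec;   // the specifications handed over by the UI
    class  Image;      // the drawing surface the UI provides
    struct DataPoint;  // one sample from the data source

    // Route 1: construct, draw, throw away -- plot has no state, so it is
    // really just a function.
    void drawPlot(const PlotSpec& spec, Image& image);

    // Route 2: a plot instance that persists across cycles and can be updated.
    class Plot {
    public:
        explicit Plot(const PlotSpec& spec);
        void update(const DataPoint& point);  // fold in new data
        void draw(Image& image) const;        // redraw into the supplied image
    };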
Of course, it's impossible to tell whether I would have arrived at the same conclusion without doing TDD. But even if I had arrived there, I'm not sure it would have happened so early on.
I've been revising my specifications for the plot capability. I'm not sure about format, but here's the current version.
Top-level story: A user wants to add a plot to a display for a sim variable.
Constraints: The existing architecture provides a data source abstraction and a user interface. The user interface will need to change to use plotlib to request a plot drawing. The UI will provide plot specifications, including an image to draw into. Plot must construct the image using the primitive drawing functions provided by the image interface, according to the specifications.
Specifications include (sketched in code just after this list):
1) axes
2) titles (are these user-specifiable?)
3) legends
4) one or more variables to plot
5) styles (color, line style, fonts, etc.)
6) a data source, provided as a data river instance
Desirable:
- Plots should be cross-platform
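Just to see it in one place, here's a rough, purely illustrative translation of that list into code. Every name and type below is a placeholder, and I fully expect the tests to push it into a different shape:

    // Hypothetical translation of the spec list -- every name here is a
    // placeholder, not part of any existing interface.
    #include <string>
    #include <vector>

    class Image;      // drawing surface supplied by the UI
    class DataRiver;  // the existing data source abstraction

    struct AxisSpec  { std::string label; double min = 0.0; double max = 1.0; };
    struct StyleSpec { std::string color; std::string lineStyle; std::string font; };

    struct PlotSpec {
        std::vector<AxisSpec>    axes;               // 1) axes
        std::string              title;              // 2) titles (user-specifiable? still open)
        bool                     showLegend = true;  // 3) legends
        std::vector<std::string> variables;          // 4) one or more variables to plot
        StyleSpec                style;              // 5) color, line style, fonts, etc.
        DataRiver*               source = nullptr;   // 6) data source, a data river instance
        Image*                   target = nullptr;   // the image the UI asks us to draw into
    };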
The specifications might not all be relevant to plot. For example, perhaps fonts need to be handled at a different level, leaving plot with simply a writeText(string) function, or a setFontSize(int), or even a setFontSize(FontSizes) that takes an enum with values such as normal, large, and small. I'm not going to worry about this for now; that's down the road for sure.
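For what it's worth, here's what those three alternatives might look like, sketched as a hypothetical text-drawing interface (again, nothing here is decided):

    #include <string>

    // Deliberately coarse sizes; the UI would map them to real fonts.
    enum class FontSize { Small, Normal, Large };

    // A hypothetical text interface the plot could be handed, showing the
    // three alternatives side by side (in practice I'd pick just one).
    class TextSurface {
    public:
        virtual ~TextSurface() = default;

        // Option 1: plot only writes text; font choices live entirely in the UI.
        virtual void writeText(const std::string& text) = 0;

        // Option 2: plot sets an explicit point size.
        virtual void setFontSize(int points) = 0;

        // Option 3: plot sets a symbolic size via the enum above.
        virtual void setFontSize(FontSize size) = 0;
    };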
I think once I get used to the "run and see the tests pass" rhythm, I might actually like it. What's surprising me at the moment is that the tests are driving the design to some degree. Based on the great advice I received on the TDD board, I'm feeling free to think about design on both large and small scales, while always coming back to "OK, but what's the next test?" and "how does that spec translate into a test?" One of the advantages of not looking ahead is that I don't have to carry everything around in my head all the time. I don't have to think "I'm going to write the Plot constructor, and it will have a Spec, and the Spec will need to have an Image, and the Image will need to be constructed, and the Spec will also have Axes, which might be a concrete instantiation of some abstract PlotObject class, oh, and ..." If I want to throw up some test balloons like this to help me see where I might be going, I do. But I also realize that I need to get to them via the tests, because the tests show what's necessary, on a practical level, to get the objects working.
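As a concrete example of "how does that spec translate into a test?", here's roughly how item 4 of the spec list ("one or more variables to plot") could become a next test. It's a self-contained, framework-free sketch using assert; my real tests use a proper harness and the real PlotSpec, and hasVariablesToPlot is just a made-up name:

    #include <cassert>
    #include <string>
    #include <vector>

    // Minimal stand-in for the real PlotSpec, just enough for this test.
    struct PlotSpec {
        std::vector<std::string> variables;
    };

    // Hypothetical production code the test would drive into existence.
    bool hasVariablesToPlot(const PlotSpec& spec) {
        return !spec.variables.empty();
    }

    int main() {
        PlotSpec empty;
        assert(!hasVariablesToPlot(empty));   // no variables yet: nothing to plot

        PlotSpec one;
        one.variables.push_back("altitude");  // hypothetical sim variable name
        assert(hasVariablesToPlot(one));      // spec item 4: one or more variables
        return 0;
    }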
So I suppose this means I am starting to see the tests playing a positive role in developing the design, and also in refactoring. I've done a bit of the latter already, to reflect my updated specs, and yes, it was nice being able to run the unit tests and see them pass, even though at this stage they are fairly trivial. It reminds me of why I prefer static languages to dynamic ones. Just as the C++ compiler catches all sorts of type errors that might make it through in a dynamic language, so the tests catch all sorts of logic errors that might make it through a compilation without them. Writing the unit tests is like building a customized "logic compiler" for my code.
One big remaining question is the amount of time spent refactoring tests and the quality of the tests that I end up with. TDD advocates tend to minimize this issue, but I don't believe them. There's actually a book on this subject (xUnit Test Patterns: Refactoring Test Code) where the author writes the following:
We started doing eXtreme Programming "by the book" using pretty much all of the practices it recommended, including pair programming, collective ownership, and test-driven development. Of course, we encountered a few challenges in figuring out how to test some aspects of the behavior of the application, but we still managed to write tests for most of the code. Then, as the project progressed, I started to notice a disturbing trend: It was taking longer and longer to implement seemingly similar tasks.
I explained the problem to the developers and asked them to record on each task card how much time had been spent writing new tests, modifying existing tests, and writing the production code. Very quickly, a trend emerged. While the time spent writing new tests and writing the production code seemed to be staying more or less constant, the amount of time spent modifying existing tests was increasing and the developers' estimates were going up as a result. Things came to a head when a developer asked me to pair on a task and we spent 90% of the time modifying existing tests to accommodate a relatively minor change.
The problem is that the people who invent methods rely on a lot of tacit knowledge as they develop them. This is noticeable in the books they write. When I was reading Kent Beck's "Test-Driven Development by Example," there were several occasions when I thought, "OK, I can see that the way he goes is a legitimate way to go, but it's not the way I would have gone. I wonder why he chose it?" It's one of those Alistair Cockburn things where we don't know what we know, and therefore can't tell whether everything that needs to be expressed has been expressed, even if we could express it.
I will probably need to get that book on xUnit refactoring at some point. But elsewhere, on a forum, I saw someone say something to the effect of "if the team doesn't use this book, they will get into trouble." Of course, that individual could be wrong, and he is certainly speaking in a particular context. But the fact remains: there's no royal road to "clean and working" code that's developed quickly and easy to maintain. TDD might start with two sentences' worth of rules, but the outworking of those rules is still many books' worth of material and experience, and that's no bad thing; it implies to me that TDD has enough substance to stand a chance of working across a range of projects.
Enough for now. Off to write some tests.