Monday, July 20, 2015

Bash and eternal sleep

I was trying to write a script to run a couple of background processes, then wait for an interrupt signal from the keyboard. There's a nice way to do this with wait, but an even shorter way (if you have GNU coreutils -- i.e., on almost all Linux machines) is:

function shutdown() {
    # do stuff -- e.g., kill the background jobs
    kill $(jobs -p) 2>/dev/null
}

trap shutdown INT
sleep inf

Wednesday, November 27, 2013

Stoic Week Day 2: Radical Moderation

It was inevitable that self-discipline should come up, I suppose. But the Stoic twist on this is different than I expected, though consistent with the idea that only character matters. The materials for today had this lovely description of Cato:
This was the character and this the unswerving creed
of austere Cato: to observe moderation, to hold to the goal,
to follow nature, to devote his life to his country,
to believe that he was born not for himself but for all the world.
This idea of unswervingly observing moderation appeals to me -- not the ascetic path of radical self-denial, but rather something like a combination of the balance in all things that I associate with the Greeks, with tenacity and strength of purpose. I admire this way of thinking: the prudence of counting the cost before beginning, the steely eye that wants to know the truth of what is before contemplating what might be. This is a mindset very compatible with engineering, where there is little choice but to follow nature, since nature assuredly will not follow you. It brings to mind words like "solid," "dependable," "salt of the earth."

I wonder, though, about the other aspects of life, about inspiration, surprise and wonder. I wonder if there is Stoic art, and if so, what it is like. Yesterday, I read about one Stoic who started each day by rising at dawn and meditating while staring at the sun beginning to rise into the starry sky. At the time I thought of this along the lines of Kant's concept of the sublime: something quietly vast, and awe-inspiring beyond our ability to take in. But I wonder whether that is correct. It could also be that the Stoic wishes to begin the day this way because it puts the self into practical perspective, pushing the hopefulness of waking back into an everyday frame. Inspiration and sublimity become problems, since their tendency is not to hold to the goal, but to transcend it; not to follow nature, but to rise above it.

In this connection, I thought about Luther's comment about pagans trembling at the rustling of a leaf, because the supernatural is so intertwined with the natural that one can never be sure when wind is wind, and when it is the harbinger of divine wrath. [Oddly, when I Googled the phrase, I found a mixture of conservative Christian sites quoting Luther with approval, and pagan sites talking about the loveliness of rustling leaves!] What I fear from the Stoics is the same thing I fear from the modern scientific reductionists, namely (to reverse Arthur C. Clarke's maxim) that any sufficiently reduced form of magic is indistinguishable from technology -- from know-how and mechanism. I am just Stoic enough to believe that if this is the case, we must face it, while at the same time hoping it is not entirely true. As I type this, there is a large fluffy cat resting his head on me, and it is wonderful in a way that is hard to describe. If this is mechanism, then the wonder serves to point out that mechanism has more to it than appears at first glance.

But I digress. I am trying something different with diet that might or might not lead somewhere; we shall see. For that purpose, holding to the plan certainly beats the faddishness of the latest weight-loss scheme.

Tuesday, November 26, 2013

Living like a Stoic

Today begins "Stoic Week 2013" (material here). It's an interesting idea to try out thinking about life from another paradigm. But actually, looking at what the Stoics believed, I see a lot of points of contact with my own beliefs. The materials present three central ideas of Stoicism.

  • The idea that value, or the "good life," lies within the self, and is defined by character.
  • The importance of recognizing what we do and don't have control over -- essentially we control only ourselves, and not even all of that. The most important things we can control are our judgments. Feelings like anger or sadness result from the perspective we adopt, and that perspective can be altered.
  • The understanding of Nature (and society) as an interconnected and cooperative system, rather than a Darwinian competitive mass of individuals struggling individually for existence.
The Stoic prescription is a little improvement every day, guided by reflection on the day's events in the evening, and the formation of improvements in the morning. The meditative aspects remind me very much of what we called "devotions" when I was an Evangelical Christian. It's interesting to find another tradition of the same vintage as Christianity that contains this reflective aspect. I'm not sure about daily improvement -- it sounds good, but my last experience of this kind of thing was that it becomes exhausting and leads to frustration.

Today's meditation focused on what we can control, and on trying to be aware of feelings and desires and where they originate. Today being just an ordinary sort of work day, it's not as though I have big dramatic examples.
  • I notice the desire to work on fun, easy stuff first rather than on more difficult, longer-term tasks. This is in part a judgment about "low-hanging fruit," partly about wanting to complete a task before I start another one, and partly just doing what I like. So it's not all one way, but perhaps this is partly the kind of thing the Stoics might classify as "wrong desires."
  • I was annoyed with my wife for interrupting me during work. This is the kind of thing the Stoics would say is pointless, since the interruption comes from outside myself, and I can't change it.
The question is whether I can really not be annoyed by interruptions. I think this is somewhat possible. I am not annoyed when work colleagues interrupt me to ask questions, for example, so the annoyance is contextual, rather than about the act of being interrupted. I also think that I am more impatient than I used to be, and there is no reason to believe that I could not become more patient again. In a way, this is a good test case, because interruptions are bound to happen, and my schedule is such that they will often come at inconvenient times.

Thinking more about the actual event, the annoyance had as much to do with frustration at not being able to solve a coding problem as with the interruption itself -- a background level of annoyance left me more prone to reacting to small stimuli. The background annoyance is not all bad. Sometimes it gives me an edge that keeps me alert and focused on the problem. But the feeling of being sharply focused and on the cusp of discovering a solution makes interruptions all the more annoying.

As a practical matter, I don't think I would want to lose this drive. From a Stoic point of view, I don't know how to classify it. What is it that makes the Stoic try to make each day better than the last? Surely it is some kind of drive, some form of energy derived from the sense of accomplishment in improving the self; the Stoic way is not the Buddhist way of detachment from the illusion of personhood. If so, then Stoicism is about channeling that energy where it matters. So does solving these coding problems matter for the development of my "rational character"? And if not, or if there is only a weak correlation, to what else should the energy be directed?

Sunday, January 29, 2012

Unit test only?

How far can unit tests take you? There's a very interesting presentation here: http://www.infoq.com/presentations/integration-tests-scam. I've been of the opinion that you can't get around doing higher-level tests; however, this is somewhat based on the way I've learned software development. The typical setup I've worked in is hierarchical. At the bottom you have unit tests, usually specific to a single object. Then you have unit integration tests, usually specific to a small set of closely related objects providing a significant chunk of capability. Above that, there are application-level tests, which verify that the capabilities integrate nicely. Above that, there are system-level tests of various types, which ensure that all system requirements are met. Above those are systems integration tests, testing communications between systems.

In principle, though, at any given point in time an interface/communication/message consists of two objects -- a sender and a receiver. If the system is highly modular, these objects will likely be small and fairly simple. Certainly in a system of any size there will be differences in level of abstraction, but the basic idea of sender and receiver is fundamental. If this is granted, then in principle it ought to be provable that if all possible inputs and outputs to objects are known, then unit testing is sufficient.

I have two questions regarding this position. The first has to do with emergent behavior. The second is about the feasibility of determining the complete mapping between inputs and outputs. It may be that the two questions are connected, and are really two sides of the same coin, but let me start with emergent behavior. My question here is not clear yet; it's more of an uneasiness. Consider the firing of neurons in a neural network simulating a brain, say, or perhaps some much simpler network, but one large enough to be non-trivial, so that the behavior of the system relies on complex interactions between the neurons. Each neuron has a set of inputs and a set of outputs. Each neuron is, in itself, simple. Complex behavior arises not from individual neurons, but from patterns of neuron firing. Now let us suppose that we wanted to test that our network is working properly using the unit test method. Testing each neuron is simple enough. Take a neuron. Construct a set of input values. Test that the outputs follow the neuron activation function. Easy!
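The neuron-level unit test described above takes only a few lines. To be clear, this is an illustration (sketched in Python for brevity): the neuron representation and its logistic activation are my own assumptions, not any particular framework's.

```python
import math

# Hypothetical neuron for illustration: a weighted sum of the inputs,
# passed through a logistic activation function.
def fire(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

# The unit test: construct known inputs, check that the outputs
# follow the activation function.
def test_neuron():
    # Opposite weights on equal inputs cancel out: logistic(0) = 0.5.
    assert abs(fire([1.0, -1.0], 0.0, [2.0, 2.0]) - 0.5) < 1e-9
    # A strongly positive weighted sum saturates toward 1.
    assert fire([1.0, 1.0], 0.0, [5.0, 5.0]) > 0.99

test_neuron()
```

Every test of this kind can pass, and yet none of them says anything about the firing patterns of the network as a whole.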

However, there is a big gap between what these unit tests tell us about the system, and what the user or higher-level programmer wants to know about the system. True, the neurons may be coded perfectly, but what we care about is at a level that is difficult to relate to the unit tests. I am not talking here about the difference between user tests and programmer tests. A programmer looking at this network from a higher level has the same issue -- what do the unit tests say about the overall network behavior? Or, put another way, where does the confidence come from that the system works correctly? Tests at the neuron level are not sufficient to determine this.

I believe a similar kind of problem is envisioned by one of the commenters on the video, who noted that a Mars Rover failed to land correctly because the parachute system deployed with a jerk that the (separately tested) detachment system interpreted as a landing. In principle, this problem could be caught if the characteristics of each system are known and the specs are consistent -- the sensor inputs from the parachute deployment need to be well-characterized through parachute testing, and those characteristics can then be fed to the detachment system independently. If the sensor readings for a landing are very similar to those generated by a parachute jerk, then there is an engineering problem to solve.

In practice, however, though it might be straightforward to test each sensor, this may not determine the overall behavior of the system. System complexity is especially likely when interacting with hardware with many design parameters, working in unpredictable environments. A parachute detachment might be triggered by the combined results from a hundred different sensors, with the value from each sensor varying according to wind, temperature, deterioration due to aging, and other factors changing over time. What is needed is to know the overall system behavior, given the component behaviors. Effectively, what needs to be tested is not the design of each component, but the system design model. If the design is wrong -- if the detachment system detaches the parachute early because of a flaw -- then it could still be the case that each individual sensor is working correctly.

The question is: is unit testing sufficient to catch a system-level design flaw? This in part depends on whether "units" are allowed to occur at different levels of abstraction. Let's suppose they can; otherwise, unit tests are going to be very limited in scope. So now we have a "decision maker" object somewhere in the detachment system that periodically takes inputs from the sensors and makes a decision on whether or not to detach the parachute.

Off the bat, it seems to me that a unit test for the decision maker object is really an integration test by another name. Granted, there may be time advantages in being able to mock up sensors rather than interact with the hardware. But from the perspective of what the test needs to accomplish, from a design point of view, the unit test takes results from a lower level of abstraction and processes them. The coder for the unit test needs to know about the different possible input values from those sensors and what the appropriate outcomes should be. In terms of the thinking behind the test design, that is de facto integration.
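To see why, consider a sketch of such a decision-maker test. Everything here is a made-up illustration -- the threshold rule, the mock values, the names -- and it is in Python rather than anything flight-worthy. The point is only that the test author must already know how sensor readings map to outcomes, which is integration knowledge.

```python
class DecisionMaker:
    """Decides whether to detach the parachute from sensor readings."""
    def __init__(self, sensors, threshold):
        self.sensors = sensors          # objects providing a read() method
        self.threshold = threshold

    def should_detach(self):
        # Illustrative rule: detach when the average reading
        # crosses the threshold.
        readings = [s.read() for s in self.sensors]
        return sum(readings) / len(readings) >= self.threshold

class MockSensor:
    """Stands in for hardware: returns a canned reading."""
    def __init__(self, value):
        self.value = value
    def read(self):
        return self.value

# The "unit" test: mocks replace hardware, but the test author still
# has to know what sensor values mean -- de facto integration.
descending = DecisionMaker([MockSensor(0.1), MockSensor(0.2)], 0.5)
assert not descending.should_detach()

touchdown = DecisionMaker([MockSensor(0.9), MockSensor(0.8)], 0.5)
assert touchdown.should_detach()
```

Note that nothing in this test can tell us whether a parachute jerk produces readings that look like the touchdown case; that knowledge has to come from outside the unit under test.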

Is the new and cool unit-test-only method, then, really that different from the old-and-crusty unit + unit integration + ... method? I am beginning to think it is less different than I imagined at first. All that appears to be missing is a top-level integration object that, in the traditional view, would represent the system. If we envision the system as an object with a given set of inputs and outputs, and everything at a lower level substituted by mocks, then the unit test for this system object is just an end-to-end test. The same idea applies one level higher up for systems of systems.

Broadening unit testing in this way, we can get a reasonable correspondence between old-style and unit-only testing. Old-style reflects likely changes in responsibility as code is looked at from a higher and higher level. Unit-only emphasizes that, regardless of the level, the same thing is happening -- there are inputs and outputs.

This correspondence suggests to me that old-style and unit-only testing ultimately share the same strengths and weaknesses. You may conduct a traditional interface test with the parachute and detachment systems, even bring in the hardware if you like, but this does not guarantee that integration problems will be found if the sets of inputs and outputs have very complex relations and the problem rarely occurs. If it is possible to break the complex input-output relations into something simpler, that is all to the good, regardless of test style. The real gains from the unit-test-only style do not come from demanding "unit tests only!" They come from making a system where the objects are cohesive and loosely coupled, and from having test structures such as mocks that support testing objects individually. Credit to the agile guys for coming up with methodologies that push that issue front and center, where it belongs.

Thursday, January 26, 2012

TDD does something useful

Thinking through what I need to do for the plot capability I'm writing, I've had a realization. I thought I was writing a library. That might, in fact, still be the form that the plot stuff takes. But thinking about how plot might be used, there's a bifurcation. On the one hand, I might want to construct a plot instance for each cycle, do the drawing, and throw it away. On the other hand, I might want a plot instance that persists and can be updated. I realized that if I take the first route, plot doesn't have a state -- it's really a function, not an object. How did I get to this conclusion? Because writing tests forced me to construct a plot, and think about how it is used rather than just focusing on the object capabilities.

Of course, it's impossible to tell whether I would have arrived at the same conclusion without doing TDD. But even if I had arrived there, I'm not sure it would have happened so early on.

I've been revising my specifications for the plot capability. I'm not sure about format, but here's the current version.

Top-level story: A user wants to add a plot to a display for a sim variable.
Constraints: Existing architecture provides a data source abstraction and a user interface. The user interface will need to change to use plotlib to request a plot drawing. UI will provide plot specifications including an image to draw into. Plot must construct the image using the primitive drawing functions provided by the image interface, according to the specifications.

Specifications include:
1) axes
2) titles (are these user-specifiable?)
3) legends
4) one or more variables to plot
5) styles (color, line style, fonts etc.)
6) a data source, provided as a data river instance

Desirable:
- Plots should be cross-platform

The specifications might not all be relevant to plot. For example, perhaps font needs to be handled at a different level, leaving plot with simply a writeText(string) function, or a setFontSize(int), or even setFontSize(FontSizes) using an enum such as normal, large, small etc. I'm not going to worry about this for now. That's down the road for sure.
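For what it's worth, here's a rough sketch of how the setFontSize(FontSizes) idea might look. This is a trial balloon, not the actual design: the names are adapted from the spec above (in snake_case, since I've sketched it in Python rather than C++), and the "image" is just a list recording the drawing calls.

```python
from enum import Enum

# Hypothetical enum from the spec discussion above.
class FontSize(Enum):
    SMALL = 1
    NORMAL = 2
    LARGE = 3

class Plot:
    def __init__(self):
        self.font_size = FontSize.NORMAL
        self.drawn = []                 # stand-in for the image interface

    def set_font_size(self, size):
        self.font_size = size

    def write_text(self, text):
        # A real implementation would call the image interface's
        # primitive drawing functions; here we just record the call.
        self.drawn.append((self.font_size, text))

p = Plot()
p.set_font_size(FontSize.LARGE)
p.write_text("Title")
assert p.drawn == [(FontSize.LARGE, "Title")]
```

The nice thing about the enum version is that the font-size policy stays out of plot entirely; plot only ever sees three abstract sizes.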

I think once I get used to "run and see the tests pass" I might actually like it. What's surprising me at the moment is that the tests are driving the design to some degree. Based on the great advice I received on the TDD board, I'm feeling free to think about design on both large and small scales, while always coming back to "OK, but what's the next test?" and "How does that spec translate into a test?" One of the advantages of not looking ahead is that I'm not having to carry everything around in my head all the time. I don't have to think "I'm going to write the Plot constructor, and it will have a Spec, and the Spec will need to have an Image, and the Image will need to be constructed, and the Spec will also have Axes, which might be a concrete instantiation of some abstract PlotObject class, oh, and ..." If I want to throw up some test balloons like this to help me see where I might be going, I do. But I also realize that I need to get to them via the tests, because the tests show what's necessary on a practical level to get the objects working.

So I suppose that this means I am starting to see the tests playing a positive role in developing the design, and also in refactoring. I've done a bit of the latter already, to reflect my updated specs, and yes, it was nice being able to run the unit tests and see them pass, even though at this stage they are fairly trivial. It reminds me of why I prefer static rather than dynamic languages. Just as the C++ compiler catches all sorts of type errors that might make it through in a dynamic language, so the tests catch all sorts of logic errors that might make it through a compilation without them. Writing the unit tests is like building a customized "logic compiler" for my code.

One big remaining question is the amount of time spent refactoring tests and the quality of the tests that I end up with. TDD advocates tend to minimize this issue, but I don't believe them. There's actually a book on this subject (xUnit Test Patterns: Refactoring Test Code) where the author writes the following:

We started doing eXtreme Programming "by the book" using pretty much all of the practices it recommended, including pair programming, collective ownership, and test-driven development. Of course, we encountered a few challenges in figuring out how to test some aspects of the behavior of the application, but we still managed to write tests for most of the code. Then, as the project progressed, I started to notice a disturbing trend: It was taking longer and longer to implement seemingly similar tasks.

I explained the problem to the developers and asked them to record on each task card how much time had been spent writing new tests, modifying existing tests, and writing the production code. Very quickly, a trend emerged. While the time spent writing new tests and writing the production code seemed to be staying more or less constant, the amount of time spent modifying existing tests was increasing and the developers' estimates were going up as a result. When a developer asked me to pair on a task, we spent 90% of the time modifying existing tests to accommodate a relatively minor change.

The problem is that the people who invent methods use a lot of tacit knowledge as they develop. This is noticeable in the books they write. When I was reading Kent Beck's "Test-Driven Development by Example," there were several occasions when I thought, "OK, I can see that the way he goes is a legitimate way to go, but it's not the way I would have gone. I wonder why he chose it?" It's one of those Alistair Cockburn things where we don't know what we know, and therefore can't tell whether everything that needs to be expressed has been expressed, even if we could express it.

I will probably need to get that book on xUnit refactoring at some point. But elsewhere, on a forum, I saw someone say something to the effect of "if the team doesn't use this book, they will get into trouble." Of course, that individual could be wrong, and certainly he is speaking in a context. But the fact remains: there's no royal road to "clean and working code" developed quickly that's easy to maintain. TDD might start with two sentences' worth of rules, but the outworking of the rules is still many books' worth of material and experience, and that's no bad thing; it implies to me that TDD has enough to it to stand a chance of working across a range of projects.

Enough for now. Off to write some tests.

TDD with QtCreator

I'm feeling better about TDD. I might even be seeing some real benefits even though there's hardly any code yet.

To begin with, let's get some setup out of the way. One challenge was to get gtest working with QtCreator. A question I've had is whether to have one executable containing all my tests, or whether to split tests into multiple executables. This is somewhat significant, since my goal is to type CTRL-R in QtCreator and have the tests build and run -- since the TDD methodology means running tests often, it has to be easy and fast. On the other hand, it seems logical to split tests up into separate executables in case I only want to run one set (e.g. the image tests). I came up with the following solution.
1) Have the makefile produce multiple executables.
2) Have the makefile produce a run_test.sh script that runs all the executables, stopping at the first failure and echoing "ALL TESTS PASSED" if there are none. This makes it easy to see when all the tests pass (hopefully the majority of the time). The rule to make the script is quite simple (recipe lines are indented with tabs):

make_exec_script: $(TESTS)
	echo '#!/bin/sh' > $(EXEC_SCRIPT)
	for i in $(TESTS); do \
		echo "./$$i && \\" >> $(EXEC_SCRIPT); \
	done
	echo 'echo "* * * ALL TESTS PASSED * * *"' >> $(EXEC_SCRIPT)
	chmod 755 $(EXEC_SCRIPT)

Now I just set the project up to execute the script, and I'm done. Granted, this won't work on Windows. Poor Windows. Always the oddball.

Saturday, January 21, 2012

Shu-ha-ri and the art of learning

I have waited too long to write this entry.
I thought that I needed time for ideas to sink in, and since I'd posted a question on the GOOS board, I thought it would be a good idea to give other people time to answer. Then, looking over what I've written as a summary of GOOS so far, I felt I had not done a good job of summarizing the GOOS approach, and that I possibly lacked the background to understand what they were after. This is always an issue with getting into something new, of course: where there is a community, there is a community language and assumed background which is not always easy for the outsider to pick up on.

To get some background, I've started reading "Agile Software Development: The Cooperative Game" (CG) by Alistair Cockburn. I've always liked Cockburn's stuff -- it was his view of the human side of software development that drew me to Agile methods in the first place. I was initially going to look at a different book of his, referenced in GOOS, but I think CG is a book that I need to read. In short, it is a discussion of theories of programming, and more broadly of epistemology and communication in a programming setting.

Cockburn begins with a familiar couple of epistemic problems. Can we know what we are experiencing? And (taking other minds for granted), can we express what we know? Cockburn answers both questions in the negative. I found his discussion interesting, but not always coherent. At some level, if it is not possible to know what we experience, it is hard to see how we can then express it to ourselves. And if we cannot express it to ourselves, and cannot express it to others, then why write a book about it? The method employed is not that of philosophical argument, building cases from axioms or syllogisms, but rather one of attempting to convince through stories and reflections on what he takes from those stories.

In his first story, for example, the author turns up at a party with a bottle of red wine, which the hostess insists is white, even though the label clearly says it is red. Later on, when he points out the mistake, the hostess again insists the wine is white and even points to the label, finding out only when she reads it out loud that it says "red." From this, Cockburn argues that we are subject to making mistakes when we think we know something that we don't know, and therefore we can end up producing requirements that contain observational errors. Fair enough. But this does not really address whether we can express what we know, or whether we can know what we experience. Rather, it argues that we may be mistaken about what we think we know, and may therefore end up conveying mistakes to others. One could argue that, on the contrary, it is precisely the fact that we can communicate and can understand our experiences that allows the hostess to realize a mistake has been made, and to laugh together with the author about it.

I don't think that Cockburn had an epistemological treatise in mind, though, when he drafted this book. His audience is programmers, and pragmatic ones at that. He encourages those who do not like "abstract" discussion to skip the first chapter altogether. His advice, then, should probably be seen as practical rather than theoretical, even in "abstract" chapters. Looked at this way, there is a lot to like.

Practically, our communication suffers from a lot of problems.
1) Our comprehension of our own experiences is limited by our ability to interpret those experiences.
2) Our ability to interpret is limited by many factors including language, presuppositions, eager interpretation (judging too early), and level of mastery.
3) In communicating with others, we need to establish a common vocabulary. This is impossible to do perfectly, since understanding is layered in terms of learning and experience, and everyone is unique in that regard. At best, we look for sufficiently similar experiences, which might involve the equivalent of an experienced English speaker adopting a very simple vocabulary to talk to a child.

Expanding on point 3, Cockburn brings in an idea of learning mastery built on the Aikido concept of Shu-Ha-Ri. Mastery occurs in three stages. In Shu (learn), we start from the beginning and learn one particular path or technique. Trying to learn different techniques at this stage of mastery leads only to confusion. In Ha (detach), we come to see that our technique does not always work well, and that there are other techniques that work better in certain circumstances. We look for boundaries that define when to use one technique rather than another. In Ri (transcend), we come to see techniques as means to an end, not an end in themselves, and roll our own ways of getting there specific to the task, using the knowledge of the techniques without being restricted to them.

According to Cockburn, this causes problems when a level 3 (Ri) person talks to a newbie. Ri people say things like "do what works," by which they mean something like "there's no perfect answer, but there is a multiplicity of good ones, so there's no need to be prescriptive." What a newbie might hear, though, is "it doesn't matter how you code, so long as it works," or "I'm not going to help you figure out how to do it" (I'm interpolating here -- these are my words, not Cockburn's). Beginners need to know that they are getting something right.

There's an obvious parallel to my experience with GOOS and TDD so far. I may (or may not) have written this explicitly in a previous post, but what I'm looking for is ONE way to get into TDD. Looking around the web, there are plenty of people arguing for their interpretations of TDD. That's fine, but I need context first. I recognize the danger of seeing GOOS as "the one, authentic, right method." I think I have enough experience to avoid making that mistake. But I do want to understand GOOS at a deep level, deeper than just "make a slice, code a test, code the initial behavior then fill out from there." I sense that there is more to GOOS than this -- assumptions that are more or less tacit that make GOOS a good fit for Mocks rather than Mocks simply being one technology the authors have decided to employ.

Clearly, what I was doing in my post to the GOOS message board was an attempt to draw on common background, to phrase GOOS ideas in terminology I have seen before. This is good. The first step towards communication is trying to find what is in common, like two modems negotiating a baud rate. But unlike a modem, whose limitations are inherent in the hardware, I can work my way up from my current 300-baud state to 56k, and who knows, maybe to T1 some day.