Stephen Freeman

Software culture

Doing pair programming tests right

In her rant on the state of the industry, Liz Keogh mentioned coding in the interview, which triggered several comments and a post from Rob Bowley, who reminded us of Ivan Moore’s excellent post. I think actually typing on a computer is essential, which is why I’ve been doing it for ten years (enough with whiteboard coding), but I’ve also seen examples of cargo cult code interviews where the team didn’t quite get the point:

It’s a senior responsibility
Pair programming tests should be conducted by senior developers. First, this shows that the team thinks that actual coding is important enough for senior people to get involved; it’s not just something they delegate. Second, no matter how smart they are, juniors will not have seen many different approaches, so they’re more likely to dismiss alternatives (technical and human) as bad style. They just don’t have the history. There are times when a tight group of young guns is just what you need, but not always.
Do it together
Be present for the work. Don’t just send the candidate off and tell them to submit a solution; the discussion is what’s important. Otherwise, it turns into a measure of how well someone can read a specification. It also suggests that you think your time is too valuable to actually work with a candidate, which is not attractive. And, please, don’t play the “intentionally vague” specification game, which translates to “Can you guess what I’m thinking?” (unless you’re interviewing Derren Brown).

Be ready
Have your exercise ready. Your candidate has probably taken a day off work, so the least you can do is not waste their time (and, by implication, yours). Picking the next item off the backlog is fine, as long as it doesn’t turn out to be a configuration bug or to have already been fixed. One alternative is a canned example, which has the benefit of being consistent across candidates. An example that is too simple, however, is a good primary filter but limits what you can learn about the candidate, such as larger-scale design skills.
Have a proper setup
Your netbook is cute, portable, and looks great. That doesn’t make it suitable for pairing, not least because some candidates might have visibility issues and the keyboard will have keys in the wrong places. Use a proper workstation with a good monitor so you can both see, and talk about, the code.
Allow enough time
Sometimes things take a while to settle. People need to settle in, and you need time to get over your initial flash response to the candidate. Most of us do not need developers who can perform well under stress. I’ve seen great candidates who only opened up after 30 minutes. You also need to work on an example that’s interesting enough to have alternatives, which takes time. If you’re worried about wasting effort on obvious misfits, then stage the exercise so you can break off early. You’re going to work with a successful candidate for some time, so it’s not worth skimping.
Give something back
This is something that Ivan mentioned. No matter how unsuitable the candidate, they spent time and possibly money to come to see you, and deserve more than a cup of tea. Try to show them something new in return. If you can’t do that, then either you don’t know enough to be interviewing (remember, it should be a senior) or you messed up the selection criteria, which means you’re not ready.

An example of an unhedged software call option

At a client, we’ve been reworking some particularly hairy calculation code. For better or worse, the convention is that we call a FooFetcher to get hold of a Foo when we need one. Here’s an example that returns Transfers, which are payments to and from an account. In this case, we’re mostly getting hold of Transfers directly because we can identify them[1].

public interface TransferFetcher {
  Transfer      fetchFor(TransferId id);
  Transfer      fetchOffsetFor(Transfer transfer);
  Set<Transfer> fetchOutstandingFor(Client client, CustomerReference reference);
  Transfer      fetchFor(CustomerReference reference);
}

This looks like a reasonable design—all the methods are to do with retrieving Transfers—but it’s odd that only one of them returns a collection of Transfers. That’s a clue.

When we looked at the class, we discovered that the fetchOutstandingFor() method has a different implementation from the other methods and pulls in several dependencies that only it needs. In addition, unlike the other methods, it has only one caller (apart from its tests, of course). It doesn’t really fit in the Fetcher implementation, which is now inconsistent.

It’s easy to imagine how this method got added. The programmers needed to get a feature written, and the code already had a dependency that was concerned with Transfers. It was quicker to add a method to the existing Fetcher, even if that meant making it much more complicated, than to introduce a new collaborator. They sold a Call Option—they cashed in the immediate benefit at the cost of weakening the model. The team would be ahead so long as no-one needed to change that code.

The option got called on us. As part of our reworking, we needed to change how Transfer objects were constructed so we could handle a new kind of transaction. The structure we planned meant changing another object, say Accounts, to depend on a TransferFetcher, but the current implementation of TransferFetcher depended on Accounts to implement fetchOutstandingFor(). We had a dependency loop. We should have taken a diversion and moved the behaviour of fetchOutstandingFor() into an appropriate object, but then we had our own delivery pressures. In the end, we found a workaround that allowed us to finish the task we were in the middle of, with a note to come back and fix the Fetcher.
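
For the record, the diversion we should have taken is straightforward. Something like the sketch below would do it; OutstandingTransfers is a name I’ve invented here, and the method body stands in for the implementation we can’t show:

public interface TransferFetcher {
  Transfer fetchFor(TransferId id);
  Transfer fetchOffsetFor(Transfer transfer);
  Transfer fetchFor(CustomerReference reference);
}

// The odd one out moves to a collaborator of its own, taking its extra
// dependencies (including Accounts) with it. Accounts is now free to
// depend on TransferFetcher without creating a loop.
public class OutstandingTransfers {
  private final TransferFetcher fetcher;
  private final Accounts accounts;

  public OutstandingTransfers(TransferFetcher fetcher, Accounts accounts) {
    this.fetcher = fetcher;
    this.accounts = accounts;
  }

  public Set<Transfer> outstandingFor(Client client, CustomerReference reference) {
    // the implementation moves here, unchanged, from fetchOutstandingFor()
    throw new UnsupportedOperationException("moved from TransferFetcher");
  }
}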

The cost of recovery includes not just the effort of investigating and applying a solution (which would have been less when the code was introduced) but also the drag on motivation. It’s a huge gumption trap to be making steady progress towards a goal and then be knocked off course by an unnecessary design flaw. The research described in The Progress Principle suggests that small blockers like this have a disproportionate impact compared to their size. Time to break for a cup of tea.

I believe that software quality is a cumulative property. It’s the accumulation of many small good or bad design decisions that either make a codebase productive to work with or just too expensive to maintain.

…and, right on cue, Rangwald talks about The Tyranny of the Urgent.


1) The details of the domain have been changed to protect the innocent, so please don’t worry too much about the detail.

Thanks to @aparker42 for his comments.

Test-Driven Development and Embracing Failure

At the last London XpDay, some teams talked about their “post-XP” approach. In particular, they don’t do much Test-Driven Development because they find it’s not worth the effort. I visited one of them, Forward, and saw how they’d partitioned their system into composable actors, each of which was small enough to fit into a couple of screens of Ruby. They release new code to a single server in their farm, watching the traffic statistics that result. If it’s successful, they carefully propagate it out to the rest of the farm. If not, they pull it and try something else. In their world, the improvement in traffic statistics, the end benefit of the feature, is what they look for, not the implemented functionality.
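
As a sketch of that loop (the names and the conversionRate() statistic are my inventions, not Forward’s code, and it’s in Java here rather than their Ruby):

import java.util.List;

public class CanaryRelease {
  interface Server {
    void deploy(String version);
    void rollback();
    double conversionRate(); // whichever traffic statistic the team watches
  }

  // Deploy to one server, watch the numbers, then propagate or pull.
  static void release(String version, List<Server> farm, double baseline) {
    Server canary = farm.get(0);
    canary.deploy(version);
    waitForEnoughTraffic();
    if (canary.conversionRate() > baseline) {
      for (Server server : farm.subList(1, farm.size())) {
        server.deploy(version); // carefully propagate to the rest of the farm
      }
    } else {
      canary.rollback(); // pull it and try something else
    }
  }

  static void waitForEnoughTraffic() {
    try {
      Thread.sleep(60 * 60 * 1000); // crude stand-in for "watch the statistics for a while"
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}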

I think this fits into Dave Snowden’s Cynefin framework, where he distinguishes between the ordered and unordered domains. In the ordered domain, causes lead to effects. This might be difficult to see and require an expert to interpret, but essentially we expect to see the same results when we repeat an action. In the complex, unordered domain, there is no such promise. For example, we know that flocking birds are driven by three simple rules but we can’t predict exactly where a flock will go next. Groups of people are even more complex, as conscious individuals can change the structure of a system whilst being part of it. We need different techniques for working with ordered and unordered systems, as anyone who’s tried to impose order on a gang of unruly programmers will know.

Loosely, we use rules and expert knowledge for ordered systems, where the appropriate actions can be decided from outside the system. Much of the software we’re commissioned to build is about lowering the cost of expertise by encoding human decision-making. This works for, say, ticket processing, but is problematic for complex domains where the result of an action is literally unknowable. There, the best we can do to influence a system is to try probing it and be prepared to respond quickly to whatever happens. Joseph Pelrine uses the example of a house party—a good host knows when to introduce people, when to top up the drinks, and when to rescue someone from that awful bore from IT. A party where everyone is instructed to re-enact all the moves from last time is unlikely to be equally successful[1]. Online start-ups are another example of operating in a complex environment: the Internet. Nobody really knows what all those people will do, so the best option is to act, to ship something, and then respond as the behaviour becomes clearer.

Snowden distinguishes between “fail-safe” and “safe-fail” initiatives. We use fail-safe techniques for ordered systems because we know what’s supposed to happen and it’s more effective to get things right—we want a build system that just works. We use safe-fail techniques for unordered systems because the best we can do is to try different actions, none of which is large enough to damage the system, until we find something that takes us in the right direction—with a room full of excitable children we might try playing a video to see if it calms them down.

At the technical level, Test-Driven Development is largely fail-safe. It allows us, amongst other benefits, to develop code that just works (for multiple meanings of “work”). We take a little extra time around the writing of the code, which more than pays back within the larger development cycle. At higher levels, TDD can support safe-fail development because it lowers the cost of changing our mind later. This allows us to take an interim decision now about which small feature to implement next or which design to choose. We can afford to revisit it later when we’ve seen the result without crashing the whole project.
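
A token example of the fail-safe style (JUnit 4 assumed, the Discount class invented for illustration): the tests state the intended behaviour up front, and every subsequent build re-checks it for free.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {
  // Production code would live elsewhere; it's inlined here to keep the sketch whole.
  static final class Discount {
    int priceFor(int listPrice) {
      return listPrice >= 100 ? listPrice * 90 / 100 : listPrice;
    }
  }

  @Test public void knocksTenPercentOffLargeOrders() {
    assertEquals(90, new Discount().priceFor(100));
  }

  @Test public void leavesSmallOrdersAlone() {
    assertEquals(99, new Discount().priceFor(99));
  }
}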

Continuous deployment environments such as at Forward[2], on the other hand, emphasise “safe-fail”. The system is partitioned so that no individual change can damage it, and the feedback loop is tight enough that the team can detect and respond to changes very quickly. That said, even the niftiest lean start-up will have fail-safe elements too; a sustained network failure or a data breach could be the end of the company. Start-ups that fail to understand this end up teetering on the edge of disaster.

We’ve learned a lot over the last ten years about how to tune our development practices. Test-Driven Development is no more “over” than Object-Orientation is; it’s just that we understand better how to apply it. I think our early understanding was coloured by the fact that the original eXtreme Programming project, C3, was payroll, an ordered system; I don’t want my pay cheque worked out by trying some numbers and seeing who complains[3]. We learned to Embrace Change, to see it as a sign of a healthy development environment rather than a problem. As we’ve expanded into less predictable domains, we’re also learning to Embrace Failure.


1) This is a pretty good description of many “Best Practice” initiatives.
2) Fred George has been documenting safe-fail in the organisation of his development group too; he calls it “Programmer Anarchy”.
3) Although I’ve seen shops that come close to this.

Speaking and giving a tutorial at QCon London, 7-11 March

Nat and I will be running our “TDD at the System Scale” tutorial at QCon London. Sign up soon.

I’ll also be presenting an engaging rant on why we should aspire to living and working in a world where stuff just works.

If you quote the promotion code FREE100 when you sign up, QCon will give you a £100 discount and donate the same amount to the charity Crisis.

What are we being primed for?

The excellent BBC popular science programme Bang Goes the Theory recently reproduced this experiment on priming. In the original experiment, the subjects were primed by being asked to write sentences based on sets of words: one set was neutral and the other contained words related to an elderly stereotype. The result was that

participants for whom an elderly stereotype was primed walked more slowly down the hallway when leaving the experiment than did control participants, consistent with the content of that stereotype.

In the “Bang” experiment, they took two queues of people entering the Science Museum and placed pictures of the elderly and infirm around one queue, and of the young and active around the other. The result was the same: people in the queue with the elderly images took significantly longer to walk into the building.

It’s striking that such a small thing can affect how we behave.

Now, look around your work environment and consider what it’s priming you for. Are you seeing artefacts of purpose and effectiveness? Or does it speak of regimentation and decay? Now look at your computer screen. Are you seeing an environment that emphasises productivity and quality? Or does it speak of control and ugliness?

It’s amazing that some of us get anything done at all.

This isn’t about spending lots of money to look nice (although that espresso machine is appreciated). I suspect that the sort of “funky, creative” offices that get commissioned from designers dressed in black are usually an upmarket version of motivational posters.

My guess is that a truly productive environment must have some “authenticity” for the people who spend most of their days in it. Most geeks I know would be happy with a trestle-table provided they get to spend the difference on a good chair and powerful kit, and other disciplines might have other priorities.

But then, perhaps every environment is authentic since the organisation is making clear what it really values most. And what might that imply?…

Bad code isn't Technical Debt, it's an unhedged Call Option

I’d been meaning to write this up for a while, and now Nat Pryce has written up the 140-character version.

[Figure: payoff from writing a call]

This is all Chris Matts’ idea. He realised that the problem with the “Technical Debt” metaphor is that, for managers, debt can be a good thing. Executives can be required to take on more debt because it makes the finances work better; it might even be encouraged by tax breaks. This is not the same debt as your personal credit card. Chris came up with a better metaphor: the Call Option.

I “write” a Call Option when I sell someone the right, but not the obligation, to buy in the future an agreed quantity of something at a price that is fixed now. So, for a payment now, I agree to sell you 10,000 chocolate santas[1] at 56 pence each, at any time up to 10th December. You’re prepared to pay the premium because you want to know that you’ll have santas in your stores at a price you can sell.

From my side, if the price of the santas stays low, I get to keep your payment and I’m ahead. But I also run the risk of having to provide these santas when the price has rocketed to 72 pence. I can protect myself by making arrangements with another party to acquire them at 56 pence or less, or by actually having them in stock. Or I can take a chance and just collect the premium. This is called an unhedged, or “Naked”, Call. In the financial world this is risky because it has unlimited downside: I have to supply the santas whatever they cost me to provide.
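
To put numbers on it (the £500 premium is my invention; the strike, spot, and quantity are the figures above):

public class NakedCallPayoff {
  // The writer's profit in pence: keep the premium, minus the cost of
  // supplying at the strike once the market price moves above it.
  static long writerProfit(long premium, long strike, long spot, long quantity) {
    return premium - Math.max(0, spot - strike) * quantity;
  }

  public static void main(String[] args) {
    long premium = 50_000; // assume a £500 premium on 10,000 santas struck at 56p
    System.out.println(writerProfit(premium, 56, 40, 10_000)); //   50000: price stayed low, I keep the premium
    System.out.println(writerProfit(premium, 56, 72, 10_000)); // -110000: the naked call is exercised at 72p
  }
}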

Call options are a better model than debt for cruddy code (without tests) because they capture the unpredictability of what we do. If I slap in a feature without cleaning up, then I get the benefit immediately: I collect the premium. If I never see that code again, then I’m ahead and, in retrospect, it would have been foolish to have spent time cleaning it up.

On the other hand, if a radical new feature comes in that I have to do, all those quick fixes suddenly become very expensive to work with. Examples I’ve seen are a big new client that requires a port to a different platform, or a new regulatory requirement that needs a new report. I get equivalent problems if there’s a failure I have to interpret and fix just before a deadline, or the team members turn over completely and no-one remembers the tacit knowledge that helps the code make sense. The market has moved away from where I thought it was going to be and my option has been called.

Even if it is more expensive to do things cleanly (and I’m not convinced of that beyond a two-week horizon), it’s also less risky. A messy system is full of unhedged calls, each of which can cost an unpredictable amount should they ever be exercised. We’ve all seen what this can do in the financial markets, and the scary thing is that failure, if it comes, can be sudden—everything is fine until it isn’t. I’ve seen a few systems which are just too hard to change to keep up with the competition and the owners are in real trouble.

So that makes refactoring like buying an option too. I pay a premium now so that I have more choices about where I might take the code later. This is a mundane and obvious activity in many aspects of business—although not, it seems, software development. I don’t need to spend this money if I know exactly what will happen, if I have perfect knowledge of the relevant parts of the future, but I don’t recall when I last saw this happen.

So, the next time you have to deal with implausible delivery dates, don’t talk about Technical Debt. Debt is predictable and can be managed; it’s just another tool. Try talking about an Unhedged Call. Now all we need is a way to price Code Smells.


1) There is an apocryphal story about a trader buying chocolate santa futures and forgetting to sell them on. Eventually a truckload turned up at the Wall Street headquarters.

Machiavelli on code quality

As the doctors say of a wasting disease, to start with, it is easy to cure but difficult to diagnose. After a time, unless it has been diagnosed and treated at the outset, it becomes easy to diagnose but difficult to cure.

— Niccolò Machiavelli, The Prince

via Dee Hock, Birth of the Chaordic Age

Not a charter for hackers

Just had to turn off comments since this post has become a spam target. Sorry.

Update: Kent has since tweeted this nice one-liner:

a complete engineer can code for latency or throughput and knows when and how to switch


Kent Beck’s excellent keynote at the Startup Lessons Learned Conference has been attracting some attention on The Interweb. In particular, it seems like he’s now saying that careful engineering is wasteful—just copy and tweak those files to get a result now. I can already hear how this will be cited by frustrated proprietors and managers around the world (more on this in a moment).

What I think he actually said is that we should make engineering trade-offs. When we’re concerned with learning, we want the fastest turnaround possible. It’s like a physics apparatus: it’s over-engineered if it lasts beyond the experiment. But if we’re delivering stuff that people will actually use, that we want them to rely on, then the trade-off is different and we should do all that testing, refactoring, and so on that he’s been talking about for the last ten years. Kent brushes over that engineering stuff in his talk, but it’s easy to forget how rare timely, quality delivery is in the field.

My favourite part is Kent’s answer to the last question. A stressed manager or owner asks how to get his developers to stop being so careful and just ship something. Kent’s reply is to present the developers with the real problem, not the manager’s solution, and let them find a way. What the manager really wants is cheap feedback on some different options. The developers, given a chance, might find a better solution altogether, without being forced into arbitrarily dropping quality.

Good developers insist on maintaining quality, partly to maintain pride in their work (as Deming proposed), but also because we’ve all learned that it’s the best route to sustained delivery.

As Brian Marick pointed out recently, it’s about achieving more (much more) through relationships, not one side or another achieving dominance.

Mark Twain again

We should be careful to get out of an experience only the wisdom that is in it—and stop there—lest we be like the cat that sits down on a hot stove-lid. She will never sit down on a hot stove-lid again, and that is well; but also she will never sit down on a cold one any more.

via Gemba Panta Rei

Twain also wrote of opera, “that sort of intense but incoherent noise which always so reminds me of the time the orphan asylum burned down.”

Test-Driven Development is not an elite technique.

This “Darwinian” post, TDD Is Not For the Weak, says that not everyone can cope with TDD; it’s for “the Alpha, the strong, the experienced”. I don’t want to believe this, because I think that developers who can’t cope with any level of TDD shouldn’t be coding at all, so I won’t.

This claim has been made for every technical innovation I’ve seen so far (objects, event-driven programming, etc, etc). Sometimes it’s true, but most of the time it’s about what people are used to. Michael Feathers pointed out a while ago that the Ruby community is happily exploiting techniques such as meta-programming that were traditionally regarded as needing a safe pair of hands. What’s changed is that a generation has grown up with meta-programming and doesn’t regard it as problematic. Of course, there will be a degradation in understanding as an idea rolls out beyond its originators, but there’s still some value that gets through.

Sure, there’s a role for people to help the generation that is struggling to pick up a new technique, but that doesn’t mean that TDD itself will always be beyond the range of mortal developers.