Stephen Freeman

Coding

Notes on the craft of programming

Some mocks

“I don’t use a knife when I eat, if I need one it means the food hasn’t been cut up enough.”
“Forks are unnecessary, I can do everything I want with a pointed knife.” [1]

One of the things we realised when writing “Growing Object-Oriented Software” is that the arguments about mocks in principle are often meaningless. Some of us think about objects in terms of Alan Kay’s emphasis on message passing, others don’t. In my world, I’m interested in the protocols of how objects communicate, not what’s inside them, so testing based on interactions is a natural fit. If that’s not the kind of structure you’re working with right now, then testing with mocks is probably not the right technique.

This post was triggered by Arlo Belshee’s post on “The No Mocks Book”. I think he has a really good point, buried in some weaker arguments (the developers I work with don’t use mocks just to minimise design churn, they’re as ruthless as anyone when it comes to fixing structure). His valuable point is that it’s a really good idea to try setting yourself style constraints to re-examine old habits, as in this “object-oriented calisthenics exercise”. As I once wrote, Test-Driven Development itself can have this property of forcing a developer to stop and think about what’s really needed rather than pushing ahead with an implementation.

As for Arlo’s example, I don’t think I can provide a “better” solution without knowing more detail. As he points out in the comments, this is legacy code, so it’s harder to use mocks for interface discovery. I think Arlo’s partner is right that ProjectFile.LoadFrom is problematic. For me, the confusion is likely to be the combination of reading bytes from a disk and the construction of a domain structure; I’d expect better structure if I separated them. In practice, what I’d probably do is some “annealing” by inlining code and looking for a better partition. Finally, it would be great if Arlo could finish up the reworking; I can believe that he has a much better solution in mind, but I’m struggling with the value of this step.

There is one more thing we agree on: the idea of building a top-level language that makes sense in the domain. Calling methods on the same object, like a Smalltalk cascade, is one way, although there’s nothing in the type to reveal the protocol—how the calls relate to each other. We did this in jMock1, where we used interface chaining to guide the programmer and the IDE as to what to do next. Arlo’s example is simple enough that the top-level can be inspected to see if it makes sense. I’ve worked in other domains where things are complicated enough that we really did need some checked examples to make us feel confident that we’d got it right.
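
As a sketch of the idea (these are invented names, not jMock1’s actual interfaces), interface chaining means each call returns a narrower interface, so the compiler and the IDE’s completion offer only the legal next steps in the protocol:

// A minimal sketch of interface chaining, names invented for illustration:
// each step returns the interface for the next legal step only.
interface ExpectationStart {
  MethodClause expects();
}

interface MethodClause {
  ArgumentClause method(String name);
}

interface ArgumentClause {
  void with(Object... arguments);
}

// so client code can only be written in protocol order:
//   mock.expects().method("doIt").with("some argument");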

1) of course, most of the world eats with their hands or chopsticks, so this is a culturally oppressive metaphor

On the composability of Hamcrest matchers

A recent discussion on the hamcrest java users list got me thinking that I should write up a little style guide, in particular about how to create custom Hamcrest matchers.

Reporting negative scenarios

The issue, as raised by “rdekleijn”, was that he wasn’t getting useful error messages when testing a negative scenario. The original version looked something like this, including a custom matcher:

public class OnComposabilityExampleTest {
  @Test public void
  wasNotAcceptedByThisCall() {
    assertThat(theObjectReturnedByTheCall(),
               not(hasReturnCode(HTTP_ACCEPTED)));
  }

  private Matcher<ThingWithReturnCode>
  hasReturnCode(final int expectedReturnCode) {
    return new TypeSafeDiagnosingMatcher<ThingWithReturnCode>() {
      @Override protected boolean
      matchesSafely(ThingWithReturnCode actual, Description mismatch) {
        final int returnCode = actual.getReturnCode();
        if (expectedReturnCode != returnCode) {
          mismatch.appendText("return code was ")
                  .appendValue(returnCode);
          return false;
        }
        return true;
      }

      @Override
      public void describeTo(Description description) {
        description.appendText("a ThingWithReturnCode equal to ")
                   .appendValue(expectedReturnCode);
      }
    };
  }
}

which produces an unhelpful error because the received object doesn’t have a readable toString() method.

java.lang.AssertionError:
Expected: not a ThingWithReturnCode equal to <202>
     but: was <ThingWithReturnCode@...>

The problem is that the not() matcher only knows that the matcher it wraps has accepted the value. It can’t ask for a mismatch description from the internal matcher because at that level the value has actually matched. This is probably a design flaw in Hamcrest (an early version had a way to extract a printable representation of the thing being checked), but we can use this moment to think about improving the design of the test. We can work with Hamcrest, which is designed to be very composable.
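
To see why, here’s roughly what not() looks like inside Hamcrest (paraphrased, not the exact source). It simply inverts the wrapped matcher’s answer, so when the assertion fails there is no inner mismatch to report, and BaseMatcher falls back to printing the actual value:

// Approximately how Hamcrest's IsNot is implemented
public class IsNot<T> extends BaseMatcher<T> {
  private final Matcher<T> matcher;

  public IsNot(Matcher<T> matcher) {
    this.matcher = matcher;
  }

  @Override
  public boolean matches(Object argument) {
    // the wrapped matcher *matched*, so there's no mismatch description
    // to ask it for; BaseMatcher's default reports "was <argument>"
    return !matcher.matches(argument);
  }

  @Override
  public void describeTo(Description description) {
    description.appendText("not ").appendDescriptionOf(matcher);
  }
}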

Separating concerns

The first thing to notice is that the custom matcher is doing too much: it’s extracting the value and checking that it matches. A better design would be to split the two activities and delegate the decision about the validity of the return code to an inner matcher.

public class OnComposabilityExampleTest {
  @Test public void
  wasNotAcceptedByThisCall() {
    assertThat(theObjectReturnedByTheCall(),
               hasReturnCode(not(equalTo(HTTP_ACCEPTED))));
  }

  private Matcher<ThingWithReturnCode>
  hasReturnCode(final Matcher<Integer> codeMatcher) {
    return new TypeSafeDiagnosingMatcher<ThingWithReturnCode>() {
      @Override protected boolean
      matchesSafely(ThingWithReturnCode actual, Description mismatch) {
        final int returnCode = actual.getReturnCode();
        if (!codeMatcher.matches(returnCode)) {
          mismatch.appendText(" return code ");
          codeMatcher.describeMismatch(returnCode, mismatch);
          return false;
        }
        return true;
      }

      @Override
      public void describeTo(Description description) {
        description.appendText("a ThingWithReturnCode with code ")
                   .appendDescriptionOf(codeMatcher);
      }
    };
  }
}

which gives the much clearer error:

java.lang.AssertionError:
Expected: a ThingWithReturnCode with code not <202>
     but: return code was <202>

Now the assertion line in the test reads better, and we have the flexibility to make assertions such as hasReturnCode(greaterThan(25)) without changing our custom matcher.

Built-in support

This is such a common situation that we’ve included some infrastructure in the Hamcrest libraries to make it easier. There’s a template FeatureMatcher, which extracts a “feature” from an object and passes it to a matcher. In this case, it would look like:

private Matcher<ThingWithReturnCode>
hasReturnCode(final Matcher<Integer> codeMatcher) {
  return new FeatureMatcher<ThingWithReturnCode, Integer>(
          codeMatcher, "ThingWithReturnCode with code", "code") {
    @Override
    protected Integer featureValueOf(ThingWithReturnCode actual) {
      return actual.getReturnCode();
    }
  };
}

and produces an error:

java.lang.AssertionError:
Expected: ThingWithReturnCode with code not <202>
     but: code was <202>

The FeatureMatcher handles the checking of the extracted value and the reporting.

Finally, in this case, getReturnCode() conforms to Java’s bean format so, if you don’t mind that the method reference is not statically checked, the simplest thing would be to avoid writing a custom matcher and use the built-in hasProperty() matcher instead.

public class OnComposabilityExampleTest {
  @Test public void
  wasNotAcceptedByThisCall() {
    assertThat(theObjectReturnedByTheCall(),
               hasProperty("returnCode", not(equalTo(HTTP_ACCEPTED))));
  }
}

which gives the error:

java.lang.AssertionError:
Expected: hasProperty("returnCode", not <202>)
     but: property 'returnCode' was <202>

Another reason not to log directly in your code

I’ve been ranting for some time that it’s a bad idea to mix logging directly into production code. The right thing to do is to introduce a collaborator that has a responsibility to provide structured notifications to the outside world about what’s happening inside an object. I won’t go through the whole discussion here but, somehow, I don’t think I’m winning this one.

Recently, a team I know provided another reason to avoid mixing production logging with code. They have a system that processes messages and have been asked to record all the accepted messages for later reconciliation with an upstream system. They did what most Java teams would do and logged incoming messages in the class that processes them. Then they associated a special appender with that class’s logger that writes its entries to a file somewhere for later checking. The appenders are configured in a separate XML file.

One day the inevitable happened and they renamed the message processing class during a refactoring. This broke the reference in the XML configuration and the logging stopped. It wasn’t caught for a little while because there wasn’t a test. So, lesson one is that, if it matters, there should have been a test for it. But this is a pretty rigorous team that puts a lot of effort into doing things right (I’ve seen much worse), so how did they miss it?

I think part of it is the effort required to test logging. A unit test won’t do because the structure includes configuration, and acceptance tests run slowly because loggers buffer to improve performance. And part of it is to do with using a side effect of system infrastructure to implement a service. There’s nothing in the language of the implementation code that describes the meaning of reporting received messages: “it’s just logging”.

Once again, if I want my code to do something, I should just say so…

Update: I’ve had several responses here and on other media about how teams might avoid this particular failure. All of them are valid, and I know there are techniques for doing what I’m supposed to do while using a logging framework.

I was trying to make a different point—that some code techniques seem to lead me in better directions than others, and that a logging framework isn’t one of them. Once again I find that the trickiness in testing an example like this is a clue that I should be looking at my design again. If I introduce a collaboration to receive structured notifications, I can separate the concepts of handling messages and reporting progress. Once I’ve split out the code to support the reconciliation messages, I can test and administer it separately—with a clear relationship between the two functions.
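
For illustration only (the names here are mine, not the team’s code), the shape might be something like this: the processor reports accepted messages to a collaborator, and the reconciliation writer is just one implementation of it.

// Hypothetical sketch of a collaborator that receives structured notifications
class Message { /* contents elided */ }

interface MessageEvents {
  void messageAccepted(Message message);
}

class MessageProcessor {
  private final MessageEvents events;

  MessageProcessor(MessageEvents events) {
    this.events = events;
  }

  void process(Message message) {
    // ... handle the message ...
    events.messageAccepted(message);  // an explicit, named notification
  }
}

// One implementation of MessageEvents writes the reconciliation records;
// a unit test can substitute a mock and assert that accepted messages are
// reported, without touching any logger configuration.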

None of this guarantees a perfect design, but I find I do better if I let the code do the work.

Test-First Development 1968

Seeing Kevlin Henney again at the Goto conference reminded me of a quotation he cited at Agile on the Beach last month.

In 1968, NATO funded a conference with the then provocative title of Software Engineering. Many people feel that this is the moment when software development lost its way, but the report itself is more lively than its title suggests.

It turns out that “outside in” development, with early testing, is older than we thought. Here’s a quote from the report by Alan Perlis:

I’d like to read three sentences to close this issue.

  1. A software system can best be designed if the testing is interlaced with the designing instead of being used after the design.
  2. A simulation which matches the requirements contains the control which organizes the design of the system.
  3. Through successive repetitions of this process of interlaced testing and design the model ultimately becomes the software system itself. I think that it is the key of the approach that has been suggested, that there is no such question as testing things after the fact with simulation models, but that in effect the testing and the replacement of simulations with modules that are deeper and more detailed goes on with the simulation model controlling, as it were, the place and order in which these things are done.

It’s all out there in our history, we just have to be able to find it.

An example of an unhedged software call option

At a client, we’ve been reworking some particularly hairy calculation code. For better or worse, the convention is that we call a FooFetcher to get hold of a Foo when we need one. Here’s an example that returns Transfers, which are payments to and from an account. In this case, we’re mostly getting hold of Transfers directly because we can identify them [1].

public interface TransferFetcher {
  Transfer      fetchFor(TransferId id);
  Transfer      fetchOffsetFor(Transfer transfer);
  Set<Transfer> fetchOutstandingFor(Client client, CustomerReference reference);
  Transfer      fetchFor(CustomerReference reference);
}

This looks like a reasonable design—all the methods are to do with retrieving Transfers—but it’s odd that only one of them returns a collection of Transfers. That’s a clue.

When we looked at the class, we discovered that the fetchOutstandingFor() method has a different implementation from the other methods and pulls in several dependencies that only it needs. In addition, unlike the other methods, it has only one caller (apart from its tests, of course). It doesn’t really fit in the Fetcher implementation, which is now inconsistent.

It’s easy to imagine how this method got added. The programmers needed to get a feature written, and the code already had a dependency that was concerned with Transfers. It was quicker to add a method to the existing Fetcher, even if that meant making it much more complicated, than to introduce a new collaborator. They sold a Call Option—they cashed in the immediate benefit at the cost of weakening the model. The team would be ahead so long as no-one needed to change that code.

The option got called on us. As part of our reworking, we needed to change how Transfer objects were constructed so we could handle a new kind of transaction. The structure we planned meant changing another object, say Accounts, to depend on a TransferFetcher, but the current implementation of TransferFetcher depended on Accounts to implement fetchOutstandingFor(). We had a dependency loop. We should have taken a diversion and moved the behaviour of fetchOutstandingFor() into an appropriate object, but then we had our own delivery pressures. In the end, we found a workaround that allowed us to finish the task we were in the middle of, with a note to come back and fix the Fetcher.
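
Sketching the loop (the class bodies here are hypothetical; only the dependency shapes come from the story above):

// The change we planned: Accounts needs a TransferFetcher...
class Accounts {
  private final TransferFetcher transfers;
  Accounts(TransferFetcher transfers) { this.transfers = transfers; }
}

// ...but the existing implementation already needs Accounts, and only
// to implement fetchOutstandingFor().
class DatabaseTransferFetcher /* implements TransferFetcher */ {
  private final Accounts accounts;
  DatabaseTransferFetcher(Accounts accounts) { this.accounts = accounts; }
}
// Neither object can be constructed before the other: a dependency loop.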

The cost of recovery includes not just the effort of investigating and applying a solution (which would have been less when the code was introduced) but also the drag on motivation. It’s a huge gumption trap to be making steady progress towards a goal and then be knocked off course by an unnecessary design flaw. The research described in The Progress Principle suggests that small blockers like this have a disproportionate impact compared to their size. Time to break for a cup of tea.

I believe that software quality is a cumulative property. It’s the accumulation of many small good or bad design decisions that either make a codebase productive to work with or just too expensive to maintain.

…and, right on cue, Rangwald talks about The Tyranny of the Urgent.


1) The details of the domain have been changed to protect the innocent, so please don’t worry too much about the detail.

Thanks to @aparker42 for his comments

Is Dependency Injection like Facebook?

The problem with social networks

I think there’s a description, in Paul Adams’ talk about online vs. offline social networks, of how Dependency Injection goes bad, particularly when using one of the many automated frameworks.

Adams describes a research subject, Debbie, who in “real life” has friends and contacts from very different walks of life. She has friends from college with alternative lifestyles who post images from their favourite LA gay bars. She also trains local 10-year-olds in competitive swimming. Both the college friends and the swimming kids have “friended” her. She was horrified to discover that these two worlds had inadvertently become linked through her social networking account.

This is the “Facebook problem”. The assumption that all relationships are equivalent was good enough for college dorms but doesn’t really scale to the rest of the world, hence Google+. As Adams points out,

Facebook itself is not the problem here. The problem here is that these different parts of Debbie’s life, which would never have been exposed to each other offline, were linked online.

Like most users, Debbie wasn’t thinking of the bigger picture when she bound the whole of her life together. She was just connecting to people she knew and commenting on some pictures of guys with cute buns.

Simile alert!

Let’s revisit the right-hand side of that illustration.

This is Nat‘s diagram for the Ports and Adapters pattern. It illustrates how some people (including us) think system components should be built, with the domain logic in the centre protected from the accidental complexity of the outside world by a layer of adapters. I do not want to have my web code inadvertently linked directly to my persistence code (or even connected to LA gay bars).

That’s the trouble with the use of DI frameworks in systems that I’ve seen; there’s only one level of relationship: get me an object from the container. When I’m adding a feature, I just want to get hold of some component—and here’s an easy way to do it. It takes a lot of rigour to step back at every access to consider whether I’m introducing a subtle link between components that really shouldn’t know about each other.

I know that most of the frameworks support managing different contexts but it seems that, frankly, that’s more thinking and organisation than most teams have time for at the beginning of a project. As for cleaning up after the fact, well, it’s a good way to make a living if the company can afford it and you like solving complex puzzles. More critical, however, is that the Ports and Adapters structure is recursive. Trying to manage the environments of multiple levels of subsystem with most current containers would be, in Keith Braithwaite‘s words, “impossible and/or insane”.

new again

The answer, I believe, is to save the DI frameworks for the real boundaries of the system, the parts which might change from installation to installation. Otherwise, I gather object assembly into specialised areas of the code where I can build up the run-time structure of the system with the deft use of constructors and new. It’ll look a bit complex but no worse than the equivalent DI structure (and everyone should learn to read code that looks like lisp).
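
For example (all the component names here are invented for illustration), gathering the assembly into one place might look like this:

// A minimal sketch: object assembly gathered into one place, with the
// run-time structure built up from plain constructor calls.
class PricingPolicy {}
class OrderBook {}
class AuditTrail {}

class OrderProcessing {
  OrderProcessing(PricingPolicy pricing, OrderBook orders) {}
}

class Application {
  Application(OrderProcessing processing, AuditTrail audit) {}
  void run() {}
}

public class Main {
  public static void main(String[] args) {
    // the nesting reads like the structure of the system
    // ("code that looks like lisp")
    new Application(
        new OrderProcessing(new PricingPolicy(), new OrderBook()),
        new AuditTrail())
            .run();
  }
}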

If I later find that I can’t get access to some component that I think I need, that’s not necessarily a bad thing. It’s telling me that I’m introducing a new dependency and sometimes that’s a hint that a component is in the wrong place, or that I’m trying to use it from the wrong place. The coding bump is a design feedback mechanism that I miss when I can just pull objects out of a container. If I do a good job, I should find that, most of the time, I have just the right components at the time that I need them.

Machiavelli on code quality

As the doctors say of a wasting disease, to start with, it is easy to cure but difficult to diagnose. After a time, unless it has been diagnosed and treated at the outset, it becomes easy to diagnose but difficult to cure.

— Niccolò Machiavelli, The Prince

via Dee Hock, Birth of the Chaordic Age

Calling an Oracle stored procedure with a Table parameter using Spring's StoredProcedure class

I don’t normally do this sort of thing, but this took my colleague Tony Lawrence and me a while to figure out and we didn’t find a good explanation on the web. This will be a very dull posting unless you need to fix this particular problem. Sorry about that.

We happen to be using the Spring database helper classes to talk to Oracle with stored procedures. It turns out that there’s a bug in the driver that means that you have to jump through a few hoops to pass values in when the input parameter type is a table. This should be equivalent to an array, but apparently it isn’t, so you have to set up the callable statement correctly. Where to do this was not obvious (to us) in the Spring framework.

Here’s an example stored procedure declaration:

CREATE TYPE VARCHARTAB IS TABLE OF VARCHAR2(255);

CREATE PACKAGE db_package AS
  TYPE list_type IS TABLE OF VARCHAR2(50) INDEX BY BINARY_INTEGER;

  PROCEDURE a_stored_procedure(
    table_in IN list_type
  );
END db_package;

The table_in parameter type, list_type, is declared within a package, which means we can’t declare the parameter as an OracleTypes.ARRAY when setting up the statement. Instead, we declare it as the type of the table contents, OracleTypes.VARCHAR:

class MyProcedure extends StoredProcedure {
  public MyProcedure(DataSource dataSource) {
    super(dataSource, "db_package.a_stored_procedure");
    declareParameter(new SqlParameter("table_in",
                                      OracleTypes.VARCHAR));
    compile();
  }

  void call(String... values) {
    execute(withParameters(values));
  }

Here’s the money quote. When setting up the parameter, you need to provide it with a SqlTypeValue.
Don’t use one of the helper base classes that come out of the box, but create an implementation directly. That gives you access to the statement, which you can cast and set appropriately.

  private Map<String, Object> withParameters(String... values) {
    return ImmutableMap.of("table_in",
                           oracleIndexTableWith(50, values));
  }

  private <T> SqlTypeValue
  oracleIndexTableWith(final int elemMaxLen, final T... values) {
    return new SqlTypeValue() {
      @Override
      public void setTypeValue(
        PreparedStatement statement, int paramIndex,
        int sqlType, String typeName) throws SQLException
      {
        ((OracleCallableStatement)statement).setPlsqlIndexTable(
            paramIndex, values, values.length, values.length,
            sqlType, elemMaxLen);
      }
    };
  }
}
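
For completeness, a call site might look something like this (assuming a DataSource configured elsewhere):

// Hypothetical usage of the MyProcedure class defined above
MyProcedure procedure = new MyProcedure(dataSource);
procedure.call("first value", "second value");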

That’s it. Happy copy and paste.

Keep tests concrete

This popped up on a technical discussion site recently. The original question was how to write tests for code that invokes a method on particular values in a list. The problem was that the tests were messy, and the author was looking for a cleaner alternative. Here’s the example test; it asserts that the even-positioned elements in the parameters are passed to bar in the appropriate sequence.

public void testExecuteEven() {
  Mockery mockery = new Mockery();

  final Bar bar = mockery.mock(Bar.class);
  final Sequence sequence = new NamedSequence("sequence");

  final List<String> allParameters = new ArrayList<String>();
  final List<String> expectedParameters = new ArrayList<String>();

  for (int i = 0; i < 3; i++) {
    allParameters.add("param" + i);
    if (i % 2 == 0) {
      expectedParameters.add("param" + i);
    }
  }
  final Iterator<String> iter = expectedParameters.iterator();

  mockery.checking(new Expectations() {{
    while (iter.hasNext()) {
      one(bar).doIt(iter.next());
      inSequence(sequence);
    }
  }});

  Foo subject = new Foo();
  subject.setBar(bar);
  subject.executeEven(allParameters);
  mockery.assertIsSatisfied();
}

The intentions of the test are good, but its most striking feature is that there’s so much computation going on. This doesn’t need a new technique to make it more readable; it just needs to be simplified.

A unit test should be small and focussed enough that we don’t need any general behaviour. It just has to deal with one example, so we can make it as concrete as we like. With that in mind, we can collapse the test to this:

public void testCallsDoItOnEvenIndexedElementsInList() {
  final Mockery mockery = new Mockery();
  final Bar bar = mockery.mock(Bar.class);
  final Sequence evens = mockery.sequence("evens");

  final List<String> params =
    Arrays.asList("param0", "param1", "param2", "param3");

  mockery.checking(new Expectations() {{
    oneOf(bar).doIt(params.get(0)); inSequence(evens);
    oneOf(bar).doIt(params.get(2)); inSequence(evens);
  }});

  Foo subject = new Foo();
  subject.setBar(bar);
  subject.executeEven(params);
  mockery.assertIsSatisfied();
}

To me, this is more direct, a simpler statement of the example—if nothing else, there’s just less code to understand. I don’t need any loops because there aren’t enough values to justify them. The expectations are clearer because they show the indices of the elements I want from the list (an alternative would have been to put in the expected values directly). And if I pulled the common features, such as the mockery and the target object, into the test class, the test would be even shorter.

The short version of this post is: be wary of any general behaviour written into a unit test. The scope should be small enough that values can be coded directly. Be especially wary of anything with an if statement. If the data setup is more complicated, then consider using a Test Data Builder.
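
A Test Data Builder might look like this (a hedged sketch with an invented Order type): the builder supplies safe defaults, so each test states only the values it cares about, keeping the setup concrete without repetition.

// Hypothetical domain type for the sketch
class Order {
  final String customer;
  final int quantity;

  Order(String customer, int quantity) {
    this.customer = customer;
    this.quantity = quantity;
  }
}

class OrderBuilder {
  private String customer = "a customer";  // safe defaults
  private int quantity = 1;

  static OrderBuilder anOrder() { return new OrderBuilder(); }

  OrderBuilder withCustomer(String customer) {
    this.customer = customer;
    return this;
  }

  OrderBuilder withQuantity(int quantity) {
    this.quantity = quantity;
    return this;
  }

  Order build() { return new Order(customer, quantity); }
}

// in a test, only the relevant value is stated:
//   Order order = anOrder().withQuantity(3).build();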

IWonderAboutInterfaceNames

InfoQ has just published Udi Dahan’s talk from QCon 2008 on “Intentions and Interfaces”. It’s good to see the message about focussing on Roles rather than Classes being pitched to a new audience. That’s what we were trying to talk about in our “Mock Roles, Not Objects” paper.

I wonder, however, about his style for naming roles:

interface IMakeCustomerPreferred {
  void MakePreferred();
}
interface IAddOrdersToCustomer {
  void AddOrder(Order order);
}

It took me a little while to figure it out, but to me the issue is that these interface names bind the role to the underlying implementation, or at least to a larger role. One of the things that Nat Pryce and I discuss in our book is that interfaces need refactoring too. If two interfaces are similar, perhaps there’s a common concept there and they should be collapsed—which brings us more pluggable code. This implies that roles, as described by interfaces, should aim to be context-independent. In this case, I might rename one of the interfaces to:

interface IOrderCollector {
  void AddOrder(Order order);
}

since there will be contexts in which I really don’t care that it happens to be a Customer.

That said, I think Dahan has other motivations with this naming scheme, since he also uses it to control retrieval from the ORM, but there might be other ways to achieve that.

A colleague was once accused of being so role-happy that he defined an Integer as a combination of Addable, Subtractable, Multipliable, and Divideable.