Debugging legacy applications is, in general, no fun. In this post, I’ll describe a helpful tactic: wrapping a unit test around the questionable code.
Why begin with a test?
Imagine you're facing a pile of 1,000 lines of gibberish. Where do you start? A test? What? No way! This code is totally untestable!
Au contraire! Nothing is untestable: at some level, the code must accept some input and produce some output. Thus, we can wrap an automated test around it. But why?
For the purposes of this post, let’s suppose we’ve got a very long function of gibberish.
By beginning with a test, we'll learn a number of things about the function:
- What kinds of inputs it expects
- What kinds of output it produces
- What makes it blow up
- What dependencies the function has: other functions, other objects needed during construction, etc.
- What side effects it has
- Last but not least (and usually the hardest to learn), what makes the function happy
Once we’ve added that first test, we now:
- Have an automated regression test
- Have an automated test harness
- Can likely add more regression tests cheaply
… which means we have a chance at confidently improving the implementation without inadvertently breaking something.
In summary, there really is no downside to adding a test, but there are many, many upsides.
My first example is a function I encountered that was 1,000 lines of gibberish. Actually, no: it was 3,000. The function may have made sense to the compiler and perhaps the original author, but no one else. It had everything: giant switch statements composed mostly of copy-and-paste, loops and loop variables nested layer upon layer, goodness knows how many side effects, etc.
It took us a while, but we eventually got that first test in there, which pretty much read as, "I can call this function without it blowing up." But we got it! We then knew what kinds of input the function needed. We understood its output. We knew how to prevent it from exploding. And best of all, we now had a test suite where we could easily add tests for the known working cases and known broken cases. Progress!
The second example is from a different project. In this case, the author had implemented a Java class's equals function, then implemented equals2, then equals3, and so on. Each equals function performed deep equality on another field in the object, and guess what? Each successive equals method was authored as a copy-and-paste of the previous method, plus a little more added.
Woohoo! Thankfully, once we had tests around each of these methods, it was trivial to reimplement one function in terms of another and feel confident that the code would still behave the same way.
Sometimes, legacy code is so troublesome that the best thing to do is blow it apart. (In a controlled manner, of course.) See another one of my posts on this tactic, Need to Debug a Legacy App? Throw an Exception.
Debugging legacy applications is never easy; I hope the above tips are helpful. I got the ideas from Michael Feathers at Agile 2006; thanks so much, Michael! Luckily for us, Michael has authored the definitive guide on this topic: Working Effectively with Legacy Code.