Earlier this year I had the unusual opportunity to work on a project with another developer (I’ll call him Dave – not his real name) in which each of us was free to choose our own method of application development.
This is certainly not an ideal situation. At first I considered following the client’s (Dave’s company) development methods just for consistency, but I ultimately decided to follow TDD for my portions of the project for several reasons:
- I had discussed it with the PM and Dave and they were interested in seeing how this “TDD Thing” they had been hearing about worked.
- The features of the application that the two of us were working on were easily differentiated.
- I was interested in seeing first-hand the results of TDD vs non-TDD literally side-by-side.
I also invited Dave to pair with me, even if only occasionally, so he could see first-hand how TDD works and judge it for himself. He agreed it would be interesting, but it never happened.
The project started off with both of us sitting down with an Excel sheet filled with the new features to be added to an application that had been started about a year earlier, but had languished for the last several months in a not-yet-usable state. We spent the better part of the day discussing the features and estimating them.
I had been successful in selling the idea of point-based estimating (as opposed to direct time estimates) and we chose a point scale a bit higher than what I would normally have chosen – the ‘average’ story was set at 200 points. Based on this we ended up defining just over 11,000 points of stories over the course of the project.
The original plan when my contract started was for Dave to work 30 hours per week on our project while I was to work 40. Soon after we started, however, it became clear that Dave’s extra-project responsibilities were going to take up much more of his time. In the end, Dave’s hours on feature implementation work totaled 92 to my 217. This roughly correlated with the number of points we each completed: Dave was assigned and completed 3550 points of work while I was credited with 6950.
On the face of it, it would appear that Dave was slightly more efficient than I was. After all, I worked 2.36 times as many hours but completed only 1.96 times as much work. To put it another way, he had a development velocity of 38.6 story points per development hour and I had a development velocity of 32.0. However, the hours outside of feature development tell the real story.
With each week’s release build we’d detail the new features that we had completed. Then the testing team would give it a good run-through, logging bugs in another Excel spreadsheet. The contents of this sheet tell an interesting story. Dave ended up with 33 entries in the list; I had a total of 22. Not a bad ratio – I had one third fewer issues while doing almost twice the amount of work. But wait – it gets even better: of the 22 items in the issue log, five were actually additional feature requests that I had time to complete thanks to the short bug list, so my real number of bug entries was 17.
Once the amount of time spent fixing bugs is factored into our development velocities, the relationship between Dave’s velocity and mine changes dramatically. I spent 100.75 hours working on items in the testing log. If we assume that the time it took to fix Dave’s 33 bugs was proportionately similar, then we can estimate that 151 hours were spent working on those items.
So if you add this to the time spent in primary development you get:
Scott: 217 + 100 = 317 hours for 6950 points, or about 22 points per hour
Dave: 92 + 151 = 243 hours for 3550 points, or about 14.6 points per hour
That’s a 50% improvement in overall productivity – pretty good, I’d say.
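For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation using the figures from the article (the hour totals are rounded as in the text, and the 151-hour figure for Dave is the proportional estimate described above):

```python
# Sanity check of the velocity arithmetic, using the article's figures.

def velocity(points, hours):
    """Story points completed per hour of work."""
    return points / hours

# Raw development velocities (feature work only)
scott_raw = velocity(6950, 217)   # ~32.0 points/hour
dave_raw = velocity(3550, 92)     # ~38.6 points/hour

# Estimate Dave's bug-fixing time proportionally from Scott's
# 100.75 hours on 22 issue-log entries vs Dave's 33 entries.
dave_bug_hours = 100.75 * 33 / 22   # ~151 hours

# Velocities once bug-fixing time is included
scott_total = velocity(6950, 217 + 100)  # ~22 points/hour
dave_total = velocity(3550, 92 + 151)    # ~14.6 points/hour

improvement = scott_total / dave_total - 1
print(f"Overall productivity improvement: {improvement:.0%}")
```

Running this confirms the roughly 50% overall productivity difference quoted above.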
It was nice to see some hard numbers backing up what we have been preaching.