Part of a series on Making better estimates.
Estimation is frustrating: fuzzy, difficult, inexact. You can simplify the process, reduce the effort, and maybe even improve overall accuracy by trying to be less accurate at the detail level.
Estimation is an inherently squishy activity. You can’t prove your estimate is correct (at least until you’ve done the work). You have no option but to make a choice in the face of imperfect or incomplete knowledge. This can make it hard for analytical or technical people to estimate.
Trying to make estimates with too many significant digits makes things worse. Is this task 6.0, 6.5, or 7.0 hours of work? Call it 6 or 7 and be done. The added accuracy is false anyway, given the nature of the problem, so there’s no point spending time debating it.
We believe in taking this a step further and estimating in discrete buckets related by powers of 2. Our project tasks are 1, 2, 4, 8, 16, or 32 points. The biggest bucket is usually a sign that we need to work harder at decomposing the task so we can estimate it better. Using this discrete set of estimates avoids wasting time and effort trying to distinguish a 6-point story from a 7-point story. It also simplifies and improves the use of a reference task (a rough sketch of this bucketing follows the list), since the common cases are:
- relatively trivial (1 point),
- 1/2 the reference (2 points),
- same as the reference (4 points),
- twice the reference (8 points),
- four times the reference (16 points).
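As a rough illustration of how these buckets work in practice, here is a minimal sketch (not from the original post): the bucket values and the convention that the reference task is worth 4 points come from the list above, while the snap-to-nearest-bucket rule and the function name are my own assumptions.

```python
# A minimal sketch of power-of-2 bucketing, assuming the reference task is
# worth 4 points (as in the list above). Raw gut-feel estimates are snapped
# to the nearest bucket instead of being haggled over digit by digit.

BUCKETS = [1, 2, 4, 8, 16, 32]
REFERENCE_POINTS = 4  # "same as the reference" in the list above

def to_bucket(raw_estimate, reference_estimate):
    """Snap a raw estimate (hours, days, whatever) to the nearest bucket.

    The raw estimate is expressed as a multiple of the reference task and
    scaled so the reference task itself lands on 4 points.
    """
    points = REFERENCE_POINTS * raw_estimate / reference_estimate
    # Choose the bucket whose value is closest to the scaled estimate.
    bucket = min(BUCKETS, key=lambda b: abs(b - points))
    if bucket == BUCKETS[-1]:
        print("32 points: consider decomposing this task further.")
    return bucket

# Example: if the reference task is about 8 hours, a "6 or 7 hour" task is
# simply 4 points, and there is nothing left to argue about.
print(to_bucket(6.5, reference_estimate=8))   # -> 4
print(to_bucket(15, reference_estimate=8))    # -> 8
```

Whether this is worth automating at all is debatable; the point is just that once the reference task is agreed on, mapping raw guesses to buckets is mechanical.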
A false level of significance (aren’t you glad you learned about significant digits in middle school?) can also hurt when you perform operations like summing and averaging on your estimates. Telling a customer that you achieved 16.745 points per hour is silly, and invites them to expect an unattainable level of accuracy in your project management metrics.
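To illustrate the reporting side, here is a small sketch with invented numbers; the point values, hours, and whole-number rounding are assumptions for illustration, not anything from the post.

```python
# Sketch of the reporting problem: aggregate metrics carry no more precision
# than the bucketed inputs do. The point values and hours below are invented.

completed_points = [1, 2, 4, 4, 8, 16]  # bucketed estimates finished this iteration
hours_spent = 2.33                       # hypothetical hours logged against them

velocity = sum(completed_points) / hours_spent     # 35 points / 2.33 hours
print(f"{velocity:.3f} points per hour")           # "15.021 points per hour" (false precision)
print(f"about {round(velocity)} points per hour")  # all the resolution the inputs justify
```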
Previous post: Absolute/real vs Relative/arbitrary
Next post: Range Estimates
Probably a good part of this series will be about how to make unbiased estimates. Bias (human bias) negatively affects estimates by inflating or deflating them.
I have published a few articles on the subject (the first is “Estimating and forecasting biases in projects”). Beware, though: the topic can easily become extremely philosophical.
This could be an instance of the Dunning-Kruger effect.
This cognitive bias seems to apply to estimates too: because we are estimating an unknown (in other words, we are currently incompetent at it), we have a hard time even estimating our level of incompetence.
If the Dunning-Kruger effect holds for software estimation, not only will we make inaccurate estimates, we will also struggle to estimate even the bounds of our inaccuracy (our level of incompetence). This is a very depressing way to look at estimation, but it rings true in my experience.
The obvious solution is the “never estimate anything new” meme. That is impractical, though, since you would end up in a narrower and narrower niche until the niche disappears. Unfortunately, this has been observed to happen.
I’ve seen (lived :-O) attempts to combat it by working harder to make more accurate estimates. I have not seen any success stories with this approach. I think it ends up in one of two traps: either falling back on the “never estimate anything new” meme to get some realistic level of accuracy, or pretending the unknown is known and riding a totally unrealistic estimate, tracked to unbelievable resolution, right into the wall.
I don’t know much about the root causes of such bias, but I certainly see it in action. We fight this very real phenomenon by doing group estimation, formalizing the definition of our low and high estimates, and buffering as described above.