
Making Better Estimates, Part 7: Range Estimates

Part of a series on Making better estimates.

Single point estimates don’t accurately represent the natural variation in a task. A range estimate (a low and a high) is a good first step toward improved accuracy. With a little definition and some simple math, range estimates let you estimate a project much more accurately and responsibly.

Single point estimates do a poor job of representing the variability in the actual time required for a task. The detailed explanation of why this is so involves the shape of the probability distribution of a typical software task’s completion time. Steve McConnell’s book Software Estimation has a good explanation of this.

A range estimate (low and high, say) gives a lot more information about the nature of the task being estimated. The difference between low and high indicates the uncertainty the team has in the estimate, or the natural variability of the task itself. A big spread indicates more uncertainty; a small spread indicates more confidence and less variability.

Our experience has taught us that simply asking developers for two estimates (“low” and “high”) doesn’t yield much more information than a single point estimate. Developers tend to be optimistic when it comes to individual estimates. So without a little structure, the “low” estimate usually means the absolute smallest amount of time the task could take – an outcome with a very low probability of coming true. The “high” estimate means what the task is most likely to take. This leaves unaccounted for much of what could push the actual completion time out (the long right tail of the estimate’s probability distribution), and leaves your overall project likely to be under-estimated.

If we’re making coarse, large-grain estimates, then we’ll use a range analysis technique described in Chapter 17 of Mike Cohn’s book Agile Estimating and Planning. This approach to project buffering is closely related to Goldratt’s critical chain project management techniques.

In this approach, our low or “aggressive but possible” (ABP) estimate is the most likely amount of time the task will take. The high or “highly probable” (HP) estimate is a conservative estimate that takes into account possible problems. By way of example, let’s say I live 15 minutes from work. On a good day, I know my ABP estimate for getting to work is 15 minutes. But what if there’s bad traffic, construction, or an accident? I know the route and alternate routes well enough that I feel confident, under almost any conditions, that I could make it to work in 30 minutes. Making the Highly Probable estimate requires either a lot of confidence and knowledge, or a really high estimate. On a task with high variability or unknown elements, the spread between “low” (ABP) and “high” (HP) can be quite large.

So how should we use the range estimate? Summing the HP estimates for all tasks will give a very large estimate for the project. After all, it’s very unlikely that you’ll hit the high estimate on every single task. It might feel like you’ve been on such projects before, but that’s probably because you weren’t making genuine HP estimates for the high estimate. If each HP estimate has roughly a 90% chance of holding, then a 10 task project has only about a one in ten billion chance of exceeding the high estimate on every single task. Using the sum of the high estimates would be terrible sandbagging.
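For a sense of scale, here’s a quick sketch of that arithmetic, assuming each HP estimate is a 90%-confidence estimate (Cohn’s convention; an assumption, since the confidence level isn’t stated explicitly above):

```python
# Each task exceeds its HP estimate with probability 0.10 (given a
# 90%-confidence HP estimate). For a 10-task project to exceed the
# high estimate on every task, all ten must land in that 10% tail.
p_exceed_one = 0.10
n_tasks = 10
p_exceed_all = p_exceed_one ** n_tasks
print(p_exceed_all)  # about 1e-10: one chance in ten billion
```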

Using the sum of the ABP estimates is also a problem. Doing so doesn’t account for any of the natural variation in the tasks, or the asymmetry of the completion time distribution function. The approach we use is to add a project buffer to the sum of the ABP estimates. You can think about the project buffer as receiving a contribution from each task in the project. You don’t know in advance which tasks will draw upon the project buffer, but you want to make sure that each task has contributed to it in proportion to the likelihood of need. The spread between the ABP and HP estimates indicates the potential for a task to go over and make a withdrawal from the project buffer.

The calculation we favor for the project buffer follows Cohn: take half the spread between each task’s HP and ABP estimates as that task’s standard deviation, combine those as the square root of the sum of their squares, and double the result:

buffer = 2 × √( Σ ((HPᵢ − ABPᵢ) / 2)² )

The overall project estimate then is:

estimate = Σ ABPᵢ + buffer

The project estimate is simply the sum of all the most likely task estimates (the ABP estimates), plus a project buffer. Since the buffer is sized based on the spread between the low and high estimates, it protects the project from variability in a responsible manner. With this approach you’re neither sandbagging, nor irresponsibly underestimating.
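The whole calculation is small enough to sketch in a few lines of Python. The task ranges below are hypothetical, and the buffer follows Cohn’s square-root-of-sum-of-squares formula:

```python
import math

def project_estimate(tasks):
    """Estimate a project from per-task (ABP, HP) range estimates.

    Following Cohn (Agile Estimating and Planning, ch. 17): treat
    (HP - ABP) / 2 as each task's standard deviation, combine them
    as the square root of the sum of squares, and double the result
    to get the project buffer.
    """
    abp_sum = sum(abp for abp, hp in tasks)
    buffer = 2 * math.sqrt(sum(((hp - abp) / 2) ** 2 for abp, hp in tasks))
    return abp_sum + buffer

# Hypothetical range estimates in days: (ABP, HP)
tasks = [(2, 4), (3, 8), (1, 2), (5, 10)]
print(round(project_estimate(tasks), 1))  # 18.4: 11 days of ABP plus a ~7.4-day buffer
```

Note where the result lands: 18.4 days sits between the sum of the ABP estimates (11 days, irresponsibly low) and the sum of the HP estimates (24 days, sandbagged).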


Previous post: False significance

Next post: Date vs duration
 

Carl Erickson

Carl is the president and cofounder of Atomic Object.

This entry was posted in Planning Your Project. Comments and trackbacks are closed.

4 Comments

  1. faith smart
    Posted January 14, 2009 at 2:24 pm

    i did not find what i needed [JK]

  2. Karl
    Posted January 14, 2009 at 2:24 pm

    Hello AO!
I really like your 50/90 approach to software estimates; it seems like a very simple yet powerful approach. I would be interested in some clarification though (brought on by our mutual interest in the Steve McConnell book you cited).

    1) Do you make a distinction between an estimate and a commitment to the customer? If so, how?

    2) I see on your site that you provide weekly updates to customers about “When will my project be finished?”. Does this mean you are providing re-estimates using the 50/90 tool to help combat the “Cone of Uncertainty”?

  3. Posted January 14, 2009 at 2:24 pm

    Great questions, Karl.

    Our commitment to the customer is to always know where the project stands with respect to time and budget. For ongoing projects we maintain a backlog of features, estimate in relative complexity points, and track the team’s velocity. We report on what we’ve done and extrapolate project completion with a burndown chart we deliver weekly.

    The burndown chart shows the impact of scope change. We’ll re-estimate features in the backlog when we’ve learned something new that’s relevant. Otherwise, the team’s velocity adjusts up or down and the absolute accuracy of the estimates doesn’t matter.

  4. Posted February 28, 2012 at 5:44 pm

I really like the info you’ve presented here, so much so that it inspired me to make a web app using the formula you outlined, to make it easier for me to come up with estimates.

    Check it out: ezranger.com

    Thanks for sharing!

6 Trackbacks

  1. By Time and Materials is dead | Atomic Spin on June 3, 2011 at 8:29 am

    [...] time and effort helping customers set a budget. We’ve established really strong patterns of estimating and tracking our work. We’ve got highly refined charts that report on progress toward completion. [...]

  2. [...] Previous posting: Range estimates [...]

  3. By Setting the Budget | Atomic Spin on July 11, 2011 at 10:49 am

    [...] and use that map to define a minimum viable product. We then estimate the development effort, in ranges of days, to implement the features that allow users to complete the defined tasks. To aid in our estimation [...]

  4. By Responsible Estimation Tool | Atomic Spin on July 18, 2011 at 11:23 am

    [...] of an application’s core features. In order to responsibly and efficiently estimate, we conduct a range analysis. We use this technique because we want to establish a responsible middle ground between our [...]

  5. [...] the tasks and dependencies, those people that would do the work gave the time estimates using the Aggressive-But-Possible (ABP) and Highly-Probable (HP) times. Everyone accepted these task estimates as fact and did not override [...]

  6. [...] I did what any smart customer would do. I pulled rank and ordered that the estimates be done using aggressive but possible (ABP) and highly probable (HP) times. Oh, and I would be responsible for the project [...]