Breaking Down Epic Stories

A couple of years ago I wrote about how I was using Epic stories for early project estimations. Recently John Rusk posted a question in the comments:

I have a question about this: “We make no attempt to restrict that the total number of points from the resulting stories adds up to the previous estimate of the Epic”.

How do you find that affects your burn charts? In particular, imagine you are halfway through a project. All the epics up to that point in the project have been “expanded” into smaller user stories. But the epics after that point have not yet been expanded. Imagine also that, when expanding an epic, the total number of points in the small stories is typically greater than the total of the original epic.

This would mean that past work has reasonably accurate numbers of points associated with it, but that future work, still in epic form, is underestimated. (Since, when you decompose the future work, its point value will typically go up, its current point value is too low.)

If this happens, it may skew your burn charts and make you think you’re closer to finishing than you really are.

However, from your description of your success with this approach, I take it that this problem doesn’t actually affect you. I have used a similar approach myself but, being concerned about the above problem, I have always taken care to preserve the same total point value when breaking down the epics. Hence my interest in hearing how you have tackled, or avoided, the same problem.

I am using Epics to get a very rough estimate of the size of the project, without having to spend an inordinate amount of time wading through the details at the beginning of the project (at the point of maximum ignorance, as Carl is fond of saying).

As John points out, breaking an Epic down into smaller stories can definitely have an impact on the burn chart. However, my experience has been that just as many Epics end up being smaller than the original estimate as end up being bigger. So while there is some churn, with the total number of points in the backlog rising and falling as Epics are broken down, over the course of the project the backlog remains fairly accurate.

I don’t believe the tactic of preserving the original point total when breaking down an Epic is a good one. When it comes time to do a finer-grained estimate, you are in a much better position to make accurate estimates. If you truly think you have more work than the original rough estimate indicates, it is misleading not to let that be known.

One tactic I have seen used to avoid the total points line rising and falling frequently as Epics are re-estimated is to keep a story in the backlog that acts as a “bank” for points. If an Epic is re-estimated at fewer points than originally thought, the surplus points go into the “bank” story. Later, when an Epic is re-estimated at more points, the difference is taken out of the bank.

When the bank grows too large, it makes sense either to remove some of the points, thus decreasing the size of the project, or, as happens more frequently on fixed-budget, scope-controlled projects, to replace some of those “banked” points with a few lower-priority stories that did not previously make the cut.
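At its core, the bank mechanism is just running-difference accounting. The sketch below is a hypothetical illustration (the `PointsBank` class and its method names are my own, not from any tracking tool); it shows how deposits and withdrawals keep the backlog’s total points line flat while individual Epic estimates change:

```python
class PointsBank:
    """A backlog story that absorbs estimate changes when Epics are re-estimated."""

    def __init__(self, points=0):
        self.points = points  # points currently held in the "bank" story

    def reestimate(self, old_estimate, new_estimate):
        """Deposit the surplus when an Epic shrinks; withdraw when it grows.

        Because the bank absorbs the difference, the backlog's total points
        line stays flat as long as the bank can cover the change.
        Returns the amount deposited (positive) or withdrawn (negative).
        """
        difference = old_estimate - new_estimate
        self.points += difference
        return difference


bank = PointsBank()
bank.reestimate(40, 32)  # Epic came in smaller: 8 points deposited
bank.reestimate(20, 25)  # Epic came in bigger: 5 points withdrawn
print(bank.points)       # 3 points remain in the bank
```

A negative balance would signal the situation described above in reverse: the project has grown beyond what the bank can cover, and the total points line (or the scope) has to move.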

I have also been on projects where a “bank” story similar to this was used as a buffer for expected but undefined changes. Anyone who has ever done software development knows that feedback is going to lead to additional work. These kinds of changes are frequently categorized as scope creep.

The reality is that after a feature is implemented it is often obvious that something additional is needed to complete it. You can plan for this at the start of a project by adding a bank story from which points can be withdrawn throughout the project without impacting the total points line in a burn chart.

  • John Rusk says:

    Thanks Patrick.

    Any thoughts on why this has been the case: “my experience has been that just as many Epics end up being smaller than the original estimate as do end up being bigger”? As you can imagine, if that was not the case, then the skewing effect on the burn chart would be more serious.

    There’s one other aspect I wanted to comment on: “If you truly think you have more work than the original rough estimate indicates, it is misleading to not let that be known.” I can see how that could be the right thing to do if you realise that the original estimate was wrong in relative terms: e.g. you realise that this particular 40-point epic is actually a completely different size from your project’s “typical” 40-point epic.

    However, I feel there is more risk of skewing the burn chart if it was actually correct in relative terms: e.g. “we seem to have a trend where all our 40-point stories expand out into a greater number of points when we break them down”. In that case, I feel it would not be misleading just to leave the total at 40 points, even after decomposing the story. All it means is that our velocity is lower than we hoped (i.e. each of those 40 points represents more person-hours than we originally expected). I guess you don’t have this latter case, because you tend to get an even-ish mix of over- and under-estimated epics :-)

    • John – Unfortunately I can’t say for sure why we seem to get a fairly even distribution of size decreases and size increases. My guess is that since they are such large, ballpark-type estimates, you have just as much chance of having estimated over as under. It could also be that when making rough estimates at the beginning of a project there are some things you just don’t know much about, and by the time you break down the Epic you know that much more about the domain, which sometimes means there is less work (or risk) than was originally thought.

      As to your second point, you are correct. What you are describing follows basically the same logic as leaving an 8-point story at 8 points even after it has taken longer than what you would consider a “normal” 8-point amount of time. You should not make it a 16-point story, but rather let the velocity drop to reflect the underestimate.

      I prefer to treat the Epic as a very rough first cut – more of a placeholder. Breaking it down into smaller stories provides for better estimates. I want to be aware (and I want my customer to be aware) of what effect this has on the backlog.

      But you make a good point, and I think as long as you are consistent, and keep your customer well informed, either way will work pretty well.

  • John Rusk says:

    Thanks for your thought-provoking reply Patrick.

