A couple of years ago I wrote about how I was using Epic stories for early project estimations. Recently John Rusk posted a question in the comments:
I have a question about this: “We make no attempt to restrict that the total number of points from the resulting stories adds up to the previous estimate of the Epic”.
How do you find that affects your burn charts? In particular, imagine you are halfway through a project. All the epics up to that point in the project have been “expanded” into smaller user stories. But the epics after that point have not yet been expanded. Imagine also that, when expanding an epic, the total number of points in the small stories is typically greater than the total of the original epic.
This would mean that past work has reasonably accurate numbers of points associated with it, but that future work, still in epic form, is underestimated. (Since, when you decompose the future work, its point value will typically go up, so therefore its current point value is too low).
If this happens, it may skew your burn charts and make you think you’re closer to finishing than you really are.
However, from your description of your success with this approach, I take it that this problem doesn’t actually affect you. I have used a similar approach myself but, being concerned about the above problem, I have always taken care to preserve the same total point value when breaking down the epics. Hence my interest in hearing how you have tackled, or avoided, the same problem.
I am using Epics to get a very rough estimate of the size of the project, without having to spend an inordinate amount of time wading through the details at the beginning of the project (at the point of maximum ignorance as Carl is fond of saying).
As John points out, breaking an Epic down into smaller stories can definitely have an impact on the burn chart. However, my experience has been that just as many Epics end up being smaller than the original estimate as end up being bigger. So while there is some churn, with the total number of points in the backlog going up and down as Epics are broken down, over the course of the project the backlog remains fairly accurate.
I don’t believe the tactic of preserving the original point total when breaking down an Epic is a good one. When it comes time to do a finer-grained estimate, you are in a much better position to make accurate estimates. If you truly think you have more work than the original rough estimate indicates, it is misleading not to make that known.
One tactic I have seen used to avoid the total points line rising and falling frequently as Epics are re-estimated is to keep a story in the backlog that acts as a “bank” for points. If an Epic is re-estimated at fewer points than originally thought, the leftover points go into the “bank” story. Later, when an Epic is re-estimated at more points, the difference is taken out of the bank.
When the bank grows too large, it makes sense either to remove some of the points, thus decreasing the size of the project, or, as happens more frequently on fixed-budget, scope-controlled projects, to replace some of those “banked” points with a few lower-priority stories that did not previously make the cut.
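The bank bookkeeping above is simple enough to sketch in code. This is a hypothetical illustration, not anything from the original post; the `Backlog` class and its method names are my own invention, assuming points are plain integers and the bank is just one more line item in the total.

```python
# A minimal sketch of the "bank" story tactic: re-estimates route
# their difference through a bank story so the total-points line
# on the burn chart stays flat.

class Backlog:
    def __init__(self, stories, bank_points=0):
        # stories: dict mapping story name -> estimated points
        self.stories = dict(stories)
        self.bank = bank_points  # the "bank" story's balance

    def total_points(self):
        # The burn chart total includes the bank story, so moving
        # points between a story and the bank leaves it unchanged.
        return sum(self.stories.values()) + self.bank

    def reestimate(self, story, new_points):
        # Deposit the surplus (or withdraw the shortfall) so the
        # total does not jump when an Epic is broken down.
        old = self.stories[story]
        self.bank += old - new_points
        self.stories[story] = new_points


backlog = Backlog({"epic-a": 13, "epic-b": 5})
backlog.reestimate("epic-a", 8)   # 5 surplus points go into the bank
backlog.reestimate("epic-b", 9)   # 4 points come back out of the bank
```

After both re-estimates the individual story estimates have changed, but `total_points()` is the same as before, which is the whole point of the tactic. Note that in this naive sketch the bank can go negative; in practice that is the signal that the project really has grown.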
I have also been on projects where a “bank” story similar to this was used as a buffer for expected but undefined changes. Anyone who has ever done software development knows that feedback is going to lead to additional work. These kinds of changes are frequently categorized as scope creep.
The reality is that after a feature is implemented it is often obvious that something additional is needed to complete it. You can plan for this at the start of a project by adding a bank story from which points can be withdrawn throughout the project without impacting the total points line in a burn chart.