2013-04-11

PERT: How To Better Add Estimates

The Problem With Adding Estimates

So far, we have always represented uncertainty in estimates by giving a span of two values, best and worst (expected) case. For example, in rough estimation for budgeting purposes, we estimate a particular high-level feature as "2-5 weeks". The problem comes when summing several such features the straightforward way:

Feature A    2-5 weeks
Feature B    1-3 weeks
Feature C    3-6 weeks
--------------------------
Total            6-14 weeks

There are two problems with this way of estimating the total:

  • the uncertainty is overstated
  • there is some pressure/expectation that if everything goes well, the total "should" take 6 weeks, or maybe a bit more

However, if the estimates are any good, there should be almost no chance that each and every task comes in at its lowest estimate. In the same way, we cannot expect all of the tasks to hit their worst case.

A Solution: PERT Estimation

PERT (Program Evaluation and Review Technique), developed in 1957 for the US Navy's Polaris submarine missile program by the consulting firm Booz Allen Hamilton, starts with three estimates for each task:
  • o, an overly optimistic estimate (so low that the chance of this happening is 1%)
  • n, the most likely estimate (if we plotted the estimated probability of each duration, the highest bar would be at this value)
  • p, a pessimistic worst-case estimate (again, only 1 in 100 tasks should turn out to be actually worse than this estimate)
Assuming a beta distribution (a sort of lopsided bell curve), PERT calculates the expected time (taking the lopsided probability distribution into account) and a sigma (standard deviation, a measure of the uncertainty) as follows:
  • mean = (o + 4*n + p) / 6
  • sigma = (p - o)/6
The collection of (independent) tasks then has a combined distribution (approximately normal, by the central limit theorem) calculated as follows:
  • total mean = sum of the means of each task
  • total sigma = square root of the sum of squares of each task's sigma
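As a sketch, the two per-task formulas and the summation rule fit in a few lines of Python (the function names are my own, not part of PERT):

```python
import math

def pert_mean(o, n, p):
    """Expected duration under the beta-distribution approximation."""
    return (o + 4 * n + p) / 6

def pert_sigma(o, p):
    """Standard deviation (uncertainty) of a single task."""
    return (p - o) / 6

def pert_total(tasks):
    """Combine independent (o, n, p) tasks: means add, sigmas add in quadrature."""
    total_mean = sum(pert_mean(o, n, p) for o, n, p in tasks)
    total_sigma = math.sqrt(sum(pert_sigma(o, p) ** 2 for o, n, p in tasks))
    return total_mean, total_sigma
```

Note that the sigmas are combined root-sum-square, not added linearly; this is exactly where the over-wide "6-14 weeks" interval gets narrowed.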
Example

We estimate our three tasks with 1% optimistic, nominal, and 1% pessimistic values, and calculate each mean and sigma (rounded to 1 decimal):

Feature A:  estimate 2/3/5   - mean 3.2,  sigma 0.5
Feature B:  estimate 1/2/3   - mean 2.0,  sigma 0.3
Feature C:  estimate 3/4/6   - mean 4.2,  sigma 0.5
------------------------------------------------------------------
Total (from unrounded values)           mean 9.3,  sigma 0.8

If we assume the estimates were correct, this gives us roughly a 50% chance that the total is finished within 9.3 weeks. If we want to commit to a date with more certainty, we can add one, two or three sigmas for increasing degrees of confidence. At three sigma we can be quite sure (about 99.9%, assuming the total is roughly normal) that the work can be finished in less than 9.3 + 3*0.8 = 11.7 weeks.
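The trade-off between certainty and the promised date can be sketched directly from these numbers (a sketch only; the confidence percentages assume the total is roughly normal, which holds when summing enough independent tasks):

```python
def commit_date(mean, sigma, k):
    """Date to promise when padding the expected total by k sigmas."""
    return mean + k * sigma

total_mean, total_sigma = 9.3, 0.8  # from the example above

# ~50% chance at the bare mean; each added sigma buys more confidence
for k in (0, 1, 2, 3):
    print(f"mean + {k} sigma: commit to "
          f"{commit_date(total_mean, total_sigma, k):.1f} weeks")
```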

This solves the two problems mentioned above:
  • the total can be estimated much more narrowly, because some of the uncertainty in the individual tasks cancels out in the sum
  • it is made obvious that the estimates are in fact a probability distribution, and that a reasonable goal (which can be hit about 50% of the time) is 9.3 weeks, not the almost-impossible 6 weeks.
Remarks
  • The PERT mean makes a big difference for very lopsided estimates - when the worst-case estimate is much higher than the nominal one. This is very often the case for typical software development tasks, where the 1% worst case can easily be 2-3 times the nominal "most likely" case. For such tasks, the mean turns out much higher than the sum of nominal values.
  • The entire PERT Methodology actually does much more, such as considering task dependencies, parallelizable tasks and critical paths, combined with the above.
  • The estimates themselves don't get better, just the way uncertainty is factored into the sum of tasks.
Sources
  • I saw this first in the book The Clean Coder, by Robert C. Martin
  • A good description can also be found here
