The Art (or Science?) of Estimation

Recently, I’ve come across the articles “Estimate Individually – Fail Globally” and “The Art and War of Estimating and Scheduling Software”. Coincidentally, I also happened to read Steve McConnell’s book “Software Estimation: Demystifying the Black Art”, which I feel is required reading for program managers, project managers, development managers, and senior technical individuals. The combination of these has me thinking about estimation again.

Simply put, estimation is hard because there are so many variables that come into play. Factors such as skill level, new technology, sick days, and wrong specifications can all throw off an estimate. This is why I think that, at a minimum, estimates must:

  1. Be done for all roles in the development lifecycle
  2. Be based on evidence (historical data)
  3. Be range-based, e.g. 20 – 50 days

The numero uno mistake I see is that only “coding” is estimated and decisions are then based solely on that. There are multiple roles involved in the development lifecycle, and all of them should be included in the estimation process.
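To make that concrete, here’s a rough sketch in Python of what a range-based estimate covering all the roles might look like. The roles and the numbers are entirely made up for illustration; this isn’t from any real project.

```python
from dataclasses import dataclass

@dataclass
class RoleEstimate:
    role: str
    best_case_days: float   # optimistic end of the range
    worst_case_days: float  # pessimistic end of the range

# Hypothetical estimates for every role, not just "coding"
estimates = [
    RoleEstimate("Requirements/PM", 5, 12),
    RoleEstimate("Development",     20, 50),
    RoleEstimate("Testing",         10, 30),
    RoleEstimate("Deployment/Ops",  3, 8),
]

low = sum(e.best_case_days for e in estimates)
high = sum(e.worst_case_days for e in estimates)
print(f"Project range: {low} - {high} days")  # 38.0 - 100.0 days
```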

The fact that estimates need to be based on evidence also means that once an estimate is made, it should not be changed. The “actuals” need to be tracked alongside it, so that future estimates can be calibrated against previous estimates and actuals. Ideally, these values should not live in spreadsheets (or MS Project) but in a true work-item tracking system (maybe TFS?).
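As an illustration of why keeping the original estimate and the actual side by side matters, here’s a hypothetical sketch of using historical estimate-vs-actual data to calibrate a new estimate. The data and the calibrate helper are invented for the example; this is not a TFS API.

```python
completed_work_items = [
    # (original estimate in days, actual days spent)
    (10, 14),
    (5, 6),
    (20, 31),
]

# Ratio > 1.0 means past work took longer than estimated.
ratio = (sum(actual for _, actual in completed_work_items)
         / sum(estimate for estimate, _ in completed_work_items))

def calibrate(new_estimate_days: float) -> float:
    """Adjust a raw estimate using the historical estimate-to-actual ratio."""
    return new_estimate_days * ratio

print(f"Historical ratio: {ratio:.2f}")                    # ~1.46 for the data above
print(f"Calibrated 15-day estimate: {calibrate(15):.1f} days")  # ~21.9 days
```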

There is always a best-case/worst-case scenario, and that’s why we need ranges. Even the smallest activity (such as coding “Hello World”) falls within a time range.
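One common way to work with a best/likely/worst range (not something the book or I am prescribing here, just an example) is a PERT-style three-point estimate; the numbers below are invented.

```python
def pert_expected(best: float, most_likely: float, worst: float) -> float:
    """Weighted average that keeps the estimate anchored between the extremes."""
    return (best + 4 * most_likely + worst) / 6

print(f"{pert_expected(best=2, most_likely=4, worst=10):.2f} days")  # 4.67 days
```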

Ultimately, these estimates need to be rolled up into something meaningful that can be presented to management. Maybe that’s a post for another day.
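As a teaser, here is one hedged sketch of such a roll-up. Simply adding up all the worst cases tends to overstate the risk, so a common statistical alternative is to combine the per-task spreads instead. The task numbers are invented, (worst − best) / 6 is only a rough stand-in for a per-task standard deviation, and the +/- 2 standard deviation band only approximates a 95% confidence range.

```python
import math

tasks = [(5, 12), (20, 50), (10, 30), (3, 8)]  # (best, worst) in days

expected = sum((best + worst) / 2 for best, worst in tasks)
# Treat (worst - best) / 6 as a rough per-task standard deviation,
# then combine the spreads via a root-sum-of-squares.
combined_sd = math.sqrt(sum(((worst - best) / 6) ** 2 for best, worst in tasks))

print(f"Expected: {expected:.0f} days, +/- {2 * combined_sd:.0f} days (roughly 95%)")
# Expected: 69 days, +/- 12 days (roughly 95%)
```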
