The Variance Trap, Part 4

This installment of The Variance Trap compares two similar development process simulations. They differ mainly in the amount of variance in the production capability of the process stages. As the animated diagram shows, the difference in productivity is considerable. (If you haven't read the earlier postings on this topic, you might find that reading part 1, part 2, and part 3 makes this posting easier to understand.)
The animation above shows the results of two development project simulations. As before, the simulation model is extremely simple, with no errors or feedback loops. To simulate variations in productivity, the simulation throws a die for each process stage, for each tick of the simulation clock.
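
The post doesn't include the simulation code, but a minimal sketch of this kind of pipeline model might look like the Python below. The function name, the handling of stage inventories, and the return values are my own illustration rather than the author's actual simulator; the die is passed in as a function so that the two variants described next can share the same pipeline.

    def simulate(num_stages, num_ticks, roll):
        """One run of the simple pipeline: each stage can process at most
        roll() units per tick, limited by the inventory waiting in front of it."""
        wip = [0] * (num_stages - 1)    # inventory queued before each downstream stage
        completed = 0
        for _ in range(num_ticks):
            passed_on = roll()          # the first stage draws from an unlimited backlog
            for i in range(num_stages - 1):
                wip[i] += passed_on
                done = min(roll(), wip[i])  # a stage cannot do more work than is waiting
                wip[i] -= done
                passed_on = done
            completed += passed_on      # whatever leaves the last stage is finished work
        return completed, sum(wip)      # throughput and leftover work-in-progress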

The yellow line represents a simulation with a six-sided die. The blue line represents a three-sided die, with two added to each die roll. (A computer has no problem rolling a three-sided die. If you want to do it for real, use a six-sided die and count 1-2 as 1, 3-4 as 2, and 5-6 as 3.) Let's call the six-sided die 1d6 and the other one 1d3+2. (If you have ever played a roleplaying game, you won't have a problem with this notation.)
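
In code, the two dice could be written as follows. This is again a sketch; the calls to the simulate function from the sketch above use arbitrary illustration values for the number of stages and ticks, not numbers taken from the post.

    import random

    def roll_1d6():
        return random.randint(1, 6)      # 1d6: uniform on 1-6

    def roll_1d3_plus_2():
        return random.randint(1, 3) + 2  # 1d3+2: uniform on 3-5

    # Example: feed both dice through the pipeline sketch above.
    # simulate(7, 100, roll_1d6)
    # simulate(7, 100, roll_1d3_plus_2)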

The 1d6 has a range of 1-6 and an average roll of 3.5. The 1d3+2 has a range of 3-5 and an average roll of 4. As you can see, the 1d3+2 process is much faster than the 1d6 process. If you have read the previous parts of this series, this should come as no surprise. The 1d3+2 process has less variance than the 1d6 process. The flow is steadier, with less inventory build-up during a simulation run.
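
The difference in spread is easy to quantify. The post doesn't work out the numbers, but the population mean and variance follow directly from the possible outcomes of each die:

    from statistics import mean, pvariance

    d6 = [1, 2, 3, 4, 5, 6]      # possible outcomes of 1d6
    d3_plus_2 = [3, 4, 5]        # possible outcomes of 1d3+2

    print(mean(d6), pvariance(d6))                # 3.5 and about 2.92
    print(mean(d3_plus_2), pvariance(d3_plus_2))  # 4.0 and about 0.67

The narrower die trades a small increase in average output for a much smaller spread, which is what shows up as steadier flow and less inventory in the animation.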

The implication is that if we can reduce the statistical fluctuations in a software development process, we can increase the productivity.

Let's take stock of what we have learned so far:
  • Because of statistical fluctuations, an unregulated development process will be slower than the slowest of its process steps. Therefore, it is impossible to accurately estimate the time required by adding together the time estimates for the individual process steps. Even if the individual estimates are correct, the combined result won't be. (See Part 1 and Part 2)
  • We can measure the aggregated process and extrapolate the time required from those measurements. This allows us to make fairly accurate estimates relatively early on in the project. (Part 2)
  • Productivity will increase if the statistical fluctuations in the development process can be reduced. (Part 3)
It is time to set up a more accurate project simulation, and study the effects of different management strategies. Part 5 in this series uses a more accurate model of the development process, and explores the effects of changing the length of test, iteration, and release cycles.

Comments

Anonymous said…
Very good series. I've enjoyed them. I just have one nitpick/correction about the latest installment. The average result for 1d3+2 is actually 4, not 3.5. 1d3+1.5 would be the model necessary to have the same average throughput with a variance of half.

Your argument still holds however. An extra 0.5/cycle throughput could never induce the drastic difference you outline in this post.
Kallokain said…
Thanks! The average of 1d3 is (1+3)/2 = 2, not 1.5 => me idiot!

I have corrected the article. Me making silly mistakes like that is one of the reasons why I need simulation software in the first place. :-)
Anonymous said…
Very nice GIFs.
