## Saturday, February 11, 2006

### The Variance Trap, Part 2

This morning I continued the variance fluctuation experiment I wrote about in my previous blog entry.

I am going to show more details now, because it is interesting to follow what happens closely. (Well, it is if you are a management nerd, like me.) Remember that our original estimate, based on the average roll of the die, was that we'd get a throughput of 35 beads in an iteration. (An iteration consists of 10 sequences of 8 die rolls.) That prediction failed. The average throughput was only 28.4.
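That naive estimate falls straight out of the die's expected value. A minimal check (the 10-sequence iteration is from the original setup):

```python
# Naive capacity estimate: a fair die averages (1+2+3+4+5+6)/6 = 3.5
# beads per roll, and the last stage rolls once per sequence,
# i.e. 10 times per iteration.
avg_roll = sum(range(1, 7)) / 6   # 3.5
print(avg_roll * 10)              # → 35.0 beads per iteration
```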

My second attempt at predicting the end of the project used another method: the average flow rate, measured over the first five iterations. This prediction indicated that 1.6 more iterations would be needed (142 beads had been delivered, leaving 46, and 46/28.4 ≈ 1.6). 5 + 1.6, rounded up, gives a total of 7.

Let's see how the flow based prediction holds up. Here is the state of the system two sequences into iteration 6: The first sequence had a throughput of 0, the second a throughput of 2. I am not feeding the system any more beads, so we can expect the group of beads in the Analysis bowl to begin to thin out. It has: there were 26 beads there, but the last two sequences have reduced that to 18. The Unit Test bowl (the 5th one) holds 12 beads, one more than at the end of iteration five.

Four sequences into iteration 6, after a highly unlikely run of fours and fives, sequence 3 yielded 5 beads. Sequence 4 yielded only 1 though, so it evens out: the distribution continues to be rather uneven, but since there are now groups of beads closer to the end of the process chain, we can expect to make good time.

At the end of iteration 6, the model looked like this: There are now no beads at all in Analysis, Design, Code, and Unit Test. Of course there is a weakness in the model here, because none of the test stages have a feedback loop to earlier process stages. The resulting effect is that no test ever fails. If tests did fail, that would of course slow down the process.

This is the system four sequences into iteration seven: It turns out the flow based prediction came fairly close, predicting the end of the project 6 sequences into iteration seven. However, I made that prediction pretty late in the project. What would the flow based predictions have been earlier on? Let's look at the average flow after each iteration, and use that to calculate how many iterations we would need to move 188 beads:
1. 21/1 = 21 ==> 188/21 = 9.0
2. (21+32)/2 = 26.5 ==> 188/26.5 = 7.1
3. (21+32+28)/3 = 27 ==> 188/27 = 7.0
4. (21+32+28+36)/4 = 29.25 ==> 188/29.25 = 6.4
5. (21+32+28+36+25)/5 = 28.4 ==> 188/28.4 = 6.6
6. (21+32+28+36+25+31)/6 = 28.83 ==> 188/28.83 = 6.5
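The table above can be reproduced with a few lines of Python (the throughput numbers are the ones recorded per iteration):

```python
# Throughput per iteration, as recorded above.
throughput = [21, 32, 28, 36, 25, 31]
TOTAL_BEADS = 188

for n in range(1, len(throughput) + 1):
    avg_flow = sum(throughput[:n]) / n        # average flow so far
    forecast = TOTAL_BEADS / avg_flow         # iterations to move 188 beads
    print(f"after iteration {n}: avg flow {avg_flow:.2f}, "
          f"forecast {forecast:.1f} iterations")
```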
If we had watched the flow rate, we would never have underestimated, and we would have had a pretty accurate estimate after iteration 3. This suggests that monitoring the flow rate of the complete system makes it possible to make more accurate predictions than we will get by making time estimates (remember 35 beads per iteration) for each stage in the process.

In other words, measuring flow is better than estimating time!

One thing to note is that the model used here was balanced, i.e. the capacity was the same at each stage. In reality that is rarely the case. Such differences in capacity would make traditional time estimates even more unreliable. I'll look into that, and into more sophisticated methods of calculating project duration, in a future blog entry. First I'll write myself a little simulation software. I'm getting tired of rolling the die.
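Until then, here is a rough sketch of what such a simulation might look like. It is an assumption-laden stand-in for the physical game: I assume beads move at most one bowl per sequence (stages are processed back to front), and the feeding policy (one extra die roll refills the first bowl each sequence) is mine, not necessarily the original setup.

```python
import random

STAGES = 8            # eight bowls in series, Analysis through the test stages
SEQS_PER_ITER = 10    # an iteration is 10 sequences of 8 die rolls
TOTAL_BEADS = 188

def simulate(seed=0):
    """Run the bead game until all beads are through; return the
    throughput of each iteration."""
    rng = random.Random(seed)
    supply = TOTAL_BEADS       # beads not yet fed into the system
    wip = [0] * STAGES         # beads currently sitting in each bowl
    done = 0
    per_iteration = []
    while done < TOTAL_BEADS:
        finished = 0
        for _ in range(SEQS_PER_ITER):
            # Back to front, so a bead moves at most one bowl per sequence.
            for s in reversed(range(STAGES)):
                moved = min(rng.randint(1, 6), wip[s])
                wip[s] -= moved
                if s == STAGES - 1:
                    done += moved
                    finished += moved
                else:
                    wip[s + 1] += moved
            # Feed the first bowl from the remaining supply.
            intake = min(rng.randint(1, 6), supply)
            supply -= intake
            wip[0] += intake
        per_iteration.append(finished)
    return per_iteration

print(simulate())
```

Running it a few times with different seeds should show the same pattern as the manual game: per-iteration throughput well below the naive 35, and an end date that the flow rate predicts far better than the per-stage capacity does.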