Saturday, December 13, 2008

Support the Agile Fringe

There is a bit of a controversy over the organization of the Agile 2009 conference. The conference will be organized in a number of functional areas. Unfortunately, this may exclude talks about work on the agile fringe, work that pushes the limits of agile.

David Anderson has written an open letter to the Agile 2009 organizers. There has been a lot of support. If you believe there should be a space for topics that do not neatly fit any functional domain, please support David's initiative by adding a comment to his letter.

Here is the comment I attached to David Anderson's letter:

I am not currently an Agile Alliance member, and though I would like to become one, I won't be able to attend the conference.

Nevertheless, I would like to throw in my support on this one, for the following reason:

Creativity is a necessary prerequisite for agility. The creative process consists of taking information from different domains, and recombining the information in new patterns.

Agile itself originated this way: People with experiences from different domains came together, combined what they knew, and thus created a new domain, a pattern of knowledge that did not exist before.

Sticking to a fixed set of agile sub-domains excludes creativity from the conference. It turns agile into a closed system, and closed systems stagnate. When the agile movement stagnates, ceases to move, it will become unable to adapt and improve. Faced with new challenges, agile will not be able to cope.

I do hope agile will remain open, vital, and continuously evolving. The key to that, I
believe, is to embrace creativity, not isolate oneself from it.

Kind Regards,

Henrik Mårtensson

If you believe creativity and progress are important to the agile movement, now is the time to show your support.

Thursday, December 11, 2008

A Beagle with Cream Cheese, Please!

I stood in line waiting for my turn to order at a cafe when the lady in front of me ordered "a beagle with cream cheese". She got a bagel with cream cheese, and went away happy.

If she had asked a software developer to provide a beagle with cream cheese, she would have got it. Our requirements processes are usually set up to give the customer what the customer asks for, not what the customer wants, or needs.

Note that these are three different things: asks for, wants, and needs.

Agile software development represents a shift of focus, from providing what the customer initially asks for, to what the customer wants.

Unfortunately, if we build what the customer needs, the customer probably would not want it. There is also a significant risk that the customer does not need new software.

For example, many agile teams use Kanban boards, and keep track of tasks with Post-It notes. Suppose a customer comes to such a team with a similar task-tracking problem. Would the team just show the customer how the team tracks tasks, or would they build the task tracking software the customer wants?

Thursday, November 27, 2008

More Tips for surviving the market meltdown

Christopher Lochhead, the retired chief marketing officer at Scient and Mercury, guest blogged on Dan Farber's blog about Tips for surviving the market meltdown. Here is a quote:
Downturns are the best time to take market share. Most companies overreact. They get too conservative. They also forget that they are not the victims of the market.
I agree, especially with the last part. You are not victims!

Be creative! Find out what options are available, then act on them!

The 20th Way to Survive the Crisis Without Firing People

Justin Roff-Marsh, a well known TOC expert, sent me the following tip:
Eliminate performance pay. Performance pay (commissions and bonuses) encourages the pursuit of local optima (which leads to waste and conflict) and signals that desirable behaviour is optional.
Sounds very good to me. The critique against performance pay is well substantiated by research.

Sunday, November 23, 2008

19 Ways to Survive the Crisis Without Firing People

The following article is an excerpt from a post I recently made to the cmsig mailing list.

Here is one thing I learned from studying military strategy:

Strategy is a game of interaction and isolation:
  • Strengthen the interactions on your own side, so you can move in a coordinated fashion with a common goal. This enables you to focus power, and to make bundles of rapid, coordinated attacks.
  • Isolate enemy units from each other, morally, mentally and physically, so that you can pick each unit off easily.
Total strength matters much less than the ability to maneuver and coordinate.

Most business managers have got it back asswards. They isolate units in their own organization in a multitude of inventive ways, then antagonize customers so they present a united front against the company.

For example:
  • Cost Accounting and Functional Organization isolate one's own forces.
  • Management by setting goals forces employees to game the system in order to reach goals that are outside the capabilities of the current organization. This creates a divide between managers and employees that can be very difficult to heal.
  • Matrix Management provides employees with multiple, often mutually exclusive, objectives.
  • Most requirement analysis methods make the implicit assumption you do not need insights into psychology, behavioral science, and business strategy to create a great product. That is true sometimes, but most of the time it is not. As a result, customers get to choose which product sucks the least, instead of which is best. (If you do not believe me, I cordially invite you to my home to watch a DVD on my DVD-player, to boil tea water in our kettle, and to use my old Windows PC.)
Oh, and "strengthen the interactions" does not mean "tangle your own value streams so you keep tripping over them". (I sat in a restaurant yesterday, and heard two people discussing how impossible it was to get their project done because meetings got rescheduled all the time. Neither person realized it was a systemic problem. From their point of view, they were working with stupid, unreliable people.)

The reality is, for most companies, a lot can be done to counteract the effects of a crisis. Here is a short list:
  • Stop relying on weak systems controls, like setting goals. Evaluate proposed actions against Donella Meadows's list of places to intervene in a system. See
  • Scrap the functional organization. Organize around value streams. Use a network model, where each node (or cluster) is in a different market segment. Look to The Virgin Group for an example. Look at how the military in many countries are reorganizing into network organizations. If you look deeply into their reasons for doing it, you will see that the same principles make network organization an excellent model for business organizations.
  • Replace Cost Accounting with something that resembles reality. Throughput Accounting works. Lean Accounting probably works too.
  • Provide IOHAI training for managers, partly to enable the network organization to coordinate despite being a loosely coupled structure, partly to catch business opportunities that you would otherwise miss. (IOHAI = Insight, Orientation, Harmony, Agility, Initiative. See
  • Use TOC and Lean applications to make your processes more efficient.
  • Use kanban, XmR charts (also called process control charts and process behavior charts) and cumulative flow diagrams to show what is going on in your processes. This will improve decision making, but only if you give your managers the appropriate training.
  • Institute a Process Of Ongoing Improvement (POOGI), like the TOC Focusing Steps.
  • Ensure a flow of information from the bottom and up in the organization, for example by having regular Crawford Slip sessions.
  • Make brainstorming part of the way you begin new projects. My preference is for Crawford Slip, but use whatever works in your organization.
  • Make information about effective processes and solutions available to everyone in the company. A Wikipedia style database would do a lot.
  • Institute Management by Walking Around. See
  • Use TLTP, System Dynamics, or a combination as decision support for managers. (If it's a major decision, develop a TLTP model and a System Dynamics model independently. See if they match. If they do not, find out why.)
  • Use user interaction design like QFD, GDD or the design method in Crystal Clear to find out what would really delight your customers.
  • Eliminate fear by _rewarding_ failures the organization can learn from.
  • Read at least four management books per year, preferably one per month. Read books that focus on principles. Follow up with books about specific practices. Practice what you learn.
  • Start informal study circles. Hold a meeting each month, or perhaps every other month. At each meeting, any employee who is interested in something even vaguely related to your business can hold a talk about it. You'll be amazed how much energy and creativity you can unleash this way. All you need to do is figure out how to turn that energy and creativity into money. (Hint: Crawford Slip can help you there.)
  • If you need to take offensive action, study Boyd, Sun Tzu, and the 36 stratagems for inspiration. (Hint: Read Osinga's book about Boyd and Krippendorff's book about the 36 stratagems.)
  • Fail fast! Many organizations lose money because they keep flogging dead horses long after it is obvious they are dead. Fear culture and internal costing are two common culprits. Identify all obstacles to laying dead horses to rest, and get rid of those obstacles. Note that a policy of firing people at the slightest downturn is such an obstacle. If giving electrical shocks to a dead horse can protect people from getting fired, then you will have a lot of artificially induced movement of necrotic tissue in your organization. (Read Deming's Out Of the Crisis for examples.)
  • Keep vital skills within the company. Do not outsource your brains! (Thank you Bill Gates for that insight.) Cost Accounting has a lot to do with choosing the wrong thing to outsource or insource. One more reason to get rid of it. (Look at it from a 36 Stratagems point of view. If you outsource or insource, which stratagems will it leave you open to? You really should know. Goldratt's Late Night Conversations can also provide some insight.)
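One of the items above recommends XmR charts. As a minimal sketch of how the chart limits are computed, using the standard constants 2.66 and 3.268 (the lead-time figures below are made up for illustration):

```python
def xmr_limits(values):
    """Compute XmR (process behaviour) chart limits from a series
    of individual measurements."""
    mean = sum(values) / len(values)
    # Moving ranges: absolute difference between consecutive values
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "centre": mean,
        "upper_natural_limit": mean + 2.66 * mr_bar,
        "lower_natural_limit": mean - 2.66 * mr_bar,
        "mr_upper_limit": 3.268 * mr_bar,
    }

# Example: lead times (days) for eight completed tasks
limits = xmr_limits([5, 7, 6, 9, 5, 8, 6, 7])
```

Points outside the natural limits signal something worth investigating; points inside them are routine variation, and chasing them is tampering.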

Not every item above will be applicable everywhere, but there is something in there that is applicable to almost any company.

Now get off your butt and do it!

Saturday, September 13, 2008

Ants and the Theory Of Constraints

Richard Graylin posted a link to the cmsig group at Yahoo about ants that use global optimization techniques reminiscent of the Theory Of Constraints.

Now, if ants can figure this stuff out, why do so many humans have problems with it?

Friday, September 12, 2008

(In)competence Inventory

Recently I heard a talk by the founder of a company specializing in making competence inventories. The talk was quite impressive. The speaker showed several colorful slides, and talked about the very advanced software the company uses to collect information about employees, and present it in a manner that is easy for managers to understand and use.

Still, there was something that wasn't quite right. I grew suspicious at the beginning, when the speaker talked about the purpose of making a competence inventory. Knowing the skills, and the level of skill, of each employee is very important, he said, because it makes it easier to decide whom to fire when cutting costs.

Not only was that the most important purpose of having a competence inventory, he gave no other reason. (But see below for an analysis of the reasons given at the competence inventory company's web site.)

Let's think about that, not from a human point of view, but from a business point of view. (The audience consisted chiefly of Human Resources people, and nobody protested. Therefore, I suppose from a Human Resource point of view, firing people on the basis of a competence inventory is OK.)

From a business perspective, if we do a competence inventory, there should be a good reason for it. After all, making the inventory costs money, and requires people to spend time being interviewed. (An hour or two per employee, using the methods the speaker advocated.)

The idea is that having the inventory will enable us to get more of what we want than we had to spend to get the inventory in the first place.

Suppose, for the sake of argument, that what we want is more money, now and in the future. There are three ways we can get that:
  • Increase Throughput (optimize the value stream)
  • Reduce Inventory (reduce waste, work-in-process)
  • Reduce Operating Expenses (pay less for consumables, fire people)
It seems pretty clear that a competence inventory can be useful whichever way you go here. (I hope you forgive me for not going into details and proving it.) This means that a company that wants a competence inventory is likely to want it for whatever they try first.
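For readers unfamiliar with the three measures, the standard Throughput Accounting relationships tie them together. A minimal sketch (the figures are invented for illustration):

```python
def toc_measures(sales, totally_variable_costs, operating_expense, investment):
    """Basic Throughput Accounting relationships:
    Throughput T = Sales - Totally Variable Costs
    Net Profit   = T - Operating Expense
    ROI          = Net Profit / Investment (Inventory)"""
    throughput = sales - totally_variable_costs
    net_profit = throughput - operating_expense
    roi = net_profit / investment
    return throughput, net_profit, roi

# Raising Throughput or cutting Inventory both improve ROI
# without anyone getting fired.
t, profit, roi = toc_measures(sales=1000, totally_variable_costs=400,
                              operating_expense=450, investment=500)
# t = 600, profit = 150, roi = 0.3
```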

It has been proven over and over again that increasing Throughput and reducing Inventory are much better ways of improving profitability than reducing Operating Expenses. (The Goal by Goldratt, Lean Software Development by the Poppendiecks, Out of the Crisis by Deming, The Toyota Way by Liker, etc.)

Thus, a company with reasonably competent managers would make a competence inventory for the purpose of either increasing Throughput, or reducing Inventory. Only when all else fails do they resort to reducing Operating Expenses.

We can conclude that the act of ordering the inventory for the purpose of reducing Operating Expenses, as the speaker suggested managers should do, is itself an indicator of a lack of competence at the management level.

Looking at the speaker's web site, another argument is presented: they create the inventory based on current and expected future competence needs. In other words, they look at the company's current strategy, and decide which competences to look for.

This is extremely dangerous. Strategy is based on incomplete information about an environment that can change very quickly. Therefore, strategy must be emergent, and the organization must be structured so that it can take advantage of emergent strategies.

An important factor shaping emergent strategy is management's knowledge of the resources and competences that are available. It follows that if you ask people questions about a fixed set of skills, there is a great risk that you miss something important. An employee (or group of employees) may have knowledge that would reshape management's strategy dramatically, if management knew about it. (If you know what a Johari Window is, you will understand. The knowledge that affects us the most is often the knowledge others have that we don't even know exists.)

For example, the speaker showed results from a competence inventory for programmers. The inventory focused solely on programming skills and domain modelling skills. Care to guess how many important skills were missing from that inventory? Many more things than were in it, I can promise you that.

Thus, a good competence inventory must be open-ended.

What do you think happens when employees figure out that the company makes a competence inventory in order to decide whom to fire? The most obvious consequence is that they will distort the data by lying.

Another consequence is that the people remaining in the company after the pogrom will have lost all trust in their management, and will perform significantly worse than before. (Read Business Dynamics by John D. Sterman if you want to know how serious the consequences can be, and how long they last.)

Finally, the speaker mentioned that a competence inventory must be made quickly, in order to be as cheap as possible. OK, I agree, you need to keep the cost down, but then he mentioned that his company uses one to two hours to interview each employee.

Well, if I am asked to do a competence inventory, I use Crawford Slip, which takes an hour or two for each group of people I interview. A group can be up to 500 people.

Of course, even though I am a great fan of competence inventories, I do not do one unless I am pretty sure the inventory will help the company and the people in it, rather than sink it.

You know, if I were considering hiring someone to help with competence development, the first thing I'd ask that person is what the person does to develop his/her own competence. And I'd be sure to ask specific questions: Bring four books you have read recently, and show me. Show me what you have written about competence development. Show me an example of a plan that shows how increased competence leads to improved profitability (or whatever the goal is). Show me Negative Branches (undesired effects of increasing the competence within my company) and how to deal with them.

Tuesday, September 09, 2008

On the Wrong End of the Camera

I was sitting quietly, in my favorite book café, working on my Strategic Navigation book (only a few chapters left), when a TV team from Swedish Television burst in. It was a small team, but it jostled me out of describing how to create a Future Reality Tree.

Oh, gosh, I thought, they've heard of me! Finally, someone with the power of TV has decided to talk to me about management, TOC, John Boyd, Strategic Navigation, Systems Thinking, and How To Improve Life For Everyone. What a chance!

Turned out they were more interested in filming books than in filming an author. The camera remained firmly pointed in the opposite direction from me throughout their visit.

On the other hand, I think they did get some nice footage of the book café, and of the very nice people who work here. If you live in Sweden, you can see it all (except me of course) on Carin 21:30 at 21:30 on September 17th.

I did take a couple of photos with my iPhone. It's true what they say. The iPhone isn't the best camera in the world. Most of the pictures turned out a bit shaky.

I must confess, I was mostly interested in the camera setup. I guess it's natural when you do webcasts.

Speaking of being on the wrong end of the camera: I have promised to help film pair-programming tomorrow. Will tell you more about it in a couple of days.

Tuesday, September 02, 2008

The Primus Vicus Project Part Three: The Intermediate Objective Map

This is the third part in the Primus Vicus series. It shows how we designed an Intermediate Objective Map using information gathered during the Crawford Slip session the day before.

Wednesday, August 27, 2008

The Primus Vicus Project Part Two: The Crawford Slip Session

This is the second part in the webcast series about Primus vicus and the workshop I held there.

This episode is about the Crawford Slip Brainstorming session itself.

Tuesday, August 26, 2008

The Primus Vicus Project Part One: going Medieval

I recently held a Crawford Slip workshop at Primus Vicus, a medieval village in Halmstad, Sweden. The objective was to help the Primus Vicus society set clear goals, and to create a plan for making the village economically viable now and in the future.

The first part is about my own preparations for the workshop. Preparing to help a medieval village is a bit different from preparing to help a company. However, the same principles apply: begin with the big picture, the super system where the organization you will work with fits.

I hope you enjoy the webcast.

Thursday, August 21, 2008

The Scandinavian Agile Conference on October 29th

Vasco Duarte from Finland emailed me and asked me to spread the word about The Scandinavian Agile Conference on October 29th. Keynote speaker will be Gabrielle Benefield from Scrum Training Institute. The conference schedule is here.

Scrum goes Medieval

I have just spent two days leading a management workshop for Primus
Vicus, a medieval village and Living History Museum in Halmstad, Sweden.

The workshop was a practical introduction to planning and managing using techniques from Strategic Navigation and Scrum.

Great fun, and to top it off, the Primus Vicus villagers gave me
permission to videotape the whole workshop and use the material in my
own webcasts.

I'll begin publishing the material here and on YouTube in a few days.


Sunday, August 03, 2008

Happy Flu (A Meme Spreading Experiment)

I've been reading up on systemic models for how diseases spread recently. So, when I stumbled on the Happy Flu meme experiment, I could not resist participating. I got the Happy Flu from Jack Vinson.

Bryan Logan at TOCThinkers

Bryan Logan has written an interesting article about how to measure in for-profit organizations. It is the first part in a series. Go read!

Just in case you are wondering: My blog isn't dead, it's just resting. Material is on the way. Writing a book and blogging at the same time is stretching me a little too thin.

Sunday, July 27, 2008

Scrum Talk by Ken Schwaber

This is a Scrum talk by Ken Schwaber. It is published on Google Tech Talks.

Monday, July 07, 2008

Book Review Webcast: The Logical Thinking Process

Making a webcast requires a bit more work than writing a review. On the other hand, it's fun, and the review just might reach more people. Let's see how it plays out.

Thursday, June 26, 2008

Mutual Benefit Contracts

This post is in response to a question about contracts in a comment to my article The Customer Drives the Car.

The text is an excerpt from a book I am writing about Clarity, a software development method based on systems thinking. This book is on the back burner at the moment. 

I am currently writing a book about strategy and organization with Strategic Navigation. This book will stand on its own, but it will also provide a much needed framework for the Clarity book. The reason is that originally, Clarity, like other agile methodologies, took a bottom-up approach to methodology development. Partly because of my work on Clarity, partly because of my involvement with The Theory Of Constraints, Strategic Navigation, and Systems Thinking, I have become convinced that a top-down approach will work better. Therefore, Clarity will be redesigned, and thoroughly tested, before I publish a book about the method as a whole.

Here is Clarity's (current) take on contracts:

A contract is an agreement to take on certain responsibilities, or to do things a certain way. All projects have contracts. Individuals in an open source development team usually have an unwritten and implicit social contract. Business partners usually have a formal, written and legally binding contract. Clarity has a few recommendations for the latter kind.

Zero Sum Game Contracts
When playing the software development game as a zero sum game, it is very hard to trust your business partners. After all, the rules of the game say that whatever one party wins has to come from what some other party puts on the table.

There is no reason for the parties to trust each other. Therefore, contracts are used as a replacement for trust. Such contracts contain a lot of information about the terrible woes that will befall deal breakers. Usually they also contain rules for how risk is going to be divided.
There are basically two variants. In a fixed price contract the vendor takes the risk. In a time-and-materials contract, the customer takes the risk.

Because the basic assumption is that the project has a fixed value, the customer will quite naturally try to get as much functionality as possible per dollar, and the vendor will try to do as little as possible per dollar. Both of these strategies increase the risk that the project will fail to yield the maximum possible return on investment.

Clarity views a contract as a framework for building a relationship based on mutual benefit. The game is set up so that the parties have a common goal. Both parties win, or both parties lose. Such contracts engender trust and cooperation.

Business Practices
An important point about the mutual benefit contracts described below is that they all have negotiable scope. If the scope can’t be negotiated, there is no point in prioritizing stories. There are no less important stories that can be cut out of a delivery if need be.

This section provides a brief overview of contract types only. For a more in-depth treatment of mutual benefit contracts, I recommend Lean Software Development[16] and Implementing Lean Software Development[15], both by Mary and Tom Poppendieck.

Clarity does not govern in any way which types of contracts you use. The recommendation though, is that you go for mutual benefit and trust between you and your customer.

Shared Benefit Contracts
One way to engender trust is with shared benefit contracts. The parties share the development costs, and they also share the profits generated by the product. Such contracts are suitable when building a pay-per-use web site or service. In other situations, for example when building a billing system, or a free web site, the economic worth is harder to assess, and this form of contract is hard to use.

Multistage Contracts
Multistage contracts are useful in many situations. Let’s say a job is expected to take six months. The vendor might make six monthly deliveries, getting paid a fixed price after each one. This reduces risk for both vendor and customer. The investment per stage is only a sixth of the entire project cost, so the customer does not risk as much. If things should go very awry with the customer relations, the vendor can also terminate after each month. There is no risk for either party of getting trapped in an eighteen month project from Hell.
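The risk arithmetic behind this is simple. A hypothetical sketch (the contract figures are made up):

```python
def stage_exposure(total_price, stages):
    """In a multistage contract each delivery is paid for separately,
    so at most one stage's price is at risk at any time."""
    return total_price / stages

# A six-month, 600 000 job split into six monthly deliveries:
# either party risks at most 100 000, not the whole contract.
per_stage = stage_exposure(600_000, 6)
```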

It is fairly common for customers to order more functionality, in more increments than originally agreed. This offsets the risk of losing a contract half-way through now and then.

Target Schedule Contracts
A target schedule contract sets a fixed final delivery date. With a target schedule contract, the development team can add resources and reduce scope to meet the final delivery date. The customer knows from the outset when the product will be ready, but not how much it will cost. The most important features are worked on and delivered first. It is important to make small releases, and see to it that they are actually deployed each time, or the delivery date may slip due to unforeseen problems.

Target Cost Contracts
A target cost contract has fixed cost, but leaves the scope negotiable. It is very important that the most valuable features are worked on and delivered first. A target cost contract may induce the customer to cram extra features into the application once development is under way. This is usually handled by having a clause in the contract that triggers negotiations about equitable cost sharing if the actual cost is significantly different from the target cost.
Vendor incentive is often provided by giving the vendor a bonus if the project is completed below the target cost.
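One way such a cost-sharing clause could work in practice is sketched below. The 50/50 overrun split and 10% underrun bonus are illustrative assumptions, not terms from any real contract, and a real clause would also define the "significantly different" threshold that triggers renegotiation:

```python
def target_cost_price(target_cost, actual_cost, share=0.5, bonus=0.1):
    """Illustrative target cost settlement: overruns beyond the target
    are split between the parties; finishing under target earns the
    vendor a bonus on the savings. Ratios are hypothetical."""
    if actual_cost > target_cost:
        overrun = actual_cost - target_cost
        return target_cost + share * overrun  # customer covers its share
    savings = target_cost - actual_cost
    return actual_cost + bonus * savings      # vendor keeps part of the savings

# Overrun:  target 1 000 000, actual 1 200 000 -> customer pays 1 100 000
# Underrun: target 1 000 000, actual   900 000 -> vendor is paid   910 000
```

The point of the clause is symmetry: neither party gains from pushing all the cost risk onto the other, which keeps the game mutual benefit rather than zero sum.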

Wednesday, June 11, 2008

Strategic Principles - Apple, iPhone and John Boyd's Maneuver Conflict

Time to scratch the webcast itch again. This time I am talking about principles of business strategy.

Some time ago, I Googled to find out more about Apple's iPhone strategy. More than a million hits, yet I found nothing about the strategic principles Apple uses.

As it turns out, almost every move Apple makes can be interpreted in terms of Maneuver Conflict, a military strategic doctrine created by the late Col. John Boyd, U.S. Air Force.

Boyd's ideas have had tremendous impact on military strategic thinking over the past twenty years. They are slowly trickling over into the business community.

There will be more videos on Boyd's ideas, and on Strategic Navigation, a business strategy method that combines Boyd's Maneuver Conflict and The Theory Of Constraints.

Monday, June 09, 2008

Steve Jobs Announces iPhone 3G

Steve Jobs has just announced iPhone 3G at WWDC 2008.

The new iPhone has 3G and GPS. Standby time has been improved to 300 hours. Talk time is up to 10 hours with 2G, 5 hours with 3G. You can expect to watch video for 7 hours, or listen to audio 24 hours on a charge. Price has been cut to $199 for the 8GB model and $299 for the 16GB model.

Why am I interested? I have taken an interest in Apple lately. Working on a webcast on business strategy with Apple and iPhone examples. Check back here in a day or two.

Tuesday, May 20, 2008

One More OODA Loop Through the IOHAI Hoop

In a previous article I wrote about IOHAI - John Boyd's leadership model. I used Intermediate Objective maps to deconstruct IOHAI in a manner consistent with the ideas in Boyd's essay Destruction & Creation.

A friend told me it was the most incomprehensible essay I had ever written. I regard that as a challenge to make one more OODA loop through the IOHAI hoop. (Though I should warn you I write to comprehend, not to be comprehended.)

In the previous article I mentioned there was another way to deconstruct IOHAI. Actually, there is an infinity of ways. Here is one, created under the assumption that Orientation means "good Orientation". Under that assumption, Insight becomes a prerequisite for Orientation. I have also made Orientation a prerequisite for Harmony. We cannot achieve Harmony if we are no good at Orienting ourselves:

Now let's look at Initiative, hanging there all by itself. Initiative is a personal property. Some people have more, others have less. It is a bit more complicated than that though.

Initiative is heavily influenced by training and environment. Thus, lack of training, or the wrong training, will inhibit initiative. So will an environment that punishes initiative, for example by punishing failure. I did a webcast about the effects of fear in organizations a while ago:

Using the information in the webcast (which is assembled from a variety of sources) and combining it with the IOHAI IO map, we get:

At this point, we no longer have purely a prescription for good qualities in a leader. We have connected IOHAI back to the environment.

In other words, we need a fear free, failure tolerant organizational culture in order to grow good leaders according to the tenets of IOHAI.

Mission Statements According to Dr. Ackoff and John Boyd

I just read an article about mission statements by Dr. Russell Ackoff. The article is old, but Ackoff's views are certainly worth thinking about.

Compare with Boyd (Patterns of Conflict, slide 144):
Unifying vision
A grand ideal, overarching theme, or noble philosophy that represents a coherent paradigm within which individuals as well as societies can shape and adapt to unfolding circumstances—yet offers a way to expose flaws of competing or adversary systems.
Looks like we can take elements from Dr. Ackoff's article, and use them to better understand what Boyd meant. Or, we can use Boyd's ideas about organization in order to understand the importance and relevance of Dr. Ackoff's ideas about mission statements.

Deconstructing again, and I haven't even drunk any coffee today.

Sunday, May 18, 2008

Deconstructing IOHAI

In 1976 John Boyd wrote a famous essay, Destruction & Creation. The essay is essentially about how creativity works. Boyd's idea was that a body of knowledge in the human mind can be likened to an island consisting of connected information. Creativity is the process of destroying the connections between the pieces of information, and constructing new patterns by assembling the knowledge in different ways.

Sometimes we just break a single body of knowledge apart, and reassemble it again. Sometimes we insert pieces of information from other bodies of knowledge. This process of destruction and construction of patterns of knowledge is sometimes called analysis and synthesis, but there is a simpler word: deconstruction. (In case this article gets too heavy, check out the Babylon 5 episode The Deconstruction of Falling Stars.)

Boyd deconstructed military strategy in his Discourse on Winning and Losing. (Scroll about half-way down the web page to find the Discourse.) Of course, learning from the Discourse requires deconstructing it.

In this essay, I have used two tools to deconstruct Boyd's IOHAI concept, his Theme for Vitality and Growth. The first tool is The Logical Thinking Process, the second tool is lots of coffee. (I may have overdone the latter, and not used enough of the former. You tell me.)

Unfortunately, Boyd himself did not write much about IOHAI. If you are interested in what he wrote, as he wrote it, you need to check out Patterns of Conflict, slide 144, and Organic Design, slides 12-17. You can also use a shortcut: Chet Richards has put the IOHAI concept together.

Boyd contended that organizations are prone to growth pains that can get debilitating or even lethal. (See Organic Design, slide 20) I have deconstructed his argument in the form of a Current Reality Tree:

The diagram above doesn't give you any information you cannot get from Boyd's presentation. The one possible advantage is a better overview of the idea. (Then again, translating an idea from its original format to a new format may introduce distortion, so perhaps you should check Organic Design, just to make sure.)

From Boyd's argument about complexity limiting the ability of an organization to adapt, he evolved a model for an organization that can grow without becoming increasingly rigid. Again, translating Boyd to a Logical Thinking Process tool, we get the following Intermediate Objective map:

The IOHAI concept (yes, I know, I still haven't told you what it is), is meant to enable organizational unity of purpose. That is, it focuses on the following part of the organizational model:
The connection between IOHAI and the organizational model isn't new information, it is there in Boyd's presentation. It is not all that obvious either, so maybe we have already gained some insight.

Now for some serious deconstruction. The first step, again, is to translate the IOHAI concept to an Intermediate Objective map:

You'll find IOHAI in the bottom row of boxes. (Click on the picture if you want to see a larger version.)

IOHAI is a set of Necessary Conditions for leaders at all levels in a Boyd model decentralized organization. (If you haven't read the boxes in the picture yet, now is a good time to do so.)

Grabbing information from Boyd's and Chet Richards's slides is itself a deconstruction process. The main advantage is that we can now make the deconstruction process very explicit.

First, let us focus on the part of the IOHAI model directly concerned with the organizational model, like this:

Now we have a little more focus than before. We are ready to destroy the connections between IOHAI and organizational unity:

Looking at our five pieces: Insight, Orientation, Harmony, Agility, and Initiative, is there any way to rearrange them in a manner that still makes sense?

Seems to me Insight should be a prerequisite for Harmony. We can't very well perceive and create interactions between seemingly disconnected events and entities unless we understand how things, and the connections between things, work.

Orientation, on the other hand, stands by itself. It is just a description of how we work. The orientation process exists as long as we are alive, even if we are completely disconnected from reality. Therefore, neither Insight, nor Harmony, can be prerequisites. We don't need much Initiative to orient ourselves, so Initiative can't be a prerequisite either. (You may wonder why Orientation isn't a prerequisite for Insight. It is. I didn't make that connection until after I uploaded the diagrams to the blog. I'll fix it the next time I deconstruct the diagrams.)

Agility, on the other hand, is about shifting between patterns we perceive or create using Harmony, so Harmony is a prerequisite for Agility.

Initiative is necessary to get things moving, but we can imagine Initiative without much direction. (As in fad-of-the-month management imperatives, for example, or random requirements changes in projects.) Thus, we'll let Initiative remain where it is.

Reconnecting the entities in the diagram to reflect our new understanding we get:

We didn't change anything above the Pursue a Noble Vision box, so we can broaden our perspective again:
There you have it: IOHAI deconstructed. What, if anything, have we gained?

For one thing, we have a starting point. Organizational unity must begin with insight into how systems work.

The act of translating IOHAI to the IO map format does itself give us an insight: each intermediate objective is necessary to overcome some obstacle.

Why is that important? It is important because we can go from this:

to this:

That is, we can use an IO map as input to a Future Reality Tree.

What does this buy us? It is a transition from a necessity logic based structure to a sufficiency logic based structure.

Orientation done using necessity logic and orientation done using sufficiency logic are very different things: they yield different results. One tells us what is necessary, the other what is sufficient.

We now have two possibilities, either:
We have gained insight into how IOHAI is connected to Boyd's organizational model, and where to start when implementing IOHAI in an organization. We have also gained new insight allowing us to improve on how to deconstruct Boyd in the future.
Maybe I should cut back on the coffee.

Wednesday, May 14, 2008

Knowledge Associates

I do not advertise much in this blog, not even my own services. Sometimes I make exceptions though:

Knowledge Associates is a new Theory Of Constraints consultancy in Victoria, Australia. If you live in the neighborhood (i.e. the Southern Hemisphere), and want a problem solved in your organization, why not talk to them? They might be able to help. They certainly do have an attractive offer.

While on the subject of advertising, if you live in the Northern hemisphere and want a similarly attractive offer, there is only one place to go that I know of.

And now back to our regular programming.

Thursday, May 01, 2008

IO map for Strategic Methods

The IO map above describes the Necessary Conditions that must be fulfilled in order to have an organization that can create and execute strategy effectively in a fast changing environment. The idea isn't mine. Originally it is Colonel John Boyd's idea. William Dettmer translated Boyd's Maneuver Warfare into a civilian version, Strategic Navigation.

I have added some of the obstacles (hexagons) that an organization must overcome in order to create and execute strategy effectively.

The map is interesting both because it explains what it is that makes Strategic Navigation and Maneuver Warfare effective, and because it provides a simple way of evaluating any strategic method.

If the necessary conditions aren't fulfilled, then the method, even though it may have great strengths, also has weaknesses that will keep it from being as effective as it should be.

You may have read critical studies that show strategic methods often do not work. Boyd worked out why some methods do work, and others don't. If you study the map, you will see that most strategic methods fail in the execution stage. They do that because the strategic methods fail to address the issue of how to organize so that the strategies can be carried out effectively. (See Hard Facts, Dangerous Half-Truths & Total Nonsense, by Jeffrey Pfeffer and Robert Sutton.)

If you are interested in the topic, please do comment on the map. It isn't quite "dried out" yet, so I expect to change some things. What do you think?

Tuesday, April 15, 2008

One Revolution Through the OODA Loop

Our environment shapes our actions in ways that are not obvious, until we make a conscious effort to step outside the box. You will probably agree that it is plausible that our experiences and our environment influence the strategic choices we make. Sounds reasonable, even if it is a rather generic statement.

There is a strategic model that incorporates this idea, the OODA loop. Lo and behold the OODA loop in all its gory complexity:
The OODA loop is originally a military strategic concept created by U.S. Air Force Colonel John Boyd. Some experts consider Boyd to be the greatest military genius of the past two thousand years. Like those of Sun Tzu and Musashi, his ideas have been applied to business strategy as well as military strategy. Boyd developed a strategic model that is called Maneuver Warfare. (Boyd did not use this name himself. He called it Maneuver Conflict.)

The OODA loop isn't prescriptive in the way the PDCA/PDSA, TOC Focusing Steps, or Test-Driven Design loops are prescriptive. The OODA loop describes how we all interact with our environment, whether we know it or not. However, if we understand the OODA loop, and consciously develop our ability to go through it faster, we can develop considerable strategic and tactical advantages.

Look at the Orient phase in the OODA loop. There you will find several factors that influence how we interpret our observations. Therefore, they also influence our decisions and our actions.

Let's apply this to something I find particularly interesting: how organizations choose strategic methods. The act of choosing a strategy is itself a strategic decision.

If we only know one strategic paradigm, and choose a strategic method from within the range of options provided by the paradigm, we lose the ability to improve beyond what the paradigm allows.

Strategic Planning in Functional Organizations

Most organizations are functional. That is, they are divided so that each part performs a specific function. Such organizations have strategy, the What To Do, in one part of the organization, and tactics, the How To Do It, in other parts.

Prestudies, strategy, process development, and execution are often treated separately. Because the different phases are done by different people, there are a number of hand-offs. At each hand-off, information is lost or distorted.

Each phase is usually executed in series, so it is difficult to go back to an earlier stage if necessary. The people who did the job are busy with something else. If they were consultants, it may not be possible to find them at all.

In effect, the strategic models used by functional organizations often look like this:
Functional organizations like to be on the OODA bandwagon too. Because such organizations have difficulty implementing the concepts, they modify the meaning of the loop instead. I recently saw a strategy book that depicted the OODA loop like this:

The picture above captures the idea of an Observe-Orient-Decide-Act cycle. Unfortunately, the things that make OODA work are:
  1. the many little feedback loops that have been removed
  2. the awareness of how our observed data is filtered and shaped in the Orientation phase
  3. the conscious choice to use the OODA loop at all levels in the organization
  4. the strategic thinking that goes with it
Boyd believed it is absolutely necessary to be able to switch paradigms at will. This is entirely missing from the simplified loop. (Much like ISO removed the statistical know-how necessary to use PDCA effectively in the ISO 9001 standard. Most annoying.)

Operating with a crippled OODA loop and a strategic model that separates strategy and action may not kill you, but the faster the environment changes, the more hampered your organization will be by its own strategic model. If the environment changes fast enough, the organization may eventually become so out of touch with reality that it collapses and dies. (Remember Facit, anyone?)

When business strategy experts like Sutton and Pfeffer, and William Dettmer, talk about most strategic methods being of questionable value, it is this kind of strategy they are talking about: strategic models that may very well produce impressive results on a planning level, but never manage to produce measurable results.

For example, the book where I found the crippled OODA loop stated that implementation was outside the scope of the book. Unfortunately, this is likely to kill the value of the strategic planning model, even if the model itself is good. Execution will be separate in time, space, and resources. This severely reduces the chance of success.

I am not saying a functional organization should ditch their traditional planning models on my say-so. I am saying their leaders need to think extra hard about which strategic models fit their purposes best, and compare that with what they actually use.

The mistake most make is to pick the strategic model that is most convenient for the current organization. This has the effect of freezing the organization in its current state. Everyone keeps doing the same old thing. New strategies are indistinguishable from old strategies, and are adapted to be convenient rather than effective.

Here is one thing to consider: If the strategic planning model has little feedback, or if there is a significant delay in feedback, how are the strategic planners supposed to evaluate their strategy? Even if there is feedback, the entire process is so slow the original planners may not even be there anymore, especially if they were hired guns. Without feedback, there is no basis for improvement. An individual can do strategic planning for twenty years and still be lousy at it, because he never gets any feedback. The same goes for the organization.

Strategic Planning in Flow Organizations

In a flow organization each part of the organization is responsible for a flow, a series of operations. Such organizations tend to use strategic models that integrate strategy and tactics into a unified whole.

For example, Toyota and Hewlett-Packard use Hoshin Kanri (Policy Deployment), sometimes called Hoshin Planning. Hoshin Kanri integrates planning and execution. Separating them makes no sense whatsoever.

The Hoshin Kanri strategic cycle is slow - usually a year - but there are smaller, daily feedback cycles that help keep the organization on track.

Strategic Navigation

Though Hoshin Kanri is proven to be very effective, my preference is for William Dettmer's Strategic Navigation. Strategic Navigation combines the principles of Maneuver Warfare with the analysis and planning tools of The Logical Thinking Process. The result is fast, high-quality strategic planning, and seamless integration between planning and execution.

Because of the short OODA loop cycle time, a Strategic Navigation team can run many passes through the strategic level OODA loop while the strategy is executed. At each pass, the strategy is refined, so that it is always current.

Of course, you don't refine your strategy twice a week just because you can, you change and revise as often as needed.

The OODA concept applies at all levels in the organization. That is part of its power. Using the same concepts everywhere furthers understanding. Understanding is necessary to bridge the gap between planning and execution.

The Strategic Navigation model places great emphasis on the organization understanding what its leaders want. Upper management sets objectives and provides a resource framework. Units lower in the organization have a lot of freedom in how to achieve the objectives.

A functional organization can use Strategic Navigation as well as it can use anything else. There is a varying amount of organizational inertia to overcome though. Fortunately, Strategic Navigation also offers an opportunity for an organization to improve itself, becoming faster, and more efficient.

Strategic Navigation has guidelines for how to make an organization more flexible and responsive. The ideas were originally developed in order to make the U.S. military capable of quick response, and countering guerilla tactics, but the range of application is much wider than that. Business organizations, hospitals, any kind of disaster response, police, non-profit organizations, all can benefit from becoming faster, more responsive, and less wasteful.

Thus, a functional organization that goes with Strategic Navigation can become as responsive to its environment as it wants to be, up to and including abandoning the functional model for a more modern flow organization or decentralized network model.

Oh, and don't discount the fun factor. A Strategic Navigation organization is fun to work in. Management is free to focus on the really important stuff. At lower levels, employees have a lot of freedom in how to achieve their goals. (When I began studying Boyd, I found I had to modify some of my notions about how the military works. My own military service, more than 25 years ago, wasn't very Boydish.)

Does it work? Yes, it has been proven to work. For example the U.S. Marine Corps has reorganized along Maneuver Warfare principles, making the organization much quicker to respond than it used to be. The most famous use of Maneuver Warfare to date is Operation Desert Storm.

I expect to be writing a good deal about Strategic Navigation the next few months.

Friday, March 28, 2008

IO Map for Process Design

I had reason to think about how to design processes recently, and just for fun I drew the Intermediate Objective map you see above. This is a process level IO map. It shows the necessary conditions that must be fulfilled in order to have a good process design.

The map is meant to be generic, so you may have to adapt it to fit a special situation. In most cases, it can be used as is. Note though, that one of the necessary conditions for creating a good process is understanding how the goal of the process furthers the goal of the organization using the process. In other words, you need an IO map for the organization (or equivalent) in order to create a good process.

I won't write a long-winded article with a detailed explanation of the map, at least not for now. However, if you have any questions, don't hesitate to ask.

Wednesday, March 12, 2008

Change Or Die

Larry Leach pointed to an interesting article about behavior change in individuals and organizations in the CriticalChain group at Yahoo.

According to the article, medical research shows nine people out of ten would rather die than change their behavior. In other words, scaring people into eating less junk food, quitting smoking and drinking, and exercising more has only a 10% success rate.

On the other hand, with positive reinforcement, 77% will change.

Minds On Fire

You might wish to read this article, by John Seely Brown and Richard Adler, on how we learn. It has sparked a lot of discussion in the blogosphere.

Here is a quote:

Compelling evidence for the importance of social interaction to learning comes from the landmark study by Richard J. Light, of the Harvard Graduate School of Education, of students’ college/university experience. Light discovered that one of the strongest determinants of students’ success in higher education—more important than the details of their instructors’ teaching styles—was their ability to form or participate in small study groups. Students who studied in groups, even only once a week, were more engaged in their studies, were better prepared for class, and learned significantly more than students who worked on their own.

The emphasis on social learning stands in sharp contrast to the traditional Cartesian view of knowledge and learning—a view that has largely dominated the way education has been structured for over one hundred years. The Cartesian perspective assumes that knowledge is a kind of substance and that pedagogy concerns the best way to transfer this substance from teachers to students. By contrast, instead of starting from the Cartesian premise of “I think, therefore I am,” and from the assumption that knowledge is something that is transferred to the student via various pedagogical strategies, the social view of learning says, “We participate, therefore we are.”

Sweden has used a strictly Cartesian system since 1864. I used to think the system was fairly good, but the more I have learned about learning, the more I have become aware of the flaws.

Choice - Peace On Streets

Clarke Ching blogged about the video above. You can also find it on Youtube.

I am really glad I live in a country where this is less of a problem than it is in many other places in the world.

Tuesday, March 11, 2008

Business Value

Managers like to talk about business value. It usually goes like this:
A group of managers sit around a table. It may be in a conference room, around a dinner table, or in a bar. "We must offer more business value," one of them says. Everyone nods in agreement. Then it grows silent. Uncomfortable. Everyone carefully avoids looking directly at anyone else. Finally, someone breaks the silence: "Anything interesting on TV tonight?"
Agile software developers caught on to the notion of delivering business value several years ago. As a result, software developers now have the same sort of dead end conversations about business value their managers have, with the same result.

I have played this little game myself upon occasion, both as part owner of a consultancy, and as a developer. It always played out the same way.

I was reminded about this today when I read an InfoQ article about business value. The article quoted extensively from an article by Joe Little, Toward A General Theory of Business Value. Both articles raise the question "what is business value?" Luke Hohmann has also blogged about business value, and InfoQ has an article about measuring success from the customer's point of view.

I'll focus on answering questions raised in the first two articles mentioned. Both articles are invitations to much needed discussion. Little does list a set of requirements for a business value based engineering approach, but before examining that, let's begin with a basic definition of the term business value.

Wikipedia has the following definition:
In management, business value is an informal term that includes all forms of value that determine the health and well-being of the firm in the long-run.
The first thing to note is that business value depends on context. The term business value means different things for different companies.

What is less obvious, but extremely important, is that because different parts of the same organization have different goals, business value means different things to different people in the same organization.

Thus, a group of managers that sit down and try to agree on a simple, yet specific definition are doomed to fail. Software developers have a slightly different problem: they usually don't know what their own organization's goals are, and even less about their customers' goals. Under those circumstances, they too are doomed to fail.

Still, if you look at it from a systems thinking, or Theory Of Constraints, point-of-view, there isn't much of a problem at all. For a systems thinker it is obvious where to begin:
Before we can know what adds value to a system, we must know the goal of the system.
The goal of a company is usually to make as much money as possible now and in the future.

What brings us closer to the goal? The top level is easy. We need a healthy cash flow, and we need to optimize the Return On Investment (ROI).

Return On Investment is a composite value:

ROI = (Throughput - Operating Expense) / Investment

The question becomes what do we need to ensure a healthy cash flow, optimal Throughput, and optimized Inventory and Operating Expenses?
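To make the ROI formula concrete, here is a quick sketch with made-up figures (the numbers are mine, purely for illustration; in TOC terms, Investment is the money tied up in the system):

```python
# Made-up figures, purely to illustrate the TOC ROI formula above.
throughput = 1_200_000        # money generated through sales
operating_expense = 800_000   # money spent turning Investment into Throughput
investment = 2_000_000        # money tied up in the system

roi = (throughput - operating_expense) / investment
print(f"ROI = {roi:.0%}")  # → ROI = 20%
```

Note how each lever is different: raising Throughput or lowering Operating Expense grows the numerator, while reducing Investment shrinks the denominator.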

We can express this using an Intermediate Objective map, like this:
An Intermediate Objective map is unique. It is possible for two organizations to have identical maps, but it is extremely unlikely. Therefore, replacing the question marks with specifics requires working with each organization.

I published my own organization's IO map a while ago. Have a look at it if you want a full-blown example.

Once you understand the goal of an organization, or, using TOC terminology, the Goal, Critical Success Factors (CSF), and Necessary Conditions (NC), you have a basis for defining business value for an organization. Something has positive business value if it helps you achieve an NC, CSF, or the Goal. Usually, the something affects an NC, so the effect on the CSFs and the Goal is indirect.
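The indirect effect can be sketched in code. The toy IO map below is my own invented example (the entity names are hypothetical, not from any real map); each entity points at the entity it supports, up to the Goal:

```python
# A toy sketch of tracing business value through an IO map.
# Each entity maps to the entity it supports; the Goal has no parent.
io_map = {
    "Reliable releases": "Satisfied customers",               # NC  -> CSF
    "Satisfied customers": "Healthy cash flow",               # CSF -> CSF
    "Healthy cash flow": "Make money now and in the future",  # CSF -> Goal
}

def value_chain(entity):
    """Trace an entity up the IO map to the Goal it ultimately serves."""
    chain = [entity]
    while chain[-1] in io_map:
        chain.append(io_map[chain[-1]])
    return chain

# An action that improves a low-level NC serves the Goal indirectly:
print(" -> ".join(value_chain("Reliable releases")))
```

Something proposed has positive business value precisely when such a chain exists; if an action affects no entity in the map, its claimed "value" is suspect.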

Note how important it is to delve down a bit. If you stop too high, for example at the ROI component level, you get a definition that is technically correct, but practically useless. Delve too low, and you will get bogged down in too much detail. (TOC has other tools for that.)

You should be aware of two things: first, there is more to constructing an IO map than I have shown here. IO maps are part of a system, The Logical Thinking Process (TLTP), and you need to know the system very well to make a good IO map. Second, there are alternative methods, for example Strategy Maps and Strategy & Tactics Trees.

Now we are ready to tackle the problem of creating a Business Value based engineering approach. I am going to walk through each point in Joe Little's list:
  • a well-communicated high-level definition of business value (this might have multiple dimensions)
An Intermediate Objective map does that very well. Actually, the map shown above constitutes a good high level definition.
  • a well-communicated operational definition of BV for the specific effort
That would be a process level IO map.
  • a clear way to measure whether that hypothesis (as embodied in the operational definition) was correct; or how incorrect it was (probably a likelier outcome)
That would be a measurement system based on the IO map. I am working on a webcast about this, so I won't delve into it here.
  • a confirmation that this definition actually energized behavior (eg, the programmers wanted to see success in that way) and that it was used in a way that allowed small adjustments toward a better outcome
You get such confirmation by measuring whether an action regarding an entity low in the IO map has an effect on an entity higher up.

Then again, your tastes may run to behavioral analysis techniques. I'd suggest using the IMPACT model, and of course ABC analysis. (See Unlock Behavior, Unleash Profits by Leslie Braksick.)
  • a clear set of practices for using this definition throughout the course of the project
That is what the measurement system provides: an incentive to move towards the NCs in the IO map.
  • a clear way to modify those practices, so that better practices could emerge over time
Since we are using TOC's IO maps, why not also use TOC's Focusing Steps for continuous improvement? I feel another webcast coming on.
  • a common understanding about the time-boxes around the practices, and a continual questioning about whether those time-boxes (presumably at different frequencies, depending on the practice) were the most effective in guiding a better delivery of business value in this specific case
Now we are talking! Full TLTP analysis! OODA loop based Strategic Navigation! Many agile teams already use an andon (from Lean). Maybe it's time to throw in a bunch of other Lean techniques, and data analysis tools from Six Sigma.
  • a set-out way to develop the people to perform these engineering practices (training and the like)
Ah, hrrm, got me there. Training in all the techniques I have mentioned above is available, but as far as I know, there is no training program that brings it all together.

One of the problems is that for such a training program to work, you will probably have to train software engineers and their managers together. Otherwise it will be hard to get them to pull in the same direction.

The agile community already has everything it needs to define business value for every agile software development project in the world. The tools are available. The information needed can usually be obtained through a TLTP analysis (or similar).

Of course, the same goes for the entire business community. After all, we are talking about the same tools, tools for examining systems, and reasoning about systems, tools that can be used by managers and engineers alike.

TOC and Systems Thinking tools aren't unknown in the agile community. David Anderson wrote a book about TOC in software development; Tom and Mary Poppendieck have written two excellent books about adapting Lean techniques for agile software development; Kent Beck mentions TOC in the second edition of his Extreme Programming Explained book.

Systemic approaches to specifying requirements, like Goal Directed Design, have been around for a long time. Alistair Cockburn uses a systemic approach for specifying requirements in the Crystal family of agile methodologies. (I almost wrote analysis, but specifying requirements well requires synthesis, understanding the whole, before analysis, understanding the parts.)

The agile community seems to be facing the same problem managers have been facing for more than forty years. All the tools, and all the knowledge they need is there, right before them, but as a whole, the community does not get it. Individuals, and small groups do, no question about it. The community as a whole, probably not.

The lethargy of the business community as a whole is a problem for the community, but for you and your company, whether you are a manager or a foot soldier, it is an opportunity. Learn how to define and deliver maximum business value, and you have a competitive advantage that will last for decades. Learn what it means to get inside the OODA loop of your competition.

Thursday, March 06, 2008

The Graph That Got Away

I have been thinking a lot about how to present information about agile and TOC improvement efforts lately. Last night I had a look at some old process data, and had the opportunity to reflect upon a route I didn't take at the time I was involved in the improvement project. The project was switching from traditional RUP-based development to Scrum (with a healthy dose of TOC).

At the time I collected the data, from an andon (Kanban board) I had helped a development team set up, the top priority was finding and exploiting bottlenecks in the process. The second priority was reducing inventory (Design-In-Process). Therefore I focused on Throughput and DIP.

That was the right decision under the circumstances, and it did work. The manager I worked for had experience with Lean, and had no problem understanding what the development team and I were doing. That particular manager did not need my assistance in explaining what was happening to other managers, but what if he had? Is there something besides productivity data I could have provided to support him?

Yes, there is. Tracking what happened in the project on the andon did not just yield productivity data, it also showed how people spent their time. Most managers are pretty keen on spending time well. (Yes I know, Throughput and Inventory, not time, is what really counts, but time based arguments are what most people are used to, so a time based argument is what I'll use here.)

The left bar in the graph above shows how people spent their time at the beginning of the change project, while there was still a lot of holdover from the previous, RUP-based, development method.

The right bar shows how people spent their time about four months later. As you can see, there is more time spent on producing stuff, and less time spent on fixing defects and administrative tasks and planning. I haven't a graph for it, but during this time, quality went way up, and communications with other stakeholders improved. The reason is that the priorities were clear. The team knew what to do at all times, and so they could communicate and plan more efficiently.

If one is enamored by tales of hyperproductivity in agile development teams, the difference between the two bars may not seem all that much to write about, but look at the graph to the left. It shows the difference in how time was spent.

A 69% increase in effective production time is pretty good. This does not translate directly into a 69% increase in productivity (or revenue), but it shows there is increased potential. Before the project, everyone but the manager who hired me would have laughed at the idea that there was room for a 69% improvement.
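Since the graphs themselves are not reproduced here, the arithmetic behind a figure like that may be worth spelling out. A minimal sketch, with invented before/after fractions chosen only to illustrate the calculation (these are not the actual project numbers):

```python
# Hypothetical illustration: suppose productive time rose from 35% to 59% of
# the working day. The numbers are invented for this sketch; the point is that
# a "69% increase in production time" is relative to the starting level.
before = 0.35  # fraction of time spent producing, before the change
after = 0.59   # fraction of time spent producing, four months later

relative_increase = (after - before) / before
print(f"{relative_increase:.0%}")  # prints "69%"
```

Note that even a modest-looking shift in the bars compounds into a large relative gain when the starting level is low.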

Data like this does indeed strengthen the case for going agile, or TOC.

Of course, as a TOC practitioner, I am well aware that agile (usually but not always) focuses on measures targeted at improving productivity and code quality, and that the bottleneck is often elsewhere. (It is often external to the development team.) However, easy-to-understand data about one part of the organization, like the graphs above, can be useful for gaining support for a TOC analysis and an improvement program that focuses on the true bottleneck.

Improving the wrong thing is a common reason for failure in improvement programs. Suppose there is an improvement program, agile, TQM, Six Sigma, Lean, whatever, that shows results similar to the one above, and there is no corresponding improvement in the company's revenue stream. It is a pretty safe bet development team productivity wasn't the organization's bottleneck.

On the other hand, you can prove an improvement in how time was spent after instituting a process improvement program. Now you are in a position to point out that the only thing needed to improve the organization as a whole, is to make a similar improvement at the current bottleneck.

Of course, with TOC you are in a pretty good position to find the bottleneck, regardless of whether it is a physical constraint, or a policy constraint.

Thursday, February 28, 2008

Ron Davison's Systems Thinking Webcasts

If you are interested in systems thinking, you might want to watch Ron Davison's webcasts. I haven't viewed the whole series yet, but what I have seen is interesting.

Wednesday, February 27, 2008

How Organizations Change, Part 3: Drive Out Fear

I have just released part three in the How Organizations Change series. I decided to split the material into more digestible chunks, so I discuss only one of the root causes that make it difficult for organizations to learn and adapt: fear.

The webcast contains material from a ZDNet Australia interview with Lloyd Taylor, VP of Operations at LinkedIn. I would like to thank Brian Haverty, Editorial Director of CNet Australia for permission to use the interview. The full interview is available at VP-of-Technical-Operations/0,139023731,339285616,00.htm

The webcast also contains an excerpt from the A Day with Dr. Russell L. Ackoff conference at JudgeLink. I would like to thank Dr. Ackoff for his kind permission to use the material in my webcast. Dr. Ackoff's talk at the conference was inspired, to say the least. You can view it all at

Wednesday, February 20, 2008

Time Sheets Are Lame!

Speaking of measurements that do not work, Jeff Sutherland has written an interesting article about time sheets in software development. Good stuff. Go have a look.

Agile Productivity Metrics Again

Ken Judy posted a thoughtful reply to my post commenting on his post about productivity metrics. Judy writes:
Just to be clear, my objection is not that agile should not be justified by hard numbers but that I haven't seen a metric for productivity gain specifically that both stood systematic scrutiny and was economically feasible for the average business to collect.
If you have an andon (a board with sticky notes representing units of work) set up, it is easy for the ScrumMaster (or project manager, if you do not use Scrum) to record in a spreadsheet when each sticky note is moved. This takes the ScrumMaster a few minutes every day. (Or every other day. I would not recommend measuring less frequently, because if you do, you will miss information about inventory build-up, and slow down responses to problems.)
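The daily routine amounts to appending one event per sticky-note move. A minimal sketch; the file name, field layout, and stage name are my own assumptions, not something the post prescribes:

```python
import csv
from datetime import date

# Whenever a sticky note moves to a new column on the andon board, the
# ScrumMaster appends one row to a log file: which story, which stage it
# entered, and on what day. Everything else (charts, control limits) can be
# derived from this raw log.
def record_move(log_path, story_id, stage, day):
    """Append one 'story entered stage on day' event to the log."""
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([story_id, stage, day.isoformat()])

record_move("andon_log.csv", "story-17", "In Test", date(2008, 2, 20))
```

A flat append-only log like this is deliberately dumb: it never needs to be edited, and every chart below is just a different query over the same rows.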

From the raw data the spreadsheet can produce:
  • A burn-down graph. The usual way of visualizing progress in Scrum projects
  • A cumulative flow-chart, showing build up of inventory in each process stage. This is a very valuable tool for finding process bottlenecks
  • A Throughput chart, where Throughput is defined in terms of goal units per time unit. A goal unit may be a Story Point or Function Point, or even Story or Use Case. (Story Points and Function Points are a little bit more uniform in size, so they work better.) To be useful, the Throughput chart must have an upper and a lower statistical control limit. Without that, the chart is just garbage.
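To illustrate the second bullet: a cumulative flow chart is just a running count, per process stage, of how many items have entered that stage so far. A sketch with invented events and stage names (the post does not prescribe a particular format):

```python
from collections import Counter
from datetime import date

# Invented example events: (story, stage it entered, day it entered).
events = [
    ("story-1", "In Work", date(2008, 2, 18)),
    ("story-2", "In Work", date(2008, 2, 18)),
    ("story-1", "In Test", date(2008, 2, 19)),
    ("story-1", "Done",    date(2008, 2, 20)),
    ("story-3", "In Work", date(2008, 2, 20)),
]

def cumulative_flow(events, day):
    """Count how many items have entered each stage by the given day.

    Plotting these counts day by day, one line per stage, gives the cumulative
    flow chart; a widening gap between two adjacent stages shows inventory
    building up between them, i.e. a likely bottleneck.
    """
    counts = Counter()
    for _story, stage, entered in events:
        if entered <= day:
            counts[stage] += 1
    return counts

print(cumulative_flow(events, date(2008, 2, 20)))
```

With real data the interesting signal is the trend over many days, not any single day's counts.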
If you have a truly agile development team, where every member is a generalist, and everyone works on everything, the Scrum burn-down graph tells you everything you need to know.

The more specialization, and the more process stages you have, the more important the cumulative-flow chart becomes. I won't go into details here, but see David Anderson's book and Reinertsen's Managing the Design Factory. This chart is useful for pinpointing the Capacity Constrained Resource (CCR) in the project, which is a prerequisite for effective improvement efforts. It is also useful when judging the impact of events on the project, because project velocity is determined by CCR velocity. (Bear in mind the CCR can and does shift.)

Both of the charts discussed above measure Design-In-Process (Inventory in TOC terms), but velocity can be derived from them. There is a catch though: as Judy points out, there are unknown measurement errors. In addition, velocity varies, a lot, for a multitude of reasons.

The throughput chart shows velocity. If that were all there was to it, it would be a less than useful tool. Fortunately, there is more: the statistical control limits. They show (if you are using 3-sigma limits) the upper and lower bounds of the velocity with roughly 99.7% probability.

You can do a lot with this information:
  • If there are measurement points outside the upper and lower control limits, the development process is out of statistical control. That means you have a problem the company management, not the project team, is responsible for fixing.
  • When you take actions to reduce uncertainty, the distance between the upper and lower control limit will change. Thus, you can evaluate how effective your risk management is. A narrow band means the process is more predictable than if the band is wider. This is important when, for example, predicting the project end date.
  • You can prove productivity improvements. If you have a stable process, and then make a process change (shorter iterations for example), and productivity rises above the upper control limit, then you have a real productivity improvement. (Or someone is gaming the system. However, if someone is, it will most likely show up in other statistics.)
  • You can evaluate the effect of various measures, because you know how big a change must be to be statistically significant.
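The mechanics behind all four bullets are the same: compute limits from a stable baseline, then check new measurements against them. A sketch using invented velocity data (goal units per iteration) and standard Shewhart-style 3-sigma limits:

```python
import statistics

# Invented baseline: velocities recorded while the process was stable.
baseline = [21, 18, 25, 19, 23, 20, 22, 24]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

def in_control(velocity):
    """True if a new measurement falls within the control limits."""
    return lcl <= velocity <= ucl

# A point outside the limits signals a special cause: either a real process
# change (e.g. a genuine productivity improvement after shortening iterations)
# or a problem that management, not the team, must address.
print(in_control(26), in_control(44))  # prints "True False"
```

The width of the band (ucl - lcl) is itself a measure: if risk management is working, the band should narrow over time.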
I have worked with project teams that have Throughput variations of more than ±50%. The Throughput chart is very useful. It would be useful even if the variation were considerably greater, because the amount of variation is in itself a measure of the size of the problem. (I won't delve into the intricacies of actually finding the root cause of the problem here. Let's just say the TOC Thinking Process comes in handy.)

So, the data is feasible to collect. There is no additional overhead compared to what ScrumMasters already do, because the new information is derived from the same data they use to create burn-down charts. It is just processed a little bit differently.

I would also say the information is extremely useful. However, I agree with Judy that productivity information on its own does not tell the whole story. For example, a feature may have a negative business value, so producing it faster means the customer will lose money faster. Also, a set of features that individually are considered valuable, may have a negative business value when considered as a set. This is usually known as "featuritis".

Using a productivity measurement without understanding it is a recipe for disaster. I agree with Judy there. The position I am advocating is that using it with understanding can bring great benefit.

Judy also writes:
The problem with justifying an agile adoption based on revenue gains is there are so many other considerations that attempts to credit any single factor become dubious.
This is both true and false. It is true because that is the way it is in most companies. Nobody understands the system, so nobody can really tell which factors have an effect or not. Attributing success, or failure, to agile under such circumstances is bad politics, not good management.

On the other hand, the statement is false because it is quite possible to figure out which factor, or factors, limit the performance of a company. If the constraint is the software development process, then implementing agile will help. (Assuming it is done correctly, of course.) If the software development process is not the constraint, implementing agile will not help. Note that symptoms often arise far from the constraint itself. For example, a problem in development may show up in marketing, or vice versa. (Figuring out such causal connections is an important part of what I do for a living.)

The reason it is possible to figure out what the constraint is, is that companies are tightly coupled systems. In a tightly coupled system, the constraint can be determined. Much of the time it is even quite easy to do so. The real trouble begins after that, when you try to fix the problem.

The method I use to Find-and-Fix is primarily the Theory Of Constraints (TOC). There are other methods around.

Judy finishes with:
If someone can propose a relevant metric that is economical for a small to medium size business to collect, that can be measured over time in small enough units to show increased performance due to specific process changes, and doesn't create more problems than it solves, I will be happy to consider it.
I can do that. So can any decent TOC practitioner or systems thinker. There are a few catches though:
  • Measurements must be tailored to the system goal. Very few organizations are exactly alike in terms of goals, intermediate objectives, root problems, and constraints. Therefore, measurements must be tailored to fit each specific organization.
  • Organizations change over time. When objectives or internal constraints change, measurement systems must also change.
  • The environment changes over time. This means external constraints may appear, or disappear. For this reason too, measurement systems must change over time.
The lag between the change that makes a new measurement system necessary and the actual change in the measurement system can be very long. Again, I won't go into details, but most companies use accounting practices that are about a hundred years out of date. (This is the reason for much of the friction between accountants and everyone else.)

There is no "best practice" set of measurements for software development. What you measure must be determined by your goals, and by the system under measurement. Once this is understood, measurements can be tailored to be what they are supposed to be: a tool set for problem solving.

Measuring is like anything else: it is very difficult if you haven't learned how to do it. A prerequisite for measuring complex systems, like software development teams and organizations, is understanding the system. To do that, you need to know a bit about systems thinking. You do not have to be the world's greatest expert, but you need to be well versed in the basics.

The first thing to do if you want to evaluate a measurement effort is to ask for the systems map the measurements are derived from. The presence of such a map does not prove the presence of a good measurement system. However, its absence virtually guarantees the measurement system is dysfunctional in some way.

In 1992 Norton and Kaplan introduced the balanced scorecard system for creating measurements. It didn't work very well, precisely because there was no way to connect measurements to strategic objectives. In 2001, they rectified the problem by introducing strategy maps. I do not use this method myself, so I haven't evaluated it. Seems to be on the right track though. Unfortunately, most people who design balanced scorecards use the earlier, flawed method. Go figure...

I use Intermediate Objective Maps, which are part of The Logical Thinking Process, a TOC application for doing systems synthesis and analysis. An alternative is using Strategy&Tactics Trees. However, S&T is currently poorly documented, and only a handful of people can do them well.

It is also possible to use a combination of Current Reality Trees and Future Reality Trees to figure out what to measure. That is what I did before learning to use IO Maps.

So, IO Maps, S&T Trees, CRT+FRT, and the revised version of balanced scorecards, can be used to figure out what to measure.

As far as I know, none of these tools are part of any agile method. Not even FDD uses them, despite strong ties to TOC. Consequently, few agile practitioners have come into contact with the tools and the knowledge base for creating good measurements.

Consequently, the difficulty of making useful measurements is perceived to be greater than it really is. Tailoring a measurement system to fit an organization is a skill that can be learned. It is just not part of the agile repertoire, yet. I hope it will be.

Oh, in closing, a good measurement system must be able to measure itself. That is, if a measure does not work as intended, it must show up as an inconsistency between different measures. Otherwise, mistakes in the measurement system are very hard to catch. Fortunately, this can usually be accomplished fairly easily.